* [dpdk-dev] [PATCH 0/6] Crypto API and device framework
@ 2015-10-02 23:01 Declan Doherty
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
                   ` (7 more replies)
  0 siblings, 8 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-02 23:01 UTC (permalink / raw)
  To: dev

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: John Griffin <john.griffin@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

This series of patches defines a set of burst-oriented application APIs for
asynchronous symmetric cryptographic functions within DPDK. It also contains a
poll mode driver cryptographic device framework for the implementation of
crypto devices within DPDK.

The patch set also includes two reference implementations of crypto PMDs.
Currently both implementations support AES128-CBC with
HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a
purely software PMD based on Intel's multi-buffer library, which utilises
both AES-NI instructions and vector operations to accelerate crypto
operations, and the second PMD utilises Intel's Quick Assist Technology (on
DH895xxC) to provide hardware accelerated crypto operations.

The API set supports two functional modes of operation:

1. A session oriented mode. In this mode the user creates a crypto session in
advance, which defines all the immutable data required to perform a
particular crypto operation: the cipher/hash algorithms and operations to be
performed, the keys to be used, etc. The session is then referenced by the
crypto operation data structure, which is a data structure specific to each
mbuf. It contains all the mutable data about the crypto operation to be
performed, such as the data offsets and lengths into the mbuf's data payload
for the cipher and hash operations to be performed.
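
As a rough sketch of this mode (using only structure and function names from
rte_crypto.h in patch 1; the exact rte_cryptodev_session_create() signature,
and the dev_id, op_pool, key and offset variables, are illustrative
assumptions rather than definitive usage):

    /* immutable parameters: an AES-CBC cipher followed by a SHA1-HMAC
     * hash, expressed as a two-element xform chain */
    struct rte_crypto_xform auth_xform = {
        .next = NULL,
        .type = RTE_CRYPTO_XFORM_AUTH,
        .auth = {
            .op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE,
            .algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC,
            .key = { .data = auth_key, .length = auth_key_len },
            .digest_length = 20,
        },
    };
    struct rte_crypto_xform cipher_xform = {
        .next = &auth_xform,
        .type = RTE_CRYPTO_XFORM_CIPHER,
        .cipher = {
            .op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT,
            .algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC,
            .key = { .data = cipher_key, .length = 16 },
        },
    };

    /* the session is created once, in advance */
    struct rte_cryptodev_session *sess =
        rte_cryptodev_session_create(dev_id, &cipher_xform);

    /* per mbuf: allocate an op, reference the session and set only the
     * mutable, packet-specific parameters */
    struct rte_crypto_op_data *op = rte_crypto_op_alloc(op_pool);
    rte_crypto_op_attach_session(op, sess);
    op->data.to_cipher.offset = cipher_offset;
    op->data.to_cipher.length = cipher_length;
    op->data.to_hash.offset = hash_offset;
    op->data.to_hash.length = hash_length;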

2. A session-less mode. In this mode the user is able to provision crypto
operations on an mbuf without needing a cached session created in advance,
but at the cost of calculating authentication pre-computes and performing
key expansions in-line with the crypto operation. The crypto xform chain is
directly attached to the op struct in this mode, so the op struct contains
all of the immutable crypto operation parameters that would normally be set
within a session. Once all mutable and immutable parameters are set, the
crypto operation data structure can be attached to the specified mbuf and
enqueued on a specified crypto device for processing.
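
For example (again only a sketch built on the rte_crypto.h definitions in
patch 1; op_pool and the key variables are placeholders):

    /* allocate an op with space reserved for a two-xform chain; the
     * allocator pre-links the chain via the xforms' next pointers */
    struct rte_crypto_op_data *op =
        rte_crypto_op_alloc_sessionless(op_pool, 2);

    op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
    op->xform->cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
    op->xform->cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
    op->xform->cipher.key.data = cipher_key;
    op->xform->cipher.key.length = 16;

    op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
    op->xform->next->auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
    op->xform->next->auth.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
    op->xform->next->auth.key.data = auth_key;
    op->xform->next->auth.key.length = auth_key_len;
    op->xform->next->auth.digest_length = 20;

    /* the mutable data offsets/lengths are then set exactly as in the
     * session case before the op is attached to its mbuf and enqueued */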

The patch set contains the following features:
- Crypto device APIs and device framework
- Implementation of a software crypto PMD based on the multi-buffer library
- Implementation of a hardware crypto PMD based on Intel QAT (DH895xxC)
- Unit and performance tests which give an example of utilising the crypto
  APIs
- Sample application which performs crypto operations on the IP payload of
  the packets being forwarded

Current Status:
There is no support for chained mbufs and, as mentioned above, the PMDs
currently implement support for AES128-CBC with
HMAC_SHA1/SHA256/SHA512.

Declan Doherty (5):
  cryptodev: Initial DPDK Crypto APIs and device framework release
  aesni_mb_pmd: Initial implementation of multi buffer based crypto
    device
  docs: add getting started guides for multi-buffer pmd and qat pmd
  app/test: add cryptodev unit and performance tests
  l2fwd-crypto: crypto

John Griffin (1):
  qat_crypto_pmd: Addition of a new QAT DPDK PMD.

 app/test/Makefile                                  |    3 +
 app/test/test.c                                    |   92 +-
 app/test/test.h                                    |   34 +-
 app/test/test_cryptodev.c                          | 1993 ++++++++++++++++++++
 app/test/test_cryptodev.h                          |   68 +
 app/test/test_cryptodev_perf.c                     | 1415 ++++++++++++++
 app/test/test_link_bonding.c                       |    6 +-
 app/test/test_link_bonding_mode4.c                 |    7 +-
 config/common_bsdapp                               |   23 +-
 config/common_linuxapp                             |   30 +-
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 doc/guides/cryptodevs/aesni_mb.rst                 |   76 +
 doc/guides/cryptodevs/index.rst                    |   43 +
 doc/guides/cryptodevs/qat.rst                      |  155 ++
 doc/guides/index.rst                               |    1 +
 drivers/Makefile                                   |    1 +
 drivers/crypto/Makefile                            |   38 +
 drivers/crypto/aesni_mb/Makefile                   |   67 +
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |  206 ++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |  632 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  295 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  210 +++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |    5 +
 drivers/crypto/qat/Makefile                        |   63 +
 .../qat/qat_adf/adf_transport_access_macros.h      |  173 ++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            |  316 ++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         |  404 ++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            |  306 +++
 drivers/crypto/qat/qat_adf/qat_algs.h              |  125 ++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   |  576 ++++++
 drivers/crypto/qat/qat_crypto.c                    |  505 +++++
 drivers/crypto/qat/qat_crypto.h                    |  111 ++
 drivers/crypto/qat/qat_logs.h                      |   78 +
 drivers/crypto/qat/qat_qp.c                        |  372 ++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |    5 +
 drivers/crypto/qat/rte_qat_cryptodev.c             |  130 ++
 examples/l2fwd-crypto/Makefile                     |   50 +
 examples/l2fwd-crypto/main.c                       | 1475 +++++++++++++++
 lib/Makefile                                       |    1 +
 lib/librte_cryptodev/Makefile                      |   60 +
 lib/librte_cryptodev/rte_crypto.h                  |  720 +++++++
 lib/librte_cryptodev/rte_crypto_version.map        |   40 +
 lib/librte_cryptodev/rte_cryptodev.c               | 1126 +++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |  592 ++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  577 ++++++
 lib/librte_eal/common/include/rte_common.h         |   15 +
 lib/librte_eal/common/include/rte_eal.h            |   14 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eal/common/include/rte_memory.h         |   14 +-
 lib/librte_ether/rte_ethdev.c                      |   30 -
 lib/librte_mbuf/rte_mbuf.c                         |    1 +
 lib/librte_mbuf/rte_mbuf.h                         |   53 +-
 mk/rte.app.mk                                      |    8 +
 54 files changed, 13263 insertions(+), 80 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_crypto_version.map
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h

-- 
2.4.3


* [dpdk-dev] [PATCH 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-10-02 23:01 [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
@ 2015-10-02 23:01 ` Declan Doherty
  2015-10-21  9:24   ` Thomas Monjalon
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 2/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-10-02 23:01 UTC (permalink / raw)
  To: dev

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: John Griffin <john.griffin@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

Features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs for allocation of the
   crypto operation structures used to specify the crypto operations to
   be performed on a particular mbuf.
 - Extension of the mbuf to contain a crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations (see
   the usage sketch below).
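
A rough usage sketch of the configuration and burst APIs (the
rte_cryptodev_configure() signature is taken from rte_cryptodev.c below;
the queue pair setup and burst call signatures, and the qp_conf, pkts,
nb_pkts and socket_id variables, are illustrative assumptions):

    uint8_t dev_id = 0;
    struct rte_cryptodev_config conf;
    /* ... fill in conf (number of queue pairs, socket, etc.) ... */

    rte_cryptodev_configure(dev_id, &conf);
    rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf, socket_id);
    rte_cryptodev_start(dev_id);

    /* fast path: mbufs with attached crypto ops in, processed mbufs out */
    uint16_t nb_enq = rte_cryptodev_enqueue_burst(dev_id, 0, pkts, nb_pkts);
    uint16_t nb_deq = rte_cryptodev_dequeue_burst(dev_id, 0, pkts, nb_pkts);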

Changes from RFC:
 - Session management API changes to support specification of crypto
   transform (xform) chains using a linked list of xforms.
 - Changes to the crypto operation struct as a result of the session
   management changes.
 - Movement of common macros shared by cryptodevs and ethdevs into
   common headers.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                        |    7 +
 config/common_linuxapp                      |   10 +-
 doc/api/doxy-api-index.md                   |    1 +
 doc/api/doxy-api.conf                       |    1 +
 lib/Makefile                                |    1 +
 lib/librte_cryptodev/Makefile               |   60 ++
 lib/librte_cryptodev/rte_crypto.h           |  720 +++++++++++++++++
 lib/librte_cryptodev/rte_crypto_version.map |   40 +
 lib/librte_cryptodev/rte_cryptodev.c        | 1126 +++++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h        |  592 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h    |  577 ++++++++++++++
 lib/librte_eal/common/include/rte_common.h  |   15 +
 lib/librte_eal/common/include/rte_eal.h     |   14 +
 lib/librte_eal/common/include/rte_log.h     |    1 +
 lib/librte_eal/common/include/rte_memory.h  |   14 +-
 lib/librte_ether/rte_ethdev.c               |   30 -
 lib/librte_mbuf/rte_mbuf.c                  |    1 +
 lib/librte_mbuf/rte_mbuf.h                  |   53 +-
 mk/rte.app.mk                               |    1 +
 19 files changed, 3230 insertions(+), 34 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_crypto_version.map
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h

diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..3313a8e 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -147,6 +147,13 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..4ba0299 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -145,6 +145,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTO_MAX_XFORM_CHAIN_LENGTH=2
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 9727b83..4c5c1b4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -40,6 +40,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..6ed9b76
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = libcryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_crypto_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..3fe4db7
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,720 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/**
+ * This enumeration lists the different types of crypto operation chains
+ * supported by rte crypto devices. The operation type is defined during
+ * session registration and cannot be changed once a session has been set up;
+ * for a session-less crypto operation it is defined within the crypto
+ * operation's op_params.
+ */
+enum rte_crypto_operation_chain {
+	RTE_CRYPTO_SYM_OP_CIPHER_ONLY,
+	/**< Cipher only operation on the data */
+	RTE_CRYPTO_SYM_OP_HASH_ONLY,
+	/**< Hash only operation on the data */
+	RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER,
+	/**<
+	 * Chain a hash followed by any cipher operation.
+	 *
+	 * If it is required that the result of the hash (i.e. the digest)
+	 * is going to be included in the data to be ciphered, then:
+	 *
+	 * - The digest MUST be placed in the destination buffer at the
+	 *   location corresponding to the end of the data region to be hashed
+	 *   (hash_start_offset + message length to hash),  i.e. there must be
+	 *   no gaps between the start of the digest and the end of the data
+	 *   region to be hashed.
+	 *
+	 * - The message length to cipher member of the rte_crypto_op_data
+	 *   structure must be equal to the overall length of the plain text,
+	 *   the digest length and any (optional) trailing data that is to be
+	 *   included.
+	 *
+	 * - The message length to cipher must be a multiple of the block
+	 *   size if a block cipher is being used - the implementation does not
+	 *   pad.
+	 */
+	RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH,
+	/**<
+	 * Chain any cipher followed by any hash operation. The hash operation
+	 * will be performed on the ciphertext resulting from the cipher
+	 * operation.
+	 */
+};
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_SYM_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_SYM_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_SYM_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_SYM_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_SYM_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_SYM_HASH_AES_CCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation
+	 */
+	RTE_CRYPTO_SYM_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_SYM_HASH_AES_GCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_SYM_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_SYM_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_SYM_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_SYM_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_SYM_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	phys_addr_t phys_addr;	/**< physical address of key data */
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains data relating to Cipher (Encryption and Decryption)
+ * used to create a session.
+ */
+struct rte_crypto_cipher_xform {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid. */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_SYM_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_SYM_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to total length of
+	 *    the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
+	 **/
+};
+
+/** Symmetric Authentication / Hash Algorithms */
+enum rte_crypto_auth_algorithm {
+	RTE_CRYPTO_SYM_HASH_NONE = 0,
+	/**< No hash algorithm. */
+
+	RTE_CRYPTO_SYM_HASH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_SYM_HASH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_SYM_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related rte_crypto_cipher_xform structure in the
+	 * session context or the corresponding parameter in the crypto operation
+	 * data structures op_params parameter MUST be set for a session-less
+	 * crypto operation.
+	 */
+	RTE_CRYPTO_SYM_HASH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_SYM_HASH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_SYM_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_xform structure in the session context, or
+	 * the corresponding parameter in the crypto operation data structures
+	 * op_params parameter MUST be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_SYM_HASH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	* is used, the RTE_CRYPTO_SYM_CIPHER_AES_GCM element of the
+	* rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	* rte_crypto_cipher_xform structure in the session context, or
+	* the corresponding parameter in the crypto operation data structures
+	* op_params parameter MUST be set for a session-less crypto operation.
+	*/
+	RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_SYM_HASH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_SYM_HASH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_SYM_HASH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_SYM_HASH_SHA1,
+	/**< 160 bit SHA-1 algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA1_HMAC,
+	/**< HMAC using 160 bit SHA-1 algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_SYM_HASH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Authentication / Hash Operations */
+enum rte_crypto_auth_operation {
+	RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY,	/**< Verify digest */
+	RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE	/**< Generate digest */
+};
+
+/**
+ * Authentication / Hash transform data.
+ *
+ * This structure contains data relating to an authentication/hash crypto
+ * transform. The fields op, algo and digest_length are common to all
+ * authentication transforms and MUST be set.
+ */
+struct rte_crypto_auth_xform {
+	enum rte_crypto_auth_operation op;	/**< Authentication operation type */
+	enum rte_crypto_auth_algorithm algo;	/**< Authentication algorithm selection */
+
+	struct rte_crypto_key key;		/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the caller's responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash then an error will be generated
+	 * by *rte_cryptodev_session_create* or by the
+	 * *rte_cryptodev_enqueue_burst* if using session-less APIs.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_SYM_HASH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op_data structure.
+	 */
+};
+
+/** Crypto transform types */
+enum rte_crypto_xform_type {
+	RTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
+	RTE_CRYPTO_XFORM_AUTH,			/**< Authentication xform */
+	RTE_CRYPTO_XFORM_CIPHER			/**< Cipher xform */
+};
+
+/**
+ * Crypto transform structure.
+ *
+ * This is used to specify the crypto transforms required. Multiple transforms
+ * can be chained together to specify a chain of transforms, such as
+ * authentication then cipher, or cipher then authentication. Each transform
+ * structure can hold a single transform; the type field is used to specify
+ * which transform is contained within the union. */
+struct rte_crypto_xform {
+	struct rte_crypto_xform *next; /**< next xform in chain */
+
+	enum rte_crypto_xform_type type; /**< xform type */
+	union {
+		struct rte_crypto_auth_xform auth;	/**< Authentication / hash xform */
+		struct rte_crypto_cipher_xform cipher;	/**< Cipher xform */
+	};
+};
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has a session structure attached for its immutable parameters, or
+ * if all operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. This request is used with the rte_cryptodev_enqueue_burst()
+ * call for performing cipher, hash, or combined hash and cipher operations.
+ */
+struct rte_crypto_op_data {
+	enum rte_crypto_op_sess_type type;
+	/**< Session based or session-less crypto operation */
+
+	struct rte_mbuf *dst;
+	/**< Destination mbuf for processed data; may be NULL if the
+	 * operation is done in place */
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct rte_crypto_xform *xform;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location. */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_SYM_HASH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_SYM_HASH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of
+			  * operation, this field specifies the start of the AAD data in
+			  * the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer that
+			  * the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field should
+			  * be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_SYM_HASH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member is set, this is a
+		 * pointer to the location where the digest result should be
+		 * inserted (in the case of digest generation) or where the
+		 * purported digest exists (in the case of digest
+		 * verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_auth_xform structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up for
+		 * the session in the @ref rte_crypto_auth_xform structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_SYM_HASH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by the
+		 *   implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   padding to round this up to the nearest multiple of the
+		 *   block size (16 bytes).  Padding will be added by the
+		 *   implementation.
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+	} additional_auth; /**< Additional authentication parameters */
+
+	struct rte_mempool *pool;	/**< mempool used to allocate crypto op */
+};
+
+
+
+/** Private data for a crypto operation mempool, recording the maximum
+ * number of xforms for which space is reserved with each operation */
+struct crypto_op_pool_private {
+	unsigned max_nb_xforms;
+};
+
+
+/**
+ * Create a mempool of crypto operations, with space reserved with each
+ * operation for a chain of up to *nb_xforms* transforms (session-less use).
+ */
+extern struct rte_mempool *
+rte_crypto_op_pool_create(const char *name, unsigned nb_ops,
+		unsigned cache_size, unsigned nb_xforms, int socket_id);
+
+
+/**
+ * Reset the fields of a crypto operation data structure to their default
+ * (session-less) values.
+ *
+ * @param op
+ *   The crypto operation to be reset.
+ */
+static inline void
+__rte_crypto_op_reset(struct rte_crypto_op_data *op)
+{
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+}
+
+static inline struct rte_crypto_op_data *
+__rte_crypto_op_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_crypto_op_data *)buf;
+}
+
+/**
+ * Allocate a crypto operation structure, which is used to define the crypto
+ * operation processing which is to be done on a packet.
+ *
+ * @param	mp	The mempool from which the operation is allocated
+ *
+ * @return
+ *   Pointer to the allocated crypto operation, or NULL on failure.
+ */
+static inline struct rte_crypto_op_data *
+rte_crypto_op_alloc(struct rte_mempool *mp)
+{
+	struct rte_crypto_op_data *op = __rte_crypto_op_raw_alloc(mp);
+
+	if (op != NULL)
+		__rte_crypto_op_reset(op);
+	return op;
+}
+
+static inline int
+rte_crypto_op_bulk_alloc(struct rte_mempool *mp,
+		struct rte_crypto_op_data **ops,
+		unsigned nb_ops)
+{
+	void *objs[nb_ops];
+	unsigned i;
+
+	if (rte_mempool_get_bulk(mp, objs, nb_ops) < 0)
+		return -1;
+
+	for (i = 0; i < nb_ops; i++) {
+		ops[i] = objs[i];
+		__rte_crypto_op_reset(ops[i]);
+	}
+
+	return nb_ops;
+}
+
+static inline struct rte_crypto_op_data *
+rte_crypto_op_alloc_sessionless(struct rte_mempool *mp, unsigned nb_xforms)
+{
+	struct rte_crypto_op_data *op = NULL;
+	struct rte_crypto_xform *xform = NULL;
+	struct crypto_op_pool_private *priv_data =
+					(struct crypto_op_pool_private *)
+					rte_mempool_get_priv(mp);
+
+	/* at least one xform is required (the chain-linking loop below
+	 * assumes this) and no more than the pool has reserved space for */
+	if (nb_xforms == 0 || nb_xforms > priv_data->max_nb_xforms)
+		return op;
+
+	op = __rte_crypto_op_raw_alloc(mp);
+	if (op != NULL) {
+		__rte_crypto_op_reset(op);
+
+		xform = op->xform = (struct rte_crypto_xform *)(op + 1);
+
+		do {
+			xform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;
+			xform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;
+		} while (xform);
+	}
+	return op;
+}
+
+
+/**
+ * Free a crypto operation structure, returning it to its originating mempool
+ *
+ * @param	op	Crypto operation data structure to be freed
+ */
+static inline void
+rte_crypto_op_free(struct rte_crypto_op_data *op)
+{
+	if (op != NULL)
+		rte_mempool_put(op->pool, op);
+}
+
+
+/**
+ * Attach a session to a crypto operation, marking the operation as
+ * session-based.
+ *
+ * @param	op	Crypto operation data structure
+ * @param	sess	Initialised session context to attach
+ */
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op_data *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
diff --git a/lib/librte_cryptodev/rte_crypto_version.map b/lib/librte_cryptodev/rte_crypto_version.map
new file mode 100644
index 0000000..c93fcad
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto_version.map
@@ -0,0 +1,40 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_count;
+	rte_cryptodev_configure;
+	rte_cryptodev_start;
+	rte_cryptodev_stop;
+	rte_cryptodev_close;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+	rte_cryptodev_queue_pair_count;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_info_get;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_enqueue_burst;
+	rte_cryptodev_dequeue_burst;
+	rte_cryptodev_create_crypto_op;
+	rte_cryptodev_crypto_op_free;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+
+	rte_cryptodev_pmd_get_dev;
+	rte_cryptodev_pmd_get_named_dev;
+	rte_cryptodev_pmd_is_valid_dev;
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_virtual_dev_init;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_attach;
+	rte_cryptodev_pmd_detach;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_socket_id;
+	rte_cryptodev_pmd_callback_process;
+
+	local: *;
+};
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..d45feb0
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,1126 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+#include "rte_cryptodev_pmd.h"
+
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+
+struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= NULL,
+		.nb_devs		= 0,
+		.max_devs		= RTE_CRYPTO_MAX_DEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;                /**< Callback address */
+	void *cb_arg;                           /**< Parameter for callback */
+	enum rte_cryptodev_event_type event;          /**< Interrupt event type */
+	uint32_t active;                        /**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+int
+rte_cryptodev_get_dev_id(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (strcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0 &&
+				rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED)
+			return i;
+
+	return -1;
+}
+
+uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
+uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached ==
+					RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (dev->pci_dev)
+		return dev->pci_dev->numa_node;
+	else
+		return 0;
+}
+
+static inline void
+rte_cryptodev_data_alloc(int socket_id)
+{
+	const unsigned flags = 0;
+	const struct rte_memzone *mz;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve("rte_cryptodev_data",
+				cryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data),
+				socket_id, flags);
+	} else
+		mz = rte_memzone_lookup("rte_cryptodev_data");
+	if (mz == NULL)
+		rte_panic("Cannot allocate memzone for the crypto device data");
+
+	cryptodev_globals.data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(cryptodev_globals.data, 0,
+				cryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data));
+}
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached == RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_CRYPTO_MAX_DEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	uint8_t dev_id;
+	struct rte_cryptodev *cryptodev;
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_CRYPTO_MAX_DEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	if (cryptodev_globals.data == NULL)
+		rte_cryptodev_data_alloc(socket_id);
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+	cryptodev->data = &cryptodev_globals.data[dev_id];
+	snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s", name);
+	cryptodev->data->dev_id = dev_id;
+	cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+	cryptodev->pmd_type = type;
+	cryptodev_globals.nb_devs++;
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc("%s private structure",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE, rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%u device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/* Register PCI driver for physical device initialisation during
+	 * PCI probing */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+
+	return 0;
+}
+
+
+int
+rte_cryptodev_pmd_attach(const char *devargs __rte_unused,
+			uint8_t *dev_id __rte_unused)
+{
+	RTE_LOG(ERR, EAL, "Hotplug support isn't enabled\n");
+	return -1;
+}
+
+int
+rte_cryptodev_pmd_detach(uint8_t dev_id __rte_unused,
+			char *name __rte_unused)
+{
+	RTE_LOG(ERR, EAL, "Hotplug support isn't enabled\n");
+	return -1;
+}
+
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs, int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	uint16_t old_nb_queues;
+	void **qp;
+	unsigned i;
+
+	/* Validate parameters before dereferencing dev */
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	old_nb_queues = dev->data->nb_queue_pairs;
+
+	CDEV_LOG_DEBUG("Setup %d queues pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > (dev_info.max_queue_pairs)) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return -EINVAL;
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+	} else { /* re-configure */
+		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release, -ENOTSUP);
+
+		qp = dev->data->queue_pairs;
+
+		for (i = nb_qpairs; i < old_nb_queues; i++)
+			(*dev->dev_ops->queue_pair_release)(dev, i);
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
+
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	return rte_crypto_session_pool_create(dev, config->session_mp.nb_objs,
+			config->session_mp.cache_size, config->socket_id);
+}
+
+static void
+rte_cryptodev_config_restore(uint8_t dev_id __rte_unused)
+{
+}
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	rte_cryptodev_config_restore(dev_id);
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int retval;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -1;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* We can't close the device if there are outstanding sessions in
+	 * existence */
+	if (dev->data->session_pool != NULL) {
+		if (!rte_mempool_full(dev->data->session_pool)) {
+			CDEV_LOG_ERR("dev_id=%u close failed, session mempool "
+					"has sessions still in use, free "
+					"all sessions before calling close",
+					(unsigned)dev_id);
+			return -ENOTEMPTY;
+		}
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	retval = (*dev->dev_ops->dev_close)(dev);
+
+	if (retval < 0)
+		return retval;
+
+	dev->data->dev_started = 0;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (dev_id >= cryptodev_globals.nb_devs) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+static void
+rte_crypto_session_init(struct rte_mempool *mp,
+		void *opaque_arg,
+		void *_sess,
+		__rte_unused unsigned i)
+{
+	struct rte_cryptodev_session *sess = _sess;
+	struct rte_cryptodev *dev = opaque_arg;
+
+	memset(sess, 0, mp->elt_size);
+
+	sess->dev_id = dev->data->dev_id;
+	sess->type = dev->dev_type;
+	sess->mp = mp;
+
+	if (dev->dev_ops->session_initialize)
+		(*dev->dev_ops->session_initialize)(mp, sess->_private);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id)
+{
+	char mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	unsigned priv_sess_size;
+
+	unsigned n = snprintf(mp_name, sizeof(mp_name), "cdev_%d_sess_mp",
+			dev->data->dev_id);
+	if (n >= sizeof(mp_name)) {
+		CDEV_LOG_ERR("Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);
+	priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
+	if (priv_sess_size == 0) {
+		CDEV_LOG_ERR("%s returned and invalid private session size ",
+						dev->data->name);
+		return -ENOMEM;
+	}
+
+	unsigned elt_size = sizeof(struct rte_cryptodev_session) + priv_sess_size;
+
+	dev->data->session_pool = rte_mempool_lookup(mp_name);
+	if (dev->data->session_pool != NULL) {
+		if (dev->data->session_pool->elt_size != elt_size ||
+				dev->data->session_pool->cache_size < obj_cache_size ||
+				dev->data->session_pool->size < nb_objs) {
+
+			CDEV_LOG_ERR("%s mempool already exists with different "
+					"initialization parameters", mp_name);
+			dev->data->session_pool = NULL;
+			return -ENOMEM;
+		}
+	} else {
+		dev->data->session_pool = rte_mempool_create(
+				mp_name, /* mempool name */
+				nb_objs, /* number of elements*/
+				elt_size, /* element size*/
+				obj_cache_size, /* Cache size*/
+				0, /* private data size */
+				NULL, /* obj initialization constructor */
+				NULL, /* obj initialization constructor arg */
+				rte_crypto_session_init, /* obj constructor */
+				dev, /* obj constructor arg */
+				socket_id, /* socket id */
+				0); /* flags */
+
+		if (dev->data->session_pool == NULL) {
+			CDEV_LOG_ERR("%s mempool allocation failed", mp_name);
+			return -ENOMEM;
+		}
+	}
+
+	CDEV_LOG_DEBUG("%s mempool created!", mp_name);
+	return 0;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id, struct rte_crypto_xform *xform)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_session *sess;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check the session configure op is supported before allocating, so
+	 * an unsupported op doesn't leak an object from the session pool */
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);
+
+	/* Allocate a session structure from the session pool */
+	if (rte_mempool_get(dev->data->session_pool, (void **)&sess)) {
+		CDEV_LOG_ERR("Couldn't get object from session mempool");
+		return NULL;
+	}
+
+	if (dev->dev_ops->session_configure(dev, xform, sess->_private) ==
+			NULL) {
+		CDEV_LOG_ERR("dev_id %d failed to configure session details",
+				dev_id);
+
+		/* Return session to mempool */
+		rte_mempool_put(sess->mp, (void *)sess);
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id, struct rte_cryptodev_session *sess)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return sess;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check the session belongs to this device type */
+	if (sess->type != dev->dev_type)
+		return sess;
+
+	/* Let device implementation clear session material */
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
+	dev->dev_ops->session_clear(dev, (void *)sess->_private);
+
+	/* Return session to mempool */
+	rte_mempool_put(sess->mp, (void *)sess);
+
+	return NULL;
+}
+
+
+static void
+rte_crypto_op_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_crypto_op_data *op_data = _op_data;
+
+	memset(op_data, 0, mp->elt_size);
+
+	op_data->pool = mp;
+}
+
+static void
+rte_crypto_op_pool_init(__rte_unused struct rte_mempool *mp,
+		__rte_unused void *opaque_arg)
+{
+}
+
+struct rte_mempool *
+rte_crypto_op_pool_create(const char *name, unsigned size,
+		unsigned cache_size, unsigned nb_xforms, int socket_id)
+{
+	struct crypto_op_pool_private *priv_data = NULL;
+
+	unsigned elt_size = sizeof(struct rte_crypto_op_data) +
+			(sizeof(struct rte_crypto_xform) * nb_xforms);
+
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+
+	if (mp != NULL) {
+		priv_data = (struct crypto_op_pool_private *)
+				rte_mempool_get_priv(mp);
+
+		if (priv_data->max_nb_xforms < nb_xforms ||
+				mp->elt_size != elt_size ||
+				mp->cache_size < cache_size ||
+				mp->size < size) {
+			CDEV_LOG_ERR("%s mempool already exists with "
+					"incompatible initialisation parameters",
+					name);
+			return NULL;
+		}
+		CDEV_LOG_DEBUG("%s mempool already exists, reusing!", name);
+		return mp;
+	}
+
+	mp = rte_mempool_create(
+			name,				/* mempool name */
+			size,				/* number of elements*/
+			elt_size,			/* element size*/
+			cache_size,			/* Cache size*/
+			sizeof(struct crypto_op_pool_private),	/* private data size */
+			rte_crypto_op_pool_init,	/* pool initialisation constructor */
+			NULL,				/* pool initialisation constructor argument */
+			rte_crypto_op_init,		/* obj constructor */
+			NULL,				/* obj constructor argument */
+			socket_id,			/* socket id */
+			0);				/* flags */
+
+	if (mp == NULL) {
+		CDEV_LOG_ERR("failed to allocate %s mempool", name);
+		return NULL;
+	}
+
+	priv_data = (struct crypto_op_pool_private *)rte_mempool_get_priv(mp);
+
+	priv_data->max_nb_xforms = nb_xforms;
+
+	CDEV_LOG_DEBUG("%s mempool created!", name);
+	return mp;
+}
+
+
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..e436356
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,592 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "stddef.h"
+
+#include "rte_crypto.h"
+#include "rte_dev.h"
+
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_AESNI_MB_PMD = 1,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+
+/**  Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+	uint16_t max_queue_pairs;		/**< Maximum number of queue
+						* pairs supported by device.
+						*/
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User specified parameter to be passed to the user's
+ *			callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure is
+ * used for RDTSC counters for measuring crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;		/**< Accumulated time processing operations */
+	uint64_t t_min;			/**< Min time */
+	uint64_t t_max;			/**< Max time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf;	/**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Options arguments for device.
+ *
+ * @return
+ * - On successful creation of the cryptodev the device index is returned,
+ *   which will be between 0 and rte_cryptodev_count() - 1.
+ * - In the case of a failure, returns -1.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
+
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+extern int
+rte_cryptodev_get_dev_id(const char *name);
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+extern uint8_t
+rte_cryptodev_count(void);
+
+/**
+ * Get the total number of crypto devices of a specified type that have
+ * been successfully initialised.
+ *
+ * @param	type	Type of crypto device to count.
+ *
+ * @return
+ *   - The total number of usable crypto devices of the given type.
+ */
+extern uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+
+/**
+ * Return the NUMA socket to which a device is connected
+ *
+ * @param dev_id
+ *   The identifier of the device
+ * @return
+ *   The NUMA socket id to which the device is connected or
+ *   a default of zero if the socket could not be determined.
+ *   -1 if the dev_id value is out of range.
+ */
+extern int
+rte_cryptodev_socket_id(uint8_t dev_id);
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;	/**< Number of queue pairs to configure
+					* on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< l-core object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
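+ *
+ * A minimal usage sketch; the device id, queue pair count and session
+ * mempool sizing below are illustrative values, not defaults:
+ * @code
+ *	struct rte_cryptodev_config conf = {
+ *		.socket_id = SOCKET_ID_ANY,
+ *		.nb_queue_pairs = 2,
+ *		.session_mp = { .nb_objs = 2048, .cache_size = 64 },
+ *	};
+ *
+ *	if (rte_cryptodev_configure(dev_id, &conf) < 0)
+ *		rte_exit(EXIT_FAILURE, "Failed to configure cryptodev %u",
+ *				dev_id);
+ * @endcode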
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
+
+/**
+ * Start a device.
+ *
+ * The device start step is the last one and consists of setting the configured
+ * offload features and starting the device's processing units.
+ * On success, all basic functions exported by the API (enqueue/dequeue,
+ * statistics, and so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start()
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ */
+extern int
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a queue pair for a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pair to set up. The
+ *				value must be in the range [0, nb_queue_pair
+ *				- 1] previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the DMA memory allocated for the receive
+ *				queue pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
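+ *
+ * For example, assuming an already configured device; the descriptor count
+ * below is an illustrative choice:
+ * @code
+ *	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 4096 };
+ *
+ *	if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
+ *			SOCKET_ID_ANY) < 0)
+ *		rte_exit(EXIT_FAILURE, "Failed to setup queue pair");
+ * @endcode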
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+/**
+ * Start a specified queue pair of a device. It is used
+ * when deferred_start flag of the specified queue is true.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop specified queue pair of a device
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
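+ *
+ * For example:
+ * @code
+ *	struct rte_cryptodev_stats stats;
+ *
+ *	if (rte_cryptodev_stats_get(dev_id, &stats) == 0)
+ *		printf("enqueued: %"PRIu64" dequeued: %"PRIu64"\n",
+ *				stats.enqueued_count, stats.dequeued_count);
+ * @endcode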
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		Event interested.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
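+ *
+ * A sketch of registering a handler for error events; the callback body is
+ * illustrative:
+ * @code
+ *	static void
+ *	error_cb(uint8_t dev_id, enum rte_cryptodev_event_type event,
+ *			void *cb_arg)
+ *	{
+ *		printf("error event on cryptodev %u\n", dev_id);
+ *	}
+ *
+ *	rte_cryptodev_callback_register(dev_id, RTE_CRYPTODEV_EVENT_ERROR,
+ *			error_cb, NULL);
+ * @endcode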
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		Event interested.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;	/**< Pointer to PMD dequeue function. */
+	enqueue_pkt_burst_t enqueue_burst;	/**< Pointer to PMD enqueue function. */
+
+	const struct rte_cryptodev_driver *driver;	/**< Driver for this device */
+	struct rte_cryptodev_data *data;		/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;		/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;			/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;		/**< Crypto device type */
+	enum pmd_type pmd_type;				/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;	/**< Flag indicating the device is attached */
+};
+
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+/**
+ *
+ * The data part, with no function pointers, associated with each crypto device.
+ *
+ * This structure is safe to place in shared memory to be common among different
+ * processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;				/**< Device ID for this instance */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];	/**< Unique identifier name */
+
+	uint8_t dev_started : 1;		/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;	/**< Session memory pool */
+	void **queue_pairs;		/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;	/**< Number of device queue pairs. */
+
+	void *dev_private;		/**< PMD-specific private data */
+};
+
+extern struct rte_cryptodev *rte_cryptodevs;
+
+/**
+ *
+ * Dequeue a burst of processed packets from a queue of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain in the queue. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_cryptodev_dequeue_burst() function
+ * until a value less than *nb_pkts* is returned.
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
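+ *
+ * A typical polling loop on queue pair 0; the burst size and the
+ * process_crypto_result() handler are illustrative names:
+ * @code
+ *	struct rte_mbuf *pkts[32];
+ *	uint16_t i, nb_rx;
+ *
+ *	nb_rx = rte_cryptodev_dequeue_burst(dev_id, 0, pkts, 32);
+ *	for (i = 0; i < nb_rx; i++)
+ *		process_crypto_result(pkts[i]);
+ * @endcode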
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place packets
+ * on the queue pair *qp_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of packets it
+ * actually sent. A return value equal to *nb_pkts* means that all packets
+ * have been sent.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which packets
+ *				are to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to process.
+ * @param	nb_pkts		The number of packets to process.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue pair is full or has been filled up.
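+ *
+ * For example, a submission loop that handles a full queue pair by
+ * retrying; the mbufs are assumed to have crypto operations attached:
+ * @code
+ *	uint16_t sent = 0;
+ *
+ *	while (sent < nb_pkts)
+ *		sent += rte_cryptodev_enqueue_burst(dev_id, 0,
+ *				&pkts[sent], nb_pkts - sent);
+ * @endcode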
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return (*dev->enqueue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialize immutable
+ * parameters of symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst function is
+ * used.  Each mbuf should contain a reference to the session
+ * pointer returned from this function contained within it's crypto_op if a
+ * session-based operation is being provisioned. Memory to contain the session
+ * information is allocated from within mempool managed by the cryptodev.
+ *
+ * The rte_cryptodev_session_free must be called to free allocated
+ * memory when the session is no longer required.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	xform		Crypto transform chain.
+ *
+ * @return
+ *  Pointer to the created session or NULL
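+ *
+ * A minimal sketch; the cipher transform fields are elided here and should
+ * be populated as defined in rte_crypto.h:
+ * @code
+ *	struct rte_crypto_xform cipher_xform = { 0 };
+ *
+ *	... populate cipher_xform with algorithm, operation and key data ...
+ *
+ *	struct rte_cryptodev_session *sess =
+ *		rte_cryptodev_session_create(dev_id, &cipher_xform);
+ *	if (sess == NULL)
+ *		rte_exit(EXIT_FAILURE, "Session creation failed");
+ * @endcode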
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_xform *xform);
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ *
+ * @return
+ *   NULL on successful freeing of session.
+ *   Session pointer on failure to free session.
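+ *
+ * For example:
+ * @code
+ *	sess = rte_cryptodev_session_free(dev_id, sess);
+ *	if (sess != NULL)
+ *		printf("failed to free session\n");
+ * @endcode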
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..9a6271e
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,577 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These APIs are for crypto PMDs only and user applications should not call
+ * them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...) do { \
+	RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n", \
+			__func__, __LINE__, ## args); \
+	} while (0)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...) do { \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			dev, __func__, __LINE__, ## args); \
+	} while (0)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...) do {                        \
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n", \
+				__func__, __LINE__, ## args); \
+	} while (0)
+
+#define CDEV_PMD_TRACE(fmt, args...) do {                        \
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n", dev, __func__, ## args); \
+	} while (0)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+
+/** Generic crypto device session structure */
+struct rte_cryptodev_session {
+	struct {
+		uint8_t dev_id;			/**< Device id session created on */
+		enum rte_cryptodev_type type;	/**< Type of device session created on */
+		struct rte_mempool *mp;		/**< Mempool session allocated from */
+	} __rte_aligned(8);
+
+	char _private[];	/**< Private session material, PMD-specific */
+};
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ *			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
+
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;		/**< Device information array */
+	struct rte_cryptodev_data *data;	/**< Device private data */
+	uint8_t nb_devs;			/**< Number of devices found */
+	uint8_t max_devs;			/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		if (rte_cryptodev_globals->devs[i].attached == RTE_CRYPTODEV_ATTACHED &&
+				strcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0)
+			return &rte_cryptodev_globals->devs[i];
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that the crypto device index refers to a valid, attached device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - 1 if the device index is valid, 0 otherwise.
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures. The size of the pool
+ * is configured at compile-time in the rte_cryptodev.c file.
+ */
+extern struct rte_cryptodev rte_crypto_devices[];
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ * Function used to configure device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ */
+typedef void (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		number of session objects in mempool
+ * @param	obj_cache_size	lcore object cache size, see *rte_ring_create*
+ * @param	socket_id	Socket Id to allocate mempool on.
+ *
+ * @return
+ * - On success returns 0
+ * - On failure returns a negative value
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+
+/**
+ * Get the size of a cryptodev session
+ *
+ * @param	dev		Crypto device pointer
+ *
+ * @return
+ *  - On success returns the size of the session structure for device
+ *  - On failure returns 0
+ */
+typedef unsigned (*cryptodev_get_session_private_size_t)(
+		struct rte_cryptodev *dev);
+
+/**
+ * Initialize a Crypto session's private data on allocation from a mempool.
+ *
+ * @param	mempool		Mempool the session is allocated from
+ * @param	session_private	Pointer to cryptodev's private session structure
+ */
+typedef void (*cryptodev_initialize_session_t)(struct rte_mempool *mempool,
+		void *session_private);
+
+/**
+ * Configure a Crypto session on a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	xform		Single or chain of crypto xforms
+ * @param	session_private	Pointer to cryptodev's private session structure
+ *
+ * @return
+ *  - Returns private session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef void * (*cryptodev_configure_session_t)(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+/**
+ * Free Crypto session.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	session_private	Pointer to the private session data to clear
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		void *session_private);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;		/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;		/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;		/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;		/**< Get count of the queue pairs. */
+
+	cryptodev_get_session_private_size_t session_get_size;	/**< Return size of private session data. */
+	cryptodev_initialize_session_t session_initialize;	/**< Initialization function for private session data */
+	cryptodev_configure_session_t session_configure;	/**< Configure a Crypto session. */
+	cryptodev_free_session_t session_clear;		/**< Clear a Crypto sessions private data. */
+};
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Slot in the rte_crypto_devices array for a new device
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+/**
+ * Attach a new device specified by arguments.
+ *
+ * @param devargs
+ *  A pointer to a string array describing the new device
+ *  to be attached. The string should be a pci address like
+ *  '0000:01:00.0' or virtual device name like 'crypto_pcap0'.
+ * @param dev_id
+ *  A pointer to an identifier actually attached.
+ * @return
+ *  0 on success and dev_id is filled, negative on error
+ */
+extern int
+rte_cryptodev_pmd_attach(const char *devargs, uint8_t *dev_id);
+
+/**
+ * Detach a device specified by identifier.
+ *
+ * @param dev_id
+ *   The identifier of the device to detach.
+ * @param devname
+ *  A pointer to a device name actually detached.
+ * @return
+ *  0 on success and devname is filled, negative on error
+ */
+extern int
+rte_cryptodev_pmd_detach(uint8_t dev_id, char *devname);
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a) register itself as a PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b) complete the initialization sequence if the device is a virtual
+ *		device by calling the rte_cryptodev_init() directly passing a
+ *		NULL parameter for the rte_pci_device structure.
+ *
+ *   @param crypto_drv	crypto_driver structure associated with the crypto
+ *					driver.
+ *   @param type		pmd type
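+ *
+ *   A registration sketch for a hypothetical PCI PMD; all names below are
+ *   illustrative:
+ *   @code
+ *	static struct rte_cryptodev_driver my_crypto_drv = {
+ *		.pci_drv = {
+ *			.name = "my_crypto_pmd",
+ *			.id_table = my_pci_ids,
+ *		},
+ *		.dev_private_size = sizeof(struct my_dev_private),
+ *		.cryptodev_init = my_dev_init,
+ *	};
+ *
+ *	rte_cryptodev_pmd_driver_register(&my_crypto_drv, PMD_PDEV);
+ *   @endcode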
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
diff --git a/lib/librte_eal/common/include/rte_common.h b/lib/librte_eal/common/include/rte_common.h
index 3121314..bae4054 100644
--- a/lib/librte_eal/common/include/rte_common.h
+++ b/lib/librte_eal/common/include/rte_common.h
@@ -270,8 +270,23 @@ rte_align64pow2(uint64_t v)
 		_a > _b ? _a : _b; \
 	})
 
+
 /*********** Other general functions / macros ********/
 
+#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		RTE_LOG(ERR, PMD, "Function not supported"); \
+		return retval; \
+	} \
+} while (0)
+
+#define FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		RTE_LOG(ERR, PMD, "Function not supported\n"); \
+		return; \
+	} \
+} while (0)
+
 #ifdef __SSE2__
 #include <emmintrin.h>
 /**
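
The intended use of these checks mirrors the ethdev wrappers they are
lifted from, e.g. guarding an optional dev_ops callback before invoking
it. A sketch (the dev_ops field shown follows the ethdev naming
convention and is illustrative only):

static int
dev_configure_wrapper(struct rte_cryptodev *dev)
{
	/* Return -ENOTSUP if the driver left this callback unset. */
	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
	return (*dev->dev_ops->dev_configure)(dev);
}
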
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index f36a792..948cc0a 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -115,6 +115,20 @@ enum rte_lcore_role_t rte_eal_lcore_role(unsigned lcore_id);
  */
 enum rte_proc_type_t rte_eal_process_type(void);
 
+#define PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_LOG(ERR, PMD, "Cannot run in secondary processes\n"); \
+		return; \
+	} \
+} while (0)
+
+#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_LOG(ERR, PMD, "Cannot run in secondary processes\n"); \
+		return retval; \
+	} \
+} while (0)
+
 /**
  * Request iopl privilege for all RPL.
  *
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..40e8d43 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment.
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -104,7 +114,7 @@ struct rte_memseg {
 	 /**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
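
The generalised attribute helpers allow declarations such as the
following sketch (structure contents are illustrative):

/* A 16-byte aligned descriptor and a packed on-wire header, using the
 * new helpers instead of open-coded __attribute__ forms. */
struct desc {
	uint64_t addr;
	uint64_t len;
} __rte_aligned(16);

struct wire_hdr {
	uint8_t type;
	uint32_t length;	/* __rte_packed removes the 3-byte hole */
} __rte_packed;
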
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b309309..bff6744 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -77,36 +77,6 @@
 #define PMD_DEBUG_TRACE(fmt, args...)
 #endif
 
-/* Macros for checking for restricting functions to primary instance only */
-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define PROC_PRIMARY_OR_RET() do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for invalid function pointers in dev_ops structure */
-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
-	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define FUNC_PTR_OR_RET(func) do { \
-	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
-		return; \
-	} \
-} while (0)
-
 /* Macros to check for valid port */
 #define VALID_PORTID_OR_ERR_RET(port_id, retval) do {		\
 	if (!rte_eth_dev_is_valid_port(port_id)) {		\
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index c18b438..b7a2498 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -271,6 +271,7 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
 const char *rte_get_tx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
+	case PKT_TX_CRYPTO_OP: return "PKT_TX_CRYPTO_OP";
 	case PKT_TX_VLAN_PKT: return "PKT_TX_VLAN_PKT";
 	case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
 	case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index d7c9030..281486d 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -98,14 +98,16 @@ extern "C" {
 #define PKT_RX_FDIR_ID       (1ULL << 13) /**< FD id reported if FDIR match. */
 #define PKT_RX_FDIR_FLX      (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
 #define PKT_RX_QINQ_PKT      (1ULL << 15)  /**< RX packet with double VLAN stripped. */
+#define PKT_RX_CRYPTO_DIGEST_BAD (1ULL << 16) /**< Crypto hash digest verification failed. */
 /* add new RX flags here */
 
 /* add new TX flags here */
 
+#define PKT_TX_CRYPTO_OP	(1ULL << 48) /**< Valid Crypto Operation attached to mbuf */
 /**
  * Second VLAN insertion (QinQ) flag.
  */
-#define PKT_TX_QINQ_PKT    (1ULL << 49)   /**< TX packet with double VLAN inserted. */
+#define PKT_TX_QINQ_PKT		(1ULL << 49) /**< TX packet with double VLAN inserted. */
 
 /**
  * TCP segmentation offload. To enable this offload feature for a
@@ -728,6 +730,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque accelerator operation declaration */
+struct rte_crypto_op_data;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -841,6 +846,8 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+	/** Crypto accelerator operation. */
+	struct rte_crypto_op_data *crypto_op;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
@@ -1622,6 +1629,33 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address of the data in the mbuf.
+ *
+ * Before using this macro, the user must ensure that the first segment
+ * of the mbuf is large enough to read its data at the given offset.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) ((phys_addr_t)((char *)(m)->buf_physaddr + (m)->data_off) + (o))
+
+/**
+ * A macro that returns the physical address of the data in the mbuf.
+ *
+ * The returned pointer is cast to type t. Before using this
+ * function, the user must ensure that m_headlen(m) is large enough to
+ * read its data.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate address from.
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
@@ -1790,6 +1824,23 @@ static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
  */
 void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len);
 
+
+/**
+ * Attach a crypto operation to a mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param op
+ *   The crypto operation data structure to attach.
+ */
+static inline void
+rte_pktmbuf_attach_crypto_op(struct rte_mbuf *m, struct rte_crypto_op_data *op)
+{
+	m->crypto_op = op;
+	m->ol_flags |= PKT_TX_CRYPTO_OP;
+}
+
 #ifdef __cplusplus
 }
 #endif
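
Putting the new flag, field and helper together, the expected usage
around a crypto device enqueue/dequeue cycle is roughly as follows
(sketch only; the enqueue/dequeue calls belong to the cryptodev API
from patch 1 and are elided):

/* "m" is an rte_mbuf, "op" a prepared rte_crypto_op_data. */
rte_pktmbuf_attach_crypto_op(m, op);	/* sets PKT_TX_CRYPTO_OP */

/* ... enqueue m on a crypto device queue pair, then dequeue ... */

if (m->ol_flags & PKT_RX_CRYPTO_DIGEST_BAD) {
	/* Digest verification failed; drop the packet. */
	rte_pktmbuf_free(m);
}
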
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9e1909e..4a3c41b 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -114,6 +114,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lcryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH 2/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-10-02 23:01 [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-10-02 23:01 ` Declan Doherty
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 3/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-02 23:01 UTC (permalink / raw)
  To: dev

From: John Griffin <john.griffin@intel.com>

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.

This patch depends on a QAT PF driver which may be downloaded from
01.org (please see the file docs/guides/cryptodevs/qat.rst contained in
a subsequent patch in this patchset).

This is a limited patchset which supports a chain of cipher and hash
operations; the following algorithms are supported:

Cipher algorithms:
  - RTE_CRYPTO_SYM_CIPHER_AES128_CBC
  - RTE_CRYPTO_SYM_CIPHER_AES256_CBC
  - RTE_CRYPTO_SYM_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_SYM_HASH_SHA1_HMAC
  - RTE_CRYPTO_SYM_HASH_SHA256_HMAC
  - RTE_CRYPTO_SYM_HASH_SHA512_HMAC

Some limitations of this patchset, which shall be addressed in a
subsequent release (a short usage sketch follows this list):
 - Chained mbufs are not supported.
 - Hash only is not supported.
 - Cipher only is not supported.
 - Only in-place is currently supported (destination address is
   the same as source address).
 - Only supports session-oriented API implementation (session-less
   APIs are not supported).

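To make the supported mode concrete, a session for this PMD chains a
cipher xform into an auth xform, along the lines of the sketch below.
The names marked as assumed are indicative only; the authoritative
layout is the xform structure introduced in patch 1.

/* Indicative sketch of a cipher->hash xform chain; everything except
 * the two algorithm constants is an assumed spelling. */
struct rte_crypto_xform cipher_xform = { 0 };	/* assumed type name */
struct rte_crypto_xform auth_xform = { 0 };

cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;	/* assumed */
cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES128_CBC;
cipher_xform.next = &auth_xform;

auth_xform.type = RTE_CRYPTO_XFORM_AUTH;	/* assumed */
auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
auth_xform.next = NULL;
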
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |  16 +-
 config/common_linuxapp                             |  14 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  37 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 173 +++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 +++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 +++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 306 +++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 125 +++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 576 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 505 ++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    | 111 ++++
 drivers/crypto/qat/qat_logs.h                      |  78 +++
 drivers/crypto/qat/qat_qp.c                        | 372 +++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   5 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 130 +++++
 mk/rte.app.mk                                      |   3 +
 18 files changed, 3234 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 3313a8e..a9ac5cb 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -154,6 +154,20 @@ CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
 CONFIG_RTE_MAX_CRYPTOPORTS=32
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 4ba0299..be38822 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -153,6 +153,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTO_MAX_XFORM_CHAIN_LENGTH=2
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..9529f30
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,37 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..d2b79c6
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,173 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) = (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
+#endif
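
As a worked example of the size macros above, a 16K ring carrying
64-byte messages resolves as follows (plain arithmetic on the
constants, no hardware access involved):

uint32_t ring_bytes = ADF_SIZE_TO_RING_SIZE_IN_BYTES(ADF_RING_SIZE_16K);
				/* (1 << 7) << 7 = 16384 bytes */
uint32_t msg_bytes = ADF_MSG_SIZE_TO_BYTES(ADF_MSG_SIZE_64);
				/* 2 << 5 = 64 bytes */
uint32_t max_inflight = ADF_MAX_INFLIGHTS(ADF_RING_SIZE_16K,
		ADF_MSG_SIZE_64);
				/* 16384/64 - 1 = 255; one slot is held
				 * back, the usual full-vs-empty ring
				 * convention */
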
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..cc96d45
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
+
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
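
For instance, a flat-buffer (non-SGL) request with a 64-bit content
descriptor pointer is flagged and marked valid using only the macros
above, as in this sketch:

struct icp_qat_fw_comn_req_hdr hdr = { 0 };

/* CD field carries a 64-bit CD address; data pointer is flat. */
hdr.comn_req_flags = ICP_QAT_FW_COMN_FLAGS_BUILD(
		QAT_COMN_CD_FLD_TYPE_64BIT_ADR, QAT_COMN_PTR_TYPE_FLAT);

/* Mark the request valid: sets bit 7 of hdr_flags. */
ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr, ICP_QAT_FW_COMN_REQ_FLAG_SET);
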
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..7671465
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#endif
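
By way of illustration, the flag word for a plain cipher/hash request,
with the hardware comparing the ICV rather than returning it, could be
built as below. The particular values are illustrative and not
necessarily what qat_crypto.c sets:

uint16_t la_flags = ICP_QAT_FW_LA_FLAGS_BUILD(
		0,					/* no ZUC proto */
		ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS,
		ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER,
		ICP_QAT_FW_LA_NO_PROTO,
		ICP_QAT_FW_LA_CMP_AUTH_RES,		/* hw checks ICV */
		ICP_QAT_FW_LA_NO_RET_AUTH_RES,
		ICP_QAT_FW_LA_NO_UPDATE_STATE,
		ICP_QAT_FW_CIPH_IV_64BIT_PTR,
		ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP,
		ICP_QAT_FW_LA_PARTIAL_NONE);
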
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..4d8fe38
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,306 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	((((mode) & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	(((algo) & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	((((algo) >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	((((((algo) == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	((algo) == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	(((cmp_len) & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
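+/*
+ * Illustrative expansion (informational, not part of the interface):
+ * ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+ * ICP_QAT_HW_AUTH_ALGO_SHA256, 32) = (1 << 4) | 4 | (32 << 8) = 0x2014.
+ */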
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & ~((n) - 1))
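+/* e.g. QAT_HW_ROUND_UP(20, 8) = (20 + 7) & ~7 = 24 */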
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	((((mode) & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	(((algo) & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	(((convert) & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	(((dir) & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
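+/*
+ * Illustrative expansion (informational): AES128-CBC encrypt with no key
+ * conversion gives (1 << 4) | 3 | (0 << 9) | (0 << 8) = 0x13.
+ */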
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..fb3a685
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,125 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
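+/*
+ * Illustrative expansion (informational): for ICP_QAT_HW_CIPHER_ALGO_AES128
+ * this builds CBC mode (1 << 4), algo 3, key convert (1 << 9) and
+ * decrypt (1 << 8), i.e. 0x313.
+ */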
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf buffers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t add_auth_data_length,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..da6ddcd
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,576 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+	* Redistributions of source code must retain the above copyright
+	  notice, this list of conditions and the following disclaimer.
+	* Redistributions in binary form must reproduce the above copyright
+	  notice, this list of conditions and the following disclaimer in
+	  the documentation and/or other materials provided with the
+	  distribution.
+	* Neither the name of Intel Corporation nor the names of its
+	  contributors may be used to endorse or promote products derived
+	  from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+#include <openssl/aes.h>	/* Needed to calculate pre-compute values */
+
+
+/* Returns the size in bytes of the state1 field in cd_ctrl for each hash
+ * algorithm; this is the digest size rounded up to the nearest quadword. */
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_GALOIS_128_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+}
+
+/* returns digest size in bytes per hash algo */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+}
+
+/* returns block size in bytes per hash algo */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+}
+
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
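+
+/*
+ * Note on the helpers above: OpenSSL's SHA*_Transform() runs the compression
+ * function over exactly one block (64 bytes for SHA1/SHA256, 128 bytes for
+ * SHA512) with no padding or length encoding, and copying out of the context
+ * relies on OpenSSL placing the hash state words at the start of the
+ * SHA*_CTX structures. The resulting intermediate state is what the hardware
+ * expects as the HMAC inner/outer precompute; the caller converts it to
+ * big-endian word order.
+ */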
+
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
+	uint8_t digest[qat_hash_get_digest_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+#define HASH_XCBC_PRECOMP_KEY_NUM 3
+
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		static uint8_t qat_aes_xcbc_key_seed[ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ] = {
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+		};
+
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		int x;
+		AES_KEY enc_key;
+
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ, 16);
+		if (in == NULL)
+			return -ENOMEM;
+		rte_memcpy(in, qat_aes_xcbc_key_seed,
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+		for (x = 0; x < HASH_XCBC_PRECOMP_KEY_NUM; x++) {
+			if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+				&enc_key) != 0) {
+				rte_free(in - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+				memset(out - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ, 0,
+					ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+				return -EFAULT;
+			}
+			AES_encrypt(in, out, &enc_key);
+			in += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+			out += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+		}
+		*p_state_len = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		rte_free(in - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+		return 0;
+	} else if ((hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		AES_KEY enc_key;
+
+		memset(p_state_buf, 0, ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ);
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_GALOIS_H_SZ, 16);
+		if (in == NULL)
+			return -ENOMEM;
+		if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+			&enc_key) != 0) {
+			rte_free(in);
+			return -EFAULT;
+		}
+		AES_encrypt(in, out, &enc_key);
+		*p_state_len = ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		rte_free(in);
+		return 0;
+	}
+
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	}
+	rte_memcpy(ipad, auth_key, auth_keylen);
+	rte_memcpy(opad, auth_key, auth_keylen);
+
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/* state len is a multiple of 8, so may be larger than the digest.
+	 * Put the partial hash of opad state_len bytes after state1. */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/*  don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
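+
+/*
+ * Illustrative HMAC precompute layout (values assumed for SHA1): with
+ * state1_size = QAT_HW_ROUND_UP(20, 8) = 24, p_state_buf holds the 20 byte
+ * partial hash of (key ^ ipad) at offset 0 and the partial hash of
+ * (key ^ opad) starting at offset 24.
+ */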
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+			uint8_t *cipherkey, uint32_t cipherkeylen,
+			uint8_t *authkey, uint32_t authkeylen,
+			uint32_t add_auth_data_length,
+			uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t proto = ICP_QAT_FW_LA_NO_PROTO; /* no CCM/GCM/Snow3G */
+	uint16_t state1_size = 0;
+	uint16_t state2_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* CD setup */
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(cdesc->qat_mode,
+			cdesc->qat_cipher_alg, key_convert, cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	/* Do precomputes */
+	if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(XCBC)precompute failed");
+			return -EFAULT;
+		}
+	} else if ((cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			cipherkey, cipherkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_GALOIS_128_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(GCM)precompute failed");
+			return -EFAULT;
+		}
+		/* Write the length of the AAD into bytes 16-19 of state2,
+		 * in big-endian format; this field is 8 bytes long. */
+		*(uint32_t *)&(hash->sha.state1[ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					 ICP_QAT_HW_GALOIS_H_SZ]) =
+			rte_bswap32(add_auth_data_length);
+		proto = ICP_QAT_FW_LA_GCM_PROTO;
+	} else {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1), &state1_size)) {
+			PMD_DRV_LOG(ERR, "(SHA)precompute failed");
+			return -EFAULT;
+		}
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	/* Configure the common header protocol flags */
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, proto);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+	hash_cd_ctrl->inner_state1_sz = state1_size;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_GALOIS_H_SZ +
+						ICP_QAT_HW_GALOIS_LEN_A_SZ +
+						ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_GALOIS_128_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_GALOIS_128_STATE1_SZ);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+
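+	/* Offsets and sizes in the content descriptor are expressed in
+	 * 8-byte quadwords, hence the >> 3 conversions. */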
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8)) >> 3);
+	auth_param->auth_res_sz = digestsize;
+
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+		PMD_DRV_LOG(ERR, "invalid param, only authenticated encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..48961f0
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,505 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift);
+static inline int qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+
+void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
+		void *session)
+{
+	struct qat_session *sess = session;
+	phys_addr_t cd_paddr;
+
+	PMD_INIT_FUNC_TRACE();
+	if (sess != NULL) {
+		/* Preserve the CD physical address across the wipe */
+		cd_paddr = sess->cd_paddr;
+		memset(sess, 0, qat_crypto_sym_get_session_private_size(dev));
+		sess->cd_paddr = cd_paddr;
+	}
+}
+
+static int
+qat_get_cmd_id(const struct rte_crypto_xform *xform)
+{
+	/* Cipher-only (ICP_QAT_FW_LA_CMD_CIPHER) and authentication-only
+	 * (ICP_QAT_FW_LA_CMD_AUTH) commands are not yet supported, so a
+	 * chain of exactly two xforms is required.
+	 */
+	if (xform->next == NULL)
+		return -1;
+
+	/* Cipher then Authenticate */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+			xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+
+	/* Authenticate then Cipher */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+
+	return -1;
+}
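+
+/*
+ * Illustrative mapping for qat_get_cmd_id() (hypothetical chains): an xform
+ * list of RTE_CRYPTO_XFORM_CIPHER -> RTE_CRYPTO_XFORM_AUTH yields
+ * ICP_QAT_FW_LA_CMD_CIPHER_HASH, the reverse order yields
+ * ICP_QAT_FW_LA_CMD_HASH_CIPHER, and any single-element chain is rejected
+ * with -1.
+ */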
+
+static struct rte_crypto_auth_xform *
+qat_get_auth_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_AUTH)
+			return &xform->auth;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+qat_get_cipher_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_CIPHER)
+			return &xform->cipher;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+
+void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	struct qat_session *session = session_private;
+
+	struct rte_crypto_auth_xform *auth_xform = NULL;
+	struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+	int qat_cmd_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get requested QAT command id */
+	qat_cmd_id = qat_get_cmd_id(xform);
+	if (qat_cmd_id < 0 || qat_cmd_id >= ICP_QAT_FW_LA_CMD_DELIMITER) {
+		PMD_DRV_LOG(ERR, "Unsupported xform chain requested");
+		goto error_out;
+	}
+	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+
+	/* Get cipher xform from crypto xform chain */
+	cipher_xform = qat_get_cipher_xform(xform);
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_SYM_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_SYM_CIPHER_AES_GCM:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		break;
+	case RTE_CRYPTO_SYM_CIPHER_NULL:
+	case RTE_CRYPTO_SYM_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_SYM_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_SYM_CIPHER_AES_ECB:
+	case RTE_CRYPTO_SYM_CIPHER_AES_CTR:
+	case RTE_CRYPTO_SYM_CIPHER_AES_CCM:
+	case RTE_CRYPTO_SYM_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+				cipher_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined cipher algorithm %u specified",
+				cipher_xform->algo);
+		goto error_out;
+	}
+
+	if (cipher_xform->op == RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+
+	/* Get authentication xform from Crypto xform chain */
+	auth_xform = qat_get_auth_xform(xform);
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_SYM_HASH_SHA1_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA256_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA512_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+	case RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		break;
+	case RTE_CRYPTO_SYM_HASH_AES_GCM:
+	case RTE_CRYPTO_SYM_HASH_AES_GMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		/* TODO what about ICP_QAT_HW_AUTH_ALGO_GALOIS_64 ? */
+		break;
+	case RTE_CRYPTO_SYM_HASH_NONE:
+	case RTE_CRYPTO_SYM_HASH_SHA1:
+	case RTE_CRYPTO_SYM_HASH_SHA256:
+	case RTE_CRYPTO_SYM_HASH_SHA512:
+	case RTE_CRYPTO_SYM_HASH_SHA224:
+	case RTE_CRYPTO_SYM_HASH_SHA224_HMAC:
+	case RTE_CRYPTO_SYM_HASH_SHA384:
+	case RTE_CRYPTO_SYM_HASH_SHA384_HMAC:
+	case RTE_CRYPTO_SYM_HASH_MD5:
+	case RTE_CRYPTO_SYM_HASH_MD5_HMAC:
+	case RTE_CRYPTO_SYM_HASH_AES_CCM:
+	case RTE_CRYPTO_SYM_HASH_KASUMI_F9:
+	case RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2:
+	case RTE_CRYPTO_SYM_HASH_AES_CMAC:
+	case RTE_CRYPTO_SYM_HASH_AES_CBC_MAC:
+	case RTE_CRYPTO_SYM_HASH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				auth_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				auth_xform->algo);
+		goto error_out;
+	}
+
+	if (qat_alg_aead_session_create_content_desc(session,
+		cipher_xform->key.data,
+		cipher_xform->key.length,
+		auth_xform->key.data,
+		auth_xform->key.length,
+		auth_xform->add_auth_data_length,
+		auth_xform->digest_length))
+		goto error_out;
+
+	return (struct rte_cryptodev_session *)session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+unsigned qat_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev __rte_unused)
+{
+	return RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+}
+
+
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t nb_pkts_sent = 0;
+	struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	int ret = 0;
+
+	queue = &(tmp_qp->tx_q);
+	while (nb_pkts_sent != nb_pkts) {
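+		/* Reserve an inflight slot before writing the descriptor; on
+		 * overflow the reservation is rolled back and anything already
+		 * queued is submitted via the tail kick below. */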
+		if (rte_atomic16_add_return(&tmp_qp->inflights16, 1) >
+				 queue->max_inflights) {
+			rte_atomic16_sub(&tmp_qp->inflights16, 1);
+			if (nb_pkts_sent == 0)
+				return 0;
+			else
+				goto kick_tail;
+		}
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			(uint8_t *)queue->base_addr + queue->tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			if (nb_pkts_sent == 0)
+				return 0;
+			else
+				goto kick_tail;
+		}
+
+		queue->tail = adf_modulo(queue->tail +
+				queue->msg_size,
+				ADF_RING_SIZE_MODULO(queue->queue_size));
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue->tail);
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
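+	/* Consume responses until the ring's empty-signature sentinel is seen
+	 * or nb_pkts is reached; each slot read is re-marked empty below. */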
+	resp_msg = (struct icp_qat_fw_comn_resp *)((uint8_t *)queue->base_addr + queue->head);
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG && msg_counter != nb_pkts) {
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+						resp_msg->comn_hdr.comn_status)) {
+			rx_mbuf->ol_flags |= PKT_RX_CRYPTO_DIGEST_BAD;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+					queue->msg_size,
+					ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)
+					((uint8_t *)queue->base_addr + queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
+static inline int qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_crypto_op_data *rte_op_data = mbuf->crypto_op;
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	struct icp_qat_fw_la_bulk_req *qat_req;
+
+	if (unlikely(rte_op_data->type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
+				" requests; mbuf (%p) is sessionless.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(rte_op_data->session->type != RTE_CRYPTODEV_QAT_PMD)) {
+		PMD_DRV_LOG(ERR, "Session was not created for this device");
+		return -EINVAL;
+	}
+
+	ctx = (struct qat_session *)rte_op_data->session->_private;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length = qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr = qat_req->comn_mid.src_data_addr
+							= rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = rte_op_data->data.to_cipher.length;
+	cipher_param->cipher_offset = rte_op_data->data.to_cipher.offset;
+	if (rte_op_data->iv.length &&
+		(rte_op_data->iv.length <= sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array, rte_op_data->iv.data,
+							rte_op_data->iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = rte_op_data->iv.phys_addr;
+	}
+	if (rte_op_data->digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(qat_req->comn_hdr.serv_specif_flags,
+					ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = rte_op_data->digest.phys_addr;
+	}
+	auth_param->auth_off = rte_op_data->data.to_hash.offset;
+	auth_param->auth_len = rte_op_data->data.to_hash.length;
+	auth_param->u1.aad_adr = rte_op_data->additional_auth.phys_addr;
+	/* For GCM the AAD length (240 max) was stored big-endian in state2 by
+	 * the precompute; its least significant byte is read here. */
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+		ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
+		auth_param->u2.aad_sz =
+		ALIGN_POW2_ROUNDUP(ctx->cd.hash.sha.state1[ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+							ICP_QAT_HW_GALOIS_H_SZ + 3], 16);
+	}
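+	/* hash_state_sz is expressed in 8-byte (quadword) units */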
+	auth_param->hash_state_sz = (auth_param->u2.aad_sz) >> 3;
+	return 0;
+}
+
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
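+
+/*
+ * adf_modulo(data, shift) is equivalent to data & ((1 << shift) - 1) for
+ * power-of-two ring sizes, e.g. adf_modulo(70, 6) = 6; it keeps the shadow
+ * head/tail offsets within the ring.
+ */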
+
+void qat_crypto_sym_session_init(struct rte_mempool *mp, void *priv_sess)
+{
+	struct qat_session *s = priv_sess;
+
+	PMD_INIT_FUNC_TRACE();
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+int qat_dev_close(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+void qat_dev_info_get(__rte_unused struct rte_cryptodev *dev,
+						struct rte_cryptodev_info *info)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_queue_pairs = ADF_NUM_SYM_QPS_PER_BUNDLE *
+				ADF_NUM_BUNDLES_PER_DEV;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
+
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..c58d833
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,111 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev_pmd.h>
+#include <rte_memzone.h>
+
+/* This macro rounds up a number to be a multiple of
+ * the alignment when the alignment is a power of 2 */
+#define ALIGN_POW2_ROUNDUP(num, align) \
+	(((num) + (align) - 1) & ~((align) - 1))
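+/* e.g. ALIGN_POW2_ROUNDUP(20, 16) = (20 + 15) & ~15 = 32 */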
+
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;	 /* HW queue aka ring offset on bundle */
+};
+
+struct qat_qp {
+	void			*mmap_bar_addr;
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+int qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+void qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+
+extern unsigned
+qat_crypto_sym_get_session_private_size(struct rte_cryptodev *dev);
+
+extern void
+qat_crypto_sym_session_init(struct rte_mempool *mempool, void *priv_sess);
+
+extern void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+extern void
+qat_crypto_sym_clear_session(struct rte_cryptodev *dev, void *session);
+
+
+uint16_t qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..a909f63
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..226e67a
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,372 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+#define ADF_SYM_TX_QUEUE_STARTOFF		2 /* Offset from bundle start to 1st Sym Tx queue */
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *qp_name, uint32_t queue_size, int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(qp_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already allocated for %s", qp_name);
+			return mz;
+		} else {
+			PMD_DRV_LOG(ERR, "Incompatible memzone already allocated %s, "
+					"size %u, socket %d. Requested size %u, socket %u",
+					qp_name, (uint32_t)mz->len, mz->socket_id,
+					queue_size, socket_id);
+			return NULL;
+		}
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					qp_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
+	switch (ms[0].hugepage_sz) {
+	case(RTE_PGSIZE_2M):
+		memzone_flags = RTE_MEMZONE_2MB;
+		break;
+	case(RTE_PGSIZE_1G):
+		memzone_flags = RTE_MEMZONE_1GB;
+		break;
+	case(RTE_PGSIZE_16M):
+		memzone_flags = RTE_MEMZONE_16MB;
+		break;
+	case(RTE_PGSIZE_16G):
+		memzone_flags = RTE_MEMZONE_16GB;
+		break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+		break;
+	}
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(qp_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(qp_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+
+	PMD_INIT_FUNC_TRACE();
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	if ((dev->pci_dev->mem_resource == NULL) ||
+		(dev->pci_dev->mem_resource[0].addr == NULL)) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space (UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >= (ADF_NUM_SYM_QPS_PER_BUNDLE*ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		qat_crypto_sym_qp_release(dev, queue_pair_id);
+		dev->data->queue_pairs[queue_pair_id] = NULL;
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp queue", sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+			queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+			queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%hu", queue_pair_id);
+		goto create_err;
+	}
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	return 0;
+
+create_err:
+	rte_free(qp);
+	return (-EFAULT);
+}
+
+void qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_qp *qp = (struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return;
+	}
+
+	adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+	rte_free(qp);
+}
+
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number, queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number, queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc*desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes, socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr, queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					" 0x%"PRIx64"\n", queue->base_phys_addr);
+		return -EFAULT;
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc, &(queue->queue_size)) != 0) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u ",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return (-EINVAL);
+}
+
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	/* clear only this ring's enable bit; XOR would toggle rather than clear */
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
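+
+/*
+ * Illustrative usage (hypothetical application code): queue pairs are
+ * reached through the generic cryptodev API from patch 1, which
+ * dispatches to qat_crypto_sym_qp_setup() above. nb_descriptors must
+ * lie within [ADF_MIN_SYM_DESC, ADF_MAX_SYM_DESC] and match a
+ * power-of-two ring size accepted by adf_verify_queue_size():
+ *
+ *	struct rte_cryptodev_qp_conf qp_conf = {
+ *		.nb_descriptors = 4096,
+ *	};
+ *
+ *	if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
+ *			rte_socket_id()) < 0)
+ *		rte_exit(EXIT_FAILURE, "QAT qp setup failed\n");
+ */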
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..fcf5bb3
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,5 @@
+DPDK_2.0 {
+	global:
+
+	local: *;
+};
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..49a936f
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,130 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
+static struct rte_cryptodev_ops crypto_qat_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+		/* Crypto related operations */
+		.session_get_size	= qat_crypto_sym_get_session_private_size,
+		.session_configure	= qat_crypto_sym_configure_session,
+		.session_initialize	= qat_crypto_sym_session_init,
+		.session_clear		= qat_crypto_sym_clear_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
+		{
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_drv,
+			struct rte_cryptodev *cryptodev)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+	/* for secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
+
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 4a3c41b..dd48bea 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -144,6 +144,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH 3/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-10-02 23:01 [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 2/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-10-02 23:01 ` Declan Doherty
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 4/6] docs: add getting started guides for multi-buffer pmd and qat pmd Declan Doherty
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-02 23:01 UTC (permalink / raw)
  To: dev

This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.

This PMD depends on Intel's multi-buffer library; see the whitepaper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors" (ref 1) for details of the library's design, and ref 2 to
download the library itself. This initial implementation is limited to
supporting the chained operations "hash then cipher" or "cipher then
hash" for the following cipher and hash algorithms (a sketch of building
such a chain follows the list):

 - RTE_CRYPTO_SYM_CIPHER_AES128_CBC
 - RTE_CRYPTO_SYM_CIPHER_AES192_CBC
 - RTE_CRYPTO_SYM_CIPHER_AES256_CBC
 - RTE_CRYPTO_SYM_HASH_SHA1_HMAC
 - RTE_CRYPTO_SYM_HASH_SHA256_HMAC
 - RTE_CRYPTO_SYM_HASH_SHA512_HMAC
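
As a sketch, a "cipher then hash" chain is built by linking two
rte_crypto_xform structs (field names as used by the session-setup code
in rte_aesni_mb_pmd.c below; the key buffers are placeholders):

  struct rte_crypto_xform cipher_xform = {
          .type = RTE_CRYPTO_XFORM_CIPHER,
          .cipher = {
                  .op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT,
                  .algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC,
                  .key = { .data = aes_key, .length = 16 /* AES128 */ },
          },
  };
  struct rte_crypto_xform auth_xform = {
          .type = RTE_CRYPTO_XFORM_AUTH,
          .auth = {
                  .algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC,
                  .key = { .data = hmac_key, .length = 20 },
          },
  };

  cipher_xform.next = &auth_xform;      /* CIPHER_HASH chain order */
  auth_xform.next = NULL;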

Important note:
Because the multi-buffer library is designed for accelerating IPsec
crypto operations, the digests generated for the HMAC functions are
truncated to the lengths specified by the IPsec RFCs; e.g. RFC 2404
specifies that the HMAC-SHA-1 digest used with IPsec is truncated from
20 to 12 bytes.

Build instructions:
To build DPDK with the AESNI_MB_PMD, the user must first download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where the multi-buffer library was
extracted and built, and finally CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y must
be set in config/common_linuxapp.

Current status: the PMD does not support crypto operations across
chained mbufs, nor cipher-only or hash-only operations.

ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p

ref 2: https://downloadcenter.intel.com/download/22972

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_linuxapp                             |   6 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/aesni_mb/Makefile                   |  67 +++
 drivers/crypto/aesni_mb/aesni_mb_ops.h             | 206 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 632 +++++++++++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     | 295 ++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 210 +++++++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |   5 +
 mk/rte.app.mk                                      |   4 +
 9 files changed, 1426 insertions(+)
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map

diff --git a/config/common_linuxapp b/config/common_linuxapp
index be38822..a3ba0f9 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,6 +166,12 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
 
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS=2048
+
 #
 # Support NIC bypass logic
 #
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 9529f30..26325b0 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -31,6 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile
new file mode 100644
index 0000000..62f51ce
--- /dev/null
+++ b/drivers/crypto/aesni_mb/Makefile
@@ -0,0 +1,67 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_aesni_mb.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_aesni_version.map
+
+# external library include paths
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c
+
+# export include files
+SYMLINK-y-include +=
+
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
new file mode 100644
index 0000000..1188278
--- /dev/null
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -0,0 +1,206 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AESNI_MB_OPS_H_
+#define _AESNI_MB_OPS_H_
+
+#ifndef LINUX
+#define LINUX
+#endif
+
+#include <mb_mgr.h>
+#include <aux_funcs.h>
+#include <gcm_defines.h>
+
+enum aesni_mb_vector_mode {
+	RTE_AESNI_MB_NOT_SUPPORTED = 0,
+	RTE_AESNI_MB_SSE,
+	RTE_AESNI_MB_AVX,
+	RTE_AESNI_MB_AVX2
+};
+
+typedef void (*md5_one_block_t)(void *data, void *digest);
+typedef void (*sha1_one_block_t)(void *data, void *digest);
+typedef void (*sha224_one_block_t)(void *data, void *digest);
+typedef void (*sha256_one_block_t)(void *data, void *digest);
+typedef void (*sha384_one_block_t)(void *data, void *digest);
+typedef void (*sha512_one_block_t)(void *data, void *digest);
+
+typedef void (*aes_keyexp_128_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_192_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_256_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+typedef void (*aes_xcbc_expand_key_t)(void *key, void *exp_k1, void *k2, void *k3);
+
+typedef void (*aesni_gcm_t)(gcm_data *my_ctx_data, u8 *out, const u8 *in,
+		u64 plaintext_len, u8 *iv, const u8 *aad, u64 aad_len,
+		u8 *auth_tag, u64 auth_tag_len);
+
+typedef void (*aesni_gcm_precomp_t)(gcm_data *my_ctx_data, u8 *hash_subkey);
+
+/** Multi-buffer library function pointer table */
+struct aesni_mb_ops {
+	struct {
+		init_mb_mgr_t init_mgr;		/**< Initialise scheduler  */
+		get_next_job_t get_next;	/**< Get next free job structure */
+		submit_job_t submit;		/**< Submit job to scheduler */
+		get_completed_job_t get_completed_job; /**< Get completed job */
+		flush_job_t flush_job;		/**< flush jobs from manager */
+	} job; /**< multi buffer manager functions */
+	struct {
+		struct {
+			md5_one_block_t md5;		/**< MD5 one block hash */
+			sha1_one_block_t sha1;		/**< SHA1 one block hash */
+			sha224_one_block_t sha224;	/**< SHA224 one block hash */
+			sha256_one_block_t sha256;	/**< SHA256 one block hash */
+			sha384_one_block_t sha384;	/**< SHA384 one block hash */
+			sha512_one_block_t sha512;	/**< SHA512 one block hash */
+		} one_block; /**< one block hash functions */
+		struct {
+			aes_keyexp_128_t aes128;	/**< AES128 key expansions */
+			aes_keyexp_192_t aes192;	/**< AES192 key expansions */
+			aes_keyexp_256_t aes256;	/**< AES256 key expansions */
+			aes_xcbc_expand_key_t aes_xcbc;	/**< AES XCBC key expansions */
+		} keyexp;	/**< Key expansion functions */
+	} aux; /**< Auxiliary functions */
+	struct {
+		aesni_gcm_t enc;		/**< GCM encode */
+		aesni_gcm_t dec;		/**< GCM decode */
+		aesni_gcm_precomp_t precomp;	/**< GCM pre-compute */
+	} gcm; /**< GCM functions */
+};
+
+
+static const struct aesni_mb_ops job_ops[] = {
+		[RTE_AESNI_MB_NOT_SUPPORTED] = {
+			.job = { NULL },
+			.aux = {
+				.one_block = { NULL },
+				.keyexp = { NULL }
+			},
+			.gcm = { NULL }
+		},
+		[RTE_AESNI_MB_SSE] = {
+			.job = {
+				init_mb_mgr_sse,
+				get_next_job_sse,
+				submit_job_sse,
+				get_completed_job_sse,
+				flush_job_sse
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_sse,
+					sha1_one_block_sse,
+					sha224_one_block_sse,
+					sha256_one_block_sse,
+					sha384_one_block_sse,
+					sha512_one_block_sse
+				},
+				.keyexp = {
+					aes_keyexp_128_sse,
+					aes_keyexp_192_sse,
+					aes_keyexp_256_sse,
+					aes_xcbc_expand_key_sse
+				}
+			},
+			.gcm = {
+				aesni_gcm_enc_sse,
+				aesni_gcm_dec_sse,
+				aesni_gcm_precomp_sse
+			}
+		},
+		[RTE_AESNI_MB_AVX] = {
+				.job = {
+					init_mb_mgr_avx,
+					get_next_job_avx,
+					submit_job_avx,
+					get_completed_job_avx,
+					flush_job_avx
+				},
+				.aux = {
+					.one_block = {
+						md5_one_block_avx,
+						sha1_one_block_avx,
+						sha224_one_block_avx,
+						sha256_one_block_avx,
+						sha384_one_block_avx,
+						sha512_one_block_avx
+					},
+					.keyexp = {
+						aes_keyexp_128_avx,
+						aes_keyexp_192_avx,
+						aes_keyexp_256_avx,
+						aes_xcbc_expand_key_avx
+					}
+				},
+				.gcm = {
+					aesni_gcm_enc_avx_gen2,
+					aesni_gcm_dec_avx_gen2,
+					aesni_gcm_precomp_avx_gen2
+				}
+		},
+		[RTE_AESNI_MB_AVX2] = {
+				.job = {
+					init_mb_mgr_avx2,
+					get_next_job_avx2,
+					submit_job_avx2,
+					get_completed_job_avx2,
+					flush_job_avx2
+				},
+				.aux = {
+					.one_block = {
+						md5_one_block_avx2,
+						sha1_one_block_avx2,
+						sha224_one_block_avx2,
+						sha256_one_block_avx2,
+						sha384_one_block_avx2,
+						sha512_one_block_avx2
+					},
+					.keyexp = {
+						aes_keyexp_128_avx2,
+						aes_keyexp_192_avx2,
+						aes_keyexp_256_avx2,
+						aes_xcbc_expand_key_avx2
+					}
+				},
+				.gcm = {
+					aesni_gcm_enc_avx_gen4,
+					aesni_gcm_dec_avx_gen4,
+					aesni_gcm_precomp_avx_gen4
+				}
+		},
+};
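+
+/*
+ * Illustrative usage (hypothetical): select the ops for the detected
+ * vector mode and drive the scheduler, assuming the MB_MGR and
+ * JOB_AES_HMAC types from the multi-buffer library headers above:
+ *
+ *	const struct aesni_mb_ops *ops = &job_ops[RTE_AESNI_MB_SSE];
+ *	MB_MGR mgr;
+ *	JOB_AES_HMAC *job;
+ *
+ *	(*ops->job.init_mgr)(&mgr);
+ *	job = (*ops->job.get_next)(&mgr);
+ *	... fill in the job's cipher/hash parameters ...
+ *	job = (*ops->job.submit)(&mgr);	(may return a completed job)
+ */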
+
+
+#endif /* _AESNI_MB_OPS_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
new file mode 100644
index 0000000..281cfa7
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -0,0 +1,632 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/**
+ * Global static parameter used to create a unique name for each AES-NI multi
+ * buffer crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+typedef void (*hash_one_block_t)(void *data, void *digest);
+typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+/**
+ * Calculate the authentication pre-computes
+ *
+ * @param one_block_hash	Function pointer to calculate digest on ipad/opad
+ * @param ipad			Inner pad output byte array
+ * @param opad			Outer pad output byte array
+ * @param hkey			Authentication key
+ * @param hkey_len		Authentication key length
+ * @param blocksize		Block size of selected hash algo
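+ *
+ * The pads follow the standard HMAC construction (RFC 2104),
+ *     HMAC(K, m) = H((K xor opad) || H((K xor ipad) || m)),
+ * so the two one-block hashes precompute the fixed inner and outer
+ * digest states that the multi-buffer scheduler resumes per message.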
+ */
+static void
+calculate_auth_precomputes(hash_one_block_t one_block_hash,
+		uint8_t *ipad, uint8_t *opad,
+		uint8_t *hkey, uint16_t hkey_len,
+		uint16_t blocksize)
+{
+	unsigned i, length;
+
+	uint8_t ipad_buf[blocksize] __rte_aligned(16);
+	uint8_t opad_buf[blocksize] __rte_aligned(16);
+
+	/* Setup inner and outer pads */
+	memset(ipad_buf, HMAC_IPAD_VALUE, blocksize);
+	memset(opad_buf, HMAC_OPAD_VALUE, blocksize);
+
+	/* XOR hash key with inner and outer pads */
+	length = hkey_len > blocksize ? blocksize : hkey_len;
+
+	for (i = 0; i < length; i++) {
+		ipad_buf[i] ^= hkey[i];
+		opad_buf[i] ^= hkey[i];
+	}
+
+	/* Compute partial hashes */
+	(*one_block_hash)(ipad_buf, ipad);
+	(*one_block_hash)(opad_buf, opad);
+
+	/* Clean up stack */
+	memset(ipad_buf, 0, blocksize);
+	memset(opad_buf, 0, blocksize);
+}
+
+static int
+aesni_mb_get_chain_order(const struct rte_crypto_xform *xform)
+{
+	/* multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained
+	 * operations, all other options are invalid, so we must have exactly
+	 * 2 xform structs chained together */
+	if (xform->next == NULL || xform->next->next != NULL)
+		return -1;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return HASH_CIPHER;
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+				xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return CIPHER_HASH;
+
+	return -1;
+}
+
+static int
+aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	hash_one_block_t hash_oneblock_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_AUTH) {
+		MB_LOG_ERR("Crypto xform struct not of type auth");
+		return -1;
+	}
+
+	/* Set Authentication Parameters */
+	if (xform->auth.algo == RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC) {
+		sess->auth.algo = AES_XCBC;
+		(*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+				sess->auth.xcbc.k1_expanded,
+				sess->auth.xcbc.k2, sess->auth.xcbc.k3);
+		return 0;
+	}
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_SYM_HASH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			xform->auth.key.data,
+			xform->auth.key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+static int
+aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	aes_keyexp_t aes_keyexp_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_CIPHER) {
+		MB_LOG_ERR("Crypto xform struct not of type cipher");
+		return -1;
+	}
+
+	/* Select cipher direction */
+	switch (xform->cipher.op) {
+	case RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT:
+		sess->cipher.direction = ENCRYPT;
+		break;
+	case RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT:
+		sess->cipher.direction = DECRYPT;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher operation parameter");
+		return -1;
+	}
+
+	/* Select cipher mode */
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_SYM_CIPHER_AES_CBC:
+		sess->cipher.mode = CBC;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher mode parameter");
+		return -1;
+	}
+
+	/* Check key length and choose key expansion function */
+	switch (xform->cipher.key.length) {
+	case AES_128_BYTES:
+		sess->cipher.key_length_in_bytes = AES_128_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+		break;
+	case AES_192_BYTES:
+		sess->cipher.key_length_in_bytes = AES_192_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+		break;
+	case AES_256_BYTES:
+		sess->cipher.key_length_in_bytes = AES_256_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher key length");
+		return -1;
+	}
+
+	/* Expanded cipher keys */
+	(*aes_keyexp_fn)(xform->cipher.key.data,
+			sess->cipher.expanded_aes_keys.encode,
+			sess->cipher.expanded_aes_keys.decode);
+
+	return 0;
+}
+
+
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (aesni_mb_get_chain_order(xform)) {
+	case HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	case CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		auth_xform = xform->next;
+		cipher_xform = xform;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported operation chain order parameter");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
+		MB_LOG_ERR("Invalid/unsupported authentication parameters");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_cipher_parameters(mb_ops, sess, cipher_xform)) {
+		MB_LOG_ERR("Invalid/unsupported cipher parameters");
+		return -1;
+	}
+	return 0;
+}
+
+/**
+ * Process a crypto operation and complete a JOB_AES_HMAC job structure for
+ * submission to the multi buffer library for processing.
+ *
+ * @param	qp	queue pair
+ * @param	job	JOB_AES_HMAC structure to fill
+ * @param	m	mbuf to process
+ *
+ * @return
+ * - Completed JOB_AES_HMAC structure pointer on success
+ * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible
+ */
+static JOB_AES_HMAC *
+process_crypto_op(struct aesni_mb_qp *qp, JOB_AES_HMAC *job, struct rte_mbuf *m)
+{
+	struct rte_crypto_op_data *c_op = m->crypto_op;
+	struct aesni_mb_session *priv_sess = NULL;
+
+	if (c_op->type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (c_op->session->type != RTE_CRYPTODEV_AESNI_MB_PMD)
+			return NULL;
+
+		priv_sess = (struct aesni_mb_session *)c_op->session->_private;
+	} else {
+		struct rte_cryptodev_session *sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&sess))
+			return NULL;
+
+		priv_sess = (struct aesni_mb_session *)sess->_private;
+
+		if (unlikely(aesni_mb_set_session_parameters(qp->mb_ops,
+				priv_sess, c_op->xform) != 0)) {
+			rte_mempool_put(qp->sess_mp, sess);	/* avoid leak */
+			return NULL;
+		}
+
+		/* let the dequeue path return the session to the mempool */
+		c_op->session = sess;
+	}
+
+	/* Set crypto operation */
+	job->chain_order = priv_sess->chain_order;
+
+	/* Set cipher parameters */
+	job->cipher_direction = priv_sess->cipher.direction;
+	job->cipher_mode = priv_sess->cipher.mode;
+
+	job->aes_key_len_in_bytes = priv_sess->cipher.key_length_in_bytes;
+	job->aes_enc_key_expanded = priv_sess->cipher.expanded_aes_keys.encode;
+	job->aes_dec_key_expanded = priv_sess->cipher.expanded_aes_keys.decode;
+
+
+	/* Set authentication parameters */
+	job->hash_alg = priv_sess->auth.algo;
+	if (job->hash_alg == AES_XCBC) {
+		job->_k1_expanded = priv_sess->auth.xcbc.k1_expanded;
+		job->_k2 = priv_sess->auth.xcbc.k2;
+		job->_k3 = priv_sess->auth.xcbc.k3;
+	} else {
+		job->hashed_auth_key_xor_ipad = priv_sess->auth.pads.inner;
+		job->hashed_auth_key_xor_opad = priv_sess->auth.pads.outer;
+	}
+
+	/* Mutable crypto operation parameters */
+
+	/* Set digest output location */
+	if (job->cipher_direction == DECRYPT) {
+		job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m,
+				get_digest_byte_length(job->hash_alg));
+
+		if (job->auth_tag_output)
+			memset(job->auth_tag_output, 0,
+				get_digest_byte_length(job->hash_alg));
+		else
+			return NULL;
+	} else {
+		job->auth_tag_output = c_op->digest.data;
+	}
+
+	/* The multi-buffer library currently only supports returning a
+	 * truncated digest length, as specified in the relevant IPsec RFCs */
+	job->auth_tag_output_len_in_bytes =
+			get_truncated_digest_byte_length(job->hash_alg);
+
+	/* Set IV parameters */
+	job->iv = c_op->iv.data;
+	job->iv_len_in_bytes = c_op->iv.length;
+
+	/* Data parameters */
+	job->src = rte_pktmbuf_mtod(m, uint8_t *);
+	job->dst = c_op->dst ? rte_pktmbuf_mtod(c_op->dst, uint8_t *) :
+			rte_pktmbuf_mtod(m, uint8_t *) + c_op->data.to_cipher.offset;
+
+	job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset;
+	job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length;
+
+	job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset;
+	job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length;
+
+	/* Set user data to be crypto operation data struct */
+	job->user_data = m;
+
+	return job;
+}
+
+/**
+ * Process a completed job and return rte_mbuf which job processed
+ *
+ * @param job	JOB_AES_HMAC job to process
+ *
+ * @return
+ * - Returns processed mbuf which is trimmed of output digest used in
+ * verification of supplied digest in the case of a HASH_CIPHER operation
+ * - Returns NULL on invalid job
+ */
+static struct rte_mbuf *
+post_process_job(JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m;
+
+	if (job->user_data == NULL)
+		return NULL;
+
+	/* handle the retrieved job */
+	m = (struct rte_mbuf *)job->user_data;
+
+	/* check if job has been processed  */
+	if (unlikely(job->status != STS_COMPLETED)) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+
+	/* Verify digest if required */
+	if (job->chain_order == HASH_CIPHER) {
+		if (memcmp(job->auth_tag_output, m->crypto_op->digest.data,
+				job->auth_tag_output_len_in_bytes) != 0)
+			m->ol_flags |= PKT_RX_CRYPTO_DIGEST_BAD;
+		else
+			m->ol_flags &= ~PKT_RX_CRYPTO_DIGEST_BAD;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg));
+	}
+
+	return m;
+}
+
+/**
+ * Process a completed JOB_AES_HMAC job and keep processing jobs until
+ * get_completed_job return NULL
+ *
+ * @param qp		Queue Pair to process
+ * @param job		JOB_AES_HMAC job
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m = NULL;
+	unsigned processed_jobs = 0;
+
+	while (job) {
+		processed_jobs++;
+		m = post_process_job(job);
+		if (m)
+			rte_ring_enqueue(qp->processed_pkts, (void *)m);
+		else
+			qp->qp_stats.dequeue_err_count++;
+
+		job = (*qp->mb_ops->job.get_completed_job)(&qp->mb_mgr);
+	}
+
+	return processed_jobs;
+}
+
+
+static uint16_t
+aesni_mb_pmd_enqueue_burst(void *queue_pair,
+		struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+	JOB_AES_HMAC *job = NULL;
+
+	int i, processed_jobs = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+
+		if (unlikely(!(bufs[i]->ol_flags & PKT_TX_CRYPTO_OP))) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		job = (*qp->mb_ops->job.get_next)(&qp->mb_mgr);
+		if (unlikely(job == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		job = process_crypto_op(qp, job, bufs[i]);
+		if (unlikely(job == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		/* Submit Job */
+		job = (*qp->mb_ops->job.submit)(&qp->mb_mgr);
+		qp->qp_stats.enqueued_count++;
+
+		/* If submit returned a completed job then handle it before
+		 * submitting subsequent jobs */
+		if (job)
+			processed_jobs += handle_completed_jobs(qp, job);
+	}
+
+	if (processed_jobs == 0)
+		goto flush_jobs;
+
+	qp->qp_stats.dequeued_count += processed_jobs;
+	return i;
+
+flush_jobs:
+	/* if we haven't processed any jobs in submit loop, then flush jobs
+	 * queue to stop the output stalling */
+	job = (*qp->mb_ops->job.flush_job)(&qp->mb_mgr);
+	if (job)
+		qp->qp_stats.dequeued_count += handle_completed_jobs(qp, job);
+
+	return i;
+}
+
+static uint16_t
+aesni_mb_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+	unsigned i, nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+
+	for (i = 0; i < nb_dequeued; i++) {
+		/* Free session if a session-less crypto op */
+		if (bufs[i]->crypto_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+			rte_mempool_put(qp->sess_mp,
+					bufs[i]->crypto_op->session);
+			bufs[i]->crypto_op->session = NULL;
+		}
+	}
+
+	return nb_dequeued;
+}
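+
+/*
+ * Illustrative usage (hypothetical application code, assuming the
+ * rte_cryptodev_enqueue_burst()/rte_cryptodev_dequeue_burst() wrappers
+ * from patch 1 and mbufs that already carry a crypto op with
+ * PKT_TX_CRYPTO_OP set):
+ *
+ *	struct rte_mbuf *tx_bufs[32], *rx_bufs[32];
+ *	uint16_t nb_tx, nb_rx;
+ *
+ *	nb_tx = rte_cryptodev_enqueue_burst(dev_id, qp_id, tx_bufs, 32);
+ *	nb_rx = rte_cryptodev_dequeue_burst(dev_id, qp_id, rx_bufs, 32);
+ *
+ * Completed packets are drained from the queue pair's processed_pkts
+ * ring; session-less ops have their session returned to the mempool on
+ * dequeue, as above.
+ */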
+
+
+static int cryptodev_aesni_mb_uninit(const char *name);
+
+static int
+cryptodev_aesni_mb_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct aesni_mb_private *internals;
+	enum aesni_mb_vector_mode vector_mode;
+
+	/* Check CPU for support for AES instruction set */
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
+		MB_LOG_ERR("AES instructions not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_MB_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_MB_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_MB_SSE;
+	else {
+		MB_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		MB_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct aesni_mb_private), socket_id);
+	if (dev == NULL) {
+		MB_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	dev->dev_ops = rte_aesni_mb_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
+	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+
+	/* Set vector instructions mode supported */
+	internals = dev->data->dev_private;
+
+	internals->vector_mode = vector_mode;
+	internals->max_nb_qpairs = AESNI_MB_MAX_NB_QUEUE_PAIRS;
+
+	return dev->data->dev_id;
+init_error:
+	MB_LOG_ERR("driver %s: cryptodev_aesni_create failed", name);
+
+	cryptodev_aesni_mb_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_aesni_mb_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_aesni_mb_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_aesni_mb_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_aesni_mb_pmd_drv = {
+	.name = CRYPTODEV_NAME_AESNI_MB_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_aesni_mb_init,
+	.uninit = cryptodev_aesni_mb_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv);
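+
+/*
+ * Being a virtual device, the PMD can also be instantiated
+ * programmatically (sketch, assuming the DPDK 2.x rte_eal_vdev_init()
+ * signature), which ends up in cryptodev_aesni_mb_init() above:
+ *
+ *	if (rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD, NULL) < 0)
+ *		rte_exit(EXIT_FAILURE, "failed to create AESNI MB vdev\n");
+ */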
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
new file mode 100644
index 0000000..5682900
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -0,0 +1,295 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/** Configure device */
+static int
+aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return -ENOTSUP;
+}
+
+/** Stop device */
+static void
+aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+aesni_mb_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_queue_pairs = internals->max_nb_qpairs;
+	}
+}
+
+/** Release queue pair */
+static void
+aesni_mb_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+}
+
+/** Set a unique name for the queue pair based on its name, dev_id and qp_id */
+static int
+aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct aesni_mb_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"aesni_mb_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))	/* snprintf truncated the name */
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place process packets on */
+static struct rte_ring *
+aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			MB_LOG_INFO("Reusing existing ring %s for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		MB_LOG_ERR("Unable to reuse existing ring %s for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct aesni_mb_qp *qp = NULL;
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		aesni_mb_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (aesni_mb_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->mb_ops = &job_ops[internals->vector_mode];
+
+	qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	/* Initialise multi-buffer manager */
+	(*qp->mb_ops->job.init_mgr)(&qp->mb_mgr);
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+aesni_mb_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the aesni multi-buffer session structure */
+static unsigned
+aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct aesni_mb_session);
+}
+
+/** Configure an aesni multi-buffer session from a crypto xform chain */
+static void *
+aesni_mb_pmd_session_configure(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform,	void *sess)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (unlikely(sess == NULL)) {
+		MB_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+			sess, xform) != 0) {
+		MB_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+aesni_mb_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/* Currently just resetting the whole data structure; need to investigate
+	 * whether a more selective reset of the keys would be more performant */
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
+}
+
+struct rte_cryptodev_ops aesni_mb_pmd_ops = {
+		.dev_configure		= aesni_mb_pmd_config,
+		.dev_start		= aesni_mb_pmd_start,
+		.dev_stop		= aesni_mb_pmd_stop,
+		.dev_close		= aesni_mb_pmd_close,
+
+		.stats_get		= aesni_mb_pmd_stats_get,
+		.stats_reset		= aesni_mb_pmd_stats_reset,
+
+		.dev_infos_get		= aesni_mb_pmd_info_get,
+
+		.queue_pair_setup	= aesni_mb_pmd_qp_setup,
+		.queue_pair_release	= aesni_mb_pmd_qp_release,
+		.queue_pair_start	= aesni_mb_pmd_qp_start,
+		.queue_pair_stop	= aesni_mb_pmd_qp_stop,
+		.queue_pair_count	= aesni_mb_pmd_qp_count,
+
+		.session_get_size	= aesni_mb_pmd_session_get_size,
+		.session_configure	= aesni_mb_pmd_session_configure,
+		.session_clear		= aesni_mb_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
+
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
new file mode 100644
index 0000000..1c4e382
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -0,0 +1,210 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_
+#define _RTE_AESNI_MB_PMD_PRIVATE_H_
+
+#include "aesni_mb_ops.h"
+
+#define MB_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_AESNI_MB_DEBUG
+#define MB_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#define MB_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define MB_LOG_INFO(fmt, args...)
+#define MB_LOG_DBG(fmt, args...)
+#endif
+
+#define AESNI_MB_NAME_MAX_LENGTH	(64)
+#define AESNI_MB_MAX_NB_QUEUE_PAIRS	(4)
+
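+/* HMAC inner and outer pad bytes, as defined in RFC 2104 */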
+#define HMAC_IPAD_VALUE			(0x36)
+#define HMAC_OPAD_VALUE			(0x5C)
+
+static const unsigned auth_blocksize[] = {
+		[MD5]		= 64,
+		[SHA1]		= 64,
+		[SHA_224]	= 64,
+		[SHA_256]	= 64,
+		[SHA_384]	= 128,
+		[SHA_512]	= 128,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the blocksize in bytes for a specified authentication algorithm
+ *
+ * @note this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_auth_algo_blocksize(JOB_HASH_ALG algo)
+{
+	return auth_blocksize[algo];
+}
+
+static const unsigned auth_truncated_digest_byte_lengths[] = {
+		[MD5]		= 12,
+		[SHA1]		= 12,
+		[SHA_224]	= 14,
+		[SHA_256]	= 16,
+		[SHA_384]	= 24,
+		[SHA_512]	= 32,
+		[AES_XCBC]	= 12,
+};
+
+/**
+ * Get the IPsec specified truncated length in bytes of the HMAC digest for a
+ * specified authentication algorithm
+ *
+ * @note this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_truncated_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_truncated_digest_byte_lengths[algo];
+}
+
+static const unsigned auth_digest_byte_lengths[] = {
+		[MD5]		= 16,
+		[SHA1]		= 20,
+		[SHA_224]	= 28,
+		[SHA_256]	= 32,
+		[SHA_384]	= 48,
+		[SHA_512]	= 64,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the output digest size in bytes for a specified authentication algorithm
+ *
+ * @note this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_digest_byte_lengths[algo];
+}
+
+
+/** private data structure for each virtual AESNI device */
+struct aesni_mb_private {
+	enum aesni_mb_vector_mode vector_mode;
+	/**< CPU vector instruction set mode used to select the
+	 * multi-buffer ops table (e.g. SSE/AVX/AVX2) */
+	unsigned max_nb_qpairs;
+	/**< Maximum number of queue pairs supported by the device */
+};
+
+struct aesni_mb_qp {
+	uint16_t id;				/**< Queue Pair Identifier */
+	char name[AESNI_MB_NAME_MAX_LENGTH];	/**< Unique Queue Pair Name */
+	const struct aesni_mb_ops *mb_ops;	/**< Architecture dependent
+						 * function pointer table of
+						 * the multi-buffer APIs */
+	MB_MGR mb_mgr;				/**< Multi-buffer instance */
+	struct rte_ring *processed_pkts;	/**< Ring for processed packets */
+
+	struct rte_mempool *sess_mp;		/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** AES-NI Multi buffer session */
+struct aesni_mb_session {
+	JOB_CHAIN_ORDER chain_order;	/**< Chain order - cipher/hash or hash/cipher */
+
+	struct {
+		JOB_CIPHER_DIRECTION direction;	/**< Cipher direction - encrypt / decrypt */
+		JOB_CIPHER_MODE mode;		/**< Cipher mode - CBC / Counter */
+
+		uint64_t key_length_in_bytes;	/**< Cipher key length in bytes */
+
+		struct {
+			uint32_t encode[256] __rte_aligned(16);	/**< encode key */
+			uint32_t decode[256] __rte_aligned(16);	/**< decode key */
+		} expanded_aes_keys;
+		/**< Expanded AES keys - Allocating space to contain the
+		 * maximum expanded key size, which is 240 bytes for 256 bit
+		 * AES, calculated as: 16 bytes * ((number of rounds) + 1) */
+	} cipher;	/**< Cipher Parameters */
+
+	struct {
+		JOB_HASH_ALG algo;	/**< Authentication Algorithm */
+		union {
+			struct {
+				uint8_t inner[128] __rte_aligned(16);	/**< inner pad */
+				uint8_t outer[128] __rte_aligned(16);	/**< outer pad */
+			} pads;
+			/**< HMAC Authentication pads - allocating space for the
+			 * maximum pad size supported, which is 128 bytes for SHA512 */
+
+			struct {
+			    uint32_t k1_expanded[44] __rte_aligned(16);	/* k1 (expanded key). */
+			    uint8_t k2[16] __rte_aligned(16);		/* k2. */
+			    uint8_t k3[16] __rte_aligned(16);		/* k3. */
+			} xcbc;
+			/** Expanded XCBC authentication keys */
+		};
+
+		uint8_t digest[64] __rte_aligned(16);
+	} auth;	/**< Authentication Parameters */
+} __rte_cache_aligned;
+
+
+/**
+ * Set the session parameters (chain order, cipher and authentication
+ * parameters) from the supplied crypto transform chain.
+ */
+extern int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
+
+
+
+#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
+
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
new file mode 100644
index 0000000..39cc84f
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
@@ -0,0 +1,5 @@
+DPDK_2.2 {
+	global:
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index dd48bea..ec251d0 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -147,6 +147,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 
+# AESNI MULTI BUFFER is dependent on the IPSec_MB library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH 4/6] docs: add getting started guides for multi-buffer pmd and qat pmd
  2015-10-02 23:01 [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
                   ` (2 preceding siblings ...)
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 3/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-10-02 23:01 ` Declan Doherty
  2015-10-21 11:34   ` Thomas Monjalon
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-10-02 23:01 UTC (permalink / raw)
  To: dev

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 doc/guides/cryptodevs/aesni_mb.rst |  76 ++++++++++++++++++
 doc/guides/cryptodevs/index.rst    |  43 ++++++++++
 doc/guides/cryptodevs/qat.rst      | 155 +++++++++++++++++++++++++++++++++++++
 doc/guides/index.rst               |   1 +
 4 files changed, 275 insertions(+)
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst

diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
new file mode 100644
index 0000000..826b632
--- /dev/null
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -0,0 +1,76 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AES-NI Multi Buffer Crypto Poll Mode Driver
+============================================
+
+
+The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver
+support utilising Intel's multi buffer library; see the white paper
+`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
+<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-paper.html?wapkw=multi+buffer>`_.
+
+The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+AESNI MB PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To build DPDK with the AESNI_MB_PMD the user is required to download the library
+from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on
+their system before building DPDK. The environment variable
+AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where you extracted
+and built the multi buffer library, and finally CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y
+must be set in config/common_linuxapp, as in the example below.
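+
+For example, assuming the library was extracted and built under
+``/path/to/mb_lib`` (an illustrative path):
+
+.. code-block:: console
+
+    export AESNI_MULTI_BUFFER_LIB_PATH=/path/to/mb_lib
+    # set CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in config/common_linuxapp, then:
+    make install T=x86_64-native-linuxapp-gcc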
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..8949fd0
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,43 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+=====================
+
+|today|
+
+
+**Contents**
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    aesni_mb
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..c5c7b2b
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,155 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+
+
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator. The QAT PMD has
+currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+QAT PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To use the DPDK QAT PMD an SRIOV-enabled QAT kernel driver is required.
+The VF devices exposed by this driver will be used by the QAT PMD.
+Future kernel versions will provide this as standard; in the interim the
+following steps are necessary to load this driver.
+
+
+Download the latest QuickAssist Technology Driver from `01.org
+<https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches>`_.
+Consult the Getting Started Guide at the same URL for further information.
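+
+Before installing, you can check that the DH895xCC physical function is
+visible on the host, e.g. (0435 is the physical function device id):
+
+.. code-block:: console
+
+    lspci -d:435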
+
+The steps below assume you are:
+  * building on a platform with one DH895xCC device
+  * using package qatmux.l.2.3.0-34.tgz
+  * on Fedora 21 kernel 3.17.4-301.fc21.x86_64
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, e.g. by running
+  *  "./installer.sh uninstall" in the directory where it was originally installed,
+     or
+  *  "rmmod qat_dh895xcc; rmmod intel_qat"
+
+Build and install the SRIOV-enabled QAT driver:
+
+.. code-block:: console
+
+    "mkdir /QAT; cd /QAT"
+    copy qatmux.l.2.3.0-34.tgz to this location
+    "tar zxof qatmux.l.2.3.0-34.tgz"
+    "export ICP_WITHOUT_IOMMU=1"
+    "./installer.sh install QAT1.6 host"
+
+You can use "cat /proc/icp_dh895xcc_dev0/version" to confirm the driver is correctly installed.
+You can use "lspci -d:443" to confirm the bdf of the 32 VF devices available per DH895xCC device.
+
+The unbind command below assumes bdfs of 02:01.00-02:04.07, if yours are different adjust the unbind command below.
+
+Make the VF devices available to DPDK:
+
+.. code-block:: console
+
+   cd $RTE_SDK   # see http://dpdk.org/doc/quick-start to install DPDK
+   modprobe uio
+   insmod ./build/kmod/igb_uio.ko
+   for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n 0000:02:0${device}.${fn} > /sys/bus/pci/devices/0000\:02\:0${device}.${fn}/driver/unbind;done ;done
+   echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id
+
+You can use "lspci -vvd:443" to confirm that all devices are now in use by igb_uio kernel driver
+
+
+Notes:
+If using a later kernel and the build fails with an error relating to strict_strtoul not being available, patch the following file:
+
+.. code-block:: console
+
+  /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+  + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+  + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  + #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+  #else
+  #define STR_TO_64(str, base, num, endPtr)                                 \
+       do {                                                               \
+             if (str[0] == '-')                                           \
+             {                                                            \
+                  *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+             }else {                                                      \
+                  *(num) = simple_strtoull((str), &(endPtr), (base));      \
+             }                                                            \
+       } while(0)
+  + #endif
+  #endif
+  #endif
+
+
+If the build fails due to missing header files you may need to do the following:
+  *  sudo yum install zlib-devel
+  *  sudo yum install openssl-devel
+
+If the build or install fails due to mismatching kernel sources you may need to do the following:
+  *  sudo yum install kernel-headers-`uname -r`
+  *  sudo yum install kernel-src-`uname -r`
+  *  sudo yum install kernel-devel-`uname -r`
+
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH 5/6] app/test: add cryptodev unit and performance tests
  2015-10-02 23:01 [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
                   ` (3 preceding siblings ...)
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 4/6] docs: add getting started guides for multi-buffer pmd and qat pmd Declan Doherty
@ 2015-10-02 23:01 ` Declan Doherty
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 6/6] l2fwd-crypto: crypto Declan Doherty
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-02 23:01 UTC (permalink / raw)
  To: dev

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: John Griffin <john.griffin@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

Unit tests are run by using the cryptodev_qat_autotest or
cryptodev_aesni_autotest command from the test app's interactive console.

Performance tests are run by using the cryptodev_qat_perftest or
cryptodev_aesni_mb_perftest command from the test app's interactive
console.

If you wish to run the tests on a QAT device there must be one
bound to the igb_uio kernel driver.
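
For example, to launch the test app and run the AESNI MB unit tests
(build path illustrative):

  ./build/app/test
  RTE>>cryptodev_aesni_autotest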

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 app/test/Makefile                  |    3 +
 app/test/test.c                    |   92 +-
 app/test/test.h                    |   34 +-
 app/test/test_cryptodev.c          | 1993 ++++++++++++++++++++++++++++++++++++
 app/test/test_cryptodev.h          |   68 ++
 app/test/test_cryptodev_perf.c     | 1415 +++++++++++++++++++++++++
 app/test/test_link_bonding.c       |    6 +-
 app/test/test_link_bonding_mode4.c |    7 +-
 8 files changed, 3573 insertions(+), 45 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c

diff --git a/app/test/Makefile b/app/test/Makefile
index 294618f..b7de576 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -140,6 +140,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += test_link_bonding.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += test_link_bonding_mode4.c
 endif
 
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
diff --git a/app/test/test.c b/app/test/test.c
index e8992f4..e58f266 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -159,51 +159,81 @@ main(int argc, char **argv)
 int
 unit_test_suite_runner(struct unit_test_suite *suite)
 {
-	int retval, i = 0;
+	int test_success;
+	unsigned total = 0, executed = 0, skipped = 0, succeeded = 0, failed = 0;
 
-	if (suite->suite_name)
-		printf("Test Suite : %s\n", suite->suite_name);
+	if (suite->suite_name) {
+		printf(" + ------------------------------------------------------- +\n");
+		printf(" + Test Suite : %s\n", suite->suite_name);
+	}
 
 	if (suite->setup)
 		if (suite->setup() != 0)
-			return -1;
-
-	while (suite->unit_test_cases[i].testcase) {
-		/* Run test case setup */
-		if (suite->unit_test_cases[i].setup) {
-			retval = suite->unit_test_cases[i].setup();
-			if (retval != 0)
-				return retval;
-		}
+			goto suite_summary;
 
-		/* Run test case */
-		if (suite->unit_test_cases[i].testcase() == 0) {
-			printf("TestCase %2d: %s\n", i,
-					suite->unit_test_cases[i].success_msg ?
-					suite->unit_test_cases[i].success_msg :
-					"passed");
-		}
-		else {
-			printf("TestCase %2d: %s\n", i, suite->unit_test_cases[i].fail_msg ?
-					suite->unit_test_cases[i].fail_msg :
-					"failed");
-			return -1;
+	printf(" + ------------------------------------------------------- +\n");
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (!suite->unit_test_cases[total].enabled) {
+			skipped++;
+			total++;
+			continue;
+		}
+
+		executed++;
 
-		/* Run test case teardown */
-		if (suite->unit_test_cases[i].teardown) {
-			retval = suite->unit_test_cases[i].teardown();
-			if (retval != 0)
-				return retval;
+		/* run test case setup */
+		if (suite->unit_test_cases[total].setup)
+			test_success = suite->unit_test_cases[total].setup();
+		else
+			test_success = TEST_SUCCESS;
+
+		if (test_success == TEST_SUCCESS) {
+			/* run the test case */
+			test_success = suite->unit_test_cases[total].testcase();
+			if (test_success == TEST_SUCCESS)
+				succeeded++;
+			else
+				failed++;
+		} else {
+			failed++;
 		}
 
-		i++;
+		/* run the test case teardown */
+		if (suite->unit_test_cases[total].teardown)
+			suite->unit_test_cases[total].teardown();
+
+		if (test_success == TEST_SUCCESS)
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].success_msg ?
+					suite->unit_test_cases[total].success_msg :
+					"passed");
+		else
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].fail_msg ?
+					suite->unit_test_cases[total].fail_msg :
+					"failed");
+
+		total++;
 	}
 
 	/* Run test suite teardown */
 	if (suite->teardown)
-		if (suite->teardown() != 0)
-			return -1;
+		suite->teardown();
+
+suite_summary:
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary \n");
+	printf(" + Tests Total :       %2d\n", total);
+	printf(" + Tests Skipped :     %2d\n", skipped);
+	printf(" + Tests Executed :    %2d\n", executed);
+	printf(" + Tests Passed :      %2d\n", succeeded);
+	printf(" + Tests Failed :      %2d\n", failed);
+	printf(" + ------------------------------------------------------- +\n");
+
+	if (failed)
+		return -1;
 
 	return 0;
 }
diff --git a/app/test/test.h b/app/test/test.h
index 62eb51d..a2fba60 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -33,7 +33,7 @@
 
 #ifndef _TEST_H_
 #define _TEST_H_
-
+#include <stddef.h>
 #include <sys/queue.h>
 
 #define TEST_SUCCESS  (0)
@@ -64,6 +64,17 @@
 		}                                                        \
 } while (0)
 
+
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len, msg, ...) do {         \
+	if (memcmp((a), (b), (len))) {                                   \
+		printf("TestCase %s() line %d failed: "                  \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__);    \
+		TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);        \
+		return TEST_FAILED;                                      \
+	}                                                                \
+} while (0)
+
+
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
 			printf("TestCase %s() line %d failed: "              \
@@ -113,27 +124,36 @@
 
 struct unit_test_case {
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	int (*testcase)(void);
 	const char *success_msg;
 	const char *fail_msg;
+	unsigned enabled;
 };
 
-#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed"}
+#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed", 1 }
 
 #define TEST_CASE_NAMED(name, fn) { NULL, NULL, fn, name " succeeded", \
-		name " failed"}
+		name " failed", 1 }
 
 #define TEST_CASE_ST(setup, teardown, testcase)         \
 		{ setup, teardown, testcase, #testcase " succeeded",    \
-		#testcase " failed "}
+		#testcase " failed ", 1 }
+
+
+#define TEST_CASE_DISABLED(fn) { NULL, NULL, fn, #fn " succeeded", \
+	#fn " failed", 0 }
+
+#define TEST_CASE_ST_DISABLED(setup, teardown, testcase)         \
+		{ setup, teardown, testcase, #testcase " succeeded",    \
+		#testcase " failed ", 0 }
 
-#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL }
+#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL, 0 }
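+
+/*
+ * Illustrative usage of the enabled/disabled test case macros (the
+ * suite and function names below are hypothetical):
+ *
+ * static struct unit_test_suite example_suite = {
+ *	.suite_name = "example test suite",
+ *	.setup = example_setup,
+ *	.teardown = example_teardown,
+ *	.unit_test_cases = {
+ *		TEST_CASE(test_feature_a),
+ *		TEST_CASE_DISABLED(test_known_broken),
+ *		TEST_CASES_END()
+ *	}
+ * };
+ */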
 
 struct unit_test_suite {
 	const char *suite_name;
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	struct unit_test_case unit_test_cases[];
 };
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
new file mode 100644
index 0000000..7a181d0
--- /dev/null
+++ b/app/test/test_cryptodev.c
@@ -0,0 +1,1993 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+static enum rte_cryptodev_type gbl_cryptodev_type;
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *crypto_op_pool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op_data *op;
+
+	struct rte_mbuf *obuf, *ibuf;
+
+	uint8_t *digest;
+};
+
+/*
+ * Forward declarations.
+ */
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_param);
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+
+	return m;
+}
+
+#if HEX_DUMP
+static void
+hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
+{
+	rte_hexdump(f, title, rte_pktmbuf_mtod(m, const void *), m->data_len);
+}
+#endif
+
+static struct rte_mbuf *
+process_crypto_request(uint8_t dev_id, struct rte_mbuf *ibuf)
+{
+	struct rte_mbuf *obuf = NULL;
+#if HEX_DUMP
+	hexdump_mbuf_data(stdout, "Enqueued Packet", ibuf);
+#endif
+
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &ibuf, 1) != 1) {
+		printf("Error sending packet for encryption");
+		return NULL;
+	}
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &obuf, 1) == 0)
+		rte_pause();
+
+#if HEX_DUMP
+	if (obuf)
+		hexdump_mbuf_data(stdout, "Dequeued Packet", obuf);
+#endif
+
+	return obuf;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+
+static void
+free_testsuite_mbufs(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	/* free mbufs - obuf and ibuf are usually the same,
+	 * but rte copes even if we call free twice */
+	if (ut_params->obuf) {
+		rte_pktmbuf_free(ut_params->obuf);
+		ut_params->obuf = 0;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = 0;
+	}
+}
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, dev_id = 0;
+	uint16_t qp_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	if (ts_params->mbuf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_pool = rte_pktmbuf_pool_create("CRYPTO_MBUFPOOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+				rte_socket_id());
+		if (ts_params->mbuf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->crypto_op_pool = rte_crypto_op_pool_create("CRYPTO_OP_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, DEFAULT_NUM_XFORMS, rte_socket_id());
+	if (ts_params->crypto_op_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				/* local return value; avoids shadowing the
+				 * dev_id declared in the enclosing scope */
+				int retval = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(retval >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?");
+		return TEST_FAILED;
+	}
+
+	/* Create list of valid crypto devs */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_type) {
+			ts_params->valid_devs[ts_params->valid_dev_count++] = i;
+		}
+	}
+
+	if (ts_params->valid_dev_count < 1)
+		return TEST_FAILED;
+
+	/* Set up all the qps on the first of the valid devices found */
+	for (i = 0; i < 1; i++) {
+		dev_id = ts_params->valid_devs[i];
+
+		/* Since we can't free and re-allocate queue memory, always set the
+		 * queues on this device up to max size first so enough memory is
+		 * allocated for any later re-configures needed by other tests */
+
+		ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+		ts_params->conf.socket_id = SOCKET_ID_ANY;
+		ts_params->conf.session_mp.nb_objs =
+				(gbl_cryptodev_type == RTE_CRYPTODEV_QAT_PMD) ?
+						RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+						RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+		TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+				&ts_params->conf),
+				"Failed to configure cryptodev %u with %u qps",
+				dev_id, ts_params->conf.nb_queue_pairs);
+
+		ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+		for (qp_id = 0; qp_id < MAX_NUM_QPS_PER_QAT_DEVICE; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+					dev_id, qp_id, &ts_params->qp_conf,
+					rte_cryptodev_socket_id(dev_id)),
+					"Failed to setup queue pair %u on cryptodev %u",
+					qp_id, dev_id);
+		}
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_pool));
+	}
+
+
+	if (ts_params->crypto_op_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_count(ts_params->crypto_op_pool));
+	}
+
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	uint16_t qp_id;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	/* Now reconfigure queues to the size we actually want to use in this
+	 * test suite. */
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs ; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0], qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+	}
+
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	/* free crypto session structure */
+	if (ut_params->sess) {
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	/* free crypto operation structure */
+	if (ut_params->op)
+		rte_crypto_op_free(ut_params->op);
+
+	/* just in case the test didn't free mbufs */
+	free_testsuite_mbufs();
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+
+	rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats);
+
+}
+
+static int
+test_device_configure_invalid_dev_id(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint16_t dev_id, num_devs = 0;
+
+	TEST_ASSERT((num_devs = rte_cryptodev_count()) >= 1,
+			"Need at least %d devices for test", 1);
+
+	/* valid dev_id values */
+	dev_id = ts_params->valid_devs[ts_params->valid_dev_count - 1];
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"valid dev_num %u", dev_id);
+
+	/* invalid dev_id values */
+	dev_id = num_devs;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	dev_id = 0xff;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure:"
+			"invalid dev_num %u", dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_device_configure_invalid_queue_pair_ids(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+
+	/* valid - one queue pair */
+	ts_params->conf.nb_queue_pairs = 1;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* valid - max value queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - zero queue pairs */
+	ts_params->conf.nb_queue_pairs = 0;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u, invalid qps: %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value supported by field queue pairs */
+	ts_params->conf.nb_queue_pairs = UINT16_MAX;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+				"Failed test for rte_cryptodev_configure, dev_id %u, invalid qps: %u",
+				ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value + 1 queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+				"Failed test for rte_cryptodev_configure, dev_id %u, invalid qps: %u",
+				ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_queue_pair_descriptor_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_qp_conf qp_conf = {
+		.nb_descriptors = MAX_NUM_OPS_INFLIGHT
+	};
+
+	uint16_t qp_id;
+
+	ts_params->conf.session_mp.nb_objs = RTE_LIBRTE_PMD_QAT_MAX_SESSIONS;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf), "Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+
+	/* Test various ring sizes on this device. memzones can't be
+	 * freed, so they are re-used if the ring is released and re-created. */
+	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Failed test for rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = (uint32_t)(MAX_NUM_OPS_INFLIGHT / 2); /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Failed test for rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT; /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Failed test for rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max supported + 2 */
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT + 2;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max value of parameter */
+	qp_conf.nb_descriptors = UINT32_MAX-1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Failed test for rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - default supported size + 1 */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT + 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	/* test invalid queue pair id */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;	/* valid */
+
+	qp_id = DEFAULT_NUM_QPS_PER_QAT_DEVICE;		/* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	qp_id = 0xffff; /* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Plaintext data for tests ***** */
+
+const char catch_22_quote_1[] =
+		"There was only one catch and that was Catch-22, which "
+		"specified that a concern for one's safety in the face of "
+		"dangers that were real and immediate was the process of a "
+		"rational mind. Orr was crazy and could be grounded. All he "
+		"had to do was ask; and as soon as he did, he would no longer "
+		"be crazy and would have to fly more missions. Orr would be "
+		"crazy to fly more missions and sane if he didn't, but if he "
+		"was sane he had to fly them. If he flew them he was crazy "
+		"and didn't have to; but if he didn't want to he was sane and "
+		"had to. Yossarian was moved very deeply by the absolute "
+		"simplicity of this clause of Catch-22 and let out a "
+		"respectful whistle. \"That's some catch, that Catch-22\", he "
+		"observed. \"It's the best there is,\" Doc Daneeka agreed.";
+
+const char catch_22_quote[] =
+		"What a lousy earth! He wondered how many people were "
+		"destitute that same night even in his own prosperous country, "
+		"how many homes were shanties, how many husbands were drunk "
+		"and wives socked, and how many children were bullied, abused, "
+		"or abandoned. How many families hungered for food they could "
+		"not afford to buy? How many hearts were broken? How many "
+		"suicides would take place that same night, how many people "
+		"would go insane? How many cockroaches and landlords would "
+		"triumph? How many winners were losers, successes failures, "
+		"and rich men poor men? How many wise guys were stupid? How "
+		"many happy endings were unhappy endings? How many honest men "
+		"were liars, brave men cowards, loyal men traitors, how many "
+		"sainted men were corrupt, how many people in positions of "
+		"trust had sold their souls to bodyguards, how many had never "
+		"had souls? How many straight-and-narrow paths were crooked "
+		"paths? How many best families were worst families and how "
+		"many good people were bad people? When you added them all up "
+		"and then subtracted, you might be left with only the children, "
+		"and perhaps with Albert Einstein and an old violinist or "
+		"sculptor somewhere.";
+
+#define QUOTE_480_BYTES		(480)
+#define QUOTE_512_BYTES		(512)
+#define QUOTE_768_BYTES		(768)
+#define QUOTE_1024_BYTES	(1024)
+
+
+
+/* ***** SHA1 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA1	(DIGEST_BYTE_LENGTH_SHA1)
+
+static uint8_t hmac_sha1_key[] = {
+	0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+	0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+	0xDE, 0xF4, 0xDE, 0xAD };
+
+static const uint8_t catch_22_480_bytes_SHA1_digest[] = {
+	0xae, 0xd5, 0x60, 0x7e, 0xf5, 0x37, 0xe2, 0xf6,
+	0x28, 0x68, 0x71, 0x91, 0xab, 0x3d, 0x34, 0xba,
+	0x20, 0xb4, 0x57, 0x05 };
+
+static const uint8_t catch_22_512_bytes_HMAC_SHA1_digest[] = {
+	0xc5, 0x1a, 0x08, 0x57, 0x3e, 0x52, 0x59, 0x75,
+	0xa5, 0x2b, 0xb9, 0xef, 0x66, 0xfc, 0xc3, 0x3b,
+	0xf0, 0xa8, 0x46, 0xbd };
+
+/* ***** SHA224 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA224	(DIGEST_BYTE_LENGTH_SHA224)
+
+static const uint8_t catch_22_512_bytes_SHA244_digest[] = {
+	0x35, 0x86, 0x49, 0x1e, 0xdb, 0xaa, 0x9b, 0x6e,
+	0xab, 0x45, 0x19, 0xe0, 0x71, 0xae, 0xa6, 0x6b,
+	0x62, 0x46, 0x72, 0x7b, 0x3d, 0x40, 0x78, 0x25,
+	0x58, 0xde, 0xdf, 0xd0 };
+
+static const uint8_t catch_22_512_bytes_HMAC_SHA244_digest[] = {
+	0x5d, 0x4c, 0xba, 0xcc, 0x1f, 0x6e, 0x94, 0x19,
+	0xb7, 0xe4, 0x2b, 0x5f, 0x20, 0x80, 0xc7, 0xb8,
+	0x14, 0x8c, 0x6d, 0x66, 0xaa, 0xc7, 0x3d, 0x48,
+	0x68, 0x0b, 0xe4, 0x85 };
+
+
+/* ***** AES-CBC Cipher Tests ***** */
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+	0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+	0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A };
+
+static uint8_t aes_cbc_iv[] = {
+	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
+
+static const uint8_t catch_22_quote_2_1Kb_AES_CBC_ciphertext[] = {
+	0x8B, 0X4D, 0XDA, 0X1B, 0XCF, 0X04, 0XA0, 0X31, 0XB4, 0XBF, 0XBD, 0X68, 0X43, 0X20, 0X7E, 0X76,
+	0XB1, 0X96, 0X8B, 0XA2, 0X7C, 0XA2, 0X83, 0X9E, 0X39, 0X5A, 0X2F, 0X7E, 0X92, 0XB4, 0X48, 0X1A,
+	0X3F, 0X6B, 0X5D, 0XDF, 0X52, 0X85, 0X5F, 0X8E, 0X42, 0X3C, 0XFB, 0XE9, 0X1A, 0X24, 0XD6, 0X08,
+	0XDD, 0XFD, 0X16, 0XFB, 0XE9, 0X55, 0XEF, 0XF0, 0XA0, 0X8D, 0X13, 0XAB, 0X81, 0XC6, 0X90, 0X01,
+	0XB5, 0X18, 0X84, 0XB3, 0XF6, 0XE6, 0X11, 0X57, 0XD6, 0X71, 0XC6, 0X3C, 0X3F, 0X2F, 0X33, 0XEE,
+	0X24, 0X42, 0X6E, 0XAC, 0X0B, 0XCA, 0XEC, 0XF9, 0X84, 0XF8, 0X22, 0XAA, 0X60, 0XF0, 0X32, 0XA9,
+	0X75, 0X75, 0X3B, 0XCB, 0X70, 0X21, 0X0A, 0X8D, 0X0F, 0XE0, 0XC4, 0X78, 0X2B, 0XF8, 0X97, 0XE3,
+	0XE4, 0X26, 0X4B, 0X29, 0XDA, 0X88, 0XCD, 0X46, 0XEC, 0XAA, 0XF9, 0X7F, 0XF1, 0X15, 0XEA, 0XC3,
+	0X87, 0XE6, 0X31, 0XF2, 0XCF, 0XDE, 0X4D, 0X80, 0X70, 0X91, 0X7E, 0X0C, 0XF7, 0X26, 0X3A, 0X92,
+	0X4F, 0X18, 0X83, 0XC0, 0X8F, 0X59, 0X01, 0XA5, 0X88, 0XD1, 0XDB, 0X26, 0X71, 0X27, 0X16, 0XF5,
+	0XEE, 0X10, 0X82, 0XAC, 0X68, 0X26, 0X9B, 0XE2, 0X6D, 0XD8, 0X9A, 0X80, 0XDF, 0X04, 0X31, 0XD5,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA, 0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x73, 0x02, 0x42, 0xC9, 0x23, 0x18, 0x8E, 0xB4, 0x6F, 0xB4, 0xA3, 0x54, 0x6E, 0x88, 0x3B, 0x62,
+	0x7C, 0x02, 0x8D, 0x4C, 0x9F, 0xC8, 0x45, 0xF4, 0xC9, 0xDE, 0x4F, 0xEB, 0x22, 0x83, 0x1B, 0xE4,
+	0x49, 0x37, 0xE4, 0xAD, 0xE7, 0xCD, 0x21, 0x54, 0xBC, 0x1C, 0xC2, 0x04, 0x97, 0xB4, 0x10, 0x61,
+	0xF0, 0xE4, 0xEF, 0x27, 0x63, 0x3A, 0xDA, 0x91, 0x41, 0x25, 0x62, 0x1C, 0x5C, 0xB6, 0x38, 0x4A,
+	0x88, 0x71, 0x59, 0x5A, 0x8D, 0xA0, 0x09, 0xAF, 0x72, 0x94, 0xD7, 0x79, 0x5C, 0x60, 0x7C, 0x8F,
+	0x4C, 0xF5, 0xD9, 0xA1, 0x39, 0x6D, 0x81, 0x28, 0xEF, 0x13, 0x28, 0xDF, 0xF5, 0x3E, 0xF7, 0x8E,
+	0x09, 0x9C, 0x78, 0x18, 0x79, 0xB8, 0x68, 0xD7, 0xA8, 0x29, 0x62, 0xAD, 0xDE, 0xE1, 0x61, 0x76,
+	0x1B, 0x05, 0x16, 0xCD, 0xBF, 0x02, 0x8E, 0xA6, 0x43, 0x6E, 0x92, 0x55, 0x4F, 0x60, 0x9C, 0x03,
+	0xB8, 0x4F, 0xA3, 0x02, 0xAC, 0xA8, 0xA7, 0x0C, 0x1E, 0xB5, 0x6B, 0xF8, 0xC8, 0x4D, 0xDE, 0xD2,
+	0xB0, 0x29, 0x6E, 0x40, 0xE6, 0xD6, 0xC9, 0xE6, 0xB9, 0x0F, 0xB6, 0x63, 0xF5, 0xAA, 0x2B, 0x96,
+	0xA7, 0x16, 0xAC, 0x4E, 0x0A, 0x33, 0x1C, 0xA6, 0xE6, 0xBD, 0x8A, 0xCF, 0x40, 0xA9, 0xB2, 0xFA,
+	0x63, 0x27, 0xFD, 0x9B, 0xD9, 0xFC, 0xD5, 0x87, 0x8D, 0x4C, 0xB6, 0xA4, 0xCB, 0xE7, 0x74, 0x55,
+	0xF4, 0xFB, 0x41, 0x25, 0xB5, 0x4B, 0x0A, 0x1B, 0xB1, 0xD6, 0xB7, 0xD9, 0x47, 0x2A, 0xC3, 0x98,
+	0x6A, 0xC4, 0x03, 0x73, 0x1F, 0x93, 0x6E, 0x53, 0x19, 0x25, 0x64, 0x15, 0x83, 0xF9, 0x73, 0x2A,
+	0x74, 0xB4, 0x93, 0x69, 0xC4, 0x72, 0xFC, 0x26, 0xA2, 0x9F, 0x43, 0x45, 0xDD, 0xB9, 0xEF, 0x36,
+	0xC8, 0x3A, 0xCD, 0x99, 0x9B, 0x54, 0x1A, 0x36, 0xC1, 0x59, 0xF8, 0x98, 0xA8, 0xCC, 0x28, 0x0D,
+	0x73, 0x4C, 0xEE, 0x98, 0xCB, 0x7C, 0x58, 0x7E, 0x20, 0x75, 0x1E, 0xB7, 0xC9, 0xF8, 0xF2, 0x0E,
+	0x63, 0x9E, 0x05, 0x78, 0x1A, 0xB6, 0xA8, 0x7A, 0xF9, 0x98, 0x6A, 0xA6, 0x46, 0x84, 0x2E, 0xF6,
+	0x4B, 0xDC, 0x9B, 0x8F, 0x9B, 0x8F, 0xEE, 0xB4, 0xAA, 0x3F, 0xEE, 0xC0, 0x37, 0x27, 0x76, 0xC7,
+	0x95, 0xBB, 0x26, 0x74, 0x69, 0x12, 0x7F, 0xF1, 0xBB, 0xFF, 0xAE, 0xB5, 0x99, 0x6E, 0xCB, 0x0C,
+	0xB9, 0x9F, 0x8B, 0x21, 0xC6, 0x44, 0x3F, 0xB1, 0x2A, 0xA0, 0x63, 0x9E, 0x3F, 0x26, 0x21, 0x64,
+	0x62, 0xE3, 0x54, 0x71, 0x6D, 0xE7, 0x1C, 0x10, 0x72, 0x72, 0xBB, 0x93, 0x75, 0xA0, 0x79, 0x3E,
+	0x7B, 0x6F, 0xDA, 0xF7, 0x52, 0x45, 0x4C, 0x5B, 0xF6, 0x01, 0xAD, 0x2D, 0x50, 0xBE, 0x34, 0xEE,
+	0x67, 0x10, 0x73, 0x68, 0x3D, 0x00, 0x3B, 0xD5, 0xA3, 0x8E, 0xC8, 0x9D, 0x41, 0x66, 0x0D, 0xB5,
+	0x5B, 0x93, 0x50, 0x2F, 0xBD, 0x27, 0x5C, 0xAE, 0x01, 0x8B, 0xE4, 0xB1, 0x08, 0xDD, 0xD3, 0x16,
+	0x0F, 0xFE, 0xA2, 0x40, 0x64, 0x5C, 0xE5, 0xBB, 0x3A, 0x51, 0x12, 0x27, 0xAB, 0x04, 0x4E, 0x36,
+	0xD1, 0xC4, 0x4E, 0x44, 0xF6, 0xD1, 0xFE, 0x0E, 0x3A, 0xEA, 0x9B, 0x0E, 0x76, 0xB8, 0x42, 0x68,
+	0x53, 0xD4, 0xFA, 0xBD, 0xEC, 0xD8, 0x81, 0x5D, 0x6D, 0xB7, 0x5A, 0xDF, 0x33, 0x60, 0xBB, 0x91,
+	0xBC, 0x1C, 0x1D, 0x74, 0xEA, 0x21, 0xE8, 0xF9, 0x85, 0x9E, 0xB3, 0x86, 0xB2, 0x3C, 0x73, 0x2F,
+	0x70, 0xBB, 0xBB, 0x92, 0xC4, 0xDB, 0xF4, 0x0D, 0xF8, 0x26, 0x4A, 0x30, 0x05, 0x8A, 0x78, 0x94,
+	0x0D, 0x76, 0xC2, 0xB3, 0xFF, 0x27, 0x6C, 0x3E, 0x6D, 0xFD, 0xB7, 0xA8, 0x1E, 0x7E, 0x22, 0x57,
+	0x63, 0xAF, 0x17, 0x36, 0x97, 0x5E, 0xEA, 0x22, 0x1F, 0xD1, 0x1C, 0x1D, 0x69, 0xC7, 0x1D, 0x4E,
+	0x6F, 0x44, 0x5B, 0xD0, 0x8D, 0x97, 0xE4, 0x68, 0x0A, 0xB2, 0x4E, 0x9D, 0x7D, 0x3C, 0x0A, 0x28,
+	0x81, 0x69, 0x77, 0x0C, 0x97, 0x0C, 0x62, 0x6E, 0x41, 0x1D, 0xE8, 0xEC, 0xFB, 0x07, 0x00, 0x3D,
+	0xD5, 0xBB, 0xAB, 0x9F, 0xFC, 0x9F, 0x49, 0xC9, 0xD2, 0xC9, 0xE6, 0xBB, 0x22, 0xA9, 0x61, 0x3A,
+	0x6B, 0x3C, 0xDA, 0xFD, 0xC9, 0x67, 0x3A, 0xAF, 0x53, 0x9B, 0xFA, 0x13, 0x68, 0xB5, 0xB1, 0xBD,
+	0xAC, 0x91, 0xBA, 0x3F, 0x6F, 0x82, 0x81, 0xE8, 0x1B, 0x47, 0xC4, 0xE4, 0x2D, 0x23, 0x92, 0x45,
+	0x96, 0xDA, 0x96, 0x49, 0x7D, 0xF9, 0x29, 0x2C, 0x02, 0x9E, 0xD2, 0x43, 0x45, 0x18, 0xA2, 0x13,
+	0x00, 0x93, 0x77, 0x38, 0xB8, 0x93, 0xAB, 0x1A, 0xB9, 0x64, 0xD5, 0x15, 0x3C, 0x04, 0x28, 0x6D,
+	0x66, 0x58, 0xF2, 0x20, 0xB1, 0xD7, 0x10, 0xB5, 0x14, 0xB5, 0xBF, 0x9E, 0xA8, 0x75, 0x47, 0x3C,
+	0x8C, 0xAA, 0xC9, 0x0F, 0x81, 0x79, 0x62, 0xCB, 0x64, 0x95, 0x32, 0x63, 0x16, 0xCD, 0x5D, 0x01,
+	0xF7, 0x3C, 0x1F, 0x69, 0xD8, 0x0F, 0xC6, 0x70, 0x19, 0x35, 0x76, 0xEB, 0xE4, 0xFE, 0xEA, 0xF3,
+	0x81, 0x78, 0xCD, 0xCD, 0xBA, 0x91, 0xE2, 0xDF, 0x73, 0x39, 0x5F, 0x1E, 0x7D, 0x2B, 0xEE, 0x64,
+	0x33, 0x9B, 0xB1, 0x9D, 0x1F, 0x73, 0x3D, 0xDC, 0xA9, 0x35, 0xB6, 0xC6, 0xAF, 0xE2, 0x97, 0x29,
+	0x38, 0xEE, 0x38, 0x26, 0x52, 0x98, 0x17, 0x76, 0xA3, 0x4B, 0xAF, 0x7D, 0xD0, 0x2D, 0x43, 0x52,
+	0xAD, 0x58, 0x4F, 0x0A, 0x6B, 0x4F, 0x10, 0xB9, 0x38, 0xAB, 0x3A, 0xD5, 0x77, 0xAE, 0x83, 0xF3,
+	0x8C, 0x48, 0x1A, 0xC6, 0x61, 0xCF, 0xE5, 0xA6, 0x2B, 0x5B, 0x60, 0x94, 0xFB, 0x04, 0x34, 0xFC,
+	0x0F, 0x67, 0x1F, 0xFE, 0x42, 0x0E, 0xE1, 0x58, 0x2B, 0x04, 0x11, 0xEB, 0x83, 0x74, 0x06, 0xC5,
+	0xEF, 0x83, 0xA5, 0x40, 0xCB, 0x69, 0x18, 0x7E, 0xDB, 0x71, 0xBF, 0xC2, 0xFA, 0xEF, 0xF5, 0xB9,
+	0x03, 0xF1, 0xF8, 0x78, 0x7F, 0x71, 0xE3, 0xBB, 0xDE, 0xF3, 0xC3, 0x03, 0x29, 0x9A, 0xBF, 0xD6,
+	0xCD, 0xA7, 0x35, 0xD5, 0xE8, 0x88, 0xAE, 0x89, 0xCE, 0x4B, 0x93, 0x4E, 0x04, 0x02, 0x41, 0x86,
+	0x7F, 0x4A, 0x96, 0x23, 0x19, 0x6D, 0xD1, 0x2C, 0x9C, 0x7A, 0x2C, 0x3B, 0xD6, 0x98, 0x7B, 0x4C
+};
+
+
+/* ***** AES-CBC / HMAC-SHA1 Hash Tests ***** */
+
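+/*
+ * All of the chained cipher/hash tests below share the same mbuf layout:
+ * a 16 byte IV is prepended to the 512 byte test quote and the digest is
+ * appended after it, so both the cipher and hash offsets start at the IV
+ * length. The reference ciphertext that follows was produced with
+ * aes_cbc_key/aes_cbc_iv over the 512 byte catch-22 quote.
+ */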
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
+	0x8B, 0x4D, 0xDA, 0x1B, 0xCF, 0x04, 0xA0, 0x31, 0xB4, 0xBF, 0xBD, 0x68, 0x43, 0x20, 0x7E, 0x76,
+	0xB1, 0x96, 0x8B, 0xA2, 0x7C, 0xA2, 0x83, 0x9E, 0x39, 0x5A, 0x2F, 0x7E, 0x92, 0xB4, 0x48, 0x1A,
+	0x3F, 0x6B, 0x5D, 0xDF, 0x52, 0x85, 0x5F, 0x8E, 0x42, 0x3C, 0xFB, 0xE9, 0x1A, 0x24, 0xD6, 0x08,
+	0xDD, 0xFD, 0x16, 0xFB, 0xE9, 0x55, 0xEF, 0xF0, 0xA0, 0x8D, 0x13, 0xAB, 0x81, 0xC6, 0x90, 0x01,
+	0xB5, 0x18, 0x84, 0xB3, 0xF6, 0xE6, 0x11, 0x57, 0xD6, 0x71, 0xC6, 0x3C, 0x3F, 0x2F, 0x33, 0xEE,
+	0x24, 0x42, 0x6E, 0xAC, 0x0B, 0xCA, 0xEC, 0xF9, 0x84, 0xF8, 0x22, 0xAA, 0x60, 0xF0, 0x32, 0xA9,
+	0x75, 0x75, 0x3B, 0xCB, 0x70, 0x21, 0x0A, 0x8D, 0x0F, 0xE0, 0xC4, 0x78, 0x2B, 0xF8, 0x97, 0xE3,
+	0xE4, 0x26, 0x4B, 0x29, 0xDA, 0x88, 0xCD, 0x46, 0xEC, 0xAA, 0xF9, 0x7F, 0xF1, 0x15, 0xEA, 0xC3,
+	0x87, 0xE6, 0x31, 0xF2, 0xCF, 0xDE, 0x4D, 0x80, 0x70, 0x91, 0x7E, 0x0C, 0xF7, 0x26, 0x3A, 0x92,
+	0x4F, 0x18, 0x83, 0xC0, 0x8F, 0x59, 0x01, 0xA5, 0x88, 0xD1, 0xDB, 0x26, 0x71, 0x27, 0x16, 0xF5,
+	0xEE, 0x10, 0x82, 0xAC, 0x68, 0x26, 0x9B, 0xE2, 0x6D, 0xD8, 0x9A, 0x80, 0xDF, 0x04, 0x31, 0xD5,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA, 0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x73, 0x02, 0x42, 0xC9, 0x23, 0x18, 0x8E, 0xB4, 0x6F, 0xB4, 0xA3, 0x54, 0x6E, 0x88, 0x3B, 0x62,
+	0x7C, 0x02, 0x8D, 0x4C, 0x9F, 0xC8, 0x45, 0xF4, 0xC9, 0xDE, 0x4F, 0xEB, 0x22, 0x83, 0x1B, 0xE4,
+	0x49, 0x37, 0xE4, 0xAD, 0xE7, 0xCD, 0x21, 0x54, 0xBC, 0x1C, 0xC2, 0x04, 0x97, 0xB4, 0x10, 0x61,
+	0xF0, 0xE4, 0xEF, 0x27, 0x63, 0x3A, 0xDA, 0x91, 0x41, 0x25, 0x62, 0x1C, 0x5C, 0xB6, 0x38, 0x4A,
+	0x88, 0x71, 0x59, 0x5A, 0x8D, 0xA0, 0x09, 0xAF, 0x72, 0x94, 0xD7, 0x79, 0x5C, 0x60, 0x7C, 0x8F,
+	0x4C, 0xF5, 0xD9, 0xA1, 0x39, 0x6D, 0x81, 0x28, 0xEF, 0x13, 0x28, 0xDF, 0xF5, 0x3E, 0xF7, 0x8E,
+	0x09, 0x9C, 0x78, 0x18, 0x79, 0xB8, 0x68, 0xD7, 0xA8, 0x29, 0x62, 0xAD, 0xDE, 0xE1, 0x61, 0x76,
+	0x1B, 0x05, 0x16, 0xCD, 0xBF, 0x02, 0x8E, 0xA6, 0x43, 0x6E, 0x92, 0x55, 0x4F, 0x60, 0x9C, 0x03,
+	0xB8, 0x4F, 0xA3, 0x02, 0xAC, 0xA8, 0xA7, 0x0C, 0x1E, 0xB5, 0x6B, 0xF8, 0xC8, 0x4D, 0xDE, 0xD2,
+	0xB0, 0x29, 0x6E, 0x40, 0xE6, 0xD6, 0xC9, 0xE6, 0xB9, 0x0F, 0xB6, 0x63, 0xF5, 0xAA, 0x2B, 0x96,
+	0xA7, 0x16, 0xAC, 0x4E, 0x0A, 0x33, 0x1C, 0xA6, 0xE6, 0xBD, 0x8A, 0xCF, 0x40, 0xA9, 0xB2, 0xFA,
+	0x63, 0x27, 0xFD, 0x9B, 0xD9, 0xFC, 0xD5, 0x87, 0x8D, 0x4C, 0xB6, 0xA4, 0xCB, 0xE7, 0x74, 0x55,
+	0xF4, 0xFB, 0x41, 0x25, 0xB5, 0x4B, 0x0A, 0x1B, 0xB1, 0xD6, 0xB7, 0xD9, 0x47, 0x2A, 0xC3, 0x98,
+	0x6A, 0xC4, 0x03, 0x73, 0x1F, 0x93, 0x6E, 0x53, 0x19, 0x25, 0x64, 0x15, 0x83, 0xF9, 0x73, 0x2A,
+	0x74, 0xB4, 0x93, 0x69, 0xC4, 0x72, 0xFC, 0x26, 0xA2, 0x9F, 0x43, 0x45, 0xDD, 0xB9, 0xEF, 0x36,
+	0xC8, 0x3A, 0xCD, 0x99, 0x9B, 0x54, 0x1A, 0x36, 0xC1, 0x59, 0xF8, 0x98, 0xA8, 0xCC, 0x28, 0x0D,
+	0x73, 0x4C, 0xEE, 0x98, 0xCB, 0x7C, 0x58, 0x7E, 0x20, 0x75, 0x1E, 0xB7, 0xC9, 0xF8, 0xF2, 0x0E,
+	0x63, 0x9E, 0x05, 0x78, 0x1A, 0xB6, 0xA8, 0x7A, 0xF9, 0x98, 0x6A, 0xA6, 0x46, 0x84, 0x2E, 0xF6,
+	0x4B, 0xDC, 0x9B, 0x8F, 0x9B, 0x8F, 0xEE, 0xB4, 0xAA, 0x3F, 0xEE, 0xC0, 0x37, 0x27, 0x76, 0xC7,
+	0x95, 0xBB, 0x26, 0x74, 0x69, 0x12, 0x7F, 0xF1, 0xBB, 0xFF, 0xAE, 0xB5, 0x99, 0x6E, 0xCB, 0x0C
+};
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60,
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0x18, 0x8c, 0x1d, 0x32 };
+
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(ut_params->ibuf,
+			QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
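+	/*
+	 * The multi-buffer PMD emits a truncated digest (12 bytes for SHA1,
+	 * matching the common HMAC-SHA1-96 truncation), so only the
+	 * truncated length is compared for that device.
+	 */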
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Generate Crypto op data structure */
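+	/*
+	 * The second argument gives the number of xforms to provision in the
+	 * op's chain: two here, one cipher and one auth.
+	 */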
+	ut_params->op = rte_crypto_op_alloc_sessionless(ts_params->crypto_op_pool, 2);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	ut_params->op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
+
+	/* cipher parameters */
+	ut_params->op->xform->cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->op->xform->cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->op->xform->cipher.key.data = aes_cbc_key;
+	ut_params->op->xform->cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* hash parameters */
+	ut_params->op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+
+	ut_params->op->xform->next->auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->op->xform->next->auth.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+	ut_params->op->xform->next->auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->op->xform->next->auth.key.data = hmac_sha1_key;
+	ut_params->op->xform->next->auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(ut_params->ibuf,
+			QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			DIGEST_BYTE_LENGTH_SHA1);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
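+	/*
+	 * For decryption the auth (verify) xform heads the chain and points
+	 * at the cipher xform, i.e. verify-then-decrypt, mirroring the
+	 * encrypt-then-MAC order used on the encrypt path.
+	 */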
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	ut_params->op = ut_params->obuf->crypto_op;
+	TEST_ASSERT(!(ut_params->obuf->ol_flags & PKT_RX_CRYPTO_DIGEST_BAD),
+			"Digest verification failed");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+
+/* ***** AES-CBC / HMAC-SHA256 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+static uint8_t hmac_sha256_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest[] = {
+	0xc8, 0x57, 0x57, 0x31, 0x03, 0xe0, 0x03, 0x55,
+	0x07, 0xc8, 0x9e, 0x7f, 0x48, 0x9a, 0x61, 0x9a,
+	0x68, 0xee, 0x03, 0x0e, 0x71, 0x75, 0xc7, 0xf4,
+	0x2e, 0x45, 0x26, 0x32, 0x7c, 0x12, 0x15, 0x15 };
+
+static int
+test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA256 :
+					DIGEST_BYTE_LENGTH_SHA256,
+			"Generated digest data not as expected");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+							CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate obuf */
+	ut_params->op = ut_params->obuf->crypto_op;
+
+	TEST_ASSERT(!(ut_params->obuf->ol_flags & PKT_RX_CRYPTO_DIGEST_BAD),
+			"Digest verification failed");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-SHA512 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA512  (DIGEST_BYTE_LENGTH_SHA512)
+
+static uint8_t hmac_sha512_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x65, 0x1C, 0x42, 0x50, 0x76,
+	0x9a, 0xaf, 0x88, 0x1b, 0xb6, 0x8f, 0xf8, 0x60,
+	0xa2, 0x5a, 0x7f, 0x3f, 0xf4, 0x72, 0x70, 0xf1,
+	0xF5, 0x35, 0x4C, 0x3B, 0xDD, 0x90, 0x65, 0xB0,
+	0x47, 0x3a, 0x75, 0x61, 0x5C, 0xa2, 0x10, 0x76,
+	0x9a, 0xaf, 0x77, 0x5b, 0xb6, 0x7f, 0xf7, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest[] = {
+	0x5D, 0x54, 0x66, 0xC1, 0x6E, 0xBC, 0x04, 0xB8,
+	0x46, 0xB8, 0x08, 0x6E, 0xE0, 0xF0, 0x43, 0x48,
+	0x37, 0x96, 0x9C, 0xC6, 0x9C, 0xC2, 0x1E, 0xE8,
+	0xF2, 0x0C, 0x0B, 0xEF, 0x86, 0xA2, 0xE3, 0x70,
+	0x95, 0xC8, 0xB3, 0x06, 0x47, 0xA9, 0x90, 0xE8,
+	0xA0, 0xC6, 0x72, 0x69, 0x05, 0xC0, 0x0D, 0x0E,
+	0x21, 0x96, 0x65, 0x93, 0x74, 0x43, 0x2A, 0x1D,
+	0x2E, 0xBF, 0xC2, 0xC2, 0xEE, 0xCC, 0x2F, 0x0A };
+
+static int
+test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 :
+					DIGEST_BYTE_LENGTH_SHA512,
+			"Generated digest data not as expected");
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+
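+/*
+ * The SHA512 decrypt test is split into a session-parameter setup step and
+ * a perform step so that test_multi_session() below can reuse both across
+ * many concurrently created sessions.
+ */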
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT(test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params)
+			== TEST_SUCCESS, "Failed to create session params");
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
+			ut_params, ts_params);
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params)
+{
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params)
+{
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate obuf */
+	ut_params->op = ut_params->obuf->crypto_op;
+
+	TEST_ASSERT(!(ut_params->obuf->ol_flags & PKT_RX_CRYPTO_DIGEST_BAD),
+			"Digest verification failed");
+
+	/* Free crypto operation structure and buffers. */
+	if (ut_params->op) {
+		rte_crypto_op_free(ut_params->op);
+		ut_params->op = NULL;
+	}
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-AES_XCBC Chain Tests ***** */
+
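+/*
+ * AES-XCBC-MAC-96 (RFC 3566) produces a 96 bit digest, hence the 12 byte
+ * reference digest below, and takes a single 16 byte AES key.
+ */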
+static uint8_t aes_cbc_hmac_aes_xcbc_key[] = {
+	0x87, 0x61, 0x54, 0x53, 0xC4, 0x6D, 0xDD, 0x51,
+	0xE1, 0x9F, 0x86, 0x64, 0x39, 0x0A, 0xE6, 0x59
+	};
+
+static const uint8_t catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest[] = {
+	0xE0, 0xAC, 0x9A, 0xC4, 0x22, 0x64, 0x35, 0x89,
+	0x77, 0x1D, 0x8B, 0x75
+	};
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC,
+			"Generated digest data not as expected");
+
+	/* Free crypto operation structure and buffers. */
+	if (ut_params->op) {
+		rte_crypto_op_free(ut_params->op);
+		ut_params->op = NULL;
+	}
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+		QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	ut_params->op = ut_params->obuf->crypto_op;
+	TEST_ASSERT(!(ut_params->obuf->ol_flags & PKT_RX_CRYPTO_DIGEST_BAD),
+			"Digest verification failed");
+
+	/* Free crypto operation structure and buffers. */
+	if (ut_params->op) {
+		rte_crypto_op_free(ut_params->op);
+		ut_params->op = NULL;
+	}
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+
+/* ***** AES-GCM Tests ***** */
+
+#define CIPHER_KEY_LENGTH_AES_GCM	16
+#define AAD_LENGTH_AES_GCM		20
+#define AAD_LENGTH_AES_GCM_ROUNDUP	32 /* rounded up to a multiple of the GCM block size (16) */
+#define CIPHER_IV_LENGTH_AES_GCM	16
+#define AUTH_TAG_LENGTH_AES_GCM		16
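+
+/*
+ * Buffer layout used by the GCM test below, after the IV and AAD have been
+ * prepended:
+ * [ AAD (rounded up to 32 bytes) | Y0 counter block (16) | data (512) | tag (16) ]
+ * so the cipher/hash offsets start at IV length + rounded-up AAD length.
+ */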
+
+static uint8_t gcm_cipher_key[] = {
+	0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+	0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+};
+
+static uint8_t gcm_cipher_text[] = {
+	0xCC, 0xDA, 0x4D, 0x93, 0xF9, 0x92, 0x52, 0xAD, 0x81, 0x5E, 0x5B, 0x0B, 0x0B, 0x40, 0x93, 0x74,
+	0x11, 0x65, 0xA9, 0x5C, 0x71, 0x53, 0x73, 0x4D, 0x74, 0xE3, 0x2A, 0x7B, 0xD1, 0xF8, 0x4F, 0x7C,
+	0x55, 0x86, 0x6F, 0x07, 0xAC, 0x6F, 0xF4, 0x36, 0x72, 0x30, 0x01, 0x11, 0x95, 0x4E, 0x7A, 0x00,
+	0xDD, 0xAC, 0x94, 0xA9, 0xE0, 0x63, 0x2F, 0xB3, 0xF3, 0x52, 0xEF, 0xDD, 0x29, 0xF5, 0xAB, 0xA4,
+	0xB5, 0xC9, 0x13, 0xE4, 0xBD, 0x93, 0xDC, 0x94, 0x98, 0xC7, 0x96, 0x2E, 0xA2, 0x54, 0xAB, 0x04,
+	0x60, 0xBD, 0x9A, 0xD1, 0xAA, 0xA7, 0x07, 0x57, 0x4A, 0xA4, 0x78, 0x73, 0xB4, 0x1D, 0xC4, 0x11,
+	0x82, 0x71, 0x2F, 0xCF, 0xC0, 0x5D, 0x2F, 0x87, 0x23, 0x2C, 0x64, 0xAB, 0x6A, 0x59, 0x05, 0x03,
+	0xB8, 0x91, 0xD4, 0xDD, 0x0F, 0x11, 0xA3, 0x1D, 0xE5, 0xE4, 0xD3, 0x16, 0x6E, 0x75, 0xFA, 0x3D,
+	0x8D, 0xF6, 0x8A, 0xD4, 0x9A, 0x5E, 0x84, 0x6C, 0x4B, 0x18, 0x58, 0xDF, 0x2F, 0x05, 0x56, 0xEC,
+	0xE0, 0xDE, 0xB3, 0xF1, 0x02, 0x28, 0xE9, 0x54, 0xEB, 0xF3, 0x90, 0xA1, 0x48, 0x5B, 0xC7, 0x45,
+	0x16, 0x1E, 0x66, 0x61, 0xC8, 0xDB, 0x81, 0x91, 0x65, 0x35, 0xEB, 0xEF, 0x5F, 0x22, 0x47, 0xD1,
+	0xE5, 0x9C, 0x0C, 0xA8, 0xD3, 0xAC, 0x44, 0x09, 0x00, 0x83, 0xDD, 0xE9, 0xFF, 0x9B, 0x7B, 0x00,
+	0xC5, 0xDD, 0x60, 0xDE, 0xF2, 0xFE, 0xDE, 0x8E, 0xC2, 0xA5, 0xAA, 0xD8, 0x8F, 0x3D, 0xC6, 0xDB,
+	0x42, 0xF6, 0xCC, 0x46, 0x14, 0x53, 0xC4, 0x72, 0x39, 0x87, 0xC5, 0x5A, 0xF5, 0xF5, 0xC9, 0x53,
+	0xD5, 0xF9, 0x52, 0xD1, 0xE8, 0x74, 0xAF, 0xD5, 0x92, 0xE5, 0x51, 0x13, 0x2A, 0x31, 0x0A, 0xF2,
+	0xD3, 0xF1, 0x54, 0xBD, 0xF8, 0x09, 0x93, 0x74, 0xE8, 0xB1, 0x58, 0x45, 0x7D, 0x05, 0xC9, 0x0D,
+	0x4F, 0x18, 0x21, 0xEF, 0x91, 0xF4, 0xCC, 0xC1, 0x2D, 0xEC, 0x7F, 0xAC, 0x70, 0xF6, 0x4B, 0x8F,
+	0xE4, 0x10, 0x1F, 0x3F, 0x4C, 0x6A, 0xFE, 0x69, 0x5C, 0x85, 0xDC, 0x9F, 0x1D, 0x97, 0x9C, 0xA5,
+	0xEB, 0x61, 0xBB, 0x4B, 0xA7, 0x60, 0xE4, 0x61, 0x43, 0x52, 0xB3, 0x30, 0xAB, 0x19, 0xB2, 0x82,
+	0xE7, 0x79, 0x53, 0xD2, 0x8E, 0xA5, 0x5E, 0x56, 0xB1, 0xC7, 0x10, 0xD3, 0x8C, 0xD2, 0x77, 0xDD,
+	0xAD, 0x58, 0x32, 0xA5, 0x22, 0x83, 0x3B, 0x13, 0x03, 0x26, 0x26, 0xBE, 0x3D, 0xA0, 0xB1, 0xB3,
+	0xA8, 0xD3, 0x5C, 0xCC, 0x52, 0x9A, 0xCA, 0x5D, 0x02, 0xE1, 0x80, 0x5C, 0xD1, 0xB4, 0x29, 0xD3,
+	0xA9, 0xE3, 0x2E, 0xC4, 0x3D, 0xD6, 0x39, 0xAB, 0xED, 0xEA, 0x2D, 0xEA, 0x2C, 0xE4, 0x7B, 0xED,
+	0xB9, 0x95, 0x67, 0xCF, 0x81, 0xDC, 0x38, 0x8E, 0x13, 0xB5, 0xD5, 0x94, 0x4D, 0x31, 0x97, 0x0C,
+	0x0F, 0x1B, 0x31, 0x88, 0x56, 0x77, 0x70, 0x68, 0x75, 0x42, 0x92, 0x32, 0x9C, 0x07, 0xBC, 0x3E,
+	0x73, 0xAB, 0xC9, 0x29, 0xC9, 0xF1, 0x8A, 0x98, 0x88, 0x99, 0x54, 0x44, 0xCA, 0xFC, 0x96, 0xF8,
+	0xD9, 0xB8, 0x0B, 0x74, 0xC5, 0xBF, 0x75, 0x3E, 0x35, 0x09, 0x4B, 0x16, 0xDA, 0x20, 0x8D, 0x1F,
+	0x2F, 0xFF, 0xBF, 0x7F, 0xAB, 0xBA, 0x4C, 0x95, 0x48, 0x8B, 0x94, 0xBA, 0x09, 0x1E, 0x84, 0x1C,
+	0x2D, 0x7F, 0x2F, 0x2A, 0x15, 0x1C, 0x63, 0xBF, 0x57, 0xED, 0x0B, 0x76, 0xDA, 0xDE, 0x60, 0x4C,
+	0x62, 0x49, 0x5D, 0xDE, 0x31, 0xCA, 0x87, 0xA7, 0xF7, 0x8B, 0x13, 0xE6, 0xB3, 0x5B, 0x8B, 0xC2,
+	0xE2, 0x54, 0x05, 0x12, 0x92, 0x03, 0xD2, 0xAF, 0x3E, 0xB5, 0xE3, 0x9E, 0x26, 0x27, 0x8F, 0x35,
+	0x62, 0xDC, 0xDC, 0xE2, 0x69, 0x61, 0x01, 0x0B, 0x30, 0x76, 0x50, 0x71, 0x90, 0x79, 0x8A, 0x46
+};
+
+static uint8_t gcm_aad[] = {
+	0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
+	0xfe, 0xed, 0xfa, 0xce, 0xde, 0xad, 0xbe, 0xef,
+	0xab, 0xad, 0xda, 0xcc
+};
+
+static uint8_t gcm_iv[] = {
+	0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad,
+	0xde, 0xca, 0xf8, 0x88
+};
+
+static uint8_t gcm_auth_tag[] = {
+	0xDB, 0x54, 0xA0, 0x7E, 0x65, 0xF2, 0xEF, 0x84,
+	0xF9, 0x16, 0xC0, 0xF9, 0xDE, 0x7F, 0xDE, 0xFE
+};
+
+static int
+test_AES_GCM(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			AUTH_TAG_LENGTH_AES_GCM);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_GCM;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = gcm_cipher_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_GCM;
+
+	/* Setup GCM authentication parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
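+	/*
+	 * GCM derives its hash subkey from the cipher key, so no separate
+	 * authentication key is supplied here.
+	 */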
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_AES_GCM;
+	ut_params->auth_xform.auth.digest_length = AUTH_TAG_LENGTH_AES_GCM;
+	ut_params->auth_xform.auth.add_auth_data_length = AAD_LENGTH_AES_GCM;
+	ut_params->auth_xform.auth.key.length = 0;
+	ut_params->auth_xform.auth.key.data = NULL;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->crypto_op_pool);
+	TEST_ASSERT_NOT_NULL(ut_params->op, "Failed to allocate crypto_op");
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	/* iv */
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_GCM);
+	TEST_ASSERT_NOT_NULL(ut_params->op->iv.data, "no room to prepend iv");
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_GCM;
+
+	/* aad */
+	ut_params->op->additional_auth.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, AAD_LENGTH_AES_GCM_ROUNDUP);
+	TEST_ASSERT_NOT_NULL(ut_params->op->additional_auth.data,
+			"no room to prepend aad");
+	ut_params->op->additional_auth.phys_addr =
+			rte_pktmbuf_mtophys(ut_params->ibuf);
+	rte_memcpy(ut_params->op->additional_auth.data, gcm_aad,
+			AAD_LENGTH_AES_GCM);
+
+
+	/*
+	 * Calculate Y0, the initial counter block: per NIST SP 800-38D, for
+	 * a 96 bit IV this is IV || 0^31 || 1, i.e. the 12 IV bytes followed
+	 * by three zero bytes and a final 0x01.
+	 * TODO: add support for ivLen = 16.
+	 */
+	if (sizeof(gcm_iv) == 12) {
+		memset(ut_params->op->iv.data, 0, CIPHER_IV_LENGTH_AES_GCM);
+		rte_memcpy(ut_params->op->iv.data, gcm_iv, sizeof(gcm_iv));
+		ut_params->op->iv.data[15] = 1;
+	}
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_GCM +
+			AAD_LENGTH_AES_GCM_ROUNDUP;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_GCM +
+			AAD_LENGTH_AES_GCM_ROUNDUP;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+	rte_pktmbuf_attach_crypto_op(ut_params->ibuf, ut_params->op);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+		ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_GCM + AAD_LENGTH_AES_GCM_ROUNDUP,
+			gcm_cipher_text,
+			QUOTE_512_BYTES,
+			"GCM Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_GCM + AAD_LENGTH_AES_GCM_ROUNDUP + QUOTE_512_BYTES,
+			gcm_auth_tag,
+			AUTH_TAG_LENGTH_AES_GCM,
+			"GCM Generated auth tag not as expected");
+
+	/* Free crypto operation structure and buffers. */
+	if (ut_params->op) {
+		rte_crypto_op_free(ut_params->op);
+		ut_params->op = NULL;
+	}
+
+	free_testsuite_mbufs();
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_stats(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_stats stats;
+	struct rte_cryptodev *dev;
+	cryptodev_stats_get_t temp_pfn;
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0] + 600,
+			&stats) == -ENODEV),
+		"rte_cryptodev_stats_get invalid dev failed");
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			NULL) != 0),
+		"rte_cryptodev_stats_get invalid param failed");
+	dev = &rte_crypto_devices[ts_params->valid_devs[0]];
+	temp_pfn = dev->dev_ops->stats_get;
+	dev->dev_ops->stats_get = (cryptodev_stats_get_t)0;
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats) == -ENOTSUP),
+		"rte_cryptodev_stats_get invalid Param failed");
+	dev->dev_ops->stats_get = temp_pfn;
+
+	/* Test expected values */
+	ut_setup();
+	test_AES_CBC_HMAC_SHA1_encrypt_digest();
+	ut_teardown();
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+	TEST_ASSERT((stats.enqueue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueue error stat");
+	TEST_ASSERT((stats.dequeue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeue error stat");
+
+	/* stats_reset on an invalid device id should be ignored and must not
+	 * reset the valid device's stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0] + 300);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	/* check that a valid reset clears stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_multi_session(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	unsigned nb_sessions = (gbl_cryptodev_type == RTE_CRYPTODEV_QAT_PMD) ?
+			RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+			RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+	struct rte_cryptodev_session *sessions[nb_sessions + 1];
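+	/* one extra slot for the create attempt that is expected to fail */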
+	uint16_t i;
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create multiple crypto sessions*/
+	for (i = 0; i < nb_sessions; i++) {
+		sessions[i] = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+		TEST_ASSERT_NOT_NULL(sessions[i],
+				"Session creation failed at session number %u", i);
+
+		/* Attempt to send a request on each session */
+		TEST_ASSERT_SUCCESS(test_AES_CBC_HMAC_SHA512_decrypt_perform(
+				sessions[i], ut_params, ts_params),
+				"Failed to perform decrypt on request number %u.", i);
+	}
+
+	/* Next session create should fail */
+	sessions[i] = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NULL(sessions[i], "Session creation succeeded unexpectedly!");
+
+	for (i = 0; i < nb_sessions; i++)
+		rte_cryptodev_session_free(ts_params->valid_devs[0], sessions[i]);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_device_configure_invalid_queue_pair_ids),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_queue_pair_descriptor_setup),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_GCM),
+
+		TEST_CASE(test_stats),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
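+/*
+ * The AESNI MB suite mirrors the generic suite but omits the device
+ * configuration, GCM and stats cases (presumably not yet supported by the
+ * software PMD) and adds the session-less test, which only this PMD
+ * currently exercises.
+ */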
+static struct unit_test_suite cryptodev_aesni_testsuite  = {
+	.suite_name = "Crypto Device AESNI Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+
+static int
+test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_QAT_PMD;
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_qat_cmd = {
+	.command = "cryptodev_qat_autotest",
+	.callback = test_cryptodev_qat,
+};
+
+static int
+test_cryptodev_aesni(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_aesni_testsuite);
+}
+
+static struct test_command cryptodev_aesni_cmd = {
+	.command = "cryptodev_aesni_autotest",
+	.callback = test_cryptodev_aesni,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
+REGISTER_TEST_COMMAND(cryptodev_aesni_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
new file mode 100644
index 0000000..034393e
--- /dev/null
+++ b/app/test/test_cryptodev.h
@@ -0,0 +1,68 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef TEST_CRYPTODEV_H_
+#define TEST_CRYPTODEV_H_
+
+#define HEX_DUMP 0
+
+#define FALSE                           0
+#define TRUE                            1
+
+#define MAX_NUM_OPS_INFLIGHT            (4096)
+#define MIN_NUM_OPS_INFLIGHT            (128)
+#define DEFAULT_NUM_OPS_INFLIGHT        (128)
+
+#define MAX_NUM_QPS_PER_QAT_DEVICE      (2)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE  (2)
+#define DEFAULT_BURST_SIZE              (64)
+#define DEFAULT_NUM_XFORMS              (2)
+#define NUM_MBUFS                       (8191)
+#define MBUF_CACHE_SIZE                 (250)
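+/* Mempool element size: 2KB of data room plus trailing space for the largest
+ * (SHA512) digest appended by the tests, the mbuf header and the headroom */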
+#define MBUF_SIZE   (2048 + DIGEST_BYTE_LENGTH_SHA512 + \
+				sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+
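+/* Convert a length in bits to a length in bytes */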
+#define BYTE_LENGTH(x)				((x) / 8)
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_MD5			(BYTE_LENGTH(128))
+#define DIGEST_BYTE_LENGTH_SHA1			(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA224		(BYTE_LENGTH(224))
+#define DIGEST_BYTE_LENGTH_SHA256		(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA384		(BYTE_LENGTH(384))
+#define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
+#define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
+#define AES_XCBC_MAC_KEY_SZ			(16)
+
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512		(32)
+
+#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
new file mode 100644
index 0000000..c2f8fe1
--- /dev/null
+++ b/app/test/test_cryptodev_perf.c
@@ -0,0 +1,1415 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_hexdump.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
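+/* PERF_NUM_OPS_INFLIGHT sizes the queue pair descriptor rings configured in
+ * testsuite_setup(); DEFAULT_NUM_REQS_TO_SUBMIT is the number of operations
+ * a perf run submits by default. */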
+#define PERF_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_REQS_TO_SUBMIT	(10000000)
+
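+/* State shared by all tests in the performance test suite */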
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_mp;
+	struct rte_mempool *crypto_op_mp;
+
+	uint16_t nb_queue_pairs;
+
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+	uint8_t dev_id;
+};
+
+#define MAX_NUM_OF_OPS_PER_UT	(128)
+
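+/* Per-test state; cleared in ut_setup() and released in ut_teardown() */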
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op_data *op;
+
+	struct rte_mbuf *obuf[MAX_NUM_OF_OPS_PER_UT];
+	struct rte_mbuf *ibuf[MAX_NUM_OF_OPS_PER_UT];
+
+	uint8_t *digest;
+};
+
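+/*
+ * Allocate an mbuf from mpool and copy in the start of the given string,
+ * truncating the copy down to a multiple of blocksize when blocksize is
+ * non-zero. No padding is added; returns NULL on allocation or append
+ * failure.
+ */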
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+	return m;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static enum rte_cryptodev_type gbl_cryptodev_preftest_devtype;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, valid_dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_mp = rte_mempool_lookup("CRYPTO_PERF_MBUFPOOL");
+	if (ts_params->mbuf_mp == NULL) {
+		/* Pool does not exist yet, so create it */
+		ts_params->mbuf_mp = rte_mempool_create("CRYPTO_PERF_MBUFPOOL",
+			NUM_MBUFS, MBUF_SIZE, MBUF_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
+			rte_socket_id(), 0);
+		if (ts_params->mbuf_mp == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->crypto_op_mp = rte_crypto_op_pool_create("CRYPTO_OP_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, DEFAULT_NUM_XFORMS, rte_socket_id());
+	if (ts_params->crypto_op_mp == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of pmd: %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found\n");
+		return TEST_FAILED;
+	}
+
+	/* Search for the first device of the preferred type */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_preftest_devtype) {
+			ts_params->dev_id = i;
+			valid_dev_id = 1;
+			break;
+		}
+	}
+
+	if (!valid_dev_id)
+		return TEST_FAILED;
+
+	/* Using Crypto Device Id 0 by default.
+	 * Since we can't free and re-allocate queue memory, always set the
+	 * queues on this device up to the maximum size first, so that enough
+	 * memory is allocated for any later re-configuration needed by other
+	 * tests. */
+
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs =
+			(gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_QAT_PMD) ?
+					RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+					RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->dev_id);
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	/* Now reconfigure the queues to the size we actually want to use in
+	 * this test suite. */
+	ts_params->qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+}
+
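+/* Per-test setup: clear the unit test state and reset the device stats */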
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	rte_cryptodev_stats_reset(ts_params->dev_id);
+
+	return TEST_SUCCESS;
+}
+
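+/* Per-test teardown: release the session, the operation and any mbufs still
+ * held by the test, then read back the device statistics */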
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	unsigned i;
+
+	/* free crypto session structure */
+	if (ut_params->sess)
+		rte_cryptodev_session_free(ts_params->dev_id,
+				ut_params->sess);
+
+	/* free crypto operation structure */
+	if (ut_params->op)
+		rte_crypto_op_free(ut_params->op);
+
+	for (i = 0; i < MAX_NUM_OF_OPS_PER_UT; i++) {
+		if (ut_params->obuf[i])
+			rte_pktmbuf_free(ut_params->obuf[i]);
+		else if (ut_params->ibuf[i])
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+	}
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+
+	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
+}
+
+const char plaintext_quote[] =
+		"THE COUNT OF MONTE CRISTO by Alexandre Dumas, Pere Chapter 1. "
+		"Marseilles--The Arrival. On the 24th of February, 1815, the "
+		"look-out at Notre-Dame de la Garde signalled the three-master,"
+		" the Pharaon from Smyrna, Trieste, and Naples. As usual, a "
+		"pilot put off immediately, and rounding the Chateau d'If, got "
+		"on board the vessel between Cape Morgion and Rion island. "
+		"Immediately, and according to custom, the ramparts of Fort "
+		"Saint-Jean were covered with spectators; it is always an event "
+		"at Marseilles for a ship to come into port, especially when "
+		"this ship, like the Pharaon, has been built, rigged, and laden"
+		" at the old Phocee docks, and belongs to an owner of the city."
+		" The ship drew on and had safely passed the strait, which some"
+		" volcanic shock has made between the Calasareigne and Jaros "
+		"islands; had doubled Pomegue, and approached the harbor under"
+		" topsails, jib, and spanker, but so slowly and sedately that"
+		" the idlers, with that instinct which is the forerunner of "
+		"evil, asked one another what misfortune could have happened "
+		"on board. However, those experienced in navigation saw plainly"
+		" that if any accident had occurred, it was not to the vessel "
+		"herself, for she bore down with all the evidence of being "
+		"skilfully handled, the anchor a-cockbill, the jib-boom guys "
+		"already eased off, and standing by the side of the pilot, who"
+		" was steering the Pharaon towards the narrow entrance of the"
+		" inner port, was a young man, who, with activity and vigilant"
+		" eye, watched every motion of the ship, and repeated each "
+		"direction of the pilot. The vague disquietude which prevailed "
+		"among the spectators had so much affected one of the crowd "
+		"that he did not await the arrival of the vessel in harbor, but"
+		" jumping into a small skiff, desired to be pulled alongside "
+		"the Pharaon, which he reached as she rounded into La Reserve "
+		"basin. When the young man on board saw this person approach, "
+		"he left his station by the pilot, and, hat in hand, leaned "
+		"over the ship's bulwarks. He was a fine, tall, slim young "
+		"fellow of eighteen or twenty, with black eyes, and hair as "
+		"dark as a raven's wing; and his whole appearance bespoke that "
+		"calmness and resolution peculiar to men accustomed from their "
+		"cradle to contend with danger. \"Ah, is it you, Dantes?\" "
+		"cried the man in the skiff. \"What's the matter? and why have "
+		"you such an air of sadness aboard?\" \"A great misfortune, M. "
+		"Morrel,\" replied the young man,--\"a great misfortune, for me"
+		" especially! Off Civita Vecchia we lost our brave Captain "
+		"Leclere.\" \"And the cargo?\" inquired the owner, eagerly. "
+		"\"Is all safe, M. Morrel; and I think you will be satisfied on"
+		" that head. But poor Captain Leclere--\" \"What happened to "
+		"him?\" asked the owner, with an air of considerable "
+		"resignation. \"What happened to the worthy captain?\" \"He "
+		"died.\" \"Fell into the sea?\" \"No, sir, he died of "
+		"brain-fever in dreadful agony.\" Then turning to the crew, "
+		"he said, \"Bear a hand there, to take in sail!\" All hands "
+		"obeyed, and at once the eight or ten seamen who composed the "
+		"crew, sprang to their respective stations at the spanker "
+		"brails and outhaul, topsail sheets and halyards, the jib "
+		"downhaul, and the topsail clewlines and buntlines. The young "
+		"sailor gave a look to see that his orders were promptly and "
+		"accurately obeyed, and then turned again to the owner. \"And "
+		"how did this misfortune occur?\" inquired the latter, resuming"
+		" the interrupted conversation. \"Alas, sir, in the most "
+		"unexpected manner. After a long talk with the harbor-master, "
+		"Captain Leclere left Naples greatly disturbed in mind. In "
+		"twenty-four hours he was attacked by a fever, and died three "
+		"days afterwards. We performed the usual burial service, and he"
+		" is at his rest, sewn up in his hammock with a thirty-six "
+		"pound shot at his head and his heels, off El Giglio island. "
+		"We bring to his widow his sword and cross of honor. It was "
+		"worth while, truly,\" added the young man with a melancholy "
+		"smile, \"to make war against the English for ten years, and "
+		"to die in his bed at last, like everybody else.";
+
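+/* Perf test payload sizes; each is used as a prefix of the quote above */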
+#define QUOTE_LEN_64B		(64)
+#define QUOTE_LEN_128B		(128)
+#define QUOTE_LEN_256B		(256)
+#define QUOTE_LEN_512B		(512)
+#define QUOTE_LEN_768B		(768)
+#define QUOTE_LEN_1024B		(1024)
+#define QUOTE_LEN_1280B		(1280)
+#define QUOTE_LEN_1536B		(1536)
+#define QUOTE_LEN_1792B		(1792)
+#define QUOTE_LEN_2048B		(2048)
+
+/* ***** AES-CBC / HMAC-SHA256 Performance Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
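+/* Fixed AES-CBC key and IV, and HMAC-SHA256 key, used for all perf test
+ * vectors below */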
+static uint8_t aes_cbc_key[] = {
+		0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+		0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA };
+
+static uint8_t aes_cbc_iv[] = {
+		0xf5, 0xd3, 0x89, 0x0f, 0x47, 0x00, 0xcb, 0x52,
+		0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1 };
+
+static uint8_t hmac_sha256_key[] = {
+		0xff, 0xcb, 0x37, 0x30, 0x1d, 0x4a, 0xc2, 0x41,
+		0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A,
+		0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+		0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+/* Reference AES-CBC ciphertexts for each payload size */
+
+static const uint8_t AES_CBC_ciphertext_64B[] = {
+		0x05, 0x15, 0x77, 0x32, 0xc9, 0x66, 0x91, 0x50, 0x93, 0x9f, 0xbb, 0x4e, 0x2e, 0x5a, 0x02, 0xd0,
+		0x2d, 0x9d, 0x31, 0x5d, 0xc8, 0x9e, 0x86, 0x36, 0x54, 0x5c, 0x50, 0xe8, 0x75, 0x54, 0x74, 0x5e,
+		0xd5, 0xa2, 0x84, 0x21, 0x2d, 0xc5, 0xf8, 0x1c, 0x55, 0x1a, 0xba, 0x91, 0xce, 0xb5, 0xa3, 0x1e,
+		0x31, 0xbf, 0xe9, 0xa1, 0x97, 0x5c, 0x2b, 0xd6, 0x57, 0xa5, 0x9f, 0xab, 0xbd, 0xb0, 0x9b, 0x9c
+};
+
+static const uint8_t AES_CBC_ciphertext_128B[] = {
+		0x79, 0x92, 0x65, 0xc8, 0xfb, 0x0a, 0xc7, 0xc4, 0x9b, 0x3b, 0xbe, 0x69, 0x7f, 0x7c, 0xf4, 0x4e,
+		0xa5, 0x0d, 0xf6, 0x33, 0xc4, 0xdf, 0xf3, 0x0d, 0xdb, 0xb9, 0x68, 0x34, 0xb0, 0x0d, 0xbd, 0xb9,
+		0xa7, 0xf3, 0x86, 0x50, 0x2a, 0xbe, 0x50, 0x5d, 0xb3, 0xbe, 0x72, 0xf9, 0x02, 0xb1, 0x69, 0x0b,
+		0x8c, 0x96, 0x4c, 0x3c, 0x0c, 0x1e, 0x76, 0xe5, 0x7e, 0x75, 0xdd, 0xd0, 0xa9, 0x75, 0x00, 0x13,
+		0x6b, 0x1e, 0xc0, 0xad, 0xfc, 0x03, 0xb5, 0x99, 0xdc, 0x37, 0x35, 0xfc, 0x16, 0x34, 0xfd, 0xb4,
+		0xea, 0x1e, 0xb6, 0x51, 0xdf, 0xab, 0x87, 0xd6, 0x87, 0x41, 0xfa, 0x1c, 0xc6, 0x78, 0xa6, 0x3c,
+		0x1d, 0x76, 0xfe, 0xff, 0x65, 0xfc, 0x63, 0x1e, 0x1f, 0xe2, 0x7c, 0x9b, 0xa2, 0x72, 0xc3, 0x34,
+		0x23, 0xdf, 0x01, 0xf0, 0xfd, 0x02, 0x8b, 0x97, 0x00, 0x2b, 0x97, 0x4e, 0xab, 0x98, 0x21, 0x3c
+};
+
+static const uint8_t AES_CBC_ciphertext_256B[] = {
+		0xc7, 0x71, 0x2b, 0xed, 0x2c, 0x97, 0x59, 0xfa, 0xcf, 0x5a, 0xb9, 0x31, 0x92, 0xe0, 0xc9, 0x92,
+		0xc0, 0x2d, 0xd5, 0x9c, 0x84, 0xbf, 0x70, 0x36, 0x13, 0x48, 0xe0, 0xb1, 0xbf, 0x6c, 0xcd, 0x91,
+		0xa0, 0xc3, 0x57, 0x6c, 0x3f, 0x0e, 0x34, 0x41, 0xe7, 0x9c, 0xc0, 0xec, 0x18, 0x0c, 0x05, 0x52,
+		0x78, 0xe2, 0x3c, 0x6e, 0xdf, 0xa5, 0x49, 0xc7, 0xf2, 0x55, 0x00, 0x8f, 0x65, 0x6d, 0x4b, 0xd0,
+		0xcb, 0xd4, 0xd2, 0x0b, 0xea, 0xf4, 0xb0, 0x85, 0x61, 0x9e, 0x36, 0xc0, 0x71, 0xb7, 0x80, 0xad,
+		0x40, 0x78, 0xb4, 0x70, 0x2b, 0xe8, 0x80, 0xc5, 0x19, 0x35, 0x96, 0x55, 0x3b, 0x40, 0x03, 0xbb,
+		0x9f, 0xa6, 0xc2, 0x82, 0x92, 0x04, 0xc3, 0xa6, 0x96, 0xc4, 0x7f, 0x4c, 0x3e, 0x3c, 0x79, 0x82,
+		0x88, 0x8b, 0x3f, 0x8b, 0xc5, 0x9f, 0x44, 0xbe, 0x71, 0xe7, 0x09, 0xa2, 0x40, 0xa2, 0x23, 0x4e,
+		0x9f, 0x31, 0xab, 0x6f, 0xdf, 0x59, 0x40, 0xe1, 0x12, 0x15, 0x55, 0x4b, 0xea, 0x3f, 0xa1, 0x41,
+		0x4f, 0xaf, 0xcd, 0x27, 0x2a, 0x61, 0xa1, 0x9e, 0x82, 0x30, 0x05, 0x05, 0x55, 0xce, 0x99, 0xd3,
+		0x8f, 0x3f, 0x86, 0x79, 0xdc, 0x9f, 0x33, 0x07, 0x75, 0x26, 0xc8, 0x72, 0x81, 0x0f, 0x9b, 0xf7,
+		0xb1, 0xfb, 0xd3, 0x91, 0x36, 0x08, 0xab, 0x26, 0x70, 0x53, 0x0c, 0x99, 0xfd, 0xa9, 0x07, 0xb4,
+		0xe9, 0xce, 0xc1, 0xd6, 0xd2, 0x2c, 0x71, 0x80, 0xec, 0x59, 0x61, 0x0b, 0x24, 0xf0, 0x6d, 0x33,
+		0x73, 0x45, 0x6e, 0x80, 0x03, 0x45, 0xf2, 0x76, 0xa5, 0x8a, 0xc9, 0xcf, 0xaf, 0x4a, 0xed, 0x35,
+		0xc0, 0x97, 0x52, 0xc5, 0x00, 0xdf, 0xef, 0xc7, 0x9f, 0xf2, 0xe8, 0x15, 0x3e, 0xb3, 0x30, 0xe7,
+		0x00, 0xd0, 0x4e, 0xeb, 0x79, 0xf6, 0xf6, 0xcf, 0xf0, 0xe7, 0x61, 0xd5, 0x3d, 0x6a, 0x73, 0x9d
+};
+
+static const uint8_t AES_CBC_ciphertext_512B[] = {
+		0xb4, 0xc6, 0xc6, 0x5f, 0x7e, 0xca, 0x05, 0x70, 0x21, 0x7b, 0x92, 0x9e, 0x23, 0xe7, 0x92, 0xb8,
+		0x27, 0x3d, 0x20, 0x29, 0x57, 0xfa, 0x1f, 0x26, 0x0a, 0x04, 0x34, 0xa6, 0xf2, 0xdc, 0x44, 0xb6,
+		0x43, 0x40, 0x62, 0xde, 0x0c, 0xde, 0x1c, 0x30, 0x43, 0x85, 0x0b, 0xe8, 0x93, 0x1f, 0xa1, 0x2a,
+		0x8a, 0x27, 0x35, 0x39, 0x14, 0x9f, 0x37, 0x64, 0x59, 0xb5, 0x0e, 0x96, 0x82, 0x5d, 0x63, 0x45,
+		0xd6, 0x93, 0x89, 0x46, 0xe4, 0x71, 0x31, 0xeb, 0x0e, 0xd1, 0x7b, 0xda, 0x90, 0xb5, 0x81, 0xac,
+		0x76, 0x54, 0x54, 0x85, 0x0b, 0xa9, 0x46, 0x9c, 0xf0, 0xfd, 0xde, 0x5d, 0xa8, 0xe3, 0xee, 0xe9,
+		0xf4, 0x9d, 0x34, 0x76, 0x39, 0xe7, 0xc3, 0x4a, 0x84, 0x38, 0x92, 0x61, 0xf1, 0x12, 0x9f, 0x05,
+		0xda, 0xdb, 0xc1, 0xd4, 0xb0, 0xa0, 0x27, 0x19, 0xa0, 0x56, 0x5d, 0x9b, 0xcc, 0x47, 0x7c, 0x15,
+		0x1d, 0x52, 0x66, 0xd5, 0xff, 0xef, 0x12, 0x23, 0x86, 0xe2, 0xee, 0x81, 0x2c, 0x3d, 0x7d, 0x28,
+		0xd5, 0x42, 0xdf, 0xdb, 0x75, 0x1c, 0xeb, 0xdf, 0x13, 0x23, 0xd5, 0x17, 0x89, 0xea, 0xd7, 0x01,
+		0xff, 0x57, 0x6a, 0x44, 0x61, 0xf4, 0xea, 0xbe, 0x97, 0x9b, 0xc2, 0xb1, 0x9c, 0x5d, 0xff, 0x4f,
+		0x73, 0x2d, 0x3f, 0x57, 0x28, 0x38, 0xbf, 0x3d, 0x9f, 0xda, 0x49, 0x55, 0x8f, 0xb2, 0x77, 0xec,
+		0x0f, 0xbc, 0xce, 0xb8, 0xc6, 0xe1, 0x03, 0xed, 0x35, 0x9c, 0xf2, 0x4d, 0xa4, 0x29, 0x6c, 0xd6,
+		0x6e, 0x05, 0x53, 0x46, 0xc1, 0x41, 0x09, 0x36, 0x0b, 0x7d, 0xf4, 0x9e, 0x0f, 0xba, 0x86, 0x33,
+		0xdd, 0xf1, 0xa7, 0xf7, 0xd5, 0x29, 0xa8, 0xa7, 0x4d, 0xce, 0x0c, 0xf5, 0xb4, 0x6c, 0xd8, 0x27,
+		0xb0, 0x87, 0x2a, 0x6f, 0x7f, 0x3f, 0x8f, 0xc3, 0xe2, 0x3e, 0x94, 0xcf, 0x61, 0x4a, 0x09, 0x3d,
+		0xf9, 0x55, 0x19, 0x31, 0xf2, 0xd2, 0x4a, 0x3e, 0xc1, 0xf5, 0xed, 0x7c, 0x45, 0xb0, 0x0c, 0x7b,
+		0xdd, 0xa6, 0x0a, 0x26, 0x66, 0xec, 0x85, 0x49, 0x00, 0x38, 0x05, 0x7c, 0x9c, 0x1c, 0x92, 0xf5,
+		0xf7, 0xdb, 0x5d, 0xbd, 0x61, 0x0c, 0xc9, 0xaf, 0xfd, 0x57, 0x3f, 0xee, 0x2b, 0xad, 0x73, 0xef,
+		0xa3, 0xc1, 0x66, 0x26, 0x44, 0x5e, 0xf9, 0x12, 0x86, 0x66, 0xa9, 0x61, 0x75, 0xa1, 0xbc, 0x40,
+		0x7f, 0xa8, 0x08, 0x02, 0xc0, 0x76, 0x0e, 0x76, 0xb3, 0x26, 0x3d, 0x1c, 0x40, 0x65, 0xe4, 0x18,
+		0x0f, 0x62, 0x17, 0x8f, 0x1e, 0x61, 0xb8, 0x08, 0x83, 0x54, 0x42, 0x11, 0x03, 0x30, 0x8e, 0xb7,
+		0xc1, 0x9c, 0xec, 0x69, 0x52, 0x95, 0xfb, 0x7b, 0x1a, 0x0c, 0x20, 0x24, 0xf7, 0xb8, 0x38, 0x0c,
+		0xb8, 0x7b, 0xb6, 0x69, 0x70, 0xd0, 0x61, 0xb9, 0x70, 0x06, 0xc2, 0x5b, 0x20, 0x47, 0xf7, 0xd9,
+		0x32, 0xc2, 0xf2, 0x90, 0xb6, 0x4d, 0xcd, 0x3c, 0x6d, 0x74, 0xea, 0x82, 0x35, 0x1b, 0x08, 0x44,
+		0xba, 0xb7, 0x33, 0x82, 0x33, 0x27, 0x54, 0x77, 0x6e, 0x58, 0xfe, 0x46, 0x5a, 0xb4, 0x88, 0x53,
+		0x8d, 0x9b, 0xb1, 0xab, 0xdf, 0x04, 0xe1, 0xfb, 0xd7, 0x1e, 0xd7, 0x38, 0x64, 0x54, 0xba, 0xb0,
+		0x6c, 0x84, 0x7a, 0x0f, 0xa7, 0x80, 0x6b, 0x86, 0xd9, 0xc9, 0xc6, 0x31, 0x95, 0xfa, 0x8a, 0x2c,
+		0x14, 0xe1, 0x85, 0x66, 0x27, 0xfd, 0x63, 0x3e, 0xf0, 0xfa, 0x81, 0xc9, 0x89, 0x4f, 0xe2, 0x6a,
+		0x8c, 0x17, 0xb5, 0xc7, 0x9f, 0x5d, 0x3f, 0x6b, 0x3f, 0xcd, 0x13, 0x7a, 0x3c, 0xe6, 0x4e, 0xfa,
+		0x7a, 0x10, 0xb8, 0x7c, 0x40, 0xec, 0x93, 0x11, 0x1f, 0xd0, 0x9e, 0xc3, 0x56, 0xb9, 0xf5, 0x21,
+		0x18, 0x41, 0x31, 0xea, 0x01, 0x8d, 0xea, 0x1c, 0x95, 0x5e, 0x56, 0x33, 0xbc, 0x7a, 0x3f, 0x6f
+};
+
+static const uint8_t AES_CBC_ciphertext_768B[] = {
+		0x3e, 0x7f, 0x9e, 0x4c, 0x88, 0x15, 0x68, 0x69, 0x10, 0x09, 0xe1, 0xa7, 0x0f, 0x27, 0x88, 0x2d,
+		0x90, 0x73, 0x4f, 0x67, 0xd3, 0x8b, 0xaf, 0xa1, 0x2c, 0x37, 0xa5, 0x6c, 0x7c, 0xbd, 0x95, 0x4c,
+		0x82, 0xcf, 0x05, 0x49, 0x16, 0x5c, 0xe7, 0x06, 0xd4, 0xcb, 0x55, 0x65, 0x9a, 0xd0, 0xe1, 0x46,
+		0x3a, 0x37, 0x71, 0xad, 0xb0, 0xb4, 0x99, 0x1e, 0x23, 0x57, 0x48, 0x96, 0x9c, 0xc5, 0xc4, 0xdb,
+		0x64, 0x3e, 0xc9, 0x7f, 0x90, 0x5a, 0xa0, 0x08, 0x75, 0x4c, 0x09, 0x06, 0x31, 0x6e, 0x59, 0x29,
+		0xfc, 0x2f, 0x72, 0xde, 0xf2, 0x40, 0x5a, 0xfe, 0xd3, 0x66, 0x64, 0xb8, 0x9c, 0xc9, 0xa6, 0x1f,
+		0xc3, 0x52, 0xcd, 0xb5, 0xd1, 0x4f, 0x43, 0x3f, 0xf4, 0x59, 0x25, 0xc4, 0xdd, 0x3e, 0x58, 0x7c,
+		0x21, 0xd6, 0x21, 0xce, 0xa4, 0xbe, 0x08, 0x23, 0x46, 0x68, 0xc0, 0x00, 0x91, 0x47, 0xca, 0x9b,
+		0xe0, 0xb4, 0xe3, 0xab, 0xbf, 0xcf, 0x68, 0x26, 0x97, 0x23, 0x09, 0x93, 0x64, 0x8f, 0x57, 0x59,
+		0xe2, 0x41, 0x7c, 0xa2, 0x48, 0x7e, 0xd5, 0x2c, 0x54, 0x09, 0x1b, 0x07, 0x94, 0xca, 0x39, 0x83,
+		0xdd, 0xf4, 0x7a, 0x1d, 0x2d, 0xdd, 0x67, 0xf7, 0x3c, 0x30, 0x89, 0x3e, 0xc1, 0xdc, 0x1d, 0x8f,
+		0xfc, 0xb1, 0xe9, 0x13, 0x31, 0xb0, 0x16, 0xdb, 0x88, 0xf2, 0x32, 0x7e, 0x73, 0xa3, 0xdf, 0x08,
+		0x6b, 0x53, 0x92, 0x08, 0xc9, 0x9d, 0x98, 0xb2, 0xf4, 0x8c, 0xb1, 0x95, 0xdc, 0xb6, 0xfc, 0xec,
+		0xf1, 0xc9, 0x0d, 0x6d, 0x42, 0x2c, 0xf5, 0x38, 0x29, 0xf4, 0xd8, 0x98, 0x0f, 0xb0, 0x81, 0xa5,
+		0xaa, 0xe6, 0x1f, 0x6e, 0x87, 0x32, 0x1b, 0x02, 0x07, 0x57, 0x38, 0x83, 0xf3, 0xe4, 0x54, 0x7c,
+		0xa8, 0x43, 0xdf, 0x3f, 0x42, 0xfd, 0x67, 0x28, 0x06, 0x4d, 0xea, 0xce, 0x1f, 0x84, 0x4a, 0xcd,
+		0x8c, 0x61, 0x5e, 0x8f, 0x61, 0xed, 0x84, 0x03, 0x53, 0x6a, 0x9e, 0xbf, 0x68, 0x83, 0xa7, 0x42,
+		0x56, 0x57, 0xcd, 0x45, 0x29, 0xfc, 0x7b, 0x07, 0xfc, 0xe9, 0xb9, 0x42, 0xfd, 0x29, 0xd5, 0xfd,
+		0x98, 0x11, 0xd1, 0x8d, 0x67, 0x29, 0x47, 0x61, 0xd8, 0x27, 0x37, 0x79, 0x29, 0xd1, 0x94, 0x6f,
+		0x8d, 0xf3, 0x1b, 0x3d, 0x6a, 0xb1, 0x59, 0xef, 0x1b, 0xd4, 0x70, 0x0e, 0xac, 0xab, 0xa0, 0x2b,
+		0x1f, 0x5e, 0x04, 0xf0, 0x0e, 0x35, 0x72, 0x90, 0xfc, 0xcf, 0x86, 0x43, 0xea, 0x45, 0x6d, 0x22,
+		0x63, 0x06, 0x1a, 0x58, 0xd7, 0x2d, 0xc5, 0xb0, 0x60, 0x69, 0xe8, 0x53, 0xc2, 0xa2, 0x57, 0x83,
+		0xc4, 0x31, 0xb4, 0xc6, 0xb3, 0xa1, 0x77, 0xb3, 0x1c, 0xca, 0x89, 0x3f, 0xf5, 0x10, 0x3b, 0x36,
+		0x31, 0x7d, 0x00, 0x46, 0x00, 0x92, 0xa0, 0xa0, 0x34, 0xd8, 0x5e, 0x62, 0xa9, 0xe0, 0x23, 0x37,
+		0x50, 0x85, 0xc7, 0x3a, 0x20, 0xa3, 0x98, 0xc0, 0xac, 0x20, 0x06, 0x0f, 0x17, 0x3c, 0xfc, 0x43,
+		0x8c, 0x9d, 0xec, 0xf5, 0x9a, 0x35, 0x96, 0xf7, 0xb7, 0x4c, 0xf9, 0x69, 0xf8, 0xd4, 0x1e, 0x9e,
+		0xf9, 0x7c, 0xc4, 0xd2, 0x11, 0x14, 0x41, 0xb9, 0x89, 0xd6, 0x07, 0xd2, 0x37, 0x07, 0x5e, 0x5e,
+		0xae, 0x60, 0xdc, 0xe4, 0xeb, 0x38, 0x48, 0x6d, 0x95, 0x8d, 0x71, 0xf2, 0xba, 0xda, 0x5f, 0x08,
+		0x9d, 0x4a, 0x0f, 0x56, 0x90, 0x64, 0xab, 0xb6, 0x88, 0x22, 0xa8, 0x90, 0x1f, 0x76, 0x2c, 0x83,
+		0x43, 0xce, 0x32, 0x55, 0x45, 0x84, 0x57, 0x43, 0xf9, 0xa8, 0xd1, 0x4f, 0xe3, 0xc1, 0x72, 0x9c,
+		0xeb, 0x64, 0xf7, 0xe4, 0x61, 0x2b, 0x93, 0xd1, 0x1f, 0xbb, 0x5c, 0xff, 0xa1, 0x59, 0x69, 0xcf,
+		0xf7, 0xaf, 0x58, 0x45, 0xd5, 0x3e, 0x98, 0x7d, 0x26, 0x39, 0x5c, 0x75, 0x3c, 0x4a, 0xbf, 0x5e,
+		0x12, 0x10, 0xb0, 0x93, 0x0f, 0x86, 0x82, 0xcf, 0xb2, 0xec, 0x70, 0x5c, 0x0b, 0xad, 0x5d, 0x63,
+		0x65, 0x32, 0xa6, 0x04, 0x58, 0x03, 0x91, 0x2b, 0xdb, 0x8f, 0xd3, 0xa3, 0x2b, 0x3a, 0xf5, 0xa1,
+		0x62, 0x6c, 0xb6, 0xf0, 0x13, 0x3b, 0x8c, 0x07, 0x10, 0x82, 0xc9, 0x56, 0x24, 0x87, 0xfc, 0x56,
+		0xe8, 0xef, 0x90, 0x8b, 0xd6, 0x48, 0xda, 0x53, 0x04, 0x49, 0x41, 0xa4, 0x67, 0xe0, 0x33, 0x24,
+		0x6b, 0x9c, 0x07, 0x55, 0x4c, 0x5d, 0xe9, 0x35, 0xfa, 0xbd, 0xea, 0xa8, 0x3f, 0xe9, 0xf5, 0x20,
+		0x5c, 0x60, 0x0f, 0x0d, 0x24, 0xcb, 0x1a, 0xd6, 0xe8, 0x5c, 0xa8, 0x42, 0xae, 0xd0, 0xd2, 0xf2,
+		0xa8, 0xbe, 0xea, 0x0f, 0x8d, 0xfb, 0x81, 0xa3, 0xa4, 0xef, 0xb7, 0x3e, 0x91, 0xbd, 0x26, 0x0f,
+		0x8e, 0xf1, 0xb2, 0xa5, 0x47, 0x06, 0xfa, 0x40, 0x8b, 0x31, 0x7a, 0x5a, 0x74, 0x2a, 0x0a, 0x7c,
+		0x62, 0x5d, 0x39, 0xa4, 0xae, 0x14, 0x85, 0x08, 0x5b, 0x20, 0x85, 0xf1, 0x57, 0x6e, 0x71, 0x13,
+		0x4e, 0x2b, 0x49, 0x87, 0x01, 0xdf, 0x37, 0xed, 0x28, 0xee, 0x4d, 0xa1, 0xf4, 0xb3, 0x3b, 0xba,
+		0x2d, 0xb3, 0x46, 0x17, 0x84, 0x80, 0x9d, 0xd7, 0x93, 0x1f, 0x28, 0x7c, 0xf5, 0xf9, 0xd6, 0x85,
+		0x8c, 0xa5, 0x44, 0xe9, 0x2c, 0x65, 0x51, 0x5f, 0x53, 0x7a, 0x09, 0xd9, 0x30, 0x16, 0x95, 0x89,
+		0x9c, 0x0b, 0xef, 0x90, 0x6d, 0x23, 0xd3, 0x48, 0x57, 0x3b, 0x55, 0x69, 0x96, 0xfc, 0xf7, 0x52,
+		0x92, 0x38, 0x36, 0xbf, 0xa9, 0x0a, 0xbb, 0x68, 0x45, 0x08, 0x25, 0xee, 0x59, 0xfe, 0xee, 0xf2,
+		0x2c, 0xd4, 0x5f, 0x78, 0x59, 0x0d, 0x90, 0xf1, 0xd7, 0xe4, 0x39, 0x0e, 0x46, 0x36, 0xf5, 0x75,
+		0x03, 0x3c, 0x28, 0xfb, 0xfa, 0x8f, 0xef, 0xc9, 0x61, 0x00, 0x94, 0xc3, 0xd2, 0x0f, 0xd9, 0xda
+};
+
+static const uint8_t AES_CBC_ciphertext_1024B[] = {
+		0x7d, 0x01, 0x7e, 0x2f, 0x92, 0xb3, 0xea, 0x72, 0x4a, 0x3f, 0x10, 0xf9, 0x2b, 0xb0, 0xd5, 0xb9,
+		0x19, 0x68, 0x94, 0xe9, 0x93, 0xe9, 0xd5, 0x26, 0x20, 0x44, 0xe2, 0x47, 0x15, 0x8d, 0x75, 0x48,
+		0x8e, 0xe4, 0x40, 0x81, 0xb5, 0x06, 0xa8, 0xb8, 0x0e, 0x0f, 0x3b, 0xbc, 0x5b, 0xbe, 0x3b, 0xa2,
+		0x2a, 0x0c, 0x48, 0x98, 0x19, 0xdf, 0xe9, 0x25, 0x75, 0xab, 0x93, 0x44, 0xb1, 0x72, 0x70, 0xbb,
+		0x20, 0xcf, 0x78, 0xe9, 0x4d, 0xc6, 0xa9, 0xa9, 0x84, 0x78, 0xc5, 0xc0, 0xc4, 0xc9, 0x79, 0x1a,
+		0xbc, 0x61, 0x25, 0x5f, 0xac, 0x01, 0x03, 0xb7, 0xef, 0x07, 0xf2, 0x62, 0x98, 0xee, 0xe3, 0xad,
+		0x94, 0x75, 0x30, 0x67, 0xb9, 0x15, 0x00, 0xe7, 0x11, 0x32, 0x2e, 0x6b, 0x55, 0x9f, 0xac, 0x68,
+		0xde, 0x61, 0x05, 0x80, 0x01, 0xf3, 0xad, 0xab, 0xaf, 0x45, 0xe0, 0xf4, 0x68, 0x5c, 0xc0, 0x52,
+		0x92, 0xc8, 0x21, 0xb6, 0xf5, 0x8a, 0x1d, 0xbb, 0xfc, 0x4a, 0x11, 0x62, 0xa2, 0xc4, 0xf1, 0x2d,
+		0x0e, 0xb2, 0xc7, 0x17, 0x34, 0xb4, 0x2a, 0x54, 0x81, 0xc2, 0x1e, 0xcf, 0x51, 0x0a, 0x76, 0x54,
+		0xf1, 0x48, 0x0d, 0x5c, 0xcd, 0x38, 0x3e, 0x38, 0x3e, 0xf8, 0x46, 0x1d, 0x00, 0xf5, 0x62, 0xe1,
+		0x5c, 0xb7, 0x8d, 0xce, 0xd0, 0x3f, 0xbb, 0x22, 0xf1, 0xe5, 0xb1, 0xa0, 0x58, 0x5e, 0x3c, 0x0f,
+		0x15, 0xd1, 0xac, 0x3e, 0xc7, 0x72, 0xc4, 0xde, 0x8b, 0x95, 0x3e, 0x91, 0xf7, 0x1d, 0x04, 0x9a,
+		0xc8, 0xe4, 0xbf, 0xd3, 0x22, 0xca, 0x4a, 0xdc, 0xb6, 0x16, 0x79, 0x81, 0x75, 0x2f, 0x6b, 0xa7,
+		0x04, 0x98, 0xa7, 0x4e, 0xc1, 0x19, 0x90, 0x33, 0x33, 0x3c, 0x7f, 0xdd, 0xac, 0x09, 0x0c, 0xc3,
+		0x91, 0x34, 0x74, 0xab, 0xa5, 0x35, 0x0a, 0x13, 0xc3, 0x56, 0x67, 0x6d, 0x1a, 0x3e, 0xbf, 0x56,
+		0x06, 0x67, 0x15, 0x5f, 0xfc, 0x8b, 0xa2, 0x3c, 0x5e, 0xaf, 0x56, 0x1f, 0xe3, 0x2e, 0x9d, 0x0a,
+		0xf9, 0x9b, 0xc7, 0xb5, 0x03, 0x1c, 0x68, 0x99, 0xfa, 0x3c, 0x37, 0x59, 0xc1, 0xf7, 0x6a, 0x83,
+		0x22, 0xee, 0xca, 0x7f, 0x7d, 0x49, 0xe6, 0x48, 0x84, 0x54, 0x7a, 0xff, 0xb3, 0x72, 0x21, 0xd8,
+		0x7a, 0x5d, 0xb1, 0x4b, 0xcc, 0x01, 0x6f, 0x90, 0xc6, 0x68, 0x1c, 0x2c, 0xa1, 0xe2, 0x74, 0x40,
+		0x26, 0x9b, 0x57, 0x53, 0xa3, 0x7c, 0x0b, 0x0d, 0xcf, 0x05, 0x5d, 0x62, 0x4f, 0x75, 0x06, 0x62,
+		0x1f, 0x26, 0x32, 0xaa, 0x25, 0xcc, 0x26, 0x8d, 0xae, 0x01, 0x47, 0xa3, 0x00, 0x42, 0xe2, 0x4c,
+		0xee, 0x29, 0xa2, 0x81, 0xa0, 0xfd, 0xeb, 0xff, 0x9a, 0x66, 0x6e, 0x47, 0x5b, 0xab, 0x93, 0x5a,
+		0x02, 0x6d, 0x6f, 0xf2, 0x6e, 0x02, 0x9d, 0xb1, 0xab, 0x56, 0xdc, 0x8b, 0x9b, 0x17, 0xa8, 0xfb,
+		0x87, 0x42, 0x7c, 0x91, 0x1e, 0x14, 0xc6, 0x6f, 0xdc, 0xf0, 0x27, 0x30, 0xfa, 0x3f, 0xc4, 0xad,
+		0x57, 0x85, 0xd2, 0xc9, 0x32, 0x2c, 0x13, 0xa6, 0x04, 0x04, 0x50, 0x05, 0x2f, 0x72, 0xd9, 0x44,
+		0x55, 0x6e, 0x93, 0x40, 0xed, 0x7e, 0xd4, 0x40, 0x3e, 0x88, 0x3b, 0x8b, 0xb6, 0xeb, 0xc6, 0x5d,
+		0x9c, 0x99, 0xa1, 0xcf, 0x30, 0xb2, 0xdc, 0x48, 0x8a, 0x01, 0xa7, 0x61, 0x77, 0x50, 0x14, 0xf3,
+		0x0c, 0x49, 0x53, 0xb3, 0xb4, 0xb4, 0x28, 0x41, 0x4a, 0x2d, 0xd2, 0x4d, 0x2a, 0x30, 0x31, 0x83,
+		0x03, 0x5e, 0xaa, 0xd3, 0xa3, 0xd1, 0xa1, 0xca, 0x62, 0xf0, 0xe1, 0xf2, 0xff, 0xf0, 0x19, 0xa6,
+		0xde, 0x22, 0x47, 0xb5, 0x28, 0x7d, 0xf7, 0x07, 0x16, 0x0d, 0xb1, 0x55, 0x81, 0x95, 0xe5, 0x1d,
+		0x4d, 0x78, 0xa9, 0x3e, 0xce, 0xe3, 0x1c, 0xf9, 0x47, 0xc8, 0xec, 0xc5, 0xc5, 0x93, 0x4c, 0x34,
+		0x20, 0x6b, 0xee, 0x9a, 0xe6, 0x86, 0x57, 0x58, 0xd5, 0x58, 0xf1, 0x33, 0x10, 0x29, 0x9e, 0x93,
+		0x2f, 0xf5, 0x90, 0x00, 0x17, 0x67, 0x4f, 0x39, 0x18, 0xe1, 0xcf, 0x55, 0x78, 0xbb, 0xe6, 0x29,
+		0x3e, 0x77, 0xd5, 0x48, 0xb7, 0x42, 0x72, 0x53, 0x27, 0xfa, 0x5b, 0xe0, 0x36, 0x14, 0x97, 0xb8,
+		0x9b, 0x3c, 0x09, 0x77, 0xc1, 0x0a, 0xe4, 0xa2, 0x63, 0xfc, 0xbe, 0x5c, 0x17, 0xcf, 0x01, 0xf5,
+		0x03, 0x0f, 0x17, 0xbc, 0x93, 0xdd, 0x5f, 0xe2, 0xf3, 0x08, 0xa8, 0xb1, 0x85, 0xb6, 0x34, 0x3f,
+		0x87, 0x42, 0xa5, 0x42, 0x3b, 0x0e, 0xd6, 0x83, 0x6a, 0xfd, 0x5d, 0xc9, 0x67, 0xd5, 0x51, 0xc9,
+		0x2a, 0x4e, 0x91, 0xb0, 0x59, 0xb2, 0x0f, 0xa2, 0xe6, 0x47, 0x73, 0xc2, 0xa2, 0xae, 0xbb, 0xc8,
+		0x42, 0xa3, 0x2a, 0x27, 0x29, 0x48, 0x8c, 0x54, 0x6c, 0xec, 0x00, 0x2a, 0x42, 0xa3, 0x7a, 0x0f,
+		0x12, 0x66, 0x6b, 0x96, 0xf6, 0xd0, 0x56, 0x4f, 0x49, 0x5c, 0x47, 0xec, 0x05, 0x62, 0x54, 0xb2,
+		0x64, 0x5a, 0x69, 0x1f, 0x19, 0xb4, 0x84, 0x5c, 0xbe, 0x48, 0x8e, 0xfc, 0x58, 0x21, 0xce, 0xfa,
+		0xaa, 0x84, 0xd2, 0xc1, 0x08, 0xb3, 0x87, 0x0f, 0x4f, 0xa3, 0x3a, 0xb6, 0x44, 0xbe, 0x2e, 0x9a,
+		0xdd, 0xb5, 0x44, 0x80, 0xca, 0xf4, 0xc3, 0x6e, 0xba, 0x93, 0x77, 0xe0, 0x53, 0xfb, 0x37, 0xfb,
+		0x88, 0xc3, 0x1f, 0x25, 0xde, 0x3e, 0x11, 0xf4, 0x89, 0xe7, 0xd1, 0x3b, 0xb4, 0x23, 0xcb, 0x70,
+		0xba, 0x35, 0x97, 0x7c, 0xbe, 0x84, 0x13, 0xcf, 0xe0, 0x4d, 0x33, 0x91, 0x71, 0x85, 0xbb, 0x4b,
+		0x97, 0x32, 0x5d, 0xa0, 0xb9, 0x8f, 0xdc, 0x27, 0x5a, 0xeb, 0x71, 0xf1, 0xd5, 0x0d, 0x65, 0xb4,
+		0x22, 0x81, 0xde, 0xa7, 0x58, 0x20, 0x0b, 0x18, 0x11, 0x76, 0x5c, 0xe6, 0x6a, 0x2c, 0x99, 0x69,
+		0xdc, 0xed, 0x67, 0x08, 0x5d, 0x5e, 0xe9, 0x1e, 0x55, 0x70, 0xc1, 0x5a, 0x76, 0x1b, 0x8d, 0x2e,
+		0x0d, 0xf9, 0xcc, 0x30, 0x8c, 0x44, 0x0f, 0x63, 0x8c, 0x42, 0x8a, 0x9f, 0x4c, 0xd1, 0x48, 0x28,
+		0x8a, 0xf5, 0x56, 0x2e, 0x23, 0x12, 0xfe, 0x67, 0x9a, 0x13, 0x65, 0x75, 0x83, 0xf1, 0x3c, 0x98,
+		0x07, 0x6b, 0xb7, 0x27, 0x5b, 0xf0, 0x70, 0xda, 0x30, 0xf8, 0x74, 0x4e, 0x7a, 0x32, 0x84, 0xcc,
+		0x0e, 0xcd, 0x80, 0x8b, 0x82, 0x31, 0x9a, 0x48, 0xcf, 0x75, 0x00, 0x1f, 0x4f, 0xe0, 0x8e, 0xa3,
+		0x6a, 0x2c, 0xd4, 0x73, 0x4c, 0x63, 0x7c, 0xa6, 0x4d, 0x5e, 0xfd, 0x43, 0x3b, 0x27, 0xe1, 0x5e,
+		0xa3, 0xa9, 0x5c, 0x3b, 0x60, 0xdd, 0xc6, 0x8d, 0x5a, 0xf1, 0x3e, 0x89, 0x4b, 0x24, 0xcf, 0x01,
+		0x3a, 0x2d, 0x44, 0xe7, 0xda, 0xe7, 0xa1, 0xac, 0x11, 0x05, 0x0c, 0xa9, 0x7a, 0x82, 0x8c, 0x5c,
+		0x29, 0x68, 0x9c, 0x73, 0x13, 0xcc, 0x67, 0x32, 0x11, 0x5e, 0xe5, 0xcc, 0x8c, 0xf5, 0xa7, 0x52,
+		0x83, 0x9a, 0x70, 0xef, 0xde, 0x55, 0x9c, 0xc7, 0x8a, 0xed, 0xad, 0x28, 0x4a, 0xc5, 0x92, 0x6d,
+		0x8e, 0x47, 0xca, 0xe3, 0xf8, 0x77, 0xb5, 0x26, 0x64, 0x84, 0xc2, 0xf1, 0xd7, 0xae, 0x0c, 0xb9,
+		0x39, 0x0f, 0x43, 0x6b, 0xe9, 0xe0, 0x09, 0x4b, 0xe5, 0xe3, 0x17, 0xa6, 0x68, 0x69, 0x46, 0xf4,
+		0xf0, 0x68, 0x7f, 0x2f, 0x1c, 0x7e, 0x4c, 0xd2, 0xb5, 0xc6, 0x16, 0x85, 0xcf, 0x02, 0x4c, 0x89,
+		0x0b, 0x25, 0xb0, 0xeb, 0xf3, 0x77, 0x08, 0x6a, 0x46, 0x5c, 0xf6, 0x2f, 0xf1, 0x24, 0xc3, 0x4d,
+		0x80, 0x60, 0x4d, 0x69, 0x98, 0xde, 0xc7, 0xa1, 0xf6, 0x4e, 0x18, 0x0c, 0x2a, 0xb0, 0xb2, 0xe0,
+		0x46, 0xe7, 0x49, 0x37, 0xc8, 0x5a, 0x23, 0x24, 0xe3, 0x0f, 0xcc, 0x92, 0xb4, 0x8d, 0xdc, 0x9e
+};
+
+static const uint8_t AES_CBC_ciphertext_1280B[] = {
+		0x91, 0x99, 0x5e, 0x9e, 0x84, 0xff, 0x59, 0x45, 0xc1, 0xf4, 0xbc, 0x9c, 0xb9, 0x30, 0x6c, 0x51,
+		0x73, 0x52, 0xb4, 0x44, 0x09, 0x79, 0xe2, 0x89, 0x75, 0xeb, 0x54, 0x26, 0xce, 0xd8, 0x24, 0x98,
+		0xaa, 0xf8, 0x13, 0x16, 0x68, 0x58, 0xc4, 0x82, 0x0e, 0x31, 0xd3, 0x6a, 0x13, 0x58, 0x31, 0xe9,
+		0x3a, 0xc1, 0x8b, 0xc5, 0x3f, 0x50, 0x42, 0xd1, 0x93, 0xe4, 0x9b, 0x65, 0x2b, 0xf4, 0x1d, 0x9e,
+		0x2d, 0xdb, 0x48, 0xef, 0x9a, 0x01, 0x68, 0xb6, 0xea, 0x7a, 0x2b, 0xad, 0xfe, 0x77, 0x44, 0x7e,
+		0x5a, 0xc5, 0x64, 0xb4, 0xfe, 0x5c, 0x80, 0xf3, 0x20, 0x7e, 0xaf, 0x5b, 0xf8, 0xd1, 0x38, 0xa0,
+		0x8d, 0x09, 0x77, 0x06, 0xfe, 0xf5, 0xf4, 0xe4, 0xee, 0xb8, 0x95, 0x27, 0xed, 0x07, 0xb8, 0xaa,
+		0x25, 0xb4, 0xe1, 0x4c, 0xeb, 0x3f, 0xdb, 0x39, 0x66, 0x28, 0x1b, 0x60, 0x42, 0x8b, 0x99, 0xd9,
+		0x49, 0xd6, 0x8c, 0xa4, 0x9d, 0xd8, 0x93, 0x58, 0x8f, 0xfa, 0xd3, 0xf7, 0x37, 0x9c, 0x88, 0xab,
+		0x16, 0x50, 0xfe, 0x01, 0x1f, 0x88, 0x48, 0xbe, 0x21, 0xa9, 0x90, 0x9e, 0x73, 0xe9, 0x82, 0xf7,
+		0xbf, 0x4b, 0x43, 0xf4, 0xbf, 0x22, 0x3c, 0x45, 0x47, 0x95, 0x5b, 0x49, 0x71, 0x07, 0x1c, 0x8b,
+		0x49, 0xa4, 0xa3, 0x49, 0xc4, 0x5f, 0xb1, 0xf5, 0xe3, 0x6b, 0xf1, 0xdc, 0xea, 0x92, 0x7b, 0x29,
+		0x40, 0xc9, 0x39, 0x5f, 0xdb, 0xbd, 0xf3, 0x6a, 0x09, 0x9b, 0x2a, 0x5e, 0xc7, 0x0b, 0x25, 0x94,
+		0x55, 0x71, 0x9c, 0x7e, 0x0e, 0xb4, 0x08, 0x12, 0x8c, 0x6e, 0x77, 0xb8, 0x29, 0xf1, 0xc6, 0x71,
+		0x04, 0x40, 0x77, 0x18, 0x3f, 0x01, 0x09, 0x9c, 0x23, 0x2b, 0x5d, 0x2a, 0x88, 0x20, 0x23, 0x59,
+		0x74, 0x2a, 0x67, 0x8f, 0xb7, 0xba, 0x38, 0x9f, 0x0f, 0xcf, 0x94, 0xdf, 0xe1, 0x8f, 0x35, 0x5e,
+		0x34, 0x0c, 0x32, 0x92, 0x2b, 0x23, 0x81, 0xf4, 0x73, 0xa0, 0x5a, 0x2a, 0xbd, 0xa6, 0x6b, 0xae,
+		0x43, 0xe2, 0xdc, 0x01, 0xc1, 0xc6, 0xc3, 0x04, 0x06, 0xbb, 0xb0, 0x89, 0xb3, 0x4e, 0xbd, 0x81,
+		0x1b, 0x03, 0x63, 0x93, 0xed, 0x4e, 0xf6, 0xe5, 0x94, 0x6f, 0xd6, 0xf3, 0x20, 0xf3, 0xbc, 0x30,
+		0xc5, 0xd6, 0xbe, 0x1c, 0x05, 0x34, 0x26, 0x4d, 0x46, 0x5e, 0x56, 0x63, 0xfb, 0xdb, 0xcd, 0xed,
+		0xb0, 0x7f, 0x83, 0x94, 0x55, 0x54, 0x2f, 0xab, 0xc9, 0xb7, 0x16, 0x4f, 0x9e, 0x93, 0x25, 0xd7,
+		0x9f, 0x39, 0x2b, 0x63, 0xcf, 0x1e, 0xa3, 0x0e, 0x28, 0x47, 0x8a, 0x5f, 0x40, 0x02, 0x89, 0x1f,
+		0x83, 0xe7, 0x87, 0xd1, 0x90, 0x17, 0xb8, 0x27, 0x64, 0xe1, 0xe1, 0x48, 0x5a, 0x55, 0x74, 0x99,
+		0x27, 0x9d, 0x05, 0x67, 0xda, 0x70, 0x12, 0x8f, 0x94, 0x96, 0xfd, 0x36, 0xa4, 0x1d, 0x22, 0xe5,
+		0x0b, 0xe5, 0x2f, 0x38, 0x55, 0xa3, 0x5d, 0x0b, 0xcf, 0xd4, 0xa9, 0xb8, 0xd6, 0x9a, 0x16, 0x2e,
+		0x6c, 0x4a, 0x25, 0x51, 0x7a, 0x09, 0x48, 0xdd, 0xf0, 0xa3, 0x5b, 0x08, 0x1e, 0x2f, 0x03, 0x91,
+		0x80, 0xe8, 0x0f, 0xe9, 0x5a, 0x2f, 0x90, 0xd3, 0x64, 0xed, 0xd7, 0x51, 0x17, 0x66, 0x53, 0x40,
+		0x43, 0x74, 0xef, 0x0a, 0x0d, 0x49, 0x41, 0xf2, 0x67, 0x6e, 0xea, 0x14, 0xc8, 0x74, 0xd6, 0xa9,
+		0xb9, 0x6a, 0xe3, 0xec, 0x7d, 0xe8, 0x6a, 0x21, 0x3a, 0x52, 0x42, 0xfe, 0x9a, 0x15, 0x6d, 0x60,
+		0x64, 0x88, 0xc5, 0xb2, 0x8b, 0x15, 0x2c, 0xff, 0xe2, 0x35, 0xc3, 0xee, 0x9f, 0xcd, 0x82, 0xd9,
+		0x14, 0x35, 0x2a, 0xb7, 0xf5, 0x2f, 0x7b, 0xbc, 0x01, 0xfd, 0xa8, 0xe0, 0x21, 0x4e, 0x73, 0xf9,
+		0xf2, 0xb0, 0x79, 0xc9, 0x10, 0x52, 0x8f, 0xa8, 0x3e, 0x3b, 0xbe, 0xc5, 0xde, 0xf6, 0x53, 0xe3,
+		0x1c, 0x25, 0x3a, 0x1f, 0x13, 0xbf, 0x13, 0xbb, 0x94, 0xc2, 0x97, 0x43, 0x64, 0x47, 0x8f, 0x76,
+		0xd7, 0xaa, 0xeb, 0xa4, 0x03, 0x50, 0x0c, 0x10, 0x50, 0xd8, 0xf7, 0x75, 0x52, 0x42, 0xe2, 0x94,
+		0x67, 0xf4, 0x60, 0xfb, 0x21, 0x9b, 0x7a, 0x05, 0x50, 0x7c, 0x1b, 0x4a, 0x8b, 0x29, 0xe1, 0xac,
+		0xd7, 0x99, 0xfd, 0x0d, 0x65, 0x92, 0xcd, 0x23, 0xa7, 0x35, 0x8e, 0x13, 0xf2, 0xe4, 0x10, 0x74,
+		0xc6, 0x4f, 0x19, 0xf7, 0x01, 0x0b, 0x46, 0xab, 0xef, 0x8d, 0x4a, 0x4a, 0xfa, 0xda, 0xf3, 0xfb,
+		0x40, 0x28, 0x88, 0xa2, 0x65, 0x98, 0x4d, 0x88, 0xc7, 0xbf, 0x00, 0xc8, 0xd0, 0x91, 0xcb, 0x89,
+		0x2f, 0xb0, 0x85, 0xfc, 0xa1, 0xc1, 0x9e, 0x83, 0x88, 0xad, 0x95, 0xc0, 0x31, 0xa0, 0xad, 0xa2,
+		0x42, 0xb5, 0xe7, 0x55, 0xd4, 0x93, 0x5a, 0x74, 0x4e, 0x41, 0xc3, 0xcf, 0x96, 0x83, 0x46, 0xa1,
+		0xb7, 0x5b, 0xb1, 0x34, 0x67, 0x4e, 0xb1, 0xd7, 0x40, 0x20, 0x72, 0xe9, 0xc8, 0x74, 0xb7, 0xde,
+		0x72, 0x29, 0x77, 0x4c, 0x74, 0x7e, 0xcc, 0x18, 0xa5, 0x8d, 0x79, 0x8c, 0xd6, 0x6e, 0xcb, 0xd9,
+		0xe1, 0x61, 0xe7, 0x36, 0xbc, 0x37, 0xea, 0xee, 0xd8, 0x3c, 0x5e, 0x7c, 0x47, 0x50, 0xd5, 0xec,
+		0x37, 0xc5, 0x63, 0xc3, 0xc9, 0x99, 0x23, 0x9f, 0x64, 0x39, 0xdf, 0x13, 0x96, 0x6d, 0xea, 0x08,
+		0x0c, 0x27, 0x2d, 0xfe, 0x0f, 0xc2, 0xa3, 0x97, 0x04, 0x12, 0x66, 0x0d, 0x94, 0xbf, 0xbe, 0x3e,
+		0xb9, 0xcf, 0x8e, 0xc1, 0x9d, 0xb1, 0x64, 0x17, 0x54, 0x92, 0x3f, 0x0a, 0x51, 0xc8, 0xf5, 0x82,
+		0x98, 0x73, 0x03, 0xc0, 0x5a, 0x51, 0x01, 0x67, 0xb4, 0x01, 0x04, 0x06, 0xbc, 0x37, 0xde, 0x96,
+		0x23, 0x3c, 0xce, 0x98, 0x3f, 0xd6, 0x51, 0x1b, 0x01, 0x83, 0x0a, 0x1c, 0xf9, 0xeb, 0x7e, 0x72,
+		0xa9, 0x51, 0x23, 0xc8, 0xd7, 0x2f, 0x12, 0xbc, 0x08, 0xac, 0x07, 0xe7, 0xa7, 0xe6, 0x46, 0xae,
+		0x54, 0xa3, 0xc2, 0xf2, 0x05, 0x2d, 0x06, 0x5e, 0xfc, 0xe2, 0xa2, 0x23, 0xac, 0x86, 0xf2, 0x54,
+		0x83, 0x4a, 0xb6, 0x48, 0x93, 0xa1, 0x78, 0xc2, 0x07, 0xec, 0x82, 0xf0, 0x74, 0xa9, 0x18, 0xe9,
+		0x53, 0x44, 0x49, 0xc2, 0x94, 0xf8, 0x94, 0x92, 0x08, 0x3f, 0xbf, 0xa6, 0xe5, 0xc6, 0x03, 0x8a,
+		0xc6, 0x90, 0x48, 0x6c, 0xee, 0xbd, 0x44, 0x92, 0x1f, 0x2a, 0xce, 0x1d, 0xb8, 0x31, 0xa2, 0x9d,
+		0x24, 0x93, 0xa8, 0x9f, 0x36, 0x00, 0x04, 0x7b, 0xcb, 0x93, 0x59, 0xa1, 0x53, 0xdb, 0x13, 0x7a,
+		0x54, 0xb1, 0x04, 0xdb, 0xce, 0x48, 0x4f, 0xe5, 0x2f, 0xcb, 0xdf, 0x8f, 0x50, 0x7c, 0xfc, 0x76,
+		0x80, 0xb4, 0xdc, 0x3b, 0xc8, 0x98, 0x95, 0xf5, 0x50, 0xba, 0x70, 0x5a, 0x97, 0xd5, 0xfc, 0x98,
+		0x4d, 0xf3, 0x61, 0x0f, 0xcf, 0xac, 0x49, 0x0a, 0xdb, 0xc1, 0x42, 0x8f, 0xb6, 0x29, 0xd5, 0x65,
+		0xef, 0x83, 0xf1, 0x30, 0x4b, 0x84, 0xd0, 0x69, 0xde, 0xd2, 0x99, 0xe5, 0xec, 0xd3, 0x90, 0x86,
+		0x39, 0x2a, 0x6e, 0xd5, 0x32, 0xe3, 0x0d, 0x2d, 0x01, 0x8b, 0x17, 0x55, 0x1d, 0x65, 0x57, 0xbf,
+		0xd8, 0x75, 0xa4, 0x85, 0xb6, 0x4e, 0x35, 0x14, 0x58, 0xe4, 0x89, 0xb8, 0x7a, 0x58, 0x86, 0x0c,
+		0xbd, 0x8b, 0x05, 0x7b, 0x63, 0xc0, 0x86, 0x80, 0x33, 0x46, 0xd4, 0x9b, 0xb6, 0x0a, 0xeb, 0x6c,
+		0xae, 0xd6, 0x57, 0x7a, 0xc7, 0x59, 0x33, 0xa0, 0xda, 0xa4, 0x12, 0xbf, 0x52, 0x22, 0x05, 0x8d,
+		0xeb, 0xee, 0xd5, 0xec, 0xea, 0x29, 0x9b, 0x76, 0x95, 0x50, 0x6d, 0x99, 0xe1, 0x45, 0x63, 0x09,
+		0x16, 0x5f, 0xb0, 0xf2, 0x5b, 0x08, 0x33, 0xdd, 0x8f, 0xb7, 0x60, 0x7a, 0x8e, 0xc6, 0xfc, 0xac,
+		0xa9, 0x56, 0x2c, 0xa9, 0x8b, 0x74, 0x33, 0xad, 0x2a, 0x7e, 0x96, 0xb6, 0xba, 0x22, 0x28, 0xcf,
+		0x4d, 0x96, 0xb7, 0xd1, 0xfa, 0x99, 0x4a, 0x61, 0xe6, 0x84, 0xd1, 0x94, 0xca, 0xf5, 0x86, 0xb0,
+		0xba, 0x34, 0x7a, 0x04, 0xcc, 0xd4, 0x81, 0xcd, 0xd9, 0x86, 0xb6, 0xe0, 0x5a, 0x6f, 0x9b, 0x99,
+		0xf0, 0xdf, 0x49, 0xae, 0x6d, 0xc2, 0x54, 0x67, 0xe0, 0xb4, 0x34, 0x2d, 0x1c, 0x46, 0xdf, 0x73,
+		0x3b, 0x45, 0x43, 0xe7, 0x1f, 0xa3, 0x36, 0x35, 0x25, 0x33, 0xd9, 0xc0, 0x54, 0x38, 0x6e, 0x6b,
+		0x80, 0xcf, 0x50, 0xa4, 0xb6, 0x21, 0x17, 0xfd, 0x9b, 0x5c, 0x36, 0xca, 0xcc, 0x73, 0x73, 0xad,
+		0xe0, 0x57, 0x77, 0x90, 0x0e, 0x7f, 0x0f, 0x87, 0x7f, 0xdb, 0x73, 0xbf, 0xda, 0xc2, 0xb3, 0x05,
+		0x22, 0x06, 0xf5, 0xa3, 0xfc, 0x1e, 0x8f, 0xda, 0xcf, 0x49, 0xd6, 0xb3, 0x66, 0x2c, 0xb5, 0x00,
+		0xaf, 0x85, 0x6e, 0xb8, 0x5b, 0x8c, 0xa1, 0xa4, 0x21, 0xce, 0x40, 0xf3, 0x98, 0xac, 0xec, 0x88,
+		0x62, 0x43, 0x2a, 0xac, 0xca, 0xcf, 0xb9, 0x30, 0xeb, 0xfc, 0xef, 0xf0, 0x6e, 0x64, 0x6d, 0xe7,
+		0x54, 0x88, 0x6b, 0x22, 0x29, 0xbe, 0xa5, 0x8c, 0x31, 0x23, 0x3b, 0x4a, 0x80, 0x37, 0xe6, 0xd0,
+		0x05, 0xfc, 0x10, 0x0e, 0xdd, 0xbb, 0x00, 0xc5, 0x07, 0x20, 0x59, 0xd3, 0x41, 0x17, 0x86, 0x46,
+		0xab, 0x68, 0xf6, 0x48, 0x3c, 0xea, 0x5a, 0x06, 0x30, 0x21, 0x19, 0xed, 0x74, 0xbe, 0x0b, 0x97,
+		0xee, 0x91, 0x35, 0x94, 0x1f, 0xcb, 0x68, 0x7f, 0xe4, 0x48, 0xb0, 0x16, 0xfb, 0xf0, 0x74, 0xdb,
+		0x06, 0x59, 0x2e, 0x5a, 0x9c, 0xce, 0x8f, 0x7d, 0xba, 0x48, 0xd5, 0x3f, 0x5c, 0xb0, 0xc2, 0x33,
+		0x48, 0x60, 0x17, 0x08, 0x85, 0xba, 0xff, 0xb9, 0x34, 0x0a, 0x3d, 0x8f, 0x21, 0x13, 0x12, 0x1b
+};
+
+static const uint8_t AES_CBC_ciphertext_1536B[] = {
+		0x89, 0x93, 0x05, 0x99, 0xa9, 0xed, 0xea, 0x62, 0xc9, 0xda, 0x51, 0x15, 0xce, 0x42, 0x91, 0xc3,
+		0x80, 0xc8, 0x03, 0x88, 0xc2, 0x63, 0xda, 0x53, 0x1a, 0xf3, 0xeb, 0xd5, 0xba, 0x6f, 0x23, 0xb2,
+		0xed, 0x8f, 0x89, 0xb1, 0xb3, 0xca, 0x90, 0x7a, 0xdd, 0x3f, 0xf6, 0xca, 0x86, 0x58, 0x54, 0xbc,
+		0xab, 0x0f, 0xf4, 0xab, 0x6d, 0x5d, 0x42, 0xd0, 0x17, 0x49, 0x17, 0xd1, 0x93, 0xea, 0xe8, 0x22,
+		0xc1, 0x34, 0x9f, 0x3a, 0x3b, 0xaa, 0xe9, 0x1b, 0x93, 0xff, 0x6b, 0x68, 0xba, 0xe6, 0xd2, 0x39,
+		0x3d, 0x55, 0x34, 0x8f, 0x98, 0x86, 0xb4, 0xd8, 0x7c, 0x0d, 0x3e, 0x01, 0x63, 0x04, 0x01, 0xff,
+		0x16, 0x0f, 0x51, 0x5f, 0x73, 0x53, 0xf0, 0x3a, 0x38, 0xb4, 0x4d, 0x8d, 0xaf, 0xa3, 0xca, 0x2f,
+		0x6f, 0xdf, 0xc0, 0x41, 0x6c, 0x48, 0x60, 0x1a, 0xe4, 0xe7, 0x8a, 0x65, 0x6f, 0x8d, 0xd7, 0xe1,
+		0x10, 0xab, 0x78, 0x5b, 0xb9, 0x69, 0x1f, 0xe0, 0x5c, 0xf1, 0x19, 0x12, 0x21, 0xc7, 0x51, 0xbc,
+		0x61, 0x5f, 0xc0, 0x36, 0x17, 0xc0, 0x28, 0xd9, 0x51, 0xcb, 0x43, 0xd9, 0xfa, 0xd1, 0xad, 0x79,
+		0x69, 0x86, 0x49, 0xc5, 0xe5, 0x69, 0x27, 0xce, 0x22, 0xd0, 0xe1, 0x6a, 0xf9, 0x02, 0xca, 0x6c,
+		0x34, 0xc7, 0xb8, 0x02, 0xc1, 0x38, 0x7f, 0xd5, 0x15, 0xf5, 0xd6, 0xeb, 0xf9, 0x30, 0x40, 0x43,
+		0xea, 0x87, 0xde, 0x35, 0xf6, 0x83, 0x59, 0x09, 0x68, 0x62, 0x00, 0x87, 0xb8, 0xe7, 0xca, 0x05,
+		0x0f, 0xac, 0x42, 0x58, 0x45, 0xaa, 0xc9, 0x9b, 0xfd, 0x2a, 0xda, 0x65, 0x33, 0x93, 0x9d, 0xc6,
+		0x93, 0x8d, 0xe2, 0xc5, 0x71, 0xc1, 0x5c, 0x13, 0xde, 0x7b, 0xd4, 0xb9, 0x4c, 0x35, 0x61, 0x85,
+		0x90, 0x78, 0xf7, 0x81, 0x98, 0x45, 0x99, 0x24, 0x58, 0x73, 0x28, 0xf8, 0x31, 0xab, 0x54, 0x2e,
+		0xc0, 0x38, 0x77, 0x25, 0x5c, 0x06, 0x9c, 0xc3, 0x69, 0x21, 0x92, 0x76, 0xe1, 0x16, 0xdc, 0xa9,
+		0xee, 0xb6, 0x80, 0x66, 0x43, 0x11, 0x24, 0xb3, 0x07, 0x17, 0x89, 0x0f, 0xcb, 0xe0, 0x60, 0xa8,
+		0x9d, 0x06, 0x4b, 0x6e, 0x72, 0xb7, 0xbc, 0x4f, 0xb8, 0xc0, 0x80, 0xa2, 0xfb, 0x46, 0x5b, 0x8f,
+		0x11, 0x01, 0x92, 0x9d, 0x37, 0x09, 0x98, 0xc8, 0x0a, 0x46, 0xae, 0x12, 0xac, 0x61, 0x3f, 0xe7,
+		0x41, 0x1a, 0xaa, 0x2e, 0xdc, 0xd7, 0x2a, 0x47, 0xee, 0xdf, 0x08, 0xd1, 0xff, 0xea, 0x13, 0xc6,
+		0x05, 0xdb, 0x29, 0xcc, 0x03, 0xba, 0x7b, 0x6d, 0x40, 0xc1, 0xc9, 0x76, 0x75, 0x03, 0x7a, 0x71,
+		0xc9, 0x5f, 0xd9, 0xe0, 0x61, 0x69, 0x36, 0x8f, 0xb2, 0xbc, 0x28, 0xf3, 0x90, 0x71, 0xda, 0x5f,
+		0x08, 0xd5, 0x0d, 0xc1, 0xe6, 0xbd, 0x2b, 0xc6, 0x6c, 0x42, 0xfd, 0xbf, 0x10, 0xe8, 0x5f, 0x87,
+		0x3d, 0x21, 0x42, 0x85, 0x01, 0x0a, 0xbf, 0x8e, 0x49, 0xd3, 0x9c, 0x89, 0x3b, 0xea, 0xe1, 0xbf,
+		0xe9, 0x9b, 0x5e, 0x0e, 0xb8, 0xeb, 0xcd, 0x3a, 0xf6, 0x29, 0x41, 0x35, 0xdd, 0x9b, 0x13, 0x24,
+		0xe0, 0x1d, 0x8a, 0xcb, 0x20, 0xf8, 0x41, 0x51, 0x3e, 0x23, 0x8c, 0x67, 0x98, 0x39, 0x53, 0x77,
+		0x2a, 0x68, 0xf4, 0x3c, 0x7e, 0xd6, 0xc4, 0x6e, 0xf1, 0x53, 0xe9, 0xd8, 0x5c, 0xc1, 0xa9, 0x38,
+		0x6f, 0x5e, 0xe4, 0xd4, 0x29, 0x1c, 0x6c, 0xee, 0x2f, 0xea, 0xde, 0x61, 0x71, 0x5a, 0xea, 0xce,
+		0x23, 0x6e, 0x1b, 0x16, 0x43, 0xb7, 0xc0, 0xe3, 0x87, 0xa1, 0x95, 0x1e, 0x97, 0x4d, 0xea, 0xa6,
+		0xf7, 0x25, 0xac, 0x82, 0x2a, 0xd3, 0xa6, 0x99, 0x75, 0xdd, 0xc1, 0x55, 0x32, 0x6b, 0xea, 0x33,
+		0x88, 0xce, 0x06, 0xac, 0x15, 0x39, 0x19, 0xa3, 0x59, 0xaf, 0x7a, 0x1f, 0xd9, 0x72, 0x5e, 0xf7,
+		0x4c, 0xf3, 0x5d, 0x6b, 0xf2, 0x16, 0x92, 0xa8, 0x9e, 0x3d, 0xd4, 0x4c, 0x72, 0x55, 0x4e, 0x4a,
+		0xf7, 0x8b, 0x2f, 0x67, 0x5a, 0x90, 0xb7, 0xcf, 0x16, 0xd3, 0x7b, 0x5a, 0x9a, 0xc8, 0x9f, 0xbf,
+		0x01, 0x76, 0x3b, 0x86, 0x2c, 0x2a, 0x78, 0x10, 0x70, 0x05, 0x38, 0xf9, 0xdd, 0x2a, 0x1d, 0x00,
+		0x25, 0xb7, 0x10, 0xac, 0x3b, 0x3c, 0x4d, 0x3c, 0x01, 0x68, 0x3c, 0x5a, 0x29, 0xc2, 0xa0, 0x1b,
+		0x95, 0x67, 0xf9, 0x0a, 0x60, 0xb7, 0x11, 0x9c, 0x40, 0x45, 0xd7, 0xb0, 0xda, 0x49, 0x87, 0xcd,
+		0xb0, 0x9b, 0x61, 0x8c, 0xf4, 0x0d, 0x94, 0x1d, 0x79, 0x66, 0x13, 0x0b, 0xc6, 0x6b, 0x19, 0xee,
+		0xa0, 0x6b, 0x64, 0x7d, 0xc4, 0xff, 0x98, 0x72, 0x60, 0xab, 0x7f, 0x0f, 0x4d, 0x5d, 0x6b, 0xc3,
+		0xba, 0x5e, 0x0d, 0x04, 0xd9, 0x59, 0x17, 0xd0, 0x64, 0xbe, 0xfb, 0x58, 0xfc, 0xed, 0x18, 0xf6,
+		0xac, 0x19, 0xa4, 0xfd, 0x16, 0x59, 0x80, 0x58, 0xb8, 0x0f, 0x79, 0x24, 0x60, 0x18, 0x62, 0xa9,
+		0xa3, 0xa0, 0xe8, 0x81, 0xd6, 0xec, 0x5b, 0xfe, 0x5b, 0xb8, 0xa4, 0x00, 0xa9, 0xd0, 0x90, 0x17,
+		0xe5, 0x50, 0x3d, 0x2b, 0x12, 0x6e, 0x2a, 0x13, 0x65, 0x7c, 0xdf, 0xdf, 0xa7, 0xdd, 0x9f, 0x78,
+		0x5f, 0x8f, 0x4e, 0x90, 0xa6, 0x10, 0xe4, 0x7b, 0x68, 0x6b, 0xfd, 0xa9, 0x6d, 0x47, 0xfa, 0xec,
+		0x42, 0x35, 0x07, 0x12, 0x3e, 0x78, 0x23, 0x15, 0xff, 0xe2, 0x65, 0xc7, 0x47, 0x89, 0x2f, 0x97,
+		0x7c, 0xd7, 0x6b, 0x69, 0x35, 0x79, 0x6f, 0x85, 0xb4, 0xa9, 0x75, 0x04, 0x32, 0x9a, 0xfe, 0xf0,
+		0xce, 0xe3, 0xf1, 0xab, 0x15, 0x47, 0xe4, 0x9c, 0xc1, 0x48, 0x32, 0x3c, 0xbe, 0x44, 0x72, 0xc9,
+		0xaa, 0x50, 0x37, 0xa6, 0xbe, 0x41, 0xcf, 0xe8, 0x17, 0x4e, 0x37, 0xbe, 0xf1, 0x34, 0x2c, 0xd9,
+		0x60, 0x48, 0x09, 0xa5, 0x26, 0x00, 0x31, 0x77, 0x4e, 0xac, 0x7c, 0x89, 0x75, 0xe3, 0xde, 0x26,
+		0x4c, 0x32, 0x54, 0x27, 0x8e, 0x92, 0x26, 0x42, 0x85, 0x76, 0x01, 0x76, 0x62, 0x4c, 0x29, 0xe9,
+		0x38, 0x05, 0x51, 0x54, 0x97, 0xa3, 0x03, 0x59, 0x5e, 0xec, 0x0c, 0xe4, 0x96, 0xb7, 0x15, 0xa8,
+		0x41, 0x06, 0x2b, 0x78, 0x95, 0x24, 0xf6, 0x32, 0xc5, 0xec, 0xd7, 0x89, 0x28, 0x1e, 0xec, 0xb1,
+		0xc7, 0x21, 0x0c, 0xd3, 0x80, 0x7c, 0x5a, 0xe6, 0xb1, 0x3a, 0x52, 0x33, 0x84, 0x4e, 0x32, 0x6e,
+		0x7a, 0xf6, 0x43, 0x15, 0x5b, 0xa6, 0xba, 0xeb, 0xa8, 0xe4, 0xff, 0x4f, 0xbd, 0xbd, 0xa8, 0x5e,
+		0xbe, 0x27, 0xaf, 0xc5, 0xf7, 0x9e, 0xdf, 0x48, 0x22, 0xca, 0x6a, 0x0b, 0x3c, 0xd7, 0xe0, 0xdc,
+		0xf3, 0x71, 0x08, 0xdc, 0x28, 0x13, 0x08, 0xf2, 0x08, 0x1d, 0x9d, 0x7b, 0xd9, 0xde, 0x6f, 0xe6,
+		0xe8, 0x88, 0x18, 0xc2, 0xcd, 0x93, 0xc5, 0x38, 0x21, 0x68, 0x4c, 0x9a, 0xfb, 0xb6, 0x18, 0x16,
+		0x73, 0x2c, 0x1d, 0x6f, 0x95, 0xfb, 0x65, 0x4f, 0x7c, 0xec, 0x8d, 0x6c, 0xa8, 0xc0, 0x55, 0x28,
+		0xc6, 0xc3, 0xea, 0xeb, 0x05, 0xf5, 0x65, 0xeb, 0x53, 0xe1, 0x54, 0xef, 0xb8, 0x64, 0x98, 0x2d,
+		0x98, 0x9e, 0xc8, 0xfe, 0xa2, 0x07, 0x30, 0xf7, 0xf7, 0xae, 0xdb, 0x32, 0xf8, 0x71, 0x9d, 0x06,
+		0xdf, 0x9b, 0xda, 0x61, 0x7d, 0xdb, 0xae, 0x06, 0x24, 0x63, 0x74, 0xb6, 0xf3, 0x1b, 0x66, 0x09,
+		0x60, 0xff, 0x2b, 0x29, 0xf5, 0xa9, 0x9d, 0x61, 0x5d, 0x55, 0x10, 0x82, 0x21, 0xbb, 0x64, 0x0d,
+		0xef, 0x5c, 0xe3, 0x30, 0x1b, 0x60, 0x1e, 0x5b, 0xfe, 0x6c, 0xf5, 0x15, 0xa3, 0x86, 0x27, 0x58,
+		0x46, 0x00, 0x20, 0xcb, 0x86, 0x9a, 0x52, 0x29, 0x20, 0x68, 0x4d, 0x67, 0x88, 0x70, 0xc2, 0x31,
+		0xd8, 0xbb, 0xa5, 0xa7, 0x88, 0x7f, 0x66, 0xbc, 0xaa, 0x0f, 0xe1, 0x78, 0x7b, 0x97, 0x3c, 0xb7,
+		0xd7, 0xd8, 0x04, 0xe0, 0x09, 0x60, 0xc8, 0xd0, 0x9e, 0xe5, 0x6b, 0x31, 0x7f, 0x88, 0xfe, 0xc3,
+		0xfd, 0x89, 0xec, 0x76, 0x4b, 0xb3, 0xa7, 0x37, 0x03, 0xb7, 0xc6, 0x10, 0x7c, 0x9d, 0x0c, 0x75,
+		0xd3, 0x08, 0x14, 0x94, 0x03, 0x42, 0x25, 0x26, 0x85, 0xf7, 0xf0, 0x90, 0x06, 0x3e, 0x6f, 0x60,
+		0x52, 0x55, 0xd5, 0x0f, 0x79, 0x64, 0x69, 0x69, 0x46, 0xf9, 0x7f, 0x7f, 0x03, 0xf1, 0x1f, 0xdb,
+		0x39, 0x05, 0xba, 0x4a, 0x8f, 0x17, 0xe7, 0xba, 0xe2, 0x07, 0x7c, 0x1d, 0x9e, 0xbc, 0x94, 0xc0,
+		0x61, 0x59, 0x8e, 0x72, 0xaf, 0xfc, 0x99, 0xe4, 0xd5, 0xa8, 0xee, 0x0a, 0x48, 0x2d, 0x82, 0x8b,
+		0x34, 0x54, 0x8a, 0xce, 0xc7, 0xfa, 0xdd, 0xba, 0x54, 0xdf, 0xb3, 0x30, 0x33, 0x73, 0x2e, 0xd5,
+		0x52, 0xab, 0x49, 0x91, 0x4e, 0x0a, 0xd6, 0x2f, 0x67, 0xe4, 0xdd, 0x64, 0x48, 0x16, 0xd9, 0x85,
+		0xaa, 0x52, 0xa5, 0x0b, 0xd3, 0xb4, 0x2d, 0x77, 0x5e, 0x52, 0x77, 0x17, 0xcf, 0xbe, 0x88, 0x04,
+		0x01, 0x52, 0xe2, 0xf1, 0x46, 0xe2, 0x91, 0x30, 0x65, 0xcf, 0xc0, 0x65, 0x45, 0xc3, 0x7e, 0xf4,
+		0x2e, 0xb5, 0xaf, 0x6f, 0xab, 0x1a, 0xfa, 0x70, 0x35, 0xb8, 0x4f, 0x2d, 0x78, 0x90, 0x33, 0xb5,
+		0x9a, 0x67, 0xdb, 0x2f, 0x28, 0x32, 0xb6, 0x54, 0xab, 0x4c, 0x6b, 0x85, 0xed, 0x6c, 0x3e, 0x05,
+		0x2a, 0xc7, 0x32, 0xe8, 0xf5, 0xa3, 0x7b, 0x4e, 0x7b, 0x58, 0x24, 0x73, 0xf7, 0xfd, 0xc7, 0xc8,
+		0x6c, 0x71, 0x68, 0xb1, 0xf6, 0xc5, 0x9e, 0x1e, 0xe3, 0x5c, 0x25, 0xc0, 0x5b, 0x3e, 0x59, 0xa1,
+		0x18, 0x5a, 0xe8, 0xb5, 0xd1, 0x44, 0x13, 0xa3, 0xe6, 0x05, 0x76, 0xd2, 0x8d, 0x6e, 0x54, 0x68,
+		0x0c, 0xa4, 0x7b, 0x8b, 0xd3, 0x8c, 0x42, 0x13, 0x87, 0xda, 0xdf, 0x8f, 0xa5, 0x83, 0x7a, 0x42,
+		0x99, 0xb7, 0xeb, 0xe2, 0x79, 0xe0, 0xdb, 0xda, 0x33, 0xa8, 0x50, 0x3a, 0xd7, 0xe7, 0xd3, 0x61,
+		0x18, 0xb8, 0xaa, 0x2d, 0xc8, 0xd8, 0x2c, 0x28, 0xe5, 0x97, 0x0a, 0x7c, 0x6c, 0x7f, 0x09, 0xd7,
+		0x88, 0x80, 0xac, 0x12, 0xed, 0xf8, 0xc6, 0xb5, 0x2d, 0xd6, 0x63, 0x9b, 0x98, 0x35, 0x26, 0xde,
+		0xf6, 0x31, 0xee, 0x7e, 0xa0, 0xfb, 0x16, 0x98, 0xb1, 0x96, 0x1d, 0xee, 0xe3, 0x2f, 0xfb, 0x41,
+		0xdd, 0xea, 0x10, 0x1e, 0x03, 0x89, 0x18, 0xd2, 0x47, 0x0c, 0xa0, 0x57, 0xda, 0x76, 0x3a, 0x37,
+		0x2c, 0xe4, 0xf9, 0x77, 0xc8, 0x43, 0x5f, 0xcb, 0xd6, 0x85, 0xf7, 0x22, 0xe4, 0x32, 0x25, 0xa8,
+		0xdc, 0x21, 0xc0, 0xf5, 0x95, 0xb2, 0xf8, 0x83, 0xf0, 0x65, 0x61, 0x15, 0x48, 0x94, 0xb7, 0x03,
+		0x7f, 0x66, 0xa1, 0x39, 0x1f, 0xdd, 0xce, 0x96, 0xfe, 0x58, 0x81, 0x3d, 0x41, 0x11, 0x87, 0x13,
+		0x26, 0x1b, 0x6d, 0xf3, 0xca, 0x2e, 0x2c, 0x76, 0xd3, 0x2f, 0x6d, 0x49, 0x70, 0x53, 0x05, 0x96,
+		0xcc, 0x30, 0x2b, 0x83, 0xf2, 0xc6, 0xb2, 0x4b, 0x22, 0x13, 0x95, 0x42, 0xeb, 0x56, 0x4d, 0x22,
+		0xe6, 0x43, 0x6f, 0xba, 0xe7, 0x3b, 0xe5, 0x59, 0xce, 0x57, 0x88, 0x85, 0xb6, 0xbf, 0x15, 0x37,
+		0xb3, 0x7a, 0x7e, 0xc4, 0xbc, 0x99, 0xfc, 0xe4, 0x89, 0x00, 0x68, 0x39, 0xbc, 0x5a, 0xba, 0xab,
+		0x52, 0xab, 0xe6, 0x81, 0xfd, 0x93, 0x62, 0xe9, 0xb7, 0x12, 0xd1, 0x18, 0x1a, 0xb9, 0x55, 0x4a,
+		0x0f, 0xae, 0x35, 0x11, 0x04, 0x27, 0xf3, 0x42, 0x4e, 0xca, 0xdf, 0x9f, 0x12, 0x62, 0xea, 0x03,
+		0xc0, 0xa9, 0x22, 0x7b, 0x6c, 0x6c, 0xe3, 0xdf, 0x16, 0xad, 0x03, 0xc9, 0xfe, 0xa4, 0xdd, 0x4f
+};
+
+static const uint8_t AES_CBC_ciphertext_1792B[] = {
+		0x59, 0xcc, 0xfe, 0x8f, 0xb4, 0x9d, 0x0e, 0xd1, 0x85, 0xfc, 0x9b, 0x43, 0xc1, 0xb7, 0x54, 0x67,
+		0x01, 0xef, 0xb8, 0x71, 0x36, 0xdb, 0x50, 0x48, 0x7a, 0xea, 0xcf, 0xce, 0xba, 0x30, 0x10, 0x2e,
+		0x96, 0x2b, 0xfd, 0xcf, 0x00, 0xe3, 0x1f, 0xac, 0x66, 0x14, 0x30, 0x86, 0x49, 0xdb, 0x01, 0x8b,
+		0x07, 0xdd, 0x00, 0x9d, 0x0d, 0x5c, 0x19, 0x11, 0xe8, 0x44, 0x2b, 0x25, 0x70, 0xed, 0x7c, 0x33,
+		0x0d, 0xe3, 0x34, 0x93, 0x63, 0xad, 0x26, 0xb1, 0x11, 0x91, 0x34, 0x2e, 0x1d, 0x50, 0xaa, 0xd4,
+		0xef, 0x3a, 0x6d, 0xd7, 0x33, 0x20, 0x0d, 0x3f, 0x9b, 0xdd, 0xc3, 0xa5, 0xc5, 0xf1, 0x99, 0xdc,
+		0xea, 0x52, 0xda, 0x55, 0xea, 0xa2, 0x7a, 0xc5, 0x78, 0x44, 0x4a, 0x02, 0x33, 0x19, 0x62, 0x37,
+		0xf8, 0x8b, 0xd1, 0x0c, 0x21, 0xdf, 0x40, 0x19, 0x81, 0xea, 0xfb, 0x1c, 0xa7, 0xcc, 0x60, 0xfe,
+		0x63, 0x25, 0x8f, 0xf3, 0x73, 0x0f, 0x45, 0xe6, 0x6a, 0x18, 0xbf, 0xbe, 0xad, 0x92, 0x2a, 0x1e,
+		0x15, 0x65, 0x6f, 0xef, 0x92, 0xcd, 0x0e, 0x19, 0x3d, 0x42, 0xa8, 0xfc, 0x0d, 0x32, 0x58, 0xe0,
+		0x56, 0x9f, 0xd6, 0x9b, 0x8b, 0xec, 0xe0, 0x45, 0x4d, 0x7e, 0x73, 0x87, 0xff, 0x74, 0x92, 0x59,
+		0x60, 0x13, 0x93, 0xda, 0xec, 0xbf, 0xfa, 0x20, 0xb6, 0xe7, 0xdf, 0xc7, 0x10, 0xf5, 0x79, 0xb4,
+		0xd7, 0xac, 0xaf, 0x2b, 0x37, 0x52, 0x30, 0x1d, 0xbe, 0x0f, 0x60, 0x77, 0x3d, 0x03, 0x63, 0xa9,
+		0xae, 0xb1, 0xf3, 0xca, 0xca, 0xb4, 0x21, 0xd7, 0x6f, 0x2e, 0x5e, 0x9b, 0x68, 0x53, 0x80, 0xab,
+		0x30, 0x23, 0x0a, 0x72, 0x6b, 0xb1, 0xd8, 0x25, 0x5d, 0x3a, 0x62, 0x9b, 0x4f, 0x59, 0x3b, 0x79,
+		0xa8, 0x9e, 0x08, 0x6d, 0x37, 0xb0, 0xfc, 0x42, 0x51, 0x25, 0x86, 0xbd, 0x54, 0x5a, 0x95, 0x20,
+		0x6c, 0xac, 0xb9, 0x30, 0x1c, 0x03, 0xc9, 0x49, 0x38, 0x55, 0x31, 0x49, 0xed, 0xa9, 0x0e, 0xc3,
+		0x65, 0xb4, 0x68, 0x6b, 0x07, 0x4c, 0x0a, 0xf9, 0x21, 0x69, 0x7c, 0x9f, 0x28, 0x80, 0xe9, 0x49,
+		0x22, 0x7c, 0xec, 0x97, 0xf7, 0x70, 0xb4, 0xb8, 0x25, 0xe7, 0x80, 0x2c, 0x43, 0x24, 0x8a, 0x2e,
+		0xac, 0xa2, 0x84, 0x20, 0xe7, 0xf4, 0x6b, 0x86, 0x37, 0x05, 0xc7, 0x59, 0x04, 0x49, 0x2a, 0x99,
+		0x80, 0x46, 0x32, 0x19, 0xe6, 0x30, 0xce, 0xc0, 0xef, 0x6e, 0xec, 0xe5, 0x2f, 0x24, 0xc1, 0x78,
+		0x45, 0x02, 0xd3, 0x64, 0x99, 0xf5, 0xc7, 0xbc, 0x8f, 0x8c, 0x75, 0xb1, 0x0a, 0xc8, 0xc3, 0xbd,
+		0x5e, 0x7e, 0xbd, 0x0e, 0xdf, 0x4b, 0x96, 0x6a, 0xfd, 0x03, 0xdb, 0xd1, 0x31, 0x1e, 0x27, 0xf9,
+		0xe5, 0x83, 0x9a, 0xfc, 0x13, 0x4c, 0xd3, 0x04, 0xdb, 0xdb, 0x3f, 0x35, 0x93, 0x4e, 0x14, 0x6b,
+		0x00, 0x5c, 0xb6, 0x11, 0x50, 0xee, 0x61, 0x5c, 0x10, 0x5c, 0xd0, 0x90, 0x02, 0x2e, 0x12, 0xe0,
+		0x50, 0x44, 0xad, 0x75, 0xcd, 0x94, 0xcf, 0x92, 0xcb, 0xe3, 0xe8, 0x77, 0x4b, 0xd7, 0x1a, 0x7c,
+		0xdd, 0x6b, 0x49, 0x21, 0x7c, 0xe8, 0x2c, 0x25, 0x49, 0x86, 0x1e, 0x54, 0xae, 0xfc, 0x0e, 0x80,
+		0xb1, 0xd5, 0xa5, 0x23, 0xcf, 0xcc, 0x0e, 0x11, 0xe2, 0x7c, 0x3c, 0x25, 0x78, 0x64, 0x03, 0xa1,
+		0xdd, 0x9f, 0x74, 0x12, 0x7b, 0x21, 0xb5, 0x73, 0x15, 0x3c, 0xed, 0xad, 0x07, 0x62, 0x21, 0x79,
+		0xd4, 0x2f, 0x0d, 0x72, 0xe9, 0x7c, 0x6b, 0x96, 0x6e, 0xe5, 0x36, 0x4a, 0xd2, 0x38, 0xe1, 0xff,
+		0x6e, 0x26, 0xa4, 0xac, 0x83, 0x07, 0xe6, 0x67, 0x74, 0x6c, 0xec, 0x8b, 0x4b, 0x79, 0x33, 0x50,
+		0x2f, 0x8f, 0xa0, 0x8f, 0xfa, 0x38, 0x6a, 0xa2, 0x3a, 0x42, 0x85, 0x15, 0x90, 0xd0, 0xb3, 0x0d,
+		0x8a, 0xe4, 0x60, 0x03, 0xef, 0xf9, 0x65, 0x8a, 0x4e, 0x50, 0x8c, 0x65, 0xba, 0x61, 0x16, 0xc3,
+		0x93, 0xb7, 0x75, 0x21, 0x98, 0x25, 0x60, 0x6e, 0x3d, 0x68, 0xba, 0x7c, 0xe4, 0xf3, 0xd9, 0x9b,
+		0xfb, 0x7a, 0xed, 0x1f, 0xb3, 0x4b, 0x88, 0x74, 0x2c, 0xb8, 0x8c, 0x22, 0x95, 0xce, 0x90, 0xf1,
+		0xdb, 0x80, 0xa6, 0x39, 0xae, 0x82, 0xa1, 0xef, 0x75, 0xec, 0xfe, 0xf1, 0xe8, 0x04, 0xfd, 0x99,
+		0x1b, 0x5f, 0x45, 0x87, 0x4f, 0xfa, 0xa2, 0x3e, 0x3e, 0xb5, 0x01, 0x4b, 0x46, 0xeb, 0x13, 0x9a,
+		0xe4, 0x7d, 0x03, 0x87, 0xb1, 0x59, 0x91, 0x8e, 0x37, 0xd3, 0x16, 0xce, 0xef, 0x4b, 0xe9, 0x46,
+		0x8d, 0x2a, 0x50, 0x2f, 0x41, 0xd3, 0x7b, 0xcf, 0xf0, 0xb7, 0x8b, 0x65, 0x0f, 0xa3, 0x27, 0x10,
+		0xe9, 0xa9, 0xe9, 0x2c, 0xbe, 0xbb, 0x82, 0xe3, 0x7b, 0x0b, 0x81, 0x3e, 0xa4, 0x6a, 0x4f, 0x3b,
+		0xd5, 0x61, 0xf8, 0x47, 0x04, 0x99, 0x5b, 0xff, 0xf3, 0x14, 0x6e, 0x57, 0x5b, 0xbf, 0x1b, 0xb4,
+		0x3f, 0xf9, 0x31, 0xf6, 0x95, 0xd5, 0x10, 0xa9, 0x72, 0x28, 0x23, 0xa9, 0x6a, 0xa2, 0xcf, 0x7d,
+		0xe3, 0x18, 0x95, 0xda, 0xbc, 0x6f, 0xe9, 0xd8, 0xef, 0x49, 0x3f, 0xd3, 0xef, 0x1f, 0xe1, 0x50,
+		0xe8, 0x8a, 0xc0, 0xce, 0xcc, 0xb7, 0x5e, 0x0e, 0x8b, 0x95, 0x80, 0xfd, 0x58, 0x2a, 0x9b, 0xc8,
+		0xb4, 0x17, 0x04, 0x46, 0x74, 0xd4, 0x68, 0x91, 0x33, 0xc8, 0x31, 0x15, 0x84, 0x16, 0x35, 0x03,
+		0x64, 0x6d, 0xa9, 0x4e, 0x20, 0xeb, 0xa9, 0x3f, 0x21, 0x5e, 0x9b, 0x09, 0xc3, 0x45, 0xf8, 0x7c,
+		0x59, 0x62, 0x29, 0x9a, 0x5c, 0xcf, 0xb4, 0x27, 0x5e, 0x13, 0xea, 0xb3, 0xef, 0xd9, 0x01, 0x2a,
+		0x65, 0x5f, 0x14, 0xf4, 0xbf, 0x28, 0x89, 0x3d, 0xdd, 0x9d, 0x52, 0xbd, 0x9e, 0x5b, 0x3b, 0xd2,
+		0xc2, 0x81, 0x35, 0xb6, 0xac, 0xdd, 0x27, 0xc3, 0x7b, 0x01, 0x5a, 0x6d, 0x4c, 0x5e, 0x2c, 0x30,
+		0xcb, 0x3a, 0xfa, 0xc1, 0xd7, 0x31, 0x67, 0x3e, 0x08, 0x6a, 0xe8, 0x8c, 0x75, 0xac, 0x1a, 0x6a,
+		0x52, 0xf7, 0x51, 0xcd, 0x85, 0x3f, 0x3c, 0xa7, 0xea, 0xbc, 0xd7, 0x18, 0x9e, 0x27, 0x73, 0xe6,
+		0x2b, 0x58, 0xb6, 0xd2, 0x29, 0x68, 0xd5, 0x8f, 0x00, 0x4d, 0x55, 0xf6, 0x61, 0x5a, 0xcc, 0x51,
+		0xa6, 0x5e, 0x85, 0xcb, 0x0b, 0xfd, 0x06, 0xca, 0xf5, 0xbf, 0x0d, 0x13, 0x74, 0x78, 0x6d, 0x9e,
+		0x20, 0x11, 0x84, 0x3e, 0x78, 0x17, 0x04, 0x4f, 0x64, 0x2c, 0x3b, 0x3e, 0x93, 0x7b, 0x58, 0x33,
+		0x07, 0x52, 0xf7, 0x60, 0x6a, 0xa8, 0x3b, 0x19, 0x27, 0x7a, 0x93, 0xc5, 0x53, 0xad, 0xec, 0xf6,
+		0xc8, 0x94, 0xee, 0x92, 0xea, 0xee, 0x7e, 0xea, 0xb9, 0x5f, 0xac, 0x59, 0x5d, 0x2e, 0x78, 0x53,
+		0x72, 0x81, 0x92, 0xdd, 0x1c, 0x63, 0xbe, 0x02, 0xeb, 0xa8, 0x1b, 0x2a, 0x6e, 0x72, 0xe3, 0x2d,
+		0x84, 0x0d, 0x8a, 0x22, 0xf6, 0xba, 0xab, 0x04, 0x8e, 0x04, 0x24, 0xdb, 0xcc, 0xe2, 0x69, 0xeb,
+		0x4e, 0xfa, 0x6b, 0x5b, 0xc8, 0xc0, 0xd9, 0x25, 0xcb, 0x40, 0x8d, 0x4b, 0x8e, 0xa0, 0xd4, 0x72,
+		0x98, 0x36, 0x46, 0x3b, 0x4f, 0x5f, 0x96, 0x84, 0x03, 0x28, 0x86, 0x4d, 0xa1, 0x8a, 0xd7, 0xb2,
+		0x5b, 0x27, 0x01, 0x80, 0x62, 0x49, 0x56, 0xb9, 0xa0, 0xa1, 0xe3, 0x6e, 0x22, 0x2a, 0x5d, 0x03,
+		0x86, 0x40, 0x36, 0x22, 0x5e, 0xd2, 0xe5, 0xc0, 0x6b, 0xfa, 0xac, 0x80, 0x4e, 0x09, 0x99, 0xbc,
+		0x2f, 0x9b, 0xcc, 0xf3, 0x4e, 0xf7, 0x99, 0x98, 0x11, 0x6e, 0x6f, 0x62, 0x22, 0x6b, 0x92, 0x95,
+		0x3b, 0xc3, 0xd2, 0x8e, 0x0f, 0x07, 0xc2, 0x51, 0x5c, 0x4d, 0xb2, 0x6e, 0xc0, 0x27, 0x73, 0xcd,
+		0x57, 0xb7, 0xf0, 0xe9, 0x2e, 0xc8, 0xe2, 0x0c, 0xd1, 0xb5, 0x0f, 0xff, 0xf9, 0xec, 0x38, 0xba,
+		0x97, 0xd6, 0x94, 0x9b, 0xd1, 0x79, 0xb6, 0x6a, 0x01, 0x17, 0xe4, 0x7e, 0xa6, 0xd5, 0x86, 0x19,
+		0xae, 0xf3, 0xf0, 0x62, 0x73, 0xc0, 0xf0, 0x0a, 0x7a, 0x96, 0x93, 0x72, 0x89, 0x7e, 0x25, 0x57,
+		0xf8, 0xf7, 0xd5, 0x1e, 0xe5, 0xac, 0xd6, 0x38, 0x4f, 0xe8, 0x81, 0xd1, 0x53, 0x41, 0x07, 0x2d,
+		0x58, 0x34, 0x1c, 0xef, 0x74, 0x2e, 0x61, 0xca, 0xd3, 0xeb, 0xd6, 0x93, 0x0a, 0xf2, 0xf2, 0x86,
+		0x9c, 0xe3, 0x7a, 0x52, 0xf5, 0x42, 0xf1, 0x8b, 0x10, 0xf2, 0x25, 0x68, 0x7e, 0x61, 0xb1, 0x19,
+		0xcf, 0x8f, 0x5a, 0x53, 0xb7, 0x68, 0x4f, 0x1a, 0x71, 0xe9, 0x83, 0x91, 0x3a, 0x78, 0x0f, 0xf7,
+		0xd4, 0x74, 0xf5, 0x06, 0xd2, 0x88, 0xb0, 0x06, 0xe5, 0xc0, 0xfb, 0xb3, 0x91, 0xad, 0xc0, 0x84,
+		0x31, 0xf2, 0x3a, 0xcf, 0x63, 0xe6, 0x4a, 0xd3, 0x78, 0xbe, 0xde, 0x73, 0x3e, 0x02, 0x8e, 0xb8,
+		0x3a, 0xf6, 0x55, 0xa7, 0xf8, 0x5a, 0xb5, 0x0e, 0x0c, 0xc5, 0xe5, 0x66, 0xd5, 0xd2, 0x18, 0xf3,
+		0xef, 0xa5, 0xc9, 0x68, 0x69, 0xe0, 0xcd, 0x00, 0x33, 0x99, 0x6e, 0xea, 0xcb, 0x06, 0x7a, 0xe1,
+		0xe1, 0x19, 0x0b, 0xe7, 0x08, 0xcd, 0x09, 0x1b, 0x85, 0xec, 0xc4, 0xd4, 0x75, 0xf0, 0xd6, 0xfb,
+		0x84, 0x95, 0x07, 0x44, 0xca, 0xa5, 0x2a, 0x6c, 0xc2, 0x00, 0x58, 0x08, 0x87, 0x9e, 0x0a, 0xd4,
+		0x06, 0xe2, 0x91, 0x5f, 0xb7, 0x1b, 0x11, 0xfa, 0x85, 0xfc, 0x7c, 0xf2, 0x0f, 0x6e, 0x3c, 0x8a,
+		0xe1, 0x0f, 0xa0, 0x33, 0x84, 0xce, 0x81, 0x4d, 0x32, 0x4d, 0xeb, 0x41, 0xcf, 0x5a, 0x05, 0x60,
+		0x47, 0x6c, 0x2a, 0xc4, 0x17, 0xd5, 0x16, 0x3a, 0xe4, 0xe7, 0xab, 0x84, 0x94, 0x22, 0xff, 0x56,
+		0xb0, 0x0c, 0x92, 0x6c, 0x19, 0x11, 0x4c, 0xb3, 0xed, 0x58, 0x48, 0x84, 0x2a, 0xe2, 0x19, 0x2a,
+		0xe1, 0xc0, 0x56, 0x82, 0x3c, 0x83, 0xb4, 0x58, 0x2d, 0xf0, 0xb5, 0x1e, 0x76, 0x85, 0x51, 0xc2,
+		0xe4, 0x95, 0x27, 0x96, 0xd1, 0x90, 0xc3, 0x17, 0x75, 0xa1, 0xbb, 0x46, 0x5f, 0xa6, 0xf2, 0xef,
+		0x71, 0x56, 0x92, 0xc5, 0x8a, 0x85, 0x52, 0xe4, 0x63, 0x21, 0x6f, 0x55, 0x85, 0x2b, 0x6b, 0x0d,
+		0xc9, 0x92, 0x77, 0x67, 0xe3, 0xff, 0x2a, 0x2b, 0x90, 0x01, 0x3d, 0x74, 0x63, 0x04, 0x61, 0x3c,
+		0x8e, 0xf8, 0xfc, 0x04, 0xdd, 0x21, 0x85, 0x92, 0x1e, 0x4d, 0x51, 0x8d, 0xb5, 0x6b, 0xf1, 0xda,
+		0x96, 0xf5, 0x8e, 0x3c, 0x38, 0x5a, 0xac, 0x9b, 0xba, 0x0c, 0x84, 0x5d, 0x50, 0x12, 0xc7, 0xc5,
+		0x7a, 0xcb, 0xb1, 0xfa, 0x16, 0x93, 0xdf, 0x98, 0xda, 0x3f, 0x49, 0xa3, 0x94, 0x78, 0x70, 0xc7,
+		0x0b, 0xb6, 0x91, 0xa6, 0x16, 0x2e, 0xcf, 0xfd, 0x51, 0x6a, 0x5b, 0xad, 0x7a, 0xdd, 0xa9, 0x48,
+		0x48, 0xac, 0xd6, 0x45, 0xbc, 0x23, 0x31, 0x1d, 0x86, 0x54, 0x8a, 0x7f, 0x04, 0x97, 0x71, 0x9e,
+		0xbc, 0x2e, 0x6b, 0xd9, 0x33, 0xc8, 0x20, 0xc9, 0xe0, 0x25, 0x86, 0x59, 0x15, 0xcf, 0x63, 0xe5,
+		0x99, 0xf1, 0x24, 0xf1, 0xba, 0xc4, 0x15, 0x02, 0xe2, 0xdb, 0xfe, 0x4a, 0xf8, 0x3b, 0x91, 0x13,
+		0x8d, 0x03, 0x81, 0x9f, 0xb3, 0x3f, 0x04, 0x03, 0x58, 0xc0, 0xef, 0x27, 0x82, 0x14, 0xd2, 0x7f,
+		0x93, 0x70, 0xb7, 0xb2, 0x02, 0x21, 0xb3, 0x07, 0x7f, 0x1c, 0xef, 0x88, 0xee, 0x29, 0x7a, 0x0b,
+		0x3d, 0x75, 0x5a, 0x93, 0xfe, 0x7f, 0x14, 0xf7, 0x4e, 0x4b, 0x7f, 0x21, 0x02, 0xad, 0xf9, 0x43,
+		0x29, 0x1a, 0xe8, 0x1b, 0xf5, 0x32, 0xb2, 0x96, 0xe6, 0xe8, 0x96, 0x20, 0x9b, 0x96, 0x8e, 0x7b,
+		0xfe, 0xd8, 0xc9, 0x9c, 0x65, 0x16, 0xd6, 0x68, 0x95, 0xf8, 0x22, 0xe2, 0xae, 0x84, 0x03, 0xfd,
+		0x87, 0xa2, 0x72, 0x79, 0x74, 0x95, 0xfa, 0xe1, 0xfe, 0xd0, 0x4e, 0x3d, 0x39, 0x2e, 0x67, 0x55,
+		0x71, 0x6c, 0x89, 0x33, 0x49, 0x0c, 0x1b, 0x46, 0x92, 0x31, 0x6f, 0xa6, 0xf0, 0x09, 0xbd, 0x2d,
+		0xe2, 0xca, 0xda, 0x18, 0x33, 0xce, 0x67, 0x37, 0xfd, 0x6f, 0xcb, 0x9d, 0xbd, 0x42, 0xbc, 0xb2,
+		0x9c, 0x28, 0xcd, 0x65, 0x3c, 0x61, 0xbc, 0xde, 0x9d, 0xe1, 0x2a, 0x3e, 0xbf, 0xee, 0x3c, 0xcb,
+		0xb1, 0x50, 0xa9, 0x2c, 0xbe, 0xb5, 0x43, 0xd0, 0xec, 0x29, 0xf9, 0x16, 0x6f, 0x31, 0xd9, 0x9b,
+		0x92, 0xb1, 0x32, 0xae, 0x0f, 0xb6, 0x9d, 0x0e, 0x25, 0x7f, 0x89, 0x1f, 0x1d, 0x01, 0x68, 0xab,
+		0x3d, 0xd1, 0x74, 0x5b, 0x4c, 0x38, 0x7f, 0x3d, 0x33, 0xa5, 0xa2, 0x9f, 0xda, 0x84, 0xa5, 0x82,
+		0x2d, 0x16, 0x66, 0x46, 0x08, 0x30, 0x14, 0x48, 0x5e, 0xca, 0xe3, 0xf4, 0x8c, 0xcb, 0x32, 0xc6,
+		0xf1, 0x43, 0x62, 0xc6, 0xef, 0x16, 0xfa, 0x43, 0xae, 0x9c, 0x53, 0xe3, 0x49, 0x45, 0x80, 0xfd,
+		0x1d, 0x8c, 0xa9, 0x6d, 0x77, 0x76, 0xaa, 0x40, 0xc4, 0x4e, 0x7b, 0x78, 0x6b, 0xe0, 0x1d, 0xce,
+		0x56, 0x3d, 0xf0, 0x11, 0xfe, 0x4f, 0x6a, 0x6d, 0x0f, 0x4f, 0x90, 0x38, 0x92, 0x17, 0xfa, 0x56,
+		0x12, 0xa6, 0xa1, 0x0a, 0xea, 0x2f, 0x50, 0xf9, 0x60, 0x66, 0x6c, 0x7d, 0x5a, 0x08, 0x8e, 0x3c,
+		0xf3, 0xf0, 0x33, 0x02, 0x11, 0x02, 0xfe, 0x4c, 0x56, 0x2b, 0x9f, 0x0c, 0xbd, 0x65, 0x8a, 0x83,
+		0xde, 0x7c, 0x05, 0x26, 0x93, 0x19, 0xcc, 0xf3, 0x71, 0x0e, 0xad, 0x2f, 0xb3, 0xc9, 0x38, 0x50,
+		0x64, 0xd5, 0x4c, 0x60, 0x5f, 0x02, 0x13, 0x34, 0xc9, 0x75, 0xc4, 0x60, 0xab, 0x2e, 0x17, 0x7d
+};
+
+static const uint8_t AES_CBC_ciphertext_2048B[] = {
+		0x8b, 0x55, 0xbd, 0xfd, 0x2b, 0x35, 0x76, 0x5c, 0xd1, 0x90, 0xd7, 0x6a, 0x63, 0x1e, 0x39, 0x71,
+		0x0d, 0x5c, 0xd8, 0x03, 0x00, 0x75, 0xf1, 0x07, 0x03, 0x8d, 0x76, 0xeb, 0x3b, 0x00, 0x1e, 0x33,
+		0x88, 0xfc, 0x8f, 0x08, 0x4d, 0x33, 0xf1, 0x3c, 0xee, 0xd0, 0x5d, 0x19, 0x8b, 0x3c, 0x50, 0x86,
+		0xfd, 0x8d, 0x58, 0x21, 0xb4, 0xae, 0x0f, 0x81, 0xe9, 0x9f, 0xc9, 0xc0, 0x90, 0xf7, 0x04, 0x6f,
+		0x39, 0x1d, 0x8a, 0x3f, 0x8d, 0x32, 0x23, 0xb5, 0x1f, 0xcc, 0x8a, 0x12, 0x2d, 0x46, 0x82, 0x5e,
+		0x6a, 0x34, 0x8c, 0xb1, 0x93, 0x70, 0x3b, 0xde, 0x55, 0xaf, 0x16, 0x35, 0x99, 0x84, 0xd5, 0x88,
+		0xc9, 0x54, 0xb1, 0xb2, 0xd3, 0xeb, 0x9e, 0x55, 0x9a, 0xa9, 0xa7, 0xf5, 0xda, 0x29, 0xcf, 0xe1,
+		0x98, 0x64, 0x45, 0x77, 0xf2, 0x12, 0x69, 0x8f, 0x78, 0xd8, 0x82, 0x41, 0xb2, 0x9f, 0xe2, 0x1c,
+		0x63, 0x9b, 0x24, 0x81, 0x67, 0x95, 0xa2, 0xff, 0x26, 0x9d, 0x65, 0x48, 0x61, 0x30, 0x66, 0x41,
+		0x68, 0x84, 0xbb, 0x59, 0x14, 0x8e, 0x9a, 0x62, 0xb6, 0xca, 0xda, 0xbe, 0x7c, 0x41, 0x52, 0x6e,
+		0x1b, 0x86, 0xbf, 0x08, 0xeb, 0x37, 0x84, 0x60, 0xe4, 0xc4, 0x1e, 0xa8, 0x4c, 0x84, 0x60, 0x2f,
+		0x70, 0x90, 0xf2, 0x26, 0xe7, 0x65, 0x0c, 0xc4, 0x58, 0x36, 0x8e, 0x4d, 0xdf, 0xff, 0x9a, 0x39,
+		0x93, 0x01, 0xcf, 0x6f, 0x6d, 0xde, 0xef, 0x79, 0xb0, 0xce, 0xe2, 0x98, 0xdb, 0x85, 0x8d, 0x62,
+		0x9d, 0xb9, 0x63, 0xfd, 0xf0, 0x35, 0xb5, 0xa9, 0x1b, 0xf9, 0xe5, 0xd4, 0x2e, 0x22, 0x2d, 0xcc,
+		0x42, 0xbf, 0x0e, 0x51, 0xf7, 0x15, 0x07, 0x32, 0x75, 0x5b, 0x74, 0xbb, 0x00, 0xef, 0xd4, 0x66,
+		0x8b, 0xad, 0x71, 0x53, 0x94, 0xd7, 0x7d, 0x2c, 0x40, 0x3e, 0x69, 0xa0, 0x4c, 0x86, 0x5e, 0x06,
+		0xed, 0xdf, 0x22, 0xe2, 0x24, 0x25, 0x4e, 0x9b, 0x5f, 0x49, 0x74, 0xba, 0xed, 0xb1, 0xa6, 0xeb,
+		0xae, 0x3f, 0xc6, 0x9e, 0x0b, 0x29, 0x28, 0x9a, 0xb6, 0xb2, 0x74, 0x58, 0xec, 0xa6, 0x4a, 0xed,
+		0xe5, 0x10, 0x00, 0x85, 0xe1, 0x63, 0x41, 0x61, 0x30, 0x7c, 0x97, 0xcf, 0x75, 0xcf, 0xb6, 0xf3,
+		0xf7, 0xda, 0x35, 0x3f, 0x85, 0x8c, 0x64, 0xca, 0xb7, 0xea, 0x7f, 0xe4, 0xa3, 0x4d, 0x30, 0x84,
+		0x8c, 0x9c, 0x80, 0x5a, 0x50, 0xa5, 0x64, 0xae, 0x26, 0xd3, 0xb5, 0x01, 0x73, 0x36, 0x8a, 0x92,
+		0x49, 0xc4, 0x1a, 0x94, 0x81, 0x9d, 0xf5, 0x6c, 0x50, 0xe1, 0x58, 0x0b, 0x75, 0xdd, 0x6b, 0x6a,
+		0xca, 0x69, 0xea, 0xc3, 0x33, 0x90, 0x9f, 0x3b, 0x65, 0x5d, 0x5e, 0xee, 0x31, 0xb7, 0x32, 0xfd,
+		0x56, 0x83, 0xb6, 0xfb, 0xa8, 0x04, 0xfc, 0x1e, 0x11, 0xfb, 0x02, 0x23, 0x53, 0x49, 0x45, 0xb1,
+		0x07, 0xfc, 0xba, 0xe7, 0x5f, 0x5d, 0x2d, 0x7f, 0x9e, 0x46, 0xba, 0xe9, 0xb0, 0xdb, 0x32, 0x04,
+		0xa4, 0xa7, 0x98, 0xab, 0x91, 0xcd, 0x02, 0x05, 0xf5, 0x74, 0x31, 0x98, 0x83, 0x3d, 0x33, 0x11,
+		0x0e, 0xe3, 0x8d, 0xa8, 0xc9, 0x0e, 0xf3, 0xb9, 0x47, 0x67, 0xe9, 0x79, 0x2b, 0x34, 0xcd, 0x9b,
+		0x45, 0x75, 0x29, 0xf0, 0xbf, 0xcc, 0xda, 0x3a, 0x91, 0xb2, 0x15, 0x27, 0x7a, 0xe5, 0xf5, 0x6a,
+		0x5e, 0xbe, 0x2c, 0x98, 0xe8, 0x40, 0x96, 0x4f, 0x8a, 0x09, 0xfd, 0xf6, 0xb2, 0xe7, 0x45, 0xb6,
+		0x08, 0xc1, 0x69, 0xe1, 0xb3, 0xc4, 0x24, 0x34, 0x07, 0x85, 0xd5, 0xa9, 0x78, 0xca, 0xfa, 0x4b,
+		0x01, 0x19, 0x4d, 0x95, 0xdc, 0xa5, 0xc1, 0x9c, 0xec, 0x27, 0x5b, 0xa6, 0x54, 0x25, 0xbd, 0xc8,
+		0x0a, 0xb7, 0x11, 0xfb, 0x4e, 0xeb, 0x65, 0x2e, 0xe1, 0x08, 0x9c, 0x3a, 0x45, 0x44, 0x33, 0xef,
+		0x0d, 0xb9, 0xff, 0x3e, 0x68, 0x9c, 0x61, 0x2b, 0x11, 0xb8, 0x5c, 0x47, 0x0f, 0x94, 0xf2, 0xf8,
+		0x0b, 0xbb, 0x99, 0x18, 0x85, 0xa3, 0xba, 0x44, 0xf3, 0x79, 0xb3, 0x63, 0x2c, 0x1f, 0x2a, 0x35,
+		0x3b, 0x23, 0x98, 0xab, 0xf4, 0x16, 0x36, 0xf8, 0xde, 0x86, 0xa4, 0xd4, 0x75, 0xff, 0x51, 0xf9,
+		0xeb, 0x42, 0x5f, 0x55, 0xe2, 0xbe, 0xd1, 0x5b, 0xb5, 0x38, 0xeb, 0xb4, 0x4d, 0xec, 0xec, 0x99,
+		0xe1, 0x39, 0x43, 0xaa, 0x64, 0xf7, 0xc9, 0xd8, 0xf2, 0x9a, 0x71, 0x43, 0x39, 0x17, 0xe8, 0xa8,
+		0xa2, 0xe2, 0xa4, 0x2c, 0x18, 0x11, 0x49, 0xdf, 0x18, 0xdd, 0x85, 0x6e, 0x65, 0x96, 0xe2, 0xba,
+		0xa1, 0x0a, 0x2c, 0xca, 0xdc, 0x5f, 0xe4, 0xf4, 0x35, 0x03, 0xb2, 0xa9, 0xda, 0xcf, 0xb7, 0x6d,
+		0x65, 0x82, 0x82, 0x67, 0x9d, 0x0e, 0xf3, 0xe8, 0x85, 0x6c, 0x69, 0xb8, 0x4c, 0xa6, 0xc6, 0x2e,
+		0x40, 0xb5, 0x54, 0x28, 0x95, 0xe4, 0x57, 0xe0, 0x5b, 0xf8, 0xde, 0x59, 0xe0, 0xfd, 0x89, 0x48,
+		0xac, 0x56, 0x13, 0x54, 0xb9, 0x1b, 0xf5, 0x59, 0x97, 0xb6, 0xb3, 0xe8, 0xac, 0x2d, 0xfc, 0xd2,
+		0xea, 0x57, 0x96, 0x57, 0xa8, 0x26, 0x97, 0x2c, 0x01, 0x89, 0x56, 0xea, 0xec, 0x8c, 0x53, 0xd5,
+		0xd7, 0x9e, 0xc9, 0x98, 0x0b, 0xad, 0x03, 0x75, 0xa0, 0x6e, 0x98, 0x8b, 0x97, 0x8d, 0x8d, 0x85,
+		0x7d, 0x74, 0xa7, 0x2d, 0xde, 0x67, 0x0c, 0xcd, 0x54, 0xb8, 0x15, 0x7b, 0xeb, 0xf5, 0x84, 0xb9,
+		0x78, 0xab, 0xd8, 0x68, 0x91, 0x1f, 0x6a, 0xa6, 0x28, 0x22, 0xf7, 0x00, 0x49, 0x00, 0xbe, 0x41,
+		0x71, 0x0a, 0xf5, 0xe7, 0x9f, 0xb4, 0x11, 0x41, 0x3f, 0xcd, 0xa9, 0xa9, 0x01, 0x8b, 0x6a, 0xeb,
+		0x54, 0x4c, 0x58, 0x92, 0x68, 0x02, 0x0e, 0xe9, 0xed, 0x65, 0x4c, 0xfb, 0x95, 0x48, 0x58, 0xa2,
+		0xaa, 0x57, 0x69, 0x13, 0x82, 0x0c, 0x2c, 0x4b, 0x5d, 0x4e, 0x18, 0x30, 0xef, 0x1c, 0xb1, 0x9d,
+		0x05, 0x05, 0x02, 0x1c, 0x97, 0xc9, 0x48, 0xfe, 0x5e, 0x7b, 0x77, 0xa3, 0x1f, 0x2a, 0x81, 0x42,
+		0xf0, 0x4b, 0x85, 0x12, 0x9c, 0x1f, 0x44, 0xb1, 0x14, 0x91, 0x92, 0x65, 0x77, 0xb1, 0x87, 0xa2,
+		0xfc, 0xa4, 0xe7, 0xd2, 0x9b, 0xf2, 0x17, 0xf0, 0x30, 0x1c, 0x8d, 0x33, 0xbc, 0x25, 0x28, 0x48,
+		0xfd, 0x30, 0x79, 0x0a, 0x99, 0x3e, 0xb4, 0x0f, 0x1e, 0xa6, 0x68, 0x76, 0x19, 0x76, 0x29, 0xac,
+		0x5d, 0xb8, 0x1e, 0x42, 0xd6, 0x85, 0x04, 0xbf, 0x64, 0x1c, 0x2d, 0x53, 0xe9, 0x92, 0x78, 0xf8,
+		0xc3, 0xda, 0x96, 0x92, 0x10, 0x6f, 0x45, 0x85, 0xaf, 0x5e, 0xcc, 0xa8, 0xc0, 0xc6, 0x2e, 0x73,
+		0x51, 0x3f, 0x5e, 0xd7, 0x52, 0x33, 0x71, 0x12, 0x6d, 0x85, 0xee, 0xea, 0x85, 0xa8, 0x48, 0x2b,
+		0x40, 0x64, 0x6d, 0x28, 0x73, 0x16, 0xd7, 0x82, 0xd9, 0x90, 0xed, 0x1f, 0xa7, 0x5c, 0xb1, 0x5c,
+		0x27, 0xb9, 0x67, 0x8b, 0xb4, 0x17, 0x13, 0x83, 0x5f, 0x09, 0x72, 0x0a, 0xd7, 0xa0, 0xec, 0x81,
+		0x59, 0x19, 0xb9, 0xa6, 0x5a, 0x37, 0x34, 0x14, 0x47, 0xf6, 0xe7, 0x6c, 0xd2, 0x09, 0x10, 0xe7,
+		0xdd, 0xbb, 0x02, 0xd1, 0x28, 0xfa, 0x01, 0x2c, 0x93, 0x64, 0x2e, 0x1b, 0x4c, 0x02, 0x52, 0xcb,
+		0x07, 0xa1, 0xb6, 0x46, 0x02, 0x80, 0xd9, 0x8f, 0x5c, 0x62, 0xbe, 0x78, 0x9e, 0x75, 0xc4, 0x97,
+		0x91, 0x39, 0x12, 0x65, 0xb9, 0x3b, 0xc2, 0xd1, 0xaf, 0xf2, 0x1f, 0x4e, 0x4d, 0xd1, 0xf0, 0x9f,
+		0xb7, 0x12, 0xfd, 0xe8, 0x75, 0x18, 0xc0, 0x9d, 0x8c, 0x70, 0xff, 0x77, 0x05, 0xb6, 0x1a, 0x1f,
+		0x96, 0x48, 0xf6, 0xfe, 0xd5, 0x5d, 0x98, 0xa5, 0x72, 0x1c, 0x84, 0x76, 0x3e, 0xb8, 0x87, 0x37,
+		0xdd, 0xd4, 0x3a, 0x45, 0xdd, 0x09, 0xd8, 0xe7, 0x09, 0x2f, 0x3e, 0x33, 0x9e, 0x7b, 0x8c, 0xe4,
+		0x85, 0x12, 0x4e, 0xf8, 0x06, 0xb7, 0xb1, 0x85, 0x24, 0x96, 0xd8, 0xfe, 0x87, 0x92, 0x81, 0xb1,
+		0xa3, 0x38, 0xb9, 0x56, 0xe1, 0xf6, 0x36, 0x41, 0xbb, 0xd6, 0x56, 0x69, 0x94, 0x57, 0xb3, 0xa4,
+		0xca, 0xa4, 0xe1, 0x02, 0x3b, 0x96, 0x71, 0xe0, 0xb2, 0x2f, 0x85, 0x48, 0x1b, 0x4a, 0x41, 0x80,
+		0x4b, 0x9c, 0xe0, 0xc9, 0x39, 0xb8, 0xb1, 0xca, 0x64, 0x77, 0x46, 0x58, 0xe6, 0x84, 0xd5, 0x2b,
+		0x65, 0xce, 0xe9, 0x09, 0xa3, 0xaa, 0xfb, 0x83, 0xa9, 0x28, 0x68, 0xfd, 0xcd, 0xfd, 0x76, 0x83,
+		0xe1, 0x20, 0x22, 0x77, 0x3a, 0xa3, 0xb2, 0x93, 0x14, 0x91, 0xfc, 0xe2, 0x17, 0x63, 0x2b, 0xa6,
+		0x29, 0x38, 0x7b, 0x9b, 0x8b, 0x15, 0x77, 0xd6, 0xaa, 0x92, 0x51, 0x53, 0x50, 0xff, 0xa0, 0x35,
+		0xa0, 0x59, 0x7d, 0xf0, 0x11, 0x23, 0x49, 0xdf, 0x5a, 0x21, 0xc2, 0xfe, 0x35, 0xa0, 0x1d, 0xe2,
+		0xae, 0xa2, 0x8a, 0x61, 0x5b, 0xf7, 0xf1, 0x1c, 0x1c, 0xec, 0xc4, 0xf6, 0xdc, 0xaa, 0xc8, 0xc2,
+		0xe5, 0xa1, 0x2e, 0x14, 0xe5, 0xc6, 0xc9, 0x73, 0x03, 0x78, 0xeb, 0xed, 0xe0, 0x3e, 0xc5, 0xf4,
+		0xf1, 0x50, 0xb2, 0x01, 0x91, 0x96, 0xf5, 0xbb, 0xe1, 0x32, 0xcd, 0xa8, 0x66, 0xbf, 0x73, 0x85,
+		0x94, 0xd6, 0x7e, 0x68, 0xc5, 0xe4, 0xed, 0xd5, 0xe3, 0x67, 0x4c, 0xa5, 0xb3, 0x1f, 0xdf, 0xf8,
+		0xb3, 0x73, 0x5a, 0xac, 0xeb, 0x46, 0x16, 0x24, 0xab, 0xca, 0xa4, 0xdd, 0x87, 0x0e, 0x24, 0x83,
+		0x32, 0x04, 0x4c, 0xd8, 0xda, 0x7d, 0xdc, 0xe3, 0x01, 0x93, 0xf3, 0xc1, 0x5b, 0xbd, 0xc3, 0x1d,
+		0x40, 0x62, 0xde, 0x94, 0x03, 0x85, 0x91, 0x2a, 0xa0, 0x25, 0x10, 0xd3, 0x32, 0x9f, 0x93, 0x00,
+		0xa7, 0x8a, 0xfa, 0x77, 0x7c, 0xaf, 0x4d, 0xc8, 0x7a, 0xf3, 0x16, 0x2b, 0xba, 0xeb, 0x74, 0x51,
+		0xb8, 0xdd, 0x32, 0xad, 0x68, 0x7d, 0xdd, 0xca, 0x60, 0x98, 0xc9, 0x9b, 0xb6, 0x5d, 0x4d, 0x3a,
+		0x66, 0x8a, 0xbe, 0x05, 0xf9, 0x0c, 0xc5, 0xba, 0x52, 0x82, 0x09, 0x1f, 0x5a, 0x66, 0x89, 0x69,
+		0xa3, 0x5d, 0x93, 0x50, 0x7d, 0x44, 0xc3, 0x2a, 0xb8, 0xab, 0xec, 0xa6, 0x5a, 0xae, 0x4a, 0x6a,
+		0xcd, 0xfd, 0xb6, 0xff, 0x3d, 0x98, 0x05, 0xd9, 0x5b, 0x29, 0xc4, 0x6f, 0xe0, 0x76, 0xe2, 0x3f,
+		0xec, 0xd7, 0xa4, 0x91, 0x63, 0xf5, 0x4e, 0x4b, 0xab, 0x20, 0x8c, 0x3a, 0x41, 0xed, 0x8b, 0x4b,
+		0xb9, 0x01, 0x21, 0xc0, 0x6d, 0xfd, 0x70, 0x5b, 0x20, 0x92, 0x41, 0x89, 0x74, 0xb7, 0xe9, 0x8b,
+		0xfc, 0x6d, 0x17, 0x3f, 0x7f, 0x89, 0x3d, 0x6b, 0x8f, 0xbc, 0xd2, 0x57, 0xe9, 0xc9, 0x6e, 0xa7,
+		0x19, 0x26, 0x18, 0xad, 0xef, 0xb5, 0x87, 0xbf, 0xb8, 0xa8, 0xd6, 0x7d, 0xdd, 0x5f, 0x94, 0x54,
+		0x09, 0x92, 0x2b, 0xf5, 0x04, 0xf7, 0x36, 0x69, 0x8e, 0xf4, 0xdc, 0x1d, 0x6e, 0x55, 0xbb, 0xe9,
+		0x13, 0x05, 0x83, 0x35, 0x9c, 0xed, 0xcf, 0x8c, 0x26, 0x8c, 0x7b, 0xc7, 0x0b, 0xba, 0xfd, 0xe2,
+		0x84, 0x5c, 0x2a, 0x79, 0x43, 0x99, 0xb2, 0xc3, 0x82, 0x87, 0xc8, 0xcd, 0x37, 0x6d, 0xa1, 0x2b,
+		0x39, 0xb2, 0x38, 0x99, 0xd9, 0xfc, 0x02, 0x15, 0x55, 0x21, 0x62, 0x59, 0xeb, 0x00, 0x86, 0x08,
+		0x20, 0xbe, 0x1a, 0x62, 0x4d, 0x7e, 0xdf, 0x68, 0x73, 0x5b, 0x5f, 0xaf, 0x84, 0x96, 0x2e, 0x1f,
+		0x6b, 0x03, 0xc9, 0xa6, 0x75, 0x18, 0xe9, 0xd4, 0xbd, 0xc8, 0xec, 0x9a, 0x5a, 0xb3, 0x99, 0xab,
+		0x5f, 0x7c, 0x08, 0x7f, 0x69, 0x4d, 0x52, 0xa2, 0x30, 0x17, 0x3b, 0x16, 0x15, 0x1b, 0x11, 0x62,
+		0x3e, 0x80, 0x4b, 0x85, 0x7c, 0x9c, 0xd1, 0x3a, 0x13, 0x01, 0x5e, 0x45, 0xf1, 0xc8, 0x5f, 0xcd,
+		0x0e, 0x21, 0xf5, 0x82, 0xd4, 0x7b, 0x5c, 0x45, 0x27, 0x6b, 0xef, 0xfe, 0xb8, 0xc0, 0x6f, 0xdc,
+		0x60, 0x7b, 0xe4, 0xd5, 0x75, 0x71, 0xe6, 0xe8, 0x7d, 0x6b, 0x6d, 0x80, 0xaf, 0x76, 0x41, 0x58,
+		0xb7, 0xac, 0xb7, 0x13, 0x2f, 0x81, 0xcc, 0xf9, 0x19, 0x97, 0xe8, 0xee, 0x40, 0x91, 0xfc, 0x89,
+		0x13, 0x1e, 0x67, 0x9a, 0xdb, 0x8f, 0x8f, 0xc7, 0x4a, 0xc9, 0xaf, 0x2f, 0x67, 0x01, 0x3c, 0xb8,
+		0xa8, 0x3e, 0x78, 0x93, 0x1b, 0xdf, 0xbb, 0x34, 0x0b, 0x1a, 0xfa, 0xc2, 0x2d, 0xc5, 0x1c, 0xec,
+		0x97, 0x4f, 0x48, 0x41, 0x15, 0x0e, 0x75, 0xed, 0x66, 0x8c, 0x17, 0x7f, 0xb1, 0x48, 0x13, 0xc1,
+		0xfb, 0x60, 0x06, 0xf9, 0x72, 0x41, 0x3e, 0xcf, 0x6e, 0xb6, 0xc8, 0xeb, 0x4b, 0x5a, 0xd2, 0x0c,
+		0x28, 0xda, 0x02, 0x7a, 0x46, 0x21, 0x42, 0xb5, 0x34, 0xda, 0xcb, 0x5e, 0xbd, 0x66, 0x5c, 0xca,
+		0xff, 0x52, 0x43, 0x89, 0xf9, 0x10, 0x9a, 0x9e, 0x9b, 0xe3, 0xb0, 0x51, 0xe9, 0xf3, 0x0a, 0x35,
+		0x77, 0x54, 0xcc, 0xac, 0xa6, 0xf1, 0x2e, 0x36, 0x89, 0xac, 0xc5, 0xc6, 0x62, 0x5a, 0xc0, 0x6d,
+		0xc4, 0xe1, 0xf7, 0x64, 0x30, 0xff, 0x11, 0x40, 0x13, 0x89, 0xd8, 0xd7, 0x73, 0x3f, 0x93, 0x08,
+		0x68, 0xab, 0x66, 0x09, 0x1a, 0xea, 0x78, 0xc9, 0x52, 0xf2, 0xfd, 0x93, 0x1b, 0x94, 0xbe, 0x5c,
+		0xe5, 0x00, 0x6e, 0x00, 0xb9, 0xea, 0x27, 0xaa, 0xb3, 0xee, 0xe3, 0xc8, 0x6a, 0xb0, 0xc1, 0x8e,
+		0x9b, 0x54, 0x40, 0x10, 0x96, 0x06, 0xe8, 0xb3, 0xf5, 0x55, 0x77, 0xd7, 0x5c, 0x94, 0xc1, 0x74,
+		0xf3, 0x07, 0x64, 0xac, 0x1c, 0xde, 0xc7, 0x22, 0xb0, 0xbf, 0x2a, 0x5a, 0xc0, 0x8f, 0x8a, 0x83,
+		0x50, 0xc2, 0x5e, 0x97, 0xa0, 0xbe, 0x49, 0x7e, 0x47, 0xaf, 0xa7, 0x20, 0x02, 0x35, 0xa4, 0x57,
+		0xd9, 0x26, 0x63, 0xdb, 0xf1, 0x34, 0x42, 0x89, 0x36, 0xd1, 0x77, 0x6f, 0xb1, 0xea, 0x79, 0x7e,
+		0x95, 0x10, 0x5a, 0xee, 0xa3, 0xae, 0x6f, 0xba, 0xa9, 0xef, 0x5a, 0x7e, 0x34, 0x03, 0x04, 0x07,
+		0x92, 0xd6, 0x07, 0x79, 0xaa, 0x14, 0x90, 0x97, 0x05, 0x4d, 0xa6, 0x27, 0x10, 0x5c, 0x25, 0x24,
+		0xcb, 0xcc, 0xf6, 0x77, 0x9e, 0x43, 0x23, 0xd4, 0x98, 0xef, 0x22, 0xa8, 0xad, 0xf2, 0x26, 0x08,
+		0x59, 0x69, 0xa4, 0xc3, 0x97, 0xe0, 0x5c, 0x6f, 0xeb, 0x3d, 0xd4, 0x62, 0x6e, 0x80, 0x61, 0x02,
+		0xf4, 0xfc, 0x94, 0x79, 0xbb, 0x4e, 0x6d, 0xd7, 0x30, 0x5b, 0x10, 0x11, 0x5a, 0x3d, 0xa7, 0x50,
+		0x1d, 0x9a, 0x13, 0x5f, 0x4f, 0xa8, 0xa7, 0xb6, 0x39, 0xc7, 0xea, 0xe6, 0x19, 0x61, 0x69, 0xc7,
+		0x9a, 0x3a, 0xeb, 0x9d, 0xdc, 0xf7, 0x06, 0x37, 0xbd, 0xac, 0xe3, 0x18, 0xff, 0xfe, 0x11, 0xdb,
+		0x67, 0x42, 0xb4, 0xea, 0xa8, 0xbd, 0xb0, 0x76, 0xd2, 0x74, 0x32, 0xc2, 0xa4, 0x9c, 0xe7, 0x60,
+		0xc5, 0x30, 0x9a, 0x57, 0x66, 0xcd, 0x0f, 0x02, 0x4c, 0xea, 0xe9, 0xd3, 0x2a, 0x5c, 0x09, 0xc2,
+		0xff, 0x6a, 0xde, 0x5d, 0xb7, 0xe9, 0x75, 0x6b, 0x29, 0x94, 0xd6, 0xf7, 0xc3, 0xdf, 0xfb, 0x70,
+		0xec, 0xb5, 0x8c, 0xb0, 0x78, 0x7a, 0xee, 0x52, 0x5f, 0x8c, 0xae, 0x85, 0xe5, 0x98, 0xa2, 0xb7,
+		0x7c, 0x02, 0x2a, 0xcc, 0x9e, 0xde, 0x99, 0x5f, 0x84, 0x20, 0xbb, 0xdc, 0xf2, 0xd2, 0x13, 0x46,
+		0x3c, 0xd6, 0x4d, 0xe7, 0x50, 0xef, 0x55, 0xc3, 0x96, 0x9f, 0xec, 0x6c, 0xd8, 0xe2, 0xea, 0xed,
+		0xc7, 0x33, 0xc9, 0xb3, 0x1c, 0x4f, 0x1d, 0x83, 0x1d, 0xe4, 0xdd, 0xb2, 0x24, 0x8f, 0xf9, 0xf5
+};
+
+
+static const uint8_t HMAC_SHA256_ciphertext_64B_digest[] = {
+		0xc5, 0x6d, 0x4f, 0x29, 0xf4, 0xd2, 0xcc, 0x87,
+		0x3c, 0x81, 0x02, 0x6d, 0x38, 0x7a, 0x67, 0x3e,
+		0x95, 0x9c, 0x5c, 0x8f, 0xda, 0x5c, 0x06, 0xe0,
+		0x65, 0xf1, 0x6c, 0x51, 0x52, 0x49, 0x3e, 0x5f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_128B_digest[] = {
+		0x76, 0x64, 0x2d, 0x69, 0x71, 0x5d, 0x6a, 0xd8,
+		0x9f, 0x74, 0x11, 0x2f, 0x58, 0xe0, 0x4a, 0x2f,
+		0x6c, 0x88, 0x5e, 0x4d, 0x9c, 0x79, 0x83, 0x1c,
+		0x8a, 0x14, 0xd0, 0x07, 0xfb, 0xbf, 0x6c, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_256B_digest[] = {
+		0x05, 0xa7, 0x44, 0xcd, 0x91, 0x8c, 0x95, 0xcf,
+		0x7b, 0x8f, 0xd3, 0x90, 0x86, 0x7e, 0x7b, 0xb9,
+		0x05, 0xd6, 0x6e, 0x7a, 0xc1, 0x7b, 0x26, 0xff,
+		0xd3, 0x4b, 0xe0, 0x22, 0x8b, 0xa8, 0x47, 0x52
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_512B_digest[] = {
+		0x08, 0xb7, 0x29, 0x54, 0x18, 0x7e, 0x97, 0x49,
+		0xc6, 0x7c, 0x9f, 0x94, 0xa5, 0x4f, 0xa2, 0x25,
+		0xd0, 0xe2, 0x30, 0x7b, 0xad, 0x93, 0xc9, 0x12,
+		0x0f, 0xf0, 0xf0, 0x71, 0xc2, 0xf6, 0x53, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_768B_digest[] = {
+		0xe4, 0x3e, 0x73, 0x93, 0x03, 0xaf, 0x6f, 0x9c,
+		0xca, 0x57, 0x3b, 0x4a, 0x6e, 0x83, 0x58, 0xf5,
+		0x66, 0xc2, 0xb4, 0xa7, 0xe0, 0xee, 0x63, 0x6b,
+		0x48, 0xb7, 0x50, 0x45, 0x69, 0xdf, 0x5c, 0x5b
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1024B_digest[] = {
+		0x03, 0xb9, 0x96, 0x26, 0xdc, 0x1c, 0xab, 0xe2,
+		0xf5, 0x70, 0x55, 0x15, 0x67, 0x6e, 0x48, 0x11,
+		0xe7, 0x67, 0xea, 0xfa, 0x5c, 0x6b, 0x28, 0x22,
+		0xc9, 0x0e, 0x67, 0x04, 0xb3, 0x71, 0x7f, 0x88
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1280B_digest[] = {
+		0x01, 0x91, 0xb8, 0x78, 0xd3, 0x21, 0x74, 0xa5,
+		0x1c, 0x8b, 0xd4, 0xd2, 0xc0, 0x49, 0xd7, 0xd2,
+		0x16, 0x46, 0x66, 0x85, 0x50, 0x6d, 0x08, 0xcc,
+		0xc7, 0x0a, 0xa3, 0x71, 0xcc, 0xde, 0xee, 0xdc
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1536B_digest[] = {
+		0xf2, 0xe5, 0xe9, 0x57, 0x53, 0xd7, 0x69, 0x28,
+		0x7b, 0x69, 0xb5, 0x49, 0xa3, 0x31, 0x56, 0x5f,
+		0xa4, 0xe9, 0x87, 0x26, 0x2f, 0xe0, 0x2d, 0xd6,
+		0x08, 0x44, 0x01, 0x71, 0x0c, 0x93, 0x85, 0x84
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1792B_digest[] = {
+		0xf6, 0x57, 0x62, 0x01, 0xbf, 0x2d, 0xea, 0x4a,
+		0xef, 0x43, 0x85, 0x60, 0x18, 0xdf, 0x8b, 0xb4,
+		0x60, 0xc0, 0xfd, 0x2f, 0x90, 0x15, 0xe6, 0x91,
+		0x56, 0x61, 0x68, 0x7f, 0x5e, 0x92, 0xa8, 0xdd
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_2048B_digest[] = {
+		0x81, 0x1a, 0x29, 0xbc, 0x6b, 0x9f, 0xbb, 0xb8,
+		0xef, 0x71, 0x7b, 0x1f, 0x6f, 0xd4, 0x7e, 0x68,
+		0x3a, 0x9c, 0xb9, 0x98, 0x22, 0x81, 0xfa, 0x95,
+		0xee, 0xbc, 0x7f, 0x23, 0x29, 0x88, 0x76, 0xb8
+};
+
+struct crypto_data_params {
+	const char *name;
+	uint16_t length;
+	const char *plaintext;
+	struct crypto_expected_output {
+		const uint8_t *ciphertext;
+		const uint8_t *digest;
+	} expected;
+};
+
+#define MAX_PACKET_SIZE_INDEX	10
+
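+/*
+ * Test vector table: each entry's plaintext is the final <length> bytes of
+ * plaintext_quote, paired with the matching precomputed AES-CBC ciphertext
+ * and HMAC-SHA256 digest defined above.
+ */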
+struct crypto_data_params aes_cbc_hmac_sha256_output[MAX_PACKET_SIZE_INDEX] = {
+		{ "64B", 64, &plaintext_quote[sizeof(plaintext_quote) - 1 - 64], { AES_CBC_ciphertext_64B, HMAC_SHA256_ciphertext_64B_digest } },
+		{ "128B", 128, &plaintext_quote[sizeof(plaintext_quote) - 1 - 128], { AES_CBC_ciphertext_128B, HMAC_SHA256_ciphertext_128B_digest } },
+		{ "256B", 256, &plaintext_quote[sizeof(plaintext_quote) - 1 - 256], { AES_CBC_ciphertext_256B, HMAC_SHA256_ciphertext_256B_digest } },
+		{ "512B", 512, &plaintext_quote[sizeof(plaintext_quote) - 1 - 512], { AES_CBC_ciphertext_512B, HMAC_SHA256_ciphertext_512B_digest } },
+		{ "768B", 768, &plaintext_quote[sizeof(plaintext_quote) - 1 - 768], { AES_CBC_ciphertext_768B, HMAC_SHA256_ciphertext_768B_digest } },
+		{ "1024B", 1024, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1024], { AES_CBC_ciphertext_1024B, HMAC_SHA256_ciphertext_1024B_digest } },
+		{ "1280B", 1280, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1280], { AES_CBC_ciphertext_1280B, HMAC_SHA256_ciphertext_1280B_digest } },
+		{ "1536B", 1536, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1536], { AES_CBC_ciphertext_1536B, HMAC_SHA256_ciphertext_1536B_digest } },
+		{ "1792B", 1792, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1792], { AES_CBC_ciphertext_1792B, HMAC_SHA256_ciphertext_1792B_digest } },
+		{ "2048B", 2048, &plaintext_quote[sizeof(plaintext_quote) - 1 - 2048], { AES_CBC_ciphertext_2048B, HMAC_SHA256_ciphertext_2048B_digest } }
+};
+
+
+static int
+test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
+{
+	uint32_t num_to_submit = 2048;
+	struct rte_mbuf *rx_mbufs[num_to_submit], *tx_mbufs[num_to_submit];
+	uint64_t failed_polls, retries, start_cycles, end_cycles, total_cycles = 0;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, burst_size, num_sent, num_received;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
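+	/* note: the AES-128 key length (16 bytes) equals the AES-CBC IV length */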
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure(s) */
+	for (b = 0; b < num_to_submit; b++) {
+		tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+				(const char *)data_params[0].expected.ciphertext,
+				data_params[0].length, 0);
+		TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+		ut_params->digest = (uint8_t *)rte_pktmbuf_append(tx_mbufs[b],
+				DIGEST_BYTE_LENGTH_SHA256);
+		TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+		rte_memcpy(ut_params->digest, data_params[0].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
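+		/*
+		 * Resulting mbuf layout: [16B IV | payload | 32B digest]; the
+		 * digest was appended above and the IV is prepended below, so
+		 * the cipher/hash offsets start after the IV.
+		 */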
+		struct rte_crypto_op_data *cop = rte_crypto_op_alloc(ts_params->crypto_op_mp);
+		TEST_ASSERT_NOT_NULL(cop, "Failed to allocate crypto_op");
+
+		rte_crypto_op_attach_session(cop, ut_params->sess);
+
+		cop->digest.data = ut_params->digest;
+		cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(tx_mbufs[b], data_params[0].length);
+		cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+		cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b], CIPHER_IV_LENGTH_AES_CBC);
+		cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+		cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+		rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+		cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_cipher.length = data_params[0].length;
+
+		cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_hash.length = data_params[0].length;
+
+		rte_pktmbuf_attach_crypto_op(tx_mbufs[b], cop);
+	}
+
+	printf("\nTest to measure the IA cycle cost using AES128_CBC_SHA256_HMAC algorithm with "
+			"a constant request size of %u.", data_params[0].length);
+	printf("\nThis test will keep retries at 0 and only measure IA cycle cost for each request.");
+	printf("\nDev No\tQP No\tNum Sent\tNum Received\tTx/Rx burst");
+	printf("\tRetries (Device Busy)\tAverage IA cycle cost (assuming 0 retries)");
+	for (b = 2; b <= 128; b *= 2) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+		burst_size = b;
+		total_cycles = 0;
+		while (num_sent < num_to_submit) {
+			start_cycles = rte_rdtsc_precise();
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0,
+					&tx_mbufs[num_sent],
+					((num_to_submit-num_sent) < burst_size) ?
+					num_to_submit-num_sent : burst_size);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += (end_cycles - start_cycles);
+			/*
+			 * Wait until requests have been sent.
+			 */
+			rte_delay_ms(1);
+
+			start_cycles = rte_rdtsc_precise();
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += end_cycles - start_cycles;
+		}
+		while (num_received != num_to_submit) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+
+		printf("\n%u\t%u\t\%u\t\t%u\t\t%u", dev_num, 0,
+					num_sent, num_received, burst_size);
+		printf("\t\t%"PRIu64, retries);
+		printf("\t\t\t%"PRIu64, total_cycles/num_received);
+	}
+	printf("\n");
+
+	for (b = 0; b < num_to_submit; b++) {
+		rte_crypto_op_free(tx_mbufs[b]->crypto_op);
+		rte_pktmbuf_free(tx_mbufs[b]);
+	}
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(uint16_t dev_num)
+{
+	uint16_t index;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, num_sent, num_received, throughput;
+	uint64_t failed_polls, retries, start_cycles, end_cycles;
+	const uint64_t mhz = rte_get_tsc_hz()/1000000;
+	double mmps;
+	struct rte_mbuf *rx_mbufs[DEFAULT_BURST_SIZE], *tx_mbufs[DEFAULT_BURST_SIZE];
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	printf("\nThroughput test which will continually attempt to send AES128_CBC_SHA256_HMAC requests "
+		"with a constant burst size of %u while varying payload sizes", DEFAULT_BURST_SIZE);
+	printf("\nDev No\tQP No\tReq Size(B)\tNum Sent\tNum Received\tMrps\tThoughput(Mbps)");
+	printf("\tRetries (Attempted a burst, but the device was busy)");
+	for (index = 0; index < MAX_PACKET_SIZE_INDEX; index++) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+
+		/* Generate Crypto op data structure(s) */
+		for (b = 0; b < DEFAULT_BURST_SIZE; b++) {
+			tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+					data_params[index].plaintext, data_params[index].length, 0);
+
+			ut_params->digest = (uint8_t *)rte_pktmbuf_append(
+				tx_mbufs[b], DIGEST_BYTE_LENGTH_SHA256);
+			TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+			rte_memcpy(ut_params->digest, data_params[index].expected.digest,
+					DIGEST_BYTE_LENGTH_SHA256);
+
+			struct rte_crypto_op_data *cop = rte_crypto_op_alloc(ts_params->crypto_op_mp);
+			TEST_ASSERT_NOT_NULL(cop, "Failed to allocate crypto_op");
+
+			rte_crypto_op_attach_session(cop, ut_params->sess);
+
+			cop->digest.data = ut_params->digest;
+			cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+				tx_mbufs[b], data_params[index].length);
+			cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+			cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+					CIPHER_IV_LENGTH_AES_CBC);
+			cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+			cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+			rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+			cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_cipher.length = data_params[index].length;
+
+			cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_hash.length = data_params[index].length;
+
+			rte_pktmbuf_attach_crypto_op(tx_mbufs[b], cop);
+		}
+		start_cycles = rte_rdtsc_precise();
+		while (num_sent < DEFAULT_NUM_REQS_TO_SUBMIT) {
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+				((DEFAULT_NUM_REQS_TO_SUBMIT-num_sent) < DEFAULT_BURST_SIZE) ?
+				DEFAULT_NUM_REQS_TO_SUBMIT-num_sent : DEFAULT_BURST_SIZE);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0, rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		while (num_received != DEFAULT_NUM_REQS_TO_SUBMIT) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0,
+						rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		end_cycles = rte_rdtsc_precise();
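+		/*
+		 * mhz is TSC ticks per microsecond, so mmps is millions of
+		 * requests per second; multiplying by 8 bits per payload byte
+		 * gives throughput in Mbps.
+		 */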
+		mmps = (double)num_received*mhz/(end_cycles - start_cycles);
+		throughput = mmps*data_params[index].length*8;
+		printf("\n%u\t%u\t%u\t\t%u\t%u", dev_num, 0, data_params[index].length, num_sent, num_received);
+		printf("\t%.2f\t%u", mmps, throughput);
+		printf("\t\t%"PRIu64, retries);
+		for (b = 0; b < DEFAULT_BURST_SIZE; b++) {
+			rte_crypto_op_free(tx_mbufs[b]->crypto_op);
+			rte_pktmbuf_free(tx_mbufs[b]);
+		}
+	}
+	printf("\n");
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_encrypt_digest_vary_req_size(void)
+{
+	return test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(testsuite_params.dev_id);
+}
+
+static int
+test_perf_vary_burst_size(void)
+{
+	return test_perf_crypto_qp_vary_burst_size(testsuite_params.dev_id);
+}
+
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, test_perf_encrypt_digest_vary_req_size),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_perf_vary_burst_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static int
+perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_QAT_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_perf_cmd = {
+	.command = "cryptodev_aesni_mb_perftest",
+	.callback = perftest_aesni_mb_cryptodev,
+};
+
+static struct test_command cryptodev_qat_perf_cmd = {
+	.command = "cryptodev_qat_perftest",
+	.callback = perftest_qat_cryptodev,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perf_cmd);
+REGISTER_TEST_COMMAND(cryptodev_qat_perf_cmd);
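+
+/*
+ * Note: these suites are run from the test application's interactive
+ * prompt, e.g. "RTE>>cryptodev_aesni_mb_perftest" (prompt shown as in the
+ * standard DPDK test app).
+ */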
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 388cf11..2d98958 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -4020,7 +4020,7 @@ test_close_bonded_device(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	if (test_params->pkt_eth_hdr != NULL) {
@@ -4029,7 +4029,7 @@ testsuite_teardown(void)
 	}
 
 	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	remove_slaves_and_stop_bonded_device();
 }
 
 static void
@@ -4993,7 +4993,7 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 460539d..713368d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -453,7 +453,7 @@ test_setup(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -467,8 +467,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 /*
@@ -1390,7 +1388,8 @@ static struct unit_test_suite link_bonding_mode4_test_suite  = {
 		TEST_CASE_NAMED("test_mode4_tx_burst", test_mode4_tx_burst_wrapper),
 		TEST_CASE_NAMED("test_mode4_marker", test_mode4_marker_wrapper),
 		TEST_CASE_NAMED("test_mode4_expired", test_mode4_expired_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH 6/6] l2fwd-crypto: crypto
  2015-10-02 23:01 [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
                   ` (4 preceding siblings ...)
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
@ 2015-10-02 23:01 ` Declan Doherty
  2015-10-21  9:11 ` [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
  2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-02 23:01 UTC (permalink / raw)
  To: dev

This patch creates a new sample application, based on the l2fwd
application, which performs specified crypto operations on the IP
payloads of the packets it forwards.
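
A hypothetical invocation (the EAL options are illustrative; the crypto
options follow the usage text in main.c below):

  ./build/l2fwd-crypto -c 0x3 -n 4 -- -p 0x3 --cdev AESNI_MB \
      --chain CIPHER_HASH --cipher_algo AES_CBC --cipher_op ENCRYPT \
      --auth SHA256_HMAC --auth_op GENERATE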

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 examples/l2fwd-crypto/Makefile |   50 ++
 examples/l2fwd-crypto/main.c   | 1475 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1525 insertions(+)
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c

diff --git a/examples/l2fwd-crypto/Makefile b/examples/l2fwd-crypto/Makefile
new file mode 100644
index 0000000..e8224ca
--- /dev/null
+++ b/examples/l2fwd-crypto/Makefile
@@ -0,0 +1,50 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = l2fwd-crypto
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
new file mode 100644
index 0000000..c974c9e
--- /dev/null
+++ b/examples/l2fwd-crypto/main.c
@@ -0,0 +1,1475 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_ip.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_per_lcore.h>
+#include <rte_prefetch.h>
+#include <rte_random.h>
+#include <rte_ring.h>
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define NB_MBUF   8192
+
+#define MAX_PKT_BURST 32
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 128
+#define RTE_TEST_TX_DESC_DEFAULT 512
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint64_t l2fwd_enabled_port_mask;
+static uint64_t l2fwd_enabled_crypto_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+
+struct pkt_buffer {
+	unsigned len;
+	struct rte_mbuf *buffer[MAX_PKT_BURST];
+};
+
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+enum l2fwd_crypto_xform_chain {
+	L2FWD_CRYPTO_CIPHER_HASH,
+	L2FWD_CRYPTO_HASH_CIPHER
+};
+
+/** l2fwd crypto application command line options */
+struct l2fwd_crypto_options {
+	unsigned portmask;
+	unsigned nb_ports_per_lcore;
+	unsigned refresh_period;
+	unsigned single_lcore:1;
+	unsigned no_stats_printing:1;
+
+	enum rte_cryptodev_type cdev_type;
+	unsigned sessionless:1;
+
+	enum l2fwd_crypto_xform_chain xform_chain;
+
+	struct rte_crypto_xform cipher_xform;
+	uint8_t ckey_data[32];
+
+	struct rte_crypto_key iv_key;
+	uint8_t ivkey_data[16];
+
+	struct rte_crypto_xform auth_xform;
+	uint8_t akey_data[128];
+};
+
+/** l2fwd crypto lcore params */
+struct l2fwd_crypto_params {
+	uint8_t dev_id;
+	uint8_t qp_id;
+
+	unsigned digest_length;
+	unsigned block_size;
+
+	struct rte_crypto_key iv_key;
+	struct rte_cryptodev_session *session;
+};
+
+/** lcore configuration */
+struct lcore_queue_conf {
+	unsigned nb_rx_ports;
+	unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+
+	unsigned nb_crypto_devs;
+	unsigned cryptodev_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct pkt_buffer crypto_pkt_buf[RTE_MAX_ETHPORTS];
+	struct pkt_buffer tx_pkt_buf[RTE_MAX_ETHPORTS];
+} __rte_cache_aligned;
+
+struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static const struct rte_eth_conf port_conf = {
+	.rxmode = {
+		.split_hdr_size = 0,
+		.header_split   = 0, /**< Header Split disabled */
+		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
+		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
+		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
+		.hw_strip_crc   = 0, /**< CRC stripping by hardware disabled */
+	},
+	.txmode = {
+		.mq_mode = ETH_MQ_TX_NONE,
+	},
+};
+
+struct rte_mempool *l2fwd_pktmbuf_pool;
+struct rte_mempool *l2fwd_crtpto_op_pool;
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+	uint64_t tx;
+	uint64_t rx;
+
+	uint64_t crypto_enqueued;
+	uint64_t crypto_dequeued;
+
+	uint64_t dropped;
+} __rte_cache_aligned;
+
+struct l2fwd_crypto_statistics {
+	uint64_t enqueued;
+	uint64_t dequeued;
+
+	uint64_t errors;
+} __rte_cache_aligned;
+
+struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+struct l2fwd_crypto_statistics crypto_statistics[RTE_MAX_ETHPORTS];
+
+/* A tsc-based timer responsible for triggering statistics printout */
+#define TIMER_MILLISECOND 2000000ULL /* around 1ms at 2 GHz */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+static int64_t timer_period = 10 * TIMER_MILLISECOND * 1000; /* default period is 10 seconds */
+
+uint64_t total_packets_dropped = 0, total_packets_tx = 0, total_packets_rx = 0,
+	total_packets_enqueued = 0, total_packets_dequeued = 0,
+	total_packets_errors = 0;
+
+/* Print out statistics on packets dropped */
+static void
+print_stats(void)
+{
+	unsigned portid;
+	uint64_t cdevid;
+
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+	/* Clear screen and move to top left */
+	printf("%s%s", clr, topLeft);
+
+	printf("\nPort statistics ====================================");
+
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+			continue;
+		printf("\nStatistics for port %u ------------------------------"
+			   "\nPackets sent: %32"PRIu64
+			   "\nPackets received: %28"PRIu64
+			   "\nPackets dropped: %29"PRIu64,
+			   portid,
+			   port_statistics[portid].tx,
+			   port_statistics[portid].rx,
+			   port_statistics[portid].dropped);
+
+		total_packets_dropped += port_statistics[portid].dropped;
+		total_packets_tx += port_statistics[portid].tx;
+		total_packets_rx += port_statistics[portid].rx;
+	}
+	printf("\nCrypto statistics ==================================");
+
+	for (cdevid = 0; cdevid < RTE_CRYPTO_MAX_DEVS; cdevid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_crypto_mask & (1lu << cdevid)) == 0)
+			continue;
+		printf("\nStatistics for cryptodev %lu -------------------------"
+			   "\nPackets enqueued: %28"PRIu64
+			   "\nPackets dequeued: %28"PRIu64
+			   "\nPackets errors: %30"PRIu64,
+			   cdevid,
+			   crypto_statistics[cdevid].enqueued,
+			   crypto_statistics[cdevid].dequeued,
+			   crypto_statistics[cdevid].errors);
+
+		total_packets_enqueued += crypto_statistics[cdevid].enqueued;
+		total_packets_dequeued += crypto_statistics[cdevid].dequeued;
+		total_packets_errors += crypto_statistics[cdevid].errors;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %22"PRIu64
+		   "\nTotal packets enqueued: %22"PRIu64
+		   "\nTotal packets dequeued: %22"PRIu64
+		   "\nTotal packets sent: %26"PRIu64
+		   "\nTotal packets dropped: %23"PRIu64
+		   "\nTotal packets crypto errors: %17"PRIu64,
+		   total_packets_rx,
+		   total_packets_enqueued,
+		   total_packets_dequeued,
+		   total_packets_tx,
+		   total_packets_dropped,
+		   total_packets_errors);
+	printf("\n====================================================\n");
+}
+
+
+
+static int
+l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+
+	pkt_buffer = (struct rte_mbuf **)
+			qconf->crypto_pkt_buf[cparams->dev_id].buffer;
+
+	ret = rte_cryptodev_enqueue_burst(cparams->dev_id, cparams->qp_id,
+			pkt_buffer, (uint16_t) n);
+	crypto_statistics[cparams->dev_id].enqueued += ret;
+	if (unlikely(ret < n)) {
+		crypto_statistics[cparams->dev_id].errors += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+static int
+l2fwd_crypto_enqueue(struct rte_mbuf *m, struct l2fwd_crypto_params *cparams)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->crypto_pkt_buf[cparams->dev_id].len;
+	qconf->crypto_pkt_buf[cparams->dev_id].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (len == MAX_PKT_BURST) {
+		l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
+		len = 0;
+	}
+
+	qconf->crypto_pkt_buf[cparams->dev_id].len = len;
+	return 0;
+}
+
+static int
+l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
+		struct rte_crypto_op_data *c_op,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct ether_hdr *eth_hdr;
+	struct ipv4_hdr *ip_hdr;
+
+	unsigned ipdata_offset, pad_len, data_len;
+	char *padding;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (eth_hdr->ether_type != rte_cpu_to_be_16(ETHER_TYPE_IPv4))
+		return -1;
+
+	ipdata_offset = sizeof(struct ether_hdr);
+
+	ip_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
+			ipdata_offset);
+
+	ipdata_offset += (ip_hdr->version_ihl & IPV4_HDR_IHL_MASK)
+			* IPV4_IHL_MULTIPLIER;
+
+
+	/* Zero pad data to be crypto'd so it is block aligned */
+	data_len  = rte_pktmbuf_data_len(m) - ipdata_offset;
+	pad_len = data_len % cparams->block_size ? cparams->block_size -
+			(data_len % cparams->block_size) : 0;
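+	/* e.g. data_len = 100 with a 64 byte block_size gives pad_len = 28 */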
+
+	if (pad_len) {
+		padding = rte_pktmbuf_append(m, pad_len);
+		if (unlikely(!padding))
+			return -1;
+
+		data_len += pad_len;
+		memset(padding, 0, pad_len);
+	}
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(c_op, cparams->session);
+
+	/* Append space for digest to end of packet */
+	c_op->digest.data = (uint8_t *)rte_pktmbuf_append(m,
+			cparams->digest_length);
+	c_op->digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
+			rte_pktmbuf_pkt_len(m) - cparams->digest_length);
+	c_op->digest.length = cparams->digest_length;
+
+	c_op->iv.data = cparams->iv_key.data;
+	c_op->iv.phys_addr = cparams->iv_key.phys_addr;
+	c_op->iv.length = cparams->iv_key.length;
+
+	c_op->data.to_cipher.offset = ipdata_offset;
+	c_op->data.to_cipher.length = data_len;
+
+	c_op->data.to_hash.offset = ipdata_offset;
+	c_op->data.to_hash.length = data_len;
+
+	rte_pktmbuf_attach_crypto_op(m, c_op);
+
+	return l2fwd_crypto_enqueue(m, cparams);
+}
+
+
+/* Send the burst of packets on an output interface */
+static int
+l2fwd_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint8_t port)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+	unsigned queueid = 0;
+
+	pkt_buffer = (struct rte_mbuf **)qconf->tx_pkt_buf[port].buffer;
+
+	ret = rte_eth_tx_burst(port, (uint16_t) queueid, pkt_buffer,
+			(uint16_t)n);
+	port_statistics[port].tx += ret;
+	if (unlikely(ret < n)) {
+		port_statistics[port].dropped += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+/* Enqueue packets for TX and prepare them to be sent */
+static int
+l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->tx_pkt_buf[port].len;
+	qconf->tx_pkt_buf[port].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (unlikely(len == MAX_PKT_BURST)) {
+		l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
+		len = 0;
+	}
+
+	qconf->tx_pkt_buf[port].len = len;
+	return 0;
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
+{
+	struct ether_hdr *eth;
+	void *tmp;
+	unsigned dst_port;
+
+	dst_port = l2fwd_dst_ports[portid];
+	eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	/* 02:00:00:00:00:xx */
+	tmp = &eth->d_addr.addr_bytes[0];
+	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+	/* src addr */
+	ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+
+	l2fwd_send_packet(m, (uint8_t) dst_port);
+}
+
+/** Generate random key */
+static void
+generate_random_key(uint8_t *key, unsigned length)
+{
+	unsigned i;
+
+	for (i = 0; i < length; i++)
+		key[i] = rand() & 0xff;
+}
+
+static struct rte_cryptodev_session *
+initialize_crypto_session(struct l2fwd_crypto_options *options,
+		uint8_t cdev_id)
+{
+	struct rte_crypto_xform *first_xform;
+
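+	/*
+	 * Build a two-element xform chain; the element placed first
+	 * determines whether payloads are ciphered before hashing or
+	 * hashed before ciphering.
+	 */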
+	if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
+		first_xform = &options->cipher_xform;
+		first_xform->next = &options->auth_xform;
+	} else {
+		first_xform = &options->auth_xform;
+		first_xform->next = &options->cipher_xform;
+	}
+
+	/* Setup Cipher Parameters */
+	return rte_cryptodev_session_create(cdev_id, first_xform);
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options);
+
+/* main processing loop */
+static void
+l2fwd_main_loop(struct l2fwd_crypto_options *options)
+{
+	struct rte_mbuf *m, *pkts_burst[MAX_PKT_BURST];
+	unsigned lcore_id = rte_lcore_id();
+	uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+	unsigned i, j, portid, nb_rx;
+	struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+	struct l2fwd_crypto_params *cparams;
+	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
+
+	if (qconf->nb_rx_ports == 0) {
+		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+	l2fwd_crypto_options_print(options);
+
+	for (i = 0; i < qconf->nb_rx_ports; i++) {
+
+		portid = qconf->rx_port_list[i];
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+			portid);
+	}
+
+	for (i = 0; i < qconf->nb_crypto_devs; i++) {
+		port_cparams[i].dev_id = qconf->cryptodev_list[i];
+		port_cparams[i].qp_id = 0;
+
+		port_cparams[i].block_size = 64;
+		port_cparams[i].digest_length = 20;
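+		/*
+		 * Fixed parameters for this sample: 64 byte hash block size
+		 * and a 20 byte (SHA-1 sized) digest.
+		 */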
+
+		port_cparams[i].iv_key.data =
+				(uint8_t *)rte_malloc(NULL, 16, 8);
+		port_cparams[i].iv_key.length = 16;
+		port_cparams[i].iv_key.phys_addr = rte_malloc_virt2phy(
+				(void *)port_cparams[i].iv_key.data);
+		generate_random_key(port_cparams[i].iv_key.data,
+				port_cparams[i].iv_key.length);
+
+		port_cparams[i].session = initialize_crypto_session(options,
+				port_cparams[i].dev_id);
+
+		if (port_cparams[i].session == NULL)
+			return;
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u cryptoid=%u\n", lcore_id,
+				port_cparams[i].dev_id);
+	}
+
+	while (1) {
+
+		cur_tsc = rte_rdtsc();
+
+		/*
+		 * TX burst queue drain
+		 */
+		diff_tsc = cur_tsc - prev_tsc;
+		if (unlikely(diff_tsc > drain_tsc)) {
+
+			for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+				if (qconf->tx_pkt_buf[portid].len == 0)
+					continue;
+				l2fwd_send_burst(&lcore_queue_conf[lcore_id],
+						 qconf->tx_pkt_buf[portid].len,
+						 (uint8_t) portid);
+				qconf->tx_pkt_buf[portid].len = 0;
+			}
+
+			/* if timer is enabled */
+			if (timer_period > 0) {
+
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >=
+						(uint64_t)timer_period)) {
+
+					/* do this only on master core */
+					if (lcore_id == rte_get_master_lcore() &&
+							!options->no_stats_printing) {
+						print_stats();
+						/* reset the timer */
+						timer_tsc = 0;
+					}
+				}
+			}
+
+			prev_tsc = cur_tsc;
+		}
+
+		/*
+		 * Read packet from RX queues
+		 */
+		for (i = 0; i < qconf->nb_rx_ports; i++) {
+			struct rte_crypto_op_data *c_op;
+
+			portid = qconf->rx_port_list[i];
+
+			if (options->single_lcore)
+				cparams = &port_cparams[0];
+			else
+				cparams = &port_cparams[i];
+
+			nb_rx = rte_eth_rx_burst((uint8_t) portid, 0,
+						 pkts_burst, MAX_PKT_BURST);
+
+			port_statistics[portid].rx += nb_rx;
+
+			/* Enqueue packets on Crypto device */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				c_op = rte_crypto_op_alloc(
+						l2fwd_crtpto_op_pool);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				rte_prefetch0((void *)c_op);
+				l2fwd_simple_crypto_enqueue(m, c_op, cparams);
+			}
+
+			/* Dequeue packets from Crypto device */
+			nb_rx = rte_cryptodev_dequeue_burst(
+					cparams->dev_id, cparams->qp_id,
+					pkts_burst, MAX_PKT_BURST);
+			crypto_statistics[cparams->dev_id].dequeued += nb_rx;
+
+			/* Forward crypto'd packets */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				rte_crypto_op_free(m->crypto_op);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				l2fwd_simple_forward(m, portid);
+			}
+		}
+	}
+}
+
+static int
+l2fwd_launch_one_lcore(void *arg)
+{
+	l2fwd_main_loop((struct l2fwd_crypto_options *)arg);
+	return 0;
+}
+
+/* Display command line arguments usage */
+static void
+l2fwd_crypto_usage(const char *prgname)
+{
+	printf("%s [EAL options] -- --cdev TYPE [optional parameters]\n"
+		"  -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+		"  -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+		"  -s manage all ports from single lcore"
+		"  -t PERIOD: statistics will be refreshed each PERIOD seconds"
+		" (0 to disable, 10 default, 86400 maximum)\n"
+
+		"  --cdev AESNI_MB / QAT\n"
+		"  --chain HASH_CIPHER / CIPHER_HASH\n"
+
+		"  --cipher_algo ALGO\n"
+		"  --cipher_op ENCRYPT / DECRYPT\n"
+		"  --cipher_key KEY\n"
+
+		"  --auth ALGO\n"
+		"  --auth_op GENERATE / VERIFY\n"
+		"  --auth_key KEY\n"
+
+		"  --sessionless\n",
+	       prgname);
+}
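+
+/*
+ * Example invocation (illustrative; EAL options depend on the platform):
+ *
+ *	l2fwd-crypto -c 0x3 -n 4 -- -p 0x3 --cdev_type AESNI_MB \
+ *		--chain CIPHER_HASH --cipher_algo AES_CBC --auth_algo SHA1_HMAC
+ */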
+
+/** Parse crypto device type command line argument */
+static int
+parse_cryptodev_type(enum rte_cryptodev_type *type, char *optarg)
+{
+	if (strcmp("AESNI_MB", optarg) == 0) {
+		*type = RTE_CRYPTODEV_AESNI_MB_PMD;
+		return 0;
+	} else if (strcmp("QAT", optarg) == 0) {
+		*type = RTE_CRYPTODEV_QAT_PMD;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto chain xform command line argument */
+static int
+parse_crypto_opt_chain(struct l2fwd_crypto_options *options, char *optarg)
+{
+	if (strcmp("CIPHER_HASH", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+		return 0;
+	} else if (strcmp("HASH_CIPHER", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_HASH_CIPHER;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto cipher algo option command line argument */
+static int
+parse_cipher_algo(enum rte_crypto_cipher_algorithm *algo, char *optarg)
+{
+	if (strcmp("AES_CBC", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+		return 0;
+	} else if (strcmp("AES_GCM", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_CIPHER_AES_GCM;
+		return 0;
+	}
+
+	printf("Cipher algorithm  not supported!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_cipher_op(enum rte_crypto_cipher_operation *op, char *optarg)
+{
+	if (strcmp("ENCRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+		return 0;
+	} else if (strcmp("DECRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT;
+		return 0;
+	}
+
+	printf("Cipher operation not supported!\n");
+	return -1;
+}
+
+/** Parse crypto key command line argument */
+static int
+parse_key(struct rte_crypto_key *key __rte_unused,
+		unsigned length __rte_unused, char *arg __rte_unused)
+{
+	printf("Currently an unsupported argument!\n");
+	return -1;
+}
+
+/** Parse crypto authentication algorithm command line argument */
+static int
+parse_auth_algo(enum rte_crypto_auth_algorithm *algo, char *optarg)
+{
+	if (strcmp("SHA1", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_HASH_SHA1;
+		return 0;
+	} else if (strcmp("SHA1_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+		return 0;
+	} else if (strcmp("SHA224", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_HASH_SHA224;
+		return 0;
+	} else if (strcmp("SHA224_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_HASH_SHA224_HMAC;
+		return 0;
+	} else if (strcmp("SHA256", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_HASH_SHA256;
+		return 0;
+	} else if (strcmp("SHA256_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_HASH_SHA256_HMAC;
+		return 0;
+	} else if (strcmp("SHA512", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_HASH_SHA512;
+		return 0;
+	} else if (strcmp("SHA512_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_SYM_HASH_SHA512_HMAC;
+		return 0;
+	}
+
+	printf("Authentication algorithm specified not supported!\n");
+	return -1;
+}
+
+static int
+parse_auth_op(enum rte_crypto_auth_operation *op, char *optarg)
+{
+	if (strcmp("VERIFY", optarg) == 0) {
+		*op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+		return 0;
+	} else if (strcmp("GENERATE", optarg) == 0) {
+		*op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE;
+		return 0;
+	}
+
+	printf("Authentication operation specified not supported!\n");
+	return -1;
+}
+
+/** Parse long options */
+static int
+l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
+		struct option *lgopts, int option_index)
+{
+	if (strcmp(lgopts[option_index].name, "no_stats") == 0) {
+		options->no_stats_printing = 1;
+		return 0;
+	}
+
+	if (strcmp(lgopts[option_index].name, "cdev_type") == 0)
+		return parse_cryptodev_type(&options->cdev_type, optarg);
+
+	else if (strcmp(lgopts[option_index].name, "chain") == 0)
+		return parse_crypto_opt_chain(options, optarg);
+
+	/* Cipher options */
+	else if (strcmp(lgopts[option_index].name, "cipher_algo") == 0)
+		return parse_cipher_algo(&options->cipher_xform.cipher.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_op") == 0)
+		return parse_cipher_op(&options->cipher_xform.cipher.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_key") == 0)
+		return parse_key(&options->cipher_xform.cipher.key,
+				sizeof(options->ckey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "iv") == 0)
+		return parse_key(&options->iv_key, sizeof(options->ivkey_data),
+				optarg);
+
+	/* Authentication options */
+	else if (strcmp(lgopts[option_index].name, "auth_algo") == 0)
+		return parse_auth_algo(&options->cipher_xform.auth.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_op") == 0)
+		return parse_auth_op(&options->cipher_xform.auth.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_key") == 0)
+		return parse_key(&options->auth_xform.auth.key,
+				sizeof(options->akey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "sessionless") == 0) {
+		options->sessionless = 1;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse port mask */
+static int
+l2fwd_crypto_parse_portmask(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long pm;
+
+	/* parse hexadecimal string */
+	pm = strtoul(q_arg, &end, 16);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		pm = 0;
+
+	options->portmask = pm;
+	if (options->portmask == 0) {
+		printf("invalid portmask specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse number of queues */
+static int
+l2fwd_crypto_parse_nqueue(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long n;
+
+	/* parse decimal string */
+	n = strtoul(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+	else if (n >= MAX_RX_QUEUE_PER_LCORE)
+		n = 0;
+
+	options->nb_ports_per_lcore = n;
+	if (options->nb_ports_per_lcore == 0) {
+		printf("invalid number of ports selected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse timer period */
+static int
+l2fwd_crypto_parse_timer_period(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	int n;
+
+	/* parse number string */
+	n = strtol(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+
+	if (n >= MAX_TIMER_PERIOD)
+		n = 0;
+
+	options->refresh_period = n * 1000 * TIMER_MILLISECOND;
+	if (options->refresh_period == 0) {
+		printf("invalid refresh period specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Generate default options for application */
+static void
+l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
+{
+	srand(time(NULL));
+
+	options->portmask = 0xffffffff;
+	options->nb_ports_per_lcore = 1;
+	options->refresh_period = 10000;
+	options->single_lcore = 0;
+	options->no_stats_printing = 0;
+
+	options->cdev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	options->sessionless = 0;
+	options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+
+	/* Cipher Data */
+	options->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	options->cipher_xform.next = NULL;
+
+	options->cipher_xform.cipher.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC;
+	options->cipher_xform.cipher.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT;
+
+	generate_random_key(options->ckey_data, sizeof(options->ckey_data));
+
+	options->cipher_xform.cipher.key.data = options->ckey_data;
+	options->cipher_xform.cipher.key.phys_addr = 0;
+	options->cipher_xform.cipher.key.length = 16;
+
+
+	/* Authentication Data */
+	options->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	options->auth_xform.next = NULL;
+
+	options->auth_xform.auth.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC;
+	options->auth_xform.auth.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY;
+
+	options->auth_xform.auth.add_auth_data_length = 0;
+	options->auth_xform.auth.digest_length = 20;
+
+	generate_random_key(options->akey_data, sizeof(options->akey_data));
+
+	options->auth_xform.auth.key.data = options->akey_data;
+	options->auth_xform.auth.key.phys_addr = 0;
+	options->auth_xform.auth.key.length = 20;
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
+{
+	printf("Options:-\nn");
+	printf("portmask: %x\n", options->portmask);
+	printf("ports per lcore: %u\n", options->nb_ports_per_lcore);
+	printf("refresh period : %u\n", options->refresh_period);
+	printf("single lcore mode: %s\n",
+			options->single_lcore ? "enabled" : "disabled");
+	printf("stats_printing: %s\n",
+			options->no_stats_printing ? "disabled" : "enabled");
+
+	switch (options->cdev_type) {
+	case RTE_CRYPTODEV_AESNI_MB_PMD:
+		printf("crytpodev type: AES-NI MB PMD\n"); break;
+	case RTE_CRYPTODEV_QAT_PMD:
+		printf("crytpodev type: QAT PMD\n"); break;
+	}
+
+	printf("sessionless crypto: %s\n",
+			options->sessionless ? "enabled" : "disabled");
+}
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
+		int argc, char **argv)
+{
+	int opt, retval, option_index;
+	char **argvopt = argv, *prgname = argv[0];
+
+	static struct option lgopts[] = {
+			{ "no_stats", no_argument, 0, 0 },
+			{ "sessionless", no_argument, 0, 0 },
+
+			{ "cdev_type", required_argument, 0, 0 },
+			{ "chain", required_argument, 0, 0 },
+
+			{ "cipher_algo", required_argument, 0, 0 },
+			{ "cipher_op", required_argument, 0, 0 },
+			{ "cipher_key", required_argument, 0, 0 },
+
+			{ "auth_algo", required_argument, 0, 0 },
+			{ "auth_op", required_argument, 0, 0 },
+			{ "auth_key", required_argument, 0, 0 },
+
+			{ "iv", required_argument, 0, 0 },
+
+			{ "sessionless", no_argument, 0, 0 },
+			{ NULL, 0, 0, 0 }
+	};
+
+	l2fwd_crypto_default_options(options);
+
+	while ((opt = getopt_long(argc, argvopt, "p:q:st:", lgopts,
+			&option_index)) != EOF) {
+		switch (opt) {
+		/* long options */
+		case 0:
+			retval = l2fwd_crypto_parse_args_long_options(options,
+					lgopts, option_index);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* portmask */
+		case 'p':
+			retval = l2fwd_crypto_parse_portmask(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* nqueue */
+		case 'q':
+			retval = l2fwd_crypto_parse_nqueue(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* single lcore */
+		case 's':
+			options->single_lcore = 1;
+			break;
+
+		/* timer period */
+		case 't':
+			retval = l2fwd_crypto_parse_timer_period(options,
+					optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		default:
+			l2fwd_crypto_usage(prgname);
+			return -1;
+		}
+	}
+
+
+	if (optind >= 0)
+		argv[optind-1] = prgname;
+
+	retval = optind-1;
+	optind = 0; /* reset getopt lib */
+
+	return retval;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint8_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+	uint8_t portid, count, all_ports_up, print_flag = 0;
+	struct rte_eth_link link;
+
+	printf("\nChecking link status");
+	fflush(stdout);
+	for (count = 0; count <= MAX_CHECK_TIME; count++) {
+		all_ports_up = 1;
+		for (portid = 0; portid < port_num; portid++) {
+			if ((port_mask & (1 << portid)) == 0)
+				continue;
+			memset(&link, 0, sizeof(link));
+			rte_eth_link_get_nowait(portid, &link);
+			/* print link status if flag set */
+			if (print_flag == 1) {
+				if (link.link_status)
+					printf("Port %d Link Up - speed %u "
+						"Mbps - %s\n", (uint8_t)portid,
+						(unsigned)link.link_speed,
+				(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+					("full-duplex") : ("half-duplex"));
+				else
+					printf("Port %d Link Down\n",
+						(uint8_t)portid);
+				continue;
+			}
+			/* clear all_ports_up flag if any link down */
+			if (link.link_status == 0) {
+				all_ports_up = 0;
+				break;
+			}
+		}
+		/* after finally printing all link status, get out */
+		if (print_flag == 1)
+			break;
+
+		if (all_ports_up == 0) {
+			printf(".");
+			fflush(stdout);
+			rte_delay_ms(CHECK_INTERVAL);
+		}
+
+		/* set the print_flag if all ports up or timeout */
+		if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+			print_flag = 1;
+			printf("done\n");
+		}
+	}
+}
+
+static int
+initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports)
+{
+	unsigned i, cdev_id, cdev_count, enabled_cdev_count = 0;
+	int retval;
+
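+	/*
+	 * For the hardware (QAT) PMD enough physical devices must already be
+	 * available; for the software (AESNI_MB) PMD a virtual device is
+	 * created for each enabled Ethernet port.
+	 */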
+	if (options->cdev_type == RTE_CRYPTODEV_QAT_PMD) {
+		if (rte_cryptodev_count() < nb_ports)
+			return -1;
+	} else if (options->cdev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		for (i = 0; i < nb_ports; i++) {
+			int id = rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD,
+					NULL);
+			if (id < 0)
+				return -1;
+		}
+	}
+
+	cdev_count = rte_cryptodev_count();
+	for (cdev_id = 0;
+			cdev_id < cdev_count && enabled_cdev_count < nb_ports;
+			cdev_id++) {
+		struct rte_cryptodev_qp_conf qp_conf;
+		struct rte_cryptodev_info dev_info;
+
+		struct rte_cryptodev_config conf = {
+			.nb_queue_pairs = 1,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 64
+			}
+		};
+
+		rte_cryptodev_info_get(cdev_id, &dev_info);
+
+		if (dev_info.dev_type != options->cdev_type)
+			continue;
+
+
+		retval = rte_cryptodev_configure(cdev_id, &conf);
+		if (retval < 0) {
+			printf("Failed to configure cryptodev %u", cdev_id);
+			return -1;
+		}
+
+		qp_conf.nb_descriptors = 2048;
+
+		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
+				SOCKET_ID_ANY);
+		if (retval < 0) {
+			printf("Failed to setup queue pair %u on cryptodev %u",
+					0, cdev_id);
+			return -1;
+		}
+
+		l2fwd_enabled_crypto_mask |= (1 << cdev_id);
+
+		enabled_cdev_count++;
+	}
+
+	return enabled_cdev_count;
+}
+
+static int
+initialize_ports(struct l2fwd_crypto_options *options)
+{
+	uint8_t last_portid, portid;
+	unsigned enabled_portcount = 0;
+	unsigned nb_ports = rte_eth_dev_count();
+
+	if (nb_ports == 0) {
+		printf("No Ethernet ports - bye\n");
+		return -1;
+	}
+
+	if (nb_ports > RTE_MAX_ETHPORTS)
+		nb_ports = RTE_MAX_ETHPORTS;
+
+	/* Reset l2fwd_dst_ports */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+		l2fwd_dst_ports[portid] = 0;
+
+	for (last_portid = 0, portid = 0; portid < nb_ports; portid++) {
+		int retval;
+
+		/* Skip ports that are not enabled */
+		if ((options->portmask & (1 << portid)) == 0)
+			continue;
+
+		/* init port */
+		printf("Initializing port %u... ", (unsigned) portid);
+		fflush(stdout);
+		retval = rte_eth_dev_configure(portid, 1, 1, &port_conf);
+		if (retval < 0) {
+			printf("Cannot configure device: err=%d, port=%u\n",
+				  retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one RX queue */
+		fflush(stdout);
+		retval = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+					     rte_eth_dev_socket_id(portid),
+					     NULL, l2fwd_pktmbuf_pool);
+		if (retval < 0) {
+			printf("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one TX queue on each port */
+		fflush(stdout);
+		retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+				rte_eth_dev_socket_id(portid),
+				NULL);
+		if (retval < 0) {
+			printf("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+				retval, (unsigned) portid);
+
+			return -1;
+		}
+
+		/* Start device */
+		retval = rte_eth_dev_start(portid);
+		if (retval < 0) {
+			printf("rte_eth_dev_start:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		rte_eth_promiscuous_enable(portid);
+
+		rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+		printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+				(unsigned) portid,
+				l2fwd_ports_eth_addr[portid].addr_bytes[0],
+				l2fwd_ports_eth_addr[portid].addr_bytes[1],
+				l2fwd_ports_eth_addr[portid].addr_bytes[2],
+				l2fwd_ports_eth_addr[portid].addr_bytes[3],
+				l2fwd_ports_eth_addr[portid].addr_bytes[4],
+				l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+		/* initialize port stats */
+		memset(&port_statistics, 0, sizeof(port_statistics));
+
+		/* Setup port forwarding table: pair consecutive enabled ports */
+		if (enabled_portcount % 2) {
+			l2fwd_dst_ports[portid] = last_portid;
+			l2fwd_dst_ports[last_portid] = portid;
+		} else {
+			last_portid = portid;
+		}
+
+		l2fwd_enabled_port_mask |= (1 << portid);
+		enabled_portcount++;
+	}
+
+	if (enabled_portcount == 1) {
+		l2fwd_dst_ports[last_portid] = last_portid;
+	} else if (enabled_portcount % 2) {
+		printf("odd number of ports in portmask- bye\n");
+		return -1;
+	}
+
+	check_all_ports_link_status(nb_ports, l2fwd_enabled_port_mask);
+
+	return enabled_portcount;
+}
+
+int
+main(int argc, char **argv)
+{
+	struct lcore_queue_conf *qconf;
+	struct l2fwd_crypto_options options;
+
+	uint8_t nb_ports, nb_cryptodevs, portid, cdev_id;
+	unsigned lcore_id, rx_lcore_id;
+	int ret, enabled_cdevcount, enabled_portcount;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+	argc -= ret;
+	argv += ret;
+
+	/* parse application arguments (after the EAL ones) */
+	ret = l2fwd_crypto_parse_args(&options, argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L2FWD-CRYPTO arguments\n");
+
+	/* create the mbuf pool */
+	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 128,
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+	if (l2fwd_pktmbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+	/* create crypto op pool */
+	l2fwd_crtpto_op_pool = rte_crypto_op_pool_create("crypto_op_pool",
+			NB_MBUF, 128, 2, rte_socket_id());
+	if (l2fwd_crtpto_op_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");
+
+	/* Enable Ethernet ports */
+	enabled_portcount = initialize_ports(&options);
+	if (enabled_portcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial Ethernet ports\n");
+
+	nb_ports = rte_eth_dev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, portid = 0;
+			portid < nb_ports; portid++) {
+
+		/* skip ports that are not enabled */
+		if ((options.portmask & (1 << portid)) == 0)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_rx_ports ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* A new logical core may have been assigned in the loop above */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->rx_port_list[qconf->nb_rx_ports] = portid;
+		qconf->nb_rx_ports++;
+
+		printf("Lcore %u: RX port %u\n", rx_lcore_id, (unsigned)portid);
+	}
+
+
+	/* Enable Crypto devices */
+	enabled_cdevcount = initialize_cryptodevs(&options, enabled_portcount);
+	if (enabled_cdevcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial crypto devices\n");
+
+	nb_cryptodevs = rte_cryptodev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, cdev_id = 0;
+			cdev_id < nb_cryptodevs && enabled_cdevcount;
+			cdev_id++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(cdev_id, &info);
+
+		/* skip devices of the wrong type */
+		if (options.cdev_type != info.dev_type)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_crypto_devs ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* A new logical core may have been assigned in the loop above */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->cryptodev_list[qconf->nb_crypto_devs] = cdev_id;
+		qconf->nb_crypto_devs++;
+
+		enabled_cdevcount--;
+
+		printf("Lcore %u: cryptodev %u\n", rx_lcore_id,
+				(unsigned)cdev_id);
+	}
+
+
+
+	/* launch per-lcore init on every lcore */
+	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)&options,
+			CALL_MASTER);
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (rte_eal_wait_lcore(lcore_id) < 0)
+			return -1;
+	}
+
+	return 0;
+}
+
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH 0/6] Crypto API and device framework
  2015-10-02 23:01 [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
                   ` (5 preceding siblings ...)
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 6/6] l2fwd-crypto: crypto Declan Doherty
@ 2015-10-21  9:11 ` Declan Doherty
  2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-21  9:11 UTC (permalink / raw)
  To: dev

On 03/10/15 00:01, Declan Doherty wrote:
> Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
> Co-authored-by: John Griffin <john.griffin@intel.com>
> Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>
>
> This series of patches defines a set of application burst oriented APIs for
> asynchronous symmetric cryptographic functions within DPDK. It also contains a
> poll mode driver cryptographic device framework for the implementation of
> crypto devices within DPDK.
> ....
>

Hey all,

I'm just looking for any comments on this patch set. I'm working on a V2 
with some small bug fixes, tidy-ups and more documentation. I would 
really like to address any further comments which might be out there 
before submitting, so if you have any comments or you intend to review 
this patch set could you please do so ASAP. I would really like to have 
any comments addressed by the end of this week as I'm out of the office 
for a couple of days early next week.

Thanks
Declan

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-10-21  9:24   ` Thomas Monjalon
  2015-10-21 11:16     ` Declan Doherty
  0 siblings, 1 reply; 115+ messages in thread
From: Thomas Monjalon @ 2015-10-21  9:24 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

Hi Declan,

2015-10-03 00:01, Declan Doherty:
> Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
> Co-authored-by: John Griffin <john.griffin@intel.com>
> Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

Common practice is to use Signed-off-by below for co-authors.

> This patch contains the initial proposed APIs and device framework for
> integrating crypto packet processing into DPDK.
> 
> features include:
>  - Crypto device configuration / management APIs
>  - Definitions of supported cipher algorithms and operations.
>  - Definitions of supported hash/authentication algorithms and
>    operations.
>  - Crypto session management APIs
>  - Crypto operation data structures and APIs allocation of crypto
>    operation structure used to specify the crypto operations to
>    be performed  on a particular mbuf.
>  - Extension of mbuf to contain crypto operation data pointer and
>    extra flags.
>  - Burst enqueue / dequeue APIs for processing of crypto operations.

It would be easier to review if features were split in separate patches.
You don't need to have a fine grain but maybe 1 patch for basic management
then 1 for the session management, 1 for the algos and another 1 for the stats.

Other comment: you've added some APIs which are not implemented (hotplug, restore).
Why not declare them later when they will be implemented?

The QuickAssist doc is not needed if the code is not submitted.

Volunteer for a sub-tree?
Thanks.

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-10-21  9:24   ` Thomas Monjalon
@ 2015-10-21 11:16     ` Declan Doherty
  0 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-21 11:16 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

On 21/10/15 10:24, Thomas Monjalon wrote:
> Hi Declan,
>
> 2015-10-03 00:01, Declan Doherty:
>> Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
>> Co-authored-by: John Griffin <john.griffin@intel.com>
>> Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>
>
> Common practice is to use Signed-off-by below for co-authors.

Cool, I didn't know that, I will change in the V2.

>
>> This patch contains the initial proposed APIs and device framework for
>> integrating crypto packet processing into DPDK.
>>
>> features include:
>>   - Crypto device configuration / management APIs
>>   - Definitions of supported cipher algorithms and operations.
>>   - Definitions of supported hash/authentication algorithms and
>>     operations.
>>   - Crypto session management APIs
>>   - Crypto operation data structures and APIs allocation of crypto
>>     operation structure used to specify the crypto operations to
>>     be performed  on a particular mbuf.
>>   - Extension of mbuf to contain crypto operation data pointer and
>>     extra flags.
>>   - Burst enqueue / dequeue APIs for processing of crypto operations.
>
> It would be easier to review if features were split in separate patches.
> You don't need to have a fine grain but maybe 1 patch for basic management
> then 1 for the session management, 1 for the algos and another 1 for the stats.

I'll take a look and see how feasible it would be to split the patches 
that way.

>
> Other comment: you've added some API which are not implemented (hotplug, restore).
> Why not declare them later when they will be implemented?
>

I'll remove these

> The QuickAssist doc is not needed if the code is not submitted.
>

There is a QuickAssist PMD included in the patch set, see patch 2/6

> Volunteer for a sub-tree?
> Thanks.
>

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH 4/6] docs: add getting started guides for multi-buffer pmd and qat pmd
  2015-10-02 23:01 ` [dpdk-dev] [PATCH 4/6] docs: add getting started guides for multi-buffer pmd and qat pmd Declan Doherty
@ 2015-10-21 11:34   ` Thomas Monjalon
  0 siblings, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-10-21 11:34 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-10-03 00:01, Declan Doherty:
>  doc/guides/cryptodevs/aesni_mb.rst |  76 ++++++++++++++++++
>  doc/guides/cryptodevs/index.rst    |  43 ++++++++++
>  doc/guides/cryptodevs/qat.rst      | 155 +++++++++++++++++++++++++++++++++++++
>  doc/guides/index.rst               |   1 +

Please avoid separate doc patches.
The good habit is to update doc and code at the same time in the same patch.
The index can be brought by the first API patch.

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v2 0/6] Crypto API and device framework
  2015-10-02 23:01 [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
                   ` (6 preceding siblings ...)
  2015-10-21  9:11 ` [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
@ 2015-10-30 12:59 ` Declan Doherty
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
                     ` (6 more replies)
  7 siblings, 7 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 12:59 UTC (permalink / raw)
  To: dev

This series of patches defines a set of application burst oriented APIs for
asynchronous symmetric cryptographic functions within DPDK. It also contains a
poll mode driver cryptographic device framework for the implementation of
crypto devices within DPDK.

In the patch set we also have included 2 reference implementations of crypto
PMDs. Currently both implementations support AES128-CBC with
HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a purely
 software PMD based on Intel's multi-buffer library, which utilises both
AES-NI instructions and vector operations to accelerate crypto operations and
the second PMD utilises Intel's Quick Assist Technology (on DH895xxC) to provide
hardware accelerated crypto operations.

The API set supports two functional modes of operation:

1, A session oriented mode. In this mode the user creates a crypto session
which defines all the immutable data required to perform a particular crypto
operation in advance, including cipher/hash algorithms and operations to be
performed, as well as the keys to be used, etc. The session is then referenced by
the crypto operation data structure which is a data structure specific to each
mbuf. It contains all mutable data about the crypto operation to be
performed, such as data offsets and lengths into the mbuf's data payload for
cipher and hash operations to be performed.

2, A session-less mode. In this mode the user is able to provision crypto
operations on an mbuf without the need to have a cached session created in
advance, but at the cost of entailing the overhead of calculating
authentication pre-computes and performing key expansions in-line with the
crypto operation. The crypto xform chain is directly attached to the op struct
in this mode, so the op struct now contains all of the immutable crypto operation
parameters that would be normally set within a session. Once all mutable and
immutable parameters are set the crypto operation data structure can be attached
to the specified mbuf and enqueued on a specified crypto device for processing.
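
As a rough sketch of the session oriented mode (illustrative only, not part
of the patch set; rte_cryptodev_session_create() is referenced by the
cryptodev documentation, but the exact signatures and the op/mbuf attach
details may differ):

	/* assumes cdev_id, key_data[16] and an mbuf m already exist */
	struct rte_crypto_xform cipher_xform = {
		.next = NULL,
		.type = RTE_CRYPTO_XFORM_CIPHER,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.key = { .data = key_data, .length = 16 },
		},
	};

	/* immutable parameters are captured once in the session */
	struct rte_cryptodev_session *session =
		rte_cryptodev_session_create(cdev_id, &cipher_xform);

	/* per packet: allocate a crypto op, set the mutable fields (data
	 * offsets and lengths, IV and digest pointers), reference the
	 * session, attach the op to the mbuf and then enqueue it */
	rte_cryptodev_enqueue_burst(cdev_id, 0, &m, 1);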

The patch set contains the following features:
- Crypto device APIs and device framework
- Implementation of a software crypto PMD based on multi-buffer library
- Implementation of a hardware crypto PMD based on Intel QAT (DH895xxC)
- Unit and performance tests which give an example of utilising the crypto APIs.
- Sample application which performs crypto operations on the IP payload of the
  packets being forwarded

Current Status:
There is no support for chained mbufs and, as mentioned above, the PMDs
have currently implemented support for AES128-CBC/AES256-CBC/AES512-CBC
and HMAC_SHA1/SHA256/SHA512.


v2: 
 - Introduces a new library to support attaching offload operations to a mbuf
 - Remove unused APIs from cryptodev
 - PMD code refactor due to new rte_mbuf_offload structure
 - General bug fixes and code tidy up


Declan Doherty (6):
  cryptodev: Initial DPDK Crypto APIs and device framework release
  mbuf_offload: library to support attaching offloads to a mbuf
  qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  aesni_mb_pmd: Initial implementation of multi buffer based crypto
    device
  app/test: add cryptodev unit and performance tests
  l2fwd-crypto: crypto

 app/test/Makefile                                  |    3 +
 app/test/test.c                                    |   92 +-
 app/test/test.h                                    |   34 +-
 app/test/test_cryptodev.c                          | 1924 ++++++++++++++++++++
 app/test/test_cryptodev.h                          |   68 +
 app/test/test_cryptodev_perf.c                     | 1449 +++++++++++++++
 app/test/test_link_bonding.c                       |    6 +-
 app/test/test_link_bonding_mode4.c                 |    7 +-
 config/common_bsdapp                               |   37 +-
 config/common_linuxapp                             |   36 +-
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 doc/guides/cryptodevs/aesni_mb.rst                 |   76 +
 doc/guides/cryptodevs/index.rst                    |   43 +
 doc/guides/cryptodevs/qat.rst                      |  188 ++
 doc/guides/index.rst                               |    1 +
 drivers/Makefile                                   |    1 +
 drivers/crypto/Makefile                            |   38 +
 drivers/crypto/aesni_mb/Makefile                   |   67 +
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |  212 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |  790 ++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  296 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  230 +++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |    3 +
 drivers/crypto/qat/Makefile                        |   63 +
 .../qat/qat_adf/adf_transport_access_macros.h      |  173 ++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            |  316 ++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         |  404 ++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            |  306 ++++
 drivers/crypto/qat/qat_adf/qat_algs.h              |  125 ++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   |  578 ++++++
 drivers/crypto/qat/qat_crypto.c                    |  547 ++++++
 drivers/crypto/qat/qat_crypto.h                    |  113 ++
 drivers/crypto/qat/qat_logs.h                      |   78 +
 drivers/crypto/qat/qat_qp.c                        |  416 +++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |    3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             |  130 ++
 examples/l2fwd-crypto/Makefile                     |   50 +
 examples/l2fwd-crypto/main.c                       | 1472 +++++++++++++++
 lib/Makefile                                       |    2 +
 lib/librte_cryptodev/Makefile                      |   60 +
 lib/librte_cryptodev/rte_crypto.h                  |  604 ++++++
 lib/librte_cryptodev/rte_cryptodev.c               | 1065 +++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |  619 +++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  529 ++++++
 lib/librte_cryptodev/rte_cryptodev_version.map     |   41 +
 lib/librte_eal/common/include/rte_common.h         |   15 +
 lib/librte_eal/common/include/rte_eal.h            |   14 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eal/common/include/rte_memory.h         |   14 +-
 lib/librte_ether/rte_ethdev.c                      |   30 -
 lib/librte_mbuf/rte_mbuf.h                         |   36 +-
 lib/librte_mbuf_offload/Makefile                   |   52 +
 lib/librte_mbuf_offload/rte_mbuf_offload.c         |  100 +
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |  289 +++
 .../rte_mbuf_offload_version.map                   |    7 +
 mk/rte.app.mk                                      |    9 +
 57 files changed, 13784 insertions(+), 80 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v2 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
@ 2015-10-30 12:59   ` Declan Doherty
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
                     ` (5 subsequent siblings)
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 12:59 UTC (permalink / raw)
  To: dev

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs allocation of crypto
   operation structure used to specify the crypto operations to
   be performed  on a particular mbuf.
 - Extension of mbuf to contain crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations.

changes from RFC:
 - Session management API changes to support specification of crypto
   transform(xform) chains using linked list of xforms.
 - Changes to the crypto operation struct as a result of session
   management changes.
 - Some movement of common MACROS shared by cryptodevs and ethdevs to
   common headers

Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                           |   10 +-
 config/common_linuxapp                         |   10 +-
 doc/api/doxy-api-index.md                      |    1 +
 doc/api/doxy-api.conf                          |    1 +
 lib/Makefile                                   |    1 +
 lib/librte_cryptodev/Makefile                  |   60 ++
 lib/librte_cryptodev/rte_crypto.h              |  604 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.c           | 1065 ++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h           |  619 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h       |  529 ++++++++++++
 lib/librte_cryptodev/rte_cryptodev_version.map |   41 +
 lib/librte_eal/common/include/rte_common.h     |   15 +
 lib/librte_eal/common/include/rte_eal.h        |   14 +
 lib/librte_eal/common/include/rte_log.h        |    1 +
 lib/librte_eal/common/include/rte_memory.h     |   14 +-
 lib/librte_ether/rte_ethdev.c                  |   30 -
 lib/librte_mbuf/rte_mbuf.h                     |   30 +-
 mk/rte.app.mk                                  |    1 +
 18 files changed, 3011 insertions(+), 35 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..8ce6af5 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -147,6 +147,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..e7b9b25 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -145,6 +145,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 9727b83..4c5c1b4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -40,6 +40,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..b434a53
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_cryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_cryptodev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..bd6748d
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,604 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_CCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation
+	 */
+	RTE_CRYPTO_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_GCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	phys_addr_t phys_addr;
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains data relating to Cipher (Encryption and Decryption)
+ * used to create a session.
+ */
+struct rte_crypto_cipher_xform {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid. */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to total length of
+	 *    the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
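+	 *
+	 *  For example (illustrative), AES-XTS with two 128-bit keys uses
+	 *  key.length = 32 (16 bytes per key), and AES-F8 with a 128-bit key
+	 *  plus a 128-bit keymask also uses key.length = 32.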
+	 */
+};
+
+/** Symmetric Authentication / Hash Algorithms */
+enum rte_crypto_auth_algorithm {
+	RTE_CRYPTO_AUTH_NULL = 1,
+	/**< NULL hash algorithm. */
+
+	RTE_CRYPTO_AUTH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_AUTH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related rte_crypto_cipher_xform structure in the
+	 * session context, or the corresponding parameter in the crypto
+	 * operation data structure's op_params parameter MUST be set for a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_AUTH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_xform structure in the session context, or
+	 * the corresponding parameter in the crypto operation data structure's
+	 * op_params parameter MUST be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	* is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	* rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	* rte_crypto_cipher_setup_data structure in the session context,  or
+	* the corresponding parameter in the crypto operation data structures
+	* op_params parameter MUST be set for a session-less crypto operation.
+	*/
+	RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_AUTH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_AUTH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_AUTH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_AUTH_SHA1,
+	/**< 128 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA1_HMAC,
+	/**< HMAC using 128 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_AUTH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Authentication / Hash Operations */
+enum rte_crypto_auth_operation {
+	RTE_CRYPTO_AUTH_OP_VERIFY,	/**< Verify authentication digest */
+	RTE_CRYPTO_AUTH_OP_GENERATE	/**< Generate authentication digest */
+};
+
+/**
+ * Authentication / Hash transform data.
+ *
+ * This structure contains data relating to an authentication/hash crypto
+ * transform. The fields op, algo and digest_length are common to all
+ * authentication transforms and MUST be set.
+ */
+struct rte_crypto_auth_xform {
+	enum rte_crypto_auth_operation op;	/**< Authentication operation type */
+	enum rte_crypto_auth_algorithm algo;	/**< Authentication algorithm selection */
+
+	struct rte_crypto_key key;		/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the caller's responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash then an error will be generated
+	 * by *rte_cryptodev_session_create* or by
+	 * *rte_cryptodev_enqueue_burst* if the session-less API is used.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_AUTH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op structure.
+	 */
+};
+
+/** Crypto transformation types */
+enum rte_crypto_xform_type {
+	RTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
+	RTE_CRYPTO_XFORM_AUTH,			/**< Authentication xform */
+	RTE_CRYPTO_XFORM_CIPHER			/**< Cipher xform  */
+};
+
+/**
+ * Crypto transform structure.
+ *
+ * This is used to specify the crypto transforms required. Multiple transforms
+ * can be chained together to specify a chain of transforms, such as
+ * authentication then cipher, or cipher then authentication. Each transform
+ * structure can hold a single transform; the type field is used to specify
+ * which transform is contained within the union. */
+struct rte_crypto_xform {
+	struct rte_crypto_xform *next; /**< next xform in chain */
+
+	enum rte_crypto_xform_type type; /**< xform type */
+	union {
+		struct rte_crypto_auth_xform auth;
+		/**< Authentication / hash xform */
+		struct rte_crypto_cipher_xform cipher;
+		/**< Cipher xform */
+	};
+};
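+
+/*
+ * Illustrative sketch (not part of the API): chaining two transforms to
+ * request AES-128-CBC encryption followed by HMAC-SHA1 digest generation.
+ * The cipher xform field names and the RTE_CRYPTO_CIPHER_OP_ENCRYPT /
+ * RTE_CRYPTO_CIPHER_AES_CBC values are assumed from the cipher transform
+ * definition earlier in this file; key and digest lengths are example
+ * values only.
+ *
+ *	struct rte_crypto_xform auth_xform = {
+ *		.next = NULL,
+ *		.type = RTE_CRYPTO_XFORM_AUTH,
+ *		.auth = {
+ *			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
+ *			.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ *			.key = { .data = auth_key, .length = 64 },
+ *			.digest_length = 20,
+ *		}
+ *	};
+ *
+ *	struct rte_crypto_xform cipher_xform = {
+ *		.next = &auth_xform,
+ *		.type = RTE_CRYPTO_XFORM_CIPHER,
+ *		.cipher = {
+ *			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+ *			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ *			.key = { .data = cipher_key, .length = 16 },
+ *		}
+ *	};
+ */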
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has session structure attached for immutable parameters or if all
+ * operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+/** Status of crypto operation */
+enum rte_crypto_op_status {
+	RTE_CRYPTO_OP_STATUS_SUCCESS,
+	/**< Operation completed successfully */
+	RTE_CRYPTO_OP_STATUS_NO_SUBMITTED,
+	/**< Operation not yet submitted to a cryptodev */
+	RTE_CRYPTO_OP_STATUS_ENQUEUED,
+	/**< Operation is enqueued on device */
+	RTE_CRYPTO_OP_STATUS_AUTH_FAILED,
+	/**< Authentication verification failed */
+	RTE_CRYPTO_OP_STATUS_INVALID_ARGS,
+	/**< Operation failed due to invalid arguments in request */
+	RTE_CRYPTO_OP_STATUS_ERROR,
+	/**< Error occurred while handling the operation */
+};
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. It is used with the rte_cryptodev_enqueue_burst() call to
+ * perform cipher, hash, or combined cipher and hash operations.
+ */
+struct rte_crypto_op {
+	enum rte_crypto_op_sess_type type;
+	enum rte_crypto_op_status status;
+
+	struct {
+		struct rte_mbuf *m;	/**< Destination mbuf */
+		uint8_t offset;		/**< Data offset */
+	} dst;
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct rte_crypto_xform *xform;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location. */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_AUTH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field should
+			  * be set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of
+			  * operation, this field specifies the start of the AAD data in
+			  * the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer that
+			  * the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field should
+			  * be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128 bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member of this structure is set this is a
+		 * pointer to the location where the digest result should be
+		 * inserted (in the case of digest generation) or where the
+		 * purported digest exists (in the case of digest
+		 * verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_auth_xform structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up for
+		 * the session in the @ref rte_crypto_auth_xform structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_AUTH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by the
+		 *   implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written in starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   space to round this up to the nearest multiple of the
+		 *   block size (16 bytes).
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+		uint32_t length;	/**< Length of additional authenticated data */
+	} additional_auth; /**< Additional authentication parameters */
+
+	struct rte_mempool *pool;	/**< mempool used to allocate crypto op */
+
+	void *user_data;		/**< opaque pointer for user data */
+};
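+
+/*
+ * Illustrative sketch: populating the mutable fields of an operation for an
+ * AES-CBC cipher over the whole payload of a single-segment mbuf "m"
+ * (assumed already filled), operating in place. "iv" is an assumed 16 byte
+ * buffer; no hash is performed here, so the to_hash fields are left at
+ * zero.
+ *
+ *	op->data.to_cipher.offset = 0;
+ *	op->data.to_cipher.length = rte_pktmbuf_data_len(m);
+ *
+ *	op->iv.data = iv;
+ *	op->iv.phys_addr = rte_mem_virt2phy(iv);
+ *	op->iv.length = 16;
+ */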
+
+
+/**
+ * Reset the fields of a crypto operation to their default values.
+ *
+ * @param op
+ *   The crypto operation to be reset.
+ */
+static inline void
+__rte_crypto_op_reset(struct rte_crypto_op *op)
+{
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+	op->dst.m = NULL;
+	op->dst.offset = 0;
+}
+
+/** Attach a session to a crypto operation */
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
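+
+/*
+ * Illustrative sketch: a session created in advance with
+ * rte_cryptodev_session_create() (using, for example, the cipher_xform
+ * from the chaining sketch above) is attached to an operation before it is
+ * enqueued. "dev_id" and "op" are assumed to be valid; the mutable fields
+ * of the operation are populated separately by the caller.
+ *
+ *	struct rte_cryptodev_session *sess =
+ *		rte_cryptodev_session_create(dev_id, &cipher_xform);
+ *
+ *	if (sess != NULL)
+ *		rte_crypto_op_attach_session(op, sess);
+ */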
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..c5610e8
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,1065 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+#include "rte_cryptodev_pmd.h"
+
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+
+struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= { NULL },
+		.nb_devs		= 0,
+		.max_devs		= RTE_CRYPTO_MAX_DEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;                /**< Callback address */
+	void *cb_arg;                           /**< Parameter for callback */
+	enum rte_cryptodev_event_type event;          /**< Interrupt event type */
+	uint32_t active;                        /**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+int
+rte_cryptodev_get_dev_id(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (strcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0 &&
+				rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED)
+			return i;
+
+	return -1;
+}
+
+uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
+uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached ==
+					RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	return dev->data->socket_id;
+}
+
+static inline int
+rte_cryptodev_data_alloc(uint8_t dev_id, struct rte_cryptodev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_cryptodev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_cryptodev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_cryptodev_data));
+
+	return 0;
+}
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached == RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_CRYPTO_MAX_DEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+	uint8_t dev_id;
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_CRYPTO_MAX_DEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (cryptodev->data == NULL) {
+		struct rte_cryptodev_data *cryptodev_data =
+				cryptodev_globals.data[dev_id];
+
+		int retval = rte_cryptodev_data_alloc(dev_id, &cryptodev_data,
+				socket_id);
+
+		if (retval < 0 || cryptodev_data == NULL)
+			return NULL;
+
+		cryptodev->data = cryptodev_data;
+
+		snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN,
+				"%s", name);
+
+		cryptodev->data->dev_id = dev_id;
+		cryptodev->data->socket_id = socket_id;
+		cryptodev->data->dev_started = 0;
+
+		cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+		cryptodev->pmd_type = type;
+
+		cryptodev_globals.nb_devs++;
+	}
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	int ret;
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	ret = rte_cryptodev_close(cryptodev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE, rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%u device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/* Register PCI driver for physical device initialisation during
+	 * PCI probing */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+
+	return 0;
+}
+
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs, int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	void **qp;
+	unsigned i;
+
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	CDEV_LOG_DEBUG("Setup %d queues pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > (dev_info.max_queue_pairs)) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return -EINVAL;
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+	} else { /* re-configure */
+		int ret;
+		uint16_t old_nb_queues = dev->data->nb_queue_pairs;
+		qp = dev->data->queue_pairs;
+		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release, -ENOTSUP);
+
+		for (i = nb_qpairs; i < old_nb_queues; i++) {
+			ret = (*dev->dev_ops->queue_pair_release)(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	return rte_crypto_session_pool_create(dev, config->session_mp.nb_objs,
+			config->session_mp.cache_size, config->socket_id);
+}
+
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int retval;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -1;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		CDEV_LOG_ERR("Device %" PRIu8 " must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	/* We can't close the device if there are outstanding sessions in
+	 * existence */
+	if (dev->data->session_pool != NULL) {
+		if (!rte_mempool_full(dev->data->session_pool)) {
+			CDEV_LOG_ERR("dev_id=%u close failed, session mempool "
+					"has sessions still in use, free "
+					"all sessions before calling close",
+					(unsigned)dev_id);
+			return -EBUSY;
+		}
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	retval = (*dev->dev_ops->dev_close)(dev);
+
+	if (retval < 0)
+		return retval;
+
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+static void
+rte_crypto_session_init(struct rte_mempool *mp,
+		void *opaque_arg,
+		void *_sess,
+		__rte_unused unsigned i)
+{
+	struct rte_cryptodev_session *sess = _sess;
+	struct rte_cryptodev *dev = opaque_arg;
+
+	memset(sess, 0, mp->elt_size);
+
+	sess->dev_id = dev->data->dev_id;
+	sess->type = dev->dev_type;
+	sess->mp = mp;
+
+	if (dev->dev_ops->session_initialize)
+		(*dev->dev_ops->session_initialize)(mp, sess->_private);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id)
+{
+	char mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	unsigned priv_sess_size;
+
+	unsigned n = snprintf(mp_name, sizeof(mp_name), "cdev_%d_sess_mp",
+			dev->data->dev_id);
+	if (n >= sizeof(mp_name)) {
+		CDEV_LOG_ERR("Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);
+	priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
+	if (priv_sess_size == 0) {
+		CDEV_LOG_ERR("%s returned and invalid private session size ",
+						dev->data->name);
+		return -ENOMEM;
+	}
+
+	unsigned elt_size = sizeof(struct rte_cryptodev_session) + priv_sess_size;
+
+	dev->data->session_pool = rte_mempool_lookup(mp_name);
+	if (dev->data->session_pool != NULL) {
+		if (dev->data->session_pool->elt_size != elt_size ||
+				dev->data->session_pool->cache_size < obj_cache_size ||
+				dev->data->session_pool->size < nb_objs) {
+
+			CDEV_LOG_ERR("%s mempool already exists with different "
+					"initialization parameters", mp_name);
+			dev->data->session_pool = NULL;
+			return -ENOMEM;
+		}
+	} else {
+		dev->data->session_pool = rte_mempool_create(
+				mp_name, /* mempool name */
+				nb_objs, /* number of elements*/
+				elt_size, /* element size*/
+				obj_cache_size, /* Cache size*/
+				0, /* private data size */
+				NULL, /* obj initialization constructor */
+				NULL, /* obj initialization constructor arg */
+				rte_crypto_session_init, /* obj constructor */
+				dev, /* obj constructor arg */
+				socket_id, /* socket id */
+				0); /* flags */
+
+		if (dev->data->session_pool == NULL) {
+			CDEV_LOG_ERR("%s mempool allocation failed", mp_name);
+			return -ENOMEM;
+		}
+	}
+
+	CDEV_LOG_DEBUG("%s mempool created!", mp_name);
+	return 0;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id, struct rte_crypto_xform *xform)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_session *sess;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);
+
+	/* Allocate a session structure from the session pool; the
+	 * session_configure op is checked first so a missing op cannot leak
+	 * the mempool object */
+	if (rte_mempool_get(dev->data->session_pool, (void **)&sess)) {
+		CDEV_LOG_ERR("Couldn't get object from session mempool");
+		return NULL;
+	}
+
+	if (dev->dev_ops->session_configure(dev, xform, sess->_private) ==
+			NULL) {
+		CDEV_LOG_ERR("dev_id %d failed to configure session details",
+				dev_id);
+
+		/* Return session to mempool */
+		rte_mempool_put(sess->mp, (void *)sess);
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id, struct rte_cryptodev_session *sess)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return sess;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check the session belongs to this device type */
+	if (sess->type != dev->dev_type)
+		return sess;
+
+	/* Let device implementation clear session material */
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
+	dev->dev_ops->session_clear(dev, (void *)sess->_private);
+
+	/* Return session to mempool */
+	rte_mempool_put(sess->mp, (void *)sess);
+
+	return NULL;
+}
+
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..9b6ee14
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,619 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "stddef.h"
+
+#include "rte_crypto.h"
+#include "rte_dev.h"
+
+#define CRYPTODEV_NAME_NULL_PMD		("cryptodev_null_pmd")
+/**< Null crypto PMD device name */
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
+	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...)					\
+		RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...)				\
+		RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+				dev, __func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)		\
+
+#define CDEV_PMD_TRACE(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n",		\
+				dev, __func__, ## args)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+/**  Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+	uint16_t max_queue_pairs;		/**< Maximum number of queue
+						* pairs supported by device.
+						*/
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User specified parameter to be passed to the user's
+ *			callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure
+ * holds RDTSC-based timing measurements of crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;		/**< Accumulated time processing operations */
+	uint64_t t_min;			/**< Min time */
+	uint64_t t_max;			/**< Max time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf;	/**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Option arguments for the device.
+ *
+ * @return
+ * - 0 on successful creation of the cryptodev.
+ * - A negative value on failure.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
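+
+/*
+ * Illustrative sketch: creating an AES-NI multi buffer virtual device. The
+ * argument string format is PMD specific; the keys shown here are assumed
+ * example values only.
+ *
+ *	if (rte_cryptodev_create_vdev(CRYPTODEV_NAME_AESNI_MB_PMD,
+ *			"max_nb_queue_pairs=2,socket_id=0") < 0)
+ *		rte_exit(EXIT_FAILURE, "Failed to create virtual cryptodev");
+ */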
+
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+extern int
+rte_cryptodev_get_dev_id(const char *name);
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+extern uint8_t
+rte_cryptodev_count(void);
+
+/**
+ * Get the number of crypto devices of a given type that have been
+ * successfully initialised.
+ *
+ * @param	type	Crypto device type.
+ *
+ * @return
+ *   - The number of usable crypto devices of the specified type.
+ */
+extern uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+
+/**
+ * Return the NUMA socket to which a device is connected.
+ *
+ * @param dev_id
+ *   The identifier of the device
+ * @return
+ *   The NUMA socket id to which the device is connected, or a default of
+ *   zero if the socket could not be determined. -1 is returned if the
+ *   dev_id value is out of range.
+ */
+extern int
+rte_cryptodev_socket_id(uint8_t dev_id);
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;	/**< Number of queue pairs to configure
+					* on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< Per-lcore object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure to apply.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
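+
+/*
+ * Illustrative sketch: configuring a device with a single queue pair and a
+ * session mempool of 2048 objects; all values shown are assumed example
+ * values, not requirements.
+ *
+ *	struct rte_cryptodev_config conf = {
+ *		.socket_id = rte_cryptodev_socket_id(dev_id),
+ *		.nb_queue_pairs = 1,
+ *		.session_mp = { .nb_objs = 2048, .cache_size = 64 },
+ *	};
+ *
+ *	if (rte_cryptodev_configure(dev_id, &conf) < 0)
+ *		rte_exit(EXIT_FAILURE, "Failed to configure cryptodev %u",
+ *				dev_id);
+ */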
+
+/**
+ * Start a device.
+ *
+ * The device start step is the last one and consists of setting the
+ * configured offload features and starting the transmit and the receive
+ * units of the device.
+ * On success, all basic functions exported by the API (link status,
+ * receive/transmit, and so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start()
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ */
+extern int
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a queue pair for a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pairs to set up. The
+ *				value must be in the range [0, nb_queue_pair
+ *				- 1] previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the DMA memory allocated for the receive
+ *				queue pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
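+
+/*
+ * Illustrative sketch: setting up every queue pair requested in the device
+ * configuration ("conf" as in the configuration sketch above; the
+ * descriptor count of 2048 is an assumed example value).
+ *
+ *	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };
+ *	uint16_t qp_id;
+ *
+ *	for (qp_id = 0; qp_id < conf.nb_queue_pairs; qp_id++)
+ *		if (rte_cryptodev_queue_pair_setup(dev_id, qp_id, &qp_conf,
+ *				SOCKET_ID_ANY) < 0)
+ *			rte_exit(EXIT_FAILURE, "Failed to setup queue pair %u",
+ *					qp_id);
+ */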
+
+/**
+ * Start a specified queue pair of a device. It is used
+ * when deferred_start flag of the specified queue is true.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop specified queue pair of a device
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
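+
+/*
+ * Illustrative sketch: polling the device counters (a minimal example;
+ * PRIu64 assumes inttypes.h is included by the caller).
+ *
+ *	struct rte_cryptodev_stats stats;
+ *
+ *	if (rte_cryptodev_stats_get(dev_id, &stats) == 0)
+ *		printf("enqueued %"PRIu64", dequeued %"PRIu64"\n",
+ *				stats.enqueued_count, stats.dequeued_count);
+ */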
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		The event type of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
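+
+/*
+ * Illustrative sketch: registering a handler for error events ("app_ctx"
+ * is an assumed application context pointer).
+ *
+ *	static void
+ *	error_cb(uint8_t dev_id, enum rte_cryptodev_event_type event,
+ *			void *cb_arg)
+ *	{
+ *		if (event == RTE_CRYPTODEV_EVENT_ERROR)
+ *			printf("error event on cryptodev %u\n", dev_id);
+ *	}
+ *
+ *	rte_cryptodev_callback_register(dev_id, RTE_CRYPTODEV_EVENT_ERROR,
+ *			error_cb, app_ctx);
+ */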
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		The event type of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;	/**< Pointer to PMD receive function. */
+	enqueue_pkt_burst_t enqueue_burst;	/**< Pointer to PMD transmit function. */
+
+	const struct rte_cryptodev_driver *driver;	/**< Driver for this device */
+	struct rte_cryptodev_data *data;		/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;		/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;			/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;		/**< Crypto device type */
+	enum pmd_type pmd_type;				/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;	/**< Flag indicating the device is attached */
+} __rte_cache_aligned;
+
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+/**
+ *
+ * The data part, with no function pointers, associated with each crypto device.
+ *
+ * This structure is safe to place in shared memory to be common among different
+ * processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;				/**< Device ID for this instance */
+	uint8_t socket_id;			/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];	/**< Unique identifier name */
+
+	uint8_t dev_started : 1;		/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;	/**< Session memory pool */
+	void **queue_pairs;		/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;	/**< Number of device queue pairs. */
+
+	void *dev_private;		/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+extern struct rte_cryptodev *rte_cryptodevs;
+/**
+ *
+ * Dequeue a burst of processed packets from a queue of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain in the input queue. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_cryptodev_dequeue_burst()
+ * function until a value less than *nb_pkts* is returned.
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pairs - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
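
The "retrieve as many processed packets as possible" policy described
above reduces to a short loop; a sketch:

	#define BURST_SZ 32

	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb;

	do {
		nb = rte_cryptodev_dequeue_burst(dev_id, qp_id, pkts, BURST_SZ);
		/* hand the nb processed mbufs to the next pipeline stage */
	} while (nb == BURST_SZ);
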
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place packets
+ * on the queue pair *qp_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of packets
+ * it actually enqueued. A return value equal to *nb_pkts* means that all
+ * packets have been enqueued.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which packets
+ *				are to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to be processed.
+ * @param	nb_pkts		The number of packets to enqueue.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue is full.
+ * The number of packets is 0 if the device hasn't been started.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return (*dev->enqueue_burst)(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
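
Because the return value can be less than *nb_pkts* when the queue pair
fills, callers usually retry the remainder; a sketch (production code
should bound the retries, since a stopped device returns 0
indefinitely):

	uint16_t sent = 0;

	while (sent < nb_pkts)
		sent += rte_cryptodev_enqueue_burst(dev_id, qp_id,
				&pkts[sent], nb_pkts - sent);
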
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialise the immutable
+ * parameters of a symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst function is
+ * used.  Each mbuf should contain a reference to the session
+ * pointer returned from this function within its crypto_op if a
+ * session-based operation is being provisioned. Memory to contain the
+ * session information is allocated from within a mempool managed by the
+ * cryptodev.
+ *
+ * The rte_cryptodev_session_free must be called to free allocated
+ * memory when the session is no longer required.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	xform		Crypto transform chain.
+ *
+ * @return
+ *  Pointer to the created session or NULL
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_xform *xform);
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ *
+ * @return
+ *   NULL on successful freeing of session.
+ *   Session pointer on failure to free session.
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
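
A session lifecycle sketch; the rte_crypto_xform field names (next,
type and the per-type unions) are assumed from rte_crypto.h earlier in
this patch:

	struct rte_crypto_xform cipher_xform = {
		.next = NULL,
		.type = RTE_CRYPTO_XFORM_CIPHER,
		/* cipher algorithm, operation and key set here */
	};

	struct rte_cryptodev_session *sess =
		rte_cryptodev_session_create(dev_id, &cipher_xform);
	if (sess == NULL)
		return -1;	/* device could not build the session */

	/* ... reference sess from each mbuf's crypto op and enqueue ... */

	sess = rte_cryptodev_session_free(dev_id, sess);
	/* sess is NULL if the free succeeded */
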
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..36191a6
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,529 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These APIs are for use by crypto PMDs only; user applications should
+ * not call them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+
+struct rte_cryptodev_session {
+	struct {
+		uint8_t dev_id;
+		enum rte_cryptodev_type type;
+		struct rte_mempool *mp;
+	} __rte_aligned(8);
+
+	char _private[];
+};
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ *			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver  *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
+
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;	/**< Device information array */
+	struct rte_cryptodev_data *data[RTE_CRYPTO_MAX_DEVS];	/**< Device private data */
+	uint8_t nb_devs;			/**< Number of devices found */
+	uint8_t max_devs;			/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		if (rte_cryptodev_globals->devs[i].attached == RTE_CRYPTODEV_ATTACHED &&
+				strcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0)
+			return &rte_cryptodev_globals->devs[i];
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that a crypto device index refers to a valid, attached crypto
+ * device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - 1 if the device index is valid, 0 otherwise.
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures.
+ */
+extern struct rte_cryptodev *rte_cryptodevs;
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ * Function used to configure a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ * @return
+ * - 0 on success.
+ * - EAGAIN if can't close as device is busy
+ */
+typedef int (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return
+ * - 0 on success.
+ * - EAGAIN if can't close as device is busy
+ */
+typedef int (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		number of sessions objects in mempool
+ * @param	obj_cache	lcore object cache size, see *rte_ring_create*
+ * @param	socket_id	Socket id to allocate mempool on.
+ *
+ * @return
+ * - 0 on success.
+ * - A negative value on failure.
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+
+/**
+ * Get the size of a cryptodev session
+ *
+ * @param	dev		Crypto device pointer
+ *
+ * @return
+ *  - On success returns the size of the session structure for device
+ *  - On failure returns 0
+ */
+typedef unsigned (*cryptodev_get_session_private_size_t)(
+		struct rte_cryptodev *dev);
+
+/**
+ * Initialise the private data of a crypto session when the session
+ * object is first allocated from the session mempool.
+ *
+ * @param	mempool		Mempool the session was allocated from
+ * @param	session_private	Pointer to the session's private data
+ */
+typedef void (*cryptodev_initialize_session_t)(struct rte_mempool *mempool,
+		void *session_private);
+
+/**
+ * Configure a Crypto session on a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	xform		Single or chain of crypto xforms
+ * @param	priv_sess	Pointer to cryptodev's private session structure
+ *
+ * @return
+ *  - Returns private session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef void * (*cryptodev_configure_session_t)(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+/**
+ * Free Crypto session.
+ * @param	session		Cryptodev session structure to free
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		void *session_private);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;		/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;		/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;		/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;		/**< Get count of the queue pairs. */
+
+	cryptodev_get_session_private_size_t session_get_size;	/**< Return size of private session data. */
+	cryptodev_initialize_session_t session_initialize;	/**< Initialization function for private session data */
+	cryptodev_configure_session_t session_configure;	/**< Configure a Crypto session. */
+	cryptodev_free_session_t session_clear;		/**< Clear a Crypto sessions private data. */
+};
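
A hypothetical PMD wires its callbacks into this table roughly as
follows; the foo_* names are illustrative, and leaving optional ops
unset (NULL) assumes the library guards each dispatch with the
FUNC_PTR_OR_ERR_RET-style checks introduced elsewhere in this patch:

	static struct rte_cryptodev_ops foo_crypto_ops = {
		.dev_configure		= foo_dev_configure,
		.dev_start		= foo_dev_start,
		.dev_stop		= foo_dev_stop,
		.dev_close		= foo_dev_close,
		.dev_infos_get		= foo_dev_info_get,
		.stats_get		= foo_stats_get,
		.stats_reset		= foo_stats_reset,
		.queue_pair_setup	= foo_qp_setup,
		.queue_pair_release	= foo_qp_release,
		.session_get_size	= foo_session_size,
		.session_configure	= foo_session_configure,
		.session_clear		= foo_session_clear,
	};
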
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Pointer to the allocated slot in the crypto devices array for the
+ *     new device.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int  socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a) register itself as a PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b) complete the initialization sequence if the device is a virtual
+ *		device by calling the rte_cryptodev_init() directly passing a
+ *		NULL parameter for the rte_pci_device structure.
+ *
+ *   @param crypto_drv	crypto_driver structure associated with the crypto
+ *					driver.
+ *   @param type		pmd type
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
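
For case a) above, a physical-device PMD would register roughly as
below; the foo_* names are illustrative and the PMD_PDEV value is
assumed from the pmd_type enum used elsewhere in this patch:

	static struct rte_cryptodev_driver foo_crypto_drv = {
		.pci_drv = {
			.name = "rte_foo_crypto_pmd",
			/* .id_table and .drv_flags set here */
		},
		.dev_private_size = sizeof(struct foo_crypto_private),
		.cryptodev_init = foo_cryptodev_init,
	};

	rte_cryptodev_pmd_driver_register(&foo_crypto_drv, PMD_PDEV);
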
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
new file mode 100644
index 0000000..979d7eb
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -0,0 +1,41 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodevs;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_close;
+	rte_cryptodev_count;
+	rte_cryptodev_count_devtype;
+	rte_cryptodev_configure;
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_enqueue_burst;
+	rte_cryptodev_dequeue_burst;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_info_get;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+	rte_cryptodev_socket_id;
+	rte_cryptodev_start;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_stop;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+	rte_cryptodev_queue_pair_count;
+
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_attach;
+	rte_cryptodev_pmd_callback_process;
+	rte_cryptodev_pmd_detach;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_get_dev;
+	rte_cryptodev_pmd_get_named_dev;
+	rte_cryptodev_pmd_is_valid_dev;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_socket_id;
+	rte_cryptodev_pmd_virtual_dev_init;
+
+	local: *;
+};
diff --git a/lib/librte_eal/common/include/rte_common.h b/lib/librte_eal/common/include/rte_common.h
index 3121314..bae4054 100644
--- a/lib/librte_eal/common/include/rte_common.h
+++ b/lib/librte_eal/common/include/rte_common.h
@@ -270,8 +270,23 @@ rte_align64pow2(uint64_t v)
 		_a > _b ? _a : _b; \
 	})
 
+
 /*********** Other general functions / macros ********/
 
+#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		RTE_LOG(ERR, PMD, "Function not supported"); \
+		return retval; \
+	} \
+} while (0)
+
+#define FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		RTE_LOG(ERR, PMD, "Function not supported"); \
+		return; \
+	} \
+} while (0)
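
These macros guard each dev_ops dispatch in the new cryptodev library,
mirroring their prior use in rte_ethdev.c; a sketch of the pattern:

	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
	(*dev->dev_ops->stats_get)(dev, stats);
	return 0;
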
+
 #ifdef __SSE2__
 #include <emmintrin.h>
 /**
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index f36a792..948cc0a 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -115,6 +115,20 @@ enum rte_lcore_role_t rte_eal_lcore_role(unsigned lcore_id);
  */
 enum rte_proc_type_t rte_eal_process_type(void);
 
+#define PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_LOG(ERR, PMD, "Cannot run in secondary processes"); \
+		return; \
+	} \
+} while (0)
+
+#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_LOG(ERR, PMD, "Cannot run in secondary processes"); \
+		return retval; \
+	} \
+} while (0)
+
 /**
  * Request iopl privilege for all RPL.
  *
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..40e8d43 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment.
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -104,7 +114,7 @@ struct rte_memseg {
 	 /**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index f593f6e..16fde77 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -77,36 +77,6 @@
 #define PMD_DEBUG_TRACE(fmt, args...)
 #endif
 
-/* Macros for checking for restricting functions to primary instance only */
-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define PROC_PRIMARY_OR_RET() do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for invalid function pointers in dev_ops structure */
-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
-	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define FUNC_PTR_OR_RET(func) do { \
-	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
-		return; \
-	} \
-} while (0)
-
 /* Macros to check for valid port */
 #define VALID_PORTID_OR_ERR_RET(port_id, retval) do {		\
 	if (!rte_eth_dev_is_valid_port(port_id)) {		\
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4a93189..710086a 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -98,6 +98,7 @@ extern "C" {
 #define PKT_RX_FDIR_ID       (1ULL << 13) /**< FD id reported if FDIR match. */
 #define PKT_RX_FDIR_FLX      (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
 #define PKT_RX_QINQ_PKT      (1ULL << 15)  /**< RX packet with double VLAN stripped. */
+
 /* add new RX flags here */
 
 /* add new TX flags here */
@@ -105,7 +106,7 @@ extern "C" {
 /**
  * Second VLAN insertion (QinQ) flag.
  */
-#define PKT_TX_QINQ_PKT    (1ULL << 49)   /**< TX packet with double VLAN inserted. */
+#define PKT_TX_QINQ_PKT		(1ULL << 49) /**< TX packet with double VLAN inserted. */
 
 /**
  * TCP segmentation offload. To enable this offload feature for a
@@ -1622,6 +1623,33 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address of the data in the mbuf.
+ *
+ * Before using this macro, the user must ensure that the data length of
+ * the mbuf is large enough to read at the given offset.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) ((phys_addr_t)((char *)(m)->buf_physaddr + (m)->data_off) + (o))
+
+/**
+ * A macro that returns the physical address of the start of the data
+ * in the mbuf.
+ *
+ * Before using this macro, the user must ensure that the mbuf's data
+ * length is large enough to read its data.
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
+
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
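
As a usage sketch, a crypto PMD can use the new macro to obtain the
physical address of, for example, a digest placed at a known offset
within the mbuf data:

	phys_addr_t digest_phys =
		rte_pktmbuf_mtophys_offset(m, digest_offset);
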
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9e1909e..80f68bb 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -114,6 +114,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v2 2/6] mbuf_offload: library to support attaching offloads to a mbuf
  2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-10-30 12:59   ` Declan Doherty
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
                     ` (4 subsequent siblings)
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 12:59 UTC (permalink / raw)
  To: dev

This library adds support for attaching a chain of offload operations
to an mbuf. It contains the definition of the rte_mbuf_offload structure
as well as helper functions for attaching offloads to mbufs and mempool
management functions.

This initial implementation supports attaching multiple offload
operations to a single mbuf, but only a single offload operation of a
specific type can be attached to that mbuf.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |   6 +
 config/common_linuxapp                             |   6 +
 lib/Makefile                                       |   1 +
 lib/librte_mbuf/rte_mbuf.h                         |   6 +
 lib/librte_mbuf_offload/Makefile                   |  52 ++++
 lib/librte_mbuf_offload/rte_mbuf_offload.c         | 100 +++++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         | 289 +++++++++++++++++++++
 .../rte_mbuf_offload_version.map                   |   7 +
 mk/rte.app.mk                                      |   1 +
 9 files changed, 468 insertions(+)
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 8ce6af5..96d9d26 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -320,6 +320,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index e7b9b25..c113c88 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -328,6 +328,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/lib/Makefile b/lib/Makefile
index 4c5c1b4..ef172ea 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
+DIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += librte_mbuf_offload
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 710086a..aa578e3 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -729,6 +729,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque rte_mbuf_offload structure declaration */
+struct rte_mbuf_offload;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -842,6 +845,9 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+
+	/** Chain of offload operations to perform on mbuf */
+	struct rte_mbuf_offload *offload_ops;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
diff --git a/lib/librte_mbuf_offload/Makefile b/lib/librte_mbuf_offload/Makefile
new file mode 100644
index 0000000..acdb449
--- /dev/null
+++ b/lib/librte_mbuf_offload/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mbuf_offload.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+EXPORT_MAP := rte_mbuf_offload_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) := rte_mbuf_offload.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)-include := rte_mbuf_offload.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.c b/lib/librte_mbuf_offload/rte_mbuf_offload.c
new file mode 100644
index 0000000..5c0c9dd
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.c
@@ -0,0 +1,100 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+
+#include "rte_mbuf_offload.h"
+
+/** Initialize rte_mbuf_offload structure */
+static void
+rte_pktmbuf_offload_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_mbuf_offload *ol = _op_data;
+
+	memset(_op_data, 0, mp->elt_size);
+
+	ol->type = RTE_PKTMBUF_OL_NOT_SPECIFIED;
+	ol->mp = mp;
+}
+
+
+struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id)
+{
+	struct rte_pktmbuf_offload_pool_private *priv;
+	unsigned elt_size = sizeof(struct rte_mbuf_offload) + priv_size;
+
+
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+
+	if (mp != NULL) {
+		priv = (struct rte_pktmbuf_offload_pool_private *)
+				rte_mempool_get_priv(mp);
+
+		if (priv->offload_priv_size <  priv_size ||
+				mp->elt_size != elt_size ||
+				mp->cache_size < cache_size ||
+				mp->size < size) {
+			mp = NULL;
+			return NULL;
+		}
+		return mp;
+	}
+
+	mp = rte_mempool_create(
+			name,
+			size,
+			elt_size,
+			cache_size,
+			sizeof(struct rte_pktmbuf_offload_pool_private),
+			NULL,
+			NULL,
+			rte_pktmbuf_offload_init,
+			NULL,
+			socket_id,
+			0);
+
+	if (mp == NULL)
+		return NULL;
+
+	priv = (struct rte_pktmbuf_offload_pool_private *)
+			rte_mempool_get_priv(mp);
+
+	priv->offload_priv_size = priv_size;
+	return mp;
+}
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
new file mode 100644
index 0000000..0a59667
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -0,0 +1,289 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright 2014 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MBUF_OFFLOAD_H_
+#define _RTE_MBUF_OFFLOAD_H_
+
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** packet mbuf offload operation types */
+enum rte_mbuf_ol_op_type {
+	RTE_PKTMBUF_OL_NOT_SPECIFIED = 0,
+	/**< Off-load not specified */
+	RTE_PKTMBUF_OL_CRYPTO
+	/**< Crypto offload operation */
+};
+
+/**
+ * Generic packet mbuf offload
+ * This is used to specify an offload operation to be performed on an rte_mbuf.
+ * Multiple offload operations can be chained to the same mbuf, but only a
+ * single offload operation of a particular type can be in the chain.
+ */
+struct rte_mbuf_offload {
+	struct rte_mbuf_offload *next;	/**< next offload in chain */
+	struct rte_mbuf *m;		/**< mbuf offload is attached to */
+	struct rte_mempool *mp;		/**< mempool offload allocated from */
+
+	enum rte_mbuf_ol_op_type type;	/**< offload type */
+	union {
+		struct rte_crypto_op crypto;	/**< Crypto operation */
+	} op;
+};
+
+/** Private data structure belonging to a packet mbuf offload mempool */
+struct rte_pktmbuf_offload_pool_private {
+	uint16_t offload_priv_size;
+	/**< Size of private area in each mbuf_offload. */
+};
+
+
+/**
+ * Creates a mempool of rte_mbuf_offload objects
+ *
+ * @param	name		mempool name
+ * @param	size		number of objects in mempool
+ * @param	cache_size	cache size of objects for each core
+ * @param	priv_size	size of private data to be allocated with each
+ *				rte_mbuf_offload object
+ * @param	socket_id	Socket on which to allocate mempool objects
+ *
+ * @return
+ * - On success returns a valid mempool of rte_mbuf_offload objects
+ * - On failure return NULL
+ */
+extern struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id);
+
+
+/**
+ * Returns private data size allocated with each rte_mbuf_offload object by
+ * the mempool
+ *
+ * @param	mpool	rte_mbuf_offload mempool
+ *
+ * @return	private data size
+ */
+static inline uint16_t
+__rte_pktmbuf_offload_priv_size(struct rte_mempool *mpool)
+{
+	struct rte_pktmbuf_offload_pool_private *priv =
+			rte_mempool_get_priv(mpool);
+
+	return priv->offload_priv_size;
+}
+
+/**
+ * Get specified off-load operation type from mbuf.
+ *
+ * @param	m		packet mbuf.
+ * @param	type		offload operation type requested.
+ *
+ * @return
+ * - On success returns rte_mbuf_offload pointer
+ * - On failure returns NULL
+ *
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol = m->offload_ops;
+
+	if (m->offload_ops != NULL && m->offload_ops->type == type)
+		return ol;
+
+	ol = m->offload_ops;
+	while (ol != NULL) {
+		if (ol->type == type)
+			return ol;
+
+		ol = ol->next;
+	}
+
+	return ol;
+}
+
+/**
+ * Attach a rte_mbuf_offload to a mbuf. We only support a single offload of any
+ * one type in our chain of offloads.
+ *
+ * @param	m	packet mbuf.
+ * @param	ol	rte_mbuf_offload structure to be attached
+ *
+ * @returns
+ * - On success returns the pointer to the offload we just added
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
+{
+	struct rte_mbuf_offload **ol_last;
+
+	for (ol_last = &m->offload_ops;	ol_last[0] != NULL;
+			ol_last = &ol_last[0]->next)
+		if (ol_last[0]->type == ol->type)
+			return NULL;
+
+	ol_last[0] = ol;
+	ol_last[0]->m = m;
+	ol_last[0]->next = NULL;
+
+	return ol_last[0];
+}
+
+
+/** Resets a rte_mbuf_offload to its default parameters */
+static inline void
+__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
+		enum rte_mbuf_ol_op_type type)
+{
+	ol->m = NULL;
+	ol->type = type;
+
+	switch (type) {
+	case RTE_PKTMBUF_OL_CRYPTO:
+		__rte_crypto_op_reset(&ol->op.crypto); break;
+	default:
+		break;
+	}
+}
+
+/** Allocate rte_mbuf_offload from mempool */
+static inline struct rte_mbuf_offload *
+__rte_pktmbuf_offload_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_mbuf_offload *)buf;
+}
+
+/**
+ * Allocate a rte_mbuf_offload with a specified operation type from
+ * rte_mbuf_offload mempool
+ *
+ * @param	mpool		rte_mbuf_offload mempool
+ * @param	type		offload operation type
+ *
+ * @returns
+ * - On success returns a valid rte_mbuf_offload structure
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_alloc(struct rte_mempool *mpool,
+		enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol = __rte_pktmbuf_offload_raw_alloc(mpool);
+
+	if (ol != NULL)
+		__rte_pktmbuf_offload_reset(ol, type);
+
+	return ol;
+}
+
+/**
+ * free rte_mbuf_offload structure
+ */
+static inline void
+rte_pktmbuf_offload_free(struct rte_mbuf_offload *ol)
+{
+	if (ol->mp != NULL)
+		rte_mempool_put(ol->mp, ol);
+}
+
+/**
+ * Checks if the private data of a rte_mbuf_offload has enough capacity
+ * for the requested size
+ *
+ * @returns
+ * - if sufficient space available returns pointer to start of private data
+ * - if insufficient space returns NULL
+ */
+static inline void *
+__rte_pktmbuf_offload_check_priv_data_size(struct rte_mbuf_offload *ol,
+		uint16_t size)
+{
+	uint16_t priv_size;
+
+	if (likely(ol->mp != NULL)) {
+		priv_size = __rte_pktmbuf_offload_priv_size(ol->mp);
+
+		if (likely(priv_size >= size))
+			return (void *)(ol + 1);
+	}
+	return NULL;
+}
+
+/**
+ * Allocate space for crypto xforms in the private data space of the
+ * rte_mbuf_offload. This also defaults the crypto xform type and configures
+ * the chaining of the xform in the crypto operation
+ *
+ * @return
+ * - On success returns pointer to first crypto xform in crypto operations chain
+ * - On failure returns NULL
+ */
+static inline struct rte_crypto_xform *
+rte_pktmbuf_offload_alloc_crypto_xforms(struct rte_mbuf_offload *ol,
+		unsigned nb_xforms)
+{
+	struct rte_crypto_xform *xform;
+	void *priv_data;
+	uint16_t size;
+
+	size = sizeof(struct rte_crypto_xform) * nb_xforms;
+	priv_data = __rte_pktmbuf_offload_check_priv_data_size(ol, size);
+
+	if (priv_data == NULL)
+		return NULL;
+
+	ol->op.crypto.xform = xform = (struct rte_crypto_xform *)priv_data;
+
+	do {
+		xform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;
+		xform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;
+	} while (xform);
+
+	return ol->op.crypto.xform;
+}
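
Putting the helpers together, a typical flow creates a pool sized for
the xform chain, allocates and populates an offload, and attaches it to
an mbuf; the pool name and sizes here are illustrative:

	struct rte_mempool *ol_pool = rte_pktmbuf_offload_pool_create(
			"crypto_ol_pool", 8192, 128,
			2 * sizeof(struct rte_crypto_xform), rte_socket_id());

	struct rte_mbuf_offload *ol =
		rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);
	if (ol == NULL)
		return -1;

	struct rte_crypto_xform *xform =
		rte_pktmbuf_offload_alloc_crypto_xforms(ol, 2);
	if (xform == NULL) {
		rte_pktmbuf_offload_free(ol);
		return -1;
	}
	/* fill in xform[0] (e.g. cipher) and xform->next (e.g. auth) */

	if (rte_pktmbuf_offload_attach(m, ol) == NULL)
		rte_pktmbuf_offload_free(ol);	/* this type already attached */
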
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MBUF_OFFLOAD_H_ */
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload_version.map b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
new file mode 100644
index 0000000..3d3b06a
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
@@ -0,0 +1,7 @@
+DPDK_2.2 {
+	global:
+
+	rte_pktmbuf_offload_pool_create;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 80f68bb..9b4aed3 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -112,6 +112,7 @@ ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),n)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)   += -lrte_mbuf_offload
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v2 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-10-30 12:59   ` Declan Doherty
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 12:59 UTC (permalink / raw)
  To: dev

This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.

This patch depends on a QAT PF driver for device initialization. See 
the file docs/guides/cryptodevs/qat.rst for configuration details

This patch supports a limited subset of QAT device functionality,
currently supporting chaining of cipher and hash operations for the
following algorithms:

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC

Some limitations of this patchset, which shall be addressed in a
subsequent release:
 - Chained mbufs are not supported.
 - Hash only is not supported.
 - Cipher only is not supported.
 - Only in-place is currently supported (destination address is
   the same as source address).
 - Only supports session-oriented API implementation (session-less
   APIs are not supported).

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
---
 config/common_bsdapp                               |  14 +
 config/common_linuxapp                             |  14 +
 doc/guides/cryptodevs/index.rst                    |  42 ++
 doc/guides/cryptodevs/qat.rst                      | 188 +++++++
 doc/guides/index.rst                               |   1 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  37 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 173 ++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 +++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 ++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 306 +++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 125 +++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 578 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 547 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    | 113 ++++
 drivers/crypto/qat/qat_logs.h                      |  78 +++
 drivers/crypto/qat/qat_qp.c                        | 416 +++++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 130 +++++
 mk/rte.app.mk                                      |   3 +
 21 files changed, 3552 insertions(+)
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 96d9d26..02f10a3 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -155,6 +155,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_MAX_QAT_SESSIONS=200
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index c113c88..cae80a5 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -153,6 +153,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..1c31697
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,42 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+====================================
+
+|today|
+
+
+**Contents**
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..25fb83d
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,188 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator. The QAT PMD has
+currently been tested on Fedora 21 64-bit with gcc and on the 4.3 kernel.org
+Linux kernel.
+
+
+Features
+--------
+The QAT PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+To use the DPDK QAT PMD an SRIOV-enabled QAT kernel driver is required.
+The VF devices exposed by this driver will be used by the QAT PMD.
+
+If you are running on kernel 4.3 or greater, see instructions for "Installation using
+kernel.org QAT driver".  If you're on a kernel earlier than 4.3, see "Installation using the
+01.org QAT driver".
+
+Installation using 01.org QAT driver
+------------------------------------
+Download the latest QuickAssist Technology Driver from 01.org
+https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches
+Consult the Getting Started Guide at the same URL for further information.
+
+The steps below assume:
+
+  * building on a platform with one DH895xCC device
+  * using package qatmux.l.2.3.0-34.tgz
+  * on Fedora21 kernel 3.17.4-301.fc21.x86_64
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, for example by running either:
+
+  *  "./installer.sh uninstall" in the directory where it was originally
+     installed, or
+  *  "rmmod qat_dh895xcc; rmmod intel_qat"
+
+Build and install the SRIOV-enabled QAT driver
+
+.. code-block:: console
+
+    "mkdir /QAT; cd /QAT"
+    copy qatmux.l.2.3.0-34.tgz to this location
+    "tar zxof qatmux.l.2.3.0-34.tgz"
+    "export ICP_WITHOUT_IOMMU=1"
+    "./installer.sh install QAT1.6 host"
+
+You can use "cat /proc/icp_dh895xcc_dev0/version" to confirm the driver is correctly installed.
+You can use "lspci -d:443" to confirm the bdf of the 32 VF devices are available per DH895xCC device.
+
+To complete the installation, follow the instructions in "Binding the available VFs to the DPDK UIO driver".
+
+Compiling the 01.org driver - notes:
+If using a later kernel and the build fails with an error relating to strict_strtoul not being available, patch the following file:
+
+.. code-block:: console
+
+  /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+  + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+  + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  + #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+  #else
+  #define STR_TO_64(str, base, num, endPtr)                                 \
+       do {                                                               \
+             if (str[0] == '-')                                           \
+             {                                                            \
+                  *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+             }else {                                                      \
+                  *(num) = simple_strtoull((str), &(endPtr), (base));      \
+             }                                                            \
+       } while(0)
+  + #endif
+  #endif
+  #endif
+
+
+If the build fails due to missing header files, you may need to do the following:
+  *  sudo yum install zlib-devel
+  *  sudo yum install openssl-devel
+
+If the build or install fails due to mismatching kernel sources, you may need to do the following:
+  *  sudo yum install kernel-headers-`uname -r`
+  *  sudo yum install kernel-src-`uname -r`
+  *  sudo yum install kernel-devel-`uname -r`
+
+Installation using kernel.org driver
+------------------------------------
+Assuming you are running on at least a 4.3 kernel, you can use the stock kernel.org QAT
+driver to start the QAT hardware.
+
+The steps below assume:
+
+  * running DPDK on a platform with one DH895xCC device
+  * a kernel of at least version 4.3
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Ensure the QAT driver is loaded on your system by executing::
+
+    lsmod | grep qat
+
+You should see the following output::
+
+    qat_dh895xcc            5626  0
+    intel_qat              82336  1 qat_dh895xcc
+
+Next, you need to expose the VFs using the sysfs file system.
+First find the bdf of the DH895xCC device::
+
+    lspci -d :435
+
+You should see output similar to::
+
+    03:00.0 Co-processor: Intel Corporation Coleto Creek PCIe Endpoint
+
+Using the sysfs, enable the VFs::
+
+    echo 32 > /sys/bus/pci/drivers/dh895xcc/0000\:03\:00.0/sriov_numvfs
+
+If you get an error, it's likely you're using a QAT kernel driver earlier than kernel 4.3.
+
+To verify that the VFs are available for use, use "lspci -d:443" to confirm
+that the bdfs of the 32 VF devices are available per DH895xCC device.
+
+To complete the installation, follow the instructions in "Binding the available VFs to the DPDK UIO driver".
+
+
+Binding the available VFs to the DPDK UIO driver
+------------------------------------------------
+The unbind command below assumes bdfs of 03:01.00-03:04.07; if yours are different, adjust the command accordingly.
+
+Make the devices available to DPDK:
+
+.. code-block:: console
+
+   cd $RTE_SDK   # see http://dpdk.org/doc/quick-start to install DPDK
+   modprobe uio
+   insmod ./build/kmod/igb_uio.ko
+   for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n 0000:03:0${device}.${fn} > /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind; done; done
+   echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id
+
+You can use "lspci -vvd:443" to confirm that all devices are now in use by igb_uio kernel driver
+
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..9529f30
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,37 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..d2b79c6
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,173 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) = (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
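+/*
+ * Worked example (illustrative, not part of the original header): with the
+ * default ring size (ADF_RING_SIZE_16K = 0x08) and 64 byte messages
+ * (ADF_MSG_SIZE_64 = 0x02), ADF_SIZE_TO_RING_SIZE_IN_BYTES(0x08) gives a
+ * 16384 byte ring, i.e. 256 message slots, and ADF_MAX_INFLIGHTS(0x08, 0x02)
+ * evaluates to 255 - one less than the slot count, the usual convention
+ * that keeps a full ring distinguishable from an empty one.
+ */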
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
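+/*
+ * Usage sketch (illustrative only; the variable names below are
+ * hypothetical, not part of the ADF interface): a transmit path typically
+ * copies a message into the ring at the current tail, advances the tail by
+ * the message size, and publishes it to the device:
+ *
+ *   memcpy((uint8_t *)ring_base + tail, msg, msg_size);
+ *   tail = (tail + msg_size) % queue_size_bytes;
+ *   WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring_id, tail);
+ *
+ * A receive path polls the response ring and, after consuming entries,
+ * updates the head CSR with WRITE_CSR_RING_HEAD().
+ */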
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..cc96d45
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
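+/*
+ * Example (illustrative): QAT_FIELD_SET(flags, 1, 7, 0x1) sets bit 7 of
+ * flags, and QAT_FIELD_GET(0x80, 7, 0x1) evaluates to 1.
+ */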
+
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
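+/*
+ * For example, ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET)
+ * evaluates to 0x80: the valid flag in bit 7 of hdr_flags.
+ */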
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..7671465
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
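+/*
+ * Example (illustrative): for a plain cipher/hash request that returns the
+ * digest, with no ZUC/GCM/partial processing and a 64-bit IV pointer,
+ * ICP_QAT_FW_LA_FLAGS_BUILD(0, 0, 0, ICP_QAT_FW_LA_NO_PROTO,
+ * ICP_QAT_FW_LA_NO_CMP_AUTH_RES, ICP_QAT_FW_LA_RET_AUTH_RES,
+ * ICP_QAT_FW_LA_NO_UPDATE_STATE, ICP_QAT_FW_CIPH_IV_64BIT_PTR,
+ * ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP, ICP_QAT_FW_LA_PARTIAL_NONE)
+ * evaluates to 0x20: only the "return auth result" bit (bit 5) is set.
+ */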
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..4d8fe38
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,306 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	(((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	(((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	 (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	(algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
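+/*
+ * Example (illustrative): an HMAC-SHA1 setup in MODE1 with a full 20 byte
+ * digest, ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+ * ICP_QAT_HW_AUTH_ALGO_SHA1, ICP_QAT_HW_SHA1_STATE1_SZ), evaluates to
+ * 0x1411: cmp_len 20 (0x14) in bits 8..14, mode 1 in bits 4..7 and
+ * algo 1 in bits 0..3.
+ */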
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & (~((n) - 1)))
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
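+/* Cipher config word layout, per the BITPOS/MASK values above: algo in
+ * bits 3:0, mode in bits 7:4, direction in bit 8, key-convert in bit 9.
+ */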
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	(((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..fb3a685
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,125 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf bufers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t add_auth_data_length,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..9fddf26
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,578 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+	* Redistributions of source code must retain the above copyright
+	  notice, this list of conditions and the following disclaimer.
+	* Redistributions in binary form must reproduce the above copyright
+	  notice, this list of conditions and the following disclaimer in
+	  the documentation and/or other materials provided with the
+	  distribution.
+	* Neither the name of Intel Corporation nor the names of its
+	  contributors may be used to endorse or promote products derived
+	  from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+#include <openssl/aes.h>	/* Needed to calculate pre-compute values */
+
+
+/* returns size in bytes per hash algo for the state1 size field in cd_ctrl.
+ * This is the digest size rounded up to the nearest quadword.
+ */
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_GALOIS_128_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns digest size in bytes per hash algo */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns block size in bytes per hash algo */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+		return 16;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
+	uint8_t digest[qat_hash_get_digest_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
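+	/* QAT expects the hash state in big-endian order, so byte-swap each
+	 * 32-bit word of the digest (64-bit words for SHA512).
+	 */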
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+#define HASH_XCBC_PRECOMP_KEY_NUM 3
+
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		static uint8_t qat_aes_xcbc_key_seed[ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ] = {
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+		};
+
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		int x;
+		AES_KEY enc_key;
+
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ, 16);
+		rte_memcpy(in, qat_aes_xcbc_key_seed,
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+		for (x = 0; x < HASH_XCBC_PRECOMP_KEY_NUM; x++) {
+			if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+				&enc_key) != 0) {
+				rte_free(in - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+				memset(out - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ, 0,
+					ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+				return -EFAULT;
+			}
+			AES_encrypt(in, out, &enc_key);
+			in += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+			out += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+		}
+		*p_state_len = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		rte_free(in - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+		return 0;
+	} else if ((hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		AES_KEY enc_key;
+
+		memset(p_state_buf, 0, ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ);
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_GALOIS_H_SZ, 16);
+		memset(in, 0, ICP_QAT_HW_GALOIS_H_SZ);
+		if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+			&enc_key) != 0) {
+			return -EFAULT;
+		}
+		AES_encrypt(in, out, &enc_key);
+		*p_state_len = ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		rte_free(in);
+		return 0;
+	}
+
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	}
+	rte_memcpy(ipad, auth_key, auth_keylen);
+	rte_memcpy(opad, auth_key, auth_keylen);
+
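+	/* XOR the key-initialised blocks with the HMAC inner/outer pad
+	 * constants (RFC 2104) to form the ipad and opad blocks.
+	 */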
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/* state_len is a multiple of 8, so it may be larger than the digest.
+	 * Put the partial hash of opad state_len bytes after state1.
+	 */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/*  don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+			uint8_t *cipherkey, uint32_t cipherkeylen,
+			uint8_t *authkey, uint32_t authkeylen,
+			uint32_t add_auth_data_length,
+			uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t proto = ICP_QAT_FW_LA_NO_PROTO; /* no CCM/GCM/Snow3G */
+	uint16_t state1_size = 0;
+	uint16_t state2_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* CD setup */
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(cdesc->qat_mode,
+			cdesc->qat_cipher_alg, key_convert, cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	/* Do precomputes */
+	if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(XCBC)precompute failed");
+			return -EFAULT;
+		}
+	} else if ((cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			cipherkey, cipherkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_GALOIS_128_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(GCM)precompute failed");
+			return -EFAULT;
+		}
+		/* Write the AAD length into bytes 16-19 of state2 in
+		 * big-endian format; the field itself is 8 bytes wide.
+		 */
+		*(uint32_t *)&(hash->sha.state1[ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					 ICP_QAT_HW_GALOIS_H_SZ]) =
+			rte_bswap32(add_auth_data_length);
+		proto = ICP_QAT_FW_LA_GCM_PROTO;
+	} else {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1), &state1_size)) {
+			PMD_DRV_LOG(ERR, "(SHA)precompute failed");
+			return -EFAULT;
+		}
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	/* Configure the common header protocol flags */
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, proto);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+	hash_cd_ctrl->inner_state1_sz = state1_size;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_GALOIS_H_SZ +
+						ICP_QAT_HW_GALOIS_LEN_A_SZ +
+						ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_GALOIS_128_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_GALOIS_128_STATE1_SZ);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+
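+	/* Offsets and sizes in the content descriptor control block are
+	 * expressed in quadwords (8-byte units), hence the >> 3 shifts.
+	 */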
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8)) >> 3);
+	auth_param->auth_res_sz = digestsize;
+
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+		PMD_DRV_LOG(ERR, "invalid param, only authenticated encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..b372229
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,547 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+#include <rte_mbuf_offload.h>
+#include <rte_hexdump.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift);
+static inline int qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+
+void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
+		void *session)
+{
+	struct qat_session *sess = session;
+
+	PMD_INIT_FUNC_TRACE();
+	if (sess != NULL) {
+		phys_addr_t cd_paddr = sess->cd_paddr;
+
+		memset(sess, 0, qat_crypto_sym_get_session_private_size(dev));
+		sess->cd_paddr = cd_paddr;
+	}
+}
+
+static int
+qat_get_cmd_id(const struct rte_crypto_xform *xform)
+{
+	if (xform->next == NULL)
+		return -1;
+
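+	/* Note: the early return above means the cipher-only and auth-only
+	 * checks below can never match; they are placeholders until those
+	 * commands are supported.
+	 */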
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_CIPHER; */
+
+	/* Authentication Only */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_AUTH; */
+
+	/* Cipher then Authenticate */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+			xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+
+	/* Authenticate then Cipher */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+
+	return -1;
+}
+
+static struct rte_crypto_auth_xform *
+qat_get_auth_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_AUTH)
+			return &xform->auth;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+qat_get_cipher_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_CIPHER)
+			return &xform->cipher;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+
+void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	struct qat_session *session = session_private;
+
+	struct rte_crypto_auth_xform *auth_xform = NULL;
+	struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+	int qat_cmd_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get requested QAT command id */
+	qat_cmd_id = qat_get_cmd_id(xform);
+	if (qat_cmd_id < 0 || qat_cmd_id >= ICP_QAT_FW_LA_CMD_DELIMITER) {
+		PMD_DRV_LOG(ERR, "Unsupported xform chain requested");
+		goto error_out;
+	}
+	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+
+	/* Get cipher xform from crypto xform chain */
+	cipher_xform = qat_get_cipher_xform(xform);
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
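+		/* GCM uses AES in counter mode for the confidentiality part;
+		 * the GHASH side is configured via the auth xform below.
+		 */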
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+				cipher_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Cipher specified %u\n",
+				cipher_xform->algo);
+		goto error_out;
+	}
+
+	if (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+
+	/* Get authentication xform from Crypto xform chain */
+	auth_xform = qat_get_auth_xform(xform);
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		break;
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				auth_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				auth_xform->algo);
+		goto error_out;
+	}
+
+	if (qat_alg_aead_session_create_content_desc(session,
+		cipher_xform->key.data,
+		cipher_xform->key.length,
+		auth_xform->key.data,
+		auth_xform->key.length,
+		auth_xform->add_auth_data_length,
+		auth_xform->digest_length))
+		goto error_out;
+
+	return (struct rte_cryptodev_session *)session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+unsigned qat_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev __rte_unused)
+{
+	return RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+}
+
+
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	register struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	register uint32_t nb_pkts_sent = 0;
+	register struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	register int ret;
+	uint16_t nb_pkts_possible = nb_pkts;
+	register uint8_t *base_addr;
+	register uint32_t tail;
+	int overflow;
+
+	/* read params used a lot in main loop into registers */
+	queue = &(tmp_qp->tx_q);
+	base_addr = (uint8_t *)queue->base_addr;
+	tail = queue->tail;
+
+	/* Find how many can actually fit on the ring */
+	overflow = (rte_atomic16_add_return(&tmp_qp->inflights16, nb_pkts)
+				- queue->max_inflights);
+	if (overflow > 0) {
+		rte_atomic16_sub(&tmp_qp->inflights16, overflow);
+		nb_pkts_possible = nb_pkts - overflow;
+		if (nb_pkts_possible == 0)
+			return 0;
+	}
+
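+	/* Build requests at the shadow tail; the hardware tail CSR is only
+	 * written once after the loop, batching the doorbell update.
+	 */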
+	while (nb_pkts_sent != nb_pkts_possible) {
+
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			base_addr + tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			if (nb_pkts_sent == 0)
+				return 0;
+			else
+				goto kick_tail;
+		}
+
+		tail = adf_modulo(tail + queue->msg_size, queue->modulo);
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, tail);
+	queue->tail = tail;
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct rte_mbuf_offload *ol;
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
+	resp_msg = (struct icp_qat_fw_comn_resp *)
+			((uint8_t *)queue->base_addr + queue->head);
+
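+	/* Poll the response ring until the empty-signature sentinel is seen
+	 * or nb_pkts responses are gathered; each consumed descriptor is
+	 * re-marked empty for reuse by the hardware.
+	 */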
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG &&
+			msg_counter != nb_pkts) {
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		ol = rte_pktmbuf_offload_get(rx_mbuf, RTE_PKTMBUF_OL_CRYPTO);
+
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+						resp_msg->comn_hdr.comn_status)) {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		} else {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+					queue->msg_size,
+					ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)
+					((uint8_t *)queue->base_addr + queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	register struct icp_qat_fw_la_bulk_req *qat_req;
+
+	ol = rte_pktmbuf_offload_get(mbuf, RTE_PKTMBUF_OL_CRYPTO);
+	if (unlikely(ol == NULL)) {
+		PMD_DRV_LOG(ERR, "No valid crypto off-load operation attached "
+				"to (%p) mbuf.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
+				" requests mbuf (%p) is sessionless.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.session->type != RTE_CRYPTODEV_QAT_PMD)) {
+		PMD_DRV_LOG(ERR, "Session was not created for this device");
+		return -EINVAL;
+	}
+
+	ctx = (struct qat_session *)ol->op.crypto.session->_private;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length = qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr = qat_req->comn_mid.src_data_addr
+							= rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = ol->op.crypto.data.to_cipher.length;
+	cipher_param->cipher_offset = ol->op.crypto.data.to_cipher.offset;
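+	/* An IV that fits in the request's inline IV array is copied into
+	 * the request; otherwise a 64-bit physical pointer to it is used.
+	 */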
+	if (ol->op.crypto.iv.length &&
+		(ol->op.crypto.iv.length <= sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array, ol->op.crypto.iv.data,
+				ol->op.crypto.iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = ol->op.crypto.iv.phys_addr;
+	}
+	if (ol->op.crypto.digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(qat_req->comn_hdr.serv_specif_flags,
+					ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = ol->op.crypto.digest.phys_addr;
+	}
+	auth_param->auth_off = ol->op.crypto.data.to_hash.offset;
+	auth_param->auth_len = ol->op.crypto.data.to_hash.length;
+	auth_param->u1.aad_adr = ol->op.crypto.additional_auth.phys_addr;
+
+	/* For GCM, the AAD length (240 max) is found at this location after the precompute */
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+		ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
+		auth_param->u2.aad_sz =
+		ALIGN_POW2_ROUNDUP(ctx->cd.hash.sha.state1[ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+							ICP_QAT_HW_GALOIS_H_SZ + 3], 16);
+	}
+	auth_param->hash_state_sz = (auth_param->u2.aad_sz) >> 3;
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+	rte_hexdump(stdout, "qat_req:", qat_req,
+			sizeof(struct icp_qat_fw_la_bulk_req));
+#endif
+	return 0;
+}
+
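+/* Fast modulo for power-of-two ring sizes: returns data % (1 << shift) */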
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
+
+void qat_crypto_sym_session_init(struct rte_mempool *mp, void *priv_sess)
+{
+	struct qat_session *s = priv_sess;
+
+	PMD_INIT_FUNC_TRACE();
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return 0;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+int qat_dev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = qat_crypto_sym_qp_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void qat_dev_info_get(__rte_unused struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *info)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_queue_pairs =
+				ADF_NUM_SYM_QPS_PER_BUNDLE * ADF_NUM_BUNDLES_PER_DEV;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
+
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..64437e3
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,113 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev_pmd.h>
+#include <rte_memzone.h>
+
+/*	This macro rounds up a number to be a multiple of
+ *	the alignment, when the alignment is a power of 2 */
+#define ALIGN_POW2_ROUNDUP(num, align) \
+	(((num) + (align) - 1) & ~((align) - 1))
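+/* e.g. ALIGN_POW2_ROUNDUP(5, 4) == 8 */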
+
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	modulo;
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;	 /* HW queue aka ring offset on bundle */
+};
+
+struct qat_qp {
+	void			*mmap_bar_addr;
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+int qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev,
+	uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+
+extern unsigned
+qat_crypto_sym_get_session_private_size(struct rte_cryptodev *dev);
+
+extern void
+qat_crypto_sym_session_init(struct rte_mempool *mempool, void *priv_sess);
+
+extern void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+extern void
+qat_crypto_sym_clear_session(struct rte_cryptodev *dev, void *session);
+
+
+uint16_t qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..a909f63
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
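+
+/* Example: PMD_DRV_LOG(ERR, "invalid hash alg %u", alg) prints
+ * "PMD: <caller>(): invalid hash alg <N>" when
+ * RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER is enabled.
+ */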
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..bc9c637
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,416 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+#define ADF_SYM_TX_QUEUE_STARTOFF		2 /* Offset from bundle start to 1st Sym Tx queue */
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static void qat_queue_delete(struct qat_queue *queue);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+			int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
+		} else {
+			PMD_DRV_LOG(ERR, "Incompatible memzone already "
+					"allocated %s, size %u, socket %d. "
+					"Requested size %u, socket %u",
+					queue_name, (uint32_t)mz->len,
+					mz->socket_id, queue_size, socket_id);
+			return NULL;
+		}
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
+	switch (ms[0].hugepage_sz) {
+	case(RTE_PGSIZE_2M):
+		memzone_flags = RTE_MEMZONE_2MB;
+	break;
+	case(RTE_PGSIZE_1G):
+		memzone_flags = RTE_MEMZONE_1GB;
+	break;
+	case(RTE_PGSIZE_16M):
+		memzone_flags = RTE_MEMZONE_16MB;
+	break;
+	case(RTE_PGSIZE_16G):
+		memzone_flags = RTE_MEMZONE_16GB;
+	break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+	}
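+	/* The ring base must be naturally aligned to the ring size (see
+	 * qat_qp_check_queue_alignment()), hence the memzone below is
+	 * reserved with an alignment of queue_size. */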
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(queue_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(queue_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		ret = qat_crypto_sym_qp_release(dev, queue_pair_id);
+		if (ret < 0)
+			return ret;
+	}
+
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	if (dev->pci_dev->mem_resource[0].addr == NULL) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space (UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >= (ADF_NUM_SYM_QPS_PER_BUNDLE*ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp metadata",
+			sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
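+	/* inflights16 counts requests enqueued to the device but not yet
+	 * dequeued; qp release is refused while it is non-zero. */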
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_INIT_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%hu", queue_pair_id);
+		qat_queue_delete(&(qp->tx_q));
+		goto create_err;
+	}
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	return 0;
+
+create_err:
+	rte_free(qp);
+	return (-EFAULT);
+}
+
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_qp *qp =
+			(struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return 0;
+	}
+
+	/* Don't free memory if there are still responses to be processed */
+	if (rte_atomic16_read(&(qp->inflights16)) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+	rte_free(qp);
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+	return 0;
+}
+
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number, queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number, queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
+static void qat_queue_delete(struct qat_queue *queue)
+{
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		PMD_DRV_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (NULL != mz)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			PMD_DRV_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		PMD_DRV_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc*desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes, socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr, queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					" 0x%"PRIx64"\n", queue->base_phys_addr);
+		return -EFAULT;
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc, &(queue->queue_size)) != 0) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	queue->modulo = ADF_RING_SIZE_MODULO(queue->queue_size);
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u modulo %u",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights,
+				queue->modulo);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
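+	/* Checks that the base address is naturally aligned to the ring
+	 * size (ring sizes are powers of two). */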
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return (-EINVAL);
+}
+
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
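+	/* Read-modify-write: set this ring's bit in the bundle's arbiter
+	 * enable CSR. */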
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..63cb5fc
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,3 @@
+DPDK_2.0 {
+	local: *;
+};
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..49a936f
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,130 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
+static struct rte_cryptodev_ops crypto_qat_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+		/* Crypto related operations */
+		.session_get_size	= qat_crypto_sym_get_session_private_size,
+		.session_configure	= qat_crypto_sym_configure_session,
+		.session_initialize	= qat_crypto_sym_session_init,
+		.session_clear		= qat_crypto_sym_clear_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
+		{
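+			/* Intel QAT DH895xCC virtual function */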
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(__rte_unused struct rte_cryptodev_driver *crypto_drv,
+			struct rte_cryptodev *cryptodev)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+	/* For secondary processes, we don't initialise any further as the
+	 * primary has already done this work. */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
+
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9b4aed3..5d960cd 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -145,6 +145,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from OpenSSL) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v2 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
                     ` (2 preceding siblings ...)
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-10-30 12:59   ` Declan Doherty
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 12:59 UTC (permalink / raw)
  To: dev

This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.

This PMD depends on Intel's multi-buffer library; see the white paper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors" (ref 1) for details of the library's design, and ref 2 to
download the library itself. This initial implementation is limited to
the chained operations "hash then cipher" or "cipher then hash" for the
following cipher and hash algorithms (a sketch of building such a chain
follows the lists below):

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC
 
Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC
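
For reference, a minimal sketch of building a cipher-then-hash chain
(field and enum names are those consumed by this PMD's session-parameter
code; cipher_key/auth_key are hypothetical application buffers and 16/20
are illustrative key lengths):

  struct rte_crypto_xform cipher_xform = {
	.type = RTE_CRYPTO_XFORM_CIPHER,
	.cipher = {
		.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
		.algo = RTE_CRYPTO_CIPHER_AES_CBC,
		.key = { .data = cipher_key, .length = 16 },
	},
  };
  struct rte_crypto_xform auth_xform = {
	.type = RTE_CRYPTO_XFORM_AUTH,
	.auth = {
		.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
		.key = { .data = auth_key, .length = 20 },
	},
  };

  cipher_xform.next = &auth_xform;	/* CIPHER_HASH chain order */
  auth_xform.next = NULL;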

Important Note:
Because the multi-buffer library is designed for accelerating IPsec
crypto operations, the digests generated for the HMAC functions are
truncated to the lengths specified by the IPsec RFCs, e.g. RFC 2404
specifies that the HMAC-SHA-1 digest is truncated from 20 to 12 bytes.
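
As an illustration, an application attaching an HMAC-SHA-1 digest to a
crypto op would size it to the truncated length (a hypothetical
snippet; the PMD only reads/writes the truncated number of bytes):

  uint8_t digest[12];	/* 12 bytes per RFC 2404, not the full 20 */

  c_op->digest.data = digest;
  c_op->digest.length = sizeof(digest);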

Build instructions:
To build DPDK with the AESNI_MB_PMD the user is required to download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where the multi-buffer library was
extracted and built, and finally CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y must
be set in config/common_linuxapp.

Current status: it doesn't support crypto operations across chained
mbufs, or cipher-only or hash-only operations.

ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p

ref 2: https://downloadcenter.intel.com/download/22972

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |   7 +
 config/common_linuxapp                             |   6 +
 doc/guides/cryptodevs/aesni_mb.rst                 |  76 ++
 doc/guides/cryptodevs/index.rst                    |   1 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/aesni_mb/Makefile                   |  67 ++
 drivers/crypto/aesni_mb/aesni_mb_ops.h             | 212 ++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 790 +++++++++++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     | 296 ++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 230 ++++++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |   3 +
 mk/rte.app.mk                                      |   4 +
 12 files changed, 1693 insertions(+)
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 02f10a3..5a177c3 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -168,6 +168,13 @@ CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_MAX_QAT_SESSIONS=200
 
+
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+
 #
 # Support NIC bypass logic
 #
diff --git a/config/common_linuxapp b/config/common_linuxapp
index cae80a5..c1054e8 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,6 +166,12 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
 
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS=2048
+
 #
 # Support NIC bypass logic
 #
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
new file mode 100644
index 0000000..826b632
--- /dev/null
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -0,0 +1,76 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AES-NI Multi Buffer Crypto Poll Mode Driver
+============================================
+
+
+The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver
+support utilising Intel's multi-buffer library; see the white paper
+`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
+<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-paper.html?wapkw=multi+buffer>`_.
+
+The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+AESNI MB PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To build DPDK with the AESNI_MB_PMD the user is required to download the library
+from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on
+their system before building DPDK. The environment variable
+AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where the library was
+extracted and built, and finally CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y must be set
+in config/common_linuxapp.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 1c31697..8949fd0 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,4 +39,5 @@ Crypto Device Drivers
     :maxdepth: 2
     :numbered:
 
+    aesni_mb
     qat
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 9529f30..26325b0 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -31,6 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile
new file mode 100644
index 0000000..62f51ce
--- /dev/null
+++ b/drivers/crypto/aesni_mb/Makefile
@@ -0,0 +1,67 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_aesni_mb.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_aesni_version.map
+
+# external library include paths
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c
+
+# export include files
+SYMLINK-y-include +=
+
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
new file mode 100644
index 0000000..3d15a68
--- /dev/null
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -0,0 +1,212 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AESNI_MB_OPS_H_
+#define _AESNI_MB_OPS_H_
+
+#ifndef LINUX
+#define LINUX
+#endif
+
+#include <mb_mgr.h>
+#include <aux_funcs.h>
+#include <gcm_defines.h>
+
+enum aesni_mb_vector_mode {
+	RTE_AESNI_MB_NOT_SUPPORTED = 0,
+	RTE_AESNI_MB_SSE,
+	RTE_AESNI_MB_AVX,
+	RTE_AESNI_MB_AVX2
+};
+
+typedef void (*md5_one_block_t)(void *data, void *digest);
+typedef void (*sha1_one_block_t)(void *data, void *digest);
+typedef void (*sha224_one_block_t)(void *data, void *digest);
+typedef void (*sha256_one_block_t)(void *data, void *digest);
+typedef void (*sha384_one_block_t)(void *data, void *digest);
+typedef void (*sha512_one_block_t)(void *data, void *digest);
+
+typedef void (*aes_keyexp_128_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_128_enc_t)(void *key, void *enc_exp_keys);
+typedef void (*aes_keyexp_192_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_256_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+typedef void (*aes_xcbc_expand_key_t)(void *key, void *exp_k1, void *k2, void *k3);
+
+typedef void (*aesni_gcm_t)(gcm_data *my_ctx_data, u8 *out, const u8 *in,
+		u64 plaintext_len, u8 *iv, const u8 *aad, u64 aad_len,
+		u8 *auth_tag, u64 auth_tag_len);
+
+typedef void (*aesni_gcm_precomp_t)(gcm_data *my_ctx_data, u8 *hash_subkey);
+
+/** Multi-buffer library function pointer table */
+struct aesni_mb_ops {
+	struct {
+		init_mb_mgr_t init_mgr;		/**< Initialise scheduler  */
+		get_next_job_t get_next;	/**< Get next free job structure */
+		submit_job_t submit;		/**< Submit job to scheduler */
+		get_completed_job_t get_completed_job; /**< Get completed job */
+		flush_job_t flush_job;		/**< flush jobs from manager */
+	} job; /**< multi buffer manager functions */
+	struct {
+		struct {
+			md5_one_block_t md5;		/**< MD5 one block hash */
+			sha1_one_block_t sha1;		/**< SHA1 one block hash */
+			sha224_one_block_t sha224;	/**< SHA224 one block hash */
+			sha256_one_block_t sha256;	/**< SHA256 one block hash */
+			sha384_one_block_t sha384;	/**< SHA384 one block hash */
+			sha512_one_block_t sha512;	/**< SHA512 one block hash */
+		} one_block; /**< one block hash functions */
+		struct {
+			aes_keyexp_128_t aes128;	/**< AES128 key expansions */
+			aes_keyexp_128_enc_t aes128_enc;/**< AES128 enc key expansion */
+			aes_keyexp_192_t aes192;	/**< AES192 key expansions */
+			aes_keyexp_256_t aes256;	/**< AES256 key expansions */
+			aes_xcbc_expand_key_t aes_xcbc;	/**< AES XCBC key expansions */
+		} keyexp;	/**< Key expansion functions */
+	} aux; /**< Auxiliary functions */
+	struct {
+		aesni_gcm_t enc;		/**< GCM encode */
+		aesni_gcm_t dec;		/**< GCM decode */
+		aesni_gcm_precomp_t precomp;	/**< GCM pre-compute */
+	} gcm; /**< GCM functions */
+};
+
+
+static const struct aesni_mb_ops job_ops[] = {
+		[RTE_AESNI_MB_NOT_SUPPORTED] = {
+			.job = { NULL },
+			.aux = {
+				.one_block = { NULL },
+				.keyexp = { NULL }
+			},
+			.gcm = { NULL }
+		},
+		[RTE_AESNI_MB_SSE] = {
+			.job = {
+				init_mb_mgr_sse,
+				get_next_job_sse,
+				submit_job_sse,
+				get_completed_job_sse,
+				flush_job_sse
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_sse,
+					sha1_one_block_sse,
+					sha224_one_block_sse,
+					sha256_one_block_sse,
+					sha384_one_block_sse,
+					sha512_one_block_sse
+				},
+				.keyexp = {
+					aes_keyexp_128_sse,
+					aes_keyexp_128_enc_sse,
+					aes_keyexp_192_sse,
+					aes_keyexp_256_sse,
+					aes_xcbc_expand_key_sse
+				}
+			},
+			.gcm = {
+				aesni_gcm_enc_sse,
+				aesni_gcm_dec_sse,
+				aesni_gcm_precomp_sse
+			}
+		},
+		[RTE_AESNI_MB_AVX] = {
+			.job = {
+				init_mb_mgr_avx,
+				get_next_job_avx,
+				submit_job_avx,
+				get_completed_job_avx,
+				flush_job_avx
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx,
+					sha1_one_block_avx,
+					sha224_one_block_avx,
+					sha256_one_block_avx,
+					sha384_one_block_avx,
+					sha512_one_block_avx
+				},
+				.keyexp = {
+					aes_keyexp_128_avx,
+					aes_keyexp_128_enc_avx,
+					aes_keyexp_192_avx,
+					aes_keyexp_256_avx,
+					aes_xcbc_expand_key_avx
+				}
+			},
+			.gcm = {
+				aesni_gcm_enc_avx_gen2,
+				aesni_gcm_dec_avx_gen2,
+				aesni_gcm_precomp_avx_gen2
+			}
+		},
+		[RTE_AESNI_MB_AVX2] = {
+			.job = {
+				init_mb_mgr_avx2,
+				get_next_job_avx2,
+				submit_job_avx2,
+				get_completed_job_avx2,
+				flush_job_avx2
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx2,
+					sha1_one_block_avx2,
+					sha224_one_block_avx2,
+					sha256_one_block_avx2,
+					sha384_one_block_avx2,
+					sha512_one_block_avx2
+				},
+				.keyexp = {
+					aes_keyexp_128_avx2,
+					aes_keyexp_128_enc_avx2,
+					aes_keyexp_192_avx2,
+					aes_keyexp_256_avx2,
+					aes_xcbc_expand_key_avx2
+				}
+			},
+			.gcm = {
+				aesni_gcm_enc_avx_gen4,
+				aesni_gcm_dec_avx_gen4,
+				aesni_gcm_precomp_avx_gen4
+			}
+		},
+};
+
+
+#endif /* _AESNI_MB_OPS_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
new file mode 100644
index 0000000..e469f6d
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -0,0 +1,790 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_mbuf_offload.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/**
+ * Global static parameter used to create a unique name for each AES-NI multi
+ * buffer crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+typedef void (*hash_one_block_t)(void *data, void *digest);
+typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+/**
+ * Calculate the authentication pre-computes
+ *
+ * @param one_block_hash	Function pointer to calculate digest on ipad/opad
+ * @param ipad			Inner pad output byte array
+ * @param opad			Outer pad output byte array
+ * @param hkey			Authentication key
+ * @param hkey_len		Authentication key length
+ * @param blocksize		Block size of selected hash algo
+ */
+static void
+calculate_auth_precomputes(hash_one_block_t one_block_hash,
+		uint8_t *ipad, uint8_t *opad,
+		uint8_t *hkey, uint16_t hkey_len,
+		uint16_t blocksize)
+{
+	unsigned i, length;
+
+	uint8_t ipad_buf[blocksize] __rte_aligned(16);
+	uint8_t opad_buf[blocksize] __rte_aligned(16);
+
+	/* Setup inner and outer pads */
+	memset(ipad_buf, HMAC_IPAD_VALUE, blocksize);
+	memset(opad_buf, HMAC_OPAD_VALUE, blocksize);
+
+	/* XOR hash key with inner and outer pads */
+	length = hkey_len > blocksize ? blocksize : hkey_len;
+
+	for (i = 0; i < length; i++) {
+		ipad_buf[i] ^= hkey[i];
+		opad_buf[i] ^= hkey[i];
+	}
+
+	/* Compute partial hashes */
+	(*one_block_hash)(ipad_buf, ipad);
+	(*one_block_hash)(opad_buf, opad);
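+	/* The resulting single-block digests of (key XOR ipad/opad) are
+	 * cached in the session, so per-op HMAC processing can start from
+	 * these precomputed states instead of rehashing the key. */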
+
+	/* Clean up stack */
+	memset(ipad_buf, 0, blocksize);
+	memset(opad_buf, 0, blocksize);
+}
+
+/** Get xform chain order */
+static int
+aesni_mb_get_chain_order(const struct rte_crypto_xform *xform)
+{
+	/* multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained
+	 * operations, all other options are invalid, so we must have exactly
+	 * 2 xform structs chained together */
+	if (xform->next == NULL || xform->next->next != NULL)
+		return -1;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return HASH_CIPHER;
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+				xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return CIPHER_HASH;
+
+	return -1;
+}
+
+/** Set session authentication parameters */
+static int
+aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	hash_one_block_t hash_oneblock_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_AUTH) {
+		MB_LOG_ERR("Crypto xform struct not of type auth");
+		return -1;
+	}
+
+	/* Set Authentication Parameters */
+	if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC) {
+		sess->auth.algo = AES_XCBC;
+		(*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+				sess->auth.xcbc.k1_expanded,
+				sess->auth.xcbc.k2, sess->auth.xcbc.k3);
+		return 0;
+	}
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			xform->auth.key.data,
+			xform->auth.key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+/** Set session cipher parameters */
+static int
+aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	aes_keyexp_t aes_keyexp_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_CIPHER) {
+		MB_LOG_ERR("Crypto xform struct not of type cipher");
+		return -1;
+	}
+
+	/* Select cipher direction */
+	switch (xform->cipher.op) {
+	case RTE_CRYPTO_CIPHER_OP_ENCRYPT:
+		sess->cipher.direction = ENCRYPT;
+		break;
+	case RTE_CRYPTO_CIPHER_OP_DECRYPT:
+		sess->cipher.direction = DECRYPT;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher operation parameter");
+		return -1;
+	}
+
+
+	/* Select cipher mode */
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		sess->cipher.mode = CBC;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher mode parameter");
+		return -1;
+	}
+
+	/* Check key length and choose key expansion function */
+	switch (xform->cipher.key.length) {
+	case AES_128_BYTES:
+		sess->cipher.key_length_in_bytes = AES_128_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+		break;
+	case AES_192_BYTES:
+		sess->cipher.key_length_in_bytes = AES_192_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+		break;
+	case AES_256_BYTES:
+		sess->cipher.key_length_in_bytes = AES_256_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher key length");
+		return -1;
+	}
+
+	/* Expanded cipher keys */
+	(*aes_keyexp_fn)(xform->cipher.key.data,
+			sess->cipher.expanded_aes_keys.encode,
+			sess->cipher.expanded_aes_keys.decode);
+
+	return 0;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (aesni_mb_get_chain_order(xform)) {
+	case HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	case CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		auth_xform = xform->next;
+		cipher_xform = xform;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported operation chain order parameter");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
+		MB_LOG_ERR("Invalid/unsupported authentication parameters");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_cipher_parameters(mb_ops, sess, cipher_xform)) {
+		MB_LOG_ERR("Invalid/unsupported cipher parameters");
+		return -1;
+	}
+	return 0;
+}
+
+/** Get multi buffer session */
+static struct aesni_mb_session *
+aesni_mb_get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *crypto_op)
+{
+	struct aesni_mb_session *sess;
+
+	if (crypto_op->type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(crypto_op->session->type !=
+				RTE_CRYPTODEV_AESNI_MB_PMD))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)crypto_op->session->_private;
+	} else  {
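+		/* Session-less op: take a session object from the qp's
+		 * mempool and initialise it from the op's xform chain. */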
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)c_sess->_private;
+
+		if (unlikely(aesni_mb_set_session_parameters(qp->mb_ops,
+				sess, crypto_op->xform) != 0))
+			return NULL;
+	}
+
+	return sess;
+}
+
+/**
+ * Process a crypto operation and complete a JOB_AES_HMAC job structure for
+ * submission to the multi buffer library for processing.
+ *
+ * @param	qp	queue pair
+ * @param	m	mbuf to process
+ * @param	c_op	crypto operation to process
+ * @param	session	session associated with the crypto operation
+ *
+ * @return
+ * - Completed JOB_AES_HMAC structure pointer on success
+ * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible
+ */
+static JOB_AES_HMAC *
+process_mb_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	JOB_AES_HMAC *job;
+
+	job = (*qp->mb_ops->job.get_next)(&qp->mb_mgr);
+	if (unlikely(job == NULL))
+		return job;
+
+	/* Set crypto operation */
+	job->chain_order = session->chain_order;
+
+	/* Set cipher parameters */
+	job->cipher_direction = session->cipher.direction;
+	job->cipher_mode = session->cipher.mode;
+
+	job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes;
+	job->aes_enc_key_expanded = session->cipher.expanded_aes_keys.encode;
+	job->aes_dec_key_expanded = session->cipher.expanded_aes_keys.decode;
+
+
+	/* Set authentication parameters */
+	job->hash_alg = session->auth.algo;
+	if (job->hash_alg == AES_XCBC) {
+		job->_k1_expanded = session->auth.xcbc.k1_expanded;
+		job->_k2 = session->auth.xcbc.k2;
+		job->_k3 = session->auth.xcbc.k3;
+	} else {
+		job->hashed_auth_key_xor_ipad = session->auth.pads.inner;
+		job->hashed_auth_key_xor_opad = session->auth.pads.outer;
+	}
+
+	/* Mutable crypto operation parameters */
+
+	/* Set digest output location */
+	if (job->cipher_direction == DECRYPT) {
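+		/* For decrypt, compute the digest into scratch space appended
+		 * to the mbuf; post-processing compares it against the
+		 * supplied digest and then trims it off. */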
+		job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m,
+				get_digest_byte_length(job->hash_alg));
+
+		if (job->auth_tag_output)
+			memset(job->auth_tag_output, 0,
+				get_digest_byte_length(job->hash_alg));
+		else
+			return NULL;
+	} else {
+		job->auth_tag_output = c_op->digest.data;
+	}
+
+	/* The multi-buffer library currently only supports returning a
+	 * truncated digest length, as specified in the relevant IPsec RFCs */
+	job->auth_tag_output_len_in_bytes =
+			get_truncated_digest_byte_length(job->hash_alg);
+
+	/* Set IV parameters */
+	job->iv = c_op->iv.data;
+	job->iv_len_in_bytes = c_op->iv.length;
+
+	/* Data parameters */
+	job->src = rte_pktmbuf_mtod(m, uint8_t *);
+	job->dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset;
+	job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length;
+
+	job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset;
+	job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length;
+
+	/* Attach the mbuf and crypto op as the job's user data */
+	job->user_data = m;
+	job->user_data2 = c_op;
+
+	return job;
+}
+
+/**
+ * Process a GCM crypto operation directly using the multi buffer library's
+ * AES-GCM functions.
+ *
+ * @param	qp	queue pair
+ * @param	m	mbuf to process
+ * @param	c_op	crypto operation to process
+ * @param	session	session associated with the crypto operation
+ *
+ * @return
+ * - 0 on success, -1 if digest space could not be appended to the mbuf
+ */
+static int
+process_gcm_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	uint8_t *src, *dst;
+
+	src = rte_pktmbuf_mtod(m, uint8_t *) + c_op->data.to_cipher.offset;
+	dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	if (session->cipher.direction == ENCRYPT) {
+
+		(*qp->mb_ops->gcm.enc)(&session->gdata, dst, src,
+				(uint64_t)c_op->data.to_cipher.length,
+				c_op->iv.data,
+				c_op->additional_auth.data,
+				(uint64_t)c_op->additional_auth.length,
+				c_op->digest.data,
+				(uint64_t)c_op->digest.length);
+	} else {
+		uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(m,
+				c_op->digest.length);
+
+		if (!auth_tag)
+			return -1;
+
+		(*qp->mb_ops->gcm.dec)(&session->gdata, dst, src,
+				(uint64_t)c_op->data.to_cipher.length,
+				c_op->iv.data,
+				c_op->additional_auth.data,
+				(uint64_t)c_op->additional_auth.length,
+				auth_tag,
+				(uint64_t)c_op->digest.length);
+	}
+
+	return 0;
+}
+
+/**
+ * Process a completed job and return rte_mbuf which job processed
+ *
+ * @param job	JOB_AES_HMAC job to process
+ *
+ * @return
+ * - Returns processed mbuf which is trimmed of output digest used in
+ * verification of supplied digest in the case of a HASH_CIPHER operation
+ * - Returns NULL on invalid job
+ */
+static struct rte_mbuf *
+post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m;
+	struct rte_crypto_op *c_op;
+
+	if (job->user_data == NULL)
+		return NULL;
+
+	/* handle the retrieved job */
+	m = (struct rte_mbuf *)job->user_data;
+	c_op = (struct rte_crypto_op *)job->user_data2;
+
+	/* check if job has been processed  */
+	if (unlikely(job->status != STS_COMPLETED)) {
+		c_op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return m;
+	} else if (job->chain_order == HASH_CIPHER) {
+		/* Verify digest if required */
+		if (memcmp(job->auth_tag_output, c_op->digest.data,
+				job->auth_tag_output_len_in_bytes) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		else
+			c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg));
+	} else {
+		c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	}
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	return m;
+}
+
+/**
+ * Process a completed JOB_AES_HMAC job and keep processing jobs until
+ * get_completed_job return NULL
+ *
+ * @param qp		Queue Pair to process
+ * @param job		JOB_AES_HMAC job
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_mb_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m = NULL;
+	unsigned processed_jobs = 0;
+
+	while (job) {
+		processed_jobs++;
+		m = post_process_mb_job(qp, job);
+		if (m)
+			rte_ring_enqueue(qp->processed_pkts, (void *)m);
+		else
+			qp->qp_stats.dequeue_err_count++;
+
+		job = (*qp->mb_ops->job.get_completed_job)(&qp->mb_mgr);
+	}
+
+	return processed_jobs;
+}
+
+/**
+ * Post-process a completed GCM crypto operation, verifying the digest on
+ * the decrypt path
+ *
+ * @param m	processed mbuf
+ * @param c_op	completed crypto operation
+ *
+ * @return
+ * - Returns the processed mbuf, trimmed of the scratch digest appended
+ * during processing in the case of a decrypt operation
+ */
+static struct rte_mbuf *
+post_process_gcm_crypto_op(struct rte_mbuf *m, struct rte_crypto_op *c_op)
+{
+	struct aesni_mb_session *session =
+			(struct aesni_mb_session *)c_op->session->_private;
+
+	/* Verify digest if required */
+	if (session->cipher.direction == DECRYPT) {
+
+		uint8_t *auth_tag = rte_pktmbuf_mtod_offset(m, uint8_t *,
+				m->data_len - c_op->digest.length);
+
+		if (memcmp(auth_tag, c_op->digest.data,
+				c_op->digest.length) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		else
+			c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, c_op->digest.length);
+	} else {
+		c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	}
+
+	return m;
+}
+
+/**
+ * Handle a completed GCM request
+ *
+ * @param qp		Queue Pair to process
+ * @param m		processed mbuf
+ * @param c_op		completed crypto operation
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_gcm_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op)
+{
+	m = post_process_gcm_crypto_op(m, c_op);
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	rte_ring_enqueue(qp->processed_pkts, (void *)m);
+
+	return 0;
+}
+
+static uint16_t
+aesni_mb_pmd_enqueue_burst(void *queue_pair, struct rte_mbuf **bufs,
+		uint16_t nb_bufs)
+{
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *c_op;
+
+	struct aesni_mb_session *sess;
+	struct aesni_mb_qp *qp = queue_pair;
+	JOB_AES_HMAC *job = NULL;
+
+	int i, retval, processed_jobs = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+		ol = rte_pktmbuf_offload_get(bufs[i], RTE_PKTMBUF_OL_CRYPTO);
+		if (unlikely(ol == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+		c_op = &ol->op.crypto;
+
+		sess = aesni_mb_get_session(qp, c_op);
+		if (unlikely(sess == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		if (sess->gcm_session) {
+			retval = process_gcm_crypto_op(qp, bufs[i], c_op, sess);
+			if (retval < 0) {
+				qp->qp_stats.enqueue_err_count++;
+				goto flush_jobs;
+			}
+
+			handle_completed_gcm_crypto_op(qp, bufs[i], c_op);
+			processed_jobs++;
+		} else {
+			job = process_mb_crypto_op(qp, bufs[i], c_op, sess);
+			if (unlikely(job == NULL)) {
+				qp->qp_stats.enqueue_err_count++;
+				goto flush_jobs;
+			}
+
+			/* Submit Job */
+			job = (*qp->mb_ops->job.submit)(&qp->mb_mgr);
+
+			/* If submit returns a processed job then handle it,
+			 * before submitting subsequent jobs */
+			if (job)
+				processed_jobs +=
+					handle_completed_mb_jobs(qp, job);
+		}
+	}
+
+	if (processed_jobs == 0)
+		goto flush_jobs;
+
+	qp->qp_stats.enqueued_count += processed_jobs;
+	return i;
+
+flush_jobs:
+	/* if we haven't processed any jobs in submit loop, then flush jobs
+	 * queue to stop the output stalling */
+	job = (*qp->mb_ops->job.flush_job)(&qp->mb_mgr);
+	if (job)
+		qp->qp_stats.enqueued_count +=
+				handle_completed_mb_jobs(qp, job);
+
+	return i;
+}
+
+static uint16_t
+aesni_mb_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+
+static int cryptodev_aesni_mb_uninit(const char *name);
+
+static int
+cryptodev_aesni_mb_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct aesni_mb_private *internals;
+	enum aesni_mb_vector_mode vector_mode;
+
+	/* Check CPU for support for AES instruction set */
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
+		MB_LOG_ERR("AES instructions not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_MB_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_MB_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_MB_SSE;
+	else {
+		MB_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		MB_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct aesni_mb_private), socket_id);
+	if (dev == NULL) {
+		MB_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	dev->dev_ops = rte_aesni_mb_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
+	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+
+	/* Set vector instructions mode supported */
+	internals = dev->data->dev_private;
+
+	internals->vector_mode = vector_mode;
+	internals->max_nb_qpairs = AESNI_MB_MAX_NB_QUEUE_PAIRS;
+
+	return dev->data->dev_id;
+init_error:
+	MB_LOG_ERR("driver %s: cryptodev_aesni_create failed", name);
+
+	cryptodev_aesni_mb_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_aesni_mb_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_aesni_mb_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_aesni_mb_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_aesni_mb_pmd_drv = {
+	.name = CRYPTODEV_NAME_AESNI_MB_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_aesni_mb_init,
+	.uninit = cryptodev_aesni_mb_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
new file mode 100644
index 0000000..41b8d04
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -0,0 +1,296 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/** Configure device */
+static int
+aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+aesni_mb_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_queue_pairs = internals->max_nb_qpairs;
+	}
+}
+
+/** Release queue pair */
+static int
+aesni_mb_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair based on the dev_id and qp_id */
+static int
+aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct aesni_mb_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"aesni_mb_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed packets on */
+static struct rte_ring *
+aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			MB_LOG_INFO("Reusing existing ring %s for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		MB_LOG_ERR("Unable to reuse existing ring %s for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct aesni_mb_qp *qp = NULL;
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		aesni_mb_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return -ENOMEM;
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (aesni_mb_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->mb_ops = &job_ops[internals->vector_mode];
+
+	qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	/* Initialise multi-buffer manager */
+	(*qp->mb_ops->job.init_mgr)(&qp->mb_mgr);
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp) {
+		dev->data->queue_pairs[qp_id] = NULL;
+		rte_free(qp);
+	}
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+aesni_mb_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the aesni multi-buffer session structure */
+static unsigned
+aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct aesni_mb_session);
+}
+
+/** Configure an AES-NI multi-buffer session from a crypto xform chain */
+static void *
+aesni_mb_pmd_session_configure(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform,	void *sess)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (unlikely(sess == NULL)) {
+		MB_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+			sess, xform) != 0) {
+		MB_LOG_ERR("failed to configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+aesni_mb_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/* Currently we just reset the whole data structure; it needs to be
+	 * investigated whether a more selective reset of the key material
+	 * would be more performant */
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
+}
+
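+/*
+ * Device operations table; these callbacks are not called directly by
+ * applications but back the public rte_cryptodev_* API, e.g.
+ * rte_cryptodev_configure() dispatches to .dev_configure and
+ * rte_cryptodev_queue_pair_setup() to .queue_pair_setup.
+ */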
+struct rte_cryptodev_ops aesni_mb_pmd_ops = {
+		.dev_configure		= aesni_mb_pmd_config,
+		.dev_start		= aesni_mb_pmd_start,
+		.dev_stop		= aesni_mb_pmd_stop,
+		.dev_close		= aesni_mb_pmd_close,
+
+		.stats_get		= aesni_mb_pmd_stats_get,
+		.stats_reset		= aesni_mb_pmd_stats_reset,
+
+		.dev_infos_get		= aesni_mb_pmd_info_get,
+
+		.queue_pair_setup	= aesni_mb_pmd_qp_setup,
+		.queue_pair_release	= aesni_mb_pmd_qp_release,
+		.queue_pair_start	= aesni_mb_pmd_qp_start,
+		.queue_pair_stop	= aesni_mb_pmd_qp_stop,
+		.queue_pair_count	= aesni_mb_pmd_qp_count,
+
+		.session_get_size	= aesni_mb_pmd_session_get_size,
+		.session_configure	= aesni_mb_pmd_session_configure,
+		.session_clear		= aesni_mb_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
+
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
new file mode 100644
index 0000000..e5e317b
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -0,0 +1,230 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_
+#define _RTE_AESNI_MB_PMD_PRIVATE_H_
+
+#include "aesni_mb_ops.h"
+
+#define MB_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_AESNI_MB_DEBUG
+#define MB_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#define MB_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define MB_LOG_INFO(fmt, args...)
+#define MB_LOG_DBG(fmt, args...)
+#endif
+
+#define AESNI_MB_NAME_MAX_LENGTH	(64)
+#define AESNI_MB_MAX_NB_QUEUE_PAIRS	(4)
+
+#define HMAC_IPAD_VALUE			(0x36)
+#define HMAC_OPAD_VALUE			(0x5C)
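+
+/*
+ * HMAC as per RFC 2104: HMAC(K, m) = H((K ^ opad) || H((K ^ ipad) || m)),
+ * where K is the key zero-padded to the hash block size. A session
+ * pre-computes the partial digests of (K ^ ipad) and (K ^ opad) so that
+ * only the message data needs to be hashed per operation.
+ */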
+
+static const unsigned auth_blocksize[] = {
+		[MD5]		= 64,
+		[SHA1]		= 64,
+		[SHA_224]	= 64,
+		[SHA_256]	= 64,
+		[SHA_384]	= 128,
+		[SHA_512]	= 128,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the block size in bytes for a specified authentication algorithm
+ *
+ * @note: this function will not return a valid value for an unsupported
+ * authentication algorithm
+ */
+static inline unsigned
+get_auth_algo_blocksize(JOB_HASH_ALG algo)
+{
+	return auth_blocksize[algo];
+}
+
+static const unsigned auth_truncated_digest_byte_lengths[] = {
+		[MD5]		= 12,
+		[SHA1]		= 12,
+		[SHA_224]	= 14,
+		[SHA_256]	= 16,
+		[SHA_384]	= 24,
+		[SHA_512]	= 32,
+		[AES_XCBC]	= 12,
+};
+
+/**
+ * Get the IPsec specified truncated length in bytes of the HMAC digest for a
+ * specified authentication algorithm
+ *
+ * @note: this function will not return a valid value for an unsupported
+ * authentication algorithm
+ */
+static inline unsigned
+get_truncated_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_truncated_digest_byte_lengths[algo];
+}
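+
+/*
+ * For example, RFC 2404 specifies HMAC-SHA-1-96 for IPsec: the 20 byte
+ * HMAC-SHA1 digest is truncated to its leading 12 bytes, matching the
+ * [SHA1] entry in the table above.
+ */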
+
+static const unsigned auth_digest_byte_lengths[] = {
+		[MD5]		= 16,
+		[SHA1]		= 20,
+		[SHA_224]	= 28,
+		[SHA_256]	= 32,
+		[SHA_384]	= 48,
+		[SHA_512]	= 64,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the full output digest size in bytes for a specified authentication
+ * algorithm
+ *
+ * @note: this function will not return a valid value for an unsupported
+ * authentication algorithm
+ */
+static inline unsigned
+get_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_digest_byte_lengths[algo];
+}
+
+
+/** private data structure for each virtual AESNI device */
+struct aesni_mb_private {
+	enum aesni_mb_vector_mode vector_mode;
+
+	unsigned max_nb_qpairs;
+};
+
+struct aesni_mb_qp {
+	uint16_t id;				/**< Queue Pair Identifier */
+	char name[AESNI_MB_NAME_MAX_LENGTH];	/**< Unique Queue Pair Name */
+	const struct aesni_mb_ops *mb_ops;	/**< Architecture dependent
+						 * function pointer table of
+						 * the multi-buffer APIs */
+	MB_MGR mb_mgr;				/**< Multi-buffer instance */
+	struct rte_ring *processed_pkts;	/**< Ring for placing processed packets */
+
+	struct rte_mempool *sess_mp;		/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** AES-NI multi-buffer private session structure */
+struct aesni_mb_session {
+	JOB_CHAIN_ORDER chain_order;
+
+	unsigned gcm_session:1;
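+	/**< flag indicating a GCM session; when set, the gdata member of
+	 * the union below is used in place of the auth parameters */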
+
+	/** Cipher Parameters */
+	struct {
+		/** Cipher direction - encrypt / decrypt */
+		JOB_CIPHER_DIRECTION direction;
+		/** Cipher mode - CBC / Counter */
+		JOB_CIPHER_MODE mode;
+
+		uint64_t key_length_in_bytes;
+
+		struct {
+			uint32_t encode[60] __rte_aligned(16);
+			/**< encode key */
+			uint32_t decode[60] __rte_aligned(16);
+			/**< decode key */
+		} expanded_aes_keys;
+		/**< Expanded AES keys - allocating space to
+		 * contain the maximum expanded key size, which
+		 * is 240 bytes for 256-bit AES, calculated as:
+		 * (block size (16 bytes)) *
+		 * ((number of rounds) + 1);
+		 * e.g. for AES-256: 16 * (14 + 1) = 240 */
+	} cipher;
+
+	union {
+		/** Authentication Parameters */
+		struct {
+			JOB_HASH_ALG algo; /**< Authentication Algorithm */
+			union {
+				struct {
+					uint8_t inner[128] __rte_aligned(16);
+					/**< inner pad */
+					uint8_t outer[128] __rte_aligned(16);
+					/**< outer pad */
+				} pads;
+				/**< HMAC Authentication pads -
+				 * allocating space for the maximum pad
+				 * size supported which is 128 bytes for
+				 * SHA512 */
+
+				struct {
+				    uint32_t k1_expanded[44] __rte_aligned(16);
+				    /**< k1 (expanded key). */
+				    uint8_t k2[16] __rte_aligned(16);
+				    /**< k2. */
+				    uint8_t k3[16] __rte_aligned(16);
+				    /**< k3. */
+				} xcbc;
+				/**< Expanded XCBC authentication keys */
+			};
+		} auth;
+
+		/** GCM parameters */
+		struct gcm_data gdata;
+	};
+} __rte_cache_aligned;
+
+
+/**
+ * Set the session parameters (chain order, cipher and authentication
+ * setup) from the supplied crypto xform chain, performing any required
+ * key expansion and authentication pre-computes.
+ */
+extern int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
+
+
+
+#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
+
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
new file mode 100644
index 0000000..ad607bb
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5d960cd..6255d4e 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -148,6 +148,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 
+# The AESNI MB PMD is dependent on the IPSec_MB library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v2 5/6] app/test: add cryptodev unit and performance tests
  2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
                     ` (3 preceding siblings ...)
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-10-30 12:59   ` Declan Doherty
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 6/6] l2fwd-crypto: crypto Declan Doherty
  2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 12:59 UTC (permalink / raw)
  To: dev

Unit tests are run by using the cryptodev_qat_autotest or
cryptodev_aesni_autotest command from the test app's interactive
console.

Performance tests are run by using the cryptodev_qat_perftest or
cryptodev_aesni_mb_perftest command from the test app's interactive
console.

If you wish to run the tests on a QAT device there must be one bound
to the igb_uio kernel driver.
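
For example, a typical AES-NI MB session in the test app's console
might look as follows (the test binary path assumes an
x86_64-native-linuxapp-gcc build target; adjust to suit your build):

  ./x86_64-native-linuxapp-gcc/app/test -c 0x3 -n 4
  RTE>>cryptodev_aesni_autotest
  RTE>>cryptodev_aesni_mb_perftest
  RTE>>quit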

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
---
 app/test/Makefile                  |    3 +
 app/test/test.c                    |   92 +-
 app/test/test.h                    |   34 +-
 app/test/test_cryptodev.c          | 1924 ++++++++++++++++++++++++++++++++++++
 app/test/test_cryptodev.h          |   68 ++
 app/test/test_cryptodev_perf.c     | 1449 +++++++++++++++++++++++++++
 app/test/test_link_bonding.c       |    6 +-
 app/test/test_link_bonding_mode4.c |    7 +-
 8 files changed, 3538 insertions(+), 45 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c

diff --git a/app/test/Makefile b/app/test/Makefile
index 294618f..b7de576 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -140,6 +140,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += test_link_bonding.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += test_link_bonding_mode4.c
 endif
 
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
diff --git a/app/test/test.c b/app/test/test.c
index e8992f4..e58f266 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -159,51 +159,81 @@ main(int argc, char **argv)
 int
 unit_test_suite_runner(struct unit_test_suite *suite)
 {
-	int retval, i = 0;
+	int test_success;
+	unsigned total = 0, executed = 0, skipped = 0, succeeded = 0, failed = 0;
 
-	if (suite->suite_name)
-		printf("Test Suite : %s\n", suite->suite_name);
+	if (suite->suite_name) {
+		printf(" + ------------------------------------------------------- +\n");
+		printf(" + Test Suite : %s\n", suite->suite_name);
+	}
 
 	if (suite->setup)
 		if (suite->setup() != 0)
-			return -1;
-
-	while (suite->unit_test_cases[i].testcase) {
-		/* Run test case setup */
-		if (suite->unit_test_cases[i].setup) {
-			retval = suite->unit_test_cases[i].setup();
-			if (retval != 0)
-				return retval;
-		}
+		{
+			failed++;
+			goto suite_summary;
+		}
 
-		/* Run test case */
-		if (suite->unit_test_cases[i].testcase() == 0) {
-			printf("TestCase %2d: %s\n", i,
-					suite->unit_test_cases[i].success_msg ?
-					suite->unit_test_cases[i].success_msg :
-					"passed");
-		}
-		else {
-			printf("TestCase %2d: %s\n", i, suite->unit_test_cases[i].fail_msg ?
-					suite->unit_test_cases[i].fail_msg :
-					"failed");
-			return -1;
+	printf(" + ------------------------------------------------------- +\n");
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (!suite->unit_test_cases[total].enabled) {
+			skipped++;
+			total++;
+			continue;
+		}
+
+		executed++;
 
-		/* Run test case teardown */
-		if (suite->unit_test_cases[i].teardown) {
-			retval = suite->unit_test_cases[i].teardown();
-			if (retval != 0)
-				return retval;
+		/* run test case setup */
+		if (suite->unit_test_cases[total].setup)
+			test_success = suite->unit_test_cases[total].setup();
+		else
+			test_success = TEST_SUCCESS;
+
+		if (test_success == TEST_SUCCESS) {
+			/* run the test case */
+			test_success = suite->unit_test_cases[total].testcase();
+			if (test_success == TEST_SUCCESS)
+				succeeded++;
+			else
+				failed++;
+		} else {
+			failed++;
 		}
 
-		i++;
+		/* run the test case teardown */
+		if (suite->unit_test_cases[total].teardown)
+			suite->unit_test_cases[total].teardown();
+
+		if (test_success == TEST_SUCCESS)
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].success_msg ?
+					suite->unit_test_cases[total].success_msg :
+					"passed");
+		else
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].fail_msg ?
+					suite->unit_test_cases[total].fail_msg :
+					"failed");
+
+		total++;
 	}
 
 	/* Run test suite teardown */
 	if (suite->teardown)
-		if (suite->teardown() != 0)
-			return -1;
+		suite->teardown();
+
+suite_summary:
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary \n");
+	printf(" + Tests Total :       %2d\n", total);
+	printf(" + Tests Skipped :     %2d\n", skipped);
+	printf(" + Tests Executed :    %2d\n", executed);
+	printf(" + Tests Passed :      %2d\n", succeeded);
+	printf(" + Tests Failed :      %2d\n", failed);
+	printf(" + ------------------------------------------------------- +\n");
+
+	if (failed)
+		return -1;
 
 	return 0;
 }
diff --git a/app/test/test.h b/app/test/test.h
index 62eb51d..a2fba60 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -33,7 +33,7 @@
 
 #ifndef _TEST_H_
 #define _TEST_H_
-
+#include <stddef.h>
 #include <sys/queue.h>
 
 #define TEST_SUCCESS  (0)
@@ -64,6 +64,17 @@
 		}                                                        \
 } while (0)
 
+
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len, msg, ...) do {	\
+	if (memcmp(a, b, len)) {                                        \
+		printf("TestCase %s() line %d failed: "              \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__);    \
+		TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);    \
+		return TEST_FAILED;                                  \
+	}                                                        \
+} while (0)
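+/*
+ * Example usage (as in the cryptodev tests added by this patch):
+ * TEST_ASSERT_BUFFERS_ARE_EQUAL(digest, expected_digest,
+ *		DIGEST_BYTE_LENGTH_SHA1, "Generated digest not as expected");
+ */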
+
+
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
 			printf("TestCase %s() line %d failed: "              \
@@ -113,27 +124,36 @@
 
 struct unit_test_case {
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	int (*testcase)(void);
 	const char *success_msg;
 	const char *fail_msg;
+	unsigned enabled;
 };
 
-#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed"}
+#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed", 1 }
 
 #define TEST_CASE_NAMED(name, fn) { NULL, NULL, fn, name " succeeded", \
-		name " failed"}
+		name " failed", 1 }
 
 #define TEST_CASE_ST(setup, teardown, testcase)         \
 		{ setup, teardown, testcase, #testcase " succeeded",    \
-		#testcase " failed "}
+		#testcase " failed ", 1 }
+
+
+#define TEST_CASE_DISABLED(fn) { NULL, NULL, fn, #fn " succeeded", \
+	#fn " failed", 0 }
+
+#define TEST_CASE_ST_DISABLED(setup, teardown, testcase)         \
+		{ setup, teardown, testcase, #testcase " succeeded",    \
+		#testcase " failed ", 0 }
 
-#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL }
+#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL, 0 }
 
 struct unit_test_suite {
 	const char *suite_name;
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	struct unit_test_case unit_test_cases[];
 };
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
new file mode 100644
index 0000000..8b4a05f
--- /dev/null
+++ b/app/test/test_cryptodev.c
@@ -0,0 +1,1924 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mbuf_offload.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+static enum rte_cryptodev_type gbl_cryptodev_type;
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_ol_pool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *op;
+
+	struct rte_mbuf *obuf, *ibuf;
+
+	uint8_t *digest;
+};
+
+/*
+ * Forward declarations.
+ */
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_param);
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst;
+
+		/* only touch the mbuf after the NULL check */
+		memset(m->buf_addr, 0, m->buf_len);
+
+		dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+
+	return m;
+}
+
+#if HEX_DUMP
+static void
+hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
+{
+	rte_hexdump(f, title, rte_pktmbuf_mtod(m, const void *), m->data_len);
+}
+#endif
+
+static struct rte_mbuf *
+process_crypto_request(uint8_t dev_id, struct rte_mbuf *ibuf)
+{
+	struct rte_mbuf *obuf = NULL;
+#if HEX_DUMP
+	hexdump_mbuf_data(stdout, "Enqueued Packet", ibuf);
+#endif
+
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &ibuf, 1) != 1) {
+		printf("Error sending packet for encryption\n");
+		return NULL;
+	}
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &obuf, 1) == 0)
+		rte_pause();
+
+#if HEX_DUMP
+	if (obuf)
+		hexdump_mbuf_data(stdout, "Dequeued Packet", obuf);
+#endif
+
+	return obuf;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, dev_id = 0;
+	uint16_t qp_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	if (ts_params->mbuf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_pool = rte_pktmbuf_pool_create("CRYPTO_MBUFPOOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+				rte_socket_id());
+		if (ts_params->mbuf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"MBUF_OFFLOAD_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS * sizeof(struct rte_crypto_xform),
+			rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int retval = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(retval >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found\n");
+		return TEST_FAILED;
+	}
+
+	/* Create list of valid crypto devs */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_type) {
+			ts_params->valid_devs[ts_params->valid_dev_count++] = i;
+		}
+	}
+
+	if (ts_params->valid_dev_count < 1)
+		return TEST_FAILED;
+
+	/* Set up all the qps on the first of the valid devices found */
+	for (i = 0; i < 1; i++) {
+		dev_id = ts_params->valid_devs[i];
+
+		/* Since we can't free and re-allocate queue memory, always set
+		 * the queues on this device up to max size first so that enough
+		 * memory is allocated for any later re-configures needed by
+		 * other tests */
+
+		ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+		ts_params->conf.socket_id = SOCKET_ID_ANY;
+		ts_params->conf.session_mp.nb_objs =
+				(gbl_cryptodev_type == RTE_CRYPTODEV_QAT_PMD) ?
+						RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+						RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+		TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+				&ts_params->conf),
+				"Failed to configure cryptodev %u with %u qps",
+				dev_id, ts_params->conf.nb_queue_pairs);
+
+		ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+		for (qp_id = 0; qp_id < MAX_NUM_QPS_PER_QAT_DEVICE; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+					dev_id, qp_id, &ts_params->qp_conf,
+					rte_cryptodev_socket_id(dev_id)),
+					"Failed to setup queue pair %u on cryptodev %u",
+					qp_id, dev_id);
+		}
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_pool));
+	}
+
+
+	if (ts_params->mbuf_ol_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_ol_pool));
+	}
+
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	uint16_t qp_id;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	/* Now reconfigure queues to size we actually want to use in this
+	 * test suite. */
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0], qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+	}
+
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	/* free crypto session structure */
+	if (ut_params->sess) {
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	/* free mbuf - obuf and ibuf usually point at the same mbuf, so take
+	 * care to only free it once */
+	if (ut_params->obuf) {
+		rte_pktmbuf_free(ut_params->obuf);
+		if (ut_params->ibuf == ut_params->obuf)
+			ut_params->ibuf = 0;
+		ut_params->obuf = 0;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = 0;
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+
+	rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+static int
+test_device_configure_invalid_dev_id(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint16_t dev_id, num_devs = 0;
+
+	TEST_ASSERT((num_devs = rte_cryptodev_count()) >= 1,
+			"Need at least %d devices for test", 1);
+
+	/* valid dev_id values */
+	dev_id = ts_params->valid_devs[ts_params->valid_dev_count - 1];
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	/* invalid dev_id values */
+	dev_id = num_devs;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	dev_id = 0xff;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure:"
+			"invalid dev_num %u", dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_device_configure_invalid_queue_pair_ids(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	/* valid - one queue pair */
+	ts_params->conf.nb_queue_pairs = 1;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* valid - max value queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - zero queue pairs */
+	ts_params->conf.nb_queue_pairs = 0;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u, invalid qps: %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value supported by field queue pairs */
+	ts_params->conf.nb_queue_pairs = UINT16_MAX;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+				"Failed test for rte_cryptodev_configure, dev_id %u, invalid qps: %u",
+				ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value + 1 queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0], &ts_params->conf),
+				"Failed test for rte_cryptodev_configure, dev_id %u, invalid qps: %u",
+				ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_queue_pair_descriptor_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_qp_conf qp_conf = {
+		.nb_descriptors = MAX_NUM_OPS_INFLIGHT
+	};
+
+	uint16_t qp_id;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	ts_params->conf.session_mp.nb_objs = RTE_LIBRTE_PMD_QAT_MAX_SESSIONS;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf), "Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+
+	/* Test various ring sizes on this device. memzones can't be
+	 * freed so are re-used if ring is released and re-created. */
+	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Failed test for rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = (uint32_t)(MAX_NUM_OPS_INFLIGHT / 2); /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Failed test for rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT; /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Failed test for rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max supported + 2 */
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT + 2;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max value of parameter type - 1 */
+	qp_conf.nb_descriptors = UINT32_MAX - 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Failed test for rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - default supported size + 1 */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT + 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id, ts_params->valid_devs[0]);
+	}
+
+	/* test invalid queue pair id */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;	/* valid */
+
+	qp_id = DEFAULT_NUM_QPS_PER_QAT_DEVICE;		/* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	qp_id = 0xffff; /* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Plaintext data for tests ***** */
+
+const char catch_22_quote_1[] =
+		"There was only one catch and that was Catch-22, which "
+		"specified that a concern for one's safety in the face of "
+		"dangers that were real and immediate was the process of a "
+		"rational mind. Orr was crazy and could be grounded. All he "
+		"had to do was ask; and as soon as he did, he would no longer "
+		"be crazy and would have to fly more missions. Orr would be "
+		"crazy to fly more missions and sane if he didn't, but if he "
+		"was sane he had to fly them. If he flew them he was crazy "
+		"and didn't have to; but if he didn't want to he was sane and "
+		"had to. Yossarian was moved very deeply by the absolute "
+		"simplicity of this clause of Catch-22 and let out a "
+		"respectful whistle. \"That's some catch, that Catch-22\", he "
+		"observed. \"It's the best there is,\" Doc Daneeka agreed.";
+
+const char catch_22_quote[] =
+		"What a lousy earth! He wondered how many people were "
+		"destitute that same night even in his own prosperous country, "
+		"how many homes were shanties, how many husbands were drunk "
+		"and wives socked, and how many children were bullied, abused, "
+		"or abandoned. How many families hungered for food they could "
+		"not afford to buy? How many hearts were broken? How many "
+		"suicides would take place that same night, how many people "
+		"would go insane? How many cockroaches and landlords would "
+		"triumph? How many winners were losers, successes failures, "
+		"and rich men poor men? How many wise guys were stupid? How "
+		"many happy endings were unhappy endings? How many honest men "
+		"were liars, brave men cowards, loyal men traitors, how many "
+		"sainted men were corrupt, how many people in positions of "
+		"trust had sold their souls to bodyguards, how many had never "
+		"had souls? How many straight-and-narrow paths were crooked "
+		"paths? How many best families were worst families and how "
+		"many good people were bad people? When you added them all up "
+		"and then subtracted, you might be left with only the children, "
+		"and perhaps with Albert Einstein and an old violinist or "
+		"sculptor somewhere.";
+
+#define QUOTE_480_BYTES		(480)
+#define QUOTE_512_BYTES		(512)
+#define QUOTE_768_BYTES		(768)
+#define QUOTE_1024_BYTES	(1024)
+
+
+
+/* ***** SHA1 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA1	(DIGEST_BYTE_LENGTH_SHA1)
+
+static uint8_t hmac_sha1_key[] = {
+	0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+	0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+	0xDE, 0xF4, 0xDE, 0xAD };
+
+static const uint8_t catch_22_480_bytes_SHA1_digest[] = {
+	0xae, 0xd5, 0x60, 0x7e, 0xf5, 0x37, 0xe2, 0xf6,
+	0x28, 0x68, 0x71, 0x91, 0xab, 0x3d, 0x34, 0xba,
+	0x20, 0xb4, 0x57, 0x05 };
+
+static const uint8_t catch_22_512_bytes_HMAC_SHA1_digest[] = {
+	0xc5, 0x1a, 0x08, 0x57, 0x3e, 0x52, 0x59, 0x75,
+	0xa5, 0x2b, 0xb9, 0xef, 0x66, 0xfc, 0xc3, 0x3b,
+	0xf0, 0xa8, 0x46, 0xbd };
+
+/* ***** SHA224 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA224	(DIGEST_BYTE_LENGTH_SHA224)
+
+static const uint8_t catch_22_512_bytes_SHA244_digest[] = {
+	0x35, 0x86, 0x49, 0x1e, 0xdb, 0xaa, 0x9b, 0x6e,
+	0xab, 0x45, 0x19, 0xe0, 0x71, 0xae, 0xa6, 0x6b,
+	0x62, 0x46, 0x72, 0x7b, 0x3d, 0x40, 0x78, 0x25,
+	0x58, 0xde, 0xdf, 0xd0 };
+
+static const uint8_t catch_22_512_bytes_HMAC_SHA244_digest[] = {
+	0x5d, 0x4c, 0xba, 0xcc, 0x1f, 0x6e, 0x94, 0x19,
+	0xb7, 0xe4, 0x2b, 0x5f, 0x20, 0x80, 0xc7, 0xb8,
+	0x14, 0x8c, 0x6d, 0x66, 0xaa, 0xc7, 0x3d, 0x48,
+	0x68, 0x0b, 0xe4, 0x85 };
+
+
+/* ***** AES-CBC Cipher Tests ***** */
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+	0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+	0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A };
+
+static uint8_t aes_cbc_iv[] = {
+	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
+
+static const uint8_t catch_22_quote_2_1Kb_AES_CBC_ciphertext[] = {
+	0x8B, 0X4D, 0XDA, 0X1B, 0XCF, 0X04, 0XA0, 0X31, 0XB4, 0XBF, 0XBD, 0X68, 0X43, 0X20, 0X7E, 0X76,
+	0XB1, 0X96, 0X8B, 0XA2, 0X7C, 0XA2, 0X83, 0X9E, 0X39, 0X5A, 0X2F, 0X7E, 0X92, 0XB4, 0X48, 0X1A,
+	0X3F, 0X6B, 0X5D, 0XDF, 0X52, 0X85, 0X5F, 0X8E, 0X42, 0X3C, 0XFB, 0XE9, 0X1A, 0X24, 0XD6, 0X08,
+	0XDD, 0XFD, 0X16, 0XFB, 0XE9, 0X55, 0XEF, 0XF0, 0XA0, 0X8D, 0X13, 0XAB, 0X81, 0XC6, 0X90, 0X01,
+	0XB5, 0X18, 0X84, 0XB3, 0XF6, 0XE6, 0X11, 0X57, 0XD6, 0X71, 0XC6, 0X3C, 0X3F, 0X2F, 0X33, 0XEE,
+	0X24, 0X42, 0X6E, 0XAC, 0X0B, 0XCA, 0XEC, 0XF9, 0X84, 0XF8, 0X22, 0XAA, 0X60, 0XF0, 0X32, 0XA9,
+	0X75, 0X75, 0X3B, 0XCB, 0X70, 0X21, 0X0A, 0X8D, 0X0F, 0XE0, 0XC4, 0X78, 0X2B, 0XF8, 0X97, 0XE3,
+	0XE4, 0X26, 0X4B, 0X29, 0XDA, 0X88, 0XCD, 0X46, 0XEC, 0XAA, 0XF9, 0X7F, 0XF1, 0X15, 0XEA, 0XC3,
+	0X87, 0XE6, 0X31, 0XF2, 0XCF, 0XDE, 0X4D, 0X80, 0X70, 0X91, 0X7E, 0X0C, 0XF7, 0X26, 0X3A, 0X92,
+	0X4F, 0X18, 0X83, 0XC0, 0X8F, 0X59, 0X01, 0XA5, 0X88, 0XD1, 0XDB, 0X26, 0X71, 0X27, 0X16, 0XF5,
+	0XEE, 0X10, 0X82, 0XAC, 0X68, 0X26, 0X9B, 0XE2, 0X6D, 0XD8, 0X9A, 0X80, 0XDF, 0X04, 0X31, 0XD5,
+	0XF1, 0X35, 0X5C, 0X3B, 0XDD, 0X9A, 0X65, 0XBA, 0X58, 0X34, 0X85, 0X61, 0X1C, 0X42, 0X10, 0X76,
+	0X73, 0X02, 0X42, 0XC9, 0X23, 0X18, 0X8E, 0XB4, 0X6F, 0XB4, 0XA3, 0X54, 0X6E, 0X88, 0X3B, 0X62,
+	0X7C, 0X02, 0X8D, 0X4C, 0X9F, 0XC8, 0X45, 0XF4, 0XC9, 0XDE, 0X4F, 0XEB, 0X22, 0X83, 0X1B, 0XE4,
+	0X49, 0X37, 0XE4, 0XAD, 0XE7, 0XCD, 0X21, 0X54, 0XBC, 0X1C, 0XC2, 0X04, 0X97, 0XB4, 0X10, 0X61,
+	0XF0, 0XE4, 0XEF, 0X27, 0X63, 0X3A, 0XDA, 0X91, 0X41, 0X25, 0X62, 0X1C, 0X5C, 0XB6, 0X38, 0X4A,
+	0X88, 0X71, 0X59, 0X5A, 0X8D, 0XA0, 0X09, 0XAF, 0X72, 0X94, 0XD7, 0X79, 0X5C, 0X60, 0X7C, 0X8F,
+	0X4C, 0XF5, 0XD9, 0XA1, 0X39, 0X6D, 0X81, 0X28, 0XEF, 0X13, 0X28, 0XDF, 0XF5, 0X3E, 0XF7, 0X8E,
+	0X09, 0X9C, 0X78, 0X18, 0X79, 0XB8, 0X68, 0XD7, 0XA8, 0X29, 0X62, 0XAD, 0XDE, 0XE1, 0X61, 0X76,
+	0X1B, 0X05, 0X16, 0XCD, 0XBF, 0X02, 0X8E, 0XA6, 0X43, 0X6E, 0X92, 0X55, 0X4F, 0X60, 0X9C, 0X03,
+	0XB8, 0X4F, 0XA3, 0X02, 0XAC, 0XA8, 0XA7, 0X0C, 0X1E, 0XB5, 0X6B, 0XF8, 0XC8, 0X4D, 0XDE, 0XD2,
+	0XB0, 0X29, 0X6E, 0X40, 0XE6, 0XD6, 0XC9, 0XE6, 0XB9, 0X0F, 0XB6, 0X63, 0XF5, 0XAA, 0X2B, 0X96,
+	0XA7, 0X16, 0XAC, 0X4E, 0X0A, 0X33, 0X1C, 0XA6, 0XE6, 0XBD, 0X8A, 0XCF, 0X40, 0XA9, 0XB2, 0XFA,
+	0X63, 0X27, 0XFD, 0X9B, 0XD9, 0XFC, 0XD5, 0X87, 0X8D, 0X4C, 0XB6, 0XA4, 0XCB, 0XE7, 0X74, 0X55,
+	0XF4, 0XFB, 0X41, 0X25, 0XB5, 0X4B, 0X0A, 0X1B, 0XB1, 0XD6, 0XB7, 0XD9, 0X47, 0X2A, 0XC3, 0X98,
+	0X6A, 0XC4, 0X03, 0X73, 0X1F, 0X93, 0X6E, 0X53, 0X19, 0X25, 0X64, 0X15, 0X83, 0XF9, 0X73, 0X2A,
+	0X74, 0XB4, 0X93, 0X69, 0XC4, 0X72, 0XFC, 0X26, 0XA2, 0X9F, 0X43, 0X45, 0XDD, 0XB9, 0XEF, 0X36,
+	0XC8, 0X3A, 0XCD, 0X99, 0X9B, 0X54, 0X1A, 0X36, 0XC1, 0X59, 0XF8, 0X98, 0XA8, 0XCC, 0X28, 0X0D,
+	0X73, 0X4C, 0XEE, 0X98, 0XCB, 0X7C, 0X58, 0X7E, 0X20, 0X75, 0X1E, 0XB7, 0XC9, 0XF8, 0XF2, 0X0E,
+	0X63, 0X9E, 0X05, 0X78, 0X1A, 0XB6, 0XA8, 0X7A, 0XF9, 0X98, 0X6A, 0XA6, 0X46, 0X84, 0X2E, 0XF6,
+	0X4B, 0XDC, 0X9B, 0X8F, 0X9B, 0X8F, 0XEE, 0XB4, 0XAA, 0X3F, 0XEE, 0XC0, 0X37, 0X27, 0X76, 0XC7,
+	0X95, 0XBB, 0X26, 0X74, 0X69, 0X12, 0X7F, 0XF1, 0XBB, 0XFF, 0XAE, 0XB5, 0X99, 0X6E, 0XCB, 0X0C,
+	0XB9, 0X9F, 0X8B, 0X21, 0XC6, 0X44, 0X3F, 0XB1, 0X2A, 0XA0, 0X63, 0X9E, 0X3F, 0X26, 0X21, 0X64,
+	0X62, 0XE3, 0X54, 0X71, 0X6D, 0XE7, 0X1C, 0X10, 0X72, 0X72, 0XBB, 0X93, 0X75, 0XA0, 0X79, 0X3E,
+	0X7B, 0X6F, 0XDA, 0XF7, 0X52, 0X45, 0X4C, 0X5B, 0XF6, 0X01, 0XAD, 0X2D, 0X50, 0XBE, 0X34, 0XEE,
+	0X67, 0X10, 0X73, 0X68, 0X3D, 0X00, 0X3B, 0XD5, 0XA3, 0X8E, 0XC8, 0X9D, 0X41, 0X66, 0X0D, 0XB5,
+	0X5B, 0X93, 0X50, 0X2F, 0XBD, 0X27, 0X5C, 0XAE, 0X01, 0X8B, 0XE4, 0XB1, 0X08, 0XDD, 0XD3, 0X16,
+	0X0F, 0XFE, 0XA2, 0X40, 0X64, 0X5C, 0XE5, 0XBB, 0X3A, 0X51, 0X12, 0X27, 0XAB, 0X04, 0X4E, 0X36,
+	0XD1, 0XC4, 0X4E, 0X44, 0XF6, 0XD1, 0XFE, 0X0E, 0X3A, 0XEA, 0X9B, 0X0E, 0X76, 0XB8, 0X42, 0X68,
+	0X53, 0XD4, 0XFA, 0XBD, 0XEC, 0XD8, 0X81, 0X5D, 0X6D, 0XB7, 0X5A, 0XDF, 0X33, 0X60, 0XBB, 0X91,
+	0XBC, 0X1C, 0X1D, 0X74, 0XEA, 0X21, 0XE8, 0XF9, 0X85, 0X9E, 0XB3, 0X86, 0XB2, 0X3C, 0X73, 0X2F,
+	0X70, 0XBB, 0XBB, 0X92, 0XC4, 0XDB, 0XF4, 0X0D, 0XF8, 0X26, 0X4A, 0X30, 0X05, 0X8A, 0X78, 0X94,
+	0X0D, 0X76, 0XC2, 0XB3, 0XFF, 0X27, 0X6C, 0X3E, 0X6D, 0XFD, 0XB7, 0XA8, 0X1E, 0X7E, 0X22, 0X57,
+	0X63, 0XAF, 0X17, 0X36, 0X97, 0X5E, 0XEA, 0X22, 0X1F, 0XD1, 0X1C, 0X1D, 0X69, 0XC7, 0X1D, 0X4E,
+	0X6F, 0X44, 0X5B, 0XD0, 0X8D, 0X97, 0XE4, 0X68, 0X0A, 0XB2, 0X4E, 0X9D, 0X7D, 0X3C, 0X0A, 0X28,
+	0X81, 0X69, 0X77, 0X0C, 0X97, 0X0C, 0X62, 0X6E, 0X41, 0X1D, 0XE8, 0XEC, 0XFB, 0X07, 0X00, 0X3D,
+	0XD5, 0XBB, 0XAB, 0X9F, 0XFC, 0X9F, 0X49, 0XC9, 0XD2, 0XC9, 0XE6, 0XBB, 0X22, 0XA9, 0X61, 0X3A,
+	0X6B, 0X3C, 0XDA, 0XFD, 0XC9, 0X67, 0X3A, 0XAF, 0X53, 0X9B, 0XFA, 0X13, 0X68, 0XB5, 0XB1, 0XBD,
+	0XAC, 0X91, 0XBA, 0X3F, 0X6F, 0X82, 0X81, 0XE8, 0X1B, 0X47, 0XC4, 0XE4, 0X2D, 0X23, 0X92, 0X45,
+	0X96, 0XDA, 0X96, 0X49, 0X7D, 0XF9, 0X29, 0X2C, 0X02, 0X9E, 0XD2, 0X43, 0X45, 0X18, 0XA2, 0X13,
+	0X00, 0X93, 0X77, 0X38, 0XB8, 0X93, 0XAB, 0X1A, 0XB9, 0X64, 0XD5, 0X15, 0X3C, 0X04, 0X28, 0X6D,
+	0X66, 0X58, 0XF2, 0X20, 0XB1, 0XD7, 0X10, 0XB5, 0X14, 0XB5, 0XBF, 0X9E, 0XA8, 0X75, 0X47, 0X3C,
+	0X8C, 0XAA, 0XC9, 0X0F, 0X81, 0X79, 0X62, 0XCB, 0X64, 0X95, 0X32, 0X63, 0X16, 0XCD, 0X5D, 0X01,
+	0XF7, 0X3C, 0X1F, 0X69, 0XD8, 0X0F, 0XC6, 0X70, 0X19, 0X35, 0X76, 0XEB, 0XE4, 0XFE, 0XEA, 0XF3,
+	0X81, 0X78, 0XCD, 0XCD, 0XBA, 0X91, 0XE2, 0XDF, 0X73, 0X39, 0X5F, 0X1E, 0X7D, 0X2B, 0XEE, 0X64,
+	0X33, 0X9B, 0XB1, 0X9D, 0X1F, 0X73, 0X3D, 0XDC, 0XA9, 0X35, 0XB6, 0XC6, 0XAF, 0XE2, 0X97, 0X29,
+	0X38, 0XEE, 0X38, 0X26, 0X52, 0X98, 0X17, 0X76, 0XA3, 0X4B, 0XAF, 0X7D, 0XD0, 0X2D, 0X43, 0X52,
+	0XAD, 0X58, 0X4F, 0X0A, 0X6B, 0X4F, 0X10, 0XB9, 0X38, 0XAB, 0X3A, 0XD5, 0X77, 0XAE, 0X83, 0XF3,
+	0X8C, 0X48, 0X1A, 0XC6, 0X61, 0XCF, 0XE5, 0XA6, 0X2B, 0X5B, 0X60, 0X94, 0XFB, 0X04, 0X34, 0XFC,
+	0X0F, 0X67, 0X1F, 0XFE, 0X42, 0X0E, 0XE1, 0X58, 0X2B, 0X04, 0X11, 0XEB, 0X83, 0X74, 0X06, 0XC5,
+	0XEF, 0X83, 0XA5, 0X40, 0XCB, 0X69, 0X18, 0X7E, 0XDB, 0X71, 0XBF, 0XC2, 0XFA, 0XEF, 0XF5, 0XB9,
+	0X03, 0XF1, 0XF8, 0X78, 0X7F, 0X71, 0XE3, 0XBB, 0XDE, 0XF3, 0XC3, 0X03, 0X29, 0X9A, 0XBF, 0XD6,
+	0XCD, 0XA7, 0X35, 0XD5, 0XE8, 0X88, 0XAE, 0X89, 0XCE, 0X4B, 0X93, 0X4E, 0X04, 0X02, 0X41, 0X86,
+	0X7F, 0X4A, 0X96, 0X23, 0X19, 0X6D, 0XD1, 0X2C, 0X9C, 0X7A, 0X2C, 0X3B, 0XD6, 0X98, 0X7B, 0X4C
+};
+
+
+/* ***** AES-CBC / HMAC-SHA1 Hash Tests ***** */
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
+	0x8B, 0X4D, 0XDA, 0X1B, 0XCF, 0X04, 0XA0, 0X31, 0XB4, 0XBF, 0XBD, 0X68, 0X43, 0X20, 0X7E, 0X76,
+	0XB1, 0X96, 0X8B, 0XA2, 0X7C, 0XA2, 0X83, 0X9E, 0X39, 0X5A, 0X2F, 0X7E, 0X92, 0XB4, 0X48, 0X1A,
+	0X3F, 0X6B, 0X5D, 0XDF, 0X52, 0X85, 0X5F, 0X8E, 0X42, 0X3C, 0XFB, 0XE9, 0X1A, 0X24, 0XD6, 0X08,
+	0XDD, 0XFD, 0X16, 0XFB, 0XE9, 0X55, 0XEF, 0XF0, 0XA0, 0X8D, 0X13, 0XAB, 0X81, 0XC6, 0X90, 0X01,
+	0XB5, 0X18, 0X84, 0XB3, 0XF6, 0XE6, 0X11, 0X57, 0XD6, 0X71, 0XC6, 0X3C, 0X3F, 0X2F, 0X33, 0XEE,
+	0X24, 0X42, 0X6E, 0XAC, 0X0B, 0XCA, 0XEC, 0XF9, 0X84, 0XF8, 0X22, 0XAA, 0X60, 0XF0, 0X32, 0XA9,
+	0X75, 0X75, 0X3B, 0XCB, 0X70, 0X21, 0X0A, 0X8D, 0X0F, 0XE0, 0XC4, 0X78, 0X2B, 0XF8, 0X97, 0XE3,
+	0XE4, 0X26, 0X4B, 0X29, 0XDA, 0X88, 0XCD, 0X46, 0XEC, 0XAA, 0XF9, 0X7F, 0XF1, 0X15, 0XEA, 0XC3,
+	0X87, 0XE6, 0X31, 0XF2, 0XCF, 0XDE, 0X4D, 0X80, 0X70, 0X91, 0X7E, 0X0C, 0XF7, 0X26, 0X3A, 0X92,
+	0X4F, 0X18, 0X83, 0XC0, 0X8F, 0X59, 0X01, 0XA5, 0X88, 0XD1, 0XDB, 0X26, 0X71, 0X27, 0X16, 0XF5,
+	0XEE, 0X10, 0X82, 0XAC, 0X68, 0X26, 0X9B, 0XE2, 0X6D, 0XD8, 0X9A, 0X80, 0XDF, 0X04, 0X31, 0XD5,
+	0XF1, 0X35, 0X5C, 0X3B, 0XDD, 0X9A, 0X65, 0XBA, 0X58, 0X34, 0X85, 0X61, 0X1C, 0X42, 0X10, 0X76,
+	0X73, 0X02, 0X42, 0XC9, 0X23, 0X18, 0X8E, 0XB4, 0X6F, 0XB4, 0XA3, 0X54, 0X6E, 0X88, 0X3B, 0X62,
+	0X7C, 0X02, 0X8D, 0X4C, 0X9F, 0XC8, 0X45, 0XF4, 0XC9, 0XDE, 0X4F, 0XEB, 0X22, 0X83, 0X1B, 0XE4,
+	0X49, 0X37, 0XE4, 0XAD, 0XE7, 0XCD, 0X21, 0X54, 0XBC, 0X1C, 0XC2, 0X04, 0X97, 0XB4, 0X10, 0X61,
+	0XF0, 0XE4, 0XEF, 0X27, 0X63, 0X3A, 0XDA, 0X91, 0X41, 0X25, 0X62, 0X1C, 0X5C, 0XB6, 0X38, 0X4A,
+	0X88, 0X71, 0X59, 0X5A, 0X8D, 0XA0, 0X09, 0XAF, 0X72, 0X94, 0XD7, 0X79, 0X5C, 0X60, 0X7C, 0X8F,
+	0X4C, 0XF5, 0XD9, 0XA1, 0X39, 0X6D, 0X81, 0X28, 0XEF, 0X13, 0X28, 0XDF, 0XF5, 0X3E, 0XF7, 0X8E,
+	0X09, 0X9C, 0X78, 0X18, 0X79, 0XB8, 0X68, 0XD7, 0XA8, 0X29, 0X62, 0XAD, 0XDE, 0XE1, 0X61, 0X76,
+	0X1B, 0X05, 0X16, 0XCD, 0XBF, 0X02, 0X8E, 0XA6, 0X43, 0X6E, 0X92, 0X55, 0X4F, 0X60, 0X9C, 0X03,
+	0XB8, 0X4F, 0XA3, 0X02, 0XAC, 0XA8, 0XA7, 0X0C, 0X1E, 0XB5, 0X6B, 0XF8, 0XC8, 0X4D, 0XDE, 0XD2,
+	0XB0, 0X29, 0X6E, 0X40, 0XE6, 0XD6, 0XC9, 0XE6, 0XB9, 0X0F, 0XB6, 0X63, 0XF5, 0XAA, 0X2B, 0X96,
+	0XA7, 0X16, 0XAC, 0X4E, 0X0A, 0X33, 0X1C, 0XA6, 0XE6, 0XBD, 0X8A, 0XCF, 0X40, 0XA9, 0XB2, 0XFA,
+	0X63, 0X27, 0XFD, 0X9B, 0XD9, 0XFC, 0XD5, 0X87, 0X8D, 0X4C, 0XB6, 0XA4, 0XCB, 0XE7, 0X74, 0X55,
+	0XF4, 0XFB, 0X41, 0X25, 0XB5, 0X4B, 0X0A, 0X1B, 0XB1, 0XD6, 0XB7, 0XD9, 0X47, 0X2A, 0XC3, 0X98,
+	0X6A, 0XC4, 0X03, 0X73, 0X1F, 0X93, 0X6E, 0X53, 0X19, 0X25, 0X64, 0X15, 0X83, 0XF9, 0X73, 0X2A,
+	0X74, 0XB4, 0X93, 0X69, 0XC4, 0X72, 0XFC, 0X26, 0XA2, 0X9F, 0X43, 0X45, 0XDD, 0XB9, 0XEF, 0X36,
+	0XC8, 0X3A, 0XCD, 0X99, 0X9B, 0X54, 0X1A, 0X36, 0XC1, 0X59, 0XF8, 0X98, 0XA8, 0XCC, 0X28, 0X0D,
+	0X73, 0X4C, 0XEE, 0X98, 0XCB, 0X7C, 0X58, 0X7E, 0X20, 0X75, 0X1E, 0XB7, 0XC9, 0XF8, 0XF2, 0X0E,
+	0X63, 0X9E, 0X05, 0X78, 0X1A, 0XB6, 0XA8, 0X7A, 0XF9, 0X98, 0X6A, 0XA6, 0X46, 0X84, 0X2E, 0XF6,
+	0X4B, 0XDC, 0X9B, 0X8F, 0X9B, 0X8F, 0XEE, 0XB4, 0XAA, 0X3F, 0XEE, 0XC0, 0X37, 0X27, 0X76, 0XC7,
+	0X95, 0XBB, 0X26, 0X74, 0X69, 0X12, 0X7F, 0XF1, 0XBB, 0XFF, 0XAE, 0XB5, 0X99, 0X6E, 0XCB, 0X0C
+};
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0X4f, 0X88, 0X1b, 0Xb6, 0X8f, 0Xd8, 0X60,
+	0X42, 0X1a, 0X7d, 0X3d, 0Xf5, 0X82, 0X80, 0Xf1,
+	0X18, 0X8c, 0X1d, 0X32 };
+
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
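+	/* chaining cipher -> auth: the payload is encrypted first and the
+	 * digest is then generated over the resulting ciphertext */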
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(ut_params->ibuf,
+			QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
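+	/*
+	 * The digest lands in the tailroom appended above, immediately after
+	 * the 512 byte payload, hence the physical address offset.
+	 */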
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
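+	/* The IV was prepended to the payload above, so the cipher/hash
+	 * offsets below are relative to the start of the IV. */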
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
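+	/*
+	 * The multi-buffer PMD produces a truncated digest, so compare only
+	 * the truncated length when running on that device.
+	 */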
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	TEST_ASSERT_NOT_NULL(rte_pktmbuf_offload_alloc_crypto_xforms(
+			ut_params->ol, 2),
+			"failed to allocate space for crypto transforms");
+
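+	/*
+	 * In session-less mode the op carries its own xform chain: space for
+	 * two chained xforms (cipher -> auth) was reserved in the offload
+	 * structure above and is populated below.
+	 */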
+	/* Set crypto operation data parameters */
+	ut_params->op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
+
+	/* cipher parameters */
+	ut_params->op->xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->op->xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->op->xform->cipher.key.data = aes_cbc_key;
+	ut_params->op->xform->cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* hash parameters */
+	ut_params->op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+
+	ut_params->op->xform->next->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->op->xform->next->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->op->xform->next->auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->op->xform->next->auth.key.data = hmac_sha1_key;
+	ut_params->op->xform->next->auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(ut_params->ibuf,
+			QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			DIGEST_BYTE_LENGTH_SHA1);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
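+	/*
+	 * For the verify path the chain is auth -> cipher, so the digest
+	 * should be checked against the ciphertext before decryption.
+	 */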
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** AES-CBC / HMAC-SHA256 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+static uint8_t hmac_sha256_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest[] = {
+	0xc8, 0x57, 0x57, 0x31, 0x03, 0xe0, 0x03, 0x55,
+	0x07, 0xc8, 0x9e, 0x7f, 0x48, 0x9a, 0x61, 0x9a,
+	0x68, 0xee, 0x03, 0x0e, 0x71, 0x75, 0xc7, 0xf4,
+	0x2e, 0x45, 0x26, 0x32, 0x7c, 0x12, 0x15, 0x15 };
+
+static int
+test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA256 :
+					DIGEST_BYTE_LENGTH_SHA256,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+							CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-SHA512 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA512  (DIGEST_BYTE_LENGTH_SHA512)
+
+static uint8_t hmac_sha512_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x65, 0x1C, 0x42, 0x50, 0x76,
+	0x9a, 0xaf, 0x88, 0x1b, 0xb6, 0x8f, 0xf8, 0x60,
+	0xa2, 0x5a, 0x7f, 0x3f, 0xf4, 0x72, 0x70, 0xf1,
+	0xF5, 0x35, 0x4C, 0x3B, 0xDD, 0x90, 0x65, 0xB0,
+	0x47, 0x3a, 0x75, 0x61, 0x5C, 0xa2, 0x10, 0x76,
+	0x9a, 0xaf, 0x77, 0x5b, 0xb6, 0x7f, 0xf7, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest[] = {
+	0x5D, 0x54, 0x66, 0xC1, 0x6E, 0xBC, 0x04, 0xB8,
+	0x46, 0xB8, 0x08, 0x6E, 0xE0, 0xF0, 0x43, 0x48,
+	0x37, 0x96, 0x9C, 0xC6, 0x9C, 0xC2, 0x1E, 0xE8,
+	0xF2, 0x0C, 0x0B, 0xEF, 0x86, 0xA2, 0xE3, 0x70,
+	0x95, 0xC8, 0xB3, 0x06, 0x47, 0xA9, 0x90, 0xE8,
+	0xA0, 0xC6, 0x72, 0x69, 0x05, 0xC0, 0x0D, 0x0E,
+	0x21, 0x96, 0x65, 0x93, 0x74, 0x43, 0x2A, 0x1D,
+	0x2E, 0xBF, 0xC2, 0xC2, 0xEE, 0xCC, 0x2F, 0x0A };
+
+static int
+test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 :
+					DIGEST_BYTE_LENGTH_SHA512,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT(test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params)
+			== TEST_SUCCESS, "Failed to create session params");
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
+			ut_params, ts_params);
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params)
+{
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params)
+{
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-AES_XCBC Chain Tests ***** */
+
+static uint8_t aes_cbc_hmac_aes_xcbc_key[] = {
+	0x87, 0x61, 0x54, 0x53, 0xC4, 0x6D, 0xDD, 0x51,
+	0xE1, 0x9F, 0x86, 0x64, 0x39, 0x0A, 0xE6, 0x59
+	};
+
+static const uint8_t catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest[] = {
+	0xE0, 0xAC, 0x9A, 0xC4, 0x22, 0x64, 0x35, 0x89,
+	0x77, 0x1D, 0x8B, 0x75
+	};
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, catch_22_quote,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+		QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** Crypto Device Statistics Tests ***** */
+
+static int
+test_stats(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_stats stats;
+	struct rte_cryptodev *dev;
+	cryptodev_stats_get_t temp_pfn;
+
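+	/*
+	 * Exercise the error paths first: an out-of-range device id should
+	 * return -ENODEV, a NULL stats pointer should be rejected, and with
+	 * the driver's stats_get op cleared (and restored afterwards) the
+	 * call should return -ENOTSUP.
+	 */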
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0] + 600,
+			&stats) == -ENODEV),
+		"rte_cryptodev_stats_get invalid dev failed");
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			NULL) != 0),
+		"rte_cryptodev_stats_get with NULL stats pointer failed");
+	dev = &rte_cryptodevs[ts_params->valid_devs[0]];
+	temp_pfn = dev->dev_ops->stats_get;
+	dev->dev_ops->stats_get = NULL;
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats) == -ENOTSUP),
+		"rte_cryptodev_stats_get with stats_get op unset failed");
+	dev->dev_ops->stats_get = temp_pfn;
+
+	/* Test expected values */
+	ut_setup();
+	test_AES_CBC_HMAC_SHA1_encrypt_digest();
+	ut_teardown();
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+	TEST_ASSERT((stats.enqueue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueue error stat");
+	TEST_ASSERT((stats.dequeue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeue error stat");
+
+	/* An invalid device id should be ignored and must not reset stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0] + 300);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	/* Check that a valid reset clears the stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_multi_session(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	unsigned nb_sessions = gbl_cryptodev_type == RTE_CRYPTODEV_QAT_PMD ?
+			RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+			RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+	struct rte_cryptodev_session *sessions[nb_sessions + 1];
+	uint16_t i;
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create multiple crypto sessions */
+	for (i = 0; i < nb_sessions; i++) {
+		sessions[i] = rte_cryptodev_session_create(
+				ts_params->valid_devs[0], &ut_params->auth_xform);
+		TEST_ASSERT_NOT_NULL(sessions[i],
+				"Session creation failed at session number %u", i);
+
+		/* Attempt to send a request on each session */
+		TEST_ASSERT_SUCCESS(test_AES_CBC_HMAC_SHA512_decrypt_perform(
+				sessions[i], ut_params, ts_params),
+				"Failed to perform decrypt on request number %u.", i);
+	}
+
+	/* The session pool is now exhausted, so the next create should fail */
+	sessions[i] = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NULL(sessions[i], "Session creation succeeded unexpectedly!");
+
+	for (i = 0; i < nb_sessions; i++)
+		rte_cryptodev_session_free(ts_params->valid_devs[0], sessions[i]);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_not_in_place_crypto(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_mbuf *dst_m = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(
+			ts_params->valid_devs[0], &ut_params->auth_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol, "Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
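+	/*
+	 * Supplying a separate destination mbuf makes this an out-of-place
+	 * operation: the device should write the plaintext into dst_m rather
+	 * than overwriting the source buffer.
+	 */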
+	ut_params->op->dst.m = dst_m;
+	ut_params->op->dst.offset = 0;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->op->dst.m, char *),
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_device_configure_invalid_queue_pair_ids),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_queue_pair_descriptor_setup),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_stats),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite cryptodev_aesni_testsuite  = {
+	.suite_name = "Crypto Device AESNI Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_not_in_place_crypto),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+
+static int
+test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_QAT_PMD;
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_qat_cmd = {
+	.command = "cryptodev_qat_autotest",
+	.callback = test_cryptodev_qat,
+};
+
+static int
+test_cryptodev_aesni(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_aesni_testsuite);
+}
+
+static struct test_command cryptodev_aesni_cmd = {
+	.command = "cryptodev_aesni_autotest",
+	.callback = test_cryptodev_aesni,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
+REGISTER_TEST_COMMAND(cryptodev_aesni_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
new file mode 100644
index 0000000..034393e
--- /dev/null
+++ b/app/test/test_cryptodev.h
@@ -0,0 +1,68 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef TEST_CRYPTODEV_H_
+#define TEST_CRYPTODEV_H_
+
+#define HEX_DUMP 0
+
+#define FALSE                           0
+#define TRUE                            1
+
+#define MAX_NUM_OPS_INFLIGHT            (4096)
+#define MIN_NUM_OPS_INFLIGHT            (128)
+#define DEFAULT_NUM_OPS_INFLIGHT        (128)
+
+#define MAX_NUM_QPS_PER_QAT_DEVICE      (2)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE  (2)
+#define DEFAULT_BURST_SIZE              (64)
+#define DEFAULT_NUM_XFORMS              (2)
+#define NUM_MBUFS                       (8191)
+#define MBUF_CACHE_SIZE                 (250)
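+/* Data room must fit a 2KB payload plus the largest (SHA512) digest, the
+ * mbuf header and headroom. */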
+#define MBUF_SIZE   (2048 + DIGEST_BYTE_LENGTH_SHA512 + \
+				sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+
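+/* Convert a length in bits to bytes */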
+#define BYTE_LENGTH(x)				((x) / 8)
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_MD5			(BYTE_LENGTH(128))
+#define DIGEST_BYTE_LENGTH_SHA1			(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA224		(BYTE_LENGTH(224))
+#define DIGEST_BYTE_LENGTH_SHA256		(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA384		(BYTE_LENGTH(384))
+#define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
+#define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
+#define AES_XCBC_MAC_KEY_SZ			(16)
+
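+/* Truncated digest lengths produced by the multi-buffer PMD */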
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512		(32)
+
+#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
new file mode 100644
index 0000000..1f9e1a2
--- /dev/null
+++ b/app/test/test_cryptodev_perf.c
@@ -0,0 +1,1449 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_hexdump.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+
+#define PERF_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_REQS_TO_SUBMIT	(10000000)
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_mp;
+	struct rte_mempool *mbuf_ol_pool;
+
+	uint16_t nb_queue_pairs;
+
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+	uint8_t dev_id;
+};
+
+
+#define MAX_NUM_OF_OPS_PER_UT	(128)
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op *op;
+	struct rte_mbuf_offload *ol;
+
+	struct rte_mbuf *obuf[MAX_NUM_OF_OPS_PER_UT];
+	struct rte_mbuf *ibuf[MAX_NUM_OF_OPS_PER_UT];
+
+	uint8_t *digest;
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
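+	/* If a block size is given, round the copy length down to a whole
+	 * number of blocks. */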
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+	return m;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static enum rte_cryptodev_type gbl_cryptodev_preftest_devtype;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, valid_dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_mp = rte_mempool_lookup("CRYPTO_PERF_MBUFPOOL");
+	if (ts_params->mbuf_mp == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_mp = rte_mempool_create("CRYPTO_PERF_MBUFPOOL", NUM_MBUFS,
+			MBUF_SIZE, MBUF_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
+			rte_socket_id(), 0);
+		if (ts_params->mbuf_mp == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create("CRYPTO_OP_POOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE,
+				DEFAULT_NUM_XFORMS *
+				sizeof(struct rte_crypto_xform),
+				rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found\n");
+		return TEST_FAILED;
+	}
+
+	/* Search for the first valid device of the requested type */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_preftest_devtype) {
+			ts_params->dev_id = i;
+			valid_dev_id = 1;
+			break;
+		}
+	}
+
+	if (!valid_dev_id)
+		return TEST_FAILED;
+
+	/*
+	 * Since we can't free and re-allocate queue memory, always set the
+	 * queues on this device up to max size first so enough memory is
+	 * allocated for any later re-configures needed by other tests.
+	 */
+
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs =
+			(gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_QAT_PMD) ?
+					RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+					RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->dev_id);
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	/* Now reconfigure the queues to the size we actually want to use in
+	 * this test suite. */
+	ts_params->qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_mp));
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	rte_cryptodev_stats_reset(ts_params->dev_id);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->dev_id),
+			"Failed to start cryptodev %u",
+			ts_params->dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	unsigned i;
+
+	/* free crypto session structure */
+	if (ut_params->sess)
+		rte_cryptodev_session_free(ts_params->dev_id,
+				ut_params->sess);
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
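+	/*
+	 * For in-place operations obuf may alias ibuf, so free whichever
+	 * pointer is set to avoid a double free.
+	 */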
+	for (i = 0; i < MAX_NUM_OF_OPS_PER_UT; i++) {
+		if (ut_params->obuf[i])
+			rte_pktmbuf_free(ut_params->obuf[i]);
+		else if (ut_params->ibuf[i])
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+	}
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+
+	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->dev_id);
+}
+
+const char plaintext_quote[] =
+		"THE COUNT OF MONTE CRISTO by Alexandre Dumas, Pere Chapter 1. "
+		"Marseilles--The Arrival. On the 24th of February, 1815, the "
+		"look-out at Notre-Dame de la Garde signalled the three-master,"
+		" the Pharaon from Smyrna, Trieste, and Naples. As usual, a "
+		"pilot put off immediately, and rounding the Chateau d'If, got "
+		"on board the vessel between Cape Morgion and Rion island. "
+		"Immediately, and according to custom, the ramparts of Fort "
+		"Saint-Jean were covered with spectators; it is always an event "
+		"at Marseilles for a ship to come into port, especially when "
+		"this ship, like the Pharaon, has been built, rigged, and laden"
+		" at the old Phocee docks, and belongs to an owner of the city."
+		" The ship drew on and had safely passed the strait, which some"
+		" volcanic shock has made between the Calasareigne and Jaros "
+		"islands; had doubled Pomegue, and approached the harbor under"
+		" topsails, jib, and spanker, but so slowly and sedately that"
+		" the idlers, with that instinct which is the forerunner of "
+		"evil, asked one another what misfortune could have happened "
+		"on board. However, those experienced in navigation saw plainly"
+		" that if any accident had occurred, it was not to the vessel "
+		"herself, for she bore down with all the evidence of being "
+		"skilfully handled, the anchor a-cockbill, the jib-boom guys "
+		"already eased off, and standing by the side of the pilot, who"
+		" was steering the Pharaon towards the narrow entrance of the"
+		" inner port, was a young man, who, with activity and vigilant"
+		" eye, watched every motion of the ship, and repeated each "
+		"direction of the pilot. The vague disquietude which prevailed "
+		"among the spectators had so much affected one of the crowd "
+		"that he did not await the arrival of the vessel in harbor, but"
+		" jumping into a small skiff, desired to be pulled alongside "
+		"the Pharaon, which he reached as she rounded into La Reserve "
+		"basin. When the young man on board saw this person approach, "
+		"he left his station by the pilot, and, hat in hand, leaned "
+		"over the ship's bulwarks. He was a fine, tall, slim young "
+		"fellow of eighteen or twenty, with black eyes, and hair as "
+		"dark as a raven's wing; and his whole appearance bespoke that "
+		"calmness and resolution peculiar to men accustomed from their "
+		"cradle to contend with danger. \"Ah, is it you, Dantes?\" "
+		"cried the man in the skiff. \"What's the matter? and why have "
+		"you such an air of sadness aboard?\" \"A great misfortune, M. "
+		"Morrel,\" replied the young man,--\"a great misfortune, for me"
+		" especially! Off Civita Vecchia we lost our brave Captain "
+		"Leclere.\" \"And the cargo?\" inquired the owner, eagerly. "
+		"\"Is all safe, M. Morrel; and I think you will be satisfied on"
+		" that head. But poor Captain Leclere--\" \"What happened to "
+		"him?\" asked the owner, with an air of considerable "
+		"resignation. \"What happened to the worthy captain?\" \"He "
+		"died.\" \"Fell into the sea?\" \"No, sir, he died of "
+		"brain-fever in dreadful agony.\" Then turning to the crew, "
+		"he said, \"Bear a hand there, to take in sail!\" All hands "
+		"obeyed, and at once the eight or ten seamen who composed the "
+		"crew, sprang to their respective stations at the spanker "
+		"brails and outhaul, topsail sheets and halyards, the jib "
+		"downhaul, and the topsail clewlines and buntlines. The young "
+		"sailor gave a look to see that his orders were promptly and "
+		"accurately obeyed, and then turned again to the owner. \"And "
+		"how did this misfortune occur?\" inquired the latter, resuming"
+		" the interrupted conversation. \"Alas, sir, in the most "
+		"unexpected manner. After a long talk with the harbor-master, "
+		"Captain Leclere left Naples greatly disturbed in mind. In "
+		"twenty-four hours he was attacked by a fever, and died three "
+		"days afterwards. We performed the usual burial service, and he"
+		" is at his rest, sewn up in his hammock with a thirty-six "
+		"pound shot at his head and his heels, off El Giglio island. "
+		"We bring to his widow his sword and cross of honor. It was "
+		"worth while, truly,\" added the young man with a melancholy "
+		"smile, \"to make war against the English for ten years, and "
+		"to die in his bed at last, like everybody else.";
+
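+/* Buffer lengths (in bytes) at which the performance tests are run */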
+#define QUOTE_LEN_64B		(64)
+#define QUOTE_LEN_128B		(128)
+#define QUOTE_LEN_256B		(256)
+#define QUOTE_LEN_512B		(512)
+#define QUOTE_LEN_768B		(768)
+#define QUOTE_LEN_1024B		(1024)
+#define QUOTE_LEN_1280B		(1280)
+#define QUOTE_LEN_1536B		(1536)
+#define QUOTE_LEN_1792B		(1792)
+#define QUOTE_LEN_2048B		(2048)
+
+/* ***** AES-CBC / HMAC-SHA256 Performance Tests ***** */
+
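+/*
+ * The HMAC-SHA256 key is sized to the digest length (32B); the AES-CBC IV
+ * matches the 16B block size, which here also equals the AES-128 key length.
+ */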
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
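+/* Fixed (non-secret) key and IV test vectors */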
+static uint8_t aes_cbc_key[] = {
+		0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+		0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA };
+
+static uint8_t aes_cbc_iv[] = {
+		0xf5, 0xd3, 0x89, 0x0f, 0x47, 0x00, 0xcb, 0x52,
+		0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1 };
+
+static uint8_t hmac_sha256_key[] = {
+		0xff, 0xcb, 0x37, 0x30, 0x1d, 0x4a, 0xc2, 0x41,
+		0x49, 0x03, 0xdd, 0xc6, 0xb8, 0xca, 0x55, 0x7a,
+		0x58, 0x34, 0x85, 0x61, 0x1c, 0x42, 0x10, 0x76,
+		0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+/* Expected AES-128-CBC ciphertexts of plaintext_quote at each buffer size */
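+/*
+ * These vectors should be reproducible with any AES-128-CBC implementation;
+ * for example, assuming the standard openssl CLI and a file quote.txt
+ * holding plaintext_quote (no padding, which matches the array sizes):
+ *
+ *   head -c 64 quote.txt | openssl enc -aes-128-cbc -nopad \
+ *     -K e423338a356461e2f1355c3bdd9a65ba \
+ *     -iv f5d3890f4700cb52421a7d3df58280f1
+ */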
+
+static const uint8_t AES_CBC_ciphertext_64B[] = {
+		0x05, 0x15, 0x77, 0x32, 0xc9, 0x66, 0x91, 0x50, 0x93, 0x9f, 0xbb, 0x4e, 0x2e, 0x5a, 0x02, 0xd0,
+		0x2d, 0x9d, 0x31, 0x5d, 0xc8, 0x9e, 0x86, 0x36, 0x54, 0x5c, 0x50, 0xe8, 0x75, 0x54, 0x74, 0x5e,
+		0xd5, 0xa2, 0x84, 0x21, 0x2d, 0xc5, 0xf8, 0x1c, 0x55, 0x1a, 0xba, 0x91, 0xce, 0xb5, 0xa3, 0x1e,
+		0x31, 0xbf, 0xe9, 0xa1, 0x97, 0x5c, 0x2b, 0xd6, 0x57, 0xa5, 0x9f, 0xab, 0xbd, 0xb0, 0x9b, 0x9c
+};
+
+static const uint8_t AES_CBC_ciphertext_128B[] = {
+		0x79, 0x92, 0x65, 0xc8, 0xfb, 0x0a, 0xc7, 0xc4, 0x9b, 0x3b, 0xbe, 0x69, 0x7f, 0x7c, 0xf4, 0x4e,
+		0xa5, 0x0d, 0xf6, 0x33, 0xc4, 0xdf, 0xf3, 0x0d, 0xdb, 0xb9, 0x68, 0x34, 0xb0, 0x0d, 0xbd, 0xb9,
+		0xa7, 0xf3, 0x86, 0x50, 0x2a, 0xbe, 0x50, 0x5d, 0xb3, 0xbe, 0x72, 0xf9, 0x02, 0xb1, 0x69, 0x0b,
+		0x8c, 0x96, 0x4c, 0x3c, 0x0c, 0x1e, 0x76, 0xe5, 0x7e, 0x75, 0xdd, 0xd0, 0xa9, 0x75, 0x00, 0x13,
+		0x6b, 0x1e, 0xc0, 0xad, 0xfc, 0x03, 0xb5, 0x99, 0xdc, 0x37, 0x35, 0xfc, 0x16, 0x34, 0xfd, 0xb4,
+		0xea, 0x1e, 0xb6, 0x51, 0xdf, 0xab, 0x87, 0xd6, 0x87, 0x41, 0xfa, 0x1c, 0xc6, 0x78, 0xa6, 0x3c,
+		0x1d, 0x76, 0xfe, 0xff, 0x65, 0xfc, 0x63, 0x1e, 0x1f, 0xe2, 0x7c, 0x9b, 0xa2, 0x72, 0xc3, 0x34,
+		0x23, 0xdf, 0x01, 0xf0, 0xfd, 0x02, 0x8b, 0x97, 0x00, 0x2b, 0x97, 0x4e, 0xab, 0x98, 0x21, 0x3c
+};
+
+static const uint8_t AES_CBC_ciphertext_256B[] = {
+		0xc7, 0x71, 0x2b, 0xed, 0x2c, 0x97, 0x59, 0xfa, 0xcf, 0x5a, 0xb9, 0x31, 0x92, 0xe0, 0xc9, 0x92,
+		0xc0, 0x2d, 0xd5, 0x9c, 0x84, 0xbf, 0x70, 0x36, 0x13, 0x48, 0xe0, 0xb1, 0xbf, 0x6c, 0xcd, 0x91,
+		0xa0, 0xc3, 0x57, 0x6c, 0x3f, 0x0e, 0x34, 0x41, 0xe7, 0x9c, 0xc0, 0xec, 0x18, 0x0c, 0x05, 0x52,
+		0x78, 0xe2, 0x3c, 0x6e, 0xdf, 0xa5, 0x49, 0xc7, 0xf2, 0x55, 0x00, 0x8f, 0x65, 0x6d, 0x4b, 0xd0,
+		0xcb, 0xd4, 0xd2, 0x0b, 0xea, 0xf4, 0xb0, 0x85, 0x61, 0x9e, 0x36, 0xc0, 0x71, 0xb7, 0x80, 0xad,
+		0x40, 0x78, 0xb4, 0x70, 0x2b, 0xe8, 0x80, 0xc5, 0x19, 0x35, 0x96, 0x55, 0x3b, 0x40, 0x03, 0xbb,
+		0x9f, 0xa6, 0xc2, 0x82, 0x92, 0x04, 0xc3, 0xa6, 0x96, 0xc4, 0x7f, 0x4c, 0x3e, 0x3c, 0x79, 0x82,
+		0x88, 0x8b, 0x3f, 0x8b, 0xc5, 0x9f, 0x44, 0xbe, 0x71, 0xe7, 0x09, 0xa2, 0x40, 0xa2, 0x23, 0x4e,
+		0x9f, 0x31, 0xab, 0x6f, 0xdf, 0x59, 0x40, 0xe1, 0x12, 0x15, 0x55, 0x4b, 0xea, 0x3f, 0xa1, 0x41,
+		0x4f, 0xaf, 0xcd, 0x27, 0x2a, 0x61, 0xa1, 0x9e, 0x82, 0x30, 0x05, 0x05, 0x55, 0xce, 0x99, 0xd3,
+		0x8f, 0x3f, 0x86, 0x79, 0xdc, 0x9f, 0x33, 0x07, 0x75, 0x26, 0xc8, 0x72, 0x81, 0x0f, 0x9b, 0xf7,
+		0xb1, 0xfb, 0xd3, 0x91, 0x36, 0x08, 0xab, 0x26, 0x70, 0x53, 0x0c, 0x99, 0xfd, 0xa9, 0x07, 0xb4,
+		0xe9, 0xce, 0xc1, 0xd6, 0xd2, 0x2c, 0x71, 0x80, 0xec, 0x59, 0x61, 0x0b, 0x24, 0xf0, 0x6d, 0x33,
+		0x73, 0x45, 0x6e, 0x80, 0x03, 0x45, 0xf2, 0x76, 0xa5, 0x8a, 0xc9, 0xcf, 0xaf, 0x4a, 0xed, 0x35,
+		0xc0, 0x97, 0x52, 0xc5, 0x00, 0xdf, 0xef, 0xc7, 0x9f, 0xf2, 0xe8, 0x15, 0x3e, 0xb3, 0x30, 0xe7,
+		0x00, 0xd0, 0x4e, 0xeb, 0x79, 0xf6, 0xf6, 0xcf, 0xf0, 0xe7, 0x61, 0xd5, 0x3d, 0x6a, 0x73, 0x9d
+};
+
+static const uint8_t AES_CBC_ciphertext_512B[] = {
+		0xb4, 0xc6, 0xc6, 0x5f, 0x7e, 0xca, 0x05, 0x70, 0x21, 0x7b, 0x92, 0x9e, 0x23, 0xe7, 0x92, 0xb8,
+		0x27, 0x3d, 0x20, 0x29, 0x57, 0xfa, 0x1f, 0x26, 0x0a, 0x04, 0x34, 0xa6, 0xf2, 0xdc, 0x44, 0xb6,
+		0x43, 0x40, 0x62, 0xde, 0x0c, 0xde, 0x1c, 0x30, 0x43, 0x85, 0x0b, 0xe8, 0x93, 0x1f, 0xa1, 0x2a,
+		0x8a, 0x27, 0x35, 0x39, 0x14, 0x9f, 0x37, 0x64, 0x59, 0xb5, 0x0e, 0x96, 0x82, 0x5d, 0x63, 0x45,
+		0xd6, 0x93, 0x89, 0x46, 0xe4, 0x71, 0x31, 0xeb, 0x0e, 0xd1, 0x7b, 0xda, 0x90, 0xb5, 0x81, 0xac,
+		0x76, 0x54, 0x54, 0x85, 0x0b, 0xa9, 0x46, 0x9c, 0xf0, 0xfd, 0xde, 0x5d, 0xa8, 0xe3, 0xee, 0xe9,
+		0xf4, 0x9d, 0x34, 0x76, 0x39, 0xe7, 0xc3, 0x4a, 0x84, 0x38, 0x92, 0x61, 0xf1, 0x12, 0x9f, 0x05,
+		0xda, 0xdb, 0xc1, 0xd4, 0xb0, 0xa0, 0x27, 0x19, 0xa0, 0x56, 0x5d, 0x9b, 0xcc, 0x47, 0x7c, 0x15,
+		0x1d, 0x52, 0x66, 0xd5, 0xff, 0xef, 0x12, 0x23, 0x86, 0xe2, 0xee, 0x81, 0x2c, 0x3d, 0x7d, 0x28,
+		0xd5, 0x42, 0xdf, 0xdb, 0x75, 0x1c, 0xeb, 0xdf, 0x13, 0x23, 0xd5, 0x17, 0x89, 0xea, 0xd7, 0x01,
+		0xff, 0x57, 0x6a, 0x44, 0x61, 0xf4, 0xea, 0xbe, 0x97, 0x9b, 0xc2, 0xb1, 0x9c, 0x5d, 0xff, 0x4f,
+		0x73, 0x2d, 0x3f, 0x57, 0x28, 0x38, 0xbf, 0x3d, 0x9f, 0xda, 0x49, 0x55, 0x8f, 0xb2, 0x77, 0xec,
+		0x0f, 0xbc, 0xce, 0xb8, 0xc6, 0xe1, 0x03, 0xed, 0x35, 0x9c, 0xf2, 0x4d, 0xa4, 0x29, 0x6c, 0xd6,
+		0x6e, 0x05, 0x53, 0x46, 0xc1, 0x41, 0x09, 0x36, 0x0b, 0x7d, 0xf4, 0x9e, 0x0f, 0xba, 0x86, 0x33,
+		0xdd, 0xf1, 0xa7, 0xf7, 0xd5, 0x29, 0xa8, 0xa7, 0x4d, 0xce, 0x0c, 0xf5, 0xb4, 0x6c, 0xd8, 0x27,
+		0xb0, 0x87, 0x2a, 0x6f, 0x7f, 0x3f, 0x8f, 0xc3, 0xe2, 0x3e, 0x94, 0xcf, 0x61, 0x4a, 0x09, 0x3d,
+		0xf9, 0x55, 0x19, 0x31, 0xf2, 0xd2, 0x4a, 0x3e, 0xc1, 0xf5, 0xed, 0x7c, 0x45, 0xb0, 0x0c, 0x7b,
+		0xdd, 0xa6, 0x0a, 0x26, 0x66, 0xec, 0x85, 0x49, 0x00, 0x38, 0x05, 0x7c, 0x9c, 0x1c, 0x92, 0xf5,
+		0xf7, 0xdb, 0x5d, 0xbd, 0x61, 0x0c, 0xc9, 0xaf, 0xfd, 0x57, 0x3f, 0xee, 0x2b, 0xad, 0x73, 0xef,
+		0xa3, 0xc1, 0x66, 0x26, 0x44, 0x5e, 0xf9, 0x12, 0x86, 0x66, 0xa9, 0x61, 0x75, 0xa1, 0xbc, 0x40,
+		0x7f, 0xa8, 0x08, 0x02, 0xc0, 0x76, 0x0e, 0x76, 0xb3, 0x26, 0x3d, 0x1c, 0x40, 0x65, 0xe4, 0x18,
+		0x0f, 0x62, 0x17, 0x8f, 0x1e, 0x61, 0xb8, 0x08, 0x83, 0x54, 0x42, 0x11, 0x03, 0x30, 0x8e, 0xb7,
+		0xc1, 0x9c, 0xec, 0x69, 0x52, 0x95, 0xfb, 0x7b, 0x1a, 0x0c, 0x20, 0x24, 0xf7, 0xb8, 0x38, 0x0c,
+		0xb8, 0x7b, 0xb6, 0x69, 0x70, 0xd0, 0x61, 0xb9, 0x70, 0x06, 0xc2, 0x5b, 0x20, 0x47, 0xf7, 0xd9,
+		0x32, 0xc2, 0xf2, 0x90, 0xb6, 0x4d, 0xcd, 0x3c, 0x6d, 0x74, 0xea, 0x82, 0x35, 0x1b, 0x08, 0x44,
+		0xba, 0xb7, 0x33, 0x82, 0x33, 0x27, 0x54, 0x77, 0x6e, 0x58, 0xfe, 0x46, 0x5a, 0xb4, 0x88, 0x53,
+		0x8d, 0x9b, 0xb1, 0xab, 0xdf, 0x04, 0xe1, 0xfb, 0xd7, 0x1e, 0xd7, 0x38, 0x64, 0x54, 0xba, 0xb0,
+		0x6c, 0x84, 0x7a, 0x0f, 0xa7, 0x80, 0x6b, 0x86, 0xd9, 0xc9, 0xc6, 0x31, 0x95, 0xfa, 0x8a, 0x2c,
+		0x14, 0xe1, 0x85, 0x66, 0x27, 0xfd, 0x63, 0x3e, 0xf0, 0xfa, 0x81, 0xc9, 0x89, 0x4f, 0xe2, 0x6a,
+		0x8c, 0x17, 0xb5, 0xc7, 0x9f, 0x5d, 0x3f, 0x6b, 0x3f, 0xcd, 0x13, 0x7a, 0x3c, 0xe6, 0x4e, 0xfa,
+		0x7a, 0x10, 0xb8, 0x7c, 0x40, 0xec, 0x93, 0x11, 0x1f, 0xd0, 0x9e, 0xc3, 0x56, 0xb9, 0xf5, 0x21,
+		0x18, 0x41, 0x31, 0xea, 0x01, 0x8d, 0xea, 0x1c, 0x95, 0x5e, 0x56, 0x33, 0xbc, 0x7a, 0x3f, 0x6f
+};
+
+static const uint8_t AES_CBC_ciphertext_768B[] = {
+		0x3e, 0x7f, 0x9e, 0x4c, 0x88, 0x15, 0x68, 0x69, 0x10, 0x09, 0xe1, 0xa7, 0x0f, 0x27, 0x88, 0x2d,
+		0x90, 0x73, 0x4f, 0x67, 0xd3, 0x8b, 0xaf, 0xa1, 0x2c, 0x37, 0xa5, 0x6c, 0x7c, 0xbd, 0x95, 0x4c,
+		0x82, 0xcf, 0x05, 0x49, 0x16, 0x5c, 0xe7, 0x06, 0xd4, 0xcb, 0x55, 0x65, 0x9a, 0xd0, 0xe1, 0x46,
+		0x3a, 0x37, 0x71, 0xad, 0xb0, 0xb4, 0x99, 0x1e, 0x23, 0x57, 0x48, 0x96, 0x9c, 0xc5, 0xc4, 0xdb,
+		0x64, 0x3e, 0xc9, 0x7f, 0x90, 0x5a, 0xa0, 0x08, 0x75, 0x4c, 0x09, 0x06, 0x31, 0x6e, 0x59, 0x29,
+		0xfc, 0x2f, 0x72, 0xde, 0xf2, 0x40, 0x5a, 0xfe, 0xd3, 0x66, 0x64, 0xb8, 0x9c, 0xc9, 0xa6, 0x1f,
+		0xc3, 0x52, 0xcd, 0xb5, 0xd1, 0x4f, 0x43, 0x3f, 0xf4, 0x59, 0x25, 0xc4, 0xdd, 0x3e, 0x58, 0x7c,
+		0x21, 0xd6, 0x21, 0xce, 0xa4, 0xbe, 0x08, 0x23, 0x46, 0x68, 0xc0, 0x00, 0x91, 0x47, 0xca, 0x9b,
+		0xe0, 0xb4, 0xe3, 0xab, 0xbf, 0xcf, 0x68, 0x26, 0x97, 0x23, 0x09, 0x93, 0x64, 0x8f, 0x57, 0x59,
+		0xe2, 0x41, 0x7c, 0xa2, 0x48, 0x7e, 0xd5, 0x2c, 0x54, 0x09, 0x1b, 0x07, 0x94, 0xca, 0x39, 0x83,
+		0xdd, 0xf4, 0x7a, 0x1d, 0x2d, 0xdd, 0x67, 0xf7, 0x3c, 0x30, 0x89, 0x3e, 0xc1, 0xdc, 0x1d, 0x8f,
+		0xfc, 0xb1, 0xe9, 0x13, 0x31, 0xb0, 0x16, 0xdb, 0x88, 0xf2, 0x32, 0x7e, 0x73, 0xa3, 0xdf, 0x08,
+		0x6b, 0x53, 0x92, 0x08, 0xc9, 0x9d, 0x98, 0xb2, 0xf4, 0x8c, 0xb1, 0x95, 0xdc, 0xb6, 0xfc, 0xec,
+		0xf1, 0xc9, 0x0d, 0x6d, 0x42, 0x2c, 0xf5, 0x38, 0x29, 0xf4, 0xd8, 0x98, 0x0f, 0xb0, 0x81, 0xa5,
+		0xaa, 0xe6, 0x1f, 0x6e, 0x87, 0x32, 0x1b, 0x02, 0x07, 0x57, 0x38, 0x83, 0xf3, 0xe4, 0x54, 0x7c,
+		0xa8, 0x43, 0xdf, 0x3f, 0x42, 0xfd, 0x67, 0x28, 0x06, 0x4d, 0xea, 0xce, 0x1f, 0x84, 0x4a, 0xcd,
+		0x8c, 0x61, 0x5e, 0x8f, 0x61, 0xed, 0x84, 0x03, 0x53, 0x6a, 0x9e, 0xbf, 0x68, 0x83, 0xa7, 0x42,
+		0x56, 0x57, 0xcd, 0x45, 0x29, 0xfc, 0x7b, 0x07, 0xfc, 0xe9, 0xb9, 0x42, 0xfd, 0x29, 0xd5, 0xfd,
+		0x98, 0x11, 0xd1, 0x8d, 0x67, 0x29, 0x47, 0x61, 0xd8, 0x27, 0x37, 0x79, 0x29, 0xd1, 0x94, 0x6f,
+		0x8d, 0xf3, 0x1b, 0x3d, 0x6a, 0xb1, 0x59, 0xef, 0x1b, 0xd4, 0x70, 0x0e, 0xac, 0xab, 0xa0, 0x2b,
+		0x1f, 0x5e, 0x04, 0xf0, 0x0e, 0x35, 0x72, 0x90, 0xfc, 0xcf, 0x86, 0x43, 0xea, 0x45, 0x6d, 0x22,
+		0x63, 0x06, 0x1a, 0x58, 0xd7, 0x2d, 0xc5, 0xb0, 0x60, 0x69, 0xe8, 0x53, 0xc2, 0xa2, 0x57, 0x83,
+		0xc4, 0x31, 0xb4, 0xc6, 0xb3, 0xa1, 0x77, 0xb3, 0x1c, 0xca, 0x89, 0x3f, 0xf5, 0x10, 0x3b, 0x36,
+		0x31, 0x7d, 0x00, 0x46, 0x00, 0x92, 0xa0, 0xa0, 0x34, 0xd8, 0x5e, 0x62, 0xa9, 0xe0, 0x23, 0x37,
+		0x50, 0x85, 0xc7, 0x3a, 0x20, 0xa3, 0x98, 0xc0, 0xac, 0x20, 0x06, 0x0f, 0x17, 0x3c, 0xfc, 0x43,
+		0x8c, 0x9d, 0xec, 0xf5, 0x9a, 0x35, 0x96, 0xf7, 0xb7, 0x4c, 0xf9, 0x69, 0xf8, 0xd4, 0x1e, 0x9e,
+		0xf9, 0x7c, 0xc4, 0xd2, 0x11, 0x14, 0x41, 0xb9, 0x89, 0xd6, 0x07, 0xd2, 0x37, 0x07, 0x5e, 0x5e,
+		0xae, 0x60, 0xdc, 0xe4, 0xeb, 0x38, 0x48, 0x6d, 0x95, 0x8d, 0x71, 0xf2, 0xba, 0xda, 0x5f, 0x08,
+		0x9d, 0x4a, 0x0f, 0x56, 0x90, 0x64, 0xab, 0xb6, 0x88, 0x22, 0xa8, 0x90, 0x1f, 0x76, 0x2c, 0x83,
+		0x43, 0xce, 0x32, 0x55, 0x45, 0x84, 0x57, 0x43, 0xf9, 0xa8, 0xd1, 0x4f, 0xe3, 0xc1, 0x72, 0x9c,
+		0xeb, 0x64, 0xf7, 0xe4, 0x61, 0x2b, 0x93, 0xd1, 0x1f, 0xbb, 0x5c, 0xff, 0xa1, 0x59, 0x69, 0xcf,
+		0xf7, 0xaf, 0x58, 0x45, 0xd5, 0x3e, 0x98, 0x7d, 0x26, 0x39, 0x5c, 0x75, 0x3c, 0x4a, 0xbf, 0x5e,
+		0x12, 0x10, 0xb0, 0x93, 0x0f, 0x86, 0x82, 0xcf, 0xb2, 0xec, 0x70, 0x5c, 0x0b, 0xad, 0x5d, 0x63,
+		0x65, 0x32, 0xa6, 0x04, 0x58, 0x03, 0x91, 0x2b, 0xdb, 0x8f, 0xd3, 0xa3, 0x2b, 0x3a, 0xf5, 0xa1,
+		0x62, 0x6c, 0xb6, 0xf0, 0x13, 0x3b, 0x8c, 0x07, 0x10, 0x82, 0xc9, 0x56, 0x24, 0x87, 0xfc, 0x56,
+		0xe8, 0xef, 0x90, 0x8b, 0xd6, 0x48, 0xda, 0x53, 0x04, 0x49, 0x41, 0xa4, 0x67, 0xe0, 0x33, 0x24,
+		0x6b, 0x9c, 0x07, 0x55, 0x4c, 0x5d, 0xe9, 0x35, 0xfa, 0xbd, 0xea, 0xa8, 0x3f, 0xe9, 0xf5, 0x20,
+		0x5c, 0x60, 0x0f, 0x0d, 0x24, 0xcb, 0x1a, 0xd6, 0xe8, 0x5c, 0xa8, 0x42, 0xae, 0xd0, 0xd2, 0xf2,
+		0xa8, 0xbe, 0xea, 0x0f, 0x8d, 0xfb, 0x81, 0xa3, 0xa4, 0xef, 0xb7, 0x3e, 0x91, 0xbd, 0x26, 0x0f,
+		0x8e, 0xf1, 0xb2, 0xa5, 0x47, 0x06, 0xfa, 0x40, 0x8b, 0x31, 0x7a, 0x5a, 0x74, 0x2a, 0x0a, 0x7c,
+		0x62, 0x5d, 0x39, 0xa4, 0xae, 0x14, 0x85, 0x08, 0x5b, 0x20, 0x85, 0xf1, 0x57, 0x6e, 0x71, 0x13,
+		0x4e, 0x2b, 0x49, 0x87, 0x01, 0xdf, 0x37, 0xed, 0x28, 0xee, 0x4d, 0xa1, 0xf4, 0xb3, 0x3b, 0xba,
+		0x2d, 0xb3, 0x46, 0x17, 0x84, 0x80, 0x9d, 0xd7, 0x93, 0x1f, 0x28, 0x7c, 0xf5, 0xf9, 0xd6, 0x85,
+		0x8c, 0xa5, 0x44, 0xe9, 0x2c, 0x65, 0x51, 0x5f, 0x53, 0x7a, 0x09, 0xd9, 0x30, 0x16, 0x95, 0x89,
+		0x9c, 0x0b, 0xef, 0x90, 0x6d, 0x23, 0xd3, 0x48, 0x57, 0x3b, 0x55, 0x69, 0x96, 0xfc, 0xf7, 0x52,
+		0x92, 0x38, 0x36, 0xbf, 0xa9, 0x0a, 0xbb, 0x68, 0x45, 0x08, 0x25, 0xee, 0x59, 0xfe, 0xee, 0xf2,
+		0x2c, 0xd4, 0x5f, 0x78, 0x59, 0x0d, 0x90, 0xf1, 0xd7, 0xe4, 0x39, 0x0e, 0x46, 0x36, 0xf5, 0x75,
+		0x03, 0x3c, 0x28, 0xfb, 0xfa, 0x8f, 0xef, 0xc9, 0x61, 0x00, 0x94, 0xc3, 0xd2, 0x0f, 0xd9, 0xda
+};
+
+static const uint8_t AES_CBC_ciphertext_1024B[] = {
+		0x7d, 0x01, 0x7e, 0x2f, 0x92, 0xb3, 0xea, 0x72, 0x4a, 0x3f, 0x10, 0xf9, 0x2b, 0xb0, 0xd5, 0xb9,
+		0x19, 0x68, 0x94, 0xe9, 0x93, 0xe9, 0xd5, 0x26, 0x20, 0x44, 0xe2, 0x47, 0x15, 0x8d, 0x75, 0x48,
+		0x8e, 0xe4, 0x40, 0x81, 0xb5, 0x06, 0xa8, 0xb8, 0x0e, 0x0f, 0x3b, 0xbc, 0x5b, 0xbe, 0x3b, 0xa2,
+		0x2a, 0x0c, 0x48, 0x98, 0x19, 0xdf, 0xe9, 0x25, 0x75, 0xab, 0x93, 0x44, 0xb1, 0x72, 0x70, 0xbb,
+		0x20, 0xcf, 0x78, 0xe9, 0x4d, 0xc6, 0xa9, 0xa9, 0x84, 0x78, 0xc5, 0xc0, 0xc4, 0xc9, 0x79, 0x1a,
+		0xbc, 0x61, 0x25, 0x5f, 0xac, 0x01, 0x03, 0xb7, 0xef, 0x07, 0xf2, 0x62, 0x98, 0xee, 0xe3, 0xad,
+		0x94, 0x75, 0x30, 0x67, 0xb9, 0x15, 0x00, 0xe7, 0x11, 0x32, 0x2e, 0x6b, 0x55, 0x9f, 0xac, 0x68,
+		0xde, 0x61, 0x05, 0x80, 0x01, 0xf3, 0xad, 0xab, 0xaf, 0x45, 0xe0, 0xf4, 0x68, 0x5c, 0xc0, 0x52,
+		0x92, 0xc8, 0x21, 0xb6, 0xf5, 0x8a, 0x1d, 0xbb, 0xfc, 0x4a, 0x11, 0x62, 0xa2, 0xc4, 0xf1, 0x2d,
+		0x0e, 0xb2, 0xc7, 0x17, 0x34, 0xb4, 0x2a, 0x54, 0x81, 0xc2, 0x1e, 0xcf, 0x51, 0x0a, 0x76, 0x54,
+		0xf1, 0x48, 0x0d, 0x5c, 0xcd, 0x38, 0x3e, 0x38, 0x3e, 0xf8, 0x46, 0x1d, 0x00, 0xf5, 0x62, 0xe1,
+		0x5c, 0xb7, 0x8d, 0xce, 0xd0, 0x3f, 0xbb, 0x22, 0xf1, 0xe5, 0xb1, 0xa0, 0x58, 0x5e, 0x3c, 0x0f,
+		0x15, 0xd1, 0xac, 0x3e, 0xc7, 0x72, 0xc4, 0xde, 0x8b, 0x95, 0x3e, 0x91, 0xf7, 0x1d, 0x04, 0x9a,
+		0xc8, 0xe4, 0xbf, 0xd3, 0x22, 0xca, 0x4a, 0xdc, 0xb6, 0x16, 0x79, 0x81, 0x75, 0x2f, 0x6b, 0xa7,
+		0x04, 0x98, 0xa7, 0x4e, 0xc1, 0x19, 0x90, 0x33, 0x33, 0x3c, 0x7f, 0xdd, 0xac, 0x09, 0x0c, 0xc3,
+		0x91, 0x34, 0x74, 0xab, 0xa5, 0x35, 0x0a, 0x13, 0xc3, 0x56, 0x67, 0x6d, 0x1a, 0x3e, 0xbf, 0x56,
+		0x06, 0x67, 0x15, 0x5f, 0xfc, 0x8b, 0xa2, 0x3c, 0x5e, 0xaf, 0x56, 0x1f, 0xe3, 0x2e, 0x9d, 0x0a,
+		0xf9, 0x9b, 0xc7, 0xb5, 0x03, 0x1c, 0x68, 0x99, 0xfa, 0x3c, 0x37, 0x59, 0xc1, 0xf7, 0x6a, 0x83,
+		0x22, 0xee, 0xca, 0x7f, 0x7d, 0x49, 0xe6, 0x48, 0x84, 0x54, 0x7a, 0xff, 0xb3, 0x72, 0x21, 0xd8,
+		0x7a, 0x5d, 0xb1, 0x4b, 0xcc, 0x01, 0x6f, 0x90, 0xc6, 0x68, 0x1c, 0x2c, 0xa1, 0xe2, 0x74, 0x40,
+		0x26, 0x9b, 0x57, 0x53, 0xa3, 0x7c, 0x0b, 0x0d, 0xcf, 0x05, 0x5d, 0x62, 0x4f, 0x75, 0x06, 0x62,
+		0x1f, 0x26, 0x32, 0xaa, 0x25, 0xcc, 0x26, 0x8d, 0xae, 0x01, 0x47, 0xa3, 0x00, 0x42, 0xe2, 0x4c,
+		0xee, 0x29, 0xa2, 0x81, 0xa0, 0xfd, 0xeb, 0xff, 0x9a, 0x66, 0x6e, 0x47, 0x5b, 0xab, 0x93, 0x5a,
+		0x02, 0x6d, 0x6f, 0xf2, 0x6e, 0x02, 0x9d, 0xb1, 0xab, 0x56, 0xdc, 0x8b, 0x9b, 0x17, 0xa8, 0xfb,
+		0x87, 0x42, 0x7c, 0x91, 0x1e, 0x14, 0xc6, 0x6f, 0xdc, 0xf0, 0x27, 0x30, 0xfa, 0x3f, 0xc4, 0xad,
+		0x57, 0x85, 0xd2, 0xc9, 0x32, 0x2c, 0x13, 0xa6, 0x04, 0x04, 0x50, 0x05, 0x2f, 0x72, 0xd9, 0x44,
+		0x55, 0x6e, 0x93, 0x40, 0xed, 0x7e, 0xd4, 0x40, 0x3e, 0x88, 0x3b, 0x8b, 0xb6, 0xeb, 0xc6, 0x5d,
+		0x9c, 0x99, 0xa1, 0xcf, 0x30, 0xb2, 0xdc, 0x48, 0x8a, 0x01, 0xa7, 0x61, 0x77, 0x50, 0x14, 0xf3,
+		0x0c, 0x49, 0x53, 0xb3, 0xb4, 0xb4, 0x28, 0x41, 0x4a, 0x2d, 0xd2, 0x4d, 0x2a, 0x30, 0x31, 0x83,
+		0x03, 0x5e, 0xaa, 0xd3, 0xa3, 0xd1, 0xa1, 0xca, 0x62, 0xf0, 0xe1, 0xf2, 0xff, 0xf0, 0x19, 0xa6,
+		0xde, 0x22, 0x47, 0xb5, 0x28, 0x7d, 0xf7, 0x07, 0x16, 0x0d, 0xb1, 0x55, 0x81, 0x95, 0xe5, 0x1d,
+		0x4d, 0x78, 0xa9, 0x3e, 0xce, 0xe3, 0x1c, 0xf9, 0x47, 0xc8, 0xec, 0xc5, 0xc5, 0x93, 0x4c, 0x34,
+		0x20, 0x6b, 0xee, 0x9a, 0xe6, 0x86, 0x57, 0x58, 0xd5, 0x58, 0xf1, 0x33, 0x10, 0x29, 0x9e, 0x93,
+		0x2f, 0xf5, 0x90, 0x00, 0x17, 0x67, 0x4f, 0x39, 0x18, 0xe1, 0xcf, 0x55, 0x78, 0xbb, 0xe6, 0x29,
+		0x3e, 0x77, 0xd5, 0x48, 0xb7, 0x42, 0x72, 0x53, 0x27, 0xfa, 0x5b, 0xe0, 0x36, 0x14, 0x97, 0xb8,
+		0x9b, 0x3c, 0x09, 0x77, 0xc1, 0x0a, 0xe4, 0xa2, 0x63, 0xfc, 0xbe, 0x5c, 0x17, 0xcf, 0x01, 0xf5,
+		0x03, 0x0f, 0x17, 0xbc, 0x93, 0xdd, 0x5f, 0xe2, 0xf3, 0x08, 0xa8, 0xb1, 0x85, 0xb6, 0x34, 0x3f,
+		0x87, 0x42, 0xa5, 0x42, 0x3b, 0x0e, 0xd6, 0x83, 0x6a, 0xfd, 0x5d, 0xc9, 0x67, 0xd5, 0x51, 0xc9,
+		0x2a, 0x4e, 0x91, 0xb0, 0x59, 0xb2, 0x0f, 0xa2, 0xe6, 0x47, 0x73, 0xc2, 0xa2, 0xae, 0xbb, 0xc8,
+		0x42, 0xa3, 0x2a, 0x27, 0x29, 0x48, 0x8c, 0x54, 0x6c, 0xec, 0x00, 0x2a, 0x42, 0xa3, 0x7a, 0x0f,
+		0x12, 0x66, 0x6b, 0x96, 0xf6, 0xd0, 0x56, 0x4f, 0x49, 0x5c, 0x47, 0xec, 0x05, 0x62, 0x54, 0xb2,
+		0x64, 0x5a, 0x69, 0x1f, 0x19, 0xb4, 0x84, 0x5c, 0xbe, 0x48, 0x8e, 0xfc, 0x58, 0x21, 0xce, 0xfa,
+		0xaa, 0x84, 0xd2, 0xc1, 0x08, 0xb3, 0x87, 0x0f, 0x4f, 0xa3, 0x3a, 0xb6, 0x44, 0xbe, 0x2e, 0x9a,
+		0xdd, 0xb5, 0x44, 0x80, 0xca, 0xf4, 0xc3, 0x6e, 0xba, 0x93, 0x77, 0xe0, 0x53, 0xfb, 0x37, 0xfb,
+		0x88, 0xc3, 0x1f, 0x25, 0xde, 0x3e, 0x11, 0xf4, 0x89, 0xe7, 0xd1, 0x3b, 0xb4, 0x23, 0xcb, 0x70,
+		0xba, 0x35, 0x97, 0x7c, 0xbe, 0x84, 0x13, 0xcf, 0xe0, 0x4d, 0x33, 0x91, 0x71, 0x85, 0xbb, 0x4b,
+		0x97, 0x32, 0x5d, 0xa0, 0xb9, 0x8f, 0xdc, 0x27, 0x5a, 0xeb, 0x71, 0xf1, 0xd5, 0x0d, 0x65, 0xb4,
+		0x22, 0x81, 0xde, 0xa7, 0x58, 0x20, 0x0b, 0x18, 0x11, 0x76, 0x5c, 0xe6, 0x6a, 0x2c, 0x99, 0x69,
+		0xdc, 0xed, 0x67, 0x08, 0x5d, 0x5e, 0xe9, 0x1e, 0x55, 0x70, 0xc1, 0x5a, 0x76, 0x1b, 0x8d, 0x2e,
+		0x0d, 0xf9, 0xcc, 0x30, 0x8c, 0x44, 0x0f, 0x63, 0x8c, 0x42, 0x8a, 0x9f, 0x4c, 0xd1, 0x48, 0x28,
+		0x8a, 0xf5, 0x56, 0x2e, 0x23, 0x12, 0xfe, 0x67, 0x9a, 0x13, 0x65, 0x75, 0x83, 0xf1, 0x3c, 0x98,
+		0x07, 0x6b, 0xb7, 0x27, 0x5b, 0xf0, 0x70, 0xda, 0x30, 0xf8, 0x74, 0x4e, 0x7a, 0x32, 0x84, 0xcc,
+		0x0e, 0xcd, 0x80, 0x8b, 0x82, 0x31, 0x9a, 0x48, 0xcf, 0x75, 0x00, 0x1f, 0x4f, 0xe0, 0x8e, 0xa3,
+		0x6a, 0x2c, 0xd4, 0x73, 0x4c, 0x63, 0x7c, 0xa6, 0x4d, 0x5e, 0xfd, 0x43, 0x3b, 0x27, 0xe1, 0x5e,
+		0xa3, 0xa9, 0x5c, 0x3b, 0x60, 0xdd, 0xc6, 0x8d, 0x5a, 0xf1, 0x3e, 0x89, 0x4b, 0x24, 0xcf, 0x01,
+		0x3a, 0x2d, 0x44, 0xe7, 0xda, 0xe7, 0xa1, 0xac, 0x11, 0x05, 0x0c, 0xa9, 0x7a, 0x82, 0x8c, 0x5c,
+		0x29, 0x68, 0x9c, 0x73, 0x13, 0xcc, 0x67, 0x32, 0x11, 0x5e, 0xe5, 0xcc, 0x8c, 0xf5, 0xa7, 0x52,
+		0x83, 0x9a, 0x70, 0xef, 0xde, 0x55, 0x9c, 0xc7, 0x8a, 0xed, 0xad, 0x28, 0x4a, 0xc5, 0x92, 0x6d,
+		0x8e, 0x47, 0xca, 0xe3, 0xf8, 0x77, 0xb5, 0x26, 0x64, 0x84, 0xc2, 0xf1, 0xd7, 0xae, 0x0c, 0xb9,
+		0x39, 0x0f, 0x43, 0x6b, 0xe9, 0xe0, 0x09, 0x4b, 0xe5, 0xe3, 0x17, 0xa6, 0x68, 0x69, 0x46, 0xf4,
+		0xf0, 0x68, 0x7f, 0x2f, 0x1c, 0x7e, 0x4c, 0xd2, 0xb5, 0xc6, 0x16, 0x85, 0xcf, 0x02, 0x4c, 0x89,
+		0x0b, 0x25, 0xb0, 0xeb, 0xf3, 0x77, 0x08, 0x6a, 0x46, 0x5c, 0xf6, 0x2f, 0xf1, 0x24, 0xc3, 0x4d,
+		0x80, 0x60, 0x4d, 0x69, 0x98, 0xde, 0xc7, 0xa1, 0xf6, 0x4e, 0x18, 0x0c, 0x2a, 0xb0, 0xb2, 0xe0,
+		0x46, 0xe7, 0x49, 0x37, 0xc8, 0x5a, 0x23, 0x24, 0xe3, 0x0f, 0xcc, 0x92, 0xb4, 0x8d, 0xdc, 0x9e
+};
+
+static const uint8_t AES_CBC_ciphertext_1280B[] = {
+		0x91, 0x99, 0x5e, 0x9e, 0x84, 0xff, 0x59, 0x45, 0xc1, 0xf4, 0xbc, 0x9c, 0xb9, 0x30, 0x6c, 0x51,
+		0x73, 0x52, 0xb4, 0x44, 0x09, 0x79, 0xe2, 0x89, 0x75, 0xeb, 0x54, 0x26, 0xce, 0xd8, 0x24, 0x98,
+		0xaa, 0xf8, 0x13, 0x16, 0x68, 0x58, 0xc4, 0x82, 0x0e, 0x31, 0xd3, 0x6a, 0x13, 0x58, 0x31, 0xe9,
+		0x3a, 0xc1, 0x8b, 0xc5, 0x3f, 0x50, 0x42, 0xd1, 0x93, 0xe4, 0x9b, 0x65, 0x2b, 0xf4, 0x1d, 0x9e,
+		0x2d, 0xdb, 0x48, 0xef, 0x9a, 0x01, 0x68, 0xb6, 0xea, 0x7a, 0x2b, 0xad, 0xfe, 0x77, 0x44, 0x7e,
+		0x5a, 0xc5, 0x64, 0xb4, 0xfe, 0x5c, 0x80, 0xf3, 0x20, 0x7e, 0xaf, 0x5b, 0xf8, 0xd1, 0x38, 0xa0,
+		0x8d, 0x09, 0x77, 0x06, 0xfe, 0xf5, 0xf4, 0xe4, 0xee, 0xb8, 0x95, 0x27, 0xed, 0x07, 0xb8, 0xaa,
+		0x25, 0xb4, 0xe1, 0x4c, 0xeb, 0x3f, 0xdb, 0x39, 0x66, 0x28, 0x1b, 0x60, 0x42, 0x8b, 0x99, 0xd9,
+		0x49, 0xd6, 0x8c, 0xa4, 0x9d, 0xd8, 0x93, 0x58, 0x8f, 0xfa, 0xd3, 0xf7, 0x37, 0x9c, 0x88, 0xab,
+		0x16, 0x50, 0xfe, 0x01, 0x1f, 0x88, 0x48, 0xbe, 0x21, 0xa9, 0x90, 0x9e, 0x73, 0xe9, 0x82, 0xf7,
+		0xbf, 0x4b, 0x43, 0xf4, 0xbf, 0x22, 0x3c, 0x45, 0x47, 0x95, 0x5b, 0x49, 0x71, 0x07, 0x1c, 0x8b,
+		0x49, 0xa4, 0xa3, 0x49, 0xc4, 0x5f, 0xb1, 0xf5, 0xe3, 0x6b, 0xf1, 0xdc, 0xea, 0x92, 0x7b, 0x29,
+		0x40, 0xc9, 0x39, 0x5f, 0xdb, 0xbd, 0xf3, 0x6a, 0x09, 0x9b, 0x2a, 0x5e, 0xc7, 0x0b, 0x25, 0x94,
+		0x55, 0x71, 0x9c, 0x7e, 0x0e, 0xb4, 0x08, 0x12, 0x8c, 0x6e, 0x77, 0xb8, 0x29, 0xf1, 0xc6, 0x71,
+		0x04, 0x40, 0x77, 0x18, 0x3f, 0x01, 0x09, 0x9c, 0x23, 0x2b, 0x5d, 0x2a, 0x88, 0x20, 0x23, 0x59,
+		0x74, 0x2a, 0x67, 0x8f, 0xb7, 0xba, 0x38, 0x9f, 0x0f, 0xcf, 0x94, 0xdf, 0xe1, 0x8f, 0x35, 0x5e,
+		0x34, 0x0c, 0x32, 0x92, 0x2b, 0x23, 0x81, 0xf4, 0x73, 0xa0, 0x5a, 0x2a, 0xbd, 0xa6, 0x6b, 0xae,
+		0x43, 0xe2, 0xdc, 0x01, 0xc1, 0xc6, 0xc3, 0x04, 0x06, 0xbb, 0xb0, 0x89, 0xb3, 0x4e, 0xbd, 0x81,
+		0x1b, 0x03, 0x63, 0x93, 0xed, 0x4e, 0xf6, 0xe5, 0x94, 0x6f, 0xd6, 0xf3, 0x20, 0xf3, 0xbc, 0x30,
+		0xc5, 0xd6, 0xbe, 0x1c, 0x05, 0x34, 0x26, 0x4d, 0x46, 0x5e, 0x56, 0x63, 0xfb, 0xdb, 0xcd, 0xed,
+		0xb0, 0x7f, 0x83, 0x94, 0x55, 0x54, 0x2f, 0xab, 0xc9, 0xb7, 0x16, 0x4f, 0x9e, 0x93, 0x25, 0xd7,
+		0x9f, 0x39, 0x2b, 0x63, 0xcf, 0x1e, 0xa3, 0x0e, 0x28, 0x47, 0x8a, 0x5f, 0x40, 0x02, 0x89, 0x1f,
+		0x83, 0xe7, 0x87, 0xd1, 0x90, 0x17, 0xb8, 0x27, 0x64, 0xe1, 0xe1, 0x48, 0x5a, 0x55, 0x74, 0x99,
+		0x27, 0x9d, 0x05, 0x67, 0xda, 0x70, 0x12, 0x8f, 0x94, 0x96, 0xfd, 0x36, 0xa4, 0x1d, 0x22, 0xe5,
+		0x0b, 0xe5, 0x2f, 0x38, 0x55, 0xa3, 0x5d, 0x0b, 0xcf, 0xd4, 0xa9, 0xb8, 0xd6, 0x9a, 0x16, 0x2e,
+		0x6c, 0x4a, 0x25, 0x51, 0x7a, 0x09, 0x48, 0xdd, 0xf0, 0xa3, 0x5b, 0x08, 0x1e, 0x2f, 0x03, 0x91,
+		0x80, 0xe8, 0x0f, 0xe9, 0x5a, 0x2f, 0x90, 0xd3, 0x64, 0xed, 0xd7, 0x51, 0x17, 0x66, 0x53, 0x40,
+		0x43, 0x74, 0xef, 0x0a, 0x0d, 0x49, 0x41, 0xf2, 0x67, 0x6e, 0xea, 0x14, 0xc8, 0x74, 0xd6, 0xa9,
+		0xb9, 0x6a, 0xe3, 0xec, 0x7d, 0xe8, 0x6a, 0x21, 0x3a, 0x52, 0x42, 0xfe, 0x9a, 0x15, 0x6d, 0x60,
+		0x64, 0x88, 0xc5, 0xb2, 0x8b, 0x15, 0x2c, 0xff, 0xe2, 0x35, 0xc3, 0xee, 0x9f, 0xcd, 0x82, 0xd9,
+		0x14, 0x35, 0x2a, 0xb7, 0xf5, 0x2f, 0x7b, 0xbc, 0x01, 0xfd, 0xa8, 0xe0, 0x21, 0x4e, 0x73, 0xf9,
+		0xf2, 0xb0, 0x79, 0xc9, 0x10, 0x52, 0x8f, 0xa8, 0x3e, 0x3b, 0xbe, 0xc5, 0xde, 0xf6, 0x53, 0xe3,
+		0x1c, 0x25, 0x3a, 0x1f, 0x13, 0xbf, 0x13, 0xbb, 0x94, 0xc2, 0x97, 0x43, 0x64, 0x47, 0x8f, 0x76,
+		0xd7, 0xaa, 0xeb, 0xa4, 0x03, 0x50, 0x0c, 0x10, 0x50, 0xd8, 0xf7, 0x75, 0x52, 0x42, 0xe2, 0x94,
+		0x67, 0xf4, 0x60, 0xfb, 0x21, 0x9b, 0x7a, 0x05, 0x50, 0x7c, 0x1b, 0x4a, 0x8b, 0x29, 0xe1, 0xac,
+		0xd7, 0x99, 0xfd, 0x0d, 0x65, 0x92, 0xcd, 0x23, 0xa7, 0x35, 0x8e, 0x13, 0xf2, 0xe4, 0x10, 0x74,
+		0xc6, 0x4f, 0x19, 0xf7, 0x01, 0x0b, 0x46, 0xab, 0xef, 0x8d, 0x4a, 0x4a, 0xfa, 0xda, 0xf3, 0xfb,
+		0x40, 0x28, 0x88, 0xa2, 0x65, 0x98, 0x4d, 0x88, 0xc7, 0xbf, 0x00, 0xc8, 0xd0, 0x91, 0xcb, 0x89,
+		0x2f, 0xb0, 0x85, 0xfc, 0xa1, 0xc1, 0x9e, 0x83, 0x88, 0xad, 0x95, 0xc0, 0x31, 0xa0, 0xad, 0xa2,
+		0x42, 0xb5, 0xe7, 0x55, 0xd4, 0x93, 0x5a, 0x74, 0x4e, 0x41, 0xc3, 0xcf, 0x96, 0x83, 0x46, 0xa1,
+		0xb7, 0x5b, 0xb1, 0x34, 0x67, 0x4e, 0xb1, 0xd7, 0x40, 0x20, 0x72, 0xe9, 0xc8, 0x74, 0xb7, 0xde,
+		0x72, 0x29, 0x77, 0x4c, 0x74, 0x7e, 0xcc, 0x18, 0xa5, 0x8d, 0x79, 0x8c, 0xd6, 0x6e, 0xcb, 0xd9,
+		0xe1, 0x61, 0xe7, 0x36, 0xbc, 0x37, 0xea, 0xee, 0xd8, 0x3c, 0x5e, 0x7c, 0x47, 0x50, 0xd5, 0xec,
+		0x37, 0xc5, 0x63, 0xc3, 0xc9, 0x99, 0x23, 0x9f, 0x64, 0x39, 0xdf, 0x13, 0x96, 0x6d, 0xea, 0x08,
+		0x0c, 0x27, 0x2d, 0xfe, 0x0f, 0xc2, 0xa3, 0x97, 0x04, 0x12, 0x66, 0x0d, 0x94, 0xbf, 0xbe, 0x3e,
+		0xb9, 0xcf, 0x8e, 0xc1, 0x9d, 0xb1, 0x64, 0x17, 0x54, 0x92, 0x3f, 0x0a, 0x51, 0xc8, 0xf5, 0x82,
+		0x98, 0x73, 0x03, 0xc0, 0x5a, 0x51, 0x01, 0x67, 0xb4, 0x01, 0x04, 0x06, 0xbc, 0x37, 0xde, 0x96,
+		0x23, 0x3c, 0xce, 0x98, 0x3f, 0xd6, 0x51, 0x1b, 0x01, 0x83, 0x0a, 0x1c, 0xf9, 0xeb, 0x7e, 0x72,
+		0xa9, 0x51, 0x23, 0xc8, 0xd7, 0x2f, 0x12, 0xbc, 0x08, 0xac, 0x07, 0xe7, 0xa7, 0xe6, 0x46, 0xae,
+		0x54, 0xa3, 0xc2, 0xf2, 0x05, 0x2d, 0x06, 0x5e, 0xfc, 0xe2, 0xa2, 0x23, 0xac, 0x86, 0xf2, 0x54,
+		0x83, 0x4a, 0xb6, 0x48, 0x93, 0xa1, 0x78, 0xc2, 0x07, 0xec, 0x82, 0xf0, 0x74, 0xa9, 0x18, 0xe9,
+		0x53, 0x44, 0x49, 0xc2, 0x94, 0xf8, 0x94, 0x92, 0x08, 0x3f, 0xbf, 0xa6, 0xe5, 0xc6, 0x03, 0x8a,
+		0xc6, 0x90, 0x48, 0x6c, 0xee, 0xbd, 0x44, 0x92, 0x1f, 0x2a, 0xce, 0x1d, 0xb8, 0x31, 0xa2, 0x9d,
+		0x24, 0x93, 0xa8, 0x9f, 0x36, 0x00, 0x04, 0x7b, 0xcb, 0x93, 0x59, 0xa1, 0x53, 0xdb, 0x13, 0x7a,
+		0x54, 0xb1, 0x04, 0xdb, 0xce, 0x48, 0x4f, 0xe5, 0x2f, 0xcb, 0xdf, 0x8f, 0x50, 0x7c, 0xfc, 0x76,
+		0x80, 0xb4, 0xdc, 0x3b, 0xc8, 0x98, 0x95, 0xf5, 0x50, 0xba, 0x70, 0x5a, 0x97, 0xd5, 0xfc, 0x98,
+		0x4d, 0xf3, 0x61, 0x0f, 0xcf, 0xac, 0x49, 0x0a, 0xdb, 0xc1, 0x42, 0x8f, 0xb6, 0x29, 0xd5, 0x65,
+		0xef, 0x83, 0xf1, 0x30, 0x4b, 0x84, 0xd0, 0x69, 0xde, 0xd2, 0x99, 0xe5, 0xec, 0xd3, 0x90, 0x86,
+		0x39, 0x2a, 0x6e, 0xd5, 0x32, 0xe3, 0x0d, 0x2d, 0x01, 0x8b, 0x17, 0x55, 0x1d, 0x65, 0x57, 0xbf,
+		0xd8, 0x75, 0xa4, 0x85, 0xb6, 0x4e, 0x35, 0x14, 0x58, 0xe4, 0x89, 0xb8, 0x7a, 0x58, 0x86, 0x0c,
+		0xbd, 0x8b, 0x05, 0x7b, 0x63, 0xc0, 0x86, 0x80, 0x33, 0x46, 0xd4, 0x9b, 0xb6, 0x0a, 0xeb, 0x6c,
+		0xae, 0xd6, 0x57, 0x7a, 0xc7, 0x59, 0x33, 0xa0, 0xda, 0xa4, 0x12, 0xbf, 0x52, 0x22, 0x05, 0x8d,
+		0xeb, 0xee, 0xd5, 0xec, 0xea, 0x29, 0x9b, 0x76, 0x95, 0x50, 0x6d, 0x99, 0xe1, 0x45, 0x63, 0x09,
+		0x16, 0x5f, 0xb0, 0xf2, 0x5b, 0x08, 0x33, 0xdd, 0x8f, 0xb7, 0x60, 0x7a, 0x8e, 0xc6, 0xfc, 0xac,
+		0xa9, 0x56, 0x2c, 0xa9, 0x8b, 0x74, 0x33, 0xad, 0x2a, 0x7e, 0x96, 0xb6, 0xba, 0x22, 0x28, 0xcf,
+		0x4d, 0x96, 0xb7, 0xd1, 0xfa, 0x99, 0x4a, 0x61, 0xe6, 0x84, 0xd1, 0x94, 0xca, 0xf5, 0x86, 0xb0,
+		0xba, 0x34, 0x7a, 0x04, 0xcc, 0xd4, 0x81, 0xcd, 0xd9, 0x86, 0xb6, 0xe0, 0x5a, 0x6f, 0x9b, 0x99,
+		0xf0, 0xdf, 0x49, 0xae, 0x6d, 0xc2, 0x54, 0x67, 0xe0, 0xb4, 0x34, 0x2d, 0x1c, 0x46, 0xdf, 0x73,
+		0x3b, 0x45, 0x43, 0xe7, 0x1f, 0xa3, 0x36, 0x35, 0x25, 0x33, 0xd9, 0xc0, 0x54, 0x38, 0x6e, 0x6b,
+		0x80, 0xcf, 0x50, 0xa4, 0xb6, 0x21, 0x17, 0xfd, 0x9b, 0x5c, 0x36, 0xca, 0xcc, 0x73, 0x73, 0xad,
+		0xe0, 0x57, 0x77, 0x90, 0x0e, 0x7f, 0x0f, 0x87, 0x7f, 0xdb, 0x73, 0xbf, 0xda, 0xc2, 0xb3, 0x05,
+		0x22, 0x06, 0xf5, 0xa3, 0xfc, 0x1e, 0x8f, 0xda, 0xcf, 0x49, 0xd6, 0xb3, 0x66, 0x2c, 0xb5, 0x00,
+		0xaf, 0x85, 0x6e, 0xb8, 0x5b, 0x8c, 0xa1, 0xa4, 0x21, 0xce, 0x40, 0xf3, 0x98, 0xac, 0xec, 0x88,
+		0x62, 0x43, 0x2a, 0xac, 0xca, 0xcf, 0xb9, 0x30, 0xeb, 0xfc, 0xef, 0xf0, 0x6e, 0x64, 0x6d, 0xe7,
+		0x54, 0x88, 0x6b, 0x22, 0x29, 0xbe, 0xa5, 0x8c, 0x31, 0x23, 0x3b, 0x4a, 0x80, 0x37, 0xe6, 0xd0,
+		0x05, 0xfc, 0x10, 0x0e, 0xdd, 0xbb, 0x00, 0xc5, 0x07, 0x20, 0x59, 0xd3, 0x41, 0x17, 0x86, 0x46,
+		0xab, 0x68, 0xf6, 0x48, 0x3c, 0xea, 0x5a, 0x06, 0x30, 0x21, 0x19, 0xed, 0x74, 0xbe, 0x0b, 0x97,
+		0xee, 0x91, 0x35, 0x94, 0x1f, 0xcb, 0x68, 0x7f, 0xe4, 0x48, 0xb0, 0x16, 0xfb, 0xf0, 0x74, 0xdb,
+		0x06, 0x59, 0x2e, 0x5a, 0x9c, 0xce, 0x8f, 0x7d, 0xba, 0x48, 0xd5, 0x3f, 0x5c, 0xb0, 0xc2, 0x33,
+		0x48, 0x60, 0x17, 0x08, 0x85, 0xba, 0xff, 0xb9, 0x34, 0x0a, 0x3d, 0x8f, 0x21, 0x13, 0x12, 0x1b
+};
+
+static const uint8_t AES_CBC_ciphertext_1536B[] = {
+		0x89, 0x93, 0x05, 0x99, 0xa9, 0xed, 0xea, 0x62, 0xc9, 0xda, 0x51, 0x15, 0xce, 0x42, 0x91, 0xc3,
+		0x80, 0xc8, 0x03, 0x88, 0xc2, 0x63, 0xda, 0x53, 0x1a, 0xf3, 0xeb, 0xd5, 0xba, 0x6f, 0x23, 0xb2,
+		0xed, 0x8f, 0x89, 0xb1, 0xb3, 0xca, 0x90, 0x7a, 0xdd, 0x3f, 0xf6, 0xca, 0x86, 0x58, 0x54, 0xbc,
+		0xab, 0x0f, 0xf4, 0xab, 0x6d, 0x5d, 0x42, 0xd0, 0x17, 0x49, 0x17, 0xd1, 0x93, 0xea, 0xe8, 0x22,
+		0xc1, 0x34, 0x9f, 0x3a, 0x3b, 0xaa, 0xe9, 0x1b, 0x93, 0xff, 0x6b, 0x68, 0xba, 0xe6, 0xd2, 0x39,
+		0x3d, 0x55, 0x34, 0x8f, 0x98, 0x86, 0xb4, 0xd8, 0x7c, 0x0d, 0x3e, 0x01, 0x63, 0x04, 0x01, 0xff,
+		0x16, 0x0f, 0x51, 0x5f, 0x73, 0x53, 0xf0, 0x3a, 0x38, 0xb4, 0x4d, 0x8d, 0xaf, 0xa3, 0xca, 0x2f,
+		0x6f, 0xdf, 0xc0, 0x41, 0x6c, 0x48, 0x60, 0x1a, 0xe4, 0xe7, 0x8a, 0x65, 0x6f, 0x8d, 0xd7, 0xe1,
+		0x10, 0xab, 0x78, 0x5b, 0xb9, 0x69, 0x1f, 0xe0, 0x5c, 0xf1, 0x19, 0x12, 0x21, 0xc7, 0x51, 0xbc,
+		0x61, 0x5f, 0xc0, 0x36, 0x17, 0xc0, 0x28, 0xd9, 0x51, 0xcb, 0x43, 0xd9, 0xfa, 0xd1, 0xad, 0x79,
+		0x69, 0x86, 0x49, 0xc5, 0xe5, 0x69, 0x27, 0xce, 0x22, 0xd0, 0xe1, 0x6a, 0xf9, 0x02, 0xca, 0x6c,
+		0x34, 0xc7, 0xb8, 0x02, 0xc1, 0x38, 0x7f, 0xd5, 0x15, 0xf5, 0xd6, 0xeb, 0xf9, 0x30, 0x40, 0x43,
+		0xea, 0x87, 0xde, 0x35, 0xf6, 0x83, 0x59, 0x09, 0x68, 0x62, 0x00, 0x87, 0xb8, 0xe7, 0xca, 0x05,
+		0x0f, 0xac, 0x42, 0x58, 0x45, 0xaa, 0xc9, 0x9b, 0xfd, 0x2a, 0xda, 0x65, 0x33, 0x93, 0x9d, 0xc6,
+		0x93, 0x8d, 0xe2, 0xc5, 0x71, 0xc1, 0x5c, 0x13, 0xde, 0x7b, 0xd4, 0xb9, 0x4c, 0x35, 0x61, 0x85,
+		0x90, 0x78, 0xf7, 0x81, 0x98, 0x45, 0x99, 0x24, 0x58, 0x73, 0x28, 0xf8, 0x31, 0xab, 0x54, 0x2e,
+		0xc0, 0x38, 0x77, 0x25, 0x5c, 0x06, 0x9c, 0xc3, 0x69, 0x21, 0x92, 0x76, 0xe1, 0x16, 0xdc, 0xa9,
+		0xee, 0xb6, 0x80, 0x66, 0x43, 0x11, 0x24, 0xb3, 0x07, 0x17, 0x89, 0x0f, 0xcb, 0xe0, 0x60, 0xa8,
+		0x9d, 0x06, 0x4b, 0x6e, 0x72, 0xb7, 0xbc, 0x4f, 0xb8, 0xc0, 0x80, 0xa2, 0xfb, 0x46, 0x5b, 0x8f,
+		0x11, 0x01, 0x92, 0x9d, 0x37, 0x09, 0x98, 0xc8, 0x0a, 0x46, 0xae, 0x12, 0xac, 0x61, 0x3f, 0xe7,
+		0x41, 0x1a, 0xaa, 0x2e, 0xdc, 0xd7, 0x2a, 0x47, 0xee, 0xdf, 0x08, 0xd1, 0xff, 0xea, 0x13, 0xc6,
+		0x05, 0xdb, 0x29, 0xcc, 0x03, 0xba, 0x7b, 0x6d, 0x40, 0xc1, 0xc9, 0x76, 0x75, 0x03, 0x7a, 0x71,
+		0xc9, 0x5f, 0xd9, 0xe0, 0x61, 0x69, 0x36, 0x8f, 0xb2, 0xbc, 0x28, 0xf3, 0x90, 0x71, 0xda, 0x5f,
+		0x08, 0xd5, 0x0d, 0xc1, 0xe6, 0xbd, 0x2b, 0xc6, 0x6c, 0x42, 0xfd, 0xbf, 0x10, 0xe8, 0x5f, 0x87,
+		0x3d, 0x21, 0x42, 0x85, 0x01, 0x0a, 0xbf, 0x8e, 0x49, 0xd3, 0x9c, 0x89, 0x3b, 0xea, 0xe1, 0xbf,
+		0xe9, 0x9b, 0x5e, 0x0e, 0xb8, 0xeb, 0xcd, 0x3a, 0xf6, 0x29, 0x41, 0x35, 0xdd, 0x9b, 0x13, 0x24,
+		0xe0, 0x1d, 0x8a, 0xcb, 0x20, 0xf8, 0x41, 0x51, 0x3e, 0x23, 0x8c, 0x67, 0x98, 0x39, 0x53, 0x77,
+		0x2a, 0x68, 0xf4, 0x3c, 0x7e, 0xd6, 0xc4, 0x6e, 0xf1, 0x53, 0xe9, 0xd8, 0x5c, 0xc1, 0xa9, 0x38,
+		0x6f, 0x5e, 0xe4, 0xd4, 0x29, 0x1c, 0x6c, 0xee, 0x2f, 0xea, 0xde, 0x61, 0x71, 0x5a, 0xea, 0xce,
+		0x23, 0x6e, 0x1b, 0x16, 0x43, 0xb7, 0xc0, 0xe3, 0x87, 0xa1, 0x95, 0x1e, 0x97, 0x4d, 0xea, 0xa6,
+		0xf7, 0x25, 0xac, 0x82, 0x2a, 0xd3, 0xa6, 0x99, 0x75, 0xdd, 0xc1, 0x55, 0x32, 0x6b, 0xea, 0x33,
+		0x88, 0xce, 0x06, 0xac, 0x15, 0x39, 0x19, 0xa3, 0x59, 0xaf, 0x7a, 0x1f, 0xd9, 0x72, 0x5e, 0xf7,
+		0x4c, 0xf3, 0x5d, 0x6b, 0xf2, 0x16, 0x92, 0xa8, 0x9e, 0x3d, 0xd4, 0x4c, 0x72, 0x55, 0x4e, 0x4a,
+		0xf7, 0x8b, 0x2f, 0x67, 0x5a, 0x90, 0xb7, 0xcf, 0x16, 0xd3, 0x7b, 0x5a, 0x9a, 0xc8, 0x9f, 0xbf,
+		0x01, 0x76, 0x3b, 0x86, 0x2c, 0x2a, 0x78, 0x10, 0x70, 0x05, 0x38, 0xf9, 0xdd, 0x2a, 0x1d, 0x00,
+		0x25, 0xb7, 0x10, 0xac, 0x3b, 0x3c, 0x4d, 0x3c, 0x01, 0x68, 0x3c, 0x5a, 0x29, 0xc2, 0xa0, 0x1b,
+		0x95, 0x67, 0xf9, 0x0a, 0x60, 0xb7, 0x11, 0x9c, 0x40, 0x45, 0xd7, 0xb0, 0xda, 0x49, 0x87, 0xcd,
+		0xb0, 0x9b, 0x61, 0x8c, 0xf4, 0x0d, 0x94, 0x1d, 0x79, 0x66, 0x13, 0x0b, 0xc6, 0x6b, 0x19, 0xee,
+		0xa0, 0x6b, 0x64, 0x7d, 0xc4, 0xff, 0x98, 0x72, 0x60, 0xab, 0x7f, 0x0f, 0x4d, 0x5d, 0x6b, 0xc3,
+		0xba, 0x5e, 0x0d, 0x04, 0xd9, 0x59, 0x17, 0xd0, 0x64, 0xbe, 0xfb, 0x58, 0xfc, 0xed, 0x18, 0xf6,
+		0xac, 0x19, 0xa4, 0xfd, 0x16, 0x59, 0x80, 0x58, 0xb8, 0x0f, 0x79, 0x24, 0x60, 0x18, 0x62, 0xa9,
+		0xa3, 0xa0, 0xe8, 0x81, 0xd6, 0xec, 0x5b, 0xfe, 0x5b, 0xb8, 0xa4, 0x00, 0xa9, 0xd0, 0x90, 0x17,
+		0xe5, 0x50, 0x3d, 0x2b, 0x12, 0x6e, 0x2a, 0x13, 0x65, 0x7c, 0xdf, 0xdf, 0xa7, 0xdd, 0x9f, 0x78,
+		0x5f, 0x8f, 0x4e, 0x90, 0xa6, 0x10, 0xe4, 0x7b, 0x68, 0x6b, 0xfd, 0xa9, 0x6d, 0x47, 0xfa, 0xec,
+		0x42, 0x35, 0x07, 0x12, 0x3e, 0x78, 0x23, 0x15, 0xff, 0xe2, 0x65, 0xc7, 0x47, 0x89, 0x2f, 0x97,
+		0x7c, 0xd7, 0x6b, 0x69, 0x35, 0x79, 0x6f, 0x85, 0xb4, 0xa9, 0x75, 0x04, 0x32, 0x9a, 0xfe, 0xf0,
+		0xce, 0xe3, 0xf1, 0xab, 0x15, 0x47, 0xe4, 0x9c, 0xc1, 0x48, 0x32, 0x3c, 0xbe, 0x44, 0x72, 0xc9,
+		0xaa, 0x50, 0x37, 0xa6, 0xbe, 0x41, 0xcf, 0xe8, 0x17, 0x4e, 0x37, 0xbe, 0xf1, 0x34, 0x2c, 0xd9,
+		0x60, 0x48, 0x09, 0xa5, 0x26, 0x00, 0x31, 0x77, 0x4e, 0xac, 0x7c, 0x89, 0x75, 0xe3, 0xde, 0x26,
+		0x4c, 0x32, 0x54, 0x27, 0x8e, 0x92, 0x26, 0x42, 0x85, 0x76, 0x01, 0x76, 0x62, 0x4c, 0x29, 0xe9,
+		0x38, 0x05, 0x51, 0x54, 0x97, 0xa3, 0x03, 0x59, 0x5e, 0xec, 0x0c, 0xe4, 0x96, 0xb7, 0x15, 0xa8,
+		0x41, 0x06, 0x2b, 0x78, 0x95, 0x24, 0xf6, 0x32, 0xc5, 0xec, 0xd7, 0x89, 0x28, 0x1e, 0xec, 0xb1,
+		0xc7, 0x21, 0x0c, 0xd3, 0x80, 0x7c, 0x5a, 0xe6, 0xb1, 0x3a, 0x52, 0x33, 0x84, 0x4e, 0x32, 0x6e,
+		0x7a, 0xf6, 0x43, 0x15, 0x5b, 0xa6, 0xba, 0xeb, 0xa8, 0xe4, 0xff, 0x4f, 0xbd, 0xbd, 0xa8, 0x5e,
+		0xbe, 0x27, 0xaf, 0xc5, 0xf7, 0x9e, 0xdf, 0x48, 0x22, 0xca, 0x6a, 0x0b, 0x3c, 0xd7, 0xe0, 0xdc,
+		0xf3, 0x71, 0x08, 0xdc, 0x28, 0x13, 0x08, 0xf2, 0x08, 0x1d, 0x9d, 0x7b, 0xd9, 0xde, 0x6f, 0xe6,
+		0xe8, 0x88, 0x18, 0xc2, 0xcd, 0x93, 0xc5, 0x38, 0x21, 0x68, 0x4c, 0x9a, 0xfb, 0xb6, 0x18, 0x16,
+		0x73, 0x2c, 0x1d, 0x6f, 0x95, 0xfb, 0x65, 0x4f, 0x7c, 0xec, 0x8d, 0x6c, 0xa8, 0xc0, 0x55, 0x28,
+		0xc6, 0xc3, 0xea, 0xeb, 0x05, 0xf5, 0x65, 0xeb, 0x53, 0xe1, 0x54, 0xef, 0xb8, 0x64, 0x98, 0x2d,
+		0x98, 0x9e, 0xc8, 0xfe, 0xa2, 0x07, 0x30, 0xf7, 0xf7, 0xae, 0xdb, 0x32, 0xf8, 0x71, 0x9d, 0x06,
+		0xdf, 0x9b, 0xda, 0x61, 0x7d, 0xdb, 0xae, 0x06, 0x24, 0x63, 0x74, 0xb6, 0xf3, 0x1b, 0x66, 0x09,
+		0x60, 0xff, 0x2b, 0x29, 0xf5, 0xa9, 0x9d, 0x61, 0x5d, 0x55, 0x10, 0x82, 0x21, 0xbb, 0x64, 0x0d,
+		0xef, 0x5c, 0xe3, 0x30, 0x1b, 0x60, 0x1e, 0x5b, 0xfe, 0x6c, 0xf5, 0x15, 0xa3, 0x86, 0x27, 0x58,
+		0x46, 0x00, 0x20, 0xcb, 0x86, 0x9a, 0x52, 0x29, 0x20, 0x68, 0x4d, 0x67, 0x88, 0x70, 0xc2, 0x31,
+		0xd8, 0xbb, 0xa5, 0xa7, 0x88, 0x7f, 0x66, 0xbc, 0xaa, 0x0f, 0xe1, 0x78, 0x7b, 0x97, 0x3c, 0xb7,
+		0xd7, 0xd8, 0x04, 0xe0, 0x09, 0x60, 0xc8, 0xd0, 0x9e, 0xe5, 0x6b, 0x31, 0x7f, 0x88, 0xfe, 0xc3,
+		0xfd, 0x89, 0xec, 0x76, 0x4b, 0xb3, 0xa7, 0x37, 0x03, 0xb7, 0xc6, 0x10, 0x7c, 0x9d, 0x0c, 0x75,
+		0xd3, 0x08, 0x14, 0x94, 0x03, 0x42, 0x25, 0x26, 0x85, 0xf7, 0xf0, 0x90, 0x06, 0x3e, 0x6f, 0x60,
+		0x52, 0x55, 0xd5, 0x0f, 0x79, 0x64, 0x69, 0x69, 0x46, 0xf9, 0x7f, 0x7f, 0x03, 0xf1, 0x1f, 0xdb,
+		0x39, 0x05, 0xba, 0x4a, 0x8f, 0x17, 0xe7, 0xba, 0xe2, 0x07, 0x7c, 0x1d, 0x9e, 0xbc, 0x94, 0xc0,
+		0x61, 0x59, 0x8e, 0x72, 0xaf, 0xfc, 0x99, 0xe4, 0xd5, 0xa8, 0xee, 0x0a, 0x48, 0x2d, 0x82, 0x8b,
+		0x34, 0x54, 0x8a, 0xce, 0xc7, 0xfa, 0xdd, 0xba, 0x54, 0xdf, 0xb3, 0x30, 0x33, 0x73, 0x2e, 0xd5,
+		0x52, 0xab, 0x49, 0x91, 0x4e, 0x0a, 0xd6, 0x2f, 0x67, 0xe4, 0xdd, 0x64, 0x48, 0x16, 0xd9, 0x85,
+		0xaa, 0x52, 0xa5, 0x0b, 0xd3, 0xb4, 0x2d, 0x77, 0x5e, 0x52, 0x77, 0x17, 0xcf, 0xbe, 0x88, 0x04,
+		0x01, 0x52, 0xe2, 0xf1, 0x46, 0xe2, 0x91, 0x30, 0x65, 0xcf, 0xc0, 0x65, 0x45, 0xc3, 0x7e, 0xf4,
+		0x2e, 0xb5, 0xaf, 0x6f, 0xab, 0x1a, 0xfa, 0x70, 0x35, 0xb8, 0x4f, 0x2d, 0x78, 0x90, 0x33, 0xb5,
+		0x9a, 0x67, 0xdb, 0x2f, 0x28, 0x32, 0xb6, 0x54, 0xab, 0x4c, 0x6b, 0x85, 0xed, 0x6c, 0x3e, 0x05,
+		0x2a, 0xc7, 0x32, 0xe8, 0xf5, 0xa3, 0x7b, 0x4e, 0x7b, 0x58, 0x24, 0x73, 0xf7, 0xfd, 0xc7, 0xc8,
+		0x6c, 0x71, 0x68, 0xb1, 0xf6, 0xc5, 0x9e, 0x1e, 0xe3, 0x5c, 0x25, 0xc0, 0x5b, 0x3e, 0x59, 0xa1,
+		0x18, 0x5a, 0xe8, 0xb5, 0xd1, 0x44, 0x13, 0xa3, 0xe6, 0x05, 0x76, 0xd2, 0x8d, 0x6e, 0x54, 0x68,
+		0x0c, 0xa4, 0x7b, 0x8b, 0xd3, 0x8c, 0x42, 0x13, 0x87, 0xda, 0xdf, 0x8f, 0xa5, 0x83, 0x7a, 0x42,
+		0x99, 0xb7, 0xeb, 0xe2, 0x79, 0xe0, 0xdb, 0xda, 0x33, 0xa8, 0x50, 0x3a, 0xd7, 0xe7, 0xd3, 0x61,
+		0x18, 0xb8, 0xaa, 0x2d, 0xc8, 0xd8, 0x2c, 0x28, 0xe5, 0x97, 0x0a, 0x7c, 0x6c, 0x7f, 0x09, 0xd7,
+		0x88, 0x80, 0xac, 0x12, 0xed, 0xf8, 0xc6, 0xb5, 0x2d, 0xd6, 0x63, 0x9b, 0x98, 0x35, 0x26, 0xde,
+		0xf6, 0x31, 0xee, 0x7e, 0xa0, 0xfb, 0x16, 0x98, 0xb1, 0x96, 0x1d, 0xee, 0xe3, 0x2f, 0xfb, 0x41,
+		0xdd, 0xea, 0x10, 0x1e, 0x03, 0x89, 0x18, 0xd2, 0x47, 0x0c, 0xa0, 0x57, 0xda, 0x76, 0x3a, 0x37,
+		0x2c, 0xe4, 0xf9, 0x77, 0xc8, 0x43, 0x5f, 0xcb, 0xd6, 0x85, 0xf7, 0x22, 0xe4, 0x32, 0x25, 0xa8,
+		0xdc, 0x21, 0xc0, 0xf5, 0x95, 0xb2, 0xf8, 0x83, 0xf0, 0x65, 0x61, 0x15, 0x48, 0x94, 0xb7, 0x03,
+		0x7f, 0x66, 0xa1, 0x39, 0x1f, 0xdd, 0xce, 0x96, 0xfe, 0x58, 0x81, 0x3d, 0x41, 0x11, 0x87, 0x13,
+		0x26, 0x1b, 0x6d, 0xf3, 0xca, 0x2e, 0x2c, 0x76, 0xd3, 0x2f, 0x6d, 0x49, 0x70, 0x53, 0x05, 0x96,
+		0xcc, 0x30, 0x2b, 0x83, 0xf2, 0xc6, 0xb2, 0x4b, 0x22, 0x13, 0x95, 0x42, 0xeb, 0x56, 0x4d, 0x22,
+		0xe6, 0x43, 0x6f, 0xba, 0xe7, 0x3b, 0xe5, 0x59, 0xce, 0x57, 0x88, 0x85, 0xb6, 0xbf, 0x15, 0x37,
+		0xb3, 0x7a, 0x7e, 0xc4, 0xbc, 0x99, 0xfc, 0xe4, 0x89, 0x00, 0x68, 0x39, 0xbc, 0x5a, 0xba, 0xab,
+		0x52, 0xab, 0xe6, 0x81, 0xfd, 0x93, 0x62, 0xe9, 0xb7, 0x12, 0xd1, 0x18, 0x1a, 0xb9, 0x55, 0x4a,
+		0x0f, 0xae, 0x35, 0x11, 0x04, 0x27, 0xf3, 0x42, 0x4e, 0xca, 0xdf, 0x9f, 0x12, 0x62, 0xea, 0x03,
+		0xc0, 0xa9, 0x22, 0x7b, 0x6c, 0x6c, 0xe3, 0xdf, 0x16, 0xad, 0x03, 0xc9, 0xfe, 0xa4, 0xdd, 0x4f
+};
+
+static const uint8_t AES_CBC_ciphertext_1792B[] = {
+		0x59, 0xcc, 0xfe, 0x8f, 0xb4, 0x9d, 0x0e, 0xd1, 0x85, 0xfc, 0x9b, 0x43, 0xc1, 0xb7, 0x54, 0x67,
+		0x01, 0xef, 0xb8, 0x71, 0x36, 0xdb, 0x50, 0x48, 0x7a, 0xea, 0xcf, 0xce, 0xba, 0x30, 0x10, 0x2e,
+		0x96, 0x2b, 0xfd, 0xcf, 0x00, 0xe3, 0x1f, 0xac, 0x66, 0x14, 0x30, 0x86, 0x49, 0xdb, 0x01, 0x8b,
+		0x07, 0xdd, 0x00, 0x9d, 0x0d, 0x5c, 0x19, 0x11, 0xe8, 0x44, 0x2b, 0x25, 0x70, 0xed, 0x7c, 0x33,
+		0x0d, 0xe3, 0x34, 0x93, 0x63, 0xad, 0x26, 0xb1, 0x11, 0x91, 0x34, 0x2e, 0x1d, 0x50, 0xaa, 0xd4,
+		0xef, 0x3a, 0x6d, 0xd7, 0x33, 0x20, 0x0d, 0x3f, 0x9b, 0xdd, 0xc3, 0xa5, 0xc5, 0xf1, 0x99, 0xdc,
+		0xea, 0x52, 0xda, 0x55, 0xea, 0xa2, 0x7a, 0xc5, 0x78, 0x44, 0x4a, 0x02, 0x33, 0x19, 0x62, 0x37,
+		0xf8, 0x8b, 0xd1, 0x0c, 0x21, 0xdf, 0x40, 0x19, 0x81, 0xea, 0xfb, 0x1c, 0xa7, 0xcc, 0x60, 0xfe,
+		0x63, 0x25, 0x8f, 0xf3, 0x73, 0x0f, 0x45, 0xe6, 0x6a, 0x18, 0xbf, 0xbe, 0xad, 0x92, 0x2a, 0x1e,
+		0x15, 0x65, 0x6f, 0xef, 0x92, 0xcd, 0x0e, 0x19, 0x3d, 0x42, 0xa8, 0xfc, 0x0d, 0x32, 0x58, 0xe0,
+		0x56, 0x9f, 0xd6, 0x9b, 0x8b, 0xec, 0xe0, 0x45, 0x4d, 0x7e, 0x73, 0x87, 0xff, 0x74, 0x92, 0x59,
+		0x60, 0x13, 0x93, 0xda, 0xec, 0xbf, 0xfa, 0x20, 0xb6, 0xe7, 0xdf, 0xc7, 0x10, 0xf5, 0x79, 0xb4,
+		0xd7, 0xac, 0xaf, 0x2b, 0x37, 0x52, 0x30, 0x1d, 0xbe, 0x0f, 0x60, 0x77, 0x3d, 0x03, 0x63, 0xa9,
+		0xae, 0xb1, 0xf3, 0xca, 0xca, 0xb4, 0x21, 0xd7, 0x6f, 0x2e, 0x5e, 0x9b, 0x68, 0x53, 0x80, 0xab,
+		0x30, 0x23, 0x0a, 0x72, 0x6b, 0xb1, 0xd8, 0x25, 0x5d, 0x3a, 0x62, 0x9b, 0x4f, 0x59, 0x3b, 0x79,
+		0xa8, 0x9e, 0x08, 0x6d, 0x37, 0xb0, 0xfc, 0x42, 0x51, 0x25, 0x86, 0xbd, 0x54, 0x5a, 0x95, 0x20,
+		0x6c, 0xac, 0xb9, 0x30, 0x1c, 0x03, 0xc9, 0x49, 0x38, 0x55, 0x31, 0x49, 0xed, 0xa9, 0x0e, 0xc3,
+		0x65, 0xb4, 0x68, 0x6b, 0x07, 0x4c, 0x0a, 0xf9, 0x21, 0x69, 0x7c, 0x9f, 0x28, 0x80, 0xe9, 0x49,
+		0x22, 0x7c, 0xec, 0x97, 0xf7, 0x70, 0xb4, 0xb8, 0x25, 0xe7, 0x80, 0x2c, 0x43, 0x24, 0x8a, 0x2e,
+		0xac, 0xa2, 0x84, 0x20, 0xe7, 0xf4, 0x6b, 0x86, 0x37, 0x05, 0xc7, 0x59, 0x04, 0x49, 0x2a, 0x99,
+		0x80, 0x46, 0x32, 0x19, 0xe6, 0x30, 0xce, 0xc0, 0xef, 0x6e, 0xec, 0xe5, 0x2f, 0x24, 0xc1, 0x78,
+		0x45, 0x02, 0xd3, 0x64, 0x99, 0xf5, 0xc7, 0xbc, 0x8f, 0x8c, 0x75, 0xb1, 0x0a, 0xc8, 0xc3, 0xbd,
+		0x5e, 0x7e, 0xbd, 0x0e, 0xdf, 0x4b, 0x96, 0x6a, 0xfd, 0x03, 0xdb, 0xd1, 0x31, 0x1e, 0x27, 0xf9,
+		0xe5, 0x83, 0x9a, 0xfc, 0x13, 0x4c, 0xd3, 0x04, 0xdb, 0xdb, 0x3f, 0x35, 0x93, 0x4e, 0x14, 0x6b,
+		0x00, 0x5c, 0xb6, 0x11, 0x50, 0xee, 0x61, 0x5c, 0x10, 0x5c, 0xd0, 0x90, 0x02, 0x2e, 0x12, 0xe0,
+		0x50, 0x44, 0xad, 0x75, 0xcd, 0x94, 0xcf, 0x92, 0xcb, 0xe3, 0xe8, 0x77, 0x4b, 0xd7, 0x1a, 0x7c,
+		0xdd, 0x6b, 0x49, 0x21, 0x7c, 0xe8, 0x2c, 0x25, 0x49, 0x86, 0x1e, 0x54, 0xae, 0xfc, 0x0e, 0x80,
+		0xb1, 0xd5, 0xa5, 0x23, 0xcf, 0xcc, 0x0e, 0x11, 0xe2, 0x7c, 0x3c, 0x25, 0x78, 0x64, 0x03, 0xa1,
+		0xdd, 0x9f, 0x74, 0x12, 0x7b, 0x21, 0xb5, 0x73, 0x15, 0x3c, 0xed, 0xad, 0x07, 0x62, 0x21, 0x79,
+		0xd4, 0x2f, 0x0d, 0x72, 0xe9, 0x7c, 0x6b, 0x96, 0x6e, 0xe5, 0x36, 0x4a, 0xd2, 0x38, 0xe1, 0xff,
+		0x6e, 0x26, 0xa4, 0xac, 0x83, 0x07, 0xe6, 0x67, 0x74, 0x6c, 0xec, 0x8b, 0x4b, 0x79, 0x33, 0x50,
+		0x2f, 0x8f, 0xa0, 0x8f, 0xfa, 0x38, 0x6a, 0xa2, 0x3a, 0x42, 0x85, 0x15, 0x90, 0xd0, 0xb3, 0x0d,
+		0x8a, 0xe4, 0x60, 0x03, 0xef, 0xf9, 0x65, 0x8a, 0x4e, 0x50, 0x8c, 0x65, 0xba, 0x61, 0x16, 0xc3,
+		0x93, 0xb7, 0x75, 0x21, 0x98, 0x25, 0x60, 0x6e, 0x3d, 0x68, 0xba, 0x7c, 0xe4, 0xf3, 0xd9, 0x9b,
+		0xfb, 0x7a, 0xed, 0x1f, 0xb3, 0x4b, 0x88, 0x74, 0x2c, 0xb8, 0x8c, 0x22, 0x95, 0xce, 0x90, 0xf1,
+		0xdb, 0x80, 0xa6, 0x39, 0xae, 0x82, 0xa1, 0xef, 0x75, 0xec, 0xfe, 0xf1, 0xe8, 0x04, 0xfd, 0x99,
+		0x1b, 0x5f, 0x45, 0x87, 0x4f, 0xfa, 0xa2, 0x3e, 0x3e, 0xb5, 0x01, 0x4b, 0x46, 0xeb, 0x13, 0x9a,
+		0xe4, 0x7d, 0x03, 0x87, 0xb1, 0x59, 0x91, 0x8e, 0x37, 0xd3, 0x16, 0xce, 0xef, 0x4b, 0xe9, 0x46,
+		0x8d, 0x2a, 0x50, 0x2f, 0x41, 0xd3, 0x7b, 0xcf, 0xf0, 0xb7, 0x8b, 0x65, 0x0f, 0xa3, 0x27, 0x10,
+		0xe9, 0xa9, 0xe9, 0x2c, 0xbe, 0xbb, 0x82, 0xe3, 0x7b, 0x0b, 0x81, 0x3e, 0xa4, 0x6a, 0x4f, 0x3b,
+		0xd5, 0x61, 0xf8, 0x47, 0x04, 0x99, 0x5b, 0xff, 0xf3, 0x14, 0x6e, 0x57, 0x5b, 0xbf, 0x1b, 0xb4,
+		0x3f, 0xf9, 0x31, 0xf6, 0x95, 0xd5, 0x10, 0xa9, 0x72, 0x28, 0x23, 0xa9, 0x6a, 0xa2, 0xcf, 0x7d,
+		0xe3, 0x18, 0x95, 0xda, 0xbc, 0x6f, 0xe9, 0xd8, 0xef, 0x49, 0x3f, 0xd3, 0xef, 0x1f, 0xe1, 0x50,
+		0xe8, 0x8a, 0xc0, 0xce, 0xcc, 0xb7, 0x5e, 0x0e, 0x8b, 0x95, 0x80, 0xfd, 0x58, 0x2a, 0x9b, 0xc8,
+		0xb4, 0x17, 0x04, 0x46, 0x74, 0xd4, 0x68, 0x91, 0x33, 0xc8, 0x31, 0x15, 0x84, 0x16, 0x35, 0x03,
+		0x64, 0x6d, 0xa9, 0x4e, 0x20, 0xeb, 0xa9, 0x3f, 0x21, 0x5e, 0x9b, 0x09, 0xc3, 0x45, 0xf8, 0x7c,
+		0x59, 0x62, 0x29, 0x9a, 0x5c, 0xcf, 0xb4, 0x27, 0x5e, 0x13, 0xea, 0xb3, 0xef, 0xd9, 0x01, 0x2a,
+		0x65, 0x5f, 0x14, 0xf4, 0xbf, 0x28, 0x89, 0x3d, 0xdd, 0x9d, 0x52, 0xbd, 0x9e, 0x5b, 0x3b, 0xd2,
+		0xc2, 0x81, 0x35, 0xb6, 0xac, 0xdd, 0x27, 0xc3, 0x7b, 0x01, 0x5a, 0x6d, 0x4c, 0x5e, 0x2c, 0x30,
+		0xcb, 0x3a, 0xfa, 0xc1, 0xd7, 0x31, 0x67, 0x3e, 0x08, 0x6a, 0xe8, 0x8c, 0x75, 0xac, 0x1a, 0x6a,
+		0x52, 0xf7, 0x51, 0xcd, 0x85, 0x3f, 0x3c, 0xa7, 0xea, 0xbc, 0xd7, 0x18, 0x9e, 0x27, 0x73, 0xe6,
+		0x2b, 0x58, 0xb6, 0xd2, 0x29, 0x68, 0xd5, 0x8f, 0x00, 0x4d, 0x55, 0xf6, 0x61, 0x5a, 0xcc, 0x51,
+		0xa6, 0x5e, 0x85, 0xcb, 0x0b, 0xfd, 0x06, 0xca, 0xf5, 0xbf, 0x0d, 0x13, 0x74, 0x78, 0x6d, 0x9e,
+		0x20, 0x11, 0x84, 0x3e, 0x78, 0x17, 0x04, 0x4f, 0x64, 0x2c, 0x3b, 0x3e, 0x93, 0x7b, 0x58, 0x33,
+		0x07, 0x52, 0xf7, 0x60, 0x6a, 0xa8, 0x3b, 0x19, 0x27, 0x7a, 0x93, 0xc5, 0x53, 0xad, 0xec, 0xf6,
+		0xc8, 0x94, 0xee, 0x92, 0xea, 0xee, 0x7e, 0xea, 0xb9, 0x5f, 0xac, 0x59, 0x5d, 0x2e, 0x78, 0x53,
+		0x72, 0x81, 0x92, 0xdd, 0x1c, 0x63, 0xbe, 0x02, 0xeb, 0xa8, 0x1b, 0x2a, 0x6e, 0x72, 0xe3, 0x2d,
+		0x84, 0x0d, 0x8a, 0x22, 0xf6, 0xba, 0xab, 0x04, 0x8e, 0x04, 0x24, 0xdb, 0xcc, 0xe2, 0x69, 0xeb,
+		0x4e, 0xfa, 0x6b, 0x5b, 0xc8, 0xc0, 0xd9, 0x25, 0xcb, 0x40, 0x8d, 0x4b, 0x8e, 0xa0, 0xd4, 0x72,
+		0x98, 0x36, 0x46, 0x3b, 0x4f, 0x5f, 0x96, 0x84, 0x03, 0x28, 0x86, 0x4d, 0xa1, 0x8a, 0xd7, 0xb2,
+		0x5b, 0x27, 0x01, 0x80, 0x62, 0x49, 0x56, 0xb9, 0xa0, 0xa1, 0xe3, 0x6e, 0x22, 0x2a, 0x5d, 0x03,
+		0x86, 0x40, 0x36, 0x22, 0x5e, 0xd2, 0xe5, 0xc0, 0x6b, 0xfa, 0xac, 0x80, 0x4e, 0x09, 0x99, 0xbc,
+		0x2f, 0x9b, 0xcc, 0xf3, 0x4e, 0xf7, 0x99, 0x98, 0x11, 0x6e, 0x6f, 0x62, 0x22, 0x6b, 0x92, 0x95,
+		0x3b, 0xc3, 0xd2, 0x8e, 0x0f, 0x07, 0xc2, 0x51, 0x5c, 0x4d, 0xb2, 0x6e, 0xc0, 0x27, 0x73, 0xcd,
+		0x57, 0xb7, 0xf0, 0xe9, 0x2e, 0xc8, 0xe2, 0x0c, 0xd1, 0xb5, 0x0f, 0xff, 0xf9, 0xec, 0x38, 0xba,
+		0x97, 0xd6, 0x94, 0x9b, 0xd1, 0x79, 0xb6, 0x6a, 0x01, 0x17, 0xe4, 0x7e, 0xa6, 0xd5, 0x86, 0x19,
+		0xae, 0xf3, 0xf0, 0x62, 0x73, 0xc0, 0xf0, 0x0a, 0x7a, 0x96, 0x93, 0x72, 0x89, 0x7e, 0x25, 0x57,
+		0xf8, 0xf7, 0xd5, 0x1e, 0xe5, 0xac, 0xd6, 0x38, 0x4f, 0xe8, 0x81, 0xd1, 0x53, 0x41, 0x07, 0x2d,
+		0x58, 0x34, 0x1c, 0xef, 0x74, 0x2e, 0x61, 0xca, 0xd3, 0xeb, 0xd6, 0x93, 0x0a, 0xf2, 0xf2, 0x86,
+		0x9c, 0xe3, 0x7a, 0x52, 0xf5, 0x42, 0xf1, 0x8b, 0x10, 0xf2, 0x25, 0x68, 0x7e, 0x61, 0xb1, 0x19,
+		0xcf, 0x8f, 0x5a, 0x53, 0xb7, 0x68, 0x4f, 0x1a, 0x71, 0xe9, 0x83, 0x91, 0x3a, 0x78, 0x0f, 0xf7,
+		0xd4, 0x74, 0xf5, 0x06, 0xd2, 0x88, 0xb0, 0x06, 0xe5, 0xc0, 0xfb, 0xb3, 0x91, 0xad, 0xc0, 0x84,
+		0x31, 0xf2, 0x3a, 0xcf, 0x63, 0xe6, 0x4a, 0xd3, 0x78, 0xbe, 0xde, 0x73, 0x3e, 0x02, 0x8e, 0xb8,
+		0x3a, 0xf6, 0x55, 0xa7, 0xf8, 0x5a, 0xb5, 0x0e, 0x0c, 0xc5, 0xe5, 0x66, 0xd5, 0xd2, 0x18, 0xf3,
+		0xef, 0xa5, 0xc9, 0x68, 0x69, 0xe0, 0xcd, 0x00, 0x33, 0x99, 0x6e, 0xea, 0xcb, 0x06, 0x7a, 0xe1,
+		0xe1, 0x19, 0x0b, 0xe7, 0x08, 0xcd, 0x09, 0x1b, 0x85, 0xec, 0xc4, 0xd4, 0x75, 0xf0, 0xd6, 0xfb,
+		0x84, 0x95, 0x07, 0x44, 0xca, 0xa5, 0x2a, 0x6c, 0xc2, 0x00, 0x58, 0x08, 0x87, 0x9e, 0x0a, 0xd4,
+		0x06, 0xe2, 0x91, 0x5f, 0xb7, 0x1b, 0x11, 0xfa, 0x85, 0xfc, 0x7c, 0xf2, 0x0f, 0x6e, 0x3c, 0x8a,
+		0xe1, 0x0f, 0xa0, 0x33, 0x84, 0xce, 0x81, 0x4d, 0x32, 0x4d, 0xeb, 0x41, 0xcf, 0x5a, 0x05, 0x60,
+		0x47, 0x6c, 0x2a, 0xc4, 0x17, 0xd5, 0x16, 0x3a, 0xe4, 0xe7, 0xab, 0x84, 0x94, 0x22, 0xff, 0x56,
+		0xb0, 0x0c, 0x92, 0x6c, 0x19, 0x11, 0x4c, 0xb3, 0xed, 0x58, 0x48, 0x84, 0x2a, 0xe2, 0x19, 0x2a,
+		0xe1, 0xc0, 0x56, 0x82, 0x3c, 0x83, 0xb4, 0x58, 0x2d, 0xf0, 0xb5, 0x1e, 0x76, 0x85, 0x51, 0xc2,
+		0xe4, 0x95, 0x27, 0x96, 0xd1, 0x90, 0xc3, 0x17, 0x75, 0xa1, 0xbb, 0x46, 0x5f, 0xa6, 0xf2, 0xef,
+		0x71, 0x56, 0x92, 0xc5, 0x8a, 0x85, 0x52, 0xe4, 0x63, 0x21, 0x6f, 0x55, 0x85, 0x2b, 0x6b, 0x0d,
+		0xc9, 0x92, 0x77, 0x67, 0xe3, 0xff, 0x2a, 0x2b, 0x90, 0x01, 0x3d, 0x74, 0x63, 0x04, 0x61, 0x3c,
+		0x8e, 0xf8, 0xfc, 0x04, 0xdd, 0x21, 0x85, 0x92, 0x1e, 0x4d, 0x51, 0x8d, 0xb5, 0x6b, 0xf1, 0xda,
+		0x96, 0xf5, 0x8e, 0x3c, 0x38, 0x5a, 0xac, 0x9b, 0xba, 0x0c, 0x84, 0x5d, 0x50, 0x12, 0xc7, 0xc5,
+		0x7a, 0xcb, 0xb1, 0xfa, 0x16, 0x93, 0xdf, 0x98, 0xda, 0x3f, 0x49, 0xa3, 0x94, 0x78, 0x70, 0xc7,
+		0x0b, 0xb6, 0x91, 0xa6, 0x16, 0x2e, 0xcf, 0xfd, 0x51, 0x6a, 0x5b, 0xad, 0x7a, 0xdd, 0xa9, 0x48,
+		0x48, 0xac, 0xd6, 0x45, 0xbc, 0x23, 0x31, 0x1d, 0x86, 0x54, 0x8a, 0x7f, 0x04, 0x97, 0x71, 0x9e,
+		0xbc, 0x2e, 0x6b, 0xd9, 0x33, 0xc8, 0x20, 0xc9, 0xe0, 0x25, 0x86, 0x59, 0x15, 0xcf, 0x63, 0xe5,
+		0x99, 0xf1, 0x24, 0xf1, 0xba, 0xc4, 0x15, 0x02, 0xe2, 0xdb, 0xfe, 0x4a, 0xf8, 0x3b, 0x91, 0x13,
+		0x8d, 0x03, 0x81, 0x9f, 0xb3, 0x3f, 0x04, 0x03, 0x58, 0xc0, 0xef, 0x27, 0x82, 0x14, 0xd2, 0x7f,
+		0x93, 0x70, 0xb7, 0xb2, 0x02, 0x21, 0xb3, 0x07, 0x7f, 0x1c, 0xef, 0x88, 0xee, 0x29, 0x7a, 0x0b,
+		0x3d, 0x75, 0x5a, 0x93, 0xfe, 0x7f, 0x14, 0xf7, 0x4e, 0x4b, 0x7f, 0x21, 0x02, 0xad, 0xf9, 0x43,
+		0x29, 0x1a, 0xe8, 0x1b, 0xf5, 0x32, 0xb2, 0x96, 0xe6, 0xe8, 0x96, 0x20, 0x9b, 0x96, 0x8e, 0x7b,
+		0xfe, 0xd8, 0xc9, 0x9c, 0x65, 0x16, 0xd6, 0x68, 0x95, 0xf8, 0x22, 0xe2, 0xae, 0x84, 0x03, 0xfd,
+		0x87, 0xa2, 0x72, 0x79, 0x74, 0x95, 0xfa, 0xe1, 0xfe, 0xd0, 0x4e, 0x3d, 0x39, 0x2e, 0x67, 0x55,
+		0x71, 0x6c, 0x89, 0x33, 0x49, 0x0c, 0x1b, 0x46, 0x92, 0x31, 0x6f, 0xa6, 0xf0, 0x09, 0xbd, 0x2d,
+		0xe2, 0xca, 0xda, 0x18, 0x33, 0xce, 0x67, 0x37, 0xfd, 0x6f, 0xcb, 0x9d, 0xbd, 0x42, 0xbc, 0xb2,
+		0x9c, 0x28, 0xcd, 0x65, 0x3c, 0x61, 0xbc, 0xde, 0x9d, 0xe1, 0x2a, 0x3e, 0xbf, 0xee, 0x3c, 0xcb,
+		0xb1, 0x50, 0xa9, 0x2c, 0xbe, 0xb5, 0x43, 0xd0, 0xec, 0x29, 0xf9, 0x16, 0x6f, 0x31, 0xd9, 0x9b,
+		0x92, 0xb1, 0x32, 0xae, 0x0f, 0xb6, 0x9d, 0x0e, 0x25, 0x7f, 0x89, 0x1f, 0x1d, 0x01, 0x68, 0xab,
+		0x3d, 0xd1, 0x74, 0x5b, 0x4c, 0x38, 0x7f, 0x3d, 0x33, 0xa5, 0xa2, 0x9f, 0xda, 0x84, 0xa5, 0x82,
+		0x2d, 0x16, 0x66, 0x46, 0x08, 0x30, 0x14, 0x48, 0x5e, 0xca, 0xe3, 0xf4, 0x8c, 0xcb, 0x32, 0xc6,
+		0xf1, 0x43, 0x62, 0xc6, 0xef, 0x16, 0xfa, 0x43, 0xae, 0x9c, 0x53, 0xe3, 0x49, 0x45, 0x80, 0xfd,
+		0x1d, 0x8c, 0xa9, 0x6d, 0x77, 0x76, 0xaa, 0x40, 0xc4, 0x4e, 0x7b, 0x78, 0x6b, 0xe0, 0x1d, 0xce,
+		0x56, 0x3d, 0xf0, 0x11, 0xfe, 0x4f, 0x6a, 0x6d, 0x0f, 0x4f, 0x90, 0x38, 0x92, 0x17, 0xfa, 0x56,
+		0x12, 0xa6, 0xa1, 0x0a, 0xea, 0x2f, 0x50, 0xf9, 0x60, 0x66, 0x6c, 0x7d, 0x5a, 0x08, 0x8e, 0x3c,
+		0xf3, 0xf0, 0x33, 0x02, 0x11, 0x02, 0xfe, 0x4c, 0x56, 0x2b, 0x9f, 0x0c, 0xbd, 0x65, 0x8a, 0x83,
+		0xde, 0x7c, 0x05, 0x26, 0x93, 0x19, 0xcc, 0xf3, 0x71, 0x0e, 0xad, 0x2f, 0xb3, 0xc9, 0x38, 0x50,
+		0x64, 0xd5, 0x4c, 0x60, 0x5f, 0x02, 0x13, 0x34, 0xc9, 0x75, 0xc4, 0x60, 0xab, 0x2e, 0x17, 0x7d
+};
+
+static const uint8_t AES_CBC_ciphertext_2048B[] = {
+		0x8b, 0x55, 0xbd, 0xfd, 0x2b, 0x35, 0x76, 0x5c, 0xd1, 0x90, 0xd7, 0x6a, 0x63, 0x1e, 0x39, 0x71,
+		0x0d, 0x5c, 0xd8, 0x03, 0x00, 0x75, 0xf1, 0x07, 0x03, 0x8d, 0x76, 0xeb, 0x3b, 0x00, 0x1e, 0x33,
+		0x88, 0xfc, 0x8f, 0x08, 0x4d, 0x33, 0xf1, 0x3c, 0xee, 0xd0, 0x5d, 0x19, 0x8b, 0x3c, 0x50, 0x86,
+		0xfd, 0x8d, 0x58, 0x21, 0xb4, 0xae, 0x0f, 0x81, 0xe9, 0x9f, 0xc9, 0xc0, 0x90, 0xf7, 0x04, 0x6f,
+		0x39, 0x1d, 0x8a, 0x3f, 0x8d, 0x32, 0x23, 0xb5, 0x1f, 0xcc, 0x8a, 0x12, 0x2d, 0x46, 0x82, 0x5e,
+		0x6a, 0x34, 0x8c, 0xb1, 0x93, 0x70, 0x3b, 0xde, 0x55, 0xaf, 0x16, 0x35, 0x99, 0x84, 0xd5, 0x88,
+		0xc9, 0x54, 0xb1, 0xb2, 0xd3, 0xeb, 0x9e, 0x55, 0x9a, 0xa9, 0xa7, 0xf5, 0xda, 0x29, 0xcf, 0xe1,
+		0x98, 0x64, 0x45, 0x77, 0xf2, 0x12, 0x69, 0x8f, 0x78, 0xd8, 0x82, 0x41, 0xb2, 0x9f, 0xe2, 0x1c,
+		0x63, 0x9b, 0x24, 0x81, 0x67, 0x95, 0xa2, 0xff, 0x26, 0x9d, 0x65, 0x48, 0x61, 0x30, 0x66, 0x41,
+		0x68, 0x84, 0xbb, 0x59, 0x14, 0x8e, 0x9a, 0x62, 0xb6, 0xca, 0xda, 0xbe, 0x7c, 0x41, 0x52, 0x6e,
+		0x1b, 0x86, 0xbf, 0x08, 0xeb, 0x37, 0x84, 0x60, 0xe4, 0xc4, 0x1e, 0xa8, 0x4c, 0x84, 0x60, 0x2f,
+		0x70, 0x90, 0xf2, 0x26, 0xe7, 0x65, 0x0c, 0xc4, 0x58, 0x36, 0x8e, 0x4d, 0xdf, 0xff, 0x9a, 0x39,
+		0x93, 0x01, 0xcf, 0x6f, 0x6d, 0xde, 0xef, 0x79, 0xb0, 0xce, 0xe2, 0x98, 0xdb, 0x85, 0x8d, 0x62,
+		0x9d, 0xb9, 0x63, 0xfd, 0xf0, 0x35, 0xb5, 0xa9, 0x1b, 0xf9, 0xe5, 0xd4, 0x2e, 0x22, 0x2d, 0xcc,
+		0x42, 0xbf, 0x0e, 0x51, 0xf7, 0x15, 0x07, 0x32, 0x75, 0x5b, 0x74, 0xbb, 0x00, 0xef, 0xd4, 0x66,
+		0x8b, 0xad, 0x71, 0x53, 0x94, 0xd7, 0x7d, 0x2c, 0x40, 0x3e, 0x69, 0xa0, 0x4c, 0x86, 0x5e, 0x06,
+		0xed, 0xdf, 0x22, 0xe2, 0x24, 0x25, 0x4e, 0x9b, 0x5f, 0x49, 0x74, 0xba, 0xed, 0xb1, 0xa6, 0xeb,
+		0xae, 0x3f, 0xc6, 0x9e, 0x0b, 0x29, 0x28, 0x9a, 0xb6, 0xb2, 0x74, 0x58, 0xec, 0xa6, 0x4a, 0xed,
+		0xe5, 0x10, 0x00, 0x85, 0xe1, 0x63, 0x41, 0x61, 0x30, 0x7c, 0x97, 0xcf, 0x75, 0xcf, 0xb6, 0xf3,
+		0xf7, 0xda, 0x35, 0x3f, 0x85, 0x8c, 0x64, 0xca, 0xb7, 0xea, 0x7f, 0xe4, 0xa3, 0x4d, 0x30, 0x84,
+		0x8c, 0x9c, 0x80, 0x5a, 0x50, 0xa5, 0x64, 0xae, 0x26, 0xd3, 0xb5, 0x01, 0x73, 0x36, 0x8a, 0x92,
+		0x49, 0xc4, 0x1a, 0x94, 0x81, 0x9d, 0xf5, 0x6c, 0x50, 0xe1, 0x58, 0x0b, 0x75, 0xdd, 0x6b, 0x6a,
+		0xca, 0x69, 0xea, 0xc3, 0x33, 0x90, 0x9f, 0x3b, 0x65, 0x5d, 0x5e, 0xee, 0x31, 0xb7, 0x32, 0xfd,
+		0x56, 0x83, 0xb6, 0xfb, 0xa8, 0x04, 0xfc, 0x1e, 0x11, 0xfb, 0x02, 0x23, 0x53, 0x49, 0x45, 0xb1,
+		0x07, 0xfc, 0xba, 0xe7, 0x5f, 0x5d, 0x2d, 0x7f, 0x9e, 0x46, 0xba, 0xe9, 0xb0, 0xdb, 0x32, 0x04,
+		0xa4, 0xa7, 0x98, 0xab, 0x91, 0xcd, 0x02, 0x05, 0xf5, 0x74, 0x31, 0x98, 0x83, 0x3d, 0x33, 0x11,
+		0x0e, 0xe3, 0x8d, 0xa8, 0xc9, 0x0e, 0xf3, 0xb9, 0x47, 0x67, 0xe9, 0x79, 0x2b, 0x34, 0xcd, 0x9b,
+		0x45, 0x75, 0x29, 0xf0, 0xbf, 0xcc, 0xda, 0x3a, 0x91, 0xb2, 0x15, 0x27, 0x7a, 0xe5, 0xf5, 0x6a,
+		0x5e, 0xbe, 0x2c, 0x98, 0xe8, 0x40, 0x96, 0x4f, 0x8a, 0x09, 0xfd, 0xf6, 0xb2, 0xe7, 0x45, 0xb6,
+		0x08, 0xc1, 0x69, 0xe1, 0xb3, 0xc4, 0x24, 0x34, 0x07, 0x85, 0xd5, 0xa9, 0x78, 0xca, 0xfa, 0x4b,
+		0x01, 0x19, 0x4d, 0x95, 0xdc, 0xa5, 0xc1, 0x9c, 0xec, 0x27, 0x5b, 0xa6, 0x54, 0x25, 0xbd, 0xc8,
+		0x0a, 0xb7, 0x11, 0xfb, 0x4e, 0xeb, 0x65, 0x2e, 0xe1, 0x08, 0x9c, 0x3a, 0x45, 0x44, 0x33, 0xef,
+		0x0d, 0xb9, 0xff, 0x3e, 0x68, 0x9c, 0x61, 0x2b, 0x11, 0xb8, 0x5c, 0x47, 0x0f, 0x94, 0xf2, 0xf8,
+		0x0b, 0xbb, 0x99, 0x18, 0x85, 0xa3, 0xba, 0x44, 0xf3, 0x79, 0xb3, 0x63, 0x2c, 0x1f, 0x2a, 0x35,
+		0x3b, 0x23, 0x98, 0xab, 0xf4, 0x16, 0x36, 0xf8, 0xde, 0x86, 0xa4, 0xd4, 0x75, 0xff, 0x51, 0xf9,
+		0xeb, 0x42, 0x5f, 0x55, 0xe2, 0xbe, 0xd1, 0x5b, 0xb5, 0x38, 0xeb, 0xb4, 0x4d, 0xec, 0xec, 0x99,
+		0xe1, 0x39, 0x43, 0xaa, 0x64, 0xf7, 0xc9, 0xd8, 0xf2, 0x9a, 0x71, 0x43, 0x39, 0x17, 0xe8, 0xa8,
+		0xa2, 0xe2, 0xa4, 0x2c, 0x18, 0x11, 0x49, 0xdf, 0x18, 0xdd, 0x85, 0x6e, 0x65, 0x96, 0xe2, 0xba,
+		0xa1, 0x0a, 0x2c, 0xca, 0xdc, 0x5f, 0xe4, 0xf4, 0x35, 0x03, 0xb2, 0xa9, 0xda, 0xcf, 0xb7, 0x6d,
+		0x65, 0x82, 0x82, 0x67, 0x9d, 0x0e, 0xf3, 0xe8, 0x85, 0x6c, 0x69, 0xb8, 0x4c, 0xa6, 0xc6, 0x2e,
+		0x40, 0xb5, 0x54, 0x28, 0x95, 0xe4, 0x57, 0xe0, 0x5b, 0xf8, 0xde, 0x59, 0xe0, 0xfd, 0x89, 0x48,
+		0xac, 0x56, 0x13, 0x54, 0xb9, 0x1b, 0xf5, 0x59, 0x97, 0xb6, 0xb3, 0xe8, 0xac, 0x2d, 0xfc, 0xd2,
+		0xea, 0x57, 0x96, 0x57, 0xa8, 0x26, 0x97, 0x2c, 0x01, 0x89, 0x56, 0xea, 0xec, 0x8c, 0x53, 0xd5,
+		0xd7, 0x9e, 0xc9, 0x98, 0x0b, 0xad, 0x03, 0x75, 0xa0, 0x6e, 0x98, 0x8b, 0x97, 0x8d, 0x8d, 0x85,
+		0x7d, 0x74, 0xa7, 0x2d, 0xde, 0x67, 0x0c, 0xcd, 0x54, 0xb8, 0x15, 0x7b, 0xeb, 0xf5, 0x84, 0xb9,
+		0x78, 0xab, 0xd8, 0x68, 0x91, 0x1f, 0x6a, 0xa6, 0x28, 0x22, 0xf7, 0x00, 0x49, 0x00, 0xbe, 0x41,
+		0x71, 0x0a, 0xf5, 0xe7, 0x9f, 0xb4, 0x11, 0x41, 0x3f, 0xcd, 0xa9, 0xa9, 0x01, 0x8b, 0x6a, 0xeb,
+		0x54, 0x4c, 0x58, 0x92, 0x68, 0x02, 0x0e, 0xe9, 0xed, 0x65, 0x4c, 0xfb, 0x95, 0x48, 0x58, 0xa2,
+		0xaa, 0x57, 0x69, 0x13, 0x82, 0x0c, 0x2c, 0x4b, 0x5d, 0x4e, 0x18, 0x30, 0xef, 0x1c, 0xb1, 0x9d,
+		0x05, 0x05, 0x02, 0x1c, 0x97, 0xc9, 0x48, 0xfe, 0x5e, 0x7b, 0x77, 0xa3, 0x1f, 0x2a, 0x81, 0x42,
+		0xf0, 0x4b, 0x85, 0x12, 0x9c, 0x1f, 0x44, 0xb1, 0x14, 0x91, 0x92, 0x65, 0x77, 0xb1, 0x87, 0xa2,
+		0xfc, 0xa4, 0xe7, 0xd2, 0x9b, 0xf2, 0x17, 0xf0, 0x30, 0x1c, 0x8d, 0x33, 0xbc, 0x25, 0x28, 0x48,
+		0xfd, 0x30, 0x79, 0x0a, 0x99, 0x3e, 0xb4, 0x0f, 0x1e, 0xa6, 0x68, 0x76, 0x19, 0x76, 0x29, 0xac,
+		0x5d, 0xb8, 0x1e, 0x42, 0xd6, 0x85, 0x04, 0xbf, 0x64, 0x1c, 0x2d, 0x53, 0xe9, 0x92, 0x78, 0xf8,
+		0xc3, 0xda, 0x96, 0x92, 0x10, 0x6f, 0x45, 0x85, 0xaf, 0x5e, 0xcc, 0xa8, 0xc0, 0xc6, 0x2e, 0x73,
+		0x51, 0x3f, 0x5e, 0xd7, 0x52, 0x33, 0x71, 0x12, 0x6d, 0x85, 0xee, 0xea, 0x85, 0xa8, 0x48, 0x2b,
+		0x40, 0x64, 0x6d, 0x28, 0x73, 0x16, 0xd7, 0x82, 0xd9, 0x90, 0xed, 0x1f, 0xa7, 0x5c, 0xb1, 0x5c,
+		0x27, 0xb9, 0x67, 0x8b, 0xb4, 0x17, 0x13, 0x83, 0x5f, 0x09, 0x72, 0x0a, 0xd7, 0xa0, 0xec, 0x81,
+		0x59, 0x19, 0xb9, 0xa6, 0x5a, 0x37, 0x34, 0x14, 0x47, 0xf6, 0xe7, 0x6c, 0xd2, 0x09, 0x10, 0xe7,
+		0xdd, 0xbb, 0x02, 0xd1, 0x28, 0xfa, 0x01, 0x2c, 0x93, 0x64, 0x2e, 0x1b, 0x4c, 0x02, 0x52, 0xcb,
+		0x07, 0xa1, 0xb6, 0x46, 0x02, 0x80, 0xd9, 0x8f, 0x5c, 0x62, 0xbe, 0x78, 0x9e, 0x75, 0xc4, 0x97,
+		0x91, 0x39, 0x12, 0x65, 0xb9, 0x3b, 0xc2, 0xd1, 0xaf, 0xf2, 0x1f, 0x4e, 0x4d, 0xd1, 0xf0, 0x9f,
+		0xb7, 0x12, 0xfd, 0xe8, 0x75, 0x18, 0xc0, 0x9d, 0x8c, 0x70, 0xff, 0x77, 0x05, 0xb6, 0x1a, 0x1f,
+		0x96, 0x48, 0xf6, 0xfe, 0xd5, 0x5d, 0x98, 0xa5, 0x72, 0x1c, 0x84, 0x76, 0x3e, 0xb8, 0x87, 0x37,
+		0xdd, 0xd4, 0x3a, 0x45, 0xdd, 0x09, 0xd8, 0xe7, 0x09, 0x2f, 0x3e, 0x33, 0x9e, 0x7b, 0x8c, 0xe4,
+		0x85, 0x12, 0x4e, 0xf8, 0x06, 0xb7, 0xb1, 0x85, 0x24, 0x96, 0xd8, 0xfe, 0x87, 0x92, 0x81, 0xb1,
+		0xa3, 0x38, 0xb9, 0x56, 0xe1, 0xf6, 0x36, 0x41, 0xbb, 0xd6, 0x56, 0x69, 0x94, 0x57, 0xb3, 0xa4,
+		0xca, 0xa4, 0xe1, 0x02, 0x3b, 0x96, 0x71, 0xe0, 0xb2, 0x2f, 0x85, 0x48, 0x1b, 0x4a, 0x41, 0x80,
+		0x4b, 0x9c, 0xe0, 0xc9, 0x39, 0xb8, 0xb1, 0xca, 0x64, 0x77, 0x46, 0x58, 0xe6, 0x84, 0xd5, 0x2b,
+		0x65, 0xce, 0xe9, 0x09, 0xa3, 0xaa, 0xfb, 0x83, 0xa9, 0x28, 0x68, 0xfd, 0xcd, 0xfd, 0x76, 0x83,
+		0xe1, 0x20, 0x22, 0x77, 0x3a, 0xa3, 0xb2, 0x93, 0x14, 0x91, 0xfc, 0xe2, 0x17, 0x63, 0x2b, 0xa6,
+		0x29, 0x38, 0x7b, 0x9b, 0x8b, 0x15, 0x77, 0xd6, 0xaa, 0x92, 0x51, 0x53, 0x50, 0xff, 0xa0, 0x35,
+		0xa0, 0x59, 0x7d, 0xf0, 0x11, 0x23, 0x49, 0xdf, 0x5a, 0x21, 0xc2, 0xfe, 0x35, 0xa0, 0x1d, 0xe2,
+		0xae, 0xa2, 0x8a, 0x61, 0x5b, 0xf7, 0xf1, 0x1c, 0x1c, 0xec, 0xc4, 0xf6, 0xdc, 0xaa, 0xc8, 0xc2,
+		0xe5, 0xa1, 0x2e, 0x14, 0xe5, 0xc6, 0xc9, 0x73, 0x03, 0x78, 0xeb, 0xed, 0xe0, 0x3e, 0xc5, 0xf4,
+		0xf1, 0x50, 0xb2, 0x01, 0x91, 0x96, 0xf5, 0xbb, 0xe1, 0x32, 0xcd, 0xa8, 0x66, 0xbf, 0x73, 0x85,
+		0x94, 0xd6, 0x7e, 0x68, 0xc5, 0xe4, 0xed, 0xd5, 0xe3, 0x67, 0x4c, 0xa5, 0xb3, 0x1f, 0xdf, 0xf8,
+		0xb3, 0x73, 0x5a, 0xac, 0xeb, 0x46, 0x16, 0x24, 0xab, 0xca, 0xa4, 0xdd, 0x87, 0x0e, 0x24, 0x83,
+		0x32, 0x04, 0x4c, 0xd8, 0xda, 0x7d, 0xdc, 0xe3, 0x01, 0x93, 0xf3, 0xc1, 0x5b, 0xbd, 0xc3, 0x1d,
+		0x40, 0x62, 0xde, 0x94, 0x03, 0x85, 0x91, 0x2a, 0xa0, 0x25, 0x10, 0xd3, 0x32, 0x9f, 0x93, 0x00,
+		0xa7, 0x8a, 0xfa, 0x77, 0x7c, 0xaf, 0x4d, 0xc8, 0x7a, 0xf3, 0x16, 0x2b, 0xba, 0xeb, 0x74, 0x51,
+		0xb8, 0xdd, 0x32, 0xad, 0x68, 0x7d, 0xdd, 0xca, 0x60, 0x98, 0xc9, 0x9b, 0xb6, 0x5d, 0x4d, 0x3a,
+		0x66, 0x8a, 0xbe, 0x05, 0xf9, 0x0c, 0xc5, 0xba, 0x52, 0x82, 0x09, 0x1f, 0x5a, 0x66, 0x89, 0x69,
+		0xa3, 0x5d, 0x93, 0x50, 0x7d, 0x44, 0xc3, 0x2a, 0xb8, 0xab, 0xec, 0xa6, 0x5a, 0xae, 0x4a, 0x6a,
+		0xcd, 0xfd, 0xb6, 0xff, 0x3d, 0x98, 0x05, 0xd9, 0x5b, 0x29, 0xc4, 0x6f, 0xe0, 0x76, 0xe2, 0x3f,
+		0xec, 0xd7, 0xa4, 0x91, 0x63, 0xf5, 0x4e, 0x4b, 0xab, 0x20, 0x8c, 0x3a, 0x41, 0xed, 0x8b, 0x4b,
+		0xb9, 0x01, 0x21, 0xc0, 0x6d, 0xfd, 0x70, 0x5b, 0x20, 0x92, 0x41, 0x89, 0x74, 0xb7, 0xe9, 0x8b,
+		0xfc, 0x6d, 0x17, 0x3f, 0x7f, 0x89, 0x3d, 0x6b, 0x8f, 0xbc, 0xd2, 0x57, 0xe9, 0xc9, 0x6e, 0xa7,
+		0x19, 0x26, 0x18, 0xad, 0xef, 0xb5, 0x87, 0xbf, 0xb8, 0xa8, 0xd6, 0x7d, 0xdd, 0x5f, 0x94, 0x54,
+		0x09, 0x92, 0x2b, 0xf5, 0x04, 0xf7, 0x36, 0x69, 0x8e, 0xf4, 0xdc, 0x1d, 0x6e, 0x55, 0xbb, 0xe9,
+		0x13, 0x05, 0x83, 0x35, 0x9c, 0xed, 0xcf, 0x8c, 0x26, 0x8c, 0x7b, 0xc7, 0x0b, 0xba, 0xfd, 0xe2,
+		0x84, 0x5c, 0x2a, 0x79, 0x43, 0x99, 0xb2, 0xc3, 0x82, 0x87, 0xc8, 0xcd, 0x37, 0x6d, 0xa1, 0x2b,
+		0x39, 0xb2, 0x38, 0x99, 0xd9, 0xfc, 0x02, 0x15, 0x55, 0x21, 0x62, 0x59, 0xeb, 0x00, 0x86, 0x08,
+		0x20, 0xbe, 0x1a, 0x62, 0x4d, 0x7e, 0xdf, 0x68, 0x73, 0x5b, 0x5f, 0xaf, 0x84, 0x96, 0x2e, 0x1f,
+		0x6b, 0x03, 0xc9, 0xa6, 0x75, 0x18, 0xe9, 0xd4, 0xbd, 0xc8, 0xec, 0x9a, 0x5a, 0xb3, 0x99, 0xab,
+		0x5f, 0x7c, 0x08, 0x7f, 0x69, 0x4d, 0x52, 0xa2, 0x30, 0x17, 0x3b, 0x16, 0x15, 0x1b, 0x11, 0x62,
+		0x3e, 0x80, 0x4b, 0x85, 0x7c, 0x9c, 0xd1, 0x3a, 0x13, 0x01, 0x5e, 0x45, 0xf1, 0xc8, 0x5f, 0xcd,
+		0x0e, 0x21, 0xf5, 0x82, 0xd4, 0x7b, 0x5c, 0x45, 0x27, 0x6b, 0xef, 0xfe, 0xb8, 0xc0, 0x6f, 0xdc,
+		0x60, 0x7b, 0xe4, 0xd5, 0x75, 0x71, 0xe6, 0xe8, 0x7d, 0x6b, 0x6d, 0x80, 0xaf, 0x76, 0x41, 0x58,
+		0xb7, 0xac, 0xb7, 0x13, 0x2f, 0x81, 0xcc, 0xf9, 0x19, 0x97, 0xe8, 0xee, 0x40, 0x91, 0xfc, 0x89,
+		0x13, 0x1e, 0x67, 0x9a, 0xdb, 0x8f, 0x8f, 0xc7, 0x4a, 0xc9, 0xaf, 0x2f, 0x67, 0x01, 0x3c, 0xb8,
+		0xa8, 0x3e, 0x78, 0x93, 0x1b, 0xdf, 0xbb, 0x34, 0x0b, 0x1a, 0xfa, 0xc2, 0x2d, 0xc5, 0x1c, 0xec,
+		0x97, 0x4f, 0x48, 0x41, 0x15, 0x0e, 0x75, 0xed, 0x66, 0x8c, 0x17, 0x7f, 0xb1, 0x48, 0x13, 0xc1,
+		0xfb, 0x60, 0x06, 0xf9, 0x72, 0x41, 0x3e, 0xcf, 0x6e, 0xb6, 0xc8, 0xeb, 0x4b, 0x5a, 0xd2, 0x0c,
+		0x28, 0xda, 0x02, 0x7a, 0x46, 0x21, 0x42, 0xb5, 0x34, 0xda, 0xcb, 0x5e, 0xbd, 0x66, 0x5c, 0xca,
+		0xff, 0x52, 0x43, 0x89, 0xf9, 0x10, 0x9a, 0x9e, 0x9b, 0xe3, 0xb0, 0x51, 0xe9, 0xf3, 0x0a, 0x35,
+		0x77, 0x54, 0xcc, 0xac, 0xa6, 0xf1, 0x2e, 0x36, 0x89, 0xac, 0xc5, 0xc6, 0x62, 0x5a, 0xc0, 0x6d,
+		0xc4, 0xe1, 0xf7, 0x64, 0x30, 0xff, 0x11, 0x40, 0x13, 0x89, 0xd8, 0xd7, 0x73, 0x3f, 0x93, 0x08,
+		0x68, 0xab, 0x66, 0x09, 0x1a, 0xea, 0x78, 0xc9, 0x52, 0xf2, 0xfd, 0x93, 0x1b, 0x94, 0xbe, 0x5c,
+		0xe5, 0x00, 0x6e, 0x00, 0xb9, 0xea, 0x27, 0xaa, 0xb3, 0xee, 0xe3, 0xc8, 0x6a, 0xb0, 0xc1, 0x8e,
+		0x9b, 0x54, 0x40, 0x10, 0x96, 0x06, 0xe8, 0xb3, 0xf5, 0x55, 0x77, 0xd7, 0x5c, 0x94, 0xc1, 0x74,
+		0xf3, 0x07, 0x64, 0xac, 0x1c, 0xde, 0xc7, 0x22, 0xb0, 0xbf, 0x2a, 0x5a, 0xc0, 0x8f, 0x8a, 0x83,
+		0x50, 0xc2, 0x5e, 0x97, 0xa0, 0xbe, 0x49, 0x7e, 0x47, 0xaf, 0xa7, 0x20, 0x02, 0x35, 0xa4, 0x57,
+		0xd9, 0x26, 0x63, 0xdb, 0xf1, 0x34, 0x42, 0x89, 0x36, 0xd1, 0x77, 0x6f, 0xb1, 0xea, 0x79, 0x7e,
+		0x95, 0x10, 0x5a, 0xee, 0xa3, 0xae, 0x6f, 0xba, 0xa9, 0xef, 0x5a, 0x7e, 0x34, 0x03, 0x04, 0x07,
+		0x92, 0xd6, 0x07, 0x79, 0xaa, 0x14, 0x90, 0x97, 0x05, 0x4d, 0xa6, 0x27, 0x10, 0x5c, 0x25, 0x24,
+		0xcb, 0xcc, 0xf6, 0x77, 0x9e, 0x43, 0x23, 0xd4, 0x98, 0xef, 0x22, 0xa8, 0xad, 0xf2, 0x26, 0x08,
+		0x59, 0x69, 0xa4, 0xc3, 0x97, 0xe0, 0x5c, 0x6f, 0xeb, 0x3d, 0xd4, 0x62, 0x6e, 0x80, 0x61, 0x02,
+		0xf4, 0xfc, 0x94, 0x79, 0xbb, 0x4e, 0x6d, 0xd7, 0x30, 0x5b, 0x10, 0x11, 0x5a, 0x3d, 0xa7, 0x50,
+		0x1d, 0x9a, 0x13, 0x5f, 0x4f, 0xa8, 0xa7, 0xb6, 0x39, 0xc7, 0xea, 0xe6, 0x19, 0x61, 0x69, 0xc7,
+		0x9a, 0x3a, 0xeb, 0x9d, 0xdc, 0xf7, 0x06, 0x37, 0xbd, 0xac, 0xe3, 0x18, 0xff, 0xfe, 0x11, 0xdb,
+		0x67, 0x42, 0xb4, 0xea, 0xa8, 0xbd, 0xb0, 0x76, 0xd2, 0x74, 0x32, 0xc2, 0xa4, 0x9c, 0xe7, 0x60,
+		0xc5, 0x30, 0x9a, 0x57, 0x66, 0xcd, 0x0f, 0x02, 0x4c, 0xea, 0xe9, 0xd3, 0x2a, 0x5c, 0x09, 0xc2,
+		0xff, 0x6a, 0xde, 0x5d, 0xb7, 0xe9, 0x75, 0x6b, 0x29, 0x94, 0xd6, 0xf7, 0xc3, 0xdf, 0xfb, 0x70,
+		0xec, 0xb5, 0x8c, 0xb0, 0x78, 0x7a, 0xee, 0x52, 0x5f, 0x8c, 0xae, 0x85, 0xe5, 0x98, 0xa2, 0xb7,
+		0x7c, 0x02, 0x2a, 0xcc, 0x9e, 0xde, 0x99, 0x5f, 0x84, 0x20, 0xbb, 0xdc, 0xf2, 0xd2, 0x13, 0x46,
+		0x3c, 0xd6, 0x4d, 0xe7, 0x50, 0xef, 0x55, 0xc3, 0x96, 0x9f, 0xec, 0x6c, 0xd8, 0xe2, 0xea, 0xed,
+		0xc7, 0x33, 0xc9, 0xb3, 0x1c, 0x4f, 0x1d, 0x83, 0x1d, 0xe4, 0xdd, 0xb2, 0x24, 0x8f, 0xf9, 0xf5
+};
+
+
+static const uint8_t HMAC_SHA256_ciphertext_64B_digest[] = {
+		0xc5, 0x6d, 0x4f, 0x29, 0xf4, 0xd2, 0xcc, 0x87,
+		0x3c, 0x81, 0x02, 0x6d, 0x38, 0x7a, 0x67, 0x3e,
+		0x95, 0x9c, 0x5c, 0x8f, 0xda, 0x5c, 0x06, 0xe0,
+		0x65, 0xf1, 0x6c, 0x51, 0x52, 0x49, 0x3e, 0x5f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_128B_digest[] = {
+		0x76, 0x64, 0x2d, 0x69, 0x71, 0x5d, 0x6a, 0xd8,
+		0x9f, 0x74, 0x11, 0x2f, 0x58, 0xe0, 0x4a, 0x2f,
+		0x6c, 0x88, 0x5e, 0x4d, 0x9c, 0x79, 0x83, 0x1c,
+		0x8a, 0x14, 0xd0, 0x07, 0xfb, 0xbf, 0x6c, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_256B_digest[] = {
+		0x05, 0xa7, 0x44, 0xcd, 0x91, 0x8c, 0x95, 0xcf,
+		0x7b, 0x8f, 0xd3, 0x90, 0x86, 0x7e, 0x7b, 0xb9,
+		0x05, 0xd6, 0x6e, 0x7a, 0xc1, 0x7b, 0x26, 0xff,
+		0xd3, 0x4b, 0xe0, 0x22, 0x8b, 0xa8, 0x47, 0x52
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_512B_digest[] = {
+		0x08, 0xb7, 0x29, 0x54, 0x18, 0x7e, 0x97, 0x49,
+		0xc6, 0x7c, 0x9f, 0x94, 0xa5, 0x4f, 0xa2, 0x25,
+		0xd0, 0xe2, 0x30, 0x7b, 0xad, 0x93, 0xc9, 0x12,
+		0x0f, 0xf0, 0xf0, 0x71, 0xc2, 0xf6, 0x53, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_768B_digest[] = {
+		0xe4, 0x3e, 0x73, 0x93, 0x03, 0xaf, 0x6f, 0x9c,
+		0xca, 0x57, 0x3b, 0x4a, 0x6e, 0x83, 0x58, 0xf5,
+		0x66, 0xc2, 0xb4, 0xa7, 0xe0, 0xee, 0x63, 0x6b,
+		0x48, 0xb7, 0x50, 0x45, 0x69, 0xdf, 0x5c, 0x5b
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1024B_digest[] = {
+		0x03, 0xb9, 0x96, 0x26, 0xdc, 0x1c, 0xab, 0xe2,
+		0xf5, 0x70, 0x55, 0x15, 0x67, 0x6e, 0x48, 0x11,
+		0xe7, 0x67, 0xea, 0xfa, 0x5c, 0x6b, 0x28, 0x22,
+		0xc9, 0x0e, 0x67, 0x04, 0xb3, 0x71, 0x7f, 0x88
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1280B_digest[] = {
+		0x01, 0x91, 0xb8, 0x78, 0xd3, 0x21, 0x74, 0xa5,
+		0x1c, 0x8b, 0xd4, 0xd2, 0xc0, 0x49, 0xd7, 0xd2,
+		0x16, 0x46, 0x66, 0x85, 0x50, 0x6d, 0x08, 0xcc,
+		0xc7, 0x0a, 0xa3, 0x71, 0xcc, 0xde, 0xee, 0xdc
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1536B_digest[] = {
+		0xf2, 0xe5, 0xe9, 0x57, 0x53, 0xd7, 0x69, 0x28,
+		0x7b, 0x69, 0xb5, 0x49, 0xa3, 0x31, 0x56, 0x5f,
+		0xa4, 0xe9, 0x87, 0x26, 0x2f, 0xe0, 0x2d, 0xd6,
+		0x08, 0x44, 0x01, 0x71, 0x0c, 0x93, 0x85, 0x84
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1792B_digest[] = {
+		0xf6, 0x57, 0x62, 0x01, 0xbf, 0x2d, 0xea, 0x4a,
+		0xef, 0x43, 0x85, 0x60, 0x18, 0xdf, 0x8b, 0xb4,
+		0x60, 0xc0, 0xfd, 0x2f, 0x90, 0x15, 0xe6, 0x91,
+		0x56, 0x61, 0x68, 0x7f, 0x5e, 0x92, 0xa8, 0xdd
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_2048B_digest[] = {
+		0x81, 0x1a, 0x29, 0xbc, 0x6b, 0x9f, 0xbb, 0xb8,
+		0xef, 0x71, 0x7b, 0x1f, 0x6f, 0xd4, 0x7e, 0x68,
+		0x3a, 0x9c, 0xb9, 0x98, 0x22, 0x81, 0xfa, 0x95,
+		0xee, 0xbc, 0x7f, 0x23, 0x29, 0x88, 0x76, 0xb8
+};
+
+struct crypto_data_params {
+	const char *name;
+	uint16_t length;
+	const char *plaintext;
+	struct crypto_expected_output {
+		const uint8_t *ciphertext;
+		const uint8_t *digest;
+	} expected;
+};
+
+#define MAX_PACKET_SIZE_INDEX	10
+
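+/*
+ * Each entry points at the tail of plaintext_quote, so a request of
+ * 'length' bytes always ends on the last byte of the quote.
+ */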
+struct crypto_data_params aes_cbc_hmac_sha256_output[MAX_PACKET_SIZE_INDEX] = {
+		{ "64B", 64, &plaintext_quote[sizeof(plaintext_quote) - 1 - 64], { AES_CBC_ciphertext_64B, HMAC_SHA256_ciphertext_64B_digest } },
+		{ "128B", 128, &plaintext_quote[sizeof(plaintext_quote) - 1 - 128], { AES_CBC_ciphertext_128B, HMAC_SHA256_ciphertext_128B_digest } },
+		{ "256B", 256, &plaintext_quote[sizeof(plaintext_quote) - 1 - 256], { AES_CBC_ciphertext_256B, HMAC_SHA256_ciphertext_256B_digest } },
+		{ "512B", 512, &plaintext_quote[sizeof(plaintext_quote) - 1 - 512], { AES_CBC_ciphertext_512B, HMAC_SHA256_ciphertext_512B_digest } },
+		{ "768B", 768, &plaintext_quote[sizeof(plaintext_quote) - 1 - 768], { AES_CBC_ciphertext_768B, HMAC_SHA256_ciphertext_768B_digest } },
+		{ "1024B", 1024, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1024], { AES_CBC_ciphertext_1024B, HMAC_SHA256_ciphertext_1024B_digest } },
+		{ "1280B", 1280, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1280], { AES_CBC_ciphertext_1280B, HMAC_SHA256_ciphertext_1280B_digest } },
+		{ "1536B", 1536, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1536], { AES_CBC_ciphertext_1536B, HMAC_SHA256_ciphertext_1536B_digest } },
+		{ "1792B", 1792, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1792], { AES_CBC_ciphertext_1792B, HMAC_SHA256_ciphertext_1792B_digest } },
+		{ "2048B", 2048, &plaintext_quote[sizeof(plaintext_quote) - 1 - 2048], { AES_CBC_ciphertext_2048B, HMAC_SHA256_ciphertext_2048B_digest } }
+};
+
+
+static int
+test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
+{
+	uint32_t num_to_submit = 2048;
+	struct rte_mbuf *rx_mbufs[num_to_submit], *tx_mbufs[num_to_submit];
+	uint64_t failed_polls, retries, start_cycles, end_cycles, total_cycles = 0;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, burst_size, num_sent, num_received;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
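+	/* AES-128 key length happens to equal the AES-CBC IV length (16B) */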
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure(s) */
+	for (b = 0; b < num_to_submit ; b++) {
+		tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+				(const char *)data_params[0].expected.ciphertext,
+				data_params[0].length, 0);
+		TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+		ut_params->digest = (uint8_t *)rte_pktmbuf_append(tx_mbufs[b],
+				DIGEST_BYTE_LENGTH_SHA256);
+		TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+		rte_memcpy(ut_params->digest, data_params[0].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+		struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+				ts_params->mbuf_ol_pool, RTE_PKTMBUF_OL_CRYPTO);
+		TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+		struct rte_crypto_op *cop = &ol->op.crypto;
+
+		rte_crypto_op_attach_session(cop, ut_params->sess);
+
+		cop->digest.data = ut_params->digest;
+		cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(tx_mbufs[b], data_params[0].length);
+		cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+		cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b], CIPHER_IV_LENGTH_AES_CBC);
+		cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+		cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+		rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+		cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_cipher.length = data_params[0].length;
+
+		cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_hash.length = data_params[0].length;
+
+		rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+	}
+
+	printf("\nTest to measure the IA cycle cost using AES128_CBC_SHA256_HMAC algorithm with "
+			"a constant request size of %u.", data_params[0].length);
+	printf("\nThis test will keep retries at 0 and only measure IA cycle cost for each request.");
+	printf("\nDev No\tQP No\tNum Sent\tNum Received\tTx/Rx burst");
+	printf("\tRetries (Device Busy)\tAverage IA cycle cost (assuming 0 retries)");
+	for (b = 2; b <= 128 ; b *= 2) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+		burst_size = b;
+		total_cycles = 0;
+		while (num_sent < num_to_submit) {
+			start_cycles = rte_rdtsc_precise();
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0,
+					&tx_mbufs[num_sent],
+					((num_to_submit-num_sent) < burst_size) ?
+					num_to_submit-num_sent : burst_size);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += (end_cycles - start_cycles);
+			/*
+			 * Wait until requests have been sent.
+			 */
+			rte_delay_ms(1);
+
+			start_cycles = rte_rdtsc_precise();
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += end_cycles - start_cycles;
+		}
+		while (num_received != num_to_submit) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+
+		printf("\n%u\t%u\t\%u\t\t%u\t\t%u", dev_num, 0,
+					num_sent, num_received, burst_size);
+		printf("\t\t%"PRIu64, retries);
+		printf("\t\t\t%"PRIu64, total_cycles/num_received);
+	}
+	printf("\n");
+
+	for (b = 0; b < max_outstanding_reqs ; b++) {
+		struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+		if (ol) {
+			do {
+				rte_pktmbuf_offload_free(ol);
+				ol = ol->next;
+			} while (ol != NULL);
+		}
+		rte_pktmbuf_free(tx_mbufs[b]);
+	}
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(uint16_t dev_num)
+{
+	uint16_t index;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, num_sent, num_received, throughput;
+	uint64_t failed_polls, retries, start_cycles, end_cycles;
+	const uint64_t mhz = rte_get_tsc_hz()/1000000;
+	double mmps;
+	struct rte_mbuf *rx_mbufs[DEFAULT_BURST_SIZE], *tx_mbufs[DEFAULT_BURST_SIZE];
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
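+	/* AES-128 key length happens to equal the AES-CBC IV length (16B) */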
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	printf("\nThroughput test which will continually attempt to send AES128_CBC_SHA256_HMAC requests "
+		"with a constant burst size of %u while varying payload sizes", DEFAULT_BURST_SIZE);
+	printf("\nDev No\tQP No\tReq Size(B)\tNum Sent\tNum Received\tMrps\tThoughput(Mbps)");
+	printf("\tRetries (Attempted a burst, but the device was busy)");
+	for (index = 0; index < MAX_PACKET_SIZE_INDEX; index++) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+
+		/* Generate Crypto op data structure(s) */
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+					data_params[index].plaintext, data_params[index].length, 0);
+
+			ut_params->digest = (uint8_t *)rte_pktmbuf_append(
+				tx_mbufs[b], DIGEST_BYTE_LENGTH_SHA256);
+			TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+			rte_memcpy(ut_params->digest, data_params[index].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+			struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+						ts_params->mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+			TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+			struct rte_crypto_op *cop = &ol->op.crypto;
+
+			rte_crypto_op_attach_session(cop, ut_params->sess);
+
+			cop->digest.data = ut_params->digest;
+			cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+				tx_mbufs[b], data_params[index].length);
+			cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+			cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+					CIPHER_IV_LENGTH_AES_CBC);
+			cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+			cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+			rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+			cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_cipher.length = data_params[index].length;
+
+			cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_hash.length = data_params[index].length;
+
+			rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+		}
+		start_cycles = rte_rdtsc_precise();
+		while (num_sent < DEFAULT_NUM_REQS_TO_SUBMIT) {
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+				((DEFAULT_NUM_REQS_TO_SUBMIT-num_sent) < DEFAULT_BURST_SIZE) ?
+				DEFAULT_NUM_REQS_TO_SUBMIT-num_sent : DEFAULT_BURST_SIZE);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0, rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		while (num_received != DEFAULT_NUM_REQS_TO_SUBMIT) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0,
+						rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		end_cycles = rte_rdtsc_precise();
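+		/*
+		 * mhz is the TSC tick rate in ticks per microsecond, so
+		 * (requests / elapsed ticks) * mhz gives requests per
+		 * microsecond, i.e. millions of requests per second (Mrps).
+		 */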
+		mmps = (double)num_received*mhz/(end_cycles - start_cycles);
+		throughput = mmps*data_params[index].length*8;
+		printf("\n%u\t%u\t%u\t\t%u\t%u", dev_num, 0, data_params[index].length, num_sent, num_received);
+		printf("\t%.2f\t%u", mmps, throughput);
+		printf("\t\t%"PRIu64, retries);
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+			if (ol) {
+				do {
+					rte_pktmbuf_offload_free(ol);
+					ol = ol->next;
+				} while (ol != NULL);
+			}
+			rte_pktmbuf_free(tx_mbufs[b]);
+		}
+	}
+	printf("\n");
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_encrypt_digest_vary_req_size(void)
+{
+	return test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(testsuite_params.dev_id);
+}
+
+static int
+test_perf_vary_burst_size(void)
+{
+	return test_perf_crypto_qp_vary_burst_size(testsuite_params.dev_id);
+}
+
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, test_perf_encrypt_digest_vary_req_size),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_perf_vary_burst_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static int
+perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_QAT_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_perf_cmd = {
+	.command = "cryptodev_aesni_mb_perftest",
+	.callback = perftest_aesni_mb_cryptodev,
+};
+
+static struct test_command cryptodev_qat_perf_cmd = {
+	.command = "cryptodev_qat_perftest",
+	.callback = perftest_qat_cryptodev,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perf_cmd);
+REGISTER_TEST_COMMAND(cryptodev_qat_perf_cmd);
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 388cf11..2d98958 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -4020,7 +4020,7 @@ test_close_bonded_device(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	if (test_params->pkt_eth_hdr != NULL) {
@@ -4029,7 +4029,7 @@ testsuite_teardown(void)
 	}
 
 	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	remove_slaves_and_stop_bonded_device();
 }
 
 static void
@@ -4993,7 +4993,7 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 460539d..713368d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -453,7 +453,7 @@ test_setup(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -467,8 +467,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 /*
@@ -1390,7 +1388,8 @@ static struct unit_test_suite link_bonding_mode4_test_suite  = {
 		TEST_CASE_NAMED("test_mode4_tx_burst", test_mode4_tx_burst_wrapper),
 		TEST_CASE_NAMED("test_mode4_marker", test_mode4_marker_wrapper),
 		TEST_CASE_NAMED("test_mode4_expired", test_mode4_expired_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v2 6/6] l2fwd-crypto: crypto
  2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
                     ` (4 preceding siblings ...)
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
@ 2015-10-30 12:59   ` Declan Doherty
  2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 12:59 UTC (permalink / raw)
  To: dev

This patch creates a new sample application, based on the l2fwd
application, which performs the specified crypto operations on the IP
payload of the packets it forwards.
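
For example (illustrative only; the exact EAL core and memory options
depend on the platform), the application might be launched as:

    ./build/l2fwd-crypto -c 0x3 -n 4 -- -p 0x3 --cdev_type AESNI_MB \
        --chain CIPHER_HASH --cipher_algo AES_CBC --cipher_op ENCRYPT \
        --auth_algo SHA1_HMAC --auth_op GENERATE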

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 examples/l2fwd-crypto/Makefile |   50 ++
 examples/l2fwd-crypto/main.c   | 1472 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1522 insertions(+)
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c

diff --git a/examples/l2fwd-crypto/Makefile b/examples/l2fwd-crypto/Makefile
new file mode 100644
index 0000000..e8224ca
--- /dev/null
+++ b/examples/l2fwd-crypto/Makefile
@@ -0,0 +1,50 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = l2fwd-crypto
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
new file mode 100644
index 0000000..9fd8bc5
--- /dev/null
+++ b/examples/l2fwd-crypto/main.c
@@ -0,0 +1,1472 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_ip.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_per_lcore.h>
+#include <rte_prefetch.h>
+#include <rte_random.h>
+#include <rte_ring.h>
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define NB_MBUF   8192
+
+#define MAX_PKT_BURST 32
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 128
+#define RTE_TEST_TX_DESC_DEFAULT 512
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint64_t l2fwd_enabled_port_mask;
+static uint64_t l2fwd_enabled_crypto_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+
+struct pkt_buffer {
+	unsigned len;
+	struct rte_mbuf *buffer[MAX_PKT_BURST];
+};
+
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+enum l2fwd_crypto_xform_chain {
+	L2FWD_CRYPTO_CIPHER_HASH,
+	L2FWD_CRYPTO_HASH_CIPHER
+};
+
+/** l2fwd crypto application command line options */
+struct l2fwd_crypto_options {
+	unsigned portmask;
+	unsigned nb_ports_per_lcore;
+	unsigned refresh_period;
+	unsigned single_lcore:1;
+	unsigned no_stats_printing:1;
+
+	enum rte_cryptodev_type cdev_type;
+	unsigned sessionless:1;
+
+	enum l2fwd_crypto_xform_chain xform_chain;
+
+	struct rte_crypto_xform cipher_xform;
+	uint8_t ckey_data[32];
+
+	struct rte_crypto_key iv_key;
+	uint8_t ivkey_data[16];
+
+	struct rte_crypto_xform auth_xform;
+	uint8_t akey_data[128];
+};
+
+/** l2fwd crypto lcore params */
+struct l2fwd_crypto_params {
+	uint8_t dev_id;
+	uint8_t qp_id;
+
+	unsigned digest_length;
+	unsigned block_size;
+
+	struct rte_crypto_key iv_key;
+	struct rte_cryptodev_session *session;
+};
+
+/** lcore configuration */
+struct lcore_queue_conf {
+	unsigned nb_rx_ports;
+	unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+
+	unsigned nb_crypto_devs;
+	unsigned cryptodev_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct pkt_buffer crypto_pkt_buf[RTE_MAX_ETHPORTS];
+	struct pkt_buffer tx_pkt_buf[RTE_MAX_ETHPORTS];
+} __rte_cache_aligned;
+
+struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static const struct rte_eth_conf port_conf = {
+	.rxmode = {
+		.split_hdr_size = 0,
+		.header_split   = 0, /**< Header Split disabled */
+		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
+		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
+		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
+		.hw_strip_crc   = 0, /**< CRC stripping by hardware disabled */
+	},
+	.txmode = {
+		.mq_mode = ETH_MQ_TX_NONE,
+	},
+};
+
+struct rte_mempool *l2fwd_pktmbuf_pool;
+struct rte_mempool *l2fwd_mbuf_ol_pool;
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+	uint64_t tx;
+	uint64_t rx;
+
+	uint64_t crypto_enqueued;
+	uint64_t crypto_dequeued;
+
+	uint64_t dropped;
+} __rte_cache_aligned;
+
+struct l2fwd_crypto_statistics {
+	uint64_t enqueued;
+	uint64_t dequeued;
+
+	uint64_t errors;
+} __rte_cache_aligned;
+
+struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+struct l2fwd_crypto_statistics crypto_statistics[RTE_MAX_ETHPORTS];
+
+/* A tsc-based timer responsible for triggering statistics printout */
+#define TIMER_MILLISECOND 2000000ULL /* around 1ms at 2 GHz */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+static int64_t timer_period = 10 * TIMER_MILLISECOND * 1000; /* default period is 10 seconds */
+
+uint64_t total_packets_dropped = 0, total_packets_tx = 0, total_packets_rx = 0,
+	total_packets_enqueued = 0, total_packets_dequeued = 0,
+	total_packets_errors = 0;
+
+/* Print out statistics on packets dropped */
+static void
+print_stats(void)
+{
+	unsigned portid;
+	uint64_t cdevid;
+
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+	/* Clear screen and move to top left */
+	printf("%s%s", clr, topLeft);
+
+	printf("\nPort statistics ====================================");
+
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+			continue;
+		printf("\nStatistics for port %u ------------------------------"
+			   "\nPackets sent: %32"PRIu64
+			   "\nPackets received: %28"PRIu64
+			   "\nPackets dropped: %29"PRIu64,
+			   portid,
+			   port_statistics[portid].tx,
+			   port_statistics[portid].rx,
+			   port_statistics[portid].dropped);
+
+		total_packets_dropped += port_statistics[portid].dropped;
+		total_packets_tx += port_statistics[portid].tx;
+		total_packets_rx += port_statistics[portid].rx;
+	}
+	printf("\nCrypto statistics ==================================");
+
+	for (cdevid = 0; cdevid < RTE_CRYPTO_MAX_DEVS; cdevid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_crypto_mask & (1lu << cdevid)) == 0)
+			continue;
+		printf("\nStatistics for cryptodev %lu -------------------------"
+			   "\nPackets enqueued: %28"PRIu64
+			   "\nPackets dequeued: %28"PRIu64
+			   "\nPackets errors: %30"PRIu64,
+			   cdevid,
+			   crypto_statistics[cdevid].enqueued,
+			   crypto_statistics[cdevid].dequeued,
+			   crypto_statistics[cdevid].errors);
+
+		total_packets_enqueued += crypto_statistics[cdevid].enqueued;
+		total_packets_dequeued += crypto_statistics[cdevid].dequeued;
+		total_packets_errors += crypto_statistics[cdevid].errors;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %22"PRIu64
+		   "\nTotal packets enqueued: %22"PRIu64
+		   "\nTotal packets dequeued: %22"PRIu64
+		   "\nTotal packets sent: %26"PRIu64
+		   "\nTotal packets dropped: %23"PRIu64
+		   "\nTotal packets crypto errors: %17"PRIu64,
+		   total_packets_rx,
+		   total_packets_enqueued,
+		   total_packets_dequeued,
+		   total_packets_tx,
+		   total_packets_dropped,
+		   total_packets_errors);
+	printf("\n====================================================\n");
+}
+
+
+
+static int
+l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+
+	pkt_buffer = (struct rte_mbuf **)
+			qconf->crypto_pkt_buf[cparams->dev_id].buffer;
+
+	ret = rte_cryptodev_enqueue_burst(cparams->dev_id, cparams->qp_id,
+			pkt_buffer, (uint16_t) n);
+	crypto_statistics[cparams->dev_id].enqueued += ret;
+	if (unlikely(ret < n)) {
+		crypto_statistics[cparams->dev_id].errors += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+static int
+l2fwd_crypto_enqueue(struct rte_mbuf *m, struct l2fwd_crypto_params *cparams)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->crypto_pkt_buf[cparams->dev_id].len;
+	qconf->crypto_pkt_buf[cparams->dev_id].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (len == MAX_PKT_BURST) {
+		l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
+		len = 0;
+	}
+
+	qconf->crypto_pkt_buf[cparams->dev_id].len = len;
+	return 0;
+}
+
+static int
+l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
+		struct rte_mbuf_offload *ol,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct ether_hdr *eth_hdr;
+	struct ipv4_hdr *ip_hdr;
+
+	unsigned ipdata_offset, pad_len, data_len;
+	char *padding;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (eth_hdr->ether_type != rte_cpu_to_be_16(ETHER_TYPE_IPv4))
+		return -1;
+
+	ipdata_offset = sizeof(struct ether_hdr);
+
+	ip_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
+			ipdata_offset);
+
+	ipdata_offset += (ip_hdr->version_ihl & IPV4_HDR_IHL_MASK)
+			* IPV4_IHL_MULTIPLIER;
+
+
+	/* Zero pad data to be crypto'd so it is block aligned */
+	data_len  = rte_pktmbuf_data_len(m) - ipdata_offset;
+	pad_len = data_len % cparams->block_size ? cparams->block_size -
+			(data_len % cparams->block_size) : 0;
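+	/*
+	 * e.g. with a 16B cipher block and 30B of payload:
+	 * pad_len = 16 - (30 % 16) = 2, giving 32B of block-aligned data.
+	 */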
+
+	if (pad_len) {
+		padding = rte_pktmbuf_append(m, pad_len);
+		if (unlikely(!padding))
+			return -1;
+
+		data_len += pad_len;
+		memset(padding, 0, pad_len);
+	}
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(&ol->op.crypto, cparams->session);
+
+	/* Append space for digest to end of packet */
+	ol->op.crypto.digest.data = (uint8_t *)rte_pktmbuf_append(m,
+			cparams->digest_length);
+	ol->op.crypto.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
+			rte_pktmbuf_pkt_len(m) - cparams->digest_length);
+	ol->op.crypto.digest.length = cparams->digest_length;
+
+	ol->op.crypto.iv.data = cparams->iv_key.data;
+	ol->op.crypto.iv.phys_addr = cparams->iv_key.phys_addr;
+	ol->op.crypto.iv.length = cparams->iv_key.length;
+
+	ol->op.crypto.data.to_cipher.offset = ipdata_offset;
+	ol->op.crypto.data.to_cipher.length = data_len;
+
+	ol->op.crypto.data.to_hash.offset = ipdata_offset;
+	ol->op.crypto.data.to_hash.length = data_len;
+
+	rte_pktmbuf_offload_attach(m, ol);
+
+	return l2fwd_crypto_enqueue(m, cparams);
+}
+
+
+/* Send the burst of packets on an output interface */
+static int
+l2fwd_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint8_t port)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+	unsigned queueid = 0;
+
+	pkt_buffer = (struct rte_mbuf **)qconf->tx_pkt_buf[port].buffer;
+
+	ret = rte_eth_tx_burst(port, (uint16_t) queueid, pkt_buffer,
+			(uint16_t)n);
+	port_statistics[port].tx += ret;
+	if (unlikely(ret < n)) {
+		port_statistics[port].dropped += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+/* Enqueue packets for TX and prepare them to be sent */
+static int
+l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->tx_pkt_buf[port].len;
+	qconf->tx_pkt_buf[port].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (unlikely(len == MAX_PKT_BURST)) {
+		l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
+		len = 0;
+	}
+
+	qconf->tx_pkt_buf[port].len = len;
+	return 0;
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
+{
+	struct ether_hdr *eth;
+	void *tmp;
+	unsigned dst_port;
+
+	dst_port = l2fwd_dst_ports[portid];
+	eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	/* 02:00:00:00:00:xx */
+	tmp = &eth->d_addr.addr_bytes[0];
+	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+	/* src addr */
+	ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+
+	l2fwd_send_packet(m, (uint8_t) dst_port);
+}
+
+/** Generate a random key (rand() is not cryptographically secure, but is
+ * sufficient for this sample application) */
+static void
+generate_random_key(uint8_t *key, unsigned length)
+{
+	unsigned i;
+
+	for (i = 0; i < length; i++)
+		key[i] = rand() % 0x100;
+}
+
+static struct rte_cryptodev_session *
+initialize_crypto_session(struct l2fwd_crypto_options *options,
+		uint8_t cdev_id)
+{
+	struct rte_crypto_xform *first_xform;
+
+	if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
+		first_xform = &options->cipher_xform;
+		first_xform->next = &options->auth_xform;
+	} else {
+		first_xform = &options->auth_xform;
+		first_xform->next = &options->cipher_xform;
+	}
+
+	/* Setup Cipher Parameters */
+	return rte_cryptodev_session_create(cdev_id, first_xform);
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options);
+
+/* main processing loop */
+static void
+l2fwd_main_loop(struct l2fwd_crypto_options *options)
+{
+	struct rte_mbuf *m, *pkts_burst[MAX_PKT_BURST];
+	unsigned lcore_id = rte_lcore_id();
+	uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+	unsigned i, j, portid, nb_rx;
+	struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];
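+	/* number of TSC ticks in BURST_TX_DRAIN_US (~100us), rounding the
+	 * ticks-per-microsecond figure up */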
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+	struct l2fwd_crypto_params *cparams;
+	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
+
+	if (qconf->nb_rx_ports == 0) {
+		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+	l2fwd_crypto_options_print(options);
+
+	for (i = 0; i < qconf->nb_rx_ports; i++) {
+
+		portid = qconf->rx_port_list[i];
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+			portid);
+	}
+
+	for (i = 0; i < qconf->nb_crypto_devs; i++) {
+		port_cparams[i].dev_id = qconf->cryptodev_list[i];
+		port_cparams[i].qp_id = 0;
+
+		port_cparams[i].block_size = 64;
+		port_cparams[i].digest_length = 20;
+
+		port_cparams[i].iv_key.data =
+				(uint8_t *)rte_malloc(NULL, 16, 8);
+		port_cparams[i].iv_key.length = 16;
+		port_cparams[i].iv_key.phys_addr = rte_malloc_virt2phy(
+				(void *)port_cparams[i].iv_key.data);
+		generate_random_key(port_cparams[i].iv_key.data,
+				port_cparams[i].iv_key.length);
+
+		port_cparams[i].session = initialize_crypto_session(options,
+				port_cparams[i].dev_id);
+
+		if (port_cparams[i].session == NULL)
+			return;
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u cryptoid=%u\n", lcore_id,
+				port_cparams[i].dev_id);
+	}
+
+	while (1) {
+
+		cur_tsc = rte_rdtsc();
+
+		/*
+		 * TX burst queue drain
+		 */
+		diff_tsc = cur_tsc - prev_tsc;
+		if (unlikely(diff_tsc > drain_tsc)) {
+
+			for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+				if (qconf->tx_pkt_buf[portid].len == 0)
+					continue;
+				l2fwd_send_burst(&lcore_queue_conf[lcore_id],
+						 qconf->tx_pkt_buf[portid].len,
+						 (uint8_t) portid);
+				qconf->tx_pkt_buf[portid].len = 0;
+			}
+
+			/* if timer is enabled */
+			if (timer_period > 0) {
+
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >=
+						(uint64_t)timer_period)) {
+
+					/* do this only on master core */
+					if (lcore_id == rte_get_master_lcore() &&
+							!options->no_stats_printing) {
+						print_stats();
+						/* reset the timer */
+						timer_tsc = 0;
+					}
+				}
+			}
+
+			prev_tsc = cur_tsc;
+		}
+
+		/*
+		 * Read packet from RX queues
+		 */
+		for (i = 0; i < qconf->nb_rx_ports; i++) {
+			struct rte_mbuf_offload *ol;
+
+			portid = qconf->rx_port_list[i];
+
+			cparams = &port_cparams[i];
+
+			nb_rx = rte_eth_rx_burst((uint8_t) portid, 0,
+						 pkts_burst, MAX_PKT_BURST);
+
+			port_statistics[portid].rx += nb_rx;
+
+			/* Enqueue packets from Crypto device*/
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				ol = rte_pktmbuf_offload_alloc(
+						l2fwd_mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				rte_prefetch0((void *)ol);
+				l2fwd_simple_crypto_enqueue(m, ol, cparams);
+			}
+
+			/* Dequeue packets from Crypto device */
+			nb_rx = rte_cryptodev_dequeue_burst(
+					cparams->dev_id, cparams->qp_id,
+					pkts_burst, MAX_PKT_BURST);
+			crypto_statistics[cparams->dev_id].dequeued += nb_rx;
+
+			/* Forward crypto'd packets */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				rte_pktmbuf_offload_free(m->offload_ops);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				l2fwd_simple_forward(m, portid);
+			}
+		}
+	}
+}
+
+static int
+l2fwd_launch_one_lcore(void *arg)
+{
+	l2fwd_main_loop((struct l2fwd_crypto_options *)arg);
+	return 0;
+}
+
+/* Display command line arguments usage */
+static void
+l2fwd_crypto_usage(const char *prgname)
+{
+	printf("%s [EAL options] -- --cdev TYPE [optional parameters]\n"
+		"  -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+		"  -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+		"  -s manage all ports from single lcore"
+		"  -t PERIOD: statistics will be refreshed each PERIOD seconds"
+		" (0 to disable, 10 default, 86400 maximum)\n"
+
+		"  --cdev AESNI_MB / QAT\n"
+		"  --chain HASH_CIPHER / CIPHER_HASH\n"
+
+		"  --cipher_algo ALGO\n"
+		"  --cipher_op ENCRYPT / DECRYPT\n"
+		"  --cipher_key KEY\n"
+
+		"  --auth ALGO\n"
+		"  --auth_op GENERATE / VERIFY\n"
+		"  --auth_key KEY\n"
+
+		"  --sessionless\n",
+	       prgname);
+}
+
+/** Parse crypto device type command line argument */
+static int
+parse_cryptodev_type(enum rte_cryptodev_type *type, char *optarg)
+{
+	if (strcmp("AESNI_MB", optarg) == 0) {
+		*type = RTE_CRYPTODEV_AESNI_MB_PMD;
+		return 0;
+	} else if (strcmp("QAT", optarg) == 0) {
+		*type = RTE_CRYPTODEV_QAT_PMD;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto chain xform command line argument */
+static int
+parse_crypto_opt_chain(struct l2fwd_crypto_options *options, char *optarg)
+{
+	if (strcmp("CIPHER_HASH", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+		return 0;
+	} else if (strcmp("HASH_CIPHER", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_HASH_CIPHER;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto cipher algo option command line argument */
+static int
+parse_cipher_algo(enum rte_crypto_cipher_algorithm *algo, char *optarg)
+{
+	if (strcmp("AES_CBC", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_CBC;
+		return 0;
+	} else if (strcmp("AES_GCM", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_GCM;
+		return 0;
+	}
+
+	printf("Cipher algorithm  not supported!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_cipher_op(enum rte_crypto_cipher_operation *op, char *optarg)
+{
+	if (strcmp("ENCRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+		return 0;
+	} else if (strcmp("DECRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+		return 0;
+	}
+
+	printf("Cipher operation not supported!\n");
+	return -1;
+}
+
+/** Parse crypto key command line argument */
+static int
+parse_key(struct rte_crypto_key *key __rte_unused,
+		unsigned length __rte_unused, char *arg __rte_unused)
+{
+	printf("Currently an unsupported argument!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_auth_algo(enum rte_crypto_auth_algorithm *algo, char *optarg)
+{
+	if (strcmp("SHA1", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1;
+		return 0;
+	} else if (strcmp("SHA1_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		return 0;
+	} else if (strcmp("SHA224", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224;
+		return 0;
+	} else if (strcmp("SHA224_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		return 0;
+	} else if (strcmp("SHA256", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256;
+		return 0;
+	} else if (strcmp("SHA256_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		return 0;
+	} else if (strcmp("SHA512", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512;
+		return 0;
+	} else if (strcmp("SHA512_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		return 0;
+	}
+
+	printf("Authentication algorithm specified not supported!\n");
+	return -1;
+}
+
+static int
+parse_auth_op(enum rte_crypto_auth_operation *op, char *optarg)
+{
+	if (strcmp("VERIFY", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_VERIFY;
+		return 0;
+	} else if (strcmp("GENERATE", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_GENERATE;
+		return 0;
+	}
+
+	printf("Authentication operation specified not supported!\n");
+	return -1;
+}
+
+/** Parse long options */
+static int
+l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
+		struct option *lgopts, int option_index)
+{
+	if (strcmp(lgopts[option_index].name, "no_stats") == 0) {
+		options->no_stats_printing = 1;
+		return 0;
+	}
+
+	if (strcmp(lgopts[option_index].name, "cdev_type") == 0)
+		return parse_cryptodev_type(&options->cdev_type, optarg);
+
+	else if (strcmp(lgopts[option_index].name, "chain") == 0)
+		return parse_crypto_opt_chain(options, optarg);
+
+	/* Cipher options */
+	else if (strcmp(lgopts[option_index].name, "cipher_algo") == 0)
+		return parse_cipher_algo(&options->cipher_xform.cipher.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_op") == 0)
+		return parse_cipher_op(&options->cipher_xform.cipher.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_key") == 0)
+		return parse_key(&options->cipher_xform.cipher.key,
+				sizeof(options->ckey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "iv") == 0)
+		return parse_key(&options->iv_key, sizeof(options->ivkey_data),
+				optarg);
+
+	/* Authentication options */
+	else if (strcmp(lgopts[option_index].name, "auth_algo") == 0)
+		return parse_auth_algo(&options->cipher_xform.auth.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_op") == 0)
+		return parse_auth_op(&options->cipher_xform.auth.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_key") == 0)
+		return parse_key(&options->auth_xform.auth.key,
+				sizeof(options->akey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "sessionless") == 0) {
+		options->sessionless = 1;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse port mask */
+static int
+l2fwd_crypto_parse_portmask(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long pm;
+
+	/* parse hexadecimal string */
+	pm = strtoul(q_arg, &end, 16);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		pm = 0;
+
+	options->portmask = pm;
+	if (options->portmask == 0) {
+		printf("invalid portmask specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse number of queues */
+static int
+l2fwd_crypto_parse_nqueue(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long n;
+
+	/* parse decimal string */
+	n = strtoul(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+	else if (n >= MAX_RX_QUEUE_PER_LCORE)
+		n = 0;
+
+	options->nb_ports_per_lcore = n;
+	if (options->nb_ports_per_lcore == 0) {
+		printf("invalid number of ports selected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse timer period */
+static int
+l2fwd_crypto_parse_timer_period(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	int n;
+
+	/* parse number string */
+	n = strtol(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+
+	if (n > MAX_TIMER_PERIOD)
+		n = 0;
+
+	options->refresh_period = n * 1000 * TIMER_MILLISECOND;
+	if (options->refresh_period == 0) {
+		printf("invalid refresh period specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Generate default options for application */
+static void
+l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
+{
+	srand(time(NULL));
+
+	options->portmask = 0xffffffff;
+	options->nb_ports_per_lcore = 1;
+	options->refresh_period = 10000;
+	options->single_lcore = 0;
+	options->no_stats_printing = 0;
+
+	options->cdev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	options->sessionless = 0;
+	options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+
+	/* Cipher Data */
+	options->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	options->cipher_xform.next = NULL;
+
+	options->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	options->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+
+	generate_random_key(options->ckey_data, sizeof(options->ckey_data));
+
+	options->cipher_xform.cipher.key.data = options->ckey_data;
+	options->cipher_xform.cipher.key.phys_addr = 0;
+	options->cipher_xform.cipher.key.length = 16;
+
+
+	/* Authentication Data */
+	options->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	options->auth_xform.next = NULL;
+
+	options->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	options->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+
+	options->auth_xform.auth.add_auth_data_length = 0;
+	options->auth_xform.auth.digest_length = 20;
+
+	generate_random_key(options->akey_data, sizeof(options->akey_data));
+
+	options->auth_xform.auth.key.data = options->akey_data;
+	options->auth_xform.auth.key.phys_addr = 0;
+	options->auth_xform.auth.key.length = 20;
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
+{
+	printf("Options:-\nn");
+	printf("portmask: %x\n", options->portmask);
+	printf("ports per lcore: %u\n", options->nb_ports_per_lcore);
+	printf("refresh period : %u\n", options->refresh_period);
+	printf("single lcore mode: %s\n",
+			options->single_lcore ? "enabled" : "disabled");
+	printf("stats_printing: %s\n",
+			options->no_stats_printing ? "disabled" : "enabled");
+
+	switch (options->cdev_type) {
+	case RTE_CRYPTODEV_AESNI_MB_PMD:
+		printf("crytpodev type: AES-NI MB PMD\n"); break;
+	case RTE_CRYPTODEV_QAT_PMD:
+		printf("crytpodev type: QAT PMD\n"); break;
+	default:
+		break;
+	}
+
+	printf("sessionless crypto: %s\n",
+			options->sessionless ? "enabled" : "disabled");
+}
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
+		int argc, char **argv)
+{
+	int opt, retval, option_index;
+	char **argvopt = argv, *prgname = argv[0];
+
+	static struct option lgopts[] = {
+			{ "no_stats", no_argument, 0, 0 },
+			{ "sessionless", no_argument, 0, 0 },
+
+			{ "cdev_type", required_argument, 0, 0 },
+			{ "chain", required_argument, 0, 0 },
+
+			{ "cipher_algo", required_argument, 0, 0 },
+			{ "cipher_op", required_argument, 0, 0 },
+			{ "cipher_key", required_argument, 0, 0 },
+
+			{ "auth_algo", required_argument, 0, 0 },
+			{ "auth_op", required_argument, 0, 0 },
+			{ "auth_key", required_argument, 0, 0 },
+
+			{ "iv", required_argument, 0, 0 },
+
+			{ "sessionless", no_argument, 0, 0 },
+			{ NULL, 0, 0, 0 }
+	};
+
+	l2fwd_crypto_default_options(options);
+
+	while ((opt = getopt_long(argc, argvopt, "p:q:st:", lgopts,
+			&option_index)) != EOF) {
+		switch (opt) {
+		/* long options */
+		case 0:
+			retval = l2fwd_crypto_parse_args_long_options(options,
+					lgopts, option_index);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* portmask */
+		case 'p':
+			retval = l2fwd_crypto_parse_portmask(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* nqueue */
+		case 'q':
+			retval = l2fwd_crypto_parse_nqueue(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* single  */
+		case 's':
+			options->single_lcore = 1;
+
+			break;
+
+		/* timer period */
+		case 't':
+			retval = l2fwd_crypto_parse_timer_period(options,
+					optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		default:
+			l2fwd_crypto_usage(prgname);
+			return -1;
+		}
+	}
+
+
+	if (optind >= 0)
+		argv[optind-1] = prgname;
+
+	retval = optind-1;
+	optind = 0; /* reset getopt lib */
+
+	return retval;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint8_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+	uint8_t portid, count, all_ports_up, print_flag = 0;
+	struct rte_eth_link link;
+
+	printf("\nChecking link status");
+	fflush(stdout);
+	for (count = 0; count <= MAX_CHECK_TIME; count++) {
+		all_ports_up = 1;
+		for (portid = 0; portid < port_num; portid++) {
+			if ((port_mask & (1 << portid)) == 0)
+				continue;
+			memset(&link, 0, sizeof(link));
+			rte_eth_link_get_nowait(portid, &link);
+			/* print link status if flag set */
+			if (print_flag == 1) {
+				if (link.link_status)
+					printf("Port %d Link Up - speed %u "
+						"Mbps - %s\n", (uint8_t)portid,
+						(unsigned)link.link_speed,
+						(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+						("full-duplex") : ("half-duplex"));
+				else
+					printf("Port %d Link Down\n",
+						(uint8_t)portid);
+				continue;
+			}
+			/* clear all_ports_up flag if any link down */
+			if (link.link_status == 0) {
+				all_ports_up = 0;
+				break;
+			}
+		}
+		/* after finally printing all link status, get out */
+		if (print_flag == 1)
+			break;
+
+		if (all_ports_up == 0) {
+			printf(".");
+			fflush(stdout);
+			rte_delay_ms(CHECK_INTERVAL);
+		}
+
+		/* set the print_flag if all ports up or timeout */
+		if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+			print_flag = 1;
+			printf("done\n");
+		}
+	}
+}
+
+static int
+initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports)
+{
+	unsigned i, cdev_id, cdev_count, enabled_cdev_count = 0;
+	int retval;
+
+	if (options->cdev_type == RTE_CRYPTODEV_QAT_PMD) {
+		if (rte_cryptodev_count() < nb_ports)
+			return -1;
+	} else if (options->cdev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		for (i = 0; i < nb_ports; i++) {
+			int id = rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD,
+					NULL);
+			if (id < 0)
+				return -1;
+		}
+	}
+
+	cdev_count = rte_cryptodev_count();
+	for (cdev_id = 0;
+			cdev_id < cdev_count && enabled_cdev_count < nb_ports;
+			cdev_id++) {
+		struct rte_cryptodev_qp_conf qp_conf;
+		struct rte_cryptodev_info dev_info;
+
+		struct rte_cryptodev_config conf = {
+			.nb_queue_pairs = 1,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 64
+			}
+		};
+
+		rte_cryptodev_info_get(cdev_id, &dev_info);
+
+		if (dev_info.dev_type != options->cdev_type)
+			continue;
+
+
+		retval = rte_cryptodev_configure(cdev_id, &conf);
+		if (retval < 0) {
+			printf("Failed to configure cryptodev %u", cdev_id);
+			return -1;
+		}
+
+		qp_conf.nb_descriptors = 2048;
+
+		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
+				SOCKET_ID_ANY);
+		if (retval < 0) {
+			printf("Failed to setup queue pair %u on cryptodev %u",
+					0, cdev_id);
+			return -1;
+		}
+
+		l2fwd_enabled_crypto_mask |= (1 << cdev_id);
+
+		enabled_cdev_count++;
+	}
+
+	return enabled_cdev_count;
+}
+
+static int
+initialize_ports(struct l2fwd_crypto_options *options)
+{
+	uint8_t last_portid, portid;
+	unsigned enabled_portcount = 0;
+	unsigned nb_ports = rte_eth_dev_count();
+
+	if (nb_ports == 0) {
+		printf("No Ethernet ports - bye\n");
+		return -1;
+	}
+
+	if (nb_ports > RTE_MAX_ETHPORTS)
+		nb_ports = RTE_MAX_ETHPORTS;
+
+	/* Reset l2fwd_dst_ports */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+		l2fwd_dst_ports[portid] = 0;
+
+	for (last_portid = 0, portid = 0; portid < nb_ports; portid++) {
+		int retval;
+
+		/* Skip ports that are not enabled */
+		if ((options->portmask & (1 << portid)) == 0)
+			continue;
+
+		/* init port */
+		printf("Initializing port %u... ", (unsigned) portid);
+		fflush(stdout);
+		retval = rte_eth_dev_configure(portid, 1, 1, &port_conf);
+		if (retval < 0) {
+			printf("Cannot configure device: err=%d, port=%u\n",
+				  retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one RX queue */
+		fflush(stdout);
+		retval = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+					     rte_eth_dev_socket_id(portid),
+					     NULL, l2fwd_pktmbuf_pool);
+		if (retval < 0) {
+			printf("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one TX queue on each port */
+		fflush(stdout);
+		retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+				rte_eth_dev_socket_id(portid),
+				NULL);
+		if (retval < 0) {
+			printf("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+				retval, (unsigned) portid);
+
+			return -1;
+		}
+
+		/* Start device */
+		retval = rte_eth_dev_start(portid);
+		if (retval < 0) {
+			printf("rte_eth_dev_start:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		rte_eth_promiscuous_enable(portid);
+
+		rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+		printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+				(unsigned) portid,
+				l2fwd_ports_eth_addr[portid].addr_bytes[0],
+				l2fwd_ports_eth_addr[portid].addr_bytes[1],
+				l2fwd_ports_eth_addr[portid].addr_bytes[2],
+				l2fwd_ports_eth_addr[portid].addr_bytes[3],
+				l2fwd_ports_eth_addr[portid].addr_bytes[4],
+				l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+		/* initialize port stats */
+		memset(&port_statistics, 0, sizeof(port_statistics));
+
+		/* Setup port forwarding table */
+		if (enabled_portcount % 2) {
+			l2fwd_dst_ports[portid] = last_portid;
+			l2fwd_dst_ports[last_portid] = portid;
+		} else {
+			last_portid = portid;
+		}
+
+		l2fwd_enabled_port_mask |= (1 << portid);
+		enabled_portcount++;
+	}
+
+	if (enabled_portcount == 1) {
+		l2fwd_dst_ports[last_portid] = last_portid;
+	} else if (enabled_portcount % 2) {
+		printf("odd number of ports in portmask- bye\n");
+		return -1;
+	}
+
+	check_all_ports_link_status(nb_ports, l2fwd_enabled_port_mask);
+
+	return enabled_portcount;
+}
+
+int
+main(int argc, char **argv)
+{
+	struct lcore_queue_conf *qconf;
+	struct l2fwd_crypto_options options;
+
+	uint8_t nb_ports, nb_cryptodevs, portid, cdev_id;
+	unsigned lcore_id, rx_lcore_id;
+	int ret, enabled_cdevcount, enabled_portcount;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+	argc -= ret;
+	argv += ret;
+
+	/* parse application arguments (after the EAL ones) */
+	ret = l2fwd_crypto_parse_args(&options, argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L2FWD-CRYPTO arguments\n");
+
+	/* create the mbuf pool */
+	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 128,
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+	if (l2fwd_pktmbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+	/* create crypto op pool */
+	l2fwd_mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"mbuf_offload_pool", NB_MBUF, 128, 0, rte_socket_id());
+	if (l2fwd_mbuf_ol_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");
+
+	/* Enable Ethernet ports */
+	enabled_portcount = initialize_ports(&options);
+	if (enabled_portcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial Ethernet ports\n");
+
+	nb_ports = rte_eth_dev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, portid = 0;
+			portid < nb_ports; portid++) {
+
+		/* skip ports that are not enabled */
+		if ((options.portmask & (1 << portid)) == 0)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_rx_ports ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->rx_port_list[qconf->nb_rx_ports] = portid;
+		qconf->nb_rx_ports++;
+
+		printf("Lcore %u: RX port %u\n", rx_lcore_id, (unsigned)portid);
+	}
+
+
+	/* Enable Crypto devices */
+	enabled_cdevcount = initialize_cryptodevs(&options, enabled_portcount);
+	if (enabled_cdevcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial crypto devices\n");
+
+	nb_cryptodevs = rte_cryptodev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, cdev_id = 0;
+			cdev_id < nb_cryptodevs && enabled_cdevcount;
+			cdev_id++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(cdev_id, &info);
+
+		/* skip devices of the wrong type */
+		if (options.cdev_type != info.dev_type)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_crypto_devs ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->cryptodev_list[qconf->nb_crypto_devs] = cdev_id;
+		qconf->nb_crypto_devs++;
+
+		enabled_cdevcount--;
+
+		printf("Lcore %u: cryptodev %u\n", rx_lcore_id,
+				(unsigned)cdev_id);
+	}
+
+
+
+	/* launch per-lcore init on every lcore */
+	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)&options,
+			CALL_MASTER);
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (rte_eal_wait_lcore(lcore_id) < 0)
+			return -1;
+	}
+
+	return 0;
+}
+
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework
  2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
                     ` (5 preceding siblings ...)
  2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 6/6] l2fwd-crypto: crypto Declan Doherty
@ 2015-10-30 16:08   ` Declan Doherty
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
                       ` (6 more replies)
  6 siblings, 7 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 16:08 UTC (permalink / raw)
  To: dev

This series of patches defines a set of application burst oriented APIs for
asynchronous symmetric cryptographic functions within DPDK. It also contains a
poll mode driver cryptographic device framework for the implementation of
crypto devices within DPDK.

In the patch set we also have included 2 reference implementations of crypto
PMDs. Currently both implementations support AES128-CBC with
HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a purely
software PMD based on Intel's multi-buffer library, which utilises both AES-NI
instructions and vector operations to accelerate crypto operations; the second
PMD utilises Intel's Quick Assist Technology (on DH895xxC) to provide
hardware-accelerated crypto operations.

The API set supports two functional modes of operation:

1. A session oriented mode. In this mode the user creates a crypto session
which defines in advance all the immutable data required to perform a
particular crypto operation, including the cipher/hash algorithms and
operations to be performed, as well as the keys to be used, etc. The session
is then referenced by the crypto operation data structure, which is specific
to each mbuf. It contains all mutable data about the crypto operation to be
performed, such as data offsets and lengths into the mbuf's data payload for
the cipher and hash operations.

2. A session-less mode. In this mode the user is able to provision crypto
operations on an mbuf without the need to have a cached session created in
advance, at the cost of calculating authentication pre-computes and performing
key expansions in-line with the crypto operation. The crypto xform chain is
directly attached to the op struct in this mode, so the op struct now contains
all of the immutable crypto operation parameters that would normally be set
within a session. Once all mutable and immutable parameters are set, the
crypto operation data structure can be attached to the specified mbuf and
enqueued on a specified crypto device for processing.
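
As a rough sketch of the difference between the two modes (op allocation is
omitted, and cipher_xform below is a placeholder for the head of a previously
built xform chain; the op fields and the attach helper are those defined in
the cryptodev patch):

	/* session mode: attach a pre-created session to the operation */
	rte_crypto_op_attach_session(op, session);

	/* session-less mode: attach the immutable xform chain directly
	 * and mark the operation as session-less */
	op->xform = &cipher_xform;
	op->type = RTE_CRYPTO_OP_SESSIONLESS;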

The patch set contains the following features:
- Crypto device APIs and device framework
- Implementation of a software crypto PMD based on multi-buffer library
- Implementation of a hardware crypto PMD based on Intel QAT (DH895xxC)
- Unit and performance tests which give an example of utilising the crypto APIs.
- Sample application which performs crypto operations on the IP payload of the
  packets being forwarded

Current Status:
There is no support for chained mbufs and, as mentioned above, the PMDs
currently implement support for AES128-CBC/AES256-CBC/AES512-CBC
and HMAC_SHA1/SHA256/SHA512.


v3:
 - Fixes a document build error, which I missed in the V2
 - Fixes for remaining checkpatch errors
 - Disables the QAT and AESNI_MB PMDs being built by default as they have
   external library dependencies

v2: 
 - Introduces a new library to support attaching offload operations to a mbuf
 - Remove unused APIs from cryptodev
 - PMD code refactor due to new rte_mbuf_offload structure
 - General bug fixes and code tidy up

Declan Doherty (6):
  cryptodev: Initial DPDK Crypto APIs and device framework release
  mbuf_offload: library to support attaching offloads to a mbuf
  qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  aesni_mb_pmd: Initial implementation of multi buffer based crypto
    device
  app/test: add cryptodev unit and performance tests
  l2fwd-crypto: crypto

 app/test/Makefile                                  |    3 +
 app/test/test.c                                    |   92 +-
 app/test/test.h                                    |   34 +-
 app/test/test_cryptodev.c                          | 1968 ++++++++++++++++++++
 app/test/test_cryptodev.h                          |   68 +
 app/test/test_cryptodev_perf.c                     | 1449 ++++++++++++++
 app/test/test_link_bonding.c                       |    6 +-
 app/test/test_link_bonding_mode4.c                 |    7 +-
 config/common_bsdapp                               |   37 +-
 config/common_linuxapp                             |   36 +-
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 doc/guides/cryptodevs/aesni_mb.rst                 |   76 +
 doc/guides/cryptodevs/index.rst                    |   43 +
 doc/guides/cryptodevs/qat.rst                      |  195 ++
 doc/guides/index.rst                               |    1 +
 drivers/Makefile                                   |    1 +
 drivers/crypto/Makefile                            |   38 +
 drivers/crypto/aesni_mb/Makefile                   |   67 +
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |  212 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |  790 ++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  296 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  230 +++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |    3 +
 drivers/crypto/qat/Makefile                        |   63 +
 .../qat/qat_adf/adf_transport_access_macros.h      |  174 ++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            |  316 ++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         |  404 ++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            |  306 +++
 drivers/crypto/qat/qat_adf/qat_algs.h              |  125 ++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   |  597 ++++++
 drivers/crypto/qat/qat_crypto.c                    |  559 ++++++
 drivers/crypto/qat/qat_crypto.h                    |  117 ++
 drivers/crypto/qat/qat_logs.h                      |   78 +
 drivers/crypto/qat/qat_qp.c                        |  429 +++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |    3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             |  130 ++
 examples/l2fwd-crypto/Makefile                     |   50 +
 examples/l2fwd-crypto/main.c                       | 1472 +++++++++++++++
 lib/Makefile                                       |    2 +
 lib/librte_cryptodev/Makefile                      |   60 +
 lib/librte_cryptodev/rte_crypto.h                  |  610 ++++++
 lib/librte_cryptodev/rte_cryptodev.c               | 1077 +++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |  647 +++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  543 ++++++
 lib/librte_cryptodev/rte_cryptodev_version.map     |   41 +
 lib/librte_eal/common/include/rte_common.h         |   15 +
 lib/librte_eal/common/include/rte_eal.h            |   14 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eal/common/include/rte_memory.h         |   14 +-
 lib/librte_ether/rte_ethdev.c                      |   30 -
 lib/librte_mbuf/rte_mbuf.h                         |   33 +
 lib/librte_mbuf_offload/Makefile                   |   52 +
 lib/librte_mbuf_offload/rte_mbuf_offload.c         |  100 +
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |  282 +++
 .../rte_mbuf_offload_version.map                   |    7 +
 mk/rte.app.mk                                      |    9 +
 57 files changed, 13935 insertions(+), 79 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v3 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
@ 2015-10-30 16:08     ` Declan Doherty
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
                       ` (5 subsequent siblings)
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 16:08 UTC (permalink / raw)
  To: dev

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs for allocation of the crypto
   operation structure used to specify the crypto operations to be
   performed on a particular mbuf.
 - Extension of mbuf to contain crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations (see
   the sketch below).
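
A minimal sketch of the burst usage mentioned in the last item above (the
prototypes and the dequeue function name are assumptions for illustration
only; rte_cryptodev_enqueue_burst is the only burst API referenced by name
in this patch):

	/* submit mbufs with attached crypto ops to queue pair 0 of dev_id */
	nb_tx = rte_cryptodev_enqueue_burst(dev_id, 0, pkts, nb_pkts);

	/* ...later, reap the processed packets from the same queue pair */
	nb_rx = rte_cryptodev_dequeue_burst(dev_id, 0, pkts, burst_size);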

changes from RFC:
 - Session management API changes to support specification of crypto
   transform(xform) chains using linked list of xforms.
 - Changes to the crypto operation struct as a result of session
   management changes.
 - Some movement of common macros shared by cryptodevs and ethdevs to
   common headers

Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                           |   10 +-
 config/common_linuxapp                         |   10 +-
 doc/api/doxy-api-index.md                      |    1 +
 doc/api/doxy-api.conf                          |    1 +
 lib/Makefile                                   |    1 +
 lib/librte_cryptodev/Makefile                  |   60 ++
 lib/librte_cryptodev/rte_crypto.h              |  610 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.c           | 1077 ++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h           |  647 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h       |  543 ++++++++++++
 lib/librte_cryptodev/rte_cryptodev_version.map |   41 +
 lib/librte_eal/common/include/rte_common.h     |   15 +
 lib/librte_eal/common/include/rte_eal.h        |   14 +
 lib/librte_eal/common/include/rte_log.h        |    1 +
 lib/librte_eal/common/include/rte_memory.h     |   14 +-
 lib/librte_ether/rte_ethdev.c                  |   30 -
 lib/librte_mbuf/rte_mbuf.h                     |   27 +
 mk/rte.app.mk                                  |    1 +
 18 files changed, 3069 insertions(+), 34 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..8ce6af5 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -147,6 +147,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..e7b9b25 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -145,6 +145,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 9727b83..4c5c1b4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -40,6 +40,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..81fa3fc
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_cryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_cryptodev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
\ No newline at end of file
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..c0ef92b
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,610 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_CCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the xform chain of the crypto operation structure in the case of a
+	 * session-less crypto operation
+	 */
+	RTE_CRYPTO_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_GCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the xform chain of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	phys_addr_t phys_addr;
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains data relating to Cipher (Encryption and Decryption)
+ * used to create a session.
+ */
+struct rte_crypto_cipher_xform {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid. */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both
+	 * keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to total length of
+	 *    the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
+	 **/
+};
+
+/** Symmetric Authentication / Hash Algorithms */
+enum rte_crypto_auth_algorithm {
+	RTE_CRYPTO_AUTH_NULL = 1,
+	/**< NULL hash algorithm. */
+
+	RTE_CRYPTO_AUTH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_AUTH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related rte_crypto_cipher_xform structure in the
+	 * session context, or in the xform chain attached to the crypto
+	 * operation for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_AUTH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_xform structure in the session context, or in
+	 * the xform chain attached to the crypto operation for a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	* is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	* rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	* rte_crypto_cipher_xform structure in the session context, or in
+	* the xform chain attached to the crypto operation for a
+	* session-less crypto operation.
+	*/
+	RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_AUTH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_AUTH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_AUTH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_AUTH_SHA1,
+	/**< 160 bit SHA-1 algorithm. */
+	RTE_CRYPTO_AUTH_SHA1_HMAC,
+	/**< HMAC using 160 bit SHA-1 algorithm. */
+	RTE_CRYPTO_AUTH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_AUTH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Authentication / Hash Operations */
+enum rte_crypto_auth_operation {
+	RTE_CRYPTO_AUTH_OP_VERIFY,	/**< Verify authentication digest */
+	RTE_CRYPTO_AUTH_OP_GENERATE	/**< Generate authentication digest */
+};
+
+/**
+ * Authentication / Hash transform data.
+ *
+ * This structure contains data relating to an authentication/hash crypto
+ * transform. The fields op, algo and digest_length are common to all
+ * authentication transforms and MUST be set.
+ */
+struct rte_crypto_auth_xform {
+	enum rte_crypto_auth_operation op;
+	/**< Authentication operation type */
+	enum rte_crypto_auth_algorithm algo;
+	/**< Authentication algorithm selection */
+
+	struct rte_crypto_key key;		/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the caller's responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash then an error will be generated
+	 * by *rte_cryptodev_session_create* or by the
+	 * *rte_cryptodev_enqueue_burst* if using session-less APIs.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_AUTH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op_data structure.
+	 */
+};
+
+/** Crypto transformation types */
+enum rte_crypto_xform_type {
+	RTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
+	RTE_CRYPTO_XFORM_AUTH,			/**< Authentication xform */
+	RTE_CRYPTO_XFORM_CIPHER			/**< Cipher xform  */
+};
+
+/**
+ * Crypto transform structure.
+ *
+ * This is used to specify the crypto transforms required. Multiple transforms
+ * can be chained together to specify a chain of transforms such as
+ * authentication then cipher, or cipher then authentication. Each transform
+ * structure can hold a single transform; the type field is used to specify
+ * which transform is contained within the union.
+ */
+struct rte_crypto_xform {
+	struct rte_crypto_xform *next; /**< next xform in chain */
+
+	enum rte_crypto_xform_type type; /**< xform type */
+	union {
+		struct rte_crypto_auth_xform auth;
+		/**< Authentication / hash xform */
+		struct rte_crypto_cipher_xform cipher;
+		/**< Cipher xform */
+	};
+};
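+
+/*
+ * Illustrative sketch (not part of the API): a cipher-then-hash operation is
+ * described by linking two xforms, with the cipher xform at the head of the
+ * chain. Only the chaining fields are shown; the cipher/auth members of each
+ * xform must also be populated.
+ *
+ *	struct rte_crypto_xform auth_xform = {
+ *		.type = RTE_CRYPTO_XFORM_AUTH,
+ *		.next = NULL
+ *	};
+ *	struct rte_crypto_xform cipher_xform = {
+ *		.type = RTE_CRYPTO_XFORM_CIPHER,
+ *		.next = &auth_xform
+ *	};
+ */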
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has a session structure attached for immutable parameters or if
+ * all operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+/** Status of crypto operation */
+enum rte_crypto_op_status {
+	RTE_CRYPTO_OP_STATUS_SUCCESS,
+	/**< Operation completed successfully */
+	RTE_CRYPTO_OP_STATUS_NO_SUBMITTED,
+	/**< Operation not yet submitted to a cryptodev */
+	RTE_CRYPTO_OP_STATUS_ENQUEUED,
+	/**< Operation is enqueued on device */
+	RTE_CRYPTO_OP_STATUS_AUTH_FAILED,
+	/**< Authentication verification failed */
+	RTE_CRYPTO_OP_STATUS_INVALID_ARGS,
+	/**< Operation failed due to invalid arguments in request */
+	RTE_CRYPTO_OP_STATUS_ERROR,
+	/**< Error while handling the operation */
+};
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. This request is used with the rte_cryptodev_enqueue_burst()
+ * call for performing cipher, hash, or combined hash and cipher operations.
+ */
+struct rte_crypto_op {
+	enum rte_crypto_op_sess_type type;
+	enum rte_crypto_op_status status;
+
+	struct {
+		struct rte_mbuf *m;	/**< Destination mbuf */
+		uint8_t offset;		/**< Data offset */
+	} dst;
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct rte_crypto_xform *xform;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location. */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_AUTH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field should
+			  * be set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC)
+			  * mode of operation, this field specifies the start
+			  * of the AAD data in the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source
+			  * buffer that the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field should
+			  * be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128-bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member is set, it points to the location where
+		 * the digest result should be
+		 * inserted (in the case of digest generation) or where the
+		 * purported digest exists (in the case of digest
+		 * verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_auth_xform structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up
+		 * for the session in the @ref rte_crypto_auth_xform structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_AUTH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by
+		 *   the implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written in starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   space to round this up to the nearest multiple of the
+		 *   block size (16 bytes).
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+		uint32_t length;	/**< Length of additional authenticated data */
+	} additional_auth;
+	/**< Additional authentication parameters */
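+	/*
+	 * Illustrative CCM layout of the additional_auth.data buffer per the
+	 * description above (offsets in bytes):
+	 *
+	 *	bytes  0..15 : reserved for block B0; the caller writes the
+	 *	               nonce starting at byte 1
+	 *	bytes 16..17 : reserved for the length encoding
+	 *	bytes 18..   : additional authenticated data, padded out by
+	 *	               the implementation to a multiple of 16 bytes
+	 */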
+
+	struct rte_mempool *pool;
+	/**< mempool used to allocate crypto op */
+
+	void *user_data;
+	/**< opaque pointer for user data */
+};
+
+
+/**
+ * Reset the fields of a crypto operation to their default values.
+ *
+ * @param op
+ *   The crypto operation to be reset.
+ */
+static inline void
+__rte_crypto_op_reset(struct rte_crypto_op *op)
+{
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+	op->dst.m = NULL;
+	op->dst.offset = 0;
+}
+
+/** Attach a session to a crypto operation */
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..57b5674
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,1077 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+#include "rte_cryptodev_pmd.h"
+
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+
+struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= { NULL },
+		.nb_devs		= 0,
+		.max_devs		= RTE_CRYPTO_MAX_DEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;		/**< Callback address */
+	void *cb_arg;				/**< Parameter for callback */
+	enum rte_cryptodev_event_type event;	/**< Interrupt event type */
+	uint32_t active;			/**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+int
+rte_cryptodev_get_dev_id(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if ((strcmp(rte_cryptodev_globals->devs[i].data->name, name)
+				== 0) &&
+				(rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED))
+			return i;
+
+	return -1;
+}
+
+uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
+uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached ==
+					RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	return dev->data->socket_id;
+}
+
+static inline int
+rte_cryptodev_data_alloc(uint8_t dev_id, struct rte_cryptodev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_cryptodev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_cryptodev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_cryptodev_data));
+
+	return 0;
+}
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached ==
+				RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_CRYPTO_MAX_DEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+	uint8_t dev_id;
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_CRYPTO_MAX_DEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (cryptodev->data == NULL) {
+		struct rte_cryptodev_data *cryptodev_data =
+				cryptodev_globals.data[dev_id];
+
+		int retval = rte_cryptodev_data_alloc(dev_id, &cryptodev_data,
+				socket_id);
+
+		if (retval < 0 || cryptodev_data == NULL)
+			return NULL;
+
+		cryptodev->data = cryptodev_data;
+
+		snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN,
+				"%s", name);
+
+		cryptodev->data->dev_id = dev_id;
+		cryptodev->data->socket_id = socket_id;
+		cryptodev->data->dev_started = 0;
+
+		cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+		cryptodev->pmd_type = type;
+
+		cryptodev_globals.nb_devs++;
+	}
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	int ret;
+
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	ret = rte_cryptodev_close(cryptodev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memory for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV,
+			rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket(
+						"cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memory for private "
+					"device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/* Register PCI driver for physical device initialisation during
+	 * PCI probing */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+
+	return 0;
+}
+
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs,
+		int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	void **qp;
+	unsigned i;
+
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	CDEV_LOG_DEBUG("Setup %d queue pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > dev_info.max_queue_pairs) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return -EINVAL;
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u",
+							nb_qpairs);
+			return -ENOMEM;
+		}
+	} else { /* re-configure */
+		int ret;
+		uint16_t old_nb_queues = dev->data->nb_queue_pairs;
+
+		qp = dev->data->queue_pairs;
+
+		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release,
+				-ENOTSUP);
+
+		for (i = nb_qpairs; i < old_nb_queues; i++) {
+			ret = (*dev->dev_ops->queue_pair_release)(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -ENOMEM;
+		}
+
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	return rte_crypto_session_pool_create(dev, config->session_mp.nb_objs,
+			config->session_mp.cache_size, config->socket_id);
+}
+
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int retval;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -1;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		CDEV_LOG_ERR("Device %u must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	/* We can't close the device if there are outstanding sessions in
+	 * existence */
+	if (dev->data->session_pool != NULL) {
+		if (!rte_mempool_full(dev->data->session_pool)) {
+			CDEV_LOG_ERR("dev_id=%u close failed, session mempool "
+					"has sessions still in use, free "
+					"all sessions before calling close",
+					(unsigned)dev_id);
+			return -EBUSY;
+		}
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	retval = (*dev->dev_ops->dev_close)(dev);
+
+	if (retval < 0)
+		return retval;
+
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (dev_id >= cryptodev_globals.nb_devs) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+static void
+rte_crypto_session_init(struct rte_mempool *mp,
+		void *opaque_arg,
+		void *_sess,
+		__rte_unused unsigned i)
+{
+	struct rte_cryptodev_session *sess = _sess;
+	struct rte_cryptodev *dev = opaque_arg;
+
+	memset(sess, 0, mp->elt_size);
+
+	sess->dev_id = dev->data->dev_id;
+	sess->type = dev->dev_type;
+	sess->mp = mp;
+
+	if (dev->dev_ops->session_initialize)
+		(*dev->dev_ops->session_initialize)(mp, sess->_private);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id)
+{
+	char mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	unsigned priv_sess_size;
+
+	int n = snprintf(mp_name, sizeof(mp_name), "cdev_%d_sess_mp",
+			dev->data->dev_id);
+	if (n >= (int)sizeof(mp_name)) {
+		CDEV_LOG_ERR("Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);
+	priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
+	if (priv_sess_size == 0) {
+		CDEV_LOG_ERR("%s returned an invalid private session size",
+						dev->data->name);
+		return -ENOMEM;
+	}
+
+	unsigned elt_size = sizeof(struct rte_cryptodev_session) +
+			priv_sess_size;
+
+	dev->data->session_pool = rte_mempool_lookup(mp_name);
+	if (dev->data->session_pool != NULL) {
+		if ((dev->data->session_pool->elt_size != elt_size) ||
+				(dev->data->session_pool->cache_size <
+				obj_cache_size) ||
+				(dev->data->session_pool->size < nb_objs)) {
+
+			CDEV_LOG_ERR("%s mempool already exists with different"
+					" initialization parameters", mp_name);
+			dev->data->session_pool = NULL;
+			return -ENOMEM;
+		}
+	} else {
+		dev->data->session_pool = rte_mempool_create(
+				mp_name, /* mempool name */
+				nb_objs, /* number of elements*/
+				elt_size, /* element size*/
+				obj_cache_size, /* Cache size*/
+				0, /* private data size */
+				NULL, /* obj initialization constructor */
+				NULL, /* obj initialization constructor arg */
+				rte_crypto_session_init, /* obj constructor */
+				dev, /* obj constructor arg */
+				socket_id, /* socket id */
+				0); /* flags */
+
+		if (dev->data->session_pool == NULL) {
+			CDEV_LOG_ERR("%s mempool allocation failed", mp_name);
+			return -ENOMEM;
+		}
+	}
+
+	CDEV_LOG_DEBUG("%s mempool created!", mp_name);
+	return 0;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id, struct rte_crypto_xform *xform)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_session *sess;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Allocate a session structure from the session pool */
+	if (rte_mempool_get(dev->data->session_pool, (void **)&sess)) {
+		CDEV_LOG_ERR("Couldn't get object from session mempool");
+		return NULL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);
+	if (dev->dev_ops->session_configure(dev, xform, sess->_private) ==
+			NULL) {
+		CDEV_LOG_ERR("dev_id %d failed to configure session details",
+				dev_id);
+
+		/* Return session to mempool */
+		rte_mempool_put(sess->mp, (void *)sess);
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id, struct rte_cryptodev_session *sess)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return sess;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check the session belongs to this device type */
+	if (sess->type != dev->dev_type)
+		return sess;
+
+	/* Let device implementation clear session material */
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
+	dev->dev_ops->session_clear(dev, (void *)sess->_private);
+
+	/* Return session to mempool */
+	rte_mempool_put(sess->mp, (void *)sess);
+
+	return NULL;
+}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..fe636ac
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,647 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "stddef.h"
+
+#include "rte_crypto.h"
+#include "rte_dev.h"
+
+#define CRYPTODEV_NAME_NULL_PMD		("cryptodev_null_pmd")
+/**< Null crypto PMD device name */
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
+	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...)					\
+		RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...)				\
+		RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+				dev, __func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_TRACE(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n",		\
+				dev, __func__, ## args)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+/**  Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+	uint16_t max_queue_pairs;		/**< Maximum number of queue
+						* pairs supported by device.
+						*/
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User specified parameter to be passed to the user's
+ *			callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure is
+ * used for RDTSC counters for measuring crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;	/**< Accumulated time processing operation */
+	uint64_t t_min;		/**< Min time */
+	uint64_t t_max;		/**< Max time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;
+	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;
+	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;
+	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;
+	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf; /**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Options arguments for device.
+ *
+ * @return
+ * - 0 on successful creation of the cryptodev.
+ * - A negative value on failure to initialise the virtual device.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
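+
+/*
+ * Usage sketch (illustrative only): instantiating the AES-NI multi-buffer
+ * PMD. The argument string below is a hypothetical example; the supported
+ * keys are PMD-specific.
+ *
+ *	if (rte_cryptodev_create_vdev(CRYPTODEV_NAME_AESNI_MB_PMD,
+ *			"max_nb_queue_pairs=2,socket_id=0") < 0)
+ *		rte_exit(EXIT_FAILURE, "failed to create vdev");
+ */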
+
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+extern int
+rte_cryptodev_get_dev_id(const char *name);
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+extern uint8_t
+rte_cryptodev_count(void);
+
+/**
+ * Get the total number of attached crypto devices of a particular type.
+ *
+ * @param	type	Crypto device type to count.
+ *
+ * @return
+ *   - The number of usable crypto devices of the specified type.
+ */
+extern uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+
+/**
+ * Return the NUMA socket to which a device is connected
+ *
+ * @param dev_id
+ *   The identifier of the device
+ * @return
+ *   The NUMA socket id to which the device is connected or
+ *   a default of zero if the socket could not be determined.
+ *   -1 if the dev_id value is out of range.
+ */
+extern int
+rte_cryptodev_socket_id(uint8_t dev_id);
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;	/**< Number of queue pairs to configure
+					* on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< Per-lcore object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
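+
+/*
+ * Usage sketch (illustrative only): configuring device 0 with a single
+ * queue pair. The session mempool sizing values are arbitrary assumptions.
+ *
+ *	struct rte_cryptodev_config conf = {
+ *		.socket_id = SOCKET_ID_ANY,
+ *		.nb_queue_pairs = 1,
+ *		.session_mp = {
+ *			.nb_objs = 2048,
+ *			.cache_size = 64
+ *		}
+ *	};
+ *
+ *	if (rte_cryptodev_configure(0, &conf) < 0)
+ *		rte_exit(EXIT_FAILURE, "failed to configure cryptodev 0");
+ */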
+
+/**
+ * Start a device.
+ *
+ * The device start step is the last one and consists of setting the
+ * configured queue pairs into an operational state.
+ * On success, all basic functions exported by the API (enqueue/dequeue,
+ * statistics, and so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start()
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ */
+extern int
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a queue pair for a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pairs to set up. The
+ *				value must be in the range [0, nb_queue_pair
+ *				- 1] previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the DMA memory allocated for the receive
+ *				queue pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
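+
+/*
+ * Usage sketch (illustrative only): setting up queue pair 0 of device 0
+ * once the device has been configured; the descriptor count is an
+ * arbitrary assumption.
+ *
+ *	struct rte_cryptodev_qp_conf qp_conf = {
+ *		.nb_descriptors = 2048
+ *	};
+ *
+ *	if (rte_cryptodev_queue_pair_setup(0, 0, &qp_conf,
+ *			SOCKET_ID_ANY) < 0)
+ *		rte_exit(EXIT_FAILURE, "failed to setup queue pair 0");
+ */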
+
+/**
+ * Start a specified queue pair of a device. It is used
+ * when the deferred_start flag of the specified queue is true.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_crypto_dev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD driver.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop specified queue pair of a device
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD driver.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		Event of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		Event of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
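+
+/*
+ * Usage sketch (illustrative only): registering a handler for error
+ * events on device 0; crypto_err_cb is a hypothetical application
+ * function matching the rte_cryptodev_cb_fn typedef.
+ *
+ *	static void
+ *	crypto_err_cb(uint8_t dev_id, enum rte_cryptodev_event_type event,
+ *			void *cb_arg)
+ *	{
+ *		printf("error event on cryptodev %u\n", dev_id);
+ *	}
+ *
+ *	rte_cryptodev_callback_register(0, RTE_CRYPTODEV_EVENT_ERROR,
+ *			crypto_err_cb, NULL);
+ */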
+
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;
+	/**< Pointer to PMD receive function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/**< Pointer to PMD transmit function. */
+
+	const struct rte_cryptodev_driver *driver;
+	/**< Driver for this device */
+	struct rte_cryptodev_data *data;
+	/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;
+	/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;
+	/**< Crypto device type */
+	enum pmd_type pmd_type;
+	/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+} __rte_cache_aligned;
+
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+/**
+ *
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t socket_id;
+	/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;
+	/**< Session memory pool */
+	void **queue_pairs;
+	/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;
+	/**< Number of device queue pairs. */
+
+	void *dev_private;
+	/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+extern struct rte_cryptodev *rte_cryptodevs;
+/**
+ *
+ * Dequeue a burst of processed packets from a queue of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain in the queue. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_cryptodev_dequeue_burst() function
+ * until a value less than *nb_pkts* is returned.
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place packets
+ * on the queue pair *qp_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of packets
+ * it actually enqueued. A return value equal to *nb_pkts* means that all
+ * packets have been enqueued.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which packets
+ *				are to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to be processed.
+ * @param	nb_pkts		The number of packets to enqueue.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue is full.
+ * The number of packets is 0 if the device hasn't been started.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return (*dev->enqueue_burst)(
+			dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
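+
+/*
+ * Illustrative polling loop combining the two burst functions above;
+ * dev_id, qp_id and BURST_SIZE are assumptions of the sketch, and crypto
+ * operations are presumed to be attached to each mbuf beforehand.
+ *
+ *	struct rte_mbuf *pkts[BURST_SIZE];
+ *	uint16_t nb_tx, nb_rx;
+ *
+ *	nb_tx = rte_cryptodev_enqueue_burst(dev_id, qp_id, pkts,
+ *			BURST_SIZE);
+ *
+ * mbufs from index nb_tx onwards were not accepted and remain the
+ * application's responsibility; processed packets are then polled for:
+ *
+ *	do {
+ *		nb_rx = rte_cryptodev_dequeue_burst(dev_id, qp_id,
+ *				pkts, BURST_SIZE);
+ *	} while (nb_rx == 0);
+ */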
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialize the immutable
+ * parameters of a symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst function is
+ * used. Each mbuf should contain a reference to the session
+ * pointer returned from this function within its crypto_op if a
+ * session-based operation is being provisioned. Memory to contain the
+ * session information is allocated from within a mempool managed by the
+ * cryptodev.
+ *
+ * The rte_cryptodev_session_free must be called to free allocated
+ * memory when the session is no longer required.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	xform		Crypto transform chain.
+ *
+ * @return
+ *  Pointer to the created session or NULL
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_xform *xform);
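+
+/*
+ * Usage sketch (illustrative only): the xform is assumed to have been
+ * populated with cipher and/or authentication parameters as defined in
+ * rte_crypto.h.
+ *
+ *	struct rte_crypto_xform xform;
+ *	struct rte_cryptodev_session *sess;
+ *
+ *	sess = rte_cryptodev_session_create(dev_id, &xform);
+ *	if (sess == NULL)
+ *		rte_exit(EXIT_FAILURE, "session creation failed");
+ *
+ * and once no in-flight operations reference it:
+ *
+ *	sess = rte_cryptodev_session_free(dev_id, sess);
+ */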
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ *
+ * @return
+ *   NULL on successful freeing of session.
+ *   Session pointer on failure to free session.
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..6532ba8
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,543 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These APIs are for crypto PMDs only and user applications should not
+ * call them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+
+struct rte_cryptodev_session {
+	struct {
+		uint8_t dev_id;
+		enum rte_cryptodev_type type;
+		struct rte_mempool *mp;
+	} __rte_aligned(8);
+
+	char _private[];
+};
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ *			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver  *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
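+
+/*
+ * Sketch of how a physical-device PMD might define and register itself
+ * with this structure; all my_pmd_* identifiers are hypothetical
+ * placeholders for a real driver's symbols.
+ *
+ *	static struct rte_cryptodev_driver my_pmd_driver = {
+ *		.pci_drv = {
+ *			.name = "rte_my_pmd",
+ *			.id_table = my_pmd_pci_id_map,
+ *		},
+ *		.cryptodev_init = my_pmd_dev_init,
+ *		.dev_private_size = sizeof(struct my_pmd_private),
+ *	};
+ *
+ *	rte_cryptodev_pmd_driver_register(&my_pmd_driver, PMD_PDEV);
+ */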
+
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;	/**< Device information array */
+	struct rte_cryptodev_data *data[RTE_CRYPTO_MAX_DEVS];
+	/**< Device private data */
+	uint8_t nb_devs;		/**< Number of devices found */
+	uint8_t max_devs;		/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the named device, or NULL
+ *     if no attached device matches the name.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	struct rte_cryptodev *dev;
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		dev = &rte_cryptodev_globals->devs[i];
+
+		if ((dev->attached == RTE_CRYPTODEV_ATTACHED) &&
+				(strcmp(dev->data->name, name) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that the crypto device index refers to a valid, attached
+ * crypto device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - If the device index is valid (1) or not (0).
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures.
+ */
+extern struct rte_cryptodev *rte_cryptodevs;
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ * Function used to configure a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ * @return
+ * - 0 on success.
+ * - -EAGAIN if the device cannot be closed because it is busy
+ */
+typedef int (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	dev_info	Crypto device info structure to populate
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return
+ * - 0 on success.
+ * - -EAGAIN if the queue pair cannot be released because it is busy
+ */
+typedef int (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		Number of session objects in mempool
+ * @param	obj_cache_size	Per-lcore object cache size, see
+ *				*rte_mempool_create*
+ * @param	socket_id	Socket Id to allocate mempool on.
+ *
+ * @return
+ * - 0 on success
+ * - A negative value on failure
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+
+/**
+ * Get the size of a cryptodev session
+ *
+ * @param	dev		Crypto device pointer
+ *
+ * @return
+ *  - On success returns the size of the session structure for device
+ *  - On failure returns 0
+ */
+typedef unsigned (*cryptodev_get_session_private_size_t)(
+		struct rte_cryptodev *dev);
+
+/**
+ * Initialize the private data of a Crypto session object, called for each
+ * session when its mempool is populated.
+ *
+ * @param	mempool		Mempool the session object was allocated from
+ * @param	session_private	Pointer to the session's private data
+ */
+typedef void (*cryptodev_initialize_session_t)(struct rte_mempool *mempool,
+		void *session_private);
+
+/**
+ * Configure a Crypto session on a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	xform		Single or chain of crypto xforms
+ * @param	priv_sess	Pointer to cryptodev's private session structure
+ *
+ * @return
+ *  - Returns private session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef void * (*cryptodev_configure_session_t)(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+/**
+ * Free Crypto session.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	session_private	Cryptodev session private structure to free
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		void *session_private);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;
+	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;
+	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;
+	/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;
+	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;
+	/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;
+	/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;
+	/**< Get count of the queue pairs. */
+
+	cryptodev_get_session_private_size_t session_get_size;
+	/**< Return size of the private session structure. */
+	cryptodev_initialize_session_t session_initialize;
+	/**< Initialization function for private session data */
+	cryptodev_configure_session_t session_configure;
+	/**< Configure a Crypto session. */
+	cryptodev_free_session_t session_clear;
+	/**< Clear a Crypto sessions private data. */
+};
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Slot in the rte_dev_devices array for a new device.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a - register itself as PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b - complete the initialization sequence if the device is a virtual
+ *		device by calling the rte_cryptodev_init() directly passing a
+ *		NULL parameter for the rte_pci_device structure.
+ *
+ *   @param crypto_drv	crypto_driver structure associated with the crypto
+ *					driver.
+ *   @param type		pmd type
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
new file mode 100644
index 0000000..31e04d2
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -0,0 +1,41 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodevs;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_close;
+	rte_cryptodev_count;
+	rte_cryptodev_count_devtype;
+	rte_cryptodev_configure;
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_enqueue_burst;
+	rte_cryptodev_dequeue_burst;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_info_get;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+	rte_cryptodev_socket_id;
+	rte_cryptodev_start;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_stop;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+	rte_cryptodev_queue_pair_count;
+
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_attach;
+	rte_cryptodev_pmd_callback_process;
+	rte_cryptodev_pmd_detach;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_get_dev;
+	rte_cryptodev_pmd_get_named_dev;
+	rte_cryptodev_pmd_is_valid_dev;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_socket_id;
+	rte_cryptodev_pmd_virtual_dev_init;
+
+	local: *;
+};
\ No newline at end of file
diff --git a/lib/librte_eal/common/include/rte_common.h b/lib/librte_eal/common/include/rte_common.h
index 3121314..bae4054 100644
--- a/lib/librte_eal/common/include/rte_common.h
+++ b/lib/librte_eal/common/include/rte_common.h
@@ -270,8 +270,23 @@ rte_align64pow2(uint64_t v)
 		_a > _b ? _a : _b; \
 	})
 
+
 /*********** Other general functions / macros ********/
 
+#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		RTE_LOG(ERR, PMD, "Function not supported\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		RTE_LOG(ERR, PMD, "Function not supported\n"); \
+		return; \
+	} \
+} while (0)
+
 #ifdef __SSE2__
 #include <emmintrin.h>
 /**
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index f36a792..948cc0a 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -115,6 +115,20 @@ enum rte_lcore_role_t rte_eal_lcore_role(unsigned lcore_id);
  */
 enum rte_proc_type_t rte_eal_process_type(void);
 
+#define PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_LOG(ERR, PMD, "Cannot run in secondary processes\n"); \
+		return; \
+	} \
+} while (0)
+
+#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_LOG(ERR, PMD, "Cannot run in secondary processes\n"); \
+		return retval; \
+	} \
+} while (0)
+
 /**
  * Request iopl privilege for all RPL.
  *
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..40e8d43 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment.
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -104,7 +114,7 @@ struct rte_memseg {
 	 /**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index f593f6e..16fde77 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -77,36 +77,6 @@
 #define PMD_DEBUG_TRACE(fmt, args...)
 #endif
 
-/* Macros for checking for restricting functions to primary instance only */
-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define PROC_PRIMARY_OR_RET() do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for invalid function pointers in dev_ops structure */
-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
-	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define FUNC_PTR_OR_RET(func) do { \
-	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
-		return; \
-	} \
-} while (0)
-
 /* Macros to check for valid port */
 #define VALID_PORTID_OR_ERR_RET(port_id, retval) do {		\
 	if (!rte_eth_dev_is_valid_port(port_id)) {		\
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4a93189..689ef77 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1622,6 +1622,33 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address of the data in the mbuf
+ * at the given byte offset.
+ *
+ * The returned value is of type phys_addr_t. Before using this
+ * macro, the user must ensure that the mbuf's data is at least o
+ * bytes long.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) ((phys_addr_t)((char *)(m)->buf_physaddr + (m)->data_off) + (o))
+
+/**
+ * A macro that returns the physical address of the start of the data
+ * in the mbuf.
+ *
+ * The returned value is of type phys_addr_t.
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
+
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9e1909e..80f68bb 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -114,6 +114,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.4.3
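
For illustration, a minimal sketch of how a PMD built on this framework
might populate the rte_cryptodev_ops table declared above. The null_*
stubs are hypothetical, and the assumption that the configure/start
callbacks take only the device pointer mirrors the stop/close typedefs;
callbacks left NULL are caught at dispatch time by the
FUNC_PTR_OR_ERR_RET()/FUNC_PTR_OR_RET() guards added to rte_common.h:

/* Hypothetical stub callbacks for a do-nothing crypto PMD. */
static int
null_dev_configure(struct rte_cryptodev *dev __rte_unused)
{
	return 0;	/* nothing to configure */
}

static int
null_dev_start(struct rte_cryptodev *dev __rte_unused)
{
	return 0;	/* device is always ready */
}

static void
null_dev_stop(struct rte_cryptodev *dev __rte_unused)
{
}

static int
null_dev_close(struct rte_cryptodev *dev __rte_unused)
{
	return 0;
}

/* Unimplemented callbacks are left NULL; the library checks each
 * pointer before dispatching through this table. */
static struct rte_cryptodev_ops null_crypto_ops = {
	.dev_configure	= null_dev_configure,
	.dev_start	= null_dev_start,
	.dev_stop	= null_dev_stop,
	.dev_close	= null_dev_close,
};

A virtual device created with rte_cryptodev_pmd_virtual_dev_init() would
then point its ops table reference at null_crypto_ops (the field name in
struct rte_cryptodev is not shown in this excerpt and is assumed).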

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v3 2/6] mbuf_offload: library to support attaching offloads to a mbuf
  2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-10-30 16:08     ` Declan Doherty
  2015-10-30 16:34       ` Ananyev, Konstantin
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
                       ` (4 subsequent siblings)
  6 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 16:08 UTC (permalink / raw)
  To: dev

This library adds support for attaching a chain of offload operations
to an mbuf. It contains the definition of the rte_mbuf_offload
structure, as well as helper functions for attaching offloads to mbufs
and mempool management functions.

This initial implementation supports attaching multiple offload
operations to a single mbuf, but only a single offload operation of a
specific type can be attached to that mbuf.
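
For illustration, a minimal usage sketch built only from the functions
defined in this patch; the pool name and sizing parameters are arbitrary:

#include <rte_mbuf_offload.h>

/* ol_pool would be created once at init time, e.g.:
 *   ol_pool = rte_pktmbuf_offload_pool_create("CRYPTO_OL_POOL", 8192,
 *		128, 2 * sizeof(struct rte_crypto_xform), rte_socket_id());
 */
static int
attach_crypto_offload(struct rte_mbuf *m, struct rte_mempool *ol_pool)
{
	struct rte_mbuf_offload *ol;
	struct rte_crypto_xform *xform;

	ol = rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);
	if (ol == NULL)
		return -1;

	/* reserve a cipher + auth xform chain in the private area */
	xform = rte_pktmbuf_offload_alloc_crypto_xforms(ol, 2);
	if (xform == NULL) {
		rte_pktmbuf_offload_free(ol);
		return -1;
	}

	/* ... fill in xform[0] (cipher) and xform->next (auth) ... */

	/* fails if an offload of this type is already attached */
	if (rte_pktmbuf_offload_attach(m, ol) == NULL) {
		rte_pktmbuf_offload_free(ol);
		return -1;
	}
	return 0;
}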

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |   6 +
 config/common_linuxapp                             |   6 +
 lib/Makefile                                       |   1 +
 lib/librte_mbuf/rte_mbuf.h                         |   6 +
 lib/librte_mbuf_offload/Makefile                   |  52 ++++
 lib/librte_mbuf_offload/rte_mbuf_offload.c         | 100 +++++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         | 289 +++++++++++++++++++++
 .../rte_mbuf_offload_version.map                   |   7 +
 mk/rte.app.mk                                      |   1 +
 9 files changed, 468 insertions(+)
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 8ce6af5..96d9d26 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -320,6 +320,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index e7b9b25..c113c88 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -328,6 +328,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/lib/Makefile b/lib/Makefile
index 4c5c1b4..ef172ea 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
+DIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += librte_mbuf_offload
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 689ef77..6b5c0c2 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -728,6 +728,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque rte_mbuf_offload structure declaration */
+struct rte_mbuf_offload;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -841,6 +844,9 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+
+	/** Chain of off-load operations to perform on mbuf */
+	struct rte_mbuf_offload *offload_ops;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
diff --git a/lib/librte_mbuf_offload/Makefile b/lib/librte_mbuf_offload/Makefile
new file mode 100644
index 0000000..acdb449
--- /dev/null
+++ b/lib/librte_mbuf_offload/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mbuf_offload.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+EXPORT_MAP := rte_mbuf_offload_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) := rte_mbuf_offload.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)-include := rte_mbuf_offload.h
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.c b/lib/librte_mbuf_offload/rte_mbuf_offload.c
new file mode 100644
index 0000000..5c0c9dd
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.c
@@ -0,0 +1,100 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+
+#include "rte_mbuf_offload.h"
+
+/** Initialize rte_mbuf_offload structure */
+static void
+rte_pktmbuf_offload_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_mbuf_offload *ol = _op_data;
+
+	memset(_op_data, 0, mp->elt_size);
+
+	ol->type = RTE_PKTMBUF_OL_NOT_SPECIFIED;
+	ol->mp = mp;
+}
+
+
+struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id)
+{
+	struct rte_pktmbuf_offload_pool_private *priv;
+	unsigned elt_size = sizeof(struct rte_mbuf_offload) + priv_size;
+
+
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+
+	if (mp != NULL) {
+		priv = (struct rte_pktmbuf_offload_pool_private *)
+				rte_mempool_get_priv(mp);
+
+		if (priv->offload_priv_size < priv_size ||
+				mp->elt_size != elt_size ||
+				mp->cache_size < cache_size ||
+				mp->size < size)
+			return NULL;
+
+		return mp;
+	}
+
+	mp = rte_mempool_create(
+			name,
+			size,
+			elt_size,
+			cache_size,
+			sizeof(struct rte_pktmbuf_offload_pool_private),
+			NULL,
+			NULL,
+			rte_pktmbuf_offload_init,
+			NULL,
+			socket_id,
+			0);
+
+	if (mp == NULL)
+		return NULL;
+
+	priv = (struct rte_pktmbuf_offload_pool_private *)
+			rte_mempool_get_priv(mp);
+
+	priv->offload_priv_size = priv_size;
+	return mp;
+}
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
new file mode 100644
index 0000000..0a59667
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -0,0 +1,289 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright 2014 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MBUF_OFFLOAD_H_
+#define _RTE_MBUF_OFFLOAD_H_
+
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** packet mbuf offload operation types */
+enum rte_mbuf_ol_op_type {
+	RTE_PKTMBUF_OL_NOT_SPECIFIED = 0,
+	/**< Off-load not specified */
+	RTE_PKTMBUF_OL_CRYPTO
+	/**< Crypto offload operation */
+};
+
+/**
+ * Generic packet mbuf offload.
+ * This is used to specify an offload operation to be performed on a rte_mbuf.
+ * Multiple offload operations can be chained to the same mbuf, but only a
+ * single offload operation of a particular type can be in the chain.
+ */
+struct rte_mbuf_offload {
+	struct rte_mbuf_offload *next;	/**< next offload in chain */
+	struct rte_mbuf *m;		/**< mbuf offload is attached to */
+	struct rte_mempool *mp;		/**< mempool offload allocated from */
+
+	enum rte_mbuf_ol_op_type type;	/**< offload type */
+	union {
+		struct rte_crypto_op crypto;	/**< Crypto operation */
+	} op;
+};
+
+/** private data structure belonging to a packet mbuf offload mempool */
+struct rte_pktmbuf_offload_pool_private {
+	uint16_t offload_priv_size;
+	/**< Size of private area in each mbuf_offload. */
+};
+
+
+/**
+ * Creates a mempool of rte_mbuf_offload objects
+ *
+ * @param	name		mempool name
+ * @param	size		number of objects in mempool
+ * @param	cache_size	cache size of objects for each core
+ * @param	priv_size	size of private data to be allocated with each
+ *				rte_mbuf_offload object
+ * @param	socket_id	Socket on which to allocate mempool objects
+ *
+ * @return
+ * - On success returns a valid mempool of rte_mbuf_offload objects
+ * - On failure return NULL
+ */
+extern struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id);
+
+
+/**
+ * Returns private data size allocated with each rte_mbuf_offload object by
+ * the mempool
+ *
+ * @param	mpool	rte_mbuf_offload mempool
+ *
+ * @return	private data size
+ */
+static inline uint16_t
+__rte_pktmbuf_offload_priv_size(struct rte_mempool *mpool)
+{
+	struct rte_pktmbuf_offload_pool_private *priv =
+			rte_mempool_get_priv(mpool);
+
+	return priv->offload_priv_size;
+}
+
+/**
+ * Get specified off-load operation type from mbuf.
+ *
+ * @param	m		packet mbuf.
+ * @param	type		offload operation type requested.
+ *
+ * @return
+ * - On success returns rte_mbuf_offload pointer
+ * - On failure returns NULL
+ *
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol;
+
+	/* walk the chain until an offload of the requested type is found */
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
+		if (ol->type == type)
+			return ol;
+
+	return NULL;
+}
+
+/**
+ * Attach a rte_mbuf_offload to a mbuf. We only support a single offload of any
+ * one type in our chain of offloads.
+ *
+ * @param	m	packet mbuf.
+ * @param	ol	rte_mbuf_offload structure to be attached
+ *
+ * @returns
+ * - On success returns the pointer to the offload we just added
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
+{
+	struct rte_mbuf_offload **ol_last;
+
+	for (ol_last = &m->offload_ops;	ol_last[0] != NULL;
+			ol_last = &ol_last[0]->next)
+		if (ol_last[0]->type == ol->type)
+			return NULL;
+
+	ol_last[0] = ol;
+	ol_last[0]->m = m;
+	ol_last[0]->next = NULL;
+
+	return ol_last[0];
+}
+
+
+/** Rearms rte_mbuf_offload default parameters */
+static inline void
+__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
+		enum rte_mbuf_ol_op_type type)
+{
+	ol->m = NULL;
+	ol->type = type;
+
+	switch (type) {
+	case RTE_PKTMBUF_OL_CRYPTO:
+		__rte_crypto_op_reset(&ol->op.crypto); break;
+	default:
+		break;
+	}
+}
+
+/** Allocate rte_mbuf_offload from mempool */
+static inline struct rte_mbuf_offload *
+__rte_pktmbuf_offload_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_mbuf_offload *)buf;
+}
+
+/**
+ * Allocate a rte_mbuf_offload with a specified operation type from
+ * rte_mbuf_offload mempool
+ *
+ * @param	mpool		rte_mbuf_offload mempool
+ * @param	type		offload operation type
+ *
+ * @returns
+ * - On success returns a valid rte_mbuf_offload structure
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_alloc(struct rte_mempool *mpool,
+		enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol = __rte_pktmbuf_offload_raw_alloc(mpool);
+
+	if (ol != NULL)
+		__rte_pktmbuf_offload_reset(ol, type);
+
+	return ol;
+}
+
+/**
+ * Free a rte_mbuf_offload structure back to its originating mempool.
+ *
+ * @param	ol	rte_mbuf_offload structure to free
+ */
+static inline void
+rte_pktmbuf_offload_free(struct rte_mbuf_offload *ol)
+{
+	if (ol->mp != NULL)
+		rte_mempool_put(ol->mp, ol);
+}
+
+/**
+ * Checks if the private data of a rte_mbuf_offload has enough capacity for
+ * the requested size
+ *
+ * @param	ol	rte_mbuf_offload structure
+ * @param	size	size of private data required
+ *
+ * @returns
+ * - if sufficient space available returns pointer to start of private data
+ * - if insufficient space returns NULL
+ */
+static inline void *
+__rte_pktmbuf_offload_check_priv_data_size(struct rte_mbuf_offload *ol,
+		uint16_t size)
+{
+	uint16_t priv_size;
+
+	if (likely(ol->mp != NULL)) {
+		priv_size = __rte_pktmbuf_offload_priv_size(ol->mp);
+
+		if (likely(priv_size >= size))
+			return (void *)(ol + 1);
+	}
+	return NULL;
+}
+
+/**
+ * Allocate space for crypto xforms in the private data space of the
+ * rte_mbuf_offload. This also initialises each xform type to
+ * RTE_CRYPTO_XFORM_NOT_SPECIFIED and links the xforms into a chain on
+ * the crypto operation.
+ *
+ * @return
+ * - On success returns pointer to first crypto xform in crypto operations chain
+ * - On failure returns NULL
+ */
+static inline struct rte_crypto_xform *
+rte_pktmbuf_offload_alloc_crypto_xforms(struct rte_mbuf_offload *ol,
+		unsigned nb_xforms)
+{
+	struct rte_crypto_xform *xform;
+	void *priv_data;
+	uint16_t size;
+
+	size = sizeof(struct rte_crypto_xform) * nb_xforms;
+	priv_data = __rte_pktmbuf_offload_check_priv_data_size(ol, size);
+
+	if (priv_data == NULL)
+		return NULL;
+
+	ol->op.crypto.xform = xform = (struct rte_crypto_xform *)priv_data;
+
+	/* lay the xforms out contiguously and link them into a
+	 * NULL-terminated chain */
+	do {
+		xform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;
+		xform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;
+	} while (xform);
+
+	return ol->op.crypto.xform;
+}
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MBUF_OFFLOAD_H_ */
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload_version.map b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
new file mode 100644
index 0000000..3d3b06a
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
@@ -0,0 +1,7 @@
+DPDK_2.2 {
+	global:
+
+	rte_pktmbuf_offload_pool_create;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 80f68bb..9b4aed3 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -112,6 +112,7 @@ ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),n)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)   += -lrte_mbuf_offload
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
-- 
2.4.3
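
A companion sketch for the dequeue side, again using only accessors
defined in this patch: after a crypto device has processed an mbuf, the
application can locate and release the attached offload. There is no
detach helper in this version, so resetting m->offload_ops directly
assumes the crypto offload was the only one attached:

#include <rte_mbuf_offload.h>

static void
release_crypto_offload(struct rte_mbuf *m)
{
	struct rte_mbuf_offload *ol =
		rte_pktmbuf_offload_get(m, RTE_PKTMBUF_OL_CRYPTO);

	if (ol == NULL)
		return;

	/* the result in ol->op.crypto could be inspected here */

	rte_pktmbuf_offload_free(ol);	/* returns it to its mempool */
	m->offload_ops = NULL;		/* see assumption above */
}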

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v3 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-10-30 16:08     ` Declan Doherty
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
                       ` (3 subsequent siblings)
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 16:08 UTC (permalink / raw)
  To: dev

This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.

This patch depends on a QAT PF driver for device initialization. See
the file doc/guides/cryptodevs/qat.rst for configuration details.

This patch supports a limited subset of QAT device functionality,
currently supporting chaining of cipher and hash operations for the
following algorithms:

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC

Some limitations of this patchset, which shall be addressed in a
subsequent release:
 - Chained mbufs are not supported.
 - Hash only is not supported.
 - Cipher only is not supported.
 - Only in-place is currently supported (destination address is
   the same as source address).
 - Only supports session-oriented API implementation (session-less
   APIs are not supported).
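
For orientation, a hedged sketch of building the cipher-then-hash
session this PMD consumes. Only the algorithm enums listed above, the
type/next members of rte_crypto_xform and the
rte_cryptodev_session_create symbol come from this series; the
xform-type enum values, the cipher/auth field names and the exact
session-create signature are assumptions:

/* assumed enum values and field names throughout, see note above */
static struct rte_cryptodev_session *
create_cipher_hash_session(uint8_t dev_id)
{
	struct rte_crypto_xform auth_xform = {
		.type = RTE_CRYPTO_XFORM_AUTH,		/* assumed */
		.next = NULL,
	};
	struct rte_crypto_xform cipher_xform = {
		.type = RTE_CRYPTO_XFORM_CIPHER,	/* assumed */
		.next = &auth_xform,
	};

	auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
	cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES128_CBC;
	/* key material for both xforms omitted for brevity */

	/* signature assumed; symbol exported in the version map */
	return rte_cryptodev_session_create(dev_id, &cipher_xform);
}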

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
---
 config/common_bsdapp                               |  14 +
 config/common_linuxapp                             |  14 +
 doc/guides/cryptodevs/index.rst                    |  42 ++
 doc/guides/cryptodevs/qat.rst                      | 195 +++++++
 doc/guides/index.rst                               |   1 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  37 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 174 ++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 +++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 ++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 306 +++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 125 +++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 597 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 559 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    | 117 ++++
 drivers/crypto/qat/qat_logs.h                      |  78 +++
 drivers/crypto/qat/qat_qp.c                        | 429 +++++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 130 +++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |   9 +-
 mk/rte.app.mk                                      |   3 +
 22 files changed, 3609 insertions(+), 8 deletions(-)
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 96d9d26..02f10a3 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -155,6 +155,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_MAX_QAT_SESSIONS=200
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index c113c88..3f33bc5 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -153,6 +153,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..1c31697
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,42 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+====================================
+
+|today|
+
+
+**Contents**
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..e417700
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,195 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator. The QAT PMD has
+currently been tested on Fedora 21 64-bit with gcc and on the 4.3
+kernel.org Linux kernel.
+
+
+Features
+--------
+QAT PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+To use the DPDK QAT PMD, an SRIOV-enabled QAT kernel driver is required.
+The VF devices exposed by this driver will be used by the QAT PMD.
+
+If you are running on kernel 4.3 or greater, see the instructions for
+"Installation using kernel.org driver" below. If you are on a kernel earlier
+than 4.3, see "Installation using 01.org QAT driver".
+
+Installation using 01.org QAT driver
+------------------------------------
+Download the latest QuickAssist Technology Driver from 01.org
+https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches
+Consult the Getting Started Guide at the same URL for further information.
+
+Steps below assume
+  * building on a platform with one DH895xCC device
+  * using package qatmux.l.2.3.0-34.tgz
+  * on Fedora21 kernel 3.17.4-301.fc21.x86_64
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, e.g. by running
+  *  "./installer.sh uninstall" in the directory where originally installed
+     or
+  *  "rmmod qat_dh895xcc; rmmod intel_qat"
+
+Build and install the SRIOV-enabled QAT driver
+
+.. code-block:: console
+
+    "mkdir /QAT; cd /QAT"
+    copy qatmux.l.2.3.0-34.tgz to this location
+    "tar zxof qatmux.l.2.3.0-34.tgz"
+    "export ICP_WITHOUT_IOMMU=1"
+    "./installer.sh install QAT1.6 host"
+
+You can use "cat /proc/icp_dh895xcc_dev0/version" to confirm the driver is correctly installed.
+You can use "lspci -d:443" to confirm the bdf of the 32 VF devices are available per DH895xCC device.
+
+To complete the installation - follow instructions in "Binding VFs to the DPDK UIO"
+
+Compiling the 01.org driver - notes:
+If using a later kernel and the build fails with an error relating to strict_strtoull not being available, patch the following file:
+
+.. code-block:: console
+
+  /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+  + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+  + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  + #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+  #else
+  #define STR_TO_64(str, base, num, endPtr)                                 \
+       do {                                                               \
+             if (str[0] == '-')                                           \
+             {                                                            \
+                  *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+             }else {                                                      \
+                  *(num) = simple_strtoull((str), &(endPtr), (base));      \
+             }                                                            \
+       } while(0)
+  + #endif
+  #endif
+  #endif
+
+
+If the build fails due to missing header files you may need to do the following:
+  *  sudo yum install zlib-devel
+  *  sudo yum install openssl-devel
+
+If the build or install fails due to mismatching kernel sources you may need to do the following:
+  *  sudo yum install kernel-headers-`uname -r`
+  *  sudo yum install kernel-src-`uname -r`
+  *  sudo yum install kernel-devel-`uname -r`
+
+Installation using kernel.org driver
+------------------------------------
+
+Assuming you are running on at least a 4.3 kernel, you can use the stock kernel.org QAT
+driver to start the QAT hardware.
+
+Steps below assume
+  * running DPDK on a platform with one DH895xCC device
+  * on a kernel at least version 4.3
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Ensure the QAT driver is loaded on your system, by executing::
+
+    lsmod | grep qat
+
+You should see the following output::
+
+    qat_dh895xcc            5626  0
+    intel_qat              82336  1 qat_dh895xcc
+
+Next, you need to expose the VFs using the sysfs file system.
+
+First find the bdf of the DH895xCC device::
+
+    lspci -d:435
+
+You should see output similar to::
+
+    03:00.0 Co-processor: Intel Corporation Coleto Creek PCIe Endpoint
+
+Using the sysfs, enable the VFs::
+
+    echo 32 > /sys/bus/pci/drivers/dh895xcc/0000\:03\:00.0/sriov_numvfs
+
+If you get an error, it's likely you're using a QAT kernel driver earlier than kernel 4.3.
+
+To verify that the VFs are available for use, use "lspci -d:443" to confirm
+the bdfs of the 32 VF devices are available per DH895xCC device.
+
+To complete the installation - follow instructions in "Binding VFs to the DPDK UIO"
+
+
+Binding the available VFs to the DPDK UIO driver
+------------------------------------------------
+The unbind command below assumes bdfs of 03:01.00-03:04.07; if yours are different, adjust the command accordingly.
+
+Make available to DPDK
+
+.. code-block:: console
+
+   cd $RTE_SDK (see http://dpdk.org/doc/quick-start to install DPDK)
+   "modprobe uio"
+   "insmod ./build/kmod/igb_uio.ko"
+   "for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n 0000:03:0${device}.${fn} > /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind;done ;done"
+   "echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id"
+
+You can use "lspci -vvd:443" to confirm that all devices are now in use by igb_uio kernel driver
+
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..9529f30
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,37 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..c9e3459
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,174 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) \
+			= (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
+#endif
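
Note on the accessors above: every ring CSR read/write resolves to the
same flat offset into the bank's CSR space; a minimal sketch of the
shared addressing (helper name hypothetical, not part of this patch):

    /* Each bundle (bank) occupies ADF_RING_BUNDLE_SIZE (0x1000) bytes
     * of CSR space and each per-ring register is 4 bytes wide. */
    static inline uint32_t
    ring_csr_offset(uint32_t bank, uint32_t csr, uint32_t ring)
    {
            return (ADF_RING_BUNDLE_SIZE * bank) + csr + (ring << 2);
    }

WRITE_CSR_RING_BASE is the one multi-access macro: the 64-bit ring base
does not fit a single 32-bit CSR, so it is split across the LBASE/UBASE
register pair inside a do/while block.
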
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..cc96d45
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
+
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
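
QAT_FIELD_SET/QAT_FIELD_GET are the building blocks for every flag
accessor in this header; a short usage sketch for the request-valid bit
(bit 7 of hdr_flags):

    struct icp_qat_fw_comn_req_hdr hdr = { 0 };

    /* mask the value and shift it into bit 7 of hdr_flags */
    ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr, ICP_QAT_FW_COMN_REQ_FLAG_SET);
    /* ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr) now evaluates to 1 */
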
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..7671465
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#endif
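
The NEXT_ID/CURR_ID accessors above pack two 4-bit slice ids into each
next_curr_id byte; chaining them is how a combined content descriptor
tells the firmware which processing slice runs next. A sketch of the
linkage for a cipher-then-hash request (descriptor setup omitted):

    struct icp_qat_fw_cipher_auth_cd_ctrl_hdr cd_ctrl = { 0 };

    /* cipher slice runs first, then hands off to the auth slice */
    ICP_QAT_FW_CIPHER_CURR_ID_SET(&cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
    ICP_QAT_FW_CIPHER_NEXT_ID_SET(&cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
    /* auth slice completes and the result is written back to DRAM */
    ICP_QAT_FW_AUTH_CURR_ID_SET(&cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
    ICP_QAT_FW_AUTH_NEXT_ID_SET(&cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
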
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..4d8fe38
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,306 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	(((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	(((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	 (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	(algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & ~((n) - 1))
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	(((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
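
The two CONFIG_BUILD macros pack the per-slice setup words that sit at
the head of each algorithm block; a sketch of the words a session for
AES-128-CBC with HMAC-SHA1 would build (constants all from this header):

    /* cipher: mode in bits 4..7, algo in bits 0..3, direction in bit 8,
     * key-convert in bit 9 */
    uint32_t cipher_cfg = ICP_QAT_HW_CIPHER_CONFIG_BUILD(
            ICP_QAT_HW_CIPHER_CBC_MODE, ICP_QAT_HW_CIPHER_ALGO_AES128,
            ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_ENCRYPT);

    /* auth: mode 1 (HMAC), SHA1, 20-byte compare length in bits 8..14 */
    uint32_t auth_cfg = ICP_QAT_HW_AUTH_CONFIG_BUILD(
            ICP_QAT_HW_AUTH_MODE1, ICP_QAT_HW_AUTH_ALGO_SHA1,
            ICP_QAT_HW_SHA1_STATE1_SZ);
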
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..fb3a685
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,125 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf bufers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t add_auth_data_length,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
+
+#endif
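
QAT_AES_HW_CONFIG_CBC_ENC/_DEC above capture the one asymmetry of
AES-CBC on this hardware: the decrypt path requests a firmware key
conversion while the encrypt path does not. A sketch:

    uint32_t enc = QAT_AES_HW_CONFIG_CBC_ENC(ICP_QAT_HW_CIPHER_ALGO_AES128);
    uint32_t dec = QAT_AES_HW_CONFIG_CBC_DEC(ICP_QAT_HW_CIPHER_ALGO_AES128);
    /* enc: NO_CONVERT + ENCRYPT; dec: KEY_CONVERT + DECRYPT */
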
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..d8c44a1
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,597 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+	* Redistributions of source code must retain the above copyright
+	  notice, this list of conditions and the following disclaimer.
+	* Redistributions in binary form must reproduce the above copyright
+	  notice, this list of conditions and the following disclaimer in
+	  the documentation and/or other materials provided with the
+	  distribution.
+	* Neither the name of Intel Corporation nor the names of its
+	  contributors may be used to endorse or promote products derived
+	  from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+#include <openssl/aes.h>	/* Needed to calculate pre-compute values */
+
+/* Returns the per-algorithm state1 size in bytes, as used for the
+ * state1 size field in cd_ctrl: the digest size rounded up to the
+ * nearest quadword. */
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_GALOIS_128_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+}
+
+/* Returns the digest size in bytes for the given hash algorithm. */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+}
+
+/* Returns the block size in bytes for the given hash algorithm. */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+		return 16;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+}
+
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
+	uint8_t digest[qat_hash_get_digest_size(
+			ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+#define HASH_XCBC_PRECOMP_KEY_NUM 3
+
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		static uint8_t qat_aes_xcbc_key_seed[
+					ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ] = {
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+		};
+
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		int x;
+		AES_KEY enc_key;
+
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ, 16);
+		if (in == NULL)
+			return -ENOMEM;
+		rte_memcpy(in, qat_aes_xcbc_key_seed,
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+		for (x = 0; x < HASH_XCBC_PRECOMP_KEY_NUM; x++) {
+			if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+				&enc_key) != 0) {
+				rte_free(in -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ));
+				memset(out -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ),
+					0, ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+				return -EFAULT;
+			}
+			AES_encrypt(in, out, &enc_key);
+			in += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+			out += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+		}
+		*p_state_len = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		rte_free(in - x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+		return 0;
+	} else if ((hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		AES_KEY enc_key;
+
+		memset(p_state_buf, 0, ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ);
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_GALOIS_H_SZ, 16);
+		if (in == NULL)
+			return -ENOMEM;
+		/* rte_zmalloc() already zeroes the buffer */
+		if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+			&enc_key) != 0) {
+			rte_free(in);
+			return -EFAULT;
+		}
+		AES_encrypt(in, out, &enc_key);
+		*p_state_len = ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		rte_free(in);
+		return 0;
+	}
+
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	}
+	rte_memcpy(ipad, auth_key, auth_keylen);
+	rte_memcpy(opad, auth_key, auth_keylen);
+
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/*
+	 * The state length is a multiple of 8, so it may be larger than the
+	 * digest. Put the partial hash of opad state_len bytes after state1.
+	 */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/*  don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
+
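+/*
+ * Initialise the fields of the firmware request header that are
+ * common to every lookaside (LA) request issued by this driver.
+ */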
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+			uint8_t *cipherkey, uint32_t cipherkeylen,
+			uint8_t *authkey, uint32_t authkeylen,
+			uint32_t add_auth_data_length,
+			uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t proto = ICP_QAT_FW_LA_NO_PROTO; /* no CCM/GCM/Snow3G */
+	uint16_t state1_size = 0;
+	uint16_t state2_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* CD setup */
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(
+			cdesc->qat_mode, cdesc->qat_cipher_alg, key_convert,
+			cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	/* Do precomputes */
+	if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(XCBC)precompute failed");
+			return -EFAULT;
+		}
+	} else if ((cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			cipherkey, cipherkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_GALOIS_128_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(GCM)precompute failed");
+			return -EFAULT;
+		}
+		/*
+		 * Write the AAD length into bytes 16-19 of state2 in
+		 * big-endian format; the field itself is 8 bytes wide.
+		 */
+		*(uint32_t *)&(hash->sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ]) =
+			rte_bswap32(add_auth_data_length);
+		proto = ICP_QAT_FW_LA_GCM_PROTO;
+	} else {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1),
+			&state1_size)) {
+			PMD_DRV_LOG(ERR, "(SHA)precompute failed");
+			return -EFAULT;
+		}
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	/* Configure the common header protocol flags */
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, proto);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+	hash_cd_ctrl->inner_state1_sz = state1_size;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		hash_cd_ctrl->inner_state2_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		hash_cd_ctrl->inner_state1_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_GALOIS_H_SZ +
+						ICP_QAT_HW_GALOIS_LEN_A_SZ +
+						ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_GALOIS_128_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_GALOIS_128_STATE1_SZ);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8))
+					>> 3);
+	auth_param->auth_res_sz = digestsize;
+
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+		PMD_DRV_LOG(ERR, "invalid param, only authenticated "
+				"encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..b3665be
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,559 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+#include <rte_mbuf_offload.h>
+#include <rte_hexdump.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t shift);
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+
+void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
+		void *session)
+{
+	struct qat_session *sess = session;
+
+	PMD_INIT_FUNC_TRACE();
+	if (sess != NULL) {
+		phys_addr_t cd_paddr = sess->cd_paddr;
+
+		memset(sess, 0, qat_crypto_sym_get_session_private_size(dev));
+		sess->cd_paddr = cd_paddr;
+	}
+}
+
+static int
+qat_get_cmd_id(const struct rte_crypto_xform *xform)
+{
+	/* Cipher only - not yet supported */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER && xform->next == NULL)
+		return -1; /* would be ICP_QAT_FW_LA_CMD_CIPHER */
+
+	/* Authentication only - not yet supported */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH && xform->next == NULL)
+		return -1; /* would be ICP_QAT_FW_LA_CMD_AUTH */
+
+	/* Cipher then Authenticate */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+			xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+
+	/* Authenticate then Cipher */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+
+	return -1;
+}
+
+static struct rte_crypto_auth_xform *
+qat_get_auth_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_AUTH)
+			return &xform->auth;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+qat_get_cipher_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_CIPHER)
+			return &xform->cipher;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+
+void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	struct qat_session *session = session_private;
+
+	struct rte_crypto_auth_xform *auth_xform = NULL;
+	struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+	int qat_cmd_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get requested QAT command id */
+	qat_cmd_id = qat_get_cmd_id(xform);
+	if (qat_cmd_id < 0 || qat_cmd_id >= ICP_QAT_FW_LA_CMD_DELIMITER) {
+		PMD_DRV_LOG(ERR, "Unsupported xform chain requested");
+		goto error_out;
+	}
+	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+
+	/* Get cipher xform from crypto xform chain */
+	cipher_xform = qat_get_cipher_xform(xform);
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+				cipher_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Cipher specified %u\n",
+				cipher_xform->algo);
+		goto error_out;
+	}
+
+	if (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+
+	/* Get authentication xform from Crypto xform chain */
+	auth_xform = qat_get_auth_xform(xform);
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		break;
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				auth_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				auth_xform->algo);
+		goto error_out;
+	}
+
+	if (qat_alg_aead_session_create_content_desc(session,
+		cipher_xform->key.data,
+		cipher_xform->key.length,
+		auth_xform->key.data,
+		auth_xform->key.length,
+		auth_xform->add_auth_data_length,
+		auth_xform->digest_length))
+		goto error_out;
+
+	return (struct rte_cryptodev_session *)session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+unsigned qat_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev __rte_unused)
+{
+	return RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+}
+
+
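+/*
+ * Enqueue a burst of crypto requests onto the hardware TX ring.
+ * Ring slots are reserved up-front via the atomic inflights16 counter,
+ * and any reservation that cannot be honoured is returned before the
+ * request messages are built.
+ */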
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	register struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	register uint32_t nb_pkts_sent = 0;
+	register struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	register int ret;
+	uint16_t nb_pkts_possible = nb_pkts;
+	register uint8_t *base_addr;
+	register uint32_t tail;
+	int overflow;
+
+	/* read params used a lot in main loop into registers */
+	queue = &(tmp_qp->tx_q);
+	base_addr = (uint8_t *)queue->base_addr;
+	tail = queue->tail;
+
+	/* Find how many can actually fit on the ring */
+	overflow = (rte_atomic16_add_return(&tmp_qp->inflights16, nb_pkts)
+				- queue->max_inflights);
+	if (overflow > 0) {
+		rte_atomic16_sub(&tmp_qp->inflights16, overflow);
+		nb_pkts_possible = nb_pkts - overflow;
+		if (nb_pkts_possible == 0)
+			return 0;
+	}
+
+	while (nb_pkts_sent != nb_pkts_possible) {
+
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			base_addr + tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			/* Release the inflight slots reserved for the
+			 * messages that will not be sent. */
+			rte_atomic16_sub(&tmp_qp->inflights16,
+					nb_pkts_possible - nb_pkts_sent);
+			if (nb_pkts_sent == 0)
+				return 0;
+			goto kick_tail;
+		}
+
+		tail = adf_modulo(tail + queue->msg_size, queue->modulo);
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, tail);
+	queue->tail = tail;
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
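+/*
+ * Dequeue completed responses from the hardware RX ring, mapping the
+ * firmware status of each response onto the crypto operation attached
+ * to the corresponding mbuf.
+ */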
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct rte_mbuf_offload *ol;
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
+	resp_msg = (struct icp_qat_fw_comn_resp *)
+			((uint8_t *)queue->base_addr + queue->head);
+
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG &&
+			msg_counter != nb_pkts) {
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		ol = rte_pktmbuf_offload_get(rx_mbuf, RTE_PKTMBUF_OL_CRYPTO);
+
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+					resp_msg->comn_hdr.comn_status)) {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		} else {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+				queue->msg_size,
+				ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)
+					((uint8_t *)queue->base_addr +
+							queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
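+/*
+ * Translate the crypto offload operation attached to an mbuf into a
+ * firmware request message, written directly into the TX ring slot.
+ */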
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	register struct icp_qat_fw_la_bulk_req *qat_req;
+
+	ol = rte_pktmbuf_offload_get(mbuf, RTE_PKTMBUF_OL_CRYPTO);
+	if (unlikely(ol == NULL)) {
+		PMD_DRV_LOG(ERR, "No valid crypto off-load operation attached "
+				"to (%p) mbuf.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
+				" requests mbuf (%p) is sessionless.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.session->type != RTE_CRYPTODEV_QAT_PMD)) {
+		PMD_DRV_LOG(ERR, "Session was not created for this device");
+		return -EINVAL;
+	}
+
+	ctx = (struct qat_session *)ol->op.crypto.session->_private;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length =
+			qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr =
+			qat_req->comn_mid.src_data_addr =
+					rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = ol->op.crypto.data.to_cipher.length;
+	cipher_param->cipher_offset = ol->op.crypto.data.to_cipher.offset;
+	if (ol->op.crypto.iv.length &&
+		(ol->op.crypto.iv.length <=
+				sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array,
+				ol->op.crypto.iv.data, ol->op.crypto.iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = ol->op.crypto.iv.phys_addr;
+	}
+	if (ol->op.crypto.digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = ol->op.crypto.digest.phys_addr;
+	}
+	auth_param->auth_off = ol->op.crypto.data.to_hash.offset;
+	auth_param->auth_len = ol->op.crypto.data.to_hash.length;
+	auth_param->u1.aad_adr = ol->op.crypto.additional_auth.phys_addr;
+
+	/* For GCM, the AAD length (240 max) is stored here after precompute */
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+		ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
+		auth_param->u2.aad_sz =
+		ALIGN_POW2_ROUNDUP(ctx->cd.hash.sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ + 3], 16);
+	}
+	auth_param->hash_state_sz = (auth_param->u2.aad_sz) >> 3;
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+	rte_hexdump(stdout, "qat_req:", qat_req,
+			sizeof(struct icp_qat_fw_la_bulk_req));
+#endif
+	return 0;
+}
+
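+/* Cheap modulo for the power-of-2 ring sizes used by the hardware:
+ * returns data % (2^shift) without a divide. */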
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
+
+void qat_crypto_sym_session_init(struct rte_mempool *mp, void *priv_sess)
+{
+	struct qat_session *s = priv_sess;
+
+	PMD_INIT_FUNC_TRACE();
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return 0;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+int qat_dev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = qat_crypto_sym_qp_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void qat_dev_info_get(__rte_unused struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *info)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_queue_pairs =
+				ADF_NUM_SYM_QPS_PER_BUNDLE *
+				ADF_NUM_BUNDLES_PER_DEV;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
+
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..0fbec73
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,117 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev_pmd.h>
+#include <rte_memzone.h>
+
+/*	This macro rounds up a number to be a multiple of
+ *	the alignment when the alignment is a power of 2    */
+#define ALIGN_POW2_ROUNDUP(num, align) \
+	(((num) + (align) - 1) & ~((align) - 1))
+
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	modulo;
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;
+	/* HW queue aka ring offset on bundle */
+};
+
+struct qat_qp {
+	void			*mmap_bar_addr;
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+int qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev,
+	uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+
+extern unsigned
+qat_crypto_sym_get_session_private_size(struct rte_cryptodev *dev);
+
+extern void
+qat_crypto_sym_session_init(struct rte_mempool *mempool, void *priv_sess);
+
+extern void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+extern void
+qat_crypto_sym_clear_session(struct rte_cryptodev *dev, void *session);
+
+
+uint16_t
+qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..a909f63
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..4398a62
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,429 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+/* Offset from bundle start to 1st Sym Tx queue */
+#define ADF_SYM_TX_QUEUE_STARTOFF		2
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static void qat_queue_delete(struct qat_queue *queue);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
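+/*
+ * Reserve (or re-use, if a compatible one already exists) a physically
+ * contiguous memzone for a hardware ring, aligned to the ring size as
+ * the device requires.
+ */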
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+			int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
+		}
+
+		PMD_DRV_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
+	switch (ms[0].hugepage_sz) {
+	case(RTE_PGSIZE_2M):
+		memzone_flags = RTE_MEMZONE_2MB;
+		break;
+	case(RTE_PGSIZE_1G):
+		memzone_flags = RTE_MEMZONE_1GB;
+		break;
+	case(RTE_PGSIZE_16M):
+		memzone_flags = RTE_MEMZONE_16MB;
+		break;
+	case(RTE_PGSIZE_16G):
+		memzone_flags = RTE_MEMZONE_16GB;
+		break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+	}
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(queue_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(queue_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		ret = qat_crypto_sym_qp_release(dev, queue_pair_id);
+		if (ret < 0)
+			return ret;
+	}
+
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	if (dev->pci_dev->mem_resource[0].addr == NULL) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space "
+				"(UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >=
+			(ADF_NUM_SYM_QPS_PER_BUNDLE *
+					ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device",
+				queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp metadata",
+			sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_INIT_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%hu", queue_pair_id);
+		qat_queue_delete(&(qp->tx_q));
+		goto create_err;
+	}
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	return 0;
+
+create_err:
+	rte_free(qp);
+	return (-EFAULT);
+}
+
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_qp *qp =
+			(struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return 0;
+	}
+
+	/* Don't free memory if there are still responses to be processed */
+	if (rte_atomic16_read(&(qp->inflights16)) != 0)
+		return -EAGAIN;
+
+	/* Stop the arbiter servicing the TX ring before freeing it */
+	adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+	qat_queue_delete(&(qp->tx_q));
+	qat_queue_delete(&(qp->rx_q));
+	rte_free(qp);
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+	return 0;
+}
+
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
+static void qat_queue_delete(struct qat_queue *queue)
+{
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		PMD_DRV_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL) {
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			PMD_DRV_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		PMD_DRV_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc*desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+			socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr,
+			queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					" 0x%"PRIx64"\n",
+					queue->base_phys_addr);
+		return -EFAULT;
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc, &(queue->queue_size))
+			!= 0) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	queue->modulo = ADF_RING_SIZE_MODULO(queue->queue_size);
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u modulo %u",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights,
+				queue->modulo);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
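+/* The hardware requires the ring base to be naturally aligned, i.e.
+ * aligned to the ring size in bytes. */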
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return (-EINVAL);
+}
+
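+/* Set/clear this TX ring's bit in the bundle's ring-service-arbiter
+ * enable CSR, which controls whether the hardware schedules requests
+ * from the ring. */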
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..63cb5fc
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,3 @@
+DPDK_2.0 {
+	local: *;
+};
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..49a936f
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,130 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
+static struct rte_cryptodev_ops crypto_qat_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+		/* Crypto related operations */
+		.session_get_size	= qat_crypto_sym_get_session_private_size,
+		.session_configure	= qat_crypto_sym_configure_session,
+		.session_initialize	= qat_crypto_sym_session_init,
+		.session_clear		= qat_crypto_sym_clear_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
+		{
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_drv,
+			struct rte_cryptodev *cryptodev)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+	/*
+	 * For secondary processes, we don't initialise any further as the
+	 * primary has already done this work.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
+
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
index 0a59667..de8aec2 100644
--- a/lib/librte_mbuf_offload/rte_mbuf_offload.h
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -122,17 +122,10 @@ rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
 {
 	struct rte_mbuf_offload *ol = m->offload_ops;
 
-	if (m->offload_ops != NULL && m->offload_ops->type == type)
-		return ol;
-
-	ol = m->offload_ops;
-	while (ol != NULL) {
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
 		if (ol->type == type)
 			return ol;
 
-		ol = ol->next;
-	}
-
 	return ol;
 }
 
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9b4aed3..5d960cd 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -145,6 +145,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v3 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
                       ` (2 preceding siblings ...)
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-10-30 16:08     ` Declan Doherty
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 16:08 UTC (permalink / raw)
  To: dev

This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.

This PMD depends on Intel's multi-buffer library; see the whitepaper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors" (ref 1) for details on the library's design, and ref 2 to
download the library itself. This initial implementation is limited to
supporting the chained operations of "hash then cipher" or "cipher then
hash" (a sketch of such a chain is shown after the lists below) for the
following cipher and hash algorithms:

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC
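
A minimal sketch of such a chain using the new cryptodev xform API
(illustrative values only; key material and error handling omitted):

  struct rte_crypto_xform auth_xform = {
  	.type = RTE_CRYPTO_XFORM_AUTH,
  	.next = NULL,
  	.auth = { .algo = RTE_CRYPTO_AUTH_SHA1_HMAC },
  };

  struct rte_crypto_xform cipher_xform = {
  	.type = RTE_CRYPTO_XFORM_CIPHER,
  	.next = &auth_xform,	/* "cipher then hash" chaining */
  	.cipher = {
  		.algo = RTE_CRYPTO_CIPHER_AES128_CBC,
  		.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
  	},
  };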

Important Note:
Because the multi-buffer library is designed for accelerating IPsec
crypto operations, the digests generated for the HMAC functions are
truncated to the lengths specified by the IPsec RFCs; for example,
RFC 2404 specifies that when HMAC-SHA-1 is used with IPsec the digest
is truncated from 20 to 12 bytes.
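
In terms of the auth xform sketched above, this truncation is requested
by setting (illustrative):

  auth_xform.auth.digest_length = 12;	/* RFC 2404 length, not SHA-1's 20 */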

Build instructions:
To build DPDK with the AESNI_MB_PMD the user is required to download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where you extracted and built the multi
buffer library, and finally set CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in
config/common_linuxapp.

Current status: the PMD does not support crypto operations across
chained mbufs, nor cipher-only or hash-only operations.
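
As an illustrative sketch only (not part of this patch; the key buffers
and their lengths are placeholders, and the field names follow the xform
API used in the code below), a "cipher then hash" chain for AES128-CBC
with HMAC-SHA1 would be constructed roughly as follows:

  uint8_t aes_key[16], hmac_key[20];    /* placeholder key buffers */

  struct rte_crypto_xform auth_xform = {
          .type = RTE_CRYPTO_XFORM_AUTH,
          .next = NULL,
          .auth = {
                  .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
                  .key = { .data = hmac_key, .length = sizeof(hmac_key) }
          }
  };

  struct rte_crypto_xform cipher_xform = {
          .type = RTE_CRYPTO_XFORM_CIPHER,
          .next = &auth_xform,    /* cipher first, then hash */
          .cipher = {
                  .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
                  .algo = RTE_CRYPTO_CIPHER_AES_CBC,
                  .key = { .data = aes_key, .length = sizeof(aes_key) }
          }
  };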

ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p

ref 2: https://downloadcenter.intel.com/download/22972

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |   7 +
 config/common_linuxapp                             |   6 +
 doc/guides/cryptodevs/aesni_mb.rst                 |  76 ++
 doc/guides/cryptodevs/index.rst                    |   1 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/aesni_mb/Makefile                   |  67 ++
 drivers/crypto/aesni_mb/aesni_mb_ops.h             | 212 ++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 790 +++++++++++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     | 296 ++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 230 ++++++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |   3 +
 mk/rte.app.mk                                      |   4 +
 12 files changed, 1693 insertions(+)
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 02f10a3..5a177c3 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -168,6 +168,13 @@ CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_MAX_QAT_SESSIONS=200
 
+
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+
 #
 # Support NIC bypass logic
 #
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 3f33bc5..621e787 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,6 +166,12 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
 
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS=2048
+
 #
 # Support NIC bypass logic
 #
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
new file mode 100644
index 0000000..826b632
--- /dev/null
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -0,0 +1,76 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AES-NI Multi Buffer Crypto Poll Mode Driver
+============================================
+
+
+The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver
+support utilising Intel's multi-buffer library, described in the white paper
+`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
+<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-paper.html?wapkw=multi+buffer>`_.
+
+The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+AESNI MB PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To build DPDK with the AESNI_MB_PMD the user is required to download the library
+from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on
+their system before building DPDK. The environment variable
+AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where you extracted
+and built the multi buffer library, and finally set
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in config/common_linuxapp.
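+
+For example, assuming the library was extracted and built under
+``/opt/aesni_mb`` (an illustrative path)::
+
+    export AESNI_MULTI_BUFFER_LIB_PATH=/opt/aesni_mb
+    make config T=x86_64-native-linuxapp-gcc
+    make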
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 1c31697..8949fd0 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,4 +39,5 @@ Crypto Device Drivers
     :maxdepth: 2
     :numbered:
 
+    aesni_mb
     qat
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 9529f30..26325b0 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -31,6 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile
new file mode 100644
index 0000000..62f51ce
--- /dev/null
+++ b/drivers/crypto/aesni_mb/Makefile
@@ -0,0 +1,67 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_aesni_mb.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_aesni_version.map
+
+# external library include paths
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c
+
+# export include files
+SYMLINK-y-include +=
+
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
new file mode 100644
index 0000000..3d15a68
--- /dev/null
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -0,0 +1,212 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AESNI_MB_OPS_H_
+#define _AESNI_MB_OPS_H_
+
+#ifndef LINUX
+#define LINUX
+#endif
+
+#include <mb_mgr.h>
+#include <aux_funcs.h>
+#include <gcm_defines.h>
+
+enum aesni_mb_vector_mode {
+	RTE_AESNI_MB_NOT_SUPPORTED = 0,
+	RTE_AESNI_MB_SSE,
+	RTE_AESNI_MB_AVX,
+	RTE_AESNI_MB_AVX2
+};
+
+typedef void (*md5_one_block_t)(void *data, void *digest);
+typedef void (*sha1_one_block_t)(void *data, void *digest);
+typedef void (*sha224_one_block_t)(void *data, void *digest);
+typedef void (*sha256_one_block_t)(void *data, void *digest);
+typedef void (*sha384_one_block_t)(void *data, void *digest);
+typedef void (*sha512_one_block_t)(void *data, void *digest);
+
+typedef void (*aes_keyexp_128_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_128_enc_t)(void *key, void *enc_exp_keys);
+typedef void (*aes_keyexp_192_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_256_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+typedef void (*aes_xcbc_expand_key_t)(void *key, void *exp_k1, void *k2, void *k3);
+
+typedef void (*aesni_gcm_t)(gcm_data *my_ctx_data, u8 *out, const u8 *in,
+		u64 plaintext_len, u8 *iv, const u8 *aad, u64 aad_len,
+		u8 *auth_tag, u64 auth_tag_len);
+
+typedef void (*aesni_gcm_precomp_t)(gcm_data *my_ctx_data, u8 *hash_subkey);
+
+/** Multi-buffer library function pointer table */
+struct aesni_mb_ops {
+	struct {
+		init_mb_mgr_t init_mgr;		/**< Initialise scheduler  */
+		get_next_job_t get_next;	/**< Get next free job structure */
+		submit_job_t submit;		/**< Submit job to scheduler */
+		get_completed_job_t get_completed_job; /**< Get completed job */
+		flush_job_t flush_job;		/**< flush jobs from manager */
+	} job; /**< multi buffer manager functions */
+	struct {
+		struct {
+			md5_one_block_t md5;		/**< MD5 one block hash */
+			sha1_one_block_t sha1;		/**< SHA1 one block hash */
+			sha224_one_block_t sha224;	/**< SHA224 one block hash */
+			sha256_one_block_t sha256;	/**< SHA256 one block hash */
+			sha384_one_block_t sha384;	/**< SHA384 one block hash */
+			sha512_one_block_t sha512;	/**< SHA512 one block hash */
+		} one_block; /**< one block hash functions */
+		struct {
+			aes_keyexp_128_t aes128;	/**< AES128 key expansions */
+			aes_keyexp_128_enc_t aes128_enc;/**< AES128 enc key expansion */
+			aes_keyexp_192_t aes192;	/**< AES192 key expansions */
+			aes_keyexp_256_t aes256;	/**< AES256 key expansions */
+			aes_xcbc_expand_key_t aes_xcbc;	/**< AES XCBC key expansions */
+		} keyexp;	/**< Key expansion functions */
+	} aux; /**< Auxiliary functions */
+	struct {
+		aesni_gcm_t enc;		/**< GCM encode */
+		aesni_gcm_t dec;		/**< GCM decode */
+		aesni_gcm_precomp_t precomp;	/**< GCM pre-compute */
+	} gcm; /**< GCM functions */
+};
+
+
+static const struct aesni_mb_ops job_ops[] = {
+		[RTE_AESNI_MB_NOT_SUPPORTED] = {
+			.job = { NULL },
+			.aux = {
+				.one_block = { NULL },
+				.keyexp = { NULL }
+			},
+			.gcm = { NULL }
+		},
+		[RTE_AESNI_MB_SSE] = {
+			.job = {
+				init_mb_mgr_sse,
+				get_next_job_sse,
+				submit_job_sse,
+				get_completed_job_sse,
+				flush_job_sse
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_sse,
+					sha1_one_block_sse,
+					sha224_one_block_sse,
+					sha256_one_block_sse,
+					sha384_one_block_sse,
+					sha512_one_block_sse
+				},
+				.keyexp = {
+					aes_keyexp_128_sse,
+					aes_keyexp_128_enc_sse,
+					aes_keyexp_192_sse,
+					aes_keyexp_256_sse,
+					aes_xcbc_expand_key_sse
+				}
+			},
+			.gcm = {
+				aesni_gcm_enc_sse,
+				aesni_gcm_dec_sse,
+				aesni_gcm_precomp_sse
+			}
+		},
+		[RTE_AESNI_MB_AVX] = {
+				.job = {
+					init_mb_mgr_avx,
+					get_next_job_avx,
+					submit_job_avx,
+					get_completed_job_avx,
+					flush_job_avx
+				},
+				.aux = {
+					.one_block = {
+						md5_one_block_avx,
+						sha1_one_block_avx,
+						sha224_one_block_avx,
+						sha256_one_block_avx,
+						sha384_one_block_avx,
+						sha512_one_block_avx
+					},
+					.keyexp = {
+						aes_keyexp_128_avx,
+						aes_keyexp_128_enc_avx,
+						aes_keyexp_192_avx,
+						aes_keyexp_256_avx,
+						aes_xcbc_expand_key_avx
+					}
+				},
+				.gcm = {
+					aesni_gcm_enc_avx_gen2,
+					aesni_gcm_dec_avx_gen2,
+					aesni_gcm_precomp_avx_gen2
+				}
+		},
+		[RTE_AESNI_MB_AVX2] = {
+				.job = {
+					init_mb_mgr_avx2,
+					get_next_job_avx2,
+					submit_job_avx2,
+					get_completed_job_avx2,
+					flush_job_avx2
+				},
+				.aux = {
+					.one_block = {
+						md5_one_block_avx2,
+						sha1_one_block_avx2,
+						sha224_one_block_avx2,
+						sha256_one_block_avx2,
+						sha384_one_block_avx2,
+						sha512_one_block_avx2
+					},
+					.keyexp = {
+						aes_keyexp_128_avx2,
+						aes_keyexp_128_enc_avx2,
+						aes_keyexp_192_avx2,
+						aes_keyexp_256_avx2,
+						aes_xcbc_expand_key_avx2
+					}
+				},
+				.gcm = {
+					aesni_gcm_enc_avx_gen4,
+					aesni_gcm_dec_avx_gen4,
+					aesni_gcm_precomp_avx_gen4
+				}
+		},
+};
+
+
+#endif /* _AESNI_MB_OPS_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
new file mode 100644
index 0000000..e469f6d
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -0,0 +1,790 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_mbuf_offload.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/**
+ * Global static parameter used to create a unique name for each AES-NI multi
+ * buffer crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+typedef void (*hash_one_block_t)(void *data, void *digest);
+typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+/**
+ * Calculate the authentication pre-computes
+ *
+ * @param one_block_hash	Function pointer to calculate digest on ipad/opad
+ * @param ipad			Inner pad output byte array
+ * @param opad			Outer pad output byte array
+ * @param hkey			Authentication key
+ * @param hkey_len		Authentication key length
+ * @param blocksize		Block size of selected hash algo
+ */
+static void
+calculate_auth_precomputes(hash_one_block_t one_block_hash,
+		uint8_t *ipad, uint8_t *opad,
+		uint8_t *hkey, uint16_t hkey_len,
+		uint16_t blocksize)
+{
+	unsigned i, length;
+
+	uint8_t ipad_buf[blocksize] __rte_aligned(16);
+	uint8_t opad_buf[blocksize] __rte_aligned(16);
+
+	/* Setup inner and outer pads */
+	memset(ipad_buf, HMAC_IPAD_VALUE, blocksize);
+	memset(opad_buf, HMAC_OPAD_VALUE, blocksize);
+
+	/* XOR hash key with inner and outer pads */
+	length = hkey_len > blocksize ? blocksize : hkey_len;
+
+	for (i = 0; i < length; i++) {
+		ipad_buf[i] ^= hkey[i];
+		opad_buf[i] ^= hkey[i];
+	}
+
+	/* Compute partial hashes */
+	(*one_block_hash)(ipad_buf, ipad);
+	(*one_block_hash)(opad_buf, opad);
+
+	/* Clean up stack */
+	memset(ipad_buf, 0, blocksize);
+	memset(opad_buf, 0, blocksize);
+}
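+
+/*
+ * Note: the pads computed above implement the standard HMAC construction
+ * (RFC 2104): HMAC(key, msg) = H((key ^ opad) || H((key ^ ipad) || msg)).
+ * Caching H(key ^ ipad) and H(key ^ opad) per session means only the
+ * message blocks need to be hashed on the per-packet data path.
+ */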
+
+/** Get xform chain order */
+static int
+aesni_mb_get_chain_order(const struct rte_crypto_xform *xform)
+{
+	/* multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained
+	 * operations, all other options are invalid, so we must have exactly
+	 * 2 xform structs chained together */
+	if (xform->next == NULL || xform->next->next != NULL)
+		return -1;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return HASH_CIPHER;
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+				xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return CIPHER_HASH;
+
+	return -1;
+}
+
+/** Set session authentication parameters */
+static int
+aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	hash_one_block_t hash_oneblock_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_AUTH) {
+		MB_LOG_ERR("Crypto xform struct not of type auth");
+		return -1;
+	}
+
+	/* Set Authentication Parameters */
+	if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC) {
+		sess->auth.algo = AES_XCBC;
+		(*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+				sess->auth.xcbc.k1_expanded,
+				sess->auth.xcbc.k2, sess->auth.xcbc.k3);
+		return 0;
+	}
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			xform->auth.key.data,
+			xform->auth.key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+/** Set session cipher parameters */
+static int
+aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	aes_keyexp_t aes_keyexp_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_CIPHER) {
+		MB_LOG_ERR("Crypto xform struct not of type cipher");
+		return -1;
+	}
+
+	/* Select cipher direction */
+	switch (xform->cipher.op) {
+	case RTE_CRYPTO_CIPHER_OP_ENCRYPT:
+		sess->cipher.direction = ENCRYPT;
+		break;
+	case RTE_CRYPTO_CIPHER_OP_DECRYPT:
+		sess->cipher.direction = DECRYPT;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher operation parameter");
+		return -1;
+	}
+
+
+	/* Select cipher mode */
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		sess->cipher.mode = CBC;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher mode parameter");
+		return -1;
+	}
+
+	/* Check key length and choose key expansion function */
+	switch (xform->cipher.key.length) {
+	case AES_128_BYTES:
+		sess->cipher.key_length_in_bytes = AES_128_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+		break;
+	case AES_192_BYTES:
+		sess->cipher.key_length_in_bytes = AES_192_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+		break;
+	case AES_256_BYTES:
+		sess->cipher.key_length_in_bytes = AES_256_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher key length");
+		return -1;
+	}
+
+	/* Expanded cipher keys */
+	(*aes_keyexp_fn)(xform->cipher.key.data,
+			sess->cipher.expanded_aes_keys.encode,
+			sess->cipher.expanded_aes_keys.decode);
+
+	return 0;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (aesni_mb_get_chain_order(xform)) {
+	case HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	case CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		auth_xform = xform->next;
+		cipher_xform = xform;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported operation chain order parameter");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
+		MB_LOG_ERR("Invalid/unsupported authentication parameters");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_cipher_parameters(mb_ops, sess, cipher_xform)) {
+		MB_LOG_ERR("Invalid/unsupported cipher parameters");
+		return -1;
+	}
+	return 0;
+}
+
+/** Get multi buffer session */
+static struct aesni_mb_session *
+aesni_mb_get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *crypto_op)
+{
+	struct aesni_mb_session *sess;
+
+	if (crypto_op->type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(crypto_op->session->type !=
+				RTE_CRYPTODEV_AESNI_MB_PMD))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)crypto_op->session->_private;
+	} else  {
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)c_sess->_private;
+
+		if (unlikely(aesni_mb_set_session_parameters(qp->mb_ops,
+				sess, crypto_op->xform) != 0))
+			return NULL;
+
+		/* attach the session to the crypto op so that it can be
+		 * returned to the mempool during post processing */
+		crypto_op->session = c_sess;
+	}
+
+	return sess;
+}
+
+/**
+ * Process a crypto operation and complete a JOB_AES_HMAC job structure for
+ * submission to the multi buffer library for processing.
+ *
+ * @param	qp	queue pair
+ * @param	m	mbuf to process
+ * @param	c_op	crypto operation to process
+ * @param	session	active multi-buffer session to use
+ *
+ * @return
+ * - Completed JOB_AES_HMAC structure pointer on success
+ * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible
+ */
+static JOB_AES_HMAC *
+process_mb_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	JOB_AES_HMAC *job;
+
+	job = (*qp->mb_ops->job.get_next)(&qp->mb_mgr);
+	if (unlikely(job == NULL))
+		return job;
+
+	/* Set crypto operation */
+	job->chain_order = session->chain_order;
+
+	/* Set cipher parameters */
+	job->cipher_direction = session->cipher.direction;
+	job->cipher_mode = session->cipher.mode;
+
+	job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes;
+	job->aes_enc_key_expanded = session->cipher.expanded_aes_keys.encode;
+	job->aes_dec_key_expanded = session->cipher.expanded_aes_keys.decode;
+
+
+	/* Set authentication parameters */
+	job->hash_alg = session->auth.algo;
+	if (job->hash_alg == AES_XCBC) {
+		job->_k1_expanded = session->auth.xcbc.k1_expanded;
+		job->_k2 = session->auth.xcbc.k2;
+		job->_k3 = session->auth.xcbc.k3;
+	} else {
+		job->hashed_auth_key_xor_ipad = session->auth.pads.inner;
+		job->hashed_auth_key_xor_opad = session->auth.pads.outer;
+	}
+
+	/* Mutable crypto operation parameters */
+
+	/* Set digest output location */
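+	/* For decrypt/verify chains the digest is computed into scratch
+	 * space appended to the mbuf; post processing compares it against
+	 * the supplied digest and then trims the scratch area. */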
+	if (job->cipher_direction == DECRYPT) {
+		job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m,
+				get_digest_byte_length(job->hash_alg));
+
+		if (job->auth_tag_output)
+			memset(job->auth_tag_output, 0,
+				get_digest_byte_length(job->hash_alg));
+		else
+			return NULL;
+	} else {
+		job->auth_tag_output = c_op->digest.data;
+	}
+
+	/* The multi-buffer library currently only supports returning a
+	 * truncated digest length, as specified in the relevant IPsec RFCs */
+	job->auth_tag_output_len_in_bytes =
+			get_truncated_digest_byte_length(job->hash_alg);
+
+	/* Set IV parameters */
+	job->iv = c_op->iv.data;
+	job->iv_len_in_bytes = c_op->iv.length;
+
+	/* Data parameters */
+	job->src = rte_pktmbuf_mtod(m, uint8_t *);
+	job->dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset;
+	job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length;
+
+	job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset;
+	job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length;
+
+	/* Set user data to be crypto operation data struct */
+	job->user_data = m;
+	job->user_data2 = c_op;
+
+	return job;
+}
+
+/**
+ * Process a crypto operation using the AES-NI GCM routines; GCM operations
+ * are processed immediately rather than being submitted to the multi
+ * buffer job manager.
+ *
+ * @param	qp	queue pair
+ * @param	m	mbuf to process
+ * @param	c_op	crypto operation to process
+ * @param	session	active GCM session to use
+ *
+ * @return
+ * - 0 on success, -1 on failure to append the digest scratch area
+ */
+static int
+process_gcm_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	uint8_t *src, *dst;
+
+	src = rte_pktmbuf_mtod(m, uint8_t *) + c_op->data.to_cipher.offset;
+	dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	if (session->cipher.direction == ENCRYPT) {
+
+		(*qp->mb_ops->gcm.enc)(&session->gdata, dst, src,
+				(uint64_t)c_op->data.to_cipher.length,
+				c_op->iv.data,
+				c_op->additional_auth.data,
+				(uint64_t)c_op->additional_auth.length,
+				c_op->digest.data,
+				(uint64_t)c_op->digest.length);
+	} else {
+		uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(m,
+				c_op->digest.length);
+
+		if (!auth_tag)
+			return -1;
+
+		(*qp->mb_ops->gcm.dec)(&session->gdata, dst, src,
+				(uint64_t)c_op->data.to_cipher.length,
+				c_op->iv.data,
+				c_op->additional_auth.data,
+				(uint64_t)c_op->additional_auth.length,
+				auth_tag,
+				(uint64_t)c_op->digest.length);
+	}
+
+	return 0;
+}
+
+/**
+ * Post process a completed job and return the rte_mbuf that the job
+ * processed
+ *
+ * @param	qp	queue pair the job was processed on
+ * @param	job	JOB_AES_HMAC job to post process
+ *
+ * @return
+ * - Returns processed mbuf which is trimmed of output digest used in
+ * verification of supplied digest in the case of a HASH_CIPHER operation
+ * - Returns NULL on invalid job
+ */
+static struct rte_mbuf *
+post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m;
+	struct rte_crypto_op *c_op;
+
+	if (job->user_data == NULL)
+		return NULL;
+
+	/* handle retrieved job */
+	m = (struct rte_mbuf *)job->user_data;
+	c_op = (struct rte_crypto_op *)job->user_data2;
+
+	/* check if job has been processed  */
+	if (unlikely(job->status != STS_COMPLETED)) {
+		c_op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return m;
+	} else if (job->chain_order == HASH_CIPHER) {
+		/* Verify digest if required */
+		if (memcmp(job->auth_tag_output, c_op->digest.data,
+				job->auth_tag_output_len_in_bytes) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		else
+			c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg));
+	} else {
+		c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	}
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	return m;
+}
+
+/**
+ * Process a completed JOB_AES_HMAC job and keep processing jobs until
+ * get_completed_job returns NULL
+ *
+ * @param qp		Queue Pair to process
+ * @param job		JOB_AES_HMAC job
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_mb_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m = NULL;
+	unsigned processed_jobs = 0;
+
+	while (job) {
+		processed_jobs++;
+		m = post_process_mb_job(qp, job);
+		if (m)
+			rte_ring_enqueue(qp->processed_pkts, (void *)m);
+		else
+			qp->qp_stats.dequeue_err_count++;
+
+		job = (*qp->mb_ops->job.get_completed_job)(&qp->mb_mgr);
+	}
+
+	return processed_jobs;
+}
+
+/**
+ * Post process a completed GCM crypto operation, verifying the digest if
+ * required, and return the rte_mbuf the operation was performed on
+ *
+ * @param	m	mbuf the operation was performed on
+ * @param	c_op	completed crypto operation
+ *
+ * @return
+ * - Returns processed mbuf which is trimmed of the digest scratch area
+ * used in verification of the supplied digest for decrypt operations
+ */
+static struct rte_mbuf *
+post_process_gcm_crypto_op(struct rte_mbuf *m, struct rte_crypto_op *c_op)
+{
+	struct aesni_mb_session *session =
+			(struct aesni_mb_session *)c_op->session->_private;
+
+	/* Verify digest if required */
+	if (session->cipher.direction == DECRYPT) {
+
+		uint8_t *auth_tag = rte_pktmbuf_mtod_offset(m, uint8_t *,
+				m->data_len - c_op->digest.length);
+
+		if (memcmp(auth_tag, c_op->digest.data,
+				c_op->digest.length) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		else
+			c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, c_op->digest.length);
+	} else {
+		c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	}
+
+	return m;
+}
+
+/**
+ * Post process a completed GCM request and enqueue the resulting mbuf on
+ * the queue pair's processed packets ring
+ *
+ * @param	qp	queue pair the operation was processed on
+ * @param	m	mbuf the operation was performed on
+ * @param	c_op	completed crypto operation
+ *
+ * @return
+ * - 0 on success
+ */
+static unsigned
+handle_completed_gcm_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op)
+{
+	m = post_process_gcm_crypto_op(m, c_op);
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	rte_ring_enqueue(qp->processed_pkts, (void *)m);
+
+	return 0;
+}
+
+static uint16_t
+aesni_mb_pmd_enqueue_burst(void *queue_pair, struct rte_mbuf **bufs,
+		uint16_t nb_bufs)
+{
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *c_op;
+
+	struct aesni_mb_session *sess;
+	struct aesni_mb_qp *qp = queue_pair;
+	JOB_AES_HMAC *job = NULL;
+
+	int i, retval, processed_jobs = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+		ol = rte_pktmbuf_offload_get(bufs[i], RTE_PKTMBUF_OL_CRYPTO);
+		if (unlikely(ol == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+		c_op = &ol->op.crypto;
+
+		sess = aesni_mb_get_session(qp, c_op);
+		if (unlikely(sess == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		if (sess->gcm_session) {
+			retval = process_gcm_crypto_op(qp, bufs[i], c_op, sess);
+			if (retval < 0) {
+				qp->qp_stats.enqueue_err_count++;
+				goto flush_jobs;
+			}
+
+			handle_completed_gcm_crypto_op(qp, bufs[i], c_op);
+			processed_jobs++;
+		} else {
+			job = process_mb_crypto_op(qp, bufs[i], c_op, sess);
+			if (unlikely(job == NULL)) {
+				qp->qp_stats.enqueue_err_count++;
+				goto flush_jobs;
+			}
+
+			/* Submit Job */
+			job = (*qp->mb_ops->job.submit)(&qp->mb_mgr);
+
+			/* If submit returns a processed job then handle it,
+			 * before submitting subsequent jobs */
+			if (job)
+				processed_jobs +=
+					handle_completed_mb_jobs(qp, job);
+		}
+	}
+
+	if (processed_jobs == 0)
+		goto flush_jobs;
+
+	qp->qp_stats.enqueued_count += processed_jobs;
+	return i;
+
+flush_jobs:
+	/* if we haven't processed any jobs in submit loop, then flush jobs
+	 * queue to stop the output stalling */
+	job = (*qp->mb_ops->job.flush_job)(&qp->mb_mgr);
+	if (job)
+		qp->qp_stats.enqueued_count +=
+				handle_completed_mb_jobs(qp, job);
+
+	return i;
+}
+
+static uint16_t
+aesni_mb_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+
+static int cryptodev_aesni_mb_uninit(const char *name);
+
+static int
+cryptodev_aesni_mb_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct aesni_mb_private *internals;
+	enum aesni_mb_vector_mode vector_mode;
+
+	/* Check CPU for support for AES instruction set */
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
+		MB_LOG_ERR("AES instructions not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_MB_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_MB_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_MB_SSE;
+	else {
+		MB_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		MB_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct aesni_mb_private), socket_id);
+	if (dev == NULL) {
+		MB_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	dev->dev_ops = rte_aesni_mb_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
+	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+
+	/* Set vector instructions mode supported */
+	internals = dev->data->dev_private;
+
+	internals->vector_mode = vector_mode;
+	internals->max_nb_qpairs = AESNI_MB_MAX_NB_QUEUE_PAIRS;
+
+	return dev->data->dev_id;
+init_error:
+	MB_LOG_ERR("driver %s: cryptodev_aesni_create failed", name);
+
+	cryptodev_aesni_mb_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_aesni_mb_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_aesni_mb_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_aesni_mb_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_aesni_mb_pmd_drv = {
+	.name = CRYPTODEV_NAME_AESNI_MB_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_aesni_mb_init,
+	.uninit = cryptodev_aesni_mb_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
new file mode 100644
index 0000000..41b8d04
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -0,0 +1,296 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/** Configure device */
+static int
+aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+aesni_mb_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_queue_pairs = internals->max_nb_qpairs;
+	}
+}
+
+/** Release queue pair */
+static int
+aesni_mb_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair based on its name, dev_id and qp_id */
+static int
+aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct aesni_mb_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"aesni_mb_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n > sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed packets on */
+static struct rte_ring *
+aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			MB_LOG_INFO("Reusing existing ring %s for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		MB_LOG_ERR("Unable to reuse existing ring %s for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct aesni_mb_qp *qp = NULL;
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		aesni_mb_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (aesni_mb_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->mb_ops = &job_ops[internals->vector_mode];
+
+	qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	/* Initialise multi-buffer manager */
+	(*qp->mb_ops->job.init_mgr)(&qp->mb_mgr);
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+aesni_mb_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the aesni multi-buffer session structure */
+static unsigned
+aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct aesni_mb_session);
+}
+
+/** Configure an aesni multi-buffer session from a crypto xform chain */
+static void *
+aesni_mb_pmd_session_configure(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform,	void *sess)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (unlikely(sess == NULL)) {
+		MB_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+			sess, xform) != 0) {
+		MB_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+aesni_mb_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/* Currently just resetting the whole data structure; need to
+	 * investigate whether a more selective reset of the key material
+	 * would be more performant */
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
+}
+
+struct rte_cryptodev_ops aesni_mb_pmd_ops = {
+		.dev_configure		= aesni_mb_pmd_config,
+		.dev_start		= aesni_mb_pmd_start,
+		.dev_stop		= aesni_mb_pmd_stop,
+		.dev_close		= aesni_mb_pmd_close,
+
+		.stats_get		= aesni_mb_pmd_stats_get,
+		.stats_reset		= aesni_mb_pmd_stats_reset,
+
+		.dev_infos_get		= aesni_mb_pmd_info_get,
+
+		.queue_pair_setup	= aesni_mb_pmd_qp_setup,
+		.queue_pair_release	= aesni_mb_pmd_qp_release,
+		.queue_pair_start	= aesni_mb_pmd_qp_start,
+		.queue_pair_stop	= aesni_mb_pmd_qp_stop,
+		.queue_pair_count	= aesni_mb_pmd_qp_count,
+
+		.session_get_size	= aesni_mb_pmd_session_get_size,
+		.session_configure	= aesni_mb_pmd_session_configure,
+		.session_clear		= aesni_mb_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
+
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
new file mode 100644
index 0000000..e5e317b
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -0,0 +1,230 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_
+#define _RTE_AESNI_MB_PMD_PRIVATE_H_
+
+#include "aesni_mb_ops.h"
+
+#define MB_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_AESNI_MB_DEBUG
+#define MB_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#define MB_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define MB_LOG_INFO(fmt, args...)
+#define MB_LOG_DBG(fmt, args...)
+#endif
+
+#define AESNI_MB_NAME_MAX_LENGTH	(64)
+#define AESNI_MB_MAX_NB_QUEUE_PAIRS	(4)
+
+#define HMAC_IPAD_VALUE			(0x36)
+#define HMAC_OPAD_VALUE			(0x5C)
+
+static const unsigned auth_blocksize[] = {
+		[MD5]		= 64,
+		[SHA1]		= 64,
+		[SHA_224]	= 64,
+		[SHA_256]	= 64,
+		[SHA_384]	= 128,
+		[SHA_512]	= 128,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the blocksize in bytes for a specified authentication algorithm
+ *
+ * @note: this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_auth_algo_blocksize(JOB_HASH_ALG algo)
+{
+	return auth_blocksize[algo];
+}
+
+static const unsigned auth_truncated_digest_byte_lengths[] = {
+		[MD5]		= 12,
+		[SHA1]		= 12,
+		[SHA_224]	= 14,
+		[SHA_256]	= 16,
+		[SHA_384]	= 24,
+		[SHA_512]	= 32,
+		[AES_XCBC]	= 12,
+};
+
+/**
+ * Get the IPsec specified truncated length in bytes of the HMAC digest for a
+ * specified authentication algorithm
+ *
+ * @note: this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_truncated_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_truncated_digest_byte_lengths[algo];
+}
+
+static const unsigned auth_digest_byte_lengths[] = {
+		[MD5]		= 16,
+		[SHA1]		= 20,
+		[SHA_224]	= 28,
+		[SHA_256]	= 32,
+		[SHA_384]	= 48,
+		[SHA_512]	= 64,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the output digest size in bytes for a specified authentication algorithm
+ *
+ * @note: this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_digest_byte_lengths[algo];
+}
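+
+/* For example, HMAC-SHA-1 produces a 20 byte digest
+ * (auth_digest_byte_lengths[SHA1]), which is truncated to the 12 byte
+ * value required by RFC 2404 (auth_truncated_digest_byte_lengths[SHA1]). */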
+
+
+/** private data structure for each virtual AESNI device */
+struct aesni_mb_private {
+	enum aesni_mb_vector_mode vector_mode;
+
+	unsigned max_nb_qpairs;
+};
+
+struct aesni_mb_qp {
+	uint16_t id;				/**< Queue Pair Identifier */
+	char name[AESNI_MB_NAME_MAX_LENGTH];	/**< Unique Queue Pair Name */
+	const struct aesni_mb_ops *mb_ops;	/**< Architecture dependent
+						 * function pointer table of
+						 * the multi-buffer APIs */
+	MB_MGR mb_mgr;				/**< Multi-buffer instance */
+	struct rte_ring *processed_pkts;	/**< Ring for placing processed packets */
+
+	struct rte_mempool *sess_mp;		/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** AES-NI multi-buffer private session structure */
+struct aesni_mb_session {
+	JOB_CHAIN_ORDER chain_order;
+
+	unsigned gcm_session:1;
+
+	/** Cipher Parameters */
+	struct {
+		/** Cipher direction - encrypt / decrypt */
+		JOB_CIPHER_DIRECTION direction;
+		/** Cipher mode - CBC / Counter */
+		JOB_CIPHER_MODE mode;
+
+		uint64_t key_length_in_bytes;
+
+		struct {
+			uint32_t encode[60] __rte_aligned(16);
+			/**< encode key */
+			uint32_t decode[60] __rte_aligned(16);
+			/**< decode key */
+		} expanded_aes_keys;
+		/**< Expanded AES keys - Allocating space to
+		 * contain the maximum expanded key size which
+		 * is 240 bytes for 256 bit AES, calculate by:
+		 * ((key size (bytes)) *
+		 * ((number of rounds) + 1)) */
+	} cipher;
+
+	union {
+		/** Authentication Parameters */
+		struct {
+			JOB_HASH_ALG algo; /**< Authentication Algorithm */
+			union {
+				struct {
+					uint8_t inner[128] __rte_aligned(16);
+					/**< inner pad */
+					uint8_t outer[128] __rte_aligned(16);
+					/**< outer pad */
+				} pads;
+				/**< HMAC Authentication pads -
+				 * allocating space for the maximum pad
+				 * size supported which is 128 bytes for
+				 * SHA512 */
+
+				struct {
+				    uint32_t k1_expanded[44] __rte_aligned(16);
+				    /**< k1 (expanded key). */
+				    uint8_t k2[16] __rte_aligned(16);
+				    /**< k2. */
+				    uint8_t k3[16] __rte_aligned(16);
+				    /**< k3. */
+				} xcbc;
+				/**< Expanded XCBC authentication keys */
+			};
+		} auth;
+
+		/** GCM parameters */
+		struct gcm_data gdata;
+	};
+} __rte_cache_aligned;
+
+
+/** Parse crypto xform chain and set private session parameters */
+extern int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
+
+
+
+#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
+
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
new file mode 100644
index 0000000..ad607bb
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5d960cd..6255d4e 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -148,6 +148,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 
+# AESNI MULTI BUFFER is dependent on the IPSec_MB library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v3 5/6] app/test: add cryptodev unit and performance tests
  2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
                       ` (3 preceding siblings ...)
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-10-30 16:08     ` Declan Doherty
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 6/6] l2fwd-crypto: crypto Declan Doherty
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 16:08 UTC (permalink / raw)
  To: dev

Unit tests are run by using cryptodev_qat_autotest or
cryptodev_aesni_autotest from the test app's interactive console.

Performance tests are run by using the cryptodev_qat_perftest or
cryptodev_aesni_mb_perftest command from the test app's interactive
console.

If you wish to run the tests on a QAT device there must be one bound
to the igb_uio kernel driver.
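
For example, from the test app's interactive console (the session below
is illustrative):

  RTE>>cryptodev_aesni_mb_perftest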

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
---
 app/test/Makefile                  |    3 +
 app/test/test.c                    |   92 +-
 app/test/test.h                    |   34 +-
 app/test/test_cryptodev.c          | 1968 ++++++++++++++++++++++++++++++++++++
 app/test/test_cryptodev.h          |   68 ++
 app/test/test_cryptodev_perf.c     | 1449 ++++++++++++++++++++++++++
 app/test/test_link_bonding.c       |    6 +-
 app/test/test_link_bonding_mode4.c |    7 +-
 8 files changed, 3582 insertions(+), 45 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c

diff --git a/app/test/Makefile b/app/test/Makefile
index 294618f..b7de576 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -140,6 +140,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += test_link_bonding.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += test_link_bonding_mode4.c
 endif
 
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
diff --git a/app/test/test.c b/app/test/test.c
index e8992f4..e58f266 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -159,51 +159,81 @@ main(int argc, char **argv)
 int
 unit_test_suite_runner(struct unit_test_suite *suite)
 {
-	int retval, i = 0;
+	int test_success;
+	unsigned total = 0, executed = 0, skipped = 0, succeeded = 0, failed = 0;
 
-	if (suite->suite_name)
-		printf("Test Suite : %s\n", suite->suite_name);
+	if (suite->suite_name) {
+		printf(" + ------------------------------------------------------- +\n");
+		printf(" + Test Suite : %s\n", suite->suite_name);
+	}
 
 	if (suite->setup)
-		if (suite->setup() != 0)
-			return -1;
+		if (suite->setup() != 0) {
+			/* count a suite setup failure as a failed run so the
+			 * runner still returns -1 to the caller */
+			failed++;
-
-	while (suite->unit_test_cases[i].testcase) {
-		/* Run test case setup */
-		if (suite->unit_test_cases[i].setup) {
-			retval = suite->unit_test_cases[i].setup();
-			if (retval != 0)
-				return retval;
-		}
+			goto suite_summary;
+		}
 
-		/* Run test case */
-		if (suite->unit_test_cases[i].testcase() == 0) {
-			printf("TestCase %2d: %s\n", i,
-					suite->unit_test_cases[i].success_msg ?
-					suite->unit_test_cases[i].success_msg :
-					"passed");
-		}
-		else {
-			printf("TestCase %2d: %s\n", i, suite->unit_test_cases[i].fail_msg ?
-					suite->unit_test_cases[i].fail_msg :
-					"failed");
-			return -1;
+	printf(" + ------------------------------------------------------- +\n");
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (!suite->unit_test_cases[total].enabled) {
+			skipped++;
+			total++;
+			continue;
+		} else {
+			executed++;
 		}
 
-		/* Run test case teardown */
-		if (suite->unit_test_cases[i].teardown) {
-			retval = suite->unit_test_cases[i].teardown();
-			if (retval != 0)
-				return retval;
+		/* run test case setup */
+		if (suite->unit_test_cases[total].setup)
+			test_success = suite->unit_test_cases[total].setup();
+		else
+			test_success = TEST_SUCCESS;
+
+		if (test_success == TEST_SUCCESS) {
+			/* run the test case */
+			test_success = suite->unit_test_cases[total].testcase();
+			if (test_success == TEST_SUCCESS)
+				succeeded++;
+			else
+				failed++;
+		} else {
+			failed++;
 		}
 
-		i++;
+		/* run the test case teardown */
+		if (suite->unit_test_cases[total].teardown)
+			suite->unit_test_cases[total].teardown();
+
+		if (test_success == TEST_SUCCESS)
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].success_msg ?
+					suite->unit_test_cases[total].success_msg :
+					"passed");
+		else
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].fail_msg ?
+					suite->unit_test_cases[total].fail_msg :
+					"failed");
+
+		total++;
 	}
 
 	/* Run test suite teardown */
 	if (suite->teardown)
-		if (suite->teardown() != 0)
-			return -1;
+		suite->teardown();
+
+suite_summary:
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary \n");
+	printf(" + Tests Total :       %2d\n", total);
+	printf(" + Tests Skipped :     %2d\n", skipped);
+	printf(" + Tests Executed :    %2d\n", executed);
+	printf(" + Tests Passed :      %2d\n", succeeded);
+	printf(" + Tests Failed :      %2d\n", failed);
+	printf(" + ------------------------------------------------------- +\n");
+
+	if (failed)
+		return -1;
 
 	return 0;
 }
diff --git a/app/test/test.h b/app/test/test.h
index 62eb51d..a2fba60 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -33,7 +33,7 @@
 
 #ifndef _TEST_H_
 #define _TEST_H_
-
+#include <stddef.h>
 #include <sys/queue.h>
 
 #define TEST_SUCCESS  (0)
@@ -64,6 +64,17 @@
 		}                                                        \
 } while (0)
 
+
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len, msg, ...) do {	\
+	if (memcmp(a, b, len)) {                                        \
+		printf("TestCase %s() line %d failed: "              \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__);    \
+		TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);    \
+		return TEST_FAILED;                                  \
+	}                                                        \
+} while (0)
+
+
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
 			printf("TestCase %s() line %d failed: "              \
@@ -113,27 +124,36 @@
 
 struct unit_test_case {
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	int (*testcase)(void);
 	const char *success_msg;
 	const char *fail_msg;
+	unsigned enabled;
 };
 
-#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed"}
+#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed", 1 }
 
 #define TEST_CASE_NAMED(name, fn) { NULL, NULL, fn, name " succeeded", \
-		name " failed"}
+		name " failed", 1 }
 
 #define TEST_CASE_ST(setup, teardown, testcase)         \
 		{ setup, teardown, testcase, #testcase " succeeded",    \
-		#testcase " failed "}
+		#testcase " failed ", 1 }
+
+
+#define TEST_CASE_DISABLED(fn) { NULL, NULL, fn, #fn " succeeded", \
+	#fn " failed", 0 }
+
+#define TEST_CASE_ST_DISABLED(setup, teardown, testcase)         \
+		{ setup, teardown, testcase, #testcase " succeeded",    \
+		#testcase " failed ", 0 }
 
-#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL }
+#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL, 0 }
 
 struct unit_test_suite {
 	const char *suite_name;
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	struct unit_test_case unit_test_cases[];
 };
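
To illustrate the extended structures above, a suite combining enabled
and disabled cases could be declared as follows (the test case and
setup/teardown names here are placeholders, not functions added by this
patch):

	static struct unit_test_suite example_testsuite = {
		.suite_name = "Example Unit Test Suite",
		.setup = example_suite_setup,
		.teardown = example_suite_teardown,
		.unit_test_cases = {
			TEST_CASE_ST(ut_setup, ut_teardown, test_example),
			TEST_CASE_ST_DISABLED(ut_setup, ut_teardown,
					test_not_yet_supported),
			TEST_CASES_END() /* must terminate the array */
		}
	};

unit_test_suite_runner() counts a disabled entry in the skipped total
without invoking its setup, testcase or teardown callbacks.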
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
new file mode 100644
index 0000000..1adde8b
--- /dev/null
+++ b/app/test/test_cryptodev.c
@@ -0,0 +1,1968 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mbuf_offload.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+static enum rte_cryptodev_type gbl_cryptodev_type;
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_ol_pool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *op;
+
+	struct rte_mbuf *obuf, *ibuf;
+
+	uint8_t *digest;
+};
+
+/*
+ * Forward declarations.
+ */
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_param);
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+	char *dst;
+
+	/* check the allocation succeeded before touching the buffer */
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+
+	dst = rte_pktmbuf_append(m, t_len);
+	if (dst == NULL) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+
+	rte_memcpy(dst, string, t_len);
+
+	return m;
+}
+
+#if HEX_DUMP
+static void
+hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
+{
+	rte_hexdump(f, title, rte_pktmbuf_mtod(m, const void *), m->data_len);
+}
+#endif
+
+static struct rte_mbuf *
+process_crypto_request(uint8_t dev_id, struct rte_mbuf *ibuf)
+{
+	struct rte_mbuf *obuf = NULL;
+#if HEX_DUMP
+	hexdump_mbuf_data(stdout, "Enqueued Packet", ibuf);
+#endif
+
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &ibuf, 1) != 1) {
+		printf("Error sending packet for encryption");
+		return NULL;
+	}
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &obuf, 1) == 0)
+		rte_pause();
+
+#if HEX_DUMP
+	if (obuf)
+		hexdump_mbuf_data(stdout, "Dequeued Packet", obuf);
+#endif
+
+	return obuf;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, dev_id = 0;
+	uint16_t qp_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	if (ts_params->mbuf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+				"CRYPTO_MBUFPOOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+				rte_socket_id());
+		if (ts_params->mbuf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"MBUF_OFFLOAD_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS * sizeof(struct rte_crypto_xform),
+			rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(
+				RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of"
+					" pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found\n");
+		return TEST_FAILED;
+	}
+
+	/* Create list of valid crypto devs */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_type) {
+			ts_params->valid_devs[ts_params->valid_dev_count++] = i;
+		}
+	}
+
+	if (ts_params->valid_dev_count < 1)
+		return TEST_FAILED;
+
+	/* Set up all the qps on the first of the valid devices found */
+	for (i = 0; i < 1; i++) {
+		dev_id = ts_params->valid_devs[i];
+
+		/* Since we can't free and re-allocate queue memory always set
+		 * the queues on this device up to max size first so enough
+		 * memory is allocated for any later re-configures needed by
+		 * other tests */
+
+		ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+		ts_params->conf.socket_id = SOCKET_ID_ANY;
+		ts_params->conf.session_mp.nb_objs =
+				(gbl_cryptodev_type == RTE_CRYPTODEV_QAT_PMD) ?
+					RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+					RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+		TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+				&ts_params->conf),
+				"Failed to configure cryptodev %u with %u qps",
+				dev_id, ts_params->conf.nb_queue_pairs);
+
+		ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+		for (qp_id = 0; qp_id < MAX_NUM_QPS_PER_QAT_DEVICE; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+					dev_id, qp_id, &ts_params->qp_conf,
+					rte_cryptodev_socket_id(dev_id)),
+					"Failed to setup queue pair %u on "
+					"cryptodev %u",
+					qp_id, dev_id);
+		}
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_pool));
+	}
+
+
+	if (ts_params->mbuf_ol_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_ol_pool));
+	}
+
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	uint16_t qp_id;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	/* Now reconfigure queues to size we actually want to use in this
+	 * test suite. */
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs ; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0], qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+	}
+
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	/* free crypto session structure */
+	if (ut_params->sess) {
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	/* free mbufs - obuf and ibuf usually point at the same mbuf, so
+	 * clear both references after the first free to avoid freeing the
+	 * same mbuf twice */
+	if (ut_params->obuf) {
+		rte_pktmbuf_free(ut_params->obuf);
+		if (ut_params->ibuf == ut_params->obuf)
+			ut_params->ibuf = NULL;
+		ut_params->obuf = NULL;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = NULL;
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+
+	rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+static int
+test_device_configure_invalid_dev_id(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint16_t dev_id, num_devs = 0;
+
+	TEST_ASSERT((num_devs = rte_cryptodev_count()) >= 1,
+			"Need at least %d devices for test", 1);
+
+	/* valid dev_id values */
+	dev_id = ts_params->valid_devs[ts_params->valid_dev_count - 1];
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	/* invalid dev_id values */
+	dev_id = num_devs;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	dev_id = 0xff;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure:"
+			"invalid dev_num %u", dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_device_configure_invalid_queue_pair_ids(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	/* valid - one queue pairs */
+	ts_params->conf.nb_queue_pairs = 1;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* valid - max value queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - zero queue pairs */
+	ts_params->conf.nb_queue_pairs = 0;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value supported by field queue pairs */
+	ts_params->conf.nb_queue_pairs = UINT16_MAX;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value + 1 queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_queue_pair_descriptor_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_qp_conf qp_conf = {
+		.nb_descriptors = MAX_NUM_OPS_INFLIGHT
+	};
+
+	uint16_t qp_id;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	ts_params->conf.session_mp.nb_objs = RTE_LIBRTE_PMD_QAT_MAX_SESSIONS;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf), "Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+
+	/* Test various ring sizes on this device. memzones can't be
+	 * freed so are re-used if ring is released and re-created. */
+	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights "
+				"%u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = (uint32_t)(MAX_NUM_OPS_INFLIGHT / 2);
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT; /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max supported + 2 */
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT + 2;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - near the max value of the field */
+	qp_conf.nb_descriptors = UINT32_MAX - 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - one above a supported ring size */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT + 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* test invalid queue pair id */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;	/* valid */
+
+	qp_id = DEFAULT_NUM_QPS_PER_QAT_DEVICE;			/* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	qp_id = 0xffff; /* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Plaintext data for tests ***** */
+
+const char catch_22_quote_1[] =
+		"There was only one catch and that was Catch-22, which "
+		"specified that a concern for one's safety in the face of "
+		"dangers that were real and immediate was the process of a "
+		"rational mind. Orr was crazy and could be grounded. All he "
+		"had to do was ask; and as soon as he did, he would no longer "
+		"be crazy and would have to fly more missions. Orr would be "
+		"crazy to fly more missions and sane if he didn't, but if he "
+		"was sane he had to fly them. If he flew them he was crazy "
+		"and didn't have to; but if he didn't want to he was sane and "
+		"had to. Yossarian was moved very deeply by the absolute "
+		"simplicity of this clause of Catch-22 and let out a "
+		"respectful whistle. \"That's some catch, that Catch-22\", he "
+		"observed. \"It's the best there is,\" Doc Daneeka agreed.";
+
+const char catch_22_quote[] =
+		"What a lousy earth! He wondered how many people were "
+		"destitute that same night even in his own prosperous country, "
+		"how many homes were shanties, how many husbands were drunk "
+		"and wives socked, and how many children were bullied, abused, "
+		"or abandoned. How many families hungered for food they could "
+		"not afford to buy? How many hearts were broken? How many "
+		"suicides would take place that same night, how many people "
+		"would go insane? How many cockroaches and landlords would "
+		"triumph? How many winners were losers, successes failures, "
+		"and rich men poor men? How many wise guys were stupid? How "
+		"many happy endings were unhappy endings? How many honest men "
+		"were liars, brave men cowards, loyal men traitors, how many "
+		"sainted men were corrupt, how many people in positions of "
+		"trust had sold their souls to bodyguards, how many had never "
+		"had souls? How many straight-and-narrow paths were crooked "
+		"paths? How many best families were worst families and how "
+		"many good people were bad people? When you added them all up "
+		"and then subtracted, you might be left with only the children, "
+		"and perhaps with Albert Einstein and an old violinist or "
+		"sculptor somewhere.";
+
+#define QUOTE_480_BYTES		(480)
+#define QUOTE_512_BYTES		(512)
+#define QUOTE_768_BYTES		(768)
+#define QUOTE_1024_BYTES	(1024)
+
+
+
+/* ***** SHA1 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA1	(DIGEST_BYTE_LENGTH_SHA1)
+
+static uint8_t hmac_sha1_key[] = {
+	0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+	0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+	0xDE, 0xF4, 0xDE, 0xAD };
+
+/* ***** SHA224 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA224	(DIGEST_BYTE_LENGTH_SHA224)
+
+
+/* ***** AES-CBC Cipher Tests ***** */
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+	0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+	0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A };
+
+static uint8_t aes_cbc_iv[] = {
+	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
+
+
+/* ***** AES-CBC / HMAC-SHA1 Hash Tests ***** */
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
+	0x8B, 0x4D, 0xDA, 0x1B, 0xCF, 0x04, 0xA0, 0x31,
+	0xB4, 0xBF, 0xBD, 0x68, 0x43, 0x20, 0x7E, 0x76,
+	0xB1, 0x96, 0x8B, 0xA2, 0x7C, 0xA2, 0x83, 0x9E,
+	0x39, 0x5A, 0x2F, 0x7E, 0x92, 0xB4, 0x48, 0x1A,
+	0x3F, 0x6B, 0x5D, 0xDF, 0x52, 0x85, 0x5F, 0x8E,
+	0x42, 0x3C, 0xFB, 0xE9, 0x1A, 0x24, 0xD6, 0x08,
+	0xDD, 0xFD, 0x16, 0xFB, 0xE9, 0x55, 0xEF, 0xF0,
+	0xA0, 0x8D, 0x13, 0xAB, 0x81, 0xC6, 0x90, 0x01,
+	0xB5, 0x18, 0x84, 0xB3, 0xF6, 0xE6, 0x11, 0x57,
+	0xD6, 0x71, 0xC6, 0x3C, 0x3F, 0x2F, 0x33, 0xEE,
+	0x24, 0x42, 0x6E, 0xAC, 0x0B, 0xCA, 0xEC, 0xF9,
+	0x84, 0xF8, 0x22, 0xAA, 0x60, 0xF0, 0x32, 0xA9,
+	0x75, 0x75, 0x3B, 0xCB, 0x70, 0x21, 0x0A, 0x8D,
+	0x0F, 0xE0, 0xC4, 0x78, 0x2B, 0xF8, 0x97, 0xE3,
+	0xE4, 0x26, 0x4B, 0x29, 0xDA, 0x88, 0xCD, 0x46,
+	0xEC, 0xAA, 0xF9, 0x7F, 0xF1, 0x15, 0xEA, 0xC3,
+	0x87, 0xE6, 0x31, 0xF2, 0xCF, 0xDE, 0x4D, 0x80,
+	0x70, 0x91, 0x7E, 0x0C, 0xF7, 0x26, 0x3A, 0x92,
+	0x4F, 0x18, 0x83, 0xC0, 0x8F, 0x59, 0x01, 0xA5,
+	0x88, 0xD1, 0xDB, 0x26, 0x71, 0x27, 0x16, 0xF5,
+	0xEE, 0x10, 0x82, 0xAC, 0x68, 0x26, 0x9B, 0xE2,
+	0x6D, 0xD8, 0x9A, 0x80, 0xDF, 0x04, 0x31, 0xD5,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x73, 0x02, 0x42, 0xC9, 0x23, 0x18, 0x8E, 0xB4,
+	0x6F, 0xB4, 0xA3, 0x54, 0x6E, 0x88, 0x3B, 0x62,
+	0x7C, 0x02, 0x8D, 0x4C, 0x9F, 0xC8, 0x45, 0xF4,
+	0xC9, 0xDE, 0x4F, 0xEB, 0x22, 0x83, 0x1B, 0xE4,
+	0x49, 0x37, 0xE4, 0xAD, 0xE7, 0xCD, 0x21, 0x54,
+	0xBC, 0x1C, 0xC2, 0x04, 0x97, 0xB4, 0x10, 0x61,
+	0xF0, 0xE4, 0xEF, 0x27, 0x63, 0x3A, 0xDA, 0x91,
+	0x41, 0x25, 0x62, 0x1C, 0x5C, 0xB6, 0x38, 0x4A,
+	0x88, 0x71, 0x59, 0x5A, 0x8D, 0xA0, 0x09, 0xAF,
+	0x72, 0x94, 0xD7, 0x79, 0x5C, 0x60, 0x7C, 0x8F,
+	0x4C, 0xF5, 0xD9, 0xA1, 0x39, 0x6D, 0x81, 0x28,
+	0xEF, 0x13, 0x28, 0xDF, 0xF5, 0x3E, 0xF7, 0x8E,
+	0x09, 0x9C, 0x78, 0x18, 0x79, 0xB8, 0x68, 0xD7,
+	0xA8, 0x29, 0x62, 0xAD, 0xDE, 0xE1, 0x61, 0x76,
+	0x1B, 0x05, 0x16, 0xCD, 0xBF, 0x02, 0x8E, 0xA6,
+	0x43, 0x6E, 0x92, 0x55, 0x4F, 0x60, 0x9C, 0x03,
+	0xB8, 0x4F, 0xA3, 0x02, 0xAC, 0xA8, 0xA7, 0x0C,
+	0x1E, 0xB5, 0x6B, 0xF8, 0xC8, 0x4D, 0xDE, 0xD2,
+	0xB0, 0x29, 0x6E, 0x40, 0xE6, 0xD6, 0xC9, 0xE6,
+	0xB9, 0x0F, 0xB6, 0x63, 0xF5, 0xAA, 0x2B, 0x96,
+	0xA7, 0x16, 0xAC, 0x4E, 0x0A, 0x33, 0x1C, 0xA6,
+	0xE6, 0xBD, 0x8A, 0xCF, 0x40, 0xA9, 0xB2, 0xFA,
+	0x63, 0x27, 0xFD, 0x9B, 0xD9, 0xFC, 0xD5, 0x87,
+	0x8D, 0x4C, 0xB6, 0xA4, 0xCB, 0xE7, 0x74, 0x55,
+	0xF4, 0xFB, 0x41, 0x25, 0xB5, 0x4B, 0x0A, 0x1B,
+	0xB1, 0xD6, 0xB7, 0xD9, 0x47, 0x2A, 0xC3, 0x98,
+	0x6A, 0xC4, 0x03, 0x73, 0x1F, 0x93, 0x6E, 0x53,
+	0x19, 0x25, 0x64, 0x15, 0x83, 0xF9, 0x73, 0x2A,
+	0x74, 0xB4, 0x93, 0x69, 0xC4, 0x72, 0xFC, 0x26,
+	0xA2, 0x9F, 0x43, 0x45, 0xDD, 0xB9, 0xEF, 0x36,
+	0xC8, 0x3A, 0xCD, 0x99, 0x9B, 0x54, 0x1A, 0x36,
+	0xC1, 0x59, 0xF8, 0x98, 0xA8, 0xCC, 0x28, 0x0D,
+	0x73, 0x4C, 0xEE, 0x98, 0xCB, 0x7C, 0x58, 0x7E,
+	0x20, 0x75, 0x1E, 0xB7, 0xC9, 0xF8, 0xF2, 0x0E,
+	0x63, 0x9E, 0x05, 0x78, 0x1A, 0xB6, 0xA8, 0x7A,
+	0xF9, 0x98, 0x6A, 0xA6, 0x46, 0x84, 0x2E, 0xF6,
+	0x4B, 0xDC, 0x9B, 0x8F, 0x9B, 0x8F, 0xEE, 0xB4,
+	0xAA, 0x3F, 0xEE, 0xC0, 0x37, 0x27, 0x76, 0xC7,
+	0x95, 0xBB, 0x26, 0x74, 0x69, 0x12, 0x7F, 0xF1,
+	0xBB, 0xFF, 0xAE, 0xB5, 0x99, 0x6E, 0xCB, 0x0C
+};
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60,
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0x18, 0x8c, 0x1d, 0x32 };
+
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	TEST_ASSERT_NOT_NULL(rte_pktmbuf_offload_alloc_crypto_xforms(
+			ut_params->ol, 2),
+			"failed to allocate space for crypto transforms");
+
+	/* Set crypto operation data parameters */
+	ut_params->op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
+
+	/* cipher parameters */
+	ut_params->op->xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->op->xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->op->xform->cipher.key.data = aes_cbc_key;
+	ut_params->op->xform->cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* hash parameters */
+	ut_params->op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+
+	ut_params->op->xform->next->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->op->xform->next->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->op->xform->next->auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->op->xform->next->auth.key.data = hmac_sha1_key;
+	ut_params->op->xform->next->auth.digest_length =
+			DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			DIGEST_BYTE_LENGTH_SHA1);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** AES-CBC / HMAC-SHA256 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+static uint8_t hmac_sha256_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest[] = {
+	0xc8, 0x57, 0x57, 0x31, 0x03, 0xe0, 0x03, 0x55,
+	0x07, 0xc8, 0x9e, 0x7f, 0x48, 0x9a, 0x61, 0x9a,
+	0x68, 0xee, 0x03, 0x0e, 0x71, 0x75, 0xc7, 0xf4,
+	0x2e, 0x45, 0x26, 0x32, 0x7c, 0x12, 0x15, 0x15 };
+
+static int
+test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA256 :
+					DIGEST_BYTE_LENGTH_SHA256,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-SHA512 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA512  (DIGEST_BYTE_LENGTH_SHA512)
+
+static uint8_t hmac_sha512_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x65, 0x1C, 0x42, 0x50, 0x76,
+	0x9a, 0xaf, 0x88, 0x1b, 0xb6, 0x8f, 0xf8, 0x60,
+	0xa2, 0x5a, 0x7f, 0x3f, 0xf4, 0x72, 0x70, 0xf1,
+	0xF5, 0x35, 0x4C, 0x3B, 0xDD, 0x90, 0x65, 0xB0,
+	0x47, 0x3a, 0x75, 0x61, 0x5C, 0xa2, 0x10, 0x76,
+	0x9a, 0xaf, 0x77, 0x5b, 0xb6, 0x7f, 0xf7, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest[] = {
+	0x5D, 0x54, 0x66, 0xC1, 0x6E, 0xBC, 0x04, 0xB8,
+	0x46, 0xB8, 0x08, 0x6E, 0xE0, 0xF0, 0x43, 0x48,
+	0x37, 0x96, 0x9C, 0xC6, 0x9C, 0xC2, 0x1E, 0xE8,
+	0xF2, 0x0C, 0x0B, 0xEF, 0x86, 0xA2, 0xE3, 0x70,
+	0x95, 0xC8, 0xB3, 0x06, 0x47, 0xA9, 0x90, 0xE8,
+	0xA0, 0xC6, 0x72, 0x69, 0x05, 0xC0, 0x0D, 0x0E,
+	0x21, 0x96, 0x65, 0x93, 0x74, 0x43, 0x2A, 0x1D,
+	0x2E, 0xBF, 0xC2, 0xC2, 0xEE, 0xCC, 0x2F, 0x0A };
+
+static int
+test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 :
+					DIGEST_BYTE_LENGTH_SHA512,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT(test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+			ut_params) == TEST_SUCCESS,
+			"Failed to create session params");
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
+			ut_params, ts_params);
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params)
+{
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params)
+{
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / AES-XCBC-MAC Chain Tests ***** */
+
+static uint8_t aes_cbc_hmac_aes_xcbc_key[] = {
+	0x87, 0x61, 0x54, 0x53, 0xC4, 0x6D, 0xDD, 0x51,
+	0xE1, 0x9F, 0x86, 0x64, 0x39, 0x0A, 0xE6, 0x59
+};
+
+static const uint8_t catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest[] = {
+	0xE0, 0xAC, 0x9A, 0xC4, 0x22, 0x64, 0x35, 0x89,
+	0x77, 0x1D, 0x8B, 0x75
+};
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)
+		rte_pktmbuf_prepend(ut_params->ibuf,
+				CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+		QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Crypto Device Statistics Tests ***** */
+
+static int
+test_stats(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_stats stats;
+	struct rte_cryptodev *dev;
+	cryptodev_stats_get_t temp_pfn;
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0] + 600,
+			&stats) == -ENODEV),
+		"rte_cryptodev_stats_get did not fail on invalid dev id");
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0], NULL) != 0),
+		"rte_cryptodev_stats_get did not fail on NULL stats pointer");
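+	/* Temporarily clear the stats_get dev op to check that -ENOTSUP is
+	 * returned when a PMD does not implement stats collection */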
+	dev = &rte_cryptodevs[ts_params->valid_devs[0]];
+	temp_pfn = dev->dev_ops->stats_get;
+	dev->dev_ops->stats_get = NULL;
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats)
+			== -ENOTSUP),
+		"rte_cryptodev_stats_get did not fail on unsupported dev op");
+	dev->dev_ops->stats_get = temp_pfn;
+
+	/* Test expected values */
+	ut_setup();
+	test_AES_CBC_HMAC_SHA1_encrypt_digest();
+	ut_teardown();
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+	TEST_ASSERT((stats.enqueue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueue error stat");
+	TEST_ASSERT((stats.dequeue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeue error stat");
+
+	/* An invalid device id should be ignored and must not reset the stats
+	 * of a valid device */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0] + 300);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	/* check that a valid reset clears stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_multi_session(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	unsigned nb_sessions = gbl_cryptodev_type == RTE_CRYPTODEV_QAT_PMD ?
+			RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+			RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+	struct rte_cryptodev_session *sessions[nb_sessions + 1];
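+	/* One extra array slot for the session create attempt that is
+	 * expected to fail once the PMD's session limit is reached */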
+	uint16_t i;
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create multiple crypto sessions */
+	for (i = 0; i < nb_sessions; i++) {
+		sessions[i] = rte_cryptodev_session_create(
+				ts_params->valid_devs[0],
+				&ut_params->auth_xform);
+		TEST_ASSERT_NOT_NULL(sessions[i],
+				"Session creation failed at session number %u",
+				i);
+
+		/* Attempt to send a request on each session */
+		TEST_ASSERT_SUCCESS(test_AES_CBC_HMAC_SHA512_decrypt_perform(
+				sessions[i], ut_params, ts_params),
+				"Failed to perform decrypt on request "
+				"number %u.", i);
+	}
+
+	/* Next session create should fail */
+	sessions[i] = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NULL(sessions[i],
+			"Session creation succeeded unexpectedly!");
+
+	for (i = 0; i < nb_sessions; i++)
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				sessions[i]);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_not_in_place_crypto(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_mbuf *dst_m = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create crypto session */
+	ut_params->sess = rte_cryptodev_session_create(
+			ts_params->valid_devs[0], &ut_params->auth_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
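+	/* Setting a destination mbuf selects a not-in-place operation: the
+	 * result is written to dst.m instead of back into the source mbuf */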
+	ut_params->op->dst.m = dst_m;
+	ut_params->op->dst.offset = 0;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->op->dst.m, char *),
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite cryptodev_testsuite = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_queue_pair_ids),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_queue_pair_descriptor_setup),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_stats),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite cryptodev_aesni_testsuite = {
+	.suite_name = "Crypto Device AESNI Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_not_in_place_crypto),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_QAT_PMD;
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_qat_cmd = {
+	.command = "cryptodev_qat_autotest",
+	.callback = test_cryptodev_qat,
+};
+
+static int
+test_cryptodev_aesni(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_aesni_testsuite);
+}
+
+static struct test_command cryptodev_aesni_cmd = {
+	.command = "cryptodev_aesni_autotest",
+	.callback = test_cryptodev_aesni,
+};
+
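+/*
+ * Register the autotest commands below; they can then be run from the test
+ * application's command prompt, e.g. "cryptodev_aesni_autotest".
+ */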
+REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
+REGISTER_TEST_COMMAND(cryptodev_aesni_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
new file mode 100644
index 0000000..034393e
--- /dev/null
+++ b/app/test/test_cryptodev.h
@@ -0,0 +1,68 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef TEST_CRYPTODEV_H_
+#define TEST_CRYPTODEV_H_
+
+#define HEX_DUMP 0
+
+#define FALSE                           0
+#define TRUE                            1
+
+#define MAX_NUM_OPS_INFLIGHT            (4096)
+#define MIN_NUM_OPS_INFLIGHT            (128)
+#define DEFAULT_NUM_OPS_INFLIGHT        (128)
+
+#define MAX_NUM_QPS_PER_QAT_DEVICE      (2)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE  (2)
+#define DEFAULT_BURST_SIZE              (64)
+#define DEFAULT_NUM_XFORMS              (2)
+#define NUM_MBUFS                       (8191)
+#define MBUF_CACHE_SIZE                 (250)
+#define MBUF_SIZE   (2048 + DIGEST_BYTE_LENGTH_SHA512 + \
+				sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+
+#define BYTE_LENGTH(x)				((x) / 8)
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_MD5			(BYTE_LENGTH(128))
+#define DIGEST_BYTE_LENGTH_SHA1			(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA224		(BYTE_LENGTH(224))
+#define DIGEST_BYTE_LENGTH_SHA256		(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA384		(BYTE_LENGTH(384))
+#define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
+#define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
+#define AES_XCBC_MAC_KEY_SZ			(16)
+
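+/* Truncated digest lengths used with the AESNI MB PMD: 96/128/256-bit
+ * tags, matching the truncated HMAC sizes used by IPsec */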
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512		(32)
+
+#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
new file mode 100644
index 0000000..1f9e1a2
--- /dev/null
+++ b/app/test/test_cryptodev_perf.c
@@ -0,0 +1,1449 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_hexdump.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define PERF_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_REQS_TO_SUBMIT	(10000000)
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_mp;
+	struct rte_mempool *mbuf_ol_pool;
+
+	uint16_t nb_queue_pairs;
+
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+	uint8_t dev_id;
+};
+
+#define MAX_NUM_OF_OPS_PER_UT	(128)
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op *op;
+	struct rte_mbuf_offload *ol;
+
+	struct rte_mbuf *obuf[MAX_NUM_OF_OPS_PER_UT];
+	struct rte_mbuf *ibuf[MAX_NUM_OF_OPS_PER_UT];
+
+	uint8_t *digest;
+};
+
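+/*
+ * Allocate an mbuf from mpool and fill it with the given string. If
+ * blocksize is non-zero, the copied length is rounded down to a multiple
+ * of blocksize; NULL is returned on any allocation failure.
+ */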
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+	return m;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static enum rte_cryptodev_type gbl_cryptodev_preftest_devtype;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, valid_dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_mp = rte_mempool_lookup("CRYPTO_PERF_MBUFPOOL");
+	if (ts_params->mbuf_mp == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_mp = rte_mempool_create("CRYPTO_PERF_MBUFPOOL", NUM_MBUFS,
+			MBUF_SIZE, MBUF_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
+			rte_socket_id(), 0);
+		if (ts_params->mbuf_mp == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create("CRYPTO_OP_POOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE,
+				DEFAULT_NUM_XFORMS *
+				sizeof(struct rte_crypto_xform),
+				rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found\n");
+		return TEST_FAILED;
+	}
+
+	/* Search for the first valid device of the preferred type */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_preftest_devtype) {
+			ts_params->dev_id = i;
+			valid_dev_id = 1;
+			break;
+		}
+	}
+
+	if (!valid_dev_id)
+		return TEST_FAILED;
+
+	/* Since queue pair memory cannot be freed and re-allocated, first set
+	 * the queues on this device up to their maximum size so that enough
+	 * memory is allocated for any later re-configuration needed by other
+	 * tests */
+
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs =
+			(gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_QAT_PMD) ?
+					RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+					RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->dev_id);
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	/* Now reconfigure the queues to the size actually used in this test suite */
+	ts_params->qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	rte_cryptodev_stats_reset(ts_params->dev_id);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->dev_id),
+			"Failed to start cryptodev %u",
+			ts_params->dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	unsigned i;
+
+	/* free crypto session structure */
+	if (ut_params->sess)
+		rte_cryptodev_session_free(ts_params->dev_id,
+				ut_params->sess);
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	for (i = 0; i < MAX_NUM_OF_OPS_PER_UT; i++) {
+		if (ut_params->obuf[i])
+			rte_pktmbuf_free(ut_params->obuf[i]);
+		else if (ut_params->ibuf[i])
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+	}
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+
+	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->dev_id);
+}
+
+const char plaintext_quote[] =
+		"THE COUNT OF MONTE CRISTO by Alexandre Dumas, Pere Chapter 1. "
+		"Marseilles--The Arrival. On the 24th of February, 1815, the "
+		"look-out at Notre-Dame de la Garde signalled the three-master,"
+		" the Pharaon from Smyrna, Trieste, and Naples. As usual, a "
+		"pilot put off immediately, and rounding the Chateau d'If, got "
+		"on board the vessel between Cape Morgion and Rion island. "
+		"Immediately, and according to custom, the ramparts of Fort "
+		"Saint-Jean were covered with spectators; it is always an event "
+		"at Marseilles for a ship to come into port, especially when "
+		"this ship, like the Pharaon, has been built, rigged, and laden"
+		" at the old Phocee docks, and belongs to an owner of the city."
+		" The ship drew on and had safely passed the strait, which some"
+		" volcanic shock has made between the Calasareigne and Jaros "
+		"islands; had doubled Pomegue, and approached the harbor under"
+		" topsails, jib, and spanker, but so slowly and sedately that"
+		" the idlers, with that instinct which is the forerunner of "
+		"evil, asked one another what misfortune could have happened "
+		"on board. However, those experienced in navigation saw plainly"
+		" that if any accident had occurred, it was not to the vessel "
+		"herself, for she bore down with all the evidence of being "
+		"skilfully handled, the anchor a-cockbill, the jib-boom guys "
+		"already eased off, and standing by the side of the pilot, who"
+		" was steering the Pharaon towards the narrow entrance of the"
+		" inner port, was a young man, who, with activity and vigilant"
+		" eye, watched every motion of the ship, and repeated each "
+		"direction of the pilot. The vague disquietude which prevailed "
+		"among the spectators had so much affected one of the crowd "
+		"that he did not await the arrival of the vessel in harbor, but"
+		" jumping into a small skiff, desired to be pulled alongside "
+		"the Pharaon, which he reached as she rounded into La Reserve "
+		"basin. When the young man on board saw this person approach, "
+		"he left his station by the pilot, and, hat in hand, leaned "
+		"over the ship's bulwarks. He was a fine, tall, slim young "
+		"fellow of eighteen or twenty, with black eyes, and hair as "
+		"dark as a raven's wing; and his whole appearance bespoke that "
+		"calmness and resolution peculiar to men accustomed from their "
+		"cradle to contend with danger. \"Ah, is it you, Dantes?\" "
+		"cried the man in the skiff. \"What's the matter? and why have "
+		"you such an air of sadness aboard?\" \"A great misfortune, M. "
+		"Morrel,\" replied the young man,--\"a great misfortune, for me"
+		" especially! Off Civita Vecchia we lost our brave Captain "
+		"Leclere.\" \"And the cargo?\" inquired the owner, eagerly. "
+		"\"Is all safe, M. Morrel; and I think you will be satisfied on"
+		" that head. But poor Captain Leclere--\" \"What happened to "
+		"him?\" asked the owner, with an air of considerable "
+		"resignation. \"What happened to the worthy captain?\" \"He "
+		"died.\" \"Fell into the sea?\" \"No, sir, he died of "
+		"brain-fever in dreadful agony.\" Then turning to the crew, "
+		"he said, \"Bear a hand there, to take in sail!\" All hands "
+		"obeyed, and at once the eight or ten seamen who composed the "
+		"crew, sprang to their respective stations at the spanker "
+		"brails and outhaul, topsail sheets and halyards, the jib "
+		"downhaul, and the topsail clewlines and buntlines. The young "
+		"sailor gave a look to see that his orders were promptly and "
+		"accurately obeyed, and then turned again to the owner. \"And "
+		"how did this misfortune occur?\" inquired the latter, resuming"
+		" the interrupted conversation. \"Alas, sir, in the most "
+		"unexpected manner. After a long talk with the harbor-master, "
+		"Captain Leclere left Naples greatly disturbed in mind. In "
+		"twenty-four hours he was attacked by a fever, and died three "
+		"days afterwards. We performed the usual burial service, and he"
+		" is at his rest, sewn up in his hammock with a thirty-six "
+		"pound shot at his head and his heels, off El Giglio island. "
+		"We bring to his widow his sword and cross of honor. It was "
+		"worth while, truly,\" added the young man with a melancholy "
+		"smile, \"to make war against the English for ten years, and "
+		"to die in his bed at last, like everybody else.";
+
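+/* Plaintext buffer sizes (in bytes) exercised by the performance tests in
+ * this file */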
+#define QUOTE_LEN_64B		(64)
+#define QUOTE_LEN_128B		(128)
+#define QUOTE_LEN_256B		(256)
+#define QUOTE_LEN_512B		(512)
+#define QUOTE_LEN_768B		(768)
+#define QUOTE_LEN_1024B		(1024)
+#define QUOTE_LEN_1280B		(1280)
+#define QUOTE_LEN_1536B		(1536)
+#define QUOTE_LEN_1792B		(1792)
+#define QUOTE_LEN_2048B		(2048)
+
+/* ***** AES-CBC / HMAC-SHA256 Performance Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+		0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+		0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA };
+
+static uint8_t aes_cbc_iv[] = {
+		0xf5, 0xd3, 0x89, 0x0f, 0x47, 0x00, 0xcb, 0x52,
+		0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1 };
+
+static uint8_t hmac_sha256_key[] = {
+		0xff, 0xcb, 0x37, 0x30, 0x1d, 0x4a, 0xc2, 0x41,
+		0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A,
+		0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+		0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+/* Cipher text output */
+
+static const uint8_t AES_CBC_ciphertext_64B[] = {
+		0x05, 0x15, 0x77, 0x32, 0xc9, 0x66, 0x91, 0x50, 0x93, 0x9f, 0xbb, 0x4e, 0x2e, 0x5a, 0x02, 0xd0,
+		0x2d, 0x9d, 0x31, 0x5d, 0xc8, 0x9e, 0x86, 0x36, 0x54, 0x5c, 0x50, 0xe8, 0x75, 0x54, 0x74, 0x5e,
+		0xd5, 0xa2, 0x84, 0x21, 0x2d, 0xc5, 0xf8, 0x1c, 0x55, 0x1a, 0xba, 0x91, 0xce, 0xb5, 0xa3, 0x1e,
+		0x31, 0xbf, 0xe9, 0xa1, 0x97, 0x5c, 0x2b, 0xd6, 0x57, 0xa5, 0x9f, 0xab, 0xbd, 0xb0, 0x9b, 0x9c
+};
+
+static const uint8_t AES_CBC_ciphertext_128B[] = {
+		0x79, 0x92, 0x65, 0xc8, 0xfb, 0x0a, 0xc7, 0xc4, 0x9b, 0x3b, 0xbe, 0x69, 0x7f, 0x7c, 0xf4, 0x4e,
+		0xa5, 0x0d, 0xf6, 0x33, 0xc4, 0xdf, 0xf3, 0x0d, 0xdb, 0xb9, 0x68, 0x34, 0xb0, 0x0d, 0xbd, 0xb9,
+		0xa7, 0xf3, 0x86, 0x50, 0x2a, 0xbe, 0x50, 0x5d, 0xb3, 0xbe, 0x72, 0xf9, 0x02, 0xb1, 0x69, 0x0b,
+		0x8c, 0x96, 0x4c, 0x3c, 0x0c, 0x1e, 0x76, 0xe5, 0x7e, 0x75, 0xdd, 0xd0, 0xa9, 0x75, 0x00, 0x13,
+		0x6b, 0x1e, 0xc0, 0xad, 0xfc, 0x03, 0xb5, 0x99, 0xdc, 0x37, 0x35, 0xfc, 0x16, 0x34, 0xfd, 0xb4,
+		0xea, 0x1e, 0xb6, 0x51, 0xdf, 0xab, 0x87, 0xd6, 0x87, 0x41, 0xfa, 0x1c, 0xc6, 0x78, 0xa6, 0x3c,
+		0x1d, 0x76, 0xfe, 0xff, 0x65, 0xfc, 0x63, 0x1e, 0x1f, 0xe2, 0x7c, 0x9b, 0xa2, 0x72, 0xc3, 0x34,
+		0x23, 0xdf, 0x01, 0xf0, 0xfd, 0x02, 0x8b, 0x97, 0x00, 0x2b, 0x97, 0x4e, 0xab, 0x98, 0x21, 0x3c
+};
+
+static const uint8_t AES_CBC_ciphertext_256B[] = {
+		0xc7, 0x71, 0x2b, 0xed, 0x2c, 0x97, 0x59, 0xfa, 0xcf, 0x5a, 0xb9, 0x31, 0x92, 0xe0, 0xc9, 0x92,
+		0xc0, 0x2d, 0xd5, 0x9c, 0x84, 0xbf, 0x70, 0x36, 0x13, 0x48, 0xe0, 0xb1, 0xbf, 0x6c, 0xcd, 0x91,
+		0xa0, 0xc3, 0x57, 0x6c, 0x3f, 0x0e, 0x34, 0x41, 0xe7, 0x9c, 0xc0, 0xec, 0x18, 0x0c, 0x05, 0x52,
+		0x78, 0xe2, 0x3c, 0x6e, 0xdf, 0xa5, 0x49, 0xc7, 0xf2, 0x55, 0x00, 0x8f, 0x65, 0x6d, 0x4b, 0xd0,
+		0xcb, 0xd4, 0xd2, 0x0b, 0xea, 0xf4, 0xb0, 0x85, 0x61, 0x9e, 0x36, 0xc0, 0x71, 0xb7, 0x80, 0xad,
+		0x40, 0x78, 0xb4, 0x70, 0x2b, 0xe8, 0x80, 0xc5, 0x19, 0x35, 0x96, 0x55, 0x3b, 0x40, 0x03, 0xbb,
+		0x9f, 0xa6, 0xc2, 0x82, 0x92, 0x04, 0xc3, 0xa6, 0x96, 0xc4, 0x7f, 0x4c, 0x3e, 0x3c, 0x79, 0x82,
+		0x88, 0x8b, 0x3f, 0x8b, 0xc5, 0x9f, 0x44, 0xbe, 0x71, 0xe7, 0x09, 0xa2, 0x40, 0xa2, 0x23, 0x4e,
+		0x9f, 0x31, 0xab, 0x6f, 0xdf, 0x59, 0x40, 0xe1, 0x12, 0x15, 0x55, 0x4b, 0xea, 0x3f, 0xa1, 0x41,
+		0x4f, 0xaf, 0xcd, 0x27, 0x2a, 0x61, 0xa1, 0x9e, 0x82, 0x30, 0x05, 0x05, 0x55, 0xce, 0x99, 0xd3,
+		0x8f, 0x3f, 0x86, 0x79, 0xdc, 0x9f, 0x33, 0x07, 0x75, 0x26, 0xc8, 0x72, 0x81, 0x0f, 0x9b, 0xf7,
+		0xb1, 0xfb, 0xd3, 0x91, 0x36, 0x08, 0xab, 0x26, 0x70, 0x53, 0x0c, 0x99, 0xfd, 0xa9, 0x07, 0xb4,
+		0xe9, 0xce, 0xc1, 0xd6, 0xd2, 0x2c, 0x71, 0x80, 0xec, 0x59, 0x61, 0x0b, 0x24, 0xf0, 0x6d, 0x33,
+		0x73, 0x45, 0x6e, 0x80, 0x03, 0x45, 0xf2, 0x76, 0xa5, 0x8a, 0xc9, 0xcf, 0xaf, 0x4a, 0xed, 0x35,
+		0xc0, 0x97, 0x52, 0xc5, 0x00, 0xdf, 0xef, 0xc7, 0x9f, 0xf2, 0xe8, 0x15, 0x3e, 0xb3, 0x30, 0xe7,
+		0x00, 0xd0, 0x4e, 0xeb, 0x79, 0xf6, 0xf6, 0xcf, 0xf0, 0xe7, 0x61, 0xd5, 0x3d, 0x6a, 0x73, 0x9d
+};
+
+static const uint8_t AES_CBC_ciphertext_512B[] = {
+		0xb4, 0xc6, 0xc6, 0x5f, 0x7e, 0xca, 0x05, 0x70, 0x21, 0x7b, 0x92, 0x9e, 0x23, 0xe7, 0x92, 0xb8,
+		0x27, 0x3d, 0x20, 0x29, 0x57, 0xfa, 0x1f, 0x26, 0x0a, 0x04, 0x34, 0xa6, 0xf2, 0xdc, 0x44, 0xb6,
+		0x43, 0x40, 0x62, 0xde, 0x0c, 0xde, 0x1c, 0x30, 0x43, 0x85, 0x0b, 0xe8, 0x93, 0x1f, 0xa1, 0x2a,
+		0x8a, 0x27, 0x35, 0x39, 0x14, 0x9f, 0x37, 0x64, 0x59, 0xb5, 0x0e, 0x96, 0x82, 0x5d, 0x63, 0x45,
+		0xd6, 0x93, 0x89, 0x46, 0xe4, 0x71, 0x31, 0xeb, 0x0e, 0xd1, 0x7b, 0xda, 0x90, 0xb5, 0x81, 0xac,
+		0x76, 0x54, 0x54, 0x85, 0x0b, 0xa9, 0x46, 0x9c, 0xf0, 0xfd, 0xde, 0x5d, 0xa8, 0xe3, 0xee, 0xe9,
+		0xf4, 0x9d, 0x34, 0x76, 0x39, 0xe7, 0xc3, 0x4a, 0x84, 0x38, 0x92, 0x61, 0xf1, 0x12, 0x9f, 0x05,
+		0xda, 0xdb, 0xc1, 0xd4, 0xb0, 0xa0, 0x27, 0x19, 0xa0, 0x56, 0x5d, 0x9b, 0xcc, 0x47, 0x7c, 0x15,
+		0x1d, 0x52, 0x66, 0xd5, 0xff, 0xef, 0x12, 0x23, 0x86, 0xe2, 0xee, 0x81, 0x2c, 0x3d, 0x7d, 0x28,
+		0xd5, 0x42, 0xdf, 0xdb, 0x75, 0x1c, 0xeb, 0xdf, 0x13, 0x23, 0xd5, 0x17, 0x89, 0xea, 0xd7, 0x01,
+		0xff, 0x57, 0x6a, 0x44, 0x61, 0xf4, 0xea, 0xbe, 0x97, 0x9b, 0xc2, 0xb1, 0x9c, 0x5d, 0xff, 0x4f,
+		0x73, 0x2d, 0x3f, 0x57, 0x28, 0x38, 0xbf, 0x3d, 0x9f, 0xda, 0x49, 0x55, 0x8f, 0xb2, 0x77, 0xec,
+		0x0f, 0xbc, 0xce, 0xb8, 0xc6, 0xe1, 0x03, 0xed, 0x35, 0x9c, 0xf2, 0x4d, 0xa4, 0x29, 0x6c, 0xd6,
+		0x6e, 0x05, 0x53, 0x46, 0xc1, 0x41, 0x09, 0x36, 0x0b, 0x7d, 0xf4, 0x9e, 0x0f, 0xba, 0x86, 0x33,
+		0xdd, 0xf1, 0xa7, 0xf7, 0xd5, 0x29, 0xa8, 0xa7, 0x4d, 0xce, 0x0c, 0xf5, 0xb4, 0x6c, 0xd8, 0x27,
+		0xb0, 0x87, 0x2a, 0x6f, 0x7f, 0x3f, 0x8f, 0xc3, 0xe2, 0x3e, 0x94, 0xcf, 0x61, 0x4a, 0x09, 0x3d,
+		0xf9, 0x55, 0x19, 0x31, 0xf2, 0xd2, 0x4a, 0x3e, 0xc1, 0xf5, 0xed, 0x7c, 0x45, 0xb0, 0x0c, 0x7b,
+		0xdd, 0xa6, 0x0a, 0x26, 0x66, 0xec, 0x85, 0x49, 0x00, 0x38, 0x05, 0x7c, 0x9c, 0x1c, 0x92, 0xf5,
+		0xf7, 0xdb, 0x5d, 0xbd, 0x61, 0x0c, 0xc9, 0xaf, 0xfd, 0x57, 0x3f, 0xee, 0x2b, 0xad, 0x73, 0xef,
+		0xa3, 0xc1, 0x66, 0x26, 0x44, 0x5e, 0xf9, 0x12, 0x86, 0x66, 0xa9, 0x61, 0x75, 0xa1, 0xbc, 0x40,
+		0x7f, 0xa8, 0x08, 0x02, 0xc0, 0x76, 0x0e, 0x76, 0xb3, 0x26, 0x3d, 0x1c, 0x40, 0x65, 0xe4, 0x18,
+		0x0f, 0x62, 0x17, 0x8f, 0x1e, 0x61, 0xb8, 0x08, 0x83, 0x54, 0x42, 0x11, 0x03, 0x30, 0x8e, 0xb7,
+		0xc1, 0x9c, 0xec, 0x69, 0x52, 0x95, 0xfb, 0x7b, 0x1a, 0x0c, 0x20, 0x24, 0xf7, 0xb8, 0x38, 0x0c,
+		0xb8, 0x7b, 0xb6, 0x69, 0x70, 0xd0, 0x61, 0xb9, 0x70, 0x06, 0xc2, 0x5b, 0x20, 0x47, 0xf7, 0xd9,
+		0x32, 0xc2, 0xf2, 0x90, 0xb6, 0x4d, 0xcd, 0x3c, 0x6d, 0x74, 0xea, 0x82, 0x35, 0x1b, 0x08, 0x44,
+		0xba, 0xb7, 0x33, 0x82, 0x33, 0x27, 0x54, 0x77, 0x6e, 0x58, 0xfe, 0x46, 0x5a, 0xb4, 0x88, 0x53,
+		0x8d, 0x9b, 0xb1, 0xab, 0xdf, 0x04, 0xe1, 0xfb, 0xd7, 0x1e, 0xd7, 0x38, 0x64, 0x54, 0xba, 0xb0,
+		0x6c, 0x84, 0x7a, 0x0f, 0xa7, 0x80, 0x6b, 0x86, 0xd9, 0xc9, 0xc6, 0x31, 0x95, 0xfa, 0x8a, 0x2c,
+		0x14, 0xe1, 0x85, 0x66, 0x27, 0xfd, 0x63, 0x3e, 0xf0, 0xfa, 0x81, 0xc9, 0x89, 0x4f, 0xe2, 0x6a,
+		0x8c, 0x17, 0xb5, 0xc7, 0x9f, 0x5d, 0x3f, 0x6b, 0x3f, 0xcd, 0x13, 0x7a, 0x3c, 0xe6, 0x4e, 0xfa,
+		0x7a, 0x10, 0xb8, 0x7c, 0x40, 0xec, 0x93, 0x11, 0x1f, 0xd0, 0x9e, 0xc3, 0x56, 0xb9, 0xf5, 0x21,
+		0x18, 0x41, 0x31, 0xea, 0x01, 0x8d, 0xea, 0x1c, 0x95, 0x5e, 0x56, 0x33, 0xbc, 0x7a, 0x3f, 0x6f
+};
+
+static const uint8_t AES_CBC_ciphertext_768B[] = {
+		0x3e, 0x7f, 0x9e, 0x4c, 0x88, 0x15, 0x68, 0x69, 0x10, 0x09, 0xe1, 0xa7, 0x0f, 0x27, 0x88, 0x2d,
+		0x90, 0x73, 0x4f, 0x67, 0xd3, 0x8b, 0xaf, 0xa1, 0x2c, 0x37, 0xa5, 0x6c, 0x7c, 0xbd, 0x95, 0x4c,
+		0x82, 0xcf, 0x05, 0x49, 0x16, 0x5c, 0xe7, 0x06, 0xd4, 0xcb, 0x55, 0x65, 0x9a, 0xd0, 0xe1, 0x46,
+		0x3a, 0x37, 0x71, 0xad, 0xb0, 0xb4, 0x99, 0x1e, 0x23, 0x57, 0x48, 0x96, 0x9c, 0xc5, 0xc4, 0xdb,
+		0x64, 0x3e, 0xc9, 0x7f, 0x90, 0x5a, 0xa0, 0x08, 0x75, 0x4c, 0x09, 0x06, 0x31, 0x6e, 0x59, 0x29,
+		0xfc, 0x2f, 0x72, 0xde, 0xf2, 0x40, 0x5a, 0xfe, 0xd3, 0x66, 0x64, 0xb8, 0x9c, 0xc9, 0xa6, 0x1f,
+		0xc3, 0x52, 0xcd, 0xb5, 0xd1, 0x4f, 0x43, 0x3f, 0xf4, 0x59, 0x25, 0xc4, 0xdd, 0x3e, 0x58, 0x7c,
+		0x21, 0xd6, 0x21, 0xce, 0xa4, 0xbe, 0x08, 0x23, 0x46, 0x68, 0xc0, 0x00, 0x91, 0x47, 0xca, 0x9b,
+		0xe0, 0xb4, 0xe3, 0xab, 0xbf, 0xcf, 0x68, 0x26, 0x97, 0x23, 0x09, 0x93, 0x64, 0x8f, 0x57, 0x59,
+		0xe2, 0x41, 0x7c, 0xa2, 0x48, 0x7e, 0xd5, 0x2c, 0x54, 0x09, 0x1b, 0x07, 0x94, 0xca, 0x39, 0x83,
+		0xdd, 0xf4, 0x7a, 0x1d, 0x2d, 0xdd, 0x67, 0xf7, 0x3c, 0x30, 0x89, 0x3e, 0xc1, 0xdc, 0x1d, 0x8f,
+		0xfc, 0xb1, 0xe9, 0x13, 0x31, 0xb0, 0x16, 0xdb, 0x88, 0xf2, 0x32, 0x7e, 0x73, 0xa3, 0xdf, 0x08,
+		0x6b, 0x53, 0x92, 0x08, 0xc9, 0x9d, 0x98, 0xb2, 0xf4, 0x8c, 0xb1, 0x95, 0xdc, 0xb6, 0xfc, 0xec,
+		0xf1, 0xc9, 0x0d, 0x6d, 0x42, 0x2c, 0xf5, 0x38, 0x29, 0xf4, 0xd8, 0x98, 0x0f, 0xb0, 0x81, 0xa5,
+		0xaa, 0xe6, 0x1f, 0x6e, 0x87, 0x32, 0x1b, 0x02, 0x07, 0x57, 0x38, 0x83, 0xf3, 0xe4, 0x54, 0x7c,
+		0xa8, 0x43, 0xdf, 0x3f, 0x42, 0xfd, 0x67, 0x28, 0x06, 0x4d, 0xea, 0xce, 0x1f, 0x84, 0x4a, 0xcd,
+		0x8c, 0x61, 0x5e, 0x8f, 0x61, 0xed, 0x84, 0x03, 0x53, 0x6a, 0x9e, 0xbf, 0x68, 0x83, 0xa7, 0x42,
+		0x56, 0x57, 0xcd, 0x45, 0x29, 0xfc, 0x7b, 0x07, 0xfc, 0xe9, 0xb9, 0x42, 0xfd, 0x29, 0xd5, 0xfd,
+		0x98, 0x11, 0xd1, 0x8d, 0x67, 0x29, 0x47, 0x61, 0xd8, 0x27, 0x37, 0x79, 0x29, 0xd1, 0x94, 0x6f,
+		0x8d, 0xf3, 0x1b, 0x3d, 0x6a, 0xb1, 0x59, 0xef, 0x1b, 0xd4, 0x70, 0x0e, 0xac, 0xab, 0xa0, 0x2b,
+		0x1f, 0x5e, 0x04, 0xf0, 0x0e, 0x35, 0x72, 0x90, 0xfc, 0xcf, 0x86, 0x43, 0xea, 0x45, 0x6d, 0x22,
+		0x63, 0x06, 0x1a, 0x58, 0xd7, 0x2d, 0xc5, 0xb0, 0x60, 0x69, 0xe8, 0x53, 0xc2, 0xa2, 0x57, 0x83,
+		0xc4, 0x31, 0xb4, 0xc6, 0xb3, 0xa1, 0x77, 0xb3, 0x1c, 0xca, 0x89, 0x3f, 0xf5, 0x10, 0x3b, 0x36,
+		0x31, 0x7d, 0x00, 0x46, 0x00, 0x92, 0xa0, 0xa0, 0x34, 0xd8, 0x5e, 0x62, 0xa9, 0xe0, 0x23, 0x37,
+		0x50, 0x85, 0xc7, 0x3a, 0x20, 0xa3, 0x98, 0xc0, 0xac, 0x20, 0x06, 0x0f, 0x17, 0x3c, 0xfc, 0x43,
+		0x8c, 0x9d, 0xec, 0xf5, 0x9a, 0x35, 0x96, 0xf7, 0xb7, 0x4c, 0xf9, 0x69, 0xf8, 0xd4, 0x1e, 0x9e,
+		0xf9, 0x7c, 0xc4, 0xd2, 0x11, 0x14, 0x41, 0xb9, 0x89, 0xd6, 0x07, 0xd2, 0x37, 0x07, 0x5e, 0x5e,
+		0xae, 0x60, 0xdc, 0xe4, 0xeb, 0x38, 0x48, 0x6d, 0x95, 0x8d, 0x71, 0xf2, 0xba, 0xda, 0x5f, 0x08,
+		0x9d, 0x4a, 0x0f, 0x56, 0x90, 0x64, 0xab, 0xb6, 0x88, 0x22, 0xa8, 0x90, 0x1f, 0x76, 0x2c, 0x83,
+		0x43, 0xce, 0x32, 0x55, 0x45, 0x84, 0x57, 0x43, 0xf9, 0xa8, 0xd1, 0x4f, 0xe3, 0xc1, 0x72, 0x9c,
+		0xeb, 0x64, 0xf7, 0xe4, 0x61, 0x2b, 0x93, 0xd1, 0x1f, 0xbb, 0x5c, 0xff, 0xa1, 0x59, 0x69, 0xcf,
+		0xf7, 0xaf, 0x58, 0x45, 0xd5, 0x3e, 0x98, 0x7d, 0x26, 0x39, 0x5c, 0x75, 0x3c, 0x4a, 0xbf, 0x5e,
+		0x12, 0x10, 0xb0, 0x93, 0x0f, 0x86, 0x82, 0xcf, 0xb2, 0xec, 0x70, 0x5c, 0x0b, 0xad, 0x5d, 0x63,
+		0x65, 0x32, 0xa6, 0x04, 0x58, 0x03, 0x91, 0x2b, 0xdb, 0x8f, 0xd3, 0xa3, 0x2b, 0x3a, 0xf5, 0xa1,
+		0x62, 0x6c, 0xb6, 0xf0, 0x13, 0x3b, 0x8c, 0x07, 0x10, 0x82, 0xc9, 0x56, 0x24, 0x87, 0xfc, 0x56,
+		0xe8, 0xef, 0x90, 0x8b, 0xd6, 0x48, 0xda, 0x53, 0x04, 0x49, 0x41, 0xa4, 0x67, 0xe0, 0x33, 0x24,
+		0x6b, 0x9c, 0x07, 0x55, 0x4c, 0x5d, 0xe9, 0x35, 0xfa, 0xbd, 0xea, 0xa8, 0x3f, 0xe9, 0xf5, 0x20,
+		0x5c, 0x60, 0x0f, 0x0d, 0x24, 0xcb, 0x1a, 0xd6, 0xe8, 0x5c, 0xa8, 0x42, 0xae, 0xd0, 0xd2, 0xf2,
+		0xa8, 0xbe, 0xea, 0x0f, 0x8d, 0xfb, 0x81, 0xa3, 0xa4, 0xef, 0xb7, 0x3e, 0x91, 0xbd, 0x26, 0x0f,
+		0x8e, 0xf1, 0xb2, 0xa5, 0x47, 0x06, 0xfa, 0x40, 0x8b, 0x31, 0x7a, 0x5a, 0x74, 0x2a, 0x0a, 0x7c,
+		0x62, 0x5d, 0x39, 0xa4, 0xae, 0x14, 0x85, 0x08, 0x5b, 0x20, 0x85, 0xf1, 0x57, 0x6e, 0x71, 0x13,
+		0x4e, 0x2b, 0x49, 0x87, 0x01, 0xdf, 0x37, 0xed, 0x28, 0xee, 0x4d, 0xa1, 0xf4, 0xb3, 0x3b, 0xba,
+		0x2d, 0xb3, 0x46, 0x17, 0x84, 0x80, 0x9d, 0xd7, 0x93, 0x1f, 0x28, 0x7c, 0xf5, 0xf9, 0xd6, 0x85,
+		0x8c, 0xa5, 0x44, 0xe9, 0x2c, 0x65, 0x51, 0x5f, 0x53, 0x7a, 0x09, 0xd9, 0x30, 0x16, 0x95, 0x89,
+		0x9c, 0x0b, 0xef, 0x90, 0x6d, 0x23, 0xd3, 0x48, 0x57, 0x3b, 0x55, 0x69, 0x96, 0xfc, 0xf7, 0x52,
+		0x92, 0x38, 0x36, 0xbf, 0xa9, 0x0a, 0xbb, 0x68, 0x45, 0x08, 0x25, 0xee, 0x59, 0xfe, 0xee, 0xf2,
+		0x2c, 0xd4, 0x5f, 0x78, 0x59, 0x0d, 0x90, 0xf1, 0xd7, 0xe4, 0x39, 0x0e, 0x46, 0x36, 0xf5, 0x75,
+		0x03, 0x3c, 0x28, 0xfb, 0xfa, 0x8f, 0xef, 0xc9, 0x61, 0x00, 0x94, 0xc3, 0xd2, 0x0f, 0xd9, 0xda
+};
+
+static const uint8_t AES_CBC_ciphertext_1024B[] = {
+		0x7d, 0x01, 0x7e, 0x2f, 0x92, 0xb3, 0xea, 0x72, 0x4a, 0x3f, 0x10, 0xf9, 0x2b, 0xb0, 0xd5, 0xb9,
+		0x19, 0x68, 0x94, 0xe9, 0x93, 0xe9, 0xd5, 0x26, 0x20, 0x44, 0xe2, 0x47, 0x15, 0x8d, 0x75, 0x48,
+		0x8e, 0xe4, 0x40, 0x81, 0xb5, 0x06, 0xa8, 0xb8, 0x0e, 0x0f, 0x3b, 0xbc, 0x5b, 0xbe, 0x3b, 0xa2,
+		0x2a, 0x0c, 0x48, 0x98, 0x19, 0xdf, 0xe9, 0x25, 0x75, 0xab, 0x93, 0x44, 0xb1, 0x72, 0x70, 0xbb,
+		0x20, 0xcf, 0x78, 0xe9, 0x4d, 0xc6, 0xa9, 0xa9, 0x84, 0x78, 0xc5, 0xc0, 0xc4, 0xc9, 0x79, 0x1a,
+		0xbc, 0x61, 0x25, 0x5f, 0xac, 0x01, 0x03, 0xb7, 0xef, 0x07, 0xf2, 0x62, 0x98, 0xee, 0xe3, 0xad,
+		0x94, 0x75, 0x30, 0x67, 0xb9, 0x15, 0x00, 0xe7, 0x11, 0x32, 0x2e, 0x6b, 0x55, 0x9f, 0xac, 0x68,
+		0xde, 0x61, 0x05, 0x80, 0x01, 0xf3, 0xad, 0xab, 0xaf, 0x45, 0xe0, 0xf4, 0x68, 0x5c, 0xc0, 0x52,
+		0x92, 0xc8, 0x21, 0xb6, 0xf5, 0x8a, 0x1d, 0xbb, 0xfc, 0x4a, 0x11, 0x62, 0xa2, 0xc4, 0xf1, 0x2d,
+		0x0e, 0xb2, 0xc7, 0x17, 0x34, 0xb4, 0x2a, 0x54, 0x81, 0xc2, 0x1e, 0xcf, 0x51, 0x0a, 0x76, 0x54,
+		0xf1, 0x48, 0x0d, 0x5c, 0xcd, 0x38, 0x3e, 0x38, 0x3e, 0xf8, 0x46, 0x1d, 0x00, 0xf5, 0x62, 0xe1,
+		0x5c, 0xb7, 0x8d, 0xce, 0xd0, 0x3f, 0xbb, 0x22, 0xf1, 0xe5, 0xb1, 0xa0, 0x58, 0x5e, 0x3c, 0x0f,
+		0x15, 0xd1, 0xac, 0x3e, 0xc7, 0x72, 0xc4, 0xde, 0x8b, 0x95, 0x3e, 0x91, 0xf7, 0x1d, 0x04, 0x9a,
+		0xc8, 0xe4, 0xbf, 0xd3, 0x22, 0xca, 0x4a, 0xdc, 0xb6, 0x16, 0x79, 0x81, 0x75, 0x2f, 0x6b, 0xa7,
+		0x04, 0x98, 0xa7, 0x4e, 0xc1, 0x19, 0x90, 0x33, 0x33, 0x3c, 0x7f, 0xdd, 0xac, 0x09, 0x0c, 0xc3,
+		0x91, 0x34, 0x74, 0xab, 0xa5, 0x35, 0x0a, 0x13, 0xc3, 0x56, 0x67, 0x6d, 0x1a, 0x3e, 0xbf, 0x56,
+		0x06, 0x67, 0x15, 0x5f, 0xfc, 0x8b, 0xa2, 0x3c, 0x5e, 0xaf, 0x56, 0x1f, 0xe3, 0x2e, 0x9d, 0x0a,
+		0xf9, 0x9b, 0xc7, 0xb5, 0x03, 0x1c, 0x68, 0x99, 0xfa, 0x3c, 0x37, 0x59, 0xc1, 0xf7, 0x6a, 0x83,
+		0x22, 0xee, 0xca, 0x7f, 0x7d, 0x49, 0xe6, 0x48, 0x84, 0x54, 0x7a, 0xff, 0xb3, 0x72, 0x21, 0xd8,
+		0x7a, 0x5d, 0xb1, 0x4b, 0xcc, 0x01, 0x6f, 0x90, 0xc6, 0x68, 0x1c, 0x2c, 0xa1, 0xe2, 0x74, 0x40,
+		0x26, 0x9b, 0x57, 0x53, 0xa3, 0x7c, 0x0b, 0x0d, 0xcf, 0x05, 0x5d, 0x62, 0x4f, 0x75, 0x06, 0x62,
+		0x1f, 0x26, 0x32, 0xaa, 0x25, 0xcc, 0x26, 0x8d, 0xae, 0x01, 0x47, 0xa3, 0x00, 0x42, 0xe2, 0x4c,
+		0xee, 0x29, 0xa2, 0x81, 0xa0, 0xfd, 0xeb, 0xff, 0x9a, 0x66, 0x6e, 0x47, 0x5b, 0xab, 0x93, 0x5a,
+		0x02, 0x6d, 0x6f, 0xf2, 0x6e, 0x02, 0x9d, 0xb1, 0xab, 0x56, 0xdc, 0x8b, 0x9b, 0x17, 0xa8, 0xfb,
+		0x87, 0x42, 0x7c, 0x91, 0x1e, 0x14, 0xc6, 0x6f, 0xdc, 0xf0, 0x27, 0x30, 0xfa, 0x3f, 0xc4, 0xad,
+		0x57, 0x85, 0xd2, 0xc9, 0x32, 0x2c, 0x13, 0xa6, 0x04, 0x04, 0x50, 0x05, 0x2f, 0x72, 0xd9, 0x44,
+		0x55, 0x6e, 0x93, 0x40, 0xed, 0x7e, 0xd4, 0x40, 0x3e, 0x88, 0x3b, 0x8b, 0xb6, 0xeb, 0xc6, 0x5d,
+		0x9c, 0x99, 0xa1, 0xcf, 0x30, 0xb2, 0xdc, 0x48, 0x8a, 0x01, 0xa7, 0x61, 0x77, 0x50, 0x14, 0xf3,
+		0x0c, 0x49, 0x53, 0xb3, 0xb4, 0xb4, 0x28, 0x41, 0x4a, 0x2d, 0xd2, 0x4d, 0x2a, 0x30, 0x31, 0x83,
+		0x03, 0x5e, 0xaa, 0xd3, 0xa3, 0xd1, 0xa1, 0xca, 0x62, 0xf0, 0xe1, 0xf2, 0xff, 0xf0, 0x19, 0xa6,
+		0xde, 0x22, 0x47, 0xb5, 0x28, 0x7d, 0xf7, 0x07, 0x16, 0x0d, 0xb1, 0x55, 0x81, 0x95, 0xe5, 0x1d,
+		0x4d, 0x78, 0xa9, 0x3e, 0xce, 0xe3, 0x1c, 0xf9, 0x47, 0xc8, 0xec, 0xc5, 0xc5, 0x93, 0x4c, 0x34,
+		0x20, 0x6b, 0xee, 0x9a, 0xe6, 0x86, 0x57, 0x58, 0xd5, 0x58, 0xf1, 0x33, 0x10, 0x29, 0x9e, 0x93,
+		0x2f, 0xf5, 0x90, 0x00, 0x17, 0x67, 0x4f, 0x39, 0x18, 0xe1, 0xcf, 0x55, 0x78, 0xbb, 0xe6, 0x29,
+		0x3e, 0x77, 0xd5, 0x48, 0xb7, 0x42, 0x72, 0x53, 0x27, 0xfa, 0x5b, 0xe0, 0x36, 0x14, 0x97, 0xb8,
+		0x9b, 0x3c, 0x09, 0x77, 0xc1, 0x0a, 0xe4, 0xa2, 0x63, 0xfc, 0xbe, 0x5c, 0x17, 0xcf, 0x01, 0xf5,
+		0x03, 0x0f, 0x17, 0xbc, 0x93, 0xdd, 0x5f, 0xe2, 0xf3, 0x08, 0xa8, 0xb1, 0x85, 0xb6, 0x34, 0x3f,
+		0x87, 0x42, 0xa5, 0x42, 0x3b, 0x0e, 0xd6, 0x83, 0x6a, 0xfd, 0x5d, 0xc9, 0x67, 0xd5, 0x51, 0xc9,
+		0x2a, 0x4e, 0x91, 0xb0, 0x59, 0xb2, 0x0f, 0xa2, 0xe6, 0x47, 0x73, 0xc2, 0xa2, 0xae, 0xbb, 0xc8,
+		0x42, 0xa3, 0x2a, 0x27, 0x29, 0x48, 0x8c, 0x54, 0x6c, 0xec, 0x00, 0x2a, 0x42, 0xa3, 0x7a, 0x0f,
+		0x12, 0x66, 0x6b, 0x96, 0xf6, 0xd0, 0x56, 0x4f, 0x49, 0x5c, 0x47, 0xec, 0x05, 0x62, 0x54, 0xb2,
+		0x64, 0x5a, 0x69, 0x1f, 0x19, 0xb4, 0x84, 0x5c, 0xbe, 0x48, 0x8e, 0xfc, 0x58, 0x21, 0xce, 0xfa,
+		0xaa, 0x84, 0xd2, 0xc1, 0x08, 0xb3, 0x87, 0x0f, 0x4f, 0xa3, 0x3a, 0xb6, 0x44, 0xbe, 0x2e, 0x9a,
+		0xdd, 0xb5, 0x44, 0x80, 0xca, 0xf4, 0xc3, 0x6e, 0xba, 0x93, 0x77, 0xe0, 0x53, 0xfb, 0x37, 0xfb,
+		0x88, 0xc3, 0x1f, 0x25, 0xde, 0x3e, 0x11, 0xf4, 0x89, 0xe7, 0xd1, 0x3b, 0xb4, 0x23, 0xcb, 0x70,
+		0xba, 0x35, 0x97, 0x7c, 0xbe, 0x84, 0x13, 0xcf, 0xe0, 0x4d, 0x33, 0x91, 0x71, 0x85, 0xbb, 0x4b,
+		0x97, 0x32, 0x5d, 0xa0, 0xb9, 0x8f, 0xdc, 0x27, 0x5a, 0xeb, 0x71, 0xf1, 0xd5, 0x0d, 0x65, 0xb4,
+		0x22, 0x81, 0xde, 0xa7, 0x58, 0x20, 0x0b, 0x18, 0x11, 0x76, 0x5c, 0xe6, 0x6a, 0x2c, 0x99, 0x69,
+		0xdc, 0xed, 0x67, 0x08, 0x5d, 0x5e, 0xe9, 0x1e, 0x55, 0x70, 0xc1, 0x5a, 0x76, 0x1b, 0x8d, 0x2e,
+		0x0d, 0xf9, 0xcc, 0x30, 0x8c, 0x44, 0x0f, 0x63, 0x8c, 0x42, 0x8a, 0x9f, 0x4c, 0xd1, 0x48, 0x28,
+		0x8a, 0xf5, 0x56, 0x2e, 0x23, 0x12, 0xfe, 0x67, 0x9a, 0x13, 0x65, 0x75, 0x83, 0xf1, 0x3c, 0x98,
+		0x07, 0x6b, 0xb7, 0x27, 0x5b, 0xf0, 0x70, 0xda, 0x30, 0xf8, 0x74, 0x4e, 0x7a, 0x32, 0x84, 0xcc,
+		0x0e, 0xcd, 0x80, 0x8b, 0x82, 0x31, 0x9a, 0x48, 0xcf, 0x75, 0x00, 0x1f, 0x4f, 0xe0, 0x8e, 0xa3,
+		0x6a, 0x2c, 0xd4, 0x73, 0x4c, 0x63, 0x7c, 0xa6, 0x4d, 0x5e, 0xfd, 0x43, 0x3b, 0x27, 0xe1, 0x5e,
+		0xa3, 0xa9, 0x5c, 0x3b, 0x60, 0xdd, 0xc6, 0x8d, 0x5a, 0xf1, 0x3e, 0x89, 0x4b, 0x24, 0xcf, 0x01,
+		0x3a, 0x2d, 0x44, 0xe7, 0xda, 0xe7, 0xa1, 0xac, 0x11, 0x05, 0x0c, 0xa9, 0x7a, 0x82, 0x8c, 0x5c,
+		0x29, 0x68, 0x9c, 0x73, 0x13, 0xcc, 0x67, 0x32, 0x11, 0x5e, 0xe5, 0xcc, 0x8c, 0xf5, 0xa7, 0x52,
+		0x83, 0x9a, 0x70, 0xef, 0xde, 0x55, 0x9c, 0xc7, 0x8a, 0xed, 0xad, 0x28, 0x4a, 0xc5, 0x92, 0x6d,
+		0x8e, 0x47, 0xca, 0xe3, 0xf8, 0x77, 0xb5, 0x26, 0x64, 0x84, 0xc2, 0xf1, 0xd7, 0xae, 0x0c, 0xb9,
+		0x39, 0x0f, 0x43, 0x6b, 0xe9, 0xe0, 0x09, 0x4b, 0xe5, 0xe3, 0x17, 0xa6, 0x68, 0x69, 0x46, 0xf4,
+		0xf0, 0x68, 0x7f, 0x2f, 0x1c, 0x7e, 0x4c, 0xd2, 0xb5, 0xc6, 0x16, 0x85, 0xcf, 0x02, 0x4c, 0x89,
+		0x0b, 0x25, 0xb0, 0xeb, 0xf3, 0x77, 0x08, 0x6a, 0x46, 0x5c, 0xf6, 0x2f, 0xf1, 0x24, 0xc3, 0x4d,
+		0x80, 0x60, 0x4d, 0x69, 0x98, 0xde, 0xc7, 0xa1, 0xf6, 0x4e, 0x18, 0x0c, 0x2a, 0xb0, 0xb2, 0xe0,
+		0x46, 0xe7, 0x49, 0x37, 0xc8, 0x5a, 0x23, 0x24, 0xe3, 0x0f, 0xcc, 0x92, 0xb4, 0x8d, 0xdc, 0x9e
+};
+
+static const uint8_t AES_CBC_ciphertext_1280B[] = {
+		0x91, 0x99, 0x5e, 0x9e, 0x84, 0xff, 0x59, 0x45, 0xc1, 0xf4, 0xbc, 0x9c, 0xb9, 0x30, 0x6c, 0x51,
+		0x73, 0x52, 0xb4, 0x44, 0x09, 0x79, 0xe2, 0x89, 0x75, 0xeb, 0x54, 0x26, 0xce, 0xd8, 0x24, 0x98,
+		0xaa, 0xf8, 0x13, 0x16, 0x68, 0x58, 0xc4, 0x82, 0x0e, 0x31, 0xd3, 0x6a, 0x13, 0x58, 0x31, 0xe9,
+		0x3a, 0xc1, 0x8b, 0xc5, 0x3f, 0x50, 0x42, 0xd1, 0x93, 0xe4, 0x9b, 0x65, 0x2b, 0xf4, 0x1d, 0x9e,
+		0x2d, 0xdb, 0x48, 0xef, 0x9a, 0x01, 0x68, 0xb6, 0xea, 0x7a, 0x2b, 0xad, 0xfe, 0x77, 0x44, 0x7e,
+		0x5a, 0xc5, 0x64, 0xb4, 0xfe, 0x5c, 0x80, 0xf3, 0x20, 0x7e, 0xaf, 0x5b, 0xf8, 0xd1, 0x38, 0xa0,
+		0x8d, 0x09, 0x77, 0x06, 0xfe, 0xf5, 0xf4, 0xe4, 0xee, 0xb8, 0x95, 0x27, 0xed, 0x07, 0xb8, 0xaa,
+		0x25, 0xb4, 0xe1, 0x4c, 0xeb, 0x3f, 0xdb, 0x39, 0x66, 0x28, 0x1b, 0x60, 0x42, 0x8b, 0x99, 0xd9,
+		0x49, 0xd6, 0x8c, 0xa4, 0x9d, 0xd8, 0x93, 0x58, 0x8f, 0xfa, 0xd3, 0xf7, 0x37, 0x9c, 0x88, 0xab,
+		0x16, 0x50, 0xfe, 0x01, 0x1f, 0x88, 0x48, 0xbe, 0x21, 0xa9, 0x90, 0x9e, 0x73, 0xe9, 0x82, 0xf7,
+		0xbf, 0x4b, 0x43, 0xf4, 0xbf, 0x22, 0x3c, 0x45, 0x47, 0x95, 0x5b, 0x49, 0x71, 0x07, 0x1c, 0x8b,
+		0x49, 0xa4, 0xa3, 0x49, 0xc4, 0x5f, 0xb1, 0xf5, 0xe3, 0x6b, 0xf1, 0xdc, 0xea, 0x92, 0x7b, 0x29,
+		0x40, 0xc9, 0x39, 0x5f, 0xdb, 0xbd, 0xf3, 0x6a, 0x09, 0x9b, 0x2a, 0x5e, 0xc7, 0x0b, 0x25, 0x94,
+		0x55, 0x71, 0x9c, 0x7e, 0x0e, 0xb4, 0x08, 0x12, 0x8c, 0x6e, 0x77, 0xb8, 0x29, 0xf1, 0xc6, 0x71,
+		0x04, 0x40, 0x77, 0x18, 0x3f, 0x01, 0x09, 0x9c, 0x23, 0x2b, 0x5d, 0x2a, 0x88, 0x20, 0x23, 0x59,
+		0x74, 0x2a, 0x67, 0x8f, 0xb7, 0xba, 0x38, 0x9f, 0x0f, 0xcf, 0x94, 0xdf, 0xe1, 0x8f, 0x35, 0x5e,
+		0x34, 0x0c, 0x32, 0x92, 0x2b, 0x23, 0x81, 0xf4, 0x73, 0xa0, 0x5a, 0x2a, 0xbd, 0xa6, 0x6b, 0xae,
+		0x43, 0xe2, 0xdc, 0x01, 0xc1, 0xc6, 0xc3, 0x04, 0x06, 0xbb, 0xb0, 0x89, 0xb3, 0x4e, 0xbd, 0x81,
+		0x1b, 0x03, 0x63, 0x93, 0xed, 0x4e, 0xf6, 0xe5, 0x94, 0x6f, 0xd6, 0xf3, 0x20, 0xf3, 0xbc, 0x30,
+		0xc5, 0xd6, 0xbe, 0x1c, 0x05, 0x34, 0x26, 0x4d, 0x46, 0x5e, 0x56, 0x63, 0xfb, 0xdb, 0xcd, 0xed,
+		0xb0, 0x7f, 0x83, 0x94, 0x55, 0x54, 0x2f, 0xab, 0xc9, 0xb7, 0x16, 0x4f, 0x9e, 0x93, 0x25, 0xd7,
+		0x9f, 0x39, 0x2b, 0x63, 0xcf, 0x1e, 0xa3, 0x0e, 0x28, 0x47, 0x8a, 0x5f, 0x40, 0x02, 0x89, 0x1f,
+		0x83, 0xe7, 0x87, 0xd1, 0x90, 0x17, 0xb8, 0x27, 0x64, 0xe1, 0xe1, 0x48, 0x5a, 0x55, 0x74, 0x99,
+		0x27, 0x9d, 0x05, 0x67, 0xda, 0x70, 0x12, 0x8f, 0x94, 0x96, 0xfd, 0x36, 0xa4, 0x1d, 0x22, 0xe5,
+		0x0b, 0xe5, 0x2f, 0x38, 0x55, 0xa3, 0x5d, 0x0b, 0xcf, 0xd4, 0xa9, 0xb8, 0xd6, 0x9a, 0x16, 0x2e,
+		0x6c, 0x4a, 0x25, 0x51, 0x7a, 0x09, 0x48, 0xdd, 0xf0, 0xa3, 0x5b, 0x08, 0x1e, 0x2f, 0x03, 0x91,
+		0x80, 0xe8, 0x0f, 0xe9, 0x5a, 0x2f, 0x90, 0xd3, 0x64, 0xed, 0xd7, 0x51, 0x17, 0x66, 0x53, 0x40,
+		0x43, 0x74, 0xef, 0x0a, 0x0d, 0x49, 0x41, 0xf2, 0x67, 0x6e, 0xea, 0x14, 0xc8, 0x74, 0xd6, 0xa9,
+		0xb9, 0x6a, 0xe3, 0xec, 0x7d, 0xe8, 0x6a, 0x21, 0x3a, 0x52, 0x42, 0xfe, 0x9a, 0x15, 0x6d, 0x60,
+		0x64, 0x88, 0xc5, 0xb2, 0x8b, 0x15, 0x2c, 0xff, 0xe2, 0x35, 0xc3, 0xee, 0x9f, 0xcd, 0x82, 0xd9,
+		0x14, 0x35, 0x2a, 0xb7, 0xf5, 0x2f, 0x7b, 0xbc, 0x01, 0xfd, 0xa8, 0xe0, 0x21, 0x4e, 0x73, 0xf9,
+		0xf2, 0xb0, 0x79, 0xc9, 0x10, 0x52, 0x8f, 0xa8, 0x3e, 0x3b, 0xbe, 0xc5, 0xde, 0xf6, 0x53, 0xe3,
+		0x1c, 0x25, 0x3a, 0x1f, 0x13, 0xbf, 0x13, 0xbb, 0x94, 0xc2, 0x97, 0x43, 0x64, 0x47, 0x8f, 0x76,
+		0xd7, 0xaa, 0xeb, 0xa4, 0x03, 0x50, 0x0c, 0x10, 0x50, 0xd8, 0xf7, 0x75, 0x52, 0x42, 0xe2, 0x94,
+		0x67, 0xf4, 0x60, 0xfb, 0x21, 0x9b, 0x7a, 0x05, 0x50, 0x7c, 0x1b, 0x4a, 0x8b, 0x29, 0xe1, 0xac,
+		0xd7, 0x99, 0xfd, 0x0d, 0x65, 0x92, 0xcd, 0x23, 0xa7, 0x35, 0x8e, 0x13, 0xf2, 0xe4, 0x10, 0x74,
+		0xc6, 0x4f, 0x19, 0xf7, 0x01, 0x0b, 0x46, 0xab, 0xef, 0x8d, 0x4a, 0x4a, 0xfa, 0xda, 0xf3, 0xfb,
+		0x40, 0x28, 0x88, 0xa2, 0x65, 0x98, 0x4d, 0x88, 0xc7, 0xbf, 0x00, 0xc8, 0xd0, 0x91, 0xcb, 0x89,
+		0x2f, 0xb0, 0x85, 0xfc, 0xa1, 0xc1, 0x9e, 0x83, 0x88, 0xad, 0x95, 0xc0, 0x31, 0xa0, 0xad, 0xa2,
+		0x42, 0xb5, 0xe7, 0x55, 0xd4, 0x93, 0x5a, 0x74, 0x4e, 0x41, 0xc3, 0xcf, 0x96, 0x83, 0x46, 0xa1,
+		0xb7, 0x5b, 0xb1, 0x34, 0x67, 0x4e, 0xb1, 0xd7, 0x40, 0x20, 0x72, 0xe9, 0xc8, 0x74, 0xb7, 0xde,
+		0x72, 0x29, 0x77, 0x4c, 0x74, 0x7e, 0xcc, 0x18, 0xa5, 0x8d, 0x79, 0x8c, 0xd6, 0x6e, 0xcb, 0xd9,
+		0xe1, 0x61, 0xe7, 0x36, 0xbc, 0x37, 0xea, 0xee, 0xd8, 0x3c, 0x5e, 0x7c, 0x47, 0x50, 0xd5, 0xec,
+		0x37, 0xc5, 0x63, 0xc3, 0xc9, 0x99, 0x23, 0x9f, 0x64, 0x39, 0xdf, 0x13, 0x96, 0x6d, 0xea, 0x08,
+		0x0c, 0x27, 0x2d, 0xfe, 0x0f, 0xc2, 0xa3, 0x97, 0x04, 0x12, 0x66, 0x0d, 0x94, 0xbf, 0xbe, 0x3e,
+		0xb9, 0xcf, 0x8e, 0xc1, 0x9d, 0xb1, 0x64, 0x17, 0x54, 0x92, 0x3f, 0x0a, 0x51, 0xc8, 0xf5, 0x82,
+		0x98, 0x73, 0x03, 0xc0, 0x5a, 0x51, 0x01, 0x67, 0xb4, 0x01, 0x04, 0x06, 0xbc, 0x37, 0xde, 0x96,
+		0x23, 0x3c, 0xce, 0x98, 0x3f, 0xd6, 0x51, 0x1b, 0x01, 0x83, 0x0a, 0x1c, 0xf9, 0xeb, 0x7e, 0x72,
+		0xa9, 0x51, 0x23, 0xc8, 0xd7, 0x2f, 0x12, 0xbc, 0x08, 0xac, 0x07, 0xe7, 0xa7, 0xe6, 0x46, 0xae,
+		0x54, 0xa3, 0xc2, 0xf2, 0x05, 0x2d, 0x06, 0x5e, 0xfc, 0xe2, 0xa2, 0x23, 0xac, 0x86, 0xf2, 0x54,
+		0x83, 0x4a, 0xb6, 0x48, 0x93, 0xa1, 0x78, 0xc2, 0x07, 0xec, 0x82, 0xf0, 0x74, 0xa9, 0x18, 0xe9,
+		0x53, 0x44, 0x49, 0xc2, 0x94, 0xf8, 0x94, 0x92, 0x08, 0x3f, 0xbf, 0xa6, 0xe5, 0xc6, 0x03, 0x8a,
+		0xc6, 0x90, 0x48, 0x6c, 0xee, 0xbd, 0x44, 0x92, 0x1f, 0x2a, 0xce, 0x1d, 0xb8, 0x31, 0xa2, 0x9d,
+		0x24, 0x93, 0xa8, 0x9f, 0x36, 0x00, 0x04, 0x7b, 0xcb, 0x93, 0x59, 0xa1, 0x53, 0xdb, 0x13, 0x7a,
+		0x54, 0xb1, 0x04, 0xdb, 0xce, 0x48, 0x4f, 0xe5, 0x2f, 0xcb, 0xdf, 0x8f, 0x50, 0x7c, 0xfc, 0x76,
+		0x80, 0xb4, 0xdc, 0x3b, 0xc8, 0x98, 0x95, 0xf5, 0x50, 0xba, 0x70, 0x5a, 0x97, 0xd5, 0xfc, 0x98,
+		0x4d, 0xf3, 0x61, 0x0f, 0xcf, 0xac, 0x49, 0x0a, 0xdb, 0xc1, 0x42, 0x8f, 0xb6, 0x29, 0xd5, 0x65,
+		0xef, 0x83, 0xf1, 0x30, 0x4b, 0x84, 0xd0, 0x69, 0xde, 0xd2, 0x99, 0xe5, 0xec, 0xd3, 0x90, 0x86,
+		0x39, 0x2a, 0x6e, 0xd5, 0x32, 0xe3, 0x0d, 0x2d, 0x01, 0x8b, 0x17, 0x55, 0x1d, 0x65, 0x57, 0xbf,
+		0xd8, 0x75, 0xa4, 0x85, 0xb6, 0x4e, 0x35, 0x14, 0x58, 0xe4, 0x89, 0xb8, 0x7a, 0x58, 0x86, 0x0c,
+		0xbd, 0x8b, 0x05, 0x7b, 0x63, 0xc0, 0x86, 0x80, 0x33, 0x46, 0xd4, 0x9b, 0xb6, 0x0a, 0xeb, 0x6c,
+		0xae, 0xd6, 0x57, 0x7a, 0xc7, 0x59, 0x33, 0xa0, 0xda, 0xa4, 0x12, 0xbf, 0x52, 0x22, 0x05, 0x8d,
+		0xeb, 0xee, 0xd5, 0xec, 0xea, 0x29, 0x9b, 0x76, 0x95, 0x50, 0x6d, 0x99, 0xe1, 0x45, 0x63, 0x09,
+		0x16, 0x5f, 0xb0, 0xf2, 0x5b, 0x08, 0x33, 0xdd, 0x8f, 0xb7, 0x60, 0x7a, 0x8e, 0xc6, 0xfc, 0xac,
+		0xa9, 0x56, 0x2c, 0xa9, 0x8b, 0x74, 0x33, 0xad, 0x2a, 0x7e, 0x96, 0xb6, 0xba, 0x22, 0x28, 0xcf,
+		0x4d, 0x96, 0xb7, 0xd1, 0xfa, 0x99, 0x4a, 0x61, 0xe6, 0x84, 0xd1, 0x94, 0xca, 0xf5, 0x86, 0xb0,
+		0xba, 0x34, 0x7a, 0x04, 0xcc, 0xd4, 0x81, 0xcd, 0xd9, 0x86, 0xb6, 0xe0, 0x5a, 0x6f, 0x9b, 0x99,
+		0xf0, 0xdf, 0x49, 0xae, 0x6d, 0xc2, 0x54, 0x67, 0xe0, 0xb4, 0x34, 0x2d, 0x1c, 0x46, 0xdf, 0x73,
+		0x3b, 0x45, 0x43, 0xe7, 0x1f, 0xa3, 0x36, 0x35, 0x25, 0x33, 0xd9, 0xc0, 0x54, 0x38, 0x6e, 0x6b,
+		0x80, 0xcf, 0x50, 0xa4, 0xb6, 0x21, 0x17, 0xfd, 0x9b, 0x5c, 0x36, 0xca, 0xcc, 0x73, 0x73, 0xad,
+		0xe0, 0x57, 0x77, 0x90, 0x0e, 0x7f, 0x0f, 0x87, 0x7f, 0xdb, 0x73, 0xbf, 0xda, 0xc2, 0xb3, 0x05,
+		0x22, 0x06, 0xf5, 0xa3, 0xfc, 0x1e, 0x8f, 0xda, 0xcf, 0x49, 0xd6, 0xb3, 0x66, 0x2c, 0xb5, 0x00,
+		0xaf, 0x85, 0x6e, 0xb8, 0x5b, 0x8c, 0xa1, 0xa4, 0x21, 0xce, 0x40, 0xf3, 0x98, 0xac, 0xec, 0x88,
+		0x62, 0x43, 0x2a, 0xac, 0xca, 0xcf, 0xb9, 0x30, 0xeb, 0xfc, 0xef, 0xf0, 0x6e, 0x64, 0x6d, 0xe7,
+		0x54, 0x88, 0x6b, 0x22, 0x29, 0xbe, 0xa5, 0x8c, 0x31, 0x23, 0x3b, 0x4a, 0x80, 0x37, 0xe6, 0xd0,
+		0x05, 0xfc, 0x10, 0x0e, 0xdd, 0xbb, 0x00, 0xc5, 0x07, 0x20, 0x59, 0xd3, 0x41, 0x17, 0x86, 0x46,
+		0xab, 0x68, 0xf6, 0x48, 0x3c, 0xea, 0x5a, 0x06, 0x30, 0x21, 0x19, 0xed, 0x74, 0xbe, 0x0b, 0x97,
+		0xee, 0x91, 0x35, 0x94, 0x1f, 0xcb, 0x68, 0x7f, 0xe4, 0x48, 0xb0, 0x16, 0xfb, 0xf0, 0x74, 0xdb,
+		0x06, 0x59, 0x2e, 0x5a, 0x9c, 0xce, 0x8f, 0x7d, 0xba, 0x48, 0xd5, 0x3f, 0x5c, 0xb0, 0xc2, 0x33,
+		0x48, 0x60, 0x17, 0x08, 0x85, 0xba, 0xff, 0xb9, 0x34, 0x0a, 0x3d, 0x8f, 0x21, 0x13, 0x12, 0x1b
+};
+
+static const uint8_t AES_CBC_ciphertext_1536B[] = {
+		0x89, 0x93, 0x05, 0x99, 0xa9, 0xed, 0xea, 0x62, 0xc9, 0xda, 0x51, 0x15, 0xce, 0x42, 0x91, 0xc3,
+		0x80, 0xc8, 0x03, 0x88, 0xc2, 0x63, 0xda, 0x53, 0x1a, 0xf3, 0xeb, 0xd5, 0xba, 0x6f, 0x23, 0xb2,
+		0xed, 0x8f, 0x89, 0xb1, 0xb3, 0xca, 0x90, 0x7a, 0xdd, 0x3f, 0xf6, 0xca, 0x86, 0x58, 0x54, 0xbc,
+		0xab, 0x0f, 0xf4, 0xab, 0x6d, 0x5d, 0x42, 0xd0, 0x17, 0x49, 0x17, 0xd1, 0x93, 0xea, 0xe8, 0x22,
+		0xc1, 0x34, 0x9f, 0x3a, 0x3b, 0xaa, 0xe9, 0x1b, 0x93, 0xff, 0x6b, 0x68, 0xba, 0xe6, 0xd2, 0x39,
+		0x3d, 0x55, 0x34, 0x8f, 0x98, 0x86, 0xb4, 0xd8, 0x7c, 0x0d, 0x3e, 0x01, 0x63, 0x04, 0x01, 0xff,
+		0x16, 0x0f, 0x51, 0x5f, 0x73, 0x53, 0xf0, 0x3a, 0x38, 0xb4, 0x4d, 0x8d, 0xaf, 0xa3, 0xca, 0x2f,
+		0x6f, 0xdf, 0xc0, 0x41, 0x6c, 0x48, 0x60, 0x1a, 0xe4, 0xe7, 0x8a, 0x65, 0x6f, 0x8d, 0xd7, 0xe1,
+		0x10, 0xab, 0x78, 0x5b, 0xb9, 0x69, 0x1f, 0xe0, 0x5c, 0xf1, 0x19, 0x12, 0x21, 0xc7, 0x51, 0xbc,
+		0x61, 0x5f, 0xc0, 0x36, 0x17, 0xc0, 0x28, 0xd9, 0x51, 0xcb, 0x43, 0xd9, 0xfa, 0xd1, 0xad, 0x79,
+		0x69, 0x86, 0x49, 0xc5, 0xe5, 0x69, 0x27, 0xce, 0x22, 0xd0, 0xe1, 0x6a, 0xf9, 0x02, 0xca, 0x6c,
+		0x34, 0xc7, 0xb8, 0x02, 0xc1, 0x38, 0x7f, 0xd5, 0x15, 0xf5, 0xd6, 0xeb, 0xf9, 0x30, 0x40, 0x43,
+		0xea, 0x87, 0xde, 0x35, 0xf6, 0x83, 0x59, 0x09, 0x68, 0x62, 0x00, 0x87, 0xb8, 0xe7, 0xca, 0x05,
+		0x0f, 0xac, 0x42, 0x58, 0x45, 0xaa, 0xc9, 0x9b, 0xfd, 0x2a, 0xda, 0x65, 0x33, 0x93, 0x9d, 0xc6,
+		0x93, 0x8d, 0xe2, 0xc5, 0x71, 0xc1, 0x5c, 0x13, 0xde, 0x7b, 0xd4, 0xb9, 0x4c, 0x35, 0x61, 0x85,
+		0x90, 0x78, 0xf7, 0x81, 0x98, 0x45, 0x99, 0x24, 0x58, 0x73, 0x28, 0xf8, 0x31, 0xab, 0x54, 0x2e,
+		0xc0, 0x38, 0x77, 0x25, 0x5c, 0x06, 0x9c, 0xc3, 0x69, 0x21, 0x92, 0x76, 0xe1, 0x16, 0xdc, 0xa9,
+		0xee, 0xb6, 0x80, 0x66, 0x43, 0x11, 0x24, 0xb3, 0x07, 0x17, 0x89, 0x0f, 0xcb, 0xe0, 0x60, 0xa8,
+		0x9d, 0x06, 0x4b, 0x6e, 0x72, 0xb7, 0xbc, 0x4f, 0xb8, 0xc0, 0x80, 0xa2, 0xfb, 0x46, 0x5b, 0x8f,
+		0x11, 0x01, 0x92, 0x9d, 0x37, 0x09, 0x98, 0xc8, 0x0a, 0x46, 0xae, 0x12, 0xac, 0x61, 0x3f, 0xe7,
+		0x41, 0x1a, 0xaa, 0x2e, 0xdc, 0xd7, 0x2a, 0x47, 0xee, 0xdf, 0x08, 0xd1, 0xff, 0xea, 0x13, 0xc6,
+		0x05, 0xdb, 0x29, 0xcc, 0x03, 0xba, 0x7b, 0x6d, 0x40, 0xc1, 0xc9, 0x76, 0x75, 0x03, 0x7a, 0x71,
+		0xc9, 0x5f, 0xd9, 0xe0, 0x61, 0x69, 0x36, 0x8f, 0xb2, 0xbc, 0x28, 0xf3, 0x90, 0x71, 0xda, 0x5f,
+		0x08, 0xd5, 0x0d, 0xc1, 0xe6, 0xbd, 0x2b, 0xc6, 0x6c, 0x42, 0xfd, 0xbf, 0x10, 0xe8, 0x5f, 0x87,
+		0x3d, 0x21, 0x42, 0x85, 0x01, 0x0a, 0xbf, 0x8e, 0x49, 0xd3, 0x9c, 0x89, 0x3b, 0xea, 0xe1, 0xbf,
+		0xe9, 0x9b, 0x5e, 0x0e, 0xb8, 0xeb, 0xcd, 0x3a, 0xf6, 0x29, 0x41, 0x35, 0xdd, 0x9b, 0x13, 0x24,
+		0xe0, 0x1d, 0x8a, 0xcb, 0x20, 0xf8, 0x41, 0x51, 0x3e, 0x23, 0x8c, 0x67, 0x98, 0x39, 0x53, 0x77,
+		0x2a, 0x68, 0xf4, 0x3c, 0x7e, 0xd6, 0xc4, 0x6e, 0xf1, 0x53, 0xe9, 0xd8, 0x5c, 0xc1, 0xa9, 0x38,
+		0x6f, 0x5e, 0xe4, 0xd4, 0x29, 0x1c, 0x6c, 0xee, 0x2f, 0xea, 0xde, 0x61, 0x71, 0x5a, 0xea, 0xce,
+		0x23, 0x6e, 0x1b, 0x16, 0x43, 0xb7, 0xc0, 0xe3, 0x87, 0xa1, 0x95, 0x1e, 0x97, 0x4d, 0xea, 0xa6,
+		0xf7, 0x25, 0xac, 0x82, 0x2a, 0xd3, 0xa6, 0x99, 0x75, 0xdd, 0xc1, 0x55, 0x32, 0x6b, 0xea, 0x33,
+		0x88, 0xce, 0x06, 0xac, 0x15, 0x39, 0x19, 0xa3, 0x59, 0xaf, 0x7a, 0x1f, 0xd9, 0x72, 0x5e, 0xf7,
+		0x4c, 0xf3, 0x5d, 0x6b, 0xf2, 0x16, 0x92, 0xa8, 0x9e, 0x3d, 0xd4, 0x4c, 0x72, 0x55, 0x4e, 0x4a,
+		0xf7, 0x8b, 0x2f, 0x67, 0x5a, 0x90, 0xb7, 0xcf, 0x16, 0xd3, 0x7b, 0x5a, 0x9a, 0xc8, 0x9f, 0xbf,
+		0x01, 0x76, 0x3b, 0x86, 0x2c, 0x2a, 0x78, 0x10, 0x70, 0x05, 0x38, 0xf9, 0xdd, 0x2a, 0x1d, 0x00,
+		0x25, 0xb7, 0x10, 0xac, 0x3b, 0x3c, 0x4d, 0x3c, 0x01, 0x68, 0x3c, 0x5a, 0x29, 0xc2, 0xa0, 0x1b,
+		0x95, 0x67, 0xf9, 0x0a, 0x60, 0xb7, 0x11, 0x9c, 0x40, 0x45, 0xd7, 0xb0, 0xda, 0x49, 0x87, 0xcd,
+		0xb0, 0x9b, 0x61, 0x8c, 0xf4, 0x0d, 0x94, 0x1d, 0x79, 0x66, 0x13, 0x0b, 0xc6, 0x6b, 0x19, 0xee,
+		0xa0, 0x6b, 0x64, 0x7d, 0xc4, 0xff, 0x98, 0x72, 0x60, 0xab, 0x7f, 0x0f, 0x4d, 0x5d, 0x6b, 0xc3,
+		0xba, 0x5e, 0x0d, 0x04, 0xd9, 0x59, 0x17, 0xd0, 0x64, 0xbe, 0xfb, 0x58, 0xfc, 0xed, 0x18, 0xf6,
+		0xac, 0x19, 0xa4, 0xfd, 0x16, 0x59, 0x80, 0x58, 0xb8, 0x0f, 0x79, 0x24, 0x60, 0x18, 0x62, 0xa9,
+		0xa3, 0xa0, 0xe8, 0x81, 0xd6, 0xec, 0x5b, 0xfe, 0x5b, 0xb8, 0xa4, 0x00, 0xa9, 0xd0, 0x90, 0x17,
+		0xe5, 0x50, 0x3d, 0x2b, 0x12, 0x6e, 0x2a, 0x13, 0x65, 0x7c, 0xdf, 0xdf, 0xa7, 0xdd, 0x9f, 0x78,
+		0x5f, 0x8f, 0x4e, 0x90, 0xa6, 0x10, 0xe4, 0x7b, 0x68, 0x6b, 0xfd, 0xa9, 0x6d, 0x47, 0xfa, 0xec,
+		0x42, 0x35, 0x07, 0x12, 0x3e, 0x78, 0x23, 0x15, 0xff, 0xe2, 0x65, 0xc7, 0x47, 0x89, 0x2f, 0x97,
+		0x7c, 0xd7, 0x6b, 0x69, 0x35, 0x79, 0x6f, 0x85, 0xb4, 0xa9, 0x75, 0x04, 0x32, 0x9a, 0xfe, 0xf0,
+		0xce, 0xe3, 0xf1, 0xab, 0x15, 0x47, 0xe4, 0x9c, 0xc1, 0x48, 0x32, 0x3c, 0xbe, 0x44, 0x72, 0xc9,
+		0xaa, 0x50, 0x37, 0xa6, 0xbe, 0x41, 0xcf, 0xe8, 0x17, 0x4e, 0x37, 0xbe, 0xf1, 0x34, 0x2c, 0xd9,
+		0x60, 0x48, 0x09, 0xa5, 0x26, 0x00, 0x31, 0x77, 0x4e, 0xac, 0x7c, 0x89, 0x75, 0xe3, 0xde, 0x26,
+		0x4c, 0x32, 0x54, 0x27, 0x8e, 0x92, 0x26, 0x42, 0x85, 0x76, 0x01, 0x76, 0x62, 0x4c, 0x29, 0xe9,
+		0x38, 0x05, 0x51, 0x54, 0x97, 0xa3, 0x03, 0x59, 0x5e, 0xec, 0x0c, 0xe4, 0x96, 0xb7, 0x15, 0xa8,
+		0x41, 0x06, 0x2b, 0x78, 0x95, 0x24, 0xf6, 0x32, 0xc5, 0xec, 0xd7, 0x89, 0x28, 0x1e, 0xec, 0xb1,
+		0xc7, 0x21, 0x0c, 0xd3, 0x80, 0x7c, 0x5a, 0xe6, 0xb1, 0x3a, 0x52, 0x33, 0x84, 0x4e, 0x32, 0x6e,
+		0x7a, 0xf6, 0x43, 0x15, 0x5b, 0xa6, 0xba, 0xeb, 0xa8, 0xe4, 0xff, 0x4f, 0xbd, 0xbd, 0xa8, 0x5e,
+		0xbe, 0x27, 0xaf, 0xc5, 0xf7, 0x9e, 0xdf, 0x48, 0x22, 0xca, 0x6a, 0x0b, 0x3c, 0xd7, 0xe0, 0xdc,
+		0xf3, 0x71, 0x08, 0xdc, 0x28, 0x13, 0x08, 0xf2, 0x08, 0x1d, 0x9d, 0x7b, 0xd9, 0xde, 0x6f, 0xe6,
+		0xe8, 0x88, 0x18, 0xc2, 0xcd, 0x93, 0xc5, 0x38, 0x21, 0x68, 0x4c, 0x9a, 0xfb, 0xb6, 0x18, 0x16,
+		0x73, 0x2c, 0x1d, 0x6f, 0x95, 0xfb, 0x65, 0x4f, 0x7c, 0xec, 0x8d, 0x6c, 0xa8, 0xc0, 0x55, 0x28,
+		0xc6, 0xc3, 0xea, 0xeb, 0x05, 0xf5, 0x65, 0xeb, 0x53, 0xe1, 0x54, 0xef, 0xb8, 0x64, 0x98, 0x2d,
+		0x98, 0x9e, 0xc8, 0xfe, 0xa2, 0x07, 0x30, 0xf7, 0xf7, 0xae, 0xdb, 0x32, 0xf8, 0x71, 0x9d, 0x06,
+		0xdf, 0x9b, 0xda, 0x61, 0x7d, 0xdb, 0xae, 0x06, 0x24, 0x63, 0x74, 0xb6, 0xf3, 0x1b, 0x66, 0x09,
+		0x60, 0xff, 0x2b, 0x29, 0xf5, 0xa9, 0x9d, 0x61, 0x5d, 0x55, 0x10, 0x82, 0x21, 0xbb, 0x64, 0x0d,
+		0xef, 0x5c, 0xe3, 0x30, 0x1b, 0x60, 0x1e, 0x5b, 0xfe, 0x6c, 0xf5, 0x15, 0xa3, 0x86, 0x27, 0x58,
+		0x46, 0x00, 0x20, 0xcb, 0x86, 0x9a, 0x52, 0x29, 0x20, 0x68, 0x4d, 0x67, 0x88, 0x70, 0xc2, 0x31,
+		0xd8, 0xbb, 0xa5, 0xa7, 0x88, 0x7f, 0x66, 0xbc, 0xaa, 0x0f, 0xe1, 0x78, 0x7b, 0x97, 0x3c, 0xb7,
+		0xd7, 0xd8, 0x04, 0xe0, 0x09, 0x60, 0xc8, 0xd0, 0x9e, 0xe5, 0x6b, 0x31, 0x7f, 0x88, 0xfe, 0xc3,
+		0xfd, 0x89, 0xec, 0x76, 0x4b, 0xb3, 0xa7, 0x37, 0x03, 0xb7, 0xc6, 0x10, 0x7c, 0x9d, 0x0c, 0x75,
+		0xd3, 0x08, 0x14, 0x94, 0x03, 0x42, 0x25, 0x26, 0x85, 0xf7, 0xf0, 0x90, 0x06, 0x3e, 0x6f, 0x60,
+		0x52, 0x55, 0xd5, 0x0f, 0x79, 0x64, 0x69, 0x69, 0x46, 0xf9, 0x7f, 0x7f, 0x03, 0xf1, 0x1f, 0xdb,
+		0x39, 0x05, 0xba, 0x4a, 0x8f, 0x17, 0xe7, 0xba, 0xe2, 0x07, 0x7c, 0x1d, 0x9e, 0xbc, 0x94, 0xc0,
+		0x61, 0x59, 0x8e, 0x72, 0xaf, 0xfc, 0x99, 0xe4, 0xd5, 0xa8, 0xee, 0x0a, 0x48, 0x2d, 0x82, 0x8b,
+		0x34, 0x54, 0x8a, 0xce, 0xc7, 0xfa, 0xdd, 0xba, 0x54, 0xdf, 0xb3, 0x30, 0x33, 0x73, 0x2e, 0xd5,
+		0x52, 0xab, 0x49, 0x91, 0x4e, 0x0a, 0xd6, 0x2f, 0x67, 0xe4, 0xdd, 0x64, 0x48, 0x16, 0xd9, 0x85,
+		0xaa, 0x52, 0xa5, 0x0b, 0xd3, 0xb4, 0x2d, 0x77, 0x5e, 0x52, 0x77, 0x17, 0xcf, 0xbe, 0x88, 0x04,
+		0x01, 0x52, 0xe2, 0xf1, 0x46, 0xe2, 0x91, 0x30, 0x65, 0xcf, 0xc0, 0x65, 0x45, 0xc3, 0x7e, 0xf4,
+		0x2e, 0xb5, 0xaf, 0x6f, 0xab, 0x1a, 0xfa, 0x70, 0x35, 0xb8, 0x4f, 0x2d, 0x78, 0x90, 0x33, 0xb5,
+		0x9a, 0x67, 0xdb, 0x2f, 0x28, 0x32, 0xb6, 0x54, 0xab, 0x4c, 0x6b, 0x85, 0xed, 0x6c, 0x3e, 0x05,
+		0x2a, 0xc7, 0x32, 0xe8, 0xf5, 0xa3, 0x7b, 0x4e, 0x7b, 0x58, 0x24, 0x73, 0xf7, 0xfd, 0xc7, 0xc8,
+		0x6c, 0x71, 0x68, 0xb1, 0xf6, 0xc5, 0x9e, 0x1e, 0xe3, 0x5c, 0x25, 0xc0, 0x5b, 0x3e, 0x59, 0xa1,
+		0x18, 0x5a, 0xe8, 0xb5, 0xd1, 0x44, 0x13, 0xa3, 0xe6, 0x05, 0x76, 0xd2, 0x8d, 0x6e, 0x54, 0x68,
+		0x0c, 0xa4, 0x7b, 0x8b, 0xd3, 0x8c, 0x42, 0x13, 0x87, 0xda, 0xdf, 0x8f, 0xa5, 0x83, 0x7a, 0x42,
+		0x99, 0xb7, 0xeb, 0xe2, 0x79, 0xe0, 0xdb, 0xda, 0x33, 0xa8, 0x50, 0x3a, 0xd7, 0xe7, 0xd3, 0x61,
+		0x18, 0xb8, 0xaa, 0x2d, 0xc8, 0xd8, 0x2c, 0x28, 0xe5, 0x97, 0x0a, 0x7c, 0x6c, 0x7f, 0x09, 0xd7,
+		0x88, 0x80, 0xac, 0x12, 0xed, 0xf8, 0xc6, 0xb5, 0x2d, 0xd6, 0x63, 0x9b, 0x98, 0x35, 0x26, 0xde,
+		0xf6, 0x31, 0xee, 0x7e, 0xa0, 0xfb, 0x16, 0x98, 0xb1, 0x96, 0x1d, 0xee, 0xe3, 0x2f, 0xfb, 0x41,
+		0xdd, 0xea, 0x10, 0x1e, 0x03, 0x89, 0x18, 0xd2, 0x47, 0x0c, 0xa0, 0x57, 0xda, 0x76, 0x3a, 0x37,
+		0x2c, 0xe4, 0xf9, 0x77, 0xc8, 0x43, 0x5f, 0xcb, 0xd6, 0x85, 0xf7, 0x22, 0xe4, 0x32, 0x25, 0xa8,
+		0xdc, 0x21, 0xc0, 0xf5, 0x95, 0xb2, 0xf8, 0x83, 0xf0, 0x65, 0x61, 0x15, 0x48, 0x94, 0xb7, 0x03,
+		0x7f, 0x66, 0xa1, 0x39, 0x1f, 0xdd, 0xce, 0x96, 0xfe, 0x58, 0x81, 0x3d, 0x41, 0x11, 0x87, 0x13,
+		0x26, 0x1b, 0x6d, 0xf3, 0xca, 0x2e, 0x2c, 0x76, 0xd3, 0x2f, 0x6d, 0x49, 0x70, 0x53, 0x05, 0x96,
+		0xcc, 0x30, 0x2b, 0x83, 0xf2, 0xc6, 0xb2, 0x4b, 0x22, 0x13, 0x95, 0x42, 0xeb, 0x56, 0x4d, 0x22,
+		0xe6, 0x43, 0x6f, 0xba, 0xe7, 0x3b, 0xe5, 0x59, 0xce, 0x57, 0x88, 0x85, 0xb6, 0xbf, 0x15, 0x37,
+		0xb3, 0x7a, 0x7e, 0xc4, 0xbc, 0x99, 0xfc, 0xe4, 0x89, 0x00, 0x68, 0x39, 0xbc, 0x5a, 0xba, 0xab,
+		0x52, 0xab, 0xe6, 0x81, 0xfd, 0x93, 0x62, 0xe9, 0xb7, 0x12, 0xd1, 0x18, 0x1a, 0xb9, 0x55, 0x4a,
+		0x0f, 0xae, 0x35, 0x11, 0x04, 0x27, 0xf3, 0x42, 0x4e, 0xca, 0xdf, 0x9f, 0x12, 0x62, 0xea, 0x03,
+		0xc0, 0xa9, 0x22, 0x7b, 0x6c, 0x6c, 0xe3, 0xdf, 0x16, 0xad, 0x03, 0xc9, 0xfe, 0xa4, 0xdd, 0x4f
+};
+
+static const uint8_t AES_CBC_ciphertext_1792B[] = {
+		0x59, 0xcc, 0xfe, 0x8f, 0xb4, 0x9d, 0x0e, 0xd1, 0x85, 0xfc, 0x9b, 0x43, 0xc1, 0xb7, 0x54, 0x67,
+		0x01, 0xef, 0xb8, 0x71, 0x36, 0xdb, 0x50, 0x48, 0x7a, 0xea, 0xcf, 0xce, 0xba, 0x30, 0x10, 0x2e,
+		0x96, 0x2b, 0xfd, 0xcf, 0x00, 0xe3, 0x1f, 0xac, 0x66, 0x14, 0x30, 0x86, 0x49, 0xdb, 0x01, 0x8b,
+		0x07, 0xdd, 0x00, 0x9d, 0x0d, 0x5c, 0x19, 0x11, 0xe8, 0x44, 0x2b, 0x25, 0x70, 0xed, 0x7c, 0x33,
+		0x0d, 0xe3, 0x34, 0x93, 0x63, 0xad, 0x26, 0xb1, 0x11, 0x91, 0x34, 0x2e, 0x1d, 0x50, 0xaa, 0xd4,
+		0xef, 0x3a, 0x6d, 0xd7, 0x33, 0x20, 0x0d, 0x3f, 0x9b, 0xdd, 0xc3, 0xa5, 0xc5, 0xf1, 0x99, 0xdc,
+		0xea, 0x52, 0xda, 0x55, 0xea, 0xa2, 0x7a, 0xc5, 0x78, 0x44, 0x4a, 0x02, 0x33, 0x19, 0x62, 0x37,
+		0xf8, 0x8b, 0xd1, 0x0c, 0x21, 0xdf, 0x40, 0x19, 0x81, 0xea, 0xfb, 0x1c, 0xa7, 0xcc, 0x60, 0xfe,
+		0x63, 0x25, 0x8f, 0xf3, 0x73, 0x0f, 0x45, 0xe6, 0x6a, 0x18, 0xbf, 0xbe, 0xad, 0x92, 0x2a, 0x1e,
+		0x15, 0x65, 0x6f, 0xef, 0x92, 0xcd, 0x0e, 0x19, 0x3d, 0x42, 0xa8, 0xfc, 0x0d, 0x32, 0x58, 0xe0,
+		0x56, 0x9f, 0xd6, 0x9b, 0x8b, 0xec, 0xe0, 0x45, 0x4d, 0x7e, 0x73, 0x87, 0xff, 0x74, 0x92, 0x59,
+		0x60, 0x13, 0x93, 0xda, 0xec, 0xbf, 0xfa, 0x20, 0xb6, 0xe7, 0xdf, 0xc7, 0x10, 0xf5, 0x79, 0xb4,
+		0xd7, 0xac, 0xaf, 0x2b, 0x37, 0x52, 0x30, 0x1d, 0xbe, 0x0f, 0x60, 0x77, 0x3d, 0x03, 0x63, 0xa9,
+		0xae, 0xb1, 0xf3, 0xca, 0xca, 0xb4, 0x21, 0xd7, 0x6f, 0x2e, 0x5e, 0x9b, 0x68, 0x53, 0x80, 0xab,
+		0x30, 0x23, 0x0a, 0x72, 0x6b, 0xb1, 0xd8, 0x25, 0x5d, 0x3a, 0x62, 0x9b, 0x4f, 0x59, 0x3b, 0x79,
+		0xa8, 0x9e, 0x08, 0x6d, 0x37, 0xb0, 0xfc, 0x42, 0x51, 0x25, 0x86, 0xbd, 0x54, 0x5a, 0x95, 0x20,
+		0x6c, 0xac, 0xb9, 0x30, 0x1c, 0x03, 0xc9, 0x49, 0x38, 0x55, 0x31, 0x49, 0xed, 0xa9, 0x0e, 0xc3,
+		0x65, 0xb4, 0x68, 0x6b, 0x07, 0x4c, 0x0a, 0xf9, 0x21, 0x69, 0x7c, 0x9f, 0x28, 0x80, 0xe9, 0x49,
+		0x22, 0x7c, 0xec, 0x97, 0xf7, 0x70, 0xb4, 0xb8, 0x25, 0xe7, 0x80, 0x2c, 0x43, 0x24, 0x8a, 0x2e,
+		0xac, 0xa2, 0x84, 0x20, 0xe7, 0xf4, 0x6b, 0x86, 0x37, 0x05, 0xc7, 0x59, 0x04, 0x49, 0x2a, 0x99,
+		0x80, 0x46, 0x32, 0x19, 0xe6, 0x30, 0xce, 0xc0, 0xef, 0x6e, 0xec, 0xe5, 0x2f, 0x24, 0xc1, 0x78,
+		0x45, 0x02, 0xd3, 0x64, 0x99, 0xf5, 0xc7, 0xbc, 0x8f, 0x8c, 0x75, 0xb1, 0x0a, 0xc8, 0xc3, 0xbd,
+		0x5e, 0x7e, 0xbd, 0x0e, 0xdf, 0x4b, 0x96, 0x6a, 0xfd, 0x03, 0xdb, 0xd1, 0x31, 0x1e, 0x27, 0xf9,
+		0xe5, 0x83, 0x9a, 0xfc, 0x13, 0x4c, 0xd3, 0x04, 0xdb, 0xdb, 0x3f, 0x35, 0x93, 0x4e, 0x14, 0x6b,
+		0x00, 0x5c, 0xb6, 0x11, 0x50, 0xee, 0x61, 0x5c, 0x10, 0x5c, 0xd0, 0x90, 0x02, 0x2e, 0x12, 0xe0,
+		0x50, 0x44, 0xad, 0x75, 0xcd, 0x94, 0xcf, 0x92, 0xcb, 0xe3, 0xe8, 0x77, 0x4b, 0xd7, 0x1a, 0x7c,
+		0xdd, 0x6b, 0x49, 0x21, 0x7c, 0xe8, 0x2c, 0x25, 0x49, 0x86, 0x1e, 0x54, 0xae, 0xfc, 0x0e, 0x80,
+		0xb1, 0xd5, 0xa5, 0x23, 0xcf, 0xcc, 0x0e, 0x11, 0xe2, 0x7c, 0x3c, 0x25, 0x78, 0x64, 0x03, 0xa1,
+		0xdd, 0x9f, 0x74, 0x12, 0x7b, 0x21, 0xb5, 0x73, 0x15, 0x3c, 0xed, 0xad, 0x07, 0x62, 0x21, 0x79,
+		0xd4, 0x2f, 0x0d, 0x72, 0xe9, 0x7c, 0x6b, 0x96, 0x6e, 0xe5, 0x36, 0x4a, 0xd2, 0x38, 0xe1, 0xff,
+		0x6e, 0x26, 0xa4, 0xac, 0x83, 0x07, 0xe6, 0x67, 0x74, 0x6c, 0xec, 0x8b, 0x4b, 0x79, 0x33, 0x50,
+		0x2f, 0x8f, 0xa0, 0x8f, 0xfa, 0x38, 0x6a, 0xa2, 0x3a, 0x42, 0x85, 0x15, 0x90, 0xd0, 0xb3, 0x0d,
+		0x8a, 0xe4, 0x60, 0x03, 0xef, 0xf9, 0x65, 0x8a, 0x4e, 0x50, 0x8c, 0x65, 0xba, 0x61, 0x16, 0xc3,
+		0x93, 0xb7, 0x75, 0x21, 0x98, 0x25, 0x60, 0x6e, 0x3d, 0x68, 0xba, 0x7c, 0xe4, 0xf3, 0xd9, 0x9b,
+		0xfb, 0x7a, 0xed, 0x1f, 0xb3, 0x4b, 0x88, 0x74, 0x2c, 0xb8, 0x8c, 0x22, 0x95, 0xce, 0x90, 0xf1,
+		0xdb, 0x80, 0xa6, 0x39, 0xae, 0x82, 0xa1, 0xef, 0x75, 0xec, 0xfe, 0xf1, 0xe8, 0x04, 0xfd, 0x99,
+		0x1b, 0x5f, 0x45, 0x87, 0x4f, 0xfa, 0xa2, 0x3e, 0x3e, 0xb5, 0x01, 0x4b, 0x46, 0xeb, 0x13, 0x9a,
+		0xe4, 0x7d, 0x03, 0x87, 0xb1, 0x59, 0x91, 0x8e, 0x37, 0xd3, 0x16, 0xce, 0xef, 0x4b, 0xe9, 0x46,
+		0x8d, 0x2a, 0x50, 0x2f, 0x41, 0xd3, 0x7b, 0xcf, 0xf0, 0xb7, 0x8b, 0x65, 0x0f, 0xa3, 0x27, 0x10,
+		0xe9, 0xa9, 0xe9, 0x2c, 0xbe, 0xbb, 0x82, 0xe3, 0x7b, 0x0b, 0x81, 0x3e, 0xa4, 0x6a, 0x4f, 0x3b,
+		0xd5, 0x61, 0xf8, 0x47, 0x04, 0x99, 0x5b, 0xff, 0xf3, 0x14, 0x6e, 0x57, 0x5b, 0xbf, 0x1b, 0xb4,
+		0x3f, 0xf9, 0x31, 0xf6, 0x95, 0xd5, 0x10, 0xa9, 0x72, 0x28, 0x23, 0xa9, 0x6a, 0xa2, 0xcf, 0x7d,
+		0xe3, 0x18, 0x95, 0xda, 0xbc, 0x6f, 0xe9, 0xd8, 0xef, 0x49, 0x3f, 0xd3, 0xef, 0x1f, 0xe1, 0x50,
+		0xe8, 0x8a, 0xc0, 0xce, 0xcc, 0xb7, 0x5e, 0x0e, 0x8b, 0x95, 0x80, 0xfd, 0x58, 0x2a, 0x9b, 0xc8,
+		0xb4, 0x17, 0x04, 0x46, 0x74, 0xd4, 0x68, 0x91, 0x33, 0xc8, 0x31, 0x15, 0x84, 0x16, 0x35, 0x03,
+		0x64, 0x6d, 0xa9, 0x4e, 0x20, 0xeb, 0xa9, 0x3f, 0x21, 0x5e, 0x9b, 0x09, 0xc3, 0x45, 0xf8, 0x7c,
+		0x59, 0x62, 0x29, 0x9a, 0x5c, 0xcf, 0xb4, 0x27, 0x5e, 0x13, 0xea, 0xb3, 0xef, 0xd9, 0x01, 0x2a,
+		0x65, 0x5f, 0x14, 0xf4, 0xbf, 0x28, 0x89, 0x3d, 0xdd, 0x9d, 0x52, 0xbd, 0x9e, 0x5b, 0x3b, 0xd2,
+		0xc2, 0x81, 0x35, 0xb6, 0xac, 0xdd, 0x27, 0xc3, 0x7b, 0x01, 0x5a, 0x6d, 0x4c, 0x5e, 0x2c, 0x30,
+		0xcb, 0x3a, 0xfa, 0xc1, 0xd7, 0x31, 0x67, 0x3e, 0x08, 0x6a, 0xe8, 0x8c, 0x75, 0xac, 0x1a, 0x6a,
+		0x52, 0xf7, 0x51, 0xcd, 0x85, 0x3f, 0x3c, 0xa7, 0xea, 0xbc, 0xd7, 0x18, 0x9e, 0x27, 0x73, 0xe6,
+		0x2b, 0x58, 0xb6, 0xd2, 0x29, 0x68, 0xd5, 0x8f, 0x00, 0x4d, 0x55, 0xf6, 0x61, 0x5a, 0xcc, 0x51,
+		0xa6, 0x5e, 0x85, 0xcb, 0x0b, 0xfd, 0x06, 0xca, 0xf5, 0xbf, 0x0d, 0x13, 0x74, 0x78, 0x6d, 0x9e,
+		0x20, 0x11, 0x84, 0x3e, 0x78, 0x17, 0x04, 0x4f, 0x64, 0x2c, 0x3b, 0x3e, 0x93, 0x7b, 0x58, 0x33,
+		0x07, 0x52, 0xf7, 0x60, 0x6a, 0xa8, 0x3b, 0x19, 0x27, 0x7a, 0x93, 0xc5, 0x53, 0xad, 0xec, 0xf6,
+		0xc8, 0x94, 0xee, 0x92, 0xea, 0xee, 0x7e, 0xea, 0xb9, 0x5f, 0xac, 0x59, 0x5d, 0x2e, 0x78, 0x53,
+		0x72, 0x81, 0x92, 0xdd, 0x1c, 0x63, 0xbe, 0x02, 0xeb, 0xa8, 0x1b, 0x2a, 0x6e, 0x72, 0xe3, 0x2d,
+		0x84, 0x0d, 0x8a, 0x22, 0xf6, 0xba, 0xab, 0x04, 0x8e, 0x04, 0x24, 0xdb, 0xcc, 0xe2, 0x69, 0xeb,
+		0x4e, 0xfa, 0x6b, 0x5b, 0xc8, 0xc0, 0xd9, 0x25, 0xcb, 0x40, 0x8d, 0x4b, 0x8e, 0xa0, 0xd4, 0x72,
+		0x98, 0x36, 0x46, 0x3b, 0x4f, 0x5f, 0x96, 0x84, 0x03, 0x28, 0x86, 0x4d, 0xa1, 0x8a, 0xd7, 0xb2,
+		0x5b, 0x27, 0x01, 0x80, 0x62, 0x49, 0x56, 0xb9, 0xa0, 0xa1, 0xe3, 0x6e, 0x22, 0x2a, 0x5d, 0x03,
+		0x86, 0x40, 0x36, 0x22, 0x5e, 0xd2, 0xe5, 0xc0, 0x6b, 0xfa, 0xac, 0x80, 0x4e, 0x09, 0x99, 0xbc,
+		0x2f, 0x9b, 0xcc, 0xf3, 0x4e, 0xf7, 0x99, 0x98, 0x11, 0x6e, 0x6f, 0x62, 0x22, 0x6b, 0x92, 0x95,
+		0x3b, 0xc3, 0xd2, 0x8e, 0x0f, 0x07, 0xc2, 0x51, 0x5c, 0x4d, 0xb2, 0x6e, 0xc0, 0x27, 0x73, 0xcd,
+		0x57, 0xb7, 0xf0, 0xe9, 0x2e, 0xc8, 0xe2, 0x0c, 0xd1, 0xb5, 0x0f, 0xff, 0xf9, 0xec, 0x38, 0xba,
+		0x97, 0xd6, 0x94, 0x9b, 0xd1, 0x79, 0xb6, 0x6a, 0x01, 0x17, 0xe4, 0x7e, 0xa6, 0xd5, 0x86, 0x19,
+		0xae, 0xf3, 0xf0, 0x62, 0x73, 0xc0, 0xf0, 0x0a, 0x7a, 0x96, 0x93, 0x72, 0x89, 0x7e, 0x25, 0x57,
+		0xf8, 0xf7, 0xd5, 0x1e, 0xe5, 0xac, 0xd6, 0x38, 0x4f, 0xe8, 0x81, 0xd1, 0x53, 0x41, 0x07, 0x2d,
+		0x58, 0x34, 0x1c, 0xef, 0x74, 0x2e, 0x61, 0xca, 0xd3, 0xeb, 0xd6, 0x93, 0x0a, 0xf2, 0xf2, 0x86,
+		0x9c, 0xe3, 0x7a, 0x52, 0xf5, 0x42, 0xf1, 0x8b, 0x10, 0xf2, 0x25, 0x68, 0x7e, 0x61, 0xb1, 0x19,
+		0xcf, 0x8f, 0x5a, 0x53, 0xb7, 0x68, 0x4f, 0x1a, 0x71, 0xe9, 0x83, 0x91, 0x3a, 0x78, 0x0f, 0xf7,
+		0xd4, 0x74, 0xf5, 0x06, 0xd2, 0x88, 0xb0, 0x06, 0xe5, 0xc0, 0xfb, 0xb3, 0x91, 0xad, 0xc0, 0x84,
+		0x31, 0xf2, 0x3a, 0xcf, 0x63, 0xe6, 0x4a, 0xd3, 0x78, 0xbe, 0xde, 0x73, 0x3e, 0x02, 0x8e, 0xb8,
+		0x3a, 0xf6, 0x55, 0xa7, 0xf8, 0x5a, 0xb5, 0x0e, 0x0c, 0xc5, 0xe5, 0x66, 0xd5, 0xd2, 0x18, 0xf3,
+		0xef, 0xa5, 0xc9, 0x68, 0x69, 0xe0, 0xcd, 0x00, 0x33, 0x99, 0x6e, 0xea, 0xcb, 0x06, 0x7a, 0xe1,
+		0xe1, 0x19, 0x0b, 0xe7, 0x08, 0xcd, 0x09, 0x1b, 0x85, 0xec, 0xc4, 0xd4, 0x75, 0xf0, 0xd6, 0xfb,
+		0x84, 0x95, 0x07, 0x44, 0xca, 0xa5, 0x2a, 0x6c, 0xc2, 0x00, 0x58, 0x08, 0x87, 0x9e, 0x0a, 0xd4,
+		0x06, 0xe2, 0x91, 0x5f, 0xb7, 0x1b, 0x11, 0xfa, 0x85, 0xfc, 0x7c, 0xf2, 0x0f, 0x6e, 0x3c, 0x8a,
+		0xe1, 0x0f, 0xa0, 0x33, 0x84, 0xce, 0x81, 0x4d, 0x32, 0x4d, 0xeb, 0x41, 0xcf, 0x5a, 0x05, 0x60,
+		0x47, 0x6c, 0x2a, 0xc4, 0x17, 0xd5, 0x16, 0x3a, 0xe4, 0xe7, 0xab, 0x84, 0x94, 0x22, 0xff, 0x56,
+		0xb0, 0x0c, 0x92, 0x6c, 0x19, 0x11, 0x4c, 0xb3, 0xed, 0x58, 0x48, 0x84, 0x2a, 0xe2, 0x19, 0x2a,
+		0xe1, 0xc0, 0x56, 0x82, 0x3c, 0x83, 0xb4, 0x58, 0x2d, 0xf0, 0xb5, 0x1e, 0x76, 0x85, 0x51, 0xc2,
+		0xe4, 0x95, 0x27, 0x96, 0xd1, 0x90, 0xc3, 0x17, 0x75, 0xa1, 0xbb, 0x46, 0x5f, 0xa6, 0xf2, 0xef,
+		0x71, 0x56, 0x92, 0xc5, 0x8a, 0x85, 0x52, 0xe4, 0x63, 0x21, 0x6f, 0x55, 0x85, 0x2b, 0x6b, 0x0d,
+		0xc9, 0x92, 0x77, 0x67, 0xe3, 0xff, 0x2a, 0x2b, 0x90, 0x01, 0x3d, 0x74, 0x63, 0x04, 0x61, 0x3c,
+		0x8e, 0xf8, 0xfc, 0x04, 0xdd, 0x21, 0x85, 0x92, 0x1e, 0x4d, 0x51, 0x8d, 0xb5, 0x6b, 0xf1, 0xda,
+		0x96, 0xf5, 0x8e, 0x3c, 0x38, 0x5a, 0xac, 0x9b, 0xba, 0x0c, 0x84, 0x5d, 0x50, 0x12, 0xc7, 0xc5,
+		0x7a, 0xcb, 0xb1, 0xfa, 0x16, 0x93, 0xdf, 0x98, 0xda, 0x3f, 0x49, 0xa3, 0x94, 0x78, 0x70, 0xc7,
+		0x0b, 0xb6, 0x91, 0xa6, 0x16, 0x2e, 0xcf, 0xfd, 0x51, 0x6a, 0x5b, 0xad, 0x7a, 0xdd, 0xa9, 0x48,
+		0x48, 0xac, 0xd6, 0x45, 0xbc, 0x23, 0x31, 0x1d, 0x86, 0x54, 0x8a, 0x7f, 0x04, 0x97, 0x71, 0x9e,
+		0xbc, 0x2e, 0x6b, 0xd9, 0x33, 0xc8, 0x20, 0xc9, 0xe0, 0x25, 0x86, 0x59, 0x15, 0xcf, 0x63, 0xe5,
+		0x99, 0xf1, 0x24, 0xf1, 0xba, 0xc4, 0x15, 0x02, 0xe2, 0xdb, 0xfe, 0x4a, 0xf8, 0x3b, 0x91, 0x13,
+		0x8d, 0x03, 0x81, 0x9f, 0xb3, 0x3f, 0x04, 0x03, 0x58, 0xc0, 0xef, 0x27, 0x82, 0x14, 0xd2, 0x7f,
+		0x93, 0x70, 0xb7, 0xb2, 0x02, 0x21, 0xb3, 0x07, 0x7f, 0x1c, 0xef, 0x88, 0xee, 0x29, 0x7a, 0x0b,
+		0x3d, 0x75, 0x5a, 0x93, 0xfe, 0x7f, 0x14, 0xf7, 0x4e, 0x4b, 0x7f, 0x21, 0x02, 0xad, 0xf9, 0x43,
+		0x29, 0x1a, 0xe8, 0x1b, 0xf5, 0x32, 0xb2, 0x96, 0xe6, 0xe8, 0x96, 0x20, 0x9b, 0x96, 0x8e, 0x7b,
+		0xfe, 0xd8, 0xc9, 0x9c, 0x65, 0x16, 0xd6, 0x68, 0x95, 0xf8, 0x22, 0xe2, 0xae, 0x84, 0x03, 0xfd,
+		0x87, 0xa2, 0x72, 0x79, 0x74, 0x95, 0xfa, 0xe1, 0xfe, 0xd0, 0x4e, 0x3d, 0x39, 0x2e, 0x67, 0x55,
+		0x71, 0x6c, 0x89, 0x33, 0x49, 0x0c, 0x1b, 0x46, 0x92, 0x31, 0x6f, 0xa6, 0xf0, 0x09, 0xbd, 0x2d,
+		0xe2, 0xca, 0xda, 0x18, 0x33, 0xce, 0x67, 0x37, 0xfd, 0x6f, 0xcb, 0x9d, 0xbd, 0x42, 0xbc, 0xb2,
+		0x9c, 0x28, 0xcd, 0x65, 0x3c, 0x61, 0xbc, 0xde, 0x9d, 0xe1, 0x2a, 0x3e, 0xbf, 0xee, 0x3c, 0xcb,
+		0xb1, 0x50, 0xa9, 0x2c, 0xbe, 0xb5, 0x43, 0xd0, 0xec, 0x29, 0xf9, 0x16, 0x6f, 0x31, 0xd9, 0x9b,
+		0x92, 0xb1, 0x32, 0xae, 0x0f, 0xb6, 0x9d, 0x0e, 0x25, 0x7f, 0x89, 0x1f, 0x1d, 0x01, 0x68, 0xab,
+		0x3d, 0xd1, 0x74, 0x5b, 0x4c, 0x38, 0x7f, 0x3d, 0x33, 0xa5, 0xa2, 0x9f, 0xda, 0x84, 0xa5, 0x82,
+		0x2d, 0x16, 0x66, 0x46, 0x08, 0x30, 0x14, 0x48, 0x5e, 0xca, 0xe3, 0xf4, 0x8c, 0xcb, 0x32, 0xc6,
+		0xf1, 0x43, 0x62, 0xc6, 0xef, 0x16, 0xfa, 0x43, 0xae, 0x9c, 0x53, 0xe3, 0x49, 0x45, 0x80, 0xfd,
+		0x1d, 0x8c, 0xa9, 0x6d, 0x77, 0x76, 0xaa, 0x40, 0xc4, 0x4e, 0x7b, 0x78, 0x6b, 0xe0, 0x1d, 0xce,
+		0x56, 0x3d, 0xf0, 0x11, 0xfe, 0x4f, 0x6a, 0x6d, 0x0f, 0x4f, 0x90, 0x38, 0x92, 0x17, 0xfa, 0x56,
+		0x12, 0xa6, 0xa1, 0x0a, 0xea, 0x2f, 0x50, 0xf9, 0x60, 0x66, 0x6c, 0x7d, 0x5a, 0x08, 0x8e, 0x3c,
+		0xf3, 0xf0, 0x33, 0x02, 0x11, 0x02, 0xfe, 0x4c, 0x56, 0x2b, 0x9f, 0x0c, 0xbd, 0x65, 0x8a, 0x83,
+		0xde, 0x7c, 0x05, 0x26, 0x93, 0x19, 0xcc, 0xf3, 0x71, 0x0e, 0xad, 0x2f, 0xb3, 0xc9, 0x38, 0x50,
+		0x64, 0xd5, 0x4c, 0x60, 0x5f, 0x02, 0x13, 0x34, 0xc9, 0x75, 0xc4, 0x60, 0xab, 0x2e, 0x17, 0x7d
+};
+
+static const uint8_t AES_CBC_ciphertext_2048B[] = {
+		0x8b, 0x55, 0xbd, 0xfd, 0x2b, 0x35, 0x76, 0x5c, 0xd1, 0x90, 0xd7, 0x6a, 0x63, 0x1e, 0x39, 0x71,
+		0x0d, 0x5c, 0xd8, 0x03, 0x00, 0x75, 0xf1, 0x07, 0x03, 0x8d, 0x76, 0xeb, 0x3b, 0x00, 0x1e, 0x33,
+		0x88, 0xfc, 0x8f, 0x08, 0x4d, 0x33, 0xf1, 0x3c, 0xee, 0xd0, 0x5d, 0x19, 0x8b, 0x3c, 0x50, 0x86,
+		0xfd, 0x8d, 0x58, 0x21, 0xb4, 0xae, 0x0f, 0x81, 0xe9, 0x9f, 0xc9, 0xc0, 0x90, 0xf7, 0x04, 0x6f,
+		0x39, 0x1d, 0x8a, 0x3f, 0x8d, 0x32, 0x23, 0xb5, 0x1f, 0xcc, 0x8a, 0x12, 0x2d, 0x46, 0x82, 0x5e,
+		0x6a, 0x34, 0x8c, 0xb1, 0x93, 0x70, 0x3b, 0xde, 0x55, 0xaf, 0x16, 0x35, 0x99, 0x84, 0xd5, 0x88,
+		0xc9, 0x54, 0xb1, 0xb2, 0xd3, 0xeb, 0x9e, 0x55, 0x9a, 0xa9, 0xa7, 0xf5, 0xda, 0x29, 0xcf, 0xe1,
+		0x98, 0x64, 0x45, 0x77, 0xf2, 0x12, 0x69, 0x8f, 0x78, 0xd8, 0x82, 0x41, 0xb2, 0x9f, 0xe2, 0x1c,
+		0x63, 0x9b, 0x24, 0x81, 0x67, 0x95, 0xa2, 0xff, 0x26, 0x9d, 0x65, 0x48, 0x61, 0x30, 0x66, 0x41,
+		0x68, 0x84, 0xbb, 0x59, 0x14, 0x8e, 0x9a, 0x62, 0xb6, 0xca, 0xda, 0xbe, 0x7c, 0x41, 0x52, 0x6e,
+		0x1b, 0x86, 0xbf, 0x08, 0xeb, 0x37, 0x84, 0x60, 0xe4, 0xc4, 0x1e, 0xa8, 0x4c, 0x84, 0x60, 0x2f,
+		0x70, 0x90, 0xf2, 0x26, 0xe7, 0x65, 0x0c, 0xc4, 0x58, 0x36, 0x8e, 0x4d, 0xdf, 0xff, 0x9a, 0x39,
+		0x93, 0x01, 0xcf, 0x6f, 0x6d, 0xde, 0xef, 0x79, 0xb0, 0xce, 0xe2, 0x98, 0xdb, 0x85, 0x8d, 0x62,
+		0x9d, 0xb9, 0x63, 0xfd, 0xf0, 0x35, 0xb5, 0xa9, 0x1b, 0xf9, 0xe5, 0xd4, 0x2e, 0x22, 0x2d, 0xcc,
+		0x42, 0xbf, 0x0e, 0x51, 0xf7, 0x15, 0x07, 0x32, 0x75, 0x5b, 0x74, 0xbb, 0x00, 0xef, 0xd4, 0x66,
+		0x8b, 0xad, 0x71, 0x53, 0x94, 0xd7, 0x7d, 0x2c, 0x40, 0x3e, 0x69, 0xa0, 0x4c, 0x86, 0x5e, 0x06,
+		0xed, 0xdf, 0x22, 0xe2, 0x24, 0x25, 0x4e, 0x9b, 0x5f, 0x49, 0x74, 0xba, 0xed, 0xb1, 0xa6, 0xeb,
+		0xae, 0x3f, 0xc6, 0x9e, 0x0b, 0x29, 0x28, 0x9a, 0xb6, 0xb2, 0x74, 0x58, 0xec, 0xa6, 0x4a, 0xed,
+		0xe5, 0x10, 0x00, 0x85, 0xe1, 0x63, 0x41, 0x61, 0x30, 0x7c, 0x97, 0xcf, 0x75, 0xcf, 0xb6, 0xf3,
+		0xf7, 0xda, 0x35, 0x3f, 0x85, 0x8c, 0x64, 0xca, 0xb7, 0xea, 0x7f, 0xe4, 0xa3, 0x4d, 0x30, 0x84,
+		0x8c, 0x9c, 0x80, 0x5a, 0x50, 0xa5, 0x64, 0xae, 0x26, 0xd3, 0xb5, 0x01, 0x73, 0x36, 0x8a, 0x92,
+		0x49, 0xc4, 0x1a, 0x94, 0x81, 0x9d, 0xf5, 0x6c, 0x50, 0xe1, 0x58, 0x0b, 0x75, 0xdd, 0x6b, 0x6a,
+		0xca, 0x69, 0xea, 0xc3, 0x33, 0x90, 0x9f, 0x3b, 0x65, 0x5d, 0x5e, 0xee, 0x31, 0xb7, 0x32, 0xfd,
+		0x56, 0x83, 0xb6, 0xfb, 0xa8, 0x04, 0xfc, 0x1e, 0x11, 0xfb, 0x02, 0x23, 0x53, 0x49, 0x45, 0xb1,
+		0x07, 0xfc, 0xba, 0xe7, 0x5f, 0x5d, 0x2d, 0x7f, 0x9e, 0x46, 0xba, 0xe9, 0xb0, 0xdb, 0x32, 0x04,
+		0xa4, 0xa7, 0x98, 0xab, 0x91, 0xcd, 0x02, 0x05, 0xf5, 0x74, 0x31, 0x98, 0x83, 0x3d, 0x33, 0x11,
+		0x0e, 0xe3, 0x8d, 0xa8, 0xc9, 0x0e, 0xf3, 0xb9, 0x47, 0x67, 0xe9, 0x79, 0x2b, 0x34, 0xcd, 0x9b,
+		0x45, 0x75, 0x29, 0xf0, 0xbf, 0xcc, 0xda, 0x3a, 0x91, 0xb2, 0x15, 0x27, 0x7a, 0xe5, 0xf5, 0x6a,
+		0x5e, 0xbe, 0x2c, 0x98, 0xe8, 0x40, 0x96, 0x4f, 0x8a, 0x09, 0xfd, 0xf6, 0xb2, 0xe7, 0x45, 0xb6,
+		0x08, 0xc1, 0x69, 0xe1, 0xb3, 0xc4, 0x24, 0x34, 0x07, 0x85, 0xd5, 0xa9, 0x78, 0xca, 0xfa, 0x4b,
+		0x01, 0x19, 0x4d, 0x95, 0xdc, 0xa5, 0xc1, 0x9c, 0xec, 0x27, 0x5b, 0xa6, 0x54, 0x25, 0xbd, 0xc8,
+		0x0a, 0xb7, 0x11, 0xfb, 0x4e, 0xeb, 0x65, 0x2e, 0xe1, 0x08, 0x9c, 0x3a, 0x45, 0x44, 0x33, 0xef,
+		0x0d, 0xb9, 0xff, 0x3e, 0x68, 0x9c, 0x61, 0x2b, 0x11, 0xb8, 0x5c, 0x47, 0x0f, 0x94, 0xf2, 0xf8,
+		0x0b, 0xbb, 0x99, 0x18, 0x85, 0xa3, 0xba, 0x44, 0xf3, 0x79, 0xb3, 0x63, 0x2c, 0x1f, 0x2a, 0x35,
+		0x3b, 0x23, 0x98, 0xab, 0xf4, 0x16, 0x36, 0xf8, 0xde, 0x86, 0xa4, 0xd4, 0x75, 0xff, 0x51, 0xf9,
+		0xeb, 0x42, 0x5f, 0x55, 0xe2, 0xbe, 0xd1, 0x5b, 0xb5, 0x38, 0xeb, 0xb4, 0x4d, 0xec, 0xec, 0x99,
+		0xe1, 0x39, 0x43, 0xaa, 0x64, 0xf7, 0xc9, 0xd8, 0xf2, 0x9a, 0x71, 0x43, 0x39, 0x17, 0xe8, 0xa8,
+		0xa2, 0xe2, 0xa4, 0x2c, 0x18, 0x11, 0x49, 0xdf, 0x18, 0xdd, 0x85, 0x6e, 0x65, 0x96, 0xe2, 0xba,
+		0xa1, 0x0a, 0x2c, 0xca, 0xdc, 0x5f, 0xe4, 0xf4, 0x35, 0x03, 0xb2, 0xa9, 0xda, 0xcf, 0xb7, 0x6d,
+		0x65, 0x82, 0x82, 0x67, 0x9d, 0x0e, 0xf3, 0xe8, 0x85, 0x6c, 0x69, 0xb8, 0x4c, 0xa6, 0xc6, 0x2e,
+		0x40, 0xb5, 0x54, 0x28, 0x95, 0xe4, 0x57, 0xe0, 0x5b, 0xf8, 0xde, 0x59, 0xe0, 0xfd, 0x89, 0x48,
+		0xac, 0x56, 0x13, 0x54, 0xb9, 0x1b, 0xf5, 0x59, 0x97, 0xb6, 0xb3, 0xe8, 0xac, 0x2d, 0xfc, 0xd2,
+		0xea, 0x57, 0x96, 0x57, 0xa8, 0x26, 0x97, 0x2c, 0x01, 0x89, 0x56, 0xea, 0xec, 0x8c, 0x53, 0xd5,
+		0xd7, 0x9e, 0xc9, 0x98, 0x0b, 0xad, 0x03, 0x75, 0xa0, 0x6e, 0x98, 0x8b, 0x97, 0x8d, 0x8d, 0x85,
+		0x7d, 0x74, 0xa7, 0x2d, 0xde, 0x67, 0x0c, 0xcd, 0x54, 0xb8, 0x15, 0x7b, 0xeb, 0xf5, 0x84, 0xb9,
+		0x78, 0xab, 0xd8, 0x68, 0x91, 0x1f, 0x6a, 0xa6, 0x28, 0x22, 0xf7, 0x00, 0x49, 0x00, 0xbe, 0x41,
+		0x71, 0x0a, 0xf5, 0xe7, 0x9f, 0xb4, 0x11, 0x41, 0x3f, 0xcd, 0xa9, 0xa9, 0x01, 0x8b, 0x6a, 0xeb,
+		0x54, 0x4c, 0x58, 0x92, 0x68, 0x02, 0x0e, 0xe9, 0xed, 0x65, 0x4c, 0xfb, 0x95, 0x48, 0x58, 0xa2,
+		0xaa, 0x57, 0x69, 0x13, 0x82, 0x0c, 0x2c, 0x4b, 0x5d, 0x4e, 0x18, 0x30, 0xef, 0x1c, 0xb1, 0x9d,
+		0x05, 0x05, 0x02, 0x1c, 0x97, 0xc9, 0x48, 0xfe, 0x5e, 0x7b, 0x77, 0xa3, 0x1f, 0x2a, 0x81, 0x42,
+		0xf0, 0x4b, 0x85, 0x12, 0x9c, 0x1f, 0x44, 0xb1, 0x14, 0x91, 0x92, 0x65, 0x77, 0xb1, 0x87, 0xa2,
+		0xfc, 0xa4, 0xe7, 0xd2, 0x9b, 0xf2, 0x17, 0xf0, 0x30, 0x1c, 0x8d, 0x33, 0xbc, 0x25, 0x28, 0x48,
+		0xfd, 0x30, 0x79, 0x0a, 0x99, 0x3e, 0xb4, 0x0f, 0x1e, 0xa6, 0x68, 0x76, 0x19, 0x76, 0x29, 0xac,
+		0x5d, 0xb8, 0x1e, 0x42, 0xd6, 0x85, 0x04, 0xbf, 0x64, 0x1c, 0x2d, 0x53, 0xe9, 0x92, 0x78, 0xf8,
+		0xc3, 0xda, 0x96, 0x92, 0x10, 0x6f, 0x45, 0x85, 0xaf, 0x5e, 0xcc, 0xa8, 0xc0, 0xc6, 0x2e, 0x73,
+		0x51, 0x3f, 0x5e, 0xd7, 0x52, 0x33, 0x71, 0x12, 0x6d, 0x85, 0xee, 0xea, 0x85, 0xa8, 0x48, 0x2b,
+		0x40, 0x64, 0x6d, 0x28, 0x73, 0x16, 0xd7, 0x82, 0xd9, 0x90, 0xed, 0x1f, 0xa7, 0x5c, 0xb1, 0x5c,
+		0x27, 0xb9, 0x67, 0x8b, 0xb4, 0x17, 0x13, 0x83, 0x5f, 0x09, 0x72, 0x0a, 0xd7, 0xa0, 0xec, 0x81,
+		0x59, 0x19, 0xb9, 0xa6, 0x5a, 0x37, 0x34, 0x14, 0x47, 0xf6, 0xe7, 0x6c, 0xd2, 0x09, 0x10, 0xe7,
+		0xdd, 0xbb, 0x02, 0xd1, 0x28, 0xfa, 0x01, 0x2c, 0x93, 0x64, 0x2e, 0x1b, 0x4c, 0x02, 0x52, 0xcb,
+		0x07, 0xa1, 0xb6, 0x46, 0x02, 0x80, 0xd9, 0x8f, 0x5c, 0x62, 0xbe, 0x78, 0x9e, 0x75, 0xc4, 0x97,
+		0x91, 0x39, 0x12, 0x65, 0xb9, 0x3b, 0xc2, 0xd1, 0xaf, 0xf2, 0x1f, 0x4e, 0x4d, 0xd1, 0xf0, 0x9f,
+		0xb7, 0x12, 0xfd, 0xe8, 0x75, 0x18, 0xc0, 0x9d, 0x8c, 0x70, 0xff, 0x77, 0x05, 0xb6, 0x1a, 0x1f,
+		0x96, 0x48, 0xf6, 0xfe, 0xd5, 0x5d, 0x98, 0xa5, 0x72, 0x1c, 0x84, 0x76, 0x3e, 0xb8, 0x87, 0x37,
+		0xdd, 0xd4, 0x3a, 0x45, 0xdd, 0x09, 0xd8, 0xe7, 0x09, 0x2f, 0x3e, 0x33, 0x9e, 0x7b, 0x8c, 0xe4,
+		0x85, 0x12, 0x4e, 0xf8, 0x06, 0xb7, 0xb1, 0x85, 0x24, 0x96, 0xd8, 0xfe, 0x87, 0x92, 0x81, 0xb1,
+		0xa3, 0x38, 0xb9, 0x56, 0xe1, 0xf6, 0x36, 0x41, 0xbb, 0xd6, 0x56, 0x69, 0x94, 0x57, 0xb3, 0xa4,
+		0xca, 0xa4, 0xe1, 0x02, 0x3b, 0x96, 0x71, 0xe0, 0xb2, 0x2f, 0x85, 0x48, 0x1b, 0x4a, 0x41, 0x80,
+		0x4b, 0x9c, 0xe0, 0xc9, 0x39, 0xb8, 0xb1, 0xca, 0x64, 0x77, 0x46, 0x58, 0xe6, 0x84, 0xd5, 0x2b,
+		0x65, 0xce, 0xe9, 0x09, 0xa3, 0xaa, 0xfb, 0x83, 0xa9, 0x28, 0x68, 0xfd, 0xcd, 0xfd, 0x76, 0x83,
+		0xe1, 0x20, 0x22, 0x77, 0x3a, 0xa3, 0xb2, 0x93, 0x14, 0x91, 0xfc, 0xe2, 0x17, 0x63, 0x2b, 0xa6,
+		0x29, 0x38, 0x7b, 0x9b, 0x8b, 0x15, 0x77, 0xd6, 0xaa, 0x92, 0x51, 0x53, 0x50, 0xff, 0xa0, 0x35,
+		0xa0, 0x59, 0x7d, 0xf0, 0x11, 0x23, 0x49, 0xdf, 0x5a, 0x21, 0xc2, 0xfe, 0x35, 0xa0, 0x1d, 0xe2,
+		0xae, 0xa2, 0x8a, 0x61, 0x5b, 0xf7, 0xf1, 0x1c, 0x1c, 0xec, 0xc4, 0xf6, 0xdc, 0xaa, 0xc8, 0xc2,
+		0xe5, 0xa1, 0x2e, 0x14, 0xe5, 0xc6, 0xc9, 0x73, 0x03, 0x78, 0xeb, 0xed, 0xe0, 0x3e, 0xc5, 0xf4,
+		0xf1, 0x50, 0xb2, 0x01, 0x91, 0x96, 0xf5, 0xbb, 0xe1, 0x32, 0xcd, 0xa8, 0x66, 0xbf, 0x73, 0x85,
+		0x94, 0xd6, 0x7e, 0x68, 0xc5, 0xe4, 0xed, 0xd5, 0xe3, 0x67, 0x4c, 0xa5, 0xb3, 0x1f, 0xdf, 0xf8,
+		0xb3, 0x73, 0x5a, 0xac, 0xeb, 0x46, 0x16, 0x24, 0xab, 0xca, 0xa4, 0xdd, 0x87, 0x0e, 0x24, 0x83,
+		0x32, 0x04, 0x4c, 0xd8, 0xda, 0x7d, 0xdc, 0xe3, 0x01, 0x93, 0xf3, 0xc1, 0x5b, 0xbd, 0xc3, 0x1d,
+		0x40, 0x62, 0xde, 0x94, 0x03, 0x85, 0x91, 0x2a, 0xa0, 0x25, 0x10, 0xd3, 0x32, 0x9f, 0x93, 0x00,
+		0xa7, 0x8a, 0xfa, 0x77, 0x7c, 0xaf, 0x4d, 0xc8, 0x7a, 0xf3, 0x16, 0x2b, 0xba, 0xeb, 0x74, 0x51,
+		0xb8, 0xdd, 0x32, 0xad, 0x68, 0x7d, 0xdd, 0xca, 0x60, 0x98, 0xc9, 0x9b, 0xb6, 0x5d, 0x4d, 0x3a,
+		0x66, 0x8a, 0xbe, 0x05, 0xf9, 0x0c, 0xc5, 0xba, 0x52, 0x82, 0x09, 0x1f, 0x5a, 0x66, 0x89, 0x69,
+		0xa3, 0x5d, 0x93, 0x50, 0x7d, 0x44, 0xc3, 0x2a, 0xb8, 0xab, 0xec, 0xa6, 0x5a, 0xae, 0x4a, 0x6a,
+		0xcd, 0xfd, 0xb6, 0xff, 0x3d, 0x98, 0x05, 0xd9, 0x5b, 0x29, 0xc4, 0x6f, 0xe0, 0x76, 0xe2, 0x3f,
+		0xec, 0xd7, 0xa4, 0x91, 0x63, 0xf5, 0x4e, 0x4b, 0xab, 0x20, 0x8c, 0x3a, 0x41, 0xed, 0x8b, 0x4b,
+		0xb9, 0x01, 0x21, 0xc0, 0x6d, 0xfd, 0x70, 0x5b, 0x20, 0x92, 0x41, 0x89, 0x74, 0xb7, 0xe9, 0x8b,
+		0xfc, 0x6d, 0x17, 0x3f, 0x7f, 0x89, 0x3d, 0x6b, 0x8f, 0xbc, 0xd2, 0x57, 0xe9, 0xc9, 0x6e, 0xa7,
+		0x19, 0x26, 0x18, 0xad, 0xef, 0xb5, 0x87, 0xbf, 0xb8, 0xa8, 0xd6, 0x7d, 0xdd, 0x5f, 0x94, 0x54,
+		0x09, 0x92, 0x2b, 0xf5, 0x04, 0xf7, 0x36, 0x69, 0x8e, 0xf4, 0xdc, 0x1d, 0x6e, 0x55, 0xbb, 0xe9,
+		0x13, 0x05, 0x83, 0x35, 0x9c, 0xed, 0xcf, 0x8c, 0x26, 0x8c, 0x7b, 0xc7, 0x0b, 0xba, 0xfd, 0xe2,
+		0x84, 0x5c, 0x2a, 0x79, 0x43, 0x99, 0xb2, 0xc3, 0x82, 0x87, 0xc8, 0xcd, 0x37, 0x6d, 0xa1, 0x2b,
+		0x39, 0xb2, 0x38, 0x99, 0xd9, 0xfc, 0x02, 0x15, 0x55, 0x21, 0x62, 0x59, 0xeb, 0x00, 0x86, 0x08,
+		0x20, 0xbe, 0x1a, 0x62, 0x4d, 0x7e, 0xdf, 0x68, 0x73, 0x5b, 0x5f, 0xaf, 0x84, 0x96, 0x2e, 0x1f,
+		0x6b, 0x03, 0xc9, 0xa6, 0x75, 0x18, 0xe9, 0xd4, 0xbd, 0xc8, 0xec, 0x9a, 0x5a, 0xb3, 0x99, 0xab,
+		0x5f, 0x7c, 0x08, 0x7f, 0x69, 0x4d, 0x52, 0xa2, 0x30, 0x17, 0x3b, 0x16, 0x15, 0x1b, 0x11, 0x62,
+		0x3e, 0x80, 0x4b, 0x85, 0x7c, 0x9c, 0xd1, 0x3a, 0x13, 0x01, 0x5e, 0x45, 0xf1, 0xc8, 0x5f, 0xcd,
+		0x0e, 0x21, 0xf5, 0x82, 0xd4, 0x7b, 0x5c, 0x45, 0x27, 0x6b, 0xef, 0xfe, 0xb8, 0xc0, 0x6f, 0xdc,
+		0x60, 0x7b, 0xe4, 0xd5, 0x75, 0x71, 0xe6, 0xe8, 0x7d, 0x6b, 0x6d, 0x80, 0xaf, 0x76, 0x41, 0x58,
+		0xb7, 0xac, 0xb7, 0x13, 0x2f, 0x81, 0xcc, 0xf9, 0x19, 0x97, 0xe8, 0xee, 0x40, 0x91, 0xfc, 0x89,
+		0x13, 0x1e, 0x67, 0x9a, 0xdb, 0x8f, 0x8f, 0xc7, 0x4a, 0xc9, 0xaf, 0x2f, 0x67, 0x01, 0x3c, 0xb8,
+		0xa8, 0x3e, 0x78, 0x93, 0x1b, 0xdf, 0xbb, 0x34, 0x0b, 0x1a, 0xfa, 0xc2, 0x2d, 0xc5, 0x1c, 0xec,
+		0x97, 0x4f, 0x48, 0x41, 0x15, 0x0e, 0x75, 0xed, 0x66, 0x8c, 0x17, 0x7f, 0xb1, 0x48, 0x13, 0xc1,
+		0xfb, 0x60, 0x06, 0xf9, 0x72, 0x41, 0x3e, 0xcf, 0x6e, 0xb6, 0xc8, 0xeb, 0x4b, 0x5a, 0xd2, 0x0c,
+		0x28, 0xda, 0x02, 0x7a, 0x46, 0x21, 0x42, 0xb5, 0x34, 0xda, 0xcb, 0x5e, 0xbd, 0x66, 0x5c, 0xca,
+		0xff, 0x52, 0x43, 0x89, 0xf9, 0x10, 0x9a, 0x9e, 0x9b, 0xe3, 0xb0, 0x51, 0xe9, 0xf3, 0x0a, 0x35,
+		0x77, 0x54, 0xcc, 0xac, 0xa6, 0xf1, 0x2e, 0x36, 0x89, 0xac, 0xc5, 0xc6, 0x62, 0x5a, 0xc0, 0x6d,
+		0xc4, 0xe1, 0xf7, 0x64, 0x30, 0xff, 0x11, 0x40, 0x13, 0x89, 0xd8, 0xd7, 0x73, 0x3f, 0x93, 0x08,
+		0x68, 0xab, 0x66, 0x09, 0x1a, 0xea, 0x78, 0xc9, 0x52, 0xf2, 0xfd, 0x93, 0x1b, 0x94, 0xbe, 0x5c,
+		0xe5, 0x00, 0x6e, 0x00, 0xb9, 0xea, 0x27, 0xaa, 0xb3, 0xee, 0xe3, 0xc8, 0x6a, 0xb0, 0xc1, 0x8e,
+		0x9b, 0x54, 0x40, 0x10, 0x96, 0x06, 0xe8, 0xb3, 0xf5, 0x55, 0x77, 0xd7, 0x5c, 0x94, 0xc1, 0x74,
+		0xf3, 0x07, 0x64, 0xac, 0x1c, 0xde, 0xc7, 0x22, 0xb0, 0xbf, 0x2a, 0x5a, 0xc0, 0x8f, 0x8a, 0x83,
+		0x50, 0xc2, 0x5e, 0x97, 0xa0, 0xbe, 0x49, 0x7e, 0x47, 0xaf, 0xa7, 0x20, 0x02, 0x35, 0xa4, 0x57,
+		0xd9, 0x26, 0x63, 0xdb, 0xf1, 0x34, 0x42, 0x89, 0x36, 0xd1, 0x77, 0x6f, 0xb1, 0xea, 0x79, 0x7e,
+		0x95, 0x10, 0x5a, 0xee, 0xa3, 0xae, 0x6f, 0xba, 0xa9, 0xef, 0x5a, 0x7e, 0x34, 0x03, 0x04, 0x07,
+		0x92, 0xd6, 0x07, 0x79, 0xaa, 0x14, 0x90, 0x97, 0x05, 0x4d, 0xa6, 0x27, 0x10, 0x5c, 0x25, 0x24,
+		0xcb, 0xcc, 0xf6, 0x77, 0x9e, 0x43, 0x23, 0xd4, 0x98, 0xef, 0x22, 0xa8, 0xad, 0xf2, 0x26, 0x08,
+		0x59, 0x69, 0xa4, 0xc3, 0x97, 0xe0, 0x5c, 0x6f, 0xeb, 0x3d, 0xd4, 0x62, 0x6e, 0x80, 0x61, 0x02,
+		0xf4, 0xfc, 0x94, 0x79, 0xbb, 0x4e, 0x6d, 0xd7, 0x30, 0x5b, 0x10, 0x11, 0x5a, 0x3d, 0xa7, 0x50,
+		0x1d, 0x9a, 0x13, 0x5f, 0x4f, 0xa8, 0xa7, 0xb6, 0x39, 0xc7, 0xea, 0xe6, 0x19, 0x61, 0x69, 0xc7,
+		0x9a, 0x3a, 0xeb, 0x9d, 0xdc, 0xf7, 0x06, 0x37, 0xbd, 0xac, 0xe3, 0x18, 0xff, 0xfe, 0x11, 0xdb,
+		0x67, 0x42, 0xb4, 0xea, 0xa8, 0xbd, 0xb0, 0x76, 0xd2, 0x74, 0x32, 0xc2, 0xa4, 0x9c, 0xe7, 0x60,
+		0xc5, 0x30, 0x9a, 0x57, 0x66, 0xcd, 0x0f, 0x02, 0x4c, 0xea, 0xe9, 0xd3, 0x2a, 0x5c, 0x09, 0xc2,
+		0xff, 0x6a, 0xde, 0x5d, 0xb7, 0xe9, 0x75, 0x6b, 0x29, 0x94, 0xd6, 0xf7, 0xc3, 0xdf, 0xfb, 0x70,
+		0xec, 0xb5, 0x8c, 0xb0, 0x78, 0x7a, 0xee, 0x52, 0x5f, 0x8c, 0xae, 0x85, 0xe5, 0x98, 0xa2, 0xb7,
+		0x7c, 0x02, 0x2a, 0xcc, 0x9e, 0xde, 0x99, 0x5f, 0x84, 0x20, 0xbb, 0xdc, 0xf2, 0xd2, 0x13, 0x46,
+		0x3c, 0xd6, 0x4d, 0xe7, 0x50, 0xef, 0x55, 0xc3, 0x96, 0x9f, 0xec, 0x6c, 0xd8, 0xe2, 0xea, 0xed,
+		0xc7, 0x33, 0xc9, 0xb3, 0x1c, 0x4f, 0x1d, 0x83, 0x1d, 0xe4, 0xdd, 0xb2, 0x24, 0x8f, 0xf9, 0xf5
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_64B_digest[] = {
+		0xc5, 0x6d, 0x4f, 0x29, 0xf4, 0xd2, 0xcc, 0x87,
+		0x3c, 0x81, 0x02, 0x6d, 0x38, 0x7a, 0x67, 0x3e,
+		0x95, 0x9c, 0x5c, 0x8f, 0xda, 0x5c, 0x06, 0xe0,
+		0x65, 0xf1, 0x6c, 0x51, 0x52, 0x49, 0x3e, 0x5f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_128B_digest[] = {
+		0x76, 0x64, 0x2d, 0x69, 0x71, 0x5d, 0x6a, 0xd8,
+		0x9f, 0x74, 0x11, 0x2f, 0x58, 0xe0, 0x4a, 0x2f,
+		0x6c, 0x88, 0x5e, 0x4d, 0x9c, 0x79, 0x83, 0x1c,
+		0x8a, 0x14, 0xd0, 0x07, 0xfb, 0xbf, 0x6c, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_256B_digest[] = {
+		0x05, 0xa7, 0x44, 0xcd, 0x91, 0x8c, 0x95, 0xcf,
+		0x7b, 0x8f, 0xd3, 0x90, 0x86, 0x7e, 0x7b, 0xb9,
+		0x05, 0xd6, 0x6e, 0x7a, 0xc1, 0x7b, 0x26, 0xff,
+		0xd3, 0x4b, 0xe0, 0x22, 0x8b, 0xa8, 0x47, 0x52
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_512B_digest[] = {
+		0x08, 0xb7, 0x29, 0x54, 0x18, 0x7e, 0x97, 0x49,
+		0xc6, 0x7c, 0x9f, 0x94, 0xa5, 0x4f, 0xa2, 0x25,
+		0xd0, 0xe2, 0x30, 0x7b, 0xad, 0x93, 0xc9, 0x12,
+		0x0f, 0xf0, 0xf0, 0x71, 0xc2, 0xf6, 0x53, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_768B_digest[] = {
+		0xe4, 0x3e, 0x73, 0x93, 0x03, 0xaf, 0x6f, 0x9c,
+		0xca, 0x57, 0x3b, 0x4a, 0x6e, 0x83, 0x58, 0xf5,
+		0x66, 0xc2, 0xb4, 0xa7, 0xe0, 0xee, 0x63, 0x6b,
+		0x48, 0xb7, 0x50, 0x45, 0x69, 0xdf, 0x5c, 0x5b
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1024B_digest[] = {
+		0x03, 0xb9, 0x96, 0x26, 0xdc, 0x1c, 0xab, 0xe2,
+		0xf5, 0x70, 0x55, 0x15, 0x67, 0x6e, 0x48, 0x11,
+		0xe7, 0x67, 0xea, 0xfa, 0x5c, 0x6b, 0x28, 0x22,
+		0xc9, 0x0e, 0x67, 0x04, 0xb3, 0x71, 0x7f, 0x88
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1280B_digest[] = {
+		0x01, 0x91, 0xb8, 0x78, 0xd3, 0x21, 0x74, 0xa5,
+		0x1c, 0x8b, 0xd4, 0xd2, 0xc0, 0x49, 0xd7, 0xd2,
+		0x16, 0x46, 0x66, 0x85, 0x50, 0x6d, 0x08, 0xcc,
+		0xc7, 0x0a, 0xa3, 0x71, 0xcc, 0xde, 0xee, 0xdc
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1536B_digest[] = {
+		0xf2, 0xe5, 0xe9, 0x57, 0x53, 0xd7, 0x69, 0x28,
+		0x7b, 0x69, 0xb5, 0x49, 0xa3, 0x31, 0x56, 0x5f,
+		0xa4, 0xe9, 0x87, 0x26, 0x2f, 0xe0, 0x2d, 0xd6,
+		0x08, 0x44, 0x01, 0x71, 0x0c, 0x93, 0x85, 0x84
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1792B_digest[] = {
+		0xf6, 0x57, 0x62, 0x01, 0xbf, 0x2d, 0xea, 0x4a,
+		0xef, 0x43, 0x85, 0x60, 0x18, 0xdf, 0x8b, 0xb4,
+		0x60, 0xc0, 0xfd, 0x2f, 0x90, 0x15, 0xe6, 0x91,
+		0x56, 0x61, 0x68, 0x7f, 0x5e, 0x92, 0xa8, 0xdd
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_2048B_digest[] = {
+		0x81, 0x1a, 0x29, 0xbc, 0x6b, 0x9f, 0xbb, 0xb8,
+		0xef, 0x71, 0x7b, 0x1f, 0x6f, 0xd4, 0x7e, 0x68,
+		0x3a, 0x9c, 0xb9, 0x98, 0x22, 0x81, 0xfa, 0x95,
+		0xee, 0xbc, 0x7f, 0x23, 0x29, 0x88, 0x76, 0xb8
+};
+
+struct crypto_data_params {
+	const char *name;
+	uint16_t length;
+	const char *plaintext;
+	struct crypto_expected_output {
+		const uint8_t *ciphertext;
+		const uint8_t *digest;
+	} expected;
+};
+
+#define MAX_PACKET_SIZE_INDEX	10
+
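+/* Each entry's plaintext points at the last <length> bytes of
+ * plaintext_quote (sizeof() counts the terminating NUL, hence the "- 1"). */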
+struct crypto_data_params aes_cbc_hmac_sha256_output[MAX_PACKET_SIZE_INDEX] = {
+		{ "64B", 64, &plaintext_quote[sizeof(plaintext_quote) - 1 - 64],
+			{ AES_CBC_ciphertext_64B, HMAC_SHA256_ciphertext_64B_digest } },
+		{ "128B", 128, &plaintext_quote[sizeof(plaintext_quote) - 1 - 128],
+			{ AES_CBC_ciphertext_128B, HMAC_SHA256_ciphertext_128B_digest } },
+		{ "256B", 256, &plaintext_quote[sizeof(plaintext_quote) - 1 - 256],
+			{ AES_CBC_ciphertext_256B, HMAC_SHA256_ciphertext_256B_digest } },
+		{ "512B", 512, &plaintext_quote[sizeof(plaintext_quote) - 1 - 512],
+			{ AES_CBC_ciphertext_512B, HMAC_SHA256_ciphertext_512B_digest } },
+		{ "768B", 768, &plaintext_quote[sizeof(plaintext_quote) - 1 - 768],
+			{ AES_CBC_ciphertext_768B, HMAC_SHA256_ciphertext_768B_digest } },
+		{ "1024B", 1024, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1024],
+			{ AES_CBC_ciphertext_1024B, HMAC_SHA256_ciphertext_1024B_digest } },
+		{ "1280B", 1280, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1280],
+			{ AES_CBC_ciphertext_1280B, HMAC_SHA256_ciphertext_1280B_digest } },
+		{ "1536B", 1536, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1536],
+			{ AES_CBC_ciphertext_1536B, HMAC_SHA256_ciphertext_1536B_digest } },
+		{ "1792B", 1792, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1792],
+			{ AES_CBC_ciphertext_1792B, HMAC_SHA256_ciphertext_1792B_digest } },
+		{ "2048B", 2048, &plaintext_quote[sizeof(plaintext_quote) - 1 - 2048],
+			{ AES_CBC_ciphertext_2048B, HMAC_SHA256_ciphertext_2048B_digest } }
+};
+
+static int
+test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
+{
+	uint32_t num_to_submit = 2048;
+	struct rte_mbuf *rx_mbufs[num_to_submit], *tx_mbufs[num_to_submit];
+	uint64_t failed_polls, retries, start_cycles, end_cycles, total_cycles = 0;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, burst_size, num_sent, num_received;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is the kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
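+	/* For AES-128-CBC the key and the IV are both 16 bytes, so the IV
+	 * length macro is reused here for the key length. */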
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure(s) */
+	for (b = 0; b < num_to_submit ; b++) {
+		tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+				(const char *)data_params[0].expected.ciphertext,
+				data_params[0].length, 0);
+		TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
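+		/*
+		 * Buffer layout per mbuf ends up as [IV][payload][digest]:
+		 * the digest is appended after the payload here and the IV
+		 * is prepended below, so the cipher/hash offsets start
+		 * after the IV.
+		 */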
+		ut_params->digest = (uint8_t *)rte_pktmbuf_append(tx_mbufs[b],
+				DIGEST_BYTE_LENGTH_SHA256);
+		TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+		rte_memcpy(ut_params->digest, data_params[0].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+		struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+				ts_params->mbuf_ol_pool, RTE_PKTMBUF_OL_CRYPTO);
+		TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+		struct rte_crypto_op *cop = &ol->op.crypto;
+
+		rte_crypto_op_attach_session(cop, ut_params->sess);
+
+		cop->digest.data = ut_params->digest;
+		cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(tx_mbufs[b], data_params[0].length);
+		cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+		cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b], CIPHER_IV_LENGTH_AES_CBC);
+		cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+		cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+		rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+		cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_cipher.length = data_params[0].length;
+
+		cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_hash.length = data_params[0].length;
+
+		rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+	}
+
+	printf("\nTest to measure the IA cycle cost using the AES128_CBC_SHA256_HMAC algorithm with "
+			"a constant request size of %u.", data_params[0].length);
+	printf("\nThis test will keep retries at 0 and only measure IA cycle cost for each request.");
+	printf("\nDev No\tQP No\tNum Sent\tNum Received\tTx/Rx burst");
+	printf("\tRetries (Device Busy)\tAverage IA cycle cost (assuming 0 retries)");
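+	/* Sweep the Tx/Rx burst size from 2 up to 128, doubling each pass. */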
+	for (b = 2; b <= 128 ; b *= 2) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+		burst_size = b;
+		total_cycles = 0;
+		while (num_sent < num_to_submit) {
+			start_cycles = rte_rdtsc_precise();
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0,
+					&tx_mbufs[num_sent],
+					((num_to_submit-num_sent) < burst_size) ?
+					num_to_submit-num_sent : burst_size);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += (end_cycles - start_cycles);
+			/*
+			 * Wait until requests have been sent.
+			 */
+			rte_delay_ms(1);
+
+			start_cycles = rte_rdtsc_precise();
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += end_cycles - start_cycles;
+		}
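+		/*
+		 * Drain any responses still outstanding. For the AESNI MB
+		 * PMD an empty enqueue is issued first, which appears
+		 * intended to flush operations still held in its pipeline.
+		 */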
+		while (num_received != num_to_submit) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+
+		printf("\n%u\t%u\t%u\t\t%u\t\t%u", dev_num, 0,
+					num_sent, num_received, burst_size);
+		printf("\t\t%"PRIu64, retries);
+		printf("\t\t\t%"PRIu64, total_cycles/num_received);
+	}
+	printf("\n");
+
+	/* Release each mbuf's attached offload chain, then the mbuf itself. */
+	for (b = 0; b < num_to_submit ; b++) {
+		struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+		while (ol != NULL) {
+			struct rte_mbuf_offload *next = ol->next;
+
+			rte_pktmbuf_offload_free(ol);
+			ol = next;
+		}
+		rte_pktmbuf_free(tx_mbufs[b]);
+	}
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(uint16_t dev_num)
+{
+	uint16_t index;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, num_sent, num_received, throughput;
+	uint64_t failed_polls, retries, start_cycles, end_cycles;
+	const uint64_t mhz = rte_get_tsc_hz()/1000000;
+	double mmps;
+	struct rte_mbuf *rx_mbufs[DEFAULT_BURST_SIZE], *tx_mbufs[DEFAULT_BURST_SIZE];
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is the kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	printf("\nThroughput test which will continually attempt to send AES128_CBC_SHA256_HMAC requests "
+		"with a constant burst size of %u while varying payload sizes", DEFAULT_BURST_SIZE);
+	printf("\nDev No\tQP No\tReq Size(B)\tNum Sent\tNum Received\tMrps\tThroughput(Mbps)");
+	printf("\tRetries (Attempted a burst, but the device was busy)");
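+	/* Repeat the measurement once for each payload size in
+	 * aes_cbc_hmac_sha256_output. */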
+	for (index = 0; index < MAX_PACKET_SIZE_INDEX; index++) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+
+		/* Generate Crypto op data structure(s) */
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+					data_params[index].plaintext,
+					data_params[index].length, 0);
+			TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+			ut_params->digest = (uint8_t *)rte_pktmbuf_append(
+				tx_mbufs[b], DIGEST_BYTE_LENGTH_SHA256);
+			TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+			rte_memcpy(ut_params->digest,
+					data_params[index].expected.digest,
+					DIGEST_BYTE_LENGTH_SHA256);
+
+			struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+						ts_params->mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+			TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+			struct rte_crypto_op *cop = &ol->op.crypto;
+
+			rte_crypto_op_attach_session(cop, ut_params->sess);
+
+			cop->digest.data = ut_params->digest;
+			cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+				tx_mbufs[b], data_params[index].length);
+			cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+			cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+					CIPHER_IV_LENGTH_AES_CBC);
+			cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+			cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+			rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+			cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_cipher.length = data_params[index].length;
+
+			cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_hash.length = data_params[index].length;
+
+			rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+		}
+		start_cycles = rte_rdtsc_precise();
+		while (num_sent < DEFAULT_NUM_REQS_TO_SUBMIT) {
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+				((DEFAULT_NUM_REQS_TO_SUBMIT-num_sent) < DEFAULT_BURST_SIZE) ?
+				DEFAULT_NUM_REQS_TO_SUBMIT-num_sent : DEFAULT_BURST_SIZE);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0, rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		while (num_received != DEFAULT_NUM_REQS_TO_SUBMIT) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0,
+						rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		end_cycles = rte_rdtsc_precise();
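+		/*
+		 * mhz holds TSC ticks per microsecond, so requests * mhz
+		 * divided by elapsed ticks gives millions of requests per
+		 * second; multiplying by the payload size in bits yields
+		 * Mbps.
+		 */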
+		mmps = (double)num_received*mhz/(end_cycles - start_cycles);
+		throughput = mmps*data_params[index].length*8;
+		printf("\n%u\t%u\t%u\t\t%u\t%u", dev_num, 0, data_params[index].length, num_sent, num_received);
+		printf("\t%.2f\t%u", mmps, throughput);
+		printf("\t\t%"PRIu64, retries);
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+			while (ol != NULL) {
+				struct rte_mbuf_offload *next = ol->next;
+
+				rte_pktmbuf_offload_free(ol);
+				ol = next;
+			}
+			rte_pktmbuf_free(tx_mbufs[b]);
+		}
+	}
+	printf("\n");
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_encrypt_digest_vary_req_size(void)
+{
+	return test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(testsuite_params.dev_id);
+}
+
+static int
+test_perf_vary_burst_size(void)
+{
+	return test_perf_crypto_qp_vary_burst_size(testsuite_params.dev_id);
+}
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown, test_perf_encrypt_digest_vary_req_size),
+		TEST_CASE_ST(ut_setup, ut_teardown, test_perf_vary_burst_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static int
+perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_QAT_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_perf_cmd = {
+	.command = "cryptodev_aesni_mb_perftest",
+	.callback = perftest_aesni_mb_cryptodev,
+};
+
+static struct test_command cryptodev_qat_perf_cmd = {
+	.command = "cryptodev_qat_perftest",
+	.callback = perftest_qat_cryptodev,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perf_cmd);
+REGISTER_TEST_COMMAND(cryptodev_qat_perf_cmd);
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 388cf11..2d98958 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -4020,7 +4020,7 @@ test_close_bonded_device(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	if (test_params->pkt_eth_hdr != NULL) {
@@ -4029,7 +4029,7 @@ testsuite_teardown(void)
 	}
 
 	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	remove_slaves_and_stop_bonded_device();
 }
 
 static void
@@ -4993,7 +4993,7 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 460539d..713368d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -453,7 +453,7 @@ test_setup(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -467,8 +467,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 /*
@@ -1390,7 +1388,8 @@ static struct unit_test_suite link_bonding_mode4_test_suite  = {
 		TEST_CASE_NAMED("test_mode4_tx_burst", test_mode4_tx_burst_wrapper),
 		TEST_CASE_NAMED("test_mode4_marker", test_mode4_marker_wrapper),
 		TEST_CASE_NAMED("test_mode4_expired", test_mode4_expired_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v3 6/6] l2fwd-crypto: crypto
  2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
                       ` (4 preceding siblings ...)
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
@ 2015-10-30 16:08     ` Declan Doherty
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
  6 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-10-30 16:08 UTC (permalink / raw)
  To: dev

This patch creates a new sample application, based on the l2fwd
application, which performs the specified crypto operations on the IP
payload of the packets being forwarded.
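
An illustrative invocation (option names follow the usage text in
main.c below; the EAL flags and port mask are examples only):

    ./build/l2fwd-crypto -c 0x3 -n 4 -- -p 0x3 --cdev_type AESNI_MB \
            --chain CIPHER_HASH --cipher_algo AES_CBC --cipher_op ENCRYPT \
            --auth_algo SHA1_HMAC --auth_op GENERATE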

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 examples/l2fwd-crypto/Makefile |   50 ++
 examples/l2fwd-crypto/main.c   | 1472 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1522 insertions(+)
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c

diff --git a/examples/l2fwd-crypto/Makefile b/examples/l2fwd-crypto/Makefile
new file mode 100644
index 0000000..e8224ca
--- /dev/null
+++ b/examples/l2fwd-crypto/Makefile
@@ -0,0 +1,50 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = l2fwd-crypto
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
new file mode 100644
index 0000000..9fd8bc5
--- /dev/null
+++ b/examples/l2fwd-crypto/main.c
@@ -0,0 +1,1472 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_ip.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_per_lcore.h>
+#include <rte_prefetch.h>
+#include <rte_random.h>
+#include <rte_ring.h>
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define NB_MBUF   8192
+
+#define MAX_PKT_BURST 32
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 128
+#define RTE_TEST_TX_DESC_DEFAULT 512
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint64_t l2fwd_enabled_port_mask;
+static uint64_t l2fwd_enabled_crypto_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+
+struct pkt_buffer {
+	unsigned len;
+	struct rte_mbuf *buffer[MAX_PKT_BURST];
+};
+
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+enum l2fwd_crypto_xform_chain {
+	L2FWD_CRYPTO_CIPHER_HASH,
+	L2FWD_CRYPTO_HASH_CIPHER
+};
+
+/** l2fwd crypto application command line options */
+struct l2fwd_crypto_options {
+	unsigned portmask;
+	unsigned nb_ports_per_lcore;
+	unsigned refresh_period;
+	unsigned single_lcore:1;
+	unsigned no_stats_printing:1;
+
+	enum rte_cryptodev_type cdev_type;
+	unsigned sessionless:1;
+
+	enum l2fwd_crypto_xform_chain xform_chain;
+
+	struct rte_crypto_xform cipher_xform;
+	uint8_t ckey_data[32];
+
+	struct rte_crypto_key iv_key;
+	uint8_t ivkey_data[16];
+
+	struct rte_crypto_xform auth_xform;
+	uint8_t akey_data[128];
+};
+
+/** l2fwd crypto lcore params */
+struct l2fwd_crypto_params {
+	uint8_t dev_id;
+	uint8_t qp_id;
+
+	unsigned digest_length;
+	unsigned block_size;
+
+	struct rte_crypto_key iv_key;
+	struct rte_cryptodev_session *session;
+};
+
+/** lcore configuration */
+struct lcore_queue_conf {
+	unsigned nb_rx_ports;
+	unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+
+	unsigned nb_crypto_devs;
+	unsigned cryptodev_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct pkt_buffer crypto_pkt_buf[RTE_MAX_ETHPORTS];
+	struct pkt_buffer tx_pkt_buf[RTE_MAX_ETHPORTS];
+} __rte_cache_aligned;
+
+struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static const struct rte_eth_conf port_conf = {
+	.rxmode = {
+		.split_hdr_size = 0,
+		.header_split   = 0, /**< Header Split disabled */
+		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
+		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
+		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
+		.hw_strip_crc   = 0, /**< CRC stripping by hardware disabled */
+	},
+	.txmode = {
+		.mq_mode = ETH_MQ_TX_NONE,
+	},
+};
+
+struct rte_mempool *l2fwd_pktmbuf_pool;
+struct rte_mempool *l2fwd_mbuf_ol_pool;
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+	uint64_t tx;
+	uint64_t rx;
+
+	uint64_t crypto_enqueued;
+	uint64_t crypto_dequeued;
+
+	uint64_t dropped;
+} __rte_cache_aligned;
+
+struct l2fwd_crypto_statistics {
+	uint64_t enqueued;
+	uint64_t dequeued;
+
+	uint64_t errors;
+} __rte_cache_aligned;
+
+struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+struct l2fwd_crypto_statistics crypto_statistics[RTE_MAX_ETHPORTS];
+
+/* A tsc-based timer responsible for triggering statistics printout */
+#define TIMER_MILLISECOND 2000000ULL /* around 1ms at 2 GHz */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+static int64_t timer_period = 10 * TIMER_MILLISECOND * 1000; /* default period is 10 seconds */
+
+uint64_t total_packets_dropped = 0, total_packets_tx = 0, total_packets_rx = 0,
+	total_packets_enqueued = 0, total_packets_dequeued = 0,
+	total_packets_errors = 0;
+
+/* Print out statistics on packets dropped */
+static void
+print_stats(void)
+{
+	unsigned portid;
+	uint64_t cdevid;
+
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+		/* Clear screen and move to top left */
+	printf("%s%s", clr, topLeft);
+
+	printf("\nPort statistics ====================================");
+
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+			continue;
+		printf("\nStatistics for port %u ------------------------------"
+			   "\nPackets sent: %32"PRIu64
+			   "\nPackets received: %28"PRIu64
+			   "\nPackets dropped: %29"PRIu64,
+			   portid,
+			   port_statistics[portid].tx,
+			   port_statistics[portid].rx,
+			   port_statistics[portid].dropped);
+
+		total_packets_dropped += port_statistics[portid].dropped;
+		total_packets_tx += port_statistics[portid].tx;
+		total_packets_rx += port_statistics[portid].rx;
+	}
+	printf("\nCrypto statistics ==================================");
+
+	for (cdevid = 0; cdevid < RTE_CRYPTO_MAX_DEVS; cdevid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_crypto_mask & (1lu << cdevid)) == 0)
+			continue;
+		printf("\nStatistics for cryptodev %"PRIu64" -------------------------"
+			   "\nPackets enqueued: %28"PRIu64
+			   "\nPackets dequeued: %28"PRIu64
+			   "\nPackets errors: %30"PRIu64,
+			   cdevid,
+			   crypto_statistics[cdevid].enqueued,
+			   crypto_statistics[cdevid].dequeued,
+			   crypto_statistics[cdevid].errors);
+
+		total_packets_enqueued += crypto_statistics[cdevid].enqueued;
+		total_packets_dequeued += crypto_statistics[cdevid].dequeued;
+		total_packets_errors += crypto_statistics[cdevid].errors;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %22"PRIu64
+		   "\nTotal packets enqueued: %22"PRIu64
+		   "\nTotal packets dequeued: %22"PRIu64
+		   "\nTotal packets sent: %26"PRIu64
+		   "\nTotal packets dropped: %23"PRIu64
+		   "\nTotal packets crypto errors: %17"PRIu64,
+		   total_packets_rx,
+		   total_packets_enqueued,
+		   total_packets_dequeued,
+		   total_packets_tx,
+		   total_packets_dropped,
+		   total_packets_errors);
+	printf("\n====================================================\n");
+}
+
+
+
+static int
+l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+
+	pkt_buffer = (struct rte_mbuf **)
+			qconf->crypto_pkt_buf[cparams->dev_id].buffer;
+
+	ret = rte_cryptodev_enqueue_burst(cparams->dev_id, cparams->qp_id,
+			pkt_buffer, (uint16_t) n);
+	crypto_statistics[cparams->dev_id].enqueued += ret;
+	if (unlikely(ret < n)) {
+		crypto_statistics[cparams->dev_id].errors += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+static int
+l2fwd_crypto_enqueue(struct rte_mbuf *m, struct l2fwd_crypto_params *cparams)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->crypto_pkt_buf[cparams->dev_id].len;
+	qconf->crypto_pkt_buf[cparams->dev_id].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (len == MAX_PKT_BURST) {
+		l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
+		len = 0;
+	}
+
+	qconf->crypto_pkt_buf[cparams->dev_id].len = len;
+	return 0;
+}
+
+static int
+l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
+		struct rte_mbuf_offload *ol,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct ether_hdr *eth_hdr;
+	struct ipv4_hdr *ip_hdr;
+
+	unsigned ipdata_offset, pad_len, data_len;
+	char *padding;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (eth_hdr->ether_type != rte_cpu_to_be_16(ETHER_TYPE_IPv4))
+		return -1;
+
+	ipdata_offset = sizeof(struct ether_hdr);
+
+	ip_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
+			ipdata_offset);
+
+	ipdata_offset += (ip_hdr->version_ihl & IPV4_HDR_IHL_MASK)
+			* IPV4_IHL_MULTIPLIER;
+
+
+	/* Zero pad data to be crypto'd so it is block aligned */
+	data_len  = rte_pktmbuf_data_len(m) - ipdata_offset;
+	pad_len = data_len % cparams->block_size ? cparams->block_size -
+			(data_len % cparams->block_size) : 0;
+
+	if (pad_len) {
+		padding = rte_pktmbuf_append(m, pad_len);
+		if (unlikely(!padding))
+			return -1;
+
+		data_len += pad_len;
+		memset(padding, 0, pad_len);
+	}
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(&ol->op.crypto, cparams->session);
+
+	/* Append space for digest to end of packet */
+	ol->op.crypto.digest.data = (uint8_t *)rte_pktmbuf_append(m,
+			cparams->digest_length);
+	ol->op.crypto.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
+			rte_pktmbuf_pkt_len(m) - cparams->digest_length);
+	ol->op.crypto.digest.length = cparams->digest_length;
+
+	ol->op.crypto.iv.data = cparams->iv_key.data;
+	ol->op.crypto.iv.phys_addr = cparams->iv_key.phys_addr;
+	ol->op.crypto.iv.length = cparams->iv_key.length;
+
+	ol->op.crypto.data.to_cipher.offset = ipdata_offset;
+	ol->op.crypto.data.to_cipher.length = data_len;
+
+	ol->op.crypto.data.to_hash.offset = ipdata_offset;
+	ol->op.crypto.data.to_hash.length = data_len;
+
+	rte_pktmbuf_offload_attach(m, ol);
+
+	return l2fwd_crypto_enqueue(m, cparams);
+}
+
+
+/* Send the burst of packets on an output interface */
+static int
+l2fwd_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint8_t port)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+	unsigned queueid = 0;
+
+	pkt_buffer = (struct rte_mbuf **)qconf->tx_pkt_buf[port].buffer;
+
+	ret = rte_eth_tx_burst(port, (uint16_t) queueid, pkt_buffer,
+			(uint16_t)n);
+	port_statistics[port].tx += ret;
+	if (unlikely(ret < n)) {
+		port_statistics[port].dropped += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+/* Enqueue packets for TX and prepare them to be sent */
+static int
+l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->tx_pkt_buf[port].len;
+	qconf->tx_pkt_buf[port].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (unlikely(len == MAX_PKT_BURST)) {
+		l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
+		len = 0;
+	}
+
+	qconf->tx_pkt_buf[port].len = len;
+	return 0;
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
+{
+	struct ether_hdr *eth;
+	void *tmp;
+	unsigned dst_port;
+
+	dst_port = l2fwd_dst_ports[portid];
+	eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	/* 02:00:00:00:00:xx */
+	tmp = &eth->d_addr.addr_bytes[0];
+	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+	/* src addr */
+	ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+
+	l2fwd_send_packet(m, (uint8_t) dst_port);
+}
+
+/** Generate random key */
+static void
+generate_random_key(uint8_t *key, unsigned length)
+{
+	unsigned i;
+
+	for (i = 0; i < length; i++)
+		key[i] = rand() & 0xff;
+}
+
+static struct rte_cryptodev_session *
+initialize_crypto_session(struct l2fwd_crypto_options *options,
+		uint8_t cdev_id)
+{
+	struct rte_crypto_xform *first_xform;
+
+	if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
+		first_xform = &options->cipher_xform;
+		first_xform->next = &options->auth_xform;
+	} else {
+		first_xform = &options->auth_xform;
+		first_xform->next = &options->cipher_xform;
+	}
+
+	/* Setup Cipher Parameters */
+	return rte_cryptodev_session_create(cdev_id, first_xform);
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options);
+
+/* main processing loop */
+static void
+l2fwd_main_loop(struct l2fwd_crypto_options *options)
+{
+	struct rte_mbuf *m, *pkts_burst[MAX_PKT_BURST];
+	unsigned lcore_id = rte_lcore_id();
+	uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+	unsigned i, j, portid, nb_rx;
+	struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+	struct l2fwd_crypto_params *cparams;
+	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
+
+	if (qconf->nb_rx_ports == 0) {
+		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+	l2fwd_crypto_options_print(options);
+
+	for (i = 0; i < qconf->nb_rx_ports; i++) {
+
+		portid = qconf->rx_port_list[i];
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+			portid);
+	}
+
+	for (i = 0; i < qconf->nb_crypto_devs; i++) {
+		port_cparams[i].dev_id = qconf->cryptodev_list[i];
+		port_cparams[i].qp_id = 0;
+
+		port_cparams[i].block_size = 64;
+		port_cparams[i].digest_length = 20;
+
+		port_cparams[i].iv_key.data =
+				(uint8_t *)rte_malloc(NULL, 16, 8);
+		port_cparams[i].iv_key.length = 16;
+		port_cparams[i].iv_key.phys_addr = rte_malloc_virt2phy(
+				(void *)port_cparams[i].iv_key.data);
+		generate_random_key(port_cparams[i].iv_key.data,
+				port_cparams[i].iv_key.length);
+
+		port_cparams[i].session = initialize_crypto_session(options,
+				port_cparams[i].dev_id);
+
+		if (port_cparams[i].session == NULL)
+			return;
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u cryptoid=%u\n", lcore_id,
+				port_cparams[i].dev_id);
+	}
+
+	while (1) {
+
+		cur_tsc = rte_rdtsc();
+
+		/*
+		 * TX burst queue drain
+		 */
+		diff_tsc = cur_tsc - prev_tsc;
+		if (unlikely(diff_tsc > drain_tsc)) {
+
+			for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+				if (qconf->tx_pkt_buf[portid].len == 0)
+					continue;
+				l2fwd_send_burst(&lcore_queue_conf[lcore_id],
+						 qconf->tx_pkt_buf[portid].len,
+						 (uint8_t) portid);
+				qconf->tx_pkt_buf[portid].len = 0;
+			}
+
+			/* if timer is enabled */
+			if (timer_period > 0) {
+
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >=
+						(uint64_t)timer_period)) {
+
+					/* do this only on master core */
+					if (lcore_id == rte_get_master_lcore() &&
+							!options->no_stats_printing) {
+						print_stats();
+						/* reset the timer */
+						timer_tsc = 0;
+					}
+				}
+			}
+
+			prev_tsc = cur_tsc;
+		}
+
+		/*
+		 * Read packet from RX queues
+		 */
+		for (i = 0; i < qconf->nb_rx_ports; i++) {
+			struct rte_mbuf_offload *ol;
+
+			portid = qconf->rx_port_list[i];
+
+			cparams = &port_cparams[i];
+
+			nb_rx = rte_eth_rx_burst((uint8_t) portid, 0,
+						 pkts_burst, MAX_PKT_BURST);
+
+			port_statistics[portid].rx += nb_rx;
+
+			/* Enqueue packets from Crypto device*/
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				ol = rte_pktmbuf_offload_alloc(
+						l2fwd_mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				rte_prefetch0((void *)ol);
+				l2fwd_simple_crypto_enqueue(m, ol, cparams);
+			}
+
+			/* Dequeue packets from Crypto device */
+			nb_rx = rte_cryptodev_dequeue_burst(
+					cparams->dev_id, cparams->qp_id,
+					pkts_burst, MAX_PKT_BURST);
+			crypto_statistics[cparams->dev_id].dequeued += nb_rx;
+
+			/* Forward crypto'd packets */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				rte_pktmbuf_offload_free(m->offload_ops);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				l2fwd_simple_forward(m, portid);
+			}
+		}
+	}
+}
+
+static int
+l2fwd_launch_one_lcore(void *arg)
+{
+	l2fwd_main_loop((struct l2fwd_crypto_options *)arg);
+	return 0;
+}
+
+/* Display command line arguments usage */
+static void
+l2fwd_crypto_usage(const char *prgname)
+{
+	printf("%s [EAL options] -- --cdev TYPE [optional parameters]\n"
+		"  -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+		"  -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+		"  -s manage all ports from single lcore\n"
+		"  -t PERIOD: statistics will be refreshed each PERIOD seconds"
+		" (0 to disable, 10 default, 86400 maximum)\n"
+
+		"  --cdev_type AESNI_MB / QAT\n"
+		"  --chain HASH_CIPHER / CIPHER_HASH\n"
+
+		"  --cipher_algo ALGO\n"
+		"  --cipher_op ENCRYPT / DECRYPT\n"
+		"  --cipher_key KEY\n"
+
+		"  --auth_algo ALGO\n"
+		"  --auth_op GENERATE / VERIFY\n"
+		"  --auth_key KEY\n"
+
+		"  --sessionless\n",
+	       prgname);
+}
+
+/** Parse crypto device type command line argument */
+static int
+parse_cryptodev_type(enum rte_cryptodev_type *type, char *optarg)
+{
+	if (strcmp("AESNI_MB", optarg) == 0) {
+		*type = RTE_CRYPTODEV_AESNI_MB_PMD;
+		return 0;
+	} else if (strcmp("QAT", optarg) == 0) {
+		*type = RTE_CRYPTODEV_QAT_PMD;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto chain xform command line argument */
+static int
+parse_crypto_opt_chain(struct l2fwd_crypto_options *options, char *optarg)
+{
+	if (strcmp("CIPHER_HASH", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+		return 0;
+	} else if (strcmp("HASH_CIPHER", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_HASH_CIPHER;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto cipher algo option command line argument */
+static int
+parse_cipher_algo(enum rte_crypto_cipher_algorithm *algo, char *optarg)
+{
+	if (strcmp("AES_CBC", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_CBC;
+		return 0;
+	} else if (strcmp("AES_GCM", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_GCM;
+		return 0;
+	}
+
+	printf("Cipher algorithm not supported!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_cipher_op(enum rte_crypto_cipher_operation *op, char *optarg)
+{
+	if (strcmp("ENCRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+		return 0;
+	} else if (strcmp("DECRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+		return 0;
+	}
+
+	printf("Cipher operation not supported!\n");
+	return -1;
+}
+
+/** Parse crypto key command line argument */
+static int
+parse_key(struct rte_crypto_key *key __rte_unused,
+		unsigned length __rte_unused, char *arg __rte_unused)
+{
+	printf("Currently an unsupported argument!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_auth_algo(enum rte_crypto_auth_algorithm *algo, char *optarg)
+{
+	if (strcmp("SHA1", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1;
+		return 0;
+	} else if (strcmp("SHA1_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		return 0;
+	} else if (strcmp("SHA224", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224;
+		return 0;
+	} else if (strcmp("SHA224_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		return 0;
+	} else if (strcmp("SHA256", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256;
+		return 0;
+	} else if (strcmp("SHA256_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		return 0;
+	} else if (strcmp("SHA512", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512;
+		return 0;
+	} else if (strcmp("SHA512_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		return 0;
+	}
+
+	printf("Specified authentication algorithm not supported!\n");
+	return -1;
+}
+
+static int
+parse_auth_op(enum rte_crypto_auth_operation *op, char *optarg)
+{
+	if (strcmp("VERIFY", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_VERIFY;
+		return 0;
+	} else if (strcmp("GENERATE", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_GENERATE;
+		return 0;
+	}
+
+	printf("Specified authentication operation not supported!\n");
+	return -1;
+}
+
+/** Parse long options */
+static int
+l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
+		struct option *lgopts, int option_index)
+{
+	if (strcmp(lgopts[option_index].name, "no_stats") == 0) {
+		options->no_stats_printing = 1;
+		return 0;
+	}
+
+	if (strcmp(lgopts[option_index].name, "cdev_type") == 0)
+		return parse_cryptodev_type(&options->cdev_type, optarg);
+
+	else if (strcmp(lgopts[option_index].name, "chain") == 0)
+		return parse_crypto_opt_chain(options, optarg);
+
+	/* Cipher options */
+	else if (strcmp(lgopts[option_index].name, "cipher_algo") == 0)
+		return parse_cipher_algo(&options->cipher_xform.cipher.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_op") == 0)
+		return parse_cipher_op(&options->cipher_xform.cipher.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_key") == 0)
+		return parse_key(&options->cipher_xform.cipher.key,
+				sizeof(options->ckey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "iv") == 0)
+		return parse_key(&options->iv_key, sizeof(options->ivkey_data),
+				optarg);
+
+	/* Authentication options */
+	else if (strcmp(lgopts[option_index].name, "auth_algo") == 0)
+		return parse_auth_algo(&options->auth_xform.auth.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_op") == 0)
+		return parse_auth_op(&options->auth_xform.auth.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_key") == 0)
+		return parse_key(&options->auth_xform.auth.key,
+				sizeof(options->akey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "sessionless") == 0) {
+		options->sessionless = 1;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse port mask */
+static int
+l2fwd_crypto_parse_portmask(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long pm;
+
+	/* parse hexadecimal string */
+	pm = strtoul(q_arg, &end, 16);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		pm = 0;
+
+	options->portmask = pm;
+	if (options->portmask == 0) {
+		printf("invalid portmask specified\n");
+		return -1;
+	}
+
+	return pm;
+}
+
+/** Parse number of queues */
+static int
+l2fwd_crypto_parse_nqueue(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long n;
+
+	/* parse decimal string */
+	n = strtoul(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+	else if (n >= MAX_RX_QUEUE_PER_LCORE)
+		n = 0;
+
+	options->nb_ports_per_lcore = n;
+	if (options->nb_ports_per_lcore == 0) {
+		printf("invalid number of ports selected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse timer period */
+static int
+l2fwd_crypto_parse_timer_period(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	int n;
+
+	/* parse number string */
+	n = strtol(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+
+	if (n >= MAX_TIMER_PERIOD)
+		n = 0;
+
+	options->refresh_period = n * 1000 * TIMER_MILLISECOND;
+	if (options->refresh_period == 0) {
+		printf("invalid refresh period specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Generate default options for application */
+static void
+l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
+{
+	srand(time(NULL));
+
+	options->portmask = 0xffffffff;
+	options->nb_ports_per_lcore = 1;
+	options->refresh_period = 10000;
+	options->single_lcore = 0;
+	options->no_stats_printing = 0;
+
+	options->cdev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	options->sessionless = 0;
+	options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+
+	/* Cipher Data */
+	options->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	options->cipher_xform.next = NULL;
+
+	options->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	options->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+
+	generate_random_key(options->ckey_data, sizeof(options->ckey_data));
+
+	options->cipher_xform.cipher.key.data = options->ckey_data;
+	options->cipher_xform.cipher.key.phys_addr = 0;
+	options->cipher_xform.cipher.key.length = 16;
+
+
+	/* Authentication Data */
+	options->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	options->auth_xform.next = NULL;
+
+	options->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	options->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+
+	options->auth_xform.auth.add_auth_data_length = 0;
+	options->auth_xform.auth.digest_length = 20;
+
+	generate_random_key(options->akey_data, sizeof(options->akey_data));
+
+	options->auth_xform.auth.key.data = options->akey_data;
+	options->auth_xform.auth.key.phys_addr = 0;
+	options->auth_xform.auth.key.length = 20;
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
+{
+	printf("Options:-\n");
+	printf("portmask: %x\n", options->portmask);
+	printf("ports per lcore: %u\n", options->nb_ports_per_lcore);
+	printf("refresh period : %u\n", options->refresh_period);
+	printf("single lcore mode: %s\n",
+			options->single_lcore ? "enabled" : "disabled");
+	printf("stats_printing: %s\n",
+			options->no_stats_printing ? "disabled" : "enabled");
+
+	switch (options->cdev_type) {
+	case RTE_CRYPTODEV_AESNI_MB_PMD:
+		printf("cryptodev type: AES-NI MB PMD\n"); break;
+	case RTE_CRYPTODEV_QAT_PMD:
+		printf("cryptodev type: QAT PMD\n"); break;
+	default:
+		break;
+	}
+
+	printf("sessionless crypto: %s\n",
+			options->sessionless ? "enabled" : "disabled");
+}
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
+		int argc, char **argv)
+{
+	int opt, retval, option_index;
+	char **argvopt = argv, *prgname = argv[0];
+
+	static struct option lgopts[] = {
+			{ "no_stats", no_argument, 0, 0 },
+			{ "sessionless", no_argument, 0, 0 },
+
+			{ "cdev_type", required_argument, 0, 0 },
+			{ "chain", required_argument, 0, 0 },
+
+			{ "cipher_algo", required_argument, 0, 0 },
+			{ "cipher_op", required_argument, 0, 0 },
+			{ "cipher_key", required_argument, 0, 0 },
+
+			{ "auth_algo", required_argument, 0, 0 },
+			{ "auth_op", required_argument, 0, 0 },
+			{ "auth_key", required_argument, 0, 0 },
+
+			{ "iv", required_argument, 0, 0 },
+
+			{ NULL, 0, 0, 0 }
+	};
+
+	l2fwd_crypto_default_options(options);
+
+	while ((opt = getopt_long(argc, argvopt, "p:q:st:", lgopts,
+			&option_index)) != EOF) {
+		switch (opt) {
+		/* long options */
+		case 0:
+			retval = l2fwd_crypto_parse_args_long_options(options,
+					lgopts, option_index);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* portmask */
+		case 'p':
+			retval = l2fwd_crypto_parse_portmask(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* nqueue */
+		case 'q':
+			retval = l2fwd_crypto_parse_nqueue(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* single  */
+		case 's':
+			options->single_lcore = 1;
+
+			break;
+
+		/* timer period */
+		case 't':
+			retval = l2fwd_crypto_parse_timer_period(options,
+					optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		default:
+			l2fwd_crypto_usage(prgname);
+			return -1;
+		}
+	}
+
+
+	if (optind >= 0)
+		argv[optind-1] = prgname;
+
+	retval = optind-1;
+	optind = 0; /* reset getopt lib */
+
+	return retval;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint8_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+	uint8_t portid, count, all_ports_up, print_flag = 0;
+	struct rte_eth_link link;
+
+	printf("\nChecking link status");
+	fflush(stdout);
+	for (count = 0; count <= MAX_CHECK_TIME; count++) {
+		all_ports_up = 1;
+		for (portid = 0; portid < port_num; portid++) {
+			if ((port_mask & (1 << portid)) == 0)
+				continue;
+			memset(&link, 0, sizeof(link));
+			rte_eth_link_get_nowait(portid, &link);
+			/* print link status if flag set */
+			if (print_flag == 1) {
+				if (link.link_status)
+					printf("Port %d Link Up - speed %u "
+						"Mbps - %s\n", (uint8_t)portid,
+						(unsigned)link.link_speed,
+				(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+					("full-duplex") : ("half-duplex"));
+				else
+					printf("Port %d Link Down\n",
+						(uint8_t)portid);
+				continue;
+			}
+			/* clear all_ports_up flag if any link down */
+			if (link.link_status == 0) {
+				all_ports_up = 0;
+				break;
+			}
+		}
+		/* after finally printing all link status, get out */
+		if (print_flag == 1)
+			break;
+
+		if (all_ports_up == 0) {
+			printf(".");
+			fflush(stdout);
+			rte_delay_ms(CHECK_INTERVAL);
+		}
+
+		/* set the print_flag if all ports up or timeout */
+		if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+			print_flag = 1;
+			printf("done\n");
+		}
+	}
+}
+
+static int
+initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports)
+{
+	unsigned i, cdev_id, cdev_count, enabled_cdev_count = 0;
+	int retval;
+
+	if (options->cdev_type == RTE_CRYPTODEV_QAT_PMD) {
+		if (rte_cryptodev_count() < nb_ports)
+			return -1;
+	} else if (options->cdev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		for (i = 0; i < nb_ports; i++) {
+			int id = rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD,
+					NULL);
+			if (id < 0)
+				return -1;
+		}
+	}
+
+	cdev_count = rte_cryptodev_count();
+	for (cdev_id = 0;
+			cdev_id < cdev_count && enabled_cdev_count < nb_ports;
+			cdev_id++) {
+		struct rte_cryptodev_qp_conf qp_conf;
+		struct rte_cryptodev_info dev_info;
+
+		struct rte_cryptodev_config conf = {
+			.nb_queue_pairs = 1,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 64
+			}
+		};
+
+		rte_cryptodev_info_get(cdev_id, &dev_info);
+
+		if (dev_info.dev_type != options->cdev_type)
+			continue;
+
+
+		retval = rte_cryptodev_configure(cdev_id, &conf);
+		if (retval < 0) {
+			printf("Failed to configure cryptodev %u", cdev_id);
+			return -1;
+		}
+
+		qp_conf.nb_descriptors = 2048;
+
+		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
+				SOCKET_ID_ANY);
+		if (retval < 0) {
+			printf("Failed to setup queue pair %u on cryptodev %u",
+					0, cdev_id);
+			return -1;
+		}
+
+		l2fwd_enabled_crypto_mask |= (1 << cdev_id);
+
+		enabled_cdev_count++;
+	}
+
+	return enabled_cdev_count;
+}
+
+static int
+initialize_ports(struct l2fwd_crypto_options *options)
+{
+	uint8_t last_portid, portid;
+	unsigned enabled_portcount = 0;
+	unsigned nb_ports = rte_eth_dev_count();
+
+	if (nb_ports == 0) {
+		printf("No Ethernet ports - bye\n");
+		return -1;
+	}
+
+	if (nb_ports > RTE_MAX_ETHPORTS)
+		nb_ports = RTE_MAX_ETHPORTS;
+
+	/* Reset l2fwd_dst_ports */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+		l2fwd_dst_ports[portid] = 0;
+
+	for (last_portid = 0, portid = 0; portid < nb_ports; portid++) {
+		int retval;
+
+		/* Skip ports that are not enabled */
+		if ((options->portmask & (1 << portid)) == 0)
+			continue;
+
+		/* init port */
+		printf("Initializing port %u... ", (unsigned) portid);
+		fflush(stdout);
+		retval = rte_eth_dev_configure(portid, 1, 1, &port_conf);
+		if (retval < 0) {
+			printf("Cannot configure device: err=%d, port=%u\n",
+				  retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one RX queue */
+		fflush(stdout);
+		retval = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+					     rte_eth_dev_socket_id(portid),
+					     NULL, l2fwd_pktmbuf_pool);
+		if (retval < 0) {
+			printf("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one TX queue on each port */
+		fflush(stdout);
+		retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+				rte_eth_dev_socket_id(portid),
+				NULL);
+		if (retval < 0) {
+			printf("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+				retval, (unsigned) portid);
+
+			return -1;
+		}
+
+		/* Start device */
+		retval = rte_eth_dev_start(portid);
+		if (retval < 0) {
+			printf("rte_eth_dev_start:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		rte_eth_promiscuous_enable(portid);
+
+		rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+		printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+				(unsigned) portid,
+				l2fwd_ports_eth_addr[portid].addr_bytes[0],
+				l2fwd_ports_eth_addr[portid].addr_bytes[1],
+				l2fwd_ports_eth_addr[portid].addr_bytes[2],
+				l2fwd_ports_eth_addr[portid].addr_bytes[3],
+				l2fwd_ports_eth_addr[portid].addr_bytes[4],
+				l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+		/* initialize port stats */
+		memset(&port_statistics, 0, sizeof(port_statistics));
+
+		/* Setup port forwarding table */
+		if (enabled_portcount % 2) {
+			l2fwd_dst_ports[portid] = last_portid;
+			l2fwd_dst_ports[last_portid] = portid;
+		} else {
+			last_portid = portid;
+		}
+
+		l2fwd_enabled_port_mask |= (1 << portid);
+		enabled_portcount++;
+	}
+
+	if (enabled_portcount == 1) {
+		l2fwd_dst_ports[last_portid] = last_portid;
+	} else if (enabled_portcount % 2) {
+		printf("odd number of ports in portmask- bye\n");
+		return -1;
+	}
+
+	check_all_ports_link_status(nb_ports, l2fwd_enabled_port_mask);
+
+	return enabled_portcount;
+}
+
+int
+main(int argc, char **argv)
+{
+	struct lcore_queue_conf *qconf;
+	struct l2fwd_crypto_options options;
+
+	uint8_t nb_ports, nb_cryptodevs, portid, cdev_id;
+	unsigned lcore_id, rx_lcore_id;
+	int ret, enabled_cdevcount, enabled_portcount;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+	argc -= ret;
+	argv += ret;
+
+	/* parse application arguments (after the EAL ones) */
+	ret = l2fwd_crypto_parse_args(&options, argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L2FWD-CRYPTO arguments\n");
+
+	/* create the mbuf pool */
+	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 128,
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+	if (l2fwd_pktmbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+	/* create crypto op pool */
+	l2fwd_mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"mbuf_offload_pool", NB_MBUF, 128, 0, rte_socket_id());
+	if (l2fwd_mbuf_ol_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");
+
+	/* Enable Ethernet ports */
+	enabled_portcount = initialize_ports(&options);
+	if (enabled_portcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initialize Ethernet ports\n");
+
+	nb_ports = rte_eth_dev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, portid = 0;
+			portid < nb_ports; portid++) {
+
+		/* skip ports that are not enabled */
+		if ((options.portmask & (1 << portid)) == 0)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_rx_ports ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->rx_port_list[qconf->nb_rx_ports] = portid;
+		qconf->nb_rx_ports++;
+
+		printf("Lcore %u: RX port %u\n", rx_lcore_id, (unsigned)portid);
+	}
+
+
+	/* Enable Crypto devices */
+	enabled_cdevcount = initialize_cryptodevs(&options, enabled_portcount);
+	if (enabled_cdevcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initialize crypto devices\n");
+
+	nb_cryptodevs = rte_cryptodev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, cdev_id = 0;
+			cdev_id < nb_cryptodevs && enabled_cdevcount;
+			cdev_id++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(cdev_id, &info);
+
+		/* skip devices of the wrong type */
+		if (options.cdev_type != info.dev_type)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_crypto_devs ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->cryptodev_list[qconf->nb_crypto_devs] = cdev_id;
+		qconf->nb_crypto_devs++;
+
+		enabled_cdevcount--;
+
+		printf("Lcore %u: cryptodev %u\n", rx_lcore_id,
+				(unsigned)cdev_id);
+	}
+
+
+
+	/* launch per-lcore init on every lcore */
+	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)&options,
+			CALL_MASTER);
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (rte_eal_wait_lcore(lcore_id) < 0)
+			return -1;
+	}
+
+	return 0;
+}
+
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/6] mbuf_offload: library to support attaching offloads to a mbuf
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-10-30 16:34       ` Ananyev, Konstantin
  0 siblings, 0 replies; 115+ messages in thread
From: Ananyev, Konstantin @ 2015-10-30 16:34 UTC (permalink / raw)
  To: Doherty, Declan, dev


> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Declan Doherty
> Sent: Friday, October 30, 2015 4:09 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 2/6] mbuf_offload: library to support attaching offloads to a mbuf
> 
> This library adds support for adding a chain of offload operations to an
> mbuf. It contains the definition of the rte_mbuf_offload structure as
> well as helper functions for attaching offloads to mbufs and mempool
> management functions.
> 
> This initial implementation supports attaching multiple offload
> operations to a single mbuf, but only a single offload operation of a
> specific type can be attached to that mbuf.
> 
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
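
For reference, the attach/free lifecycle this library provides is
exercised by the l2fwd-crypto sample later in this series; in outline
(names as used there):

    struct rte_mbuf_offload *ol =
            rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);

    rte_pktmbuf_offload_attach(m, ol);
    /* ... after the crypto device has processed the mbuf ... */
    rte_pktmbuf_offload_free(m->offload_ops);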

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework
  2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
                       ` (5 preceding siblings ...)
  2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 6/6] l2fwd-crypto: crypto Declan Doherty
@ 2015-11-03 17:45     ` Declan Doherty
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
                         ` (7 more replies)
  6 siblings, 8 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-03 17:45 UTC (permalink / raw)
  To: dev

This series of patches defines a set of application burst oriented APIs for
asynchronous symmetric cryptographic functions within DPDK. It also contains a
poll mode driver cryptographic device framework for the implementation of
crypto devices within DPDK.

In the patch set we also have included 2 reference implementations of crypto
PMDs. Currently both implementations support AES128-CBC with
HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a purely
 software PMD based on Intel's multi-buffer library, which utilises both
AES-NI instructions and vector operations to accelerate crypto operations and
the second PMD utilises Intel's QuickAssist Technology (on DH895xxC) to provide
hardware accelerated crypto operations.

The API set supports two functional modes of operation:

1, A session oriented mode. In this mode the user creates a crypto session
which defines all the immutable data required to perform a particular crypto
operation in advance, including cipher/hash algorithms and operations to be
performed, as well as the keys to be used, etc. The session is then referenced by
the crypto operation data structure, which is specific to each mbuf. It
contains all mutable data about the crypto operation to be
performed, such as data offsets and lengths into the mbuf's data payload for
cipher and hash operations to be performed.
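
As a rough sketch of this flow (API names as used in this series; error
handling omitted, and ol_pool is assumed to be an rte_mbuf_offload
mempool):

    /* Immutable parameters: create the session once, up front. */
    struct rte_cryptodev_session *sess =
            rte_cryptodev_session_create(cdev_id, &cipher_xform);

    /* Mutable, per-mbuf parameters: allocate and fill a crypto op. */
    struct rte_mbuf_offload *ol =
            rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);

    rte_crypto_op_attach_session(&ol->op.crypto, sess);
    ol->op.crypto.data.to_cipher.offset = data_offset;
    ol->op.crypto.data.to_cipher.length = data_len;

    rte_pktmbuf_offload_attach(m, ol);
    rte_cryptodev_enqueue_burst(cdev_id, qp_id, &m, 1);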

2, A session-less mode. In this mode the user is able to provision crypto
operations on an mbuf without the need to have a cached session created in
advance, but at the cost of entailing the overhead of calculating
authentication pre-computes and performing key expansions in-line with the
crypto operation. The crypto xform chain is directly attached to the op struct
in this mode, so the op struct now contains all of the immutable crypto operation
parameters that would normally be set within a session. Once all mutable and
immutable parameters are set the crypto operation data structure can be attached
to the specified mbuf and enqueued on a specified crypto device for processing.
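
A session-less sketch differs only in how the immutable parameters are
supplied; the op struct member carrying the xform chain is named here
hypothetically, as this cover letter does not spell it out:

    struct rte_mbuf_offload *ol =
            rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);

    /* Attach the xform chain directly to the op instead of a session
     * (member name hypothetical). */
    ol->op.crypto.xform = &cipher_xform;

    rte_pktmbuf_offload_attach(m, ol);
    rte_cryptodev_enqueue_burst(cdev_id, qp_id, &m, 1);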

The patch set contains the following features:
- Crypto device APIs and device framework
- Implementation of a software crypto PMD based on multi-buffer library
- Implementation of a hardware crypto PMD based on Intel QAT (DH895xxC)
- Unit and performance tests which give an example of utilising the crypto APIs.
- Sample application which performs crypto operations on the IP payload of the
  packets being forwarded

Current Status:
There is no support for chained mbufs and, as mentioned above, the PMDs
currently implement support for AES128-CBC/AES256-CBC/AES512-CBC
and HMAC_SHA1/SHA256/SHA512.

v4:
 - Some more EOF whitespace and checkpatch fixes

v3:
 - Fixes a document build error, which I missed in the V2
 - Fixes for remaining checkpatch errors
 - Disables QAT and AESNI_MB PMDs being built by default as they have external
   library dependencies

v2: 
 - Introduces a new library to support attaching offload operations to a mbuf
 - Remove unused APIs from cryptodev
 - PMD code refactor due to new rte_mbuf_offload structure
 - General bug fixes and code tidy up


Declan Doherty (6):
  cryptodev: Initial DPDK Crypto APIs and device framework release
  mbuf_offload: library to support attaching offloads to a mbuf
  qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  aesni_mb_pmd: Initial implementation of multi buffer based crypto
    device
  app/test: add cryptodev unit and performance tests
  l2fwd-crypto: crypto

 app/test/Makefile                                  |    4 +
 app/test/test.c                                    |   92 +-
 app/test/test.h                                    |   34 +-
 app/test/test_cryptodev.c                          | 1975 +++++++++++++++++++
 app/test/test_cryptodev.h                          |   68 +
 app/test/test_cryptodev_perf.c                     | 2063 ++++++++++++++++++++
 app/test/test_link_bonding.c                       |    6 +-
 app/test/test_link_bonding_mode4.c                 |    7 +-
 config/common_bsdapp                               |   37 +-
 config/common_linuxapp                             |   36 +-
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 doc/guides/cryptodevs/aesni_mb.rst                 |   76 +
 doc/guides/cryptodevs/index.rst                    |   43 +
 doc/guides/cryptodevs/qat.rst                      |  194 ++
 doc/guides/index.rst                               |    1 +
 drivers/Makefile                                   |    1 +
 drivers/crypto/Makefile                            |   38 +
 drivers/crypto/aesni_mb/Makefile                   |   63 +
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |  212 ++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |  798 ++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  297 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  232 +++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |    3 +
 drivers/crypto/qat/Makefile                        |   63 +
 .../qat/qat_adf/adf_transport_access_macros.h      |  174 ++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            |  316 +++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         |  404 ++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            |  306 +++
 drivers/crypto/qat/qat_adf/qat_algs.h              |  125 ++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   |  601 ++++++
 drivers/crypto/qat/qat_crypto.c                    |  557 ++++++
 drivers/crypto/qat/qat_crypto.h                    |  119 ++
 drivers/crypto/qat/qat_logs.h                      |   78 +
 drivers/crypto/qat/qat_qp.c                        |  429 ++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |    3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             |  131 ++
 examples/l2fwd-crypto/Makefile                     |   50 +
 examples/l2fwd-crypto/main.c                       | 1473 ++++++++++++++
 lib/Makefile                                       |    2 +
 lib/librte_cryptodev/Makefile                      |   60 +
 lib/librte_cryptodev/rte_crypto.h                  |  613 ++++++
 lib/librte_cryptodev/rte_cryptodev.c               | 1092 +++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |  647 ++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  543 ++++++
 lib/librte_cryptodev/rte_cryptodev_version.map     |   41 +
 lib/librte_eal/common/include/rte_common.h         |   15 +
 lib/librte_eal/common/include/rte_eal.h            |   14 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eal/common/include/rte_memory.h         |   14 +-
 lib/librte_ether/rte_ethdev.c                      |   30 -
 lib/librte_mbuf/rte_mbuf.h                         |   33 +
 lib/librte_mbuf_offload/Makefile                   |   52 +
 lib/librte_mbuf_offload/rte_mbuf_offload.c         |  100 +
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |  284 +++
 .../rte_mbuf_offload_version.map                   |    7 +
 mk/rte.app.mk                                      |    9 +
 57 files changed, 14589 insertions(+), 79 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v4 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
@ 2015-11-03 17:45       ` Declan Doherty
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
                         ` (6 subsequent siblings)
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-03 17:45 UTC (permalink / raw)
  To: dev

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs for the allocation of crypto
   operation structures used to specify the crypto operations to be
   performed on a particular mbuf.
 - Extension of mbuf to contain crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations (see
   the usage sketch below).
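
A minimal, hypothetical sketch of the burst model (the mbuf-based
signature is assumed from this series' cover letter; dev_id, qp_id,
pkts, proc_pkts and BURST_SZ are placeholders, not verbatim from this
patch):

	/* enqueue mbufs with crypto ops attached for processing */
	uint16_t nb_enq = rte_cryptodev_enqueue_burst(dev_id, qp_id,
			pkts, nb_pkts);

	/* poll the same queue pair for processed mbufs */
	uint16_t nb_deq = rte_cryptodev_dequeue_burst(dev_id, qp_id,
			proc_pkts, BURST_SZ);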

Changes from RFC:
 - Session management API changes to support specification of crypto
   transform (xform) chains using a linked list of xforms (see the
   sketch below).
 - Changes to the crypto operation struct as a result of session
   management changes.
 - Some movement of common macros shared by cryptodevs and ethdevs to
   common headers.
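
As an illustration of xform chaining, here is a minimal sketch building a
cipher-then-auth chain with the structures defined in rte_crypto.h below
(key pointers, key lengths and the digest length are placeholders only):

	struct rte_crypto_xform cipher_xform = {
		.type = RTE_CRYPTO_XFORM_CIPHER,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.key = { .data = cipher_key, .length = 16 },
		},
	};

	struct rte_crypto_xform auth_xform = {
		.type = RTE_CRYPTO_XFORM_AUTH,
		.auth = {
			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
			.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
			.key = { .data = auth_key, .length = 64 },
			.digest_length = 20,
		},
	};

	/* chain: cipher first, then authentication */
	cipher_xform.next = &auth_xform;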

Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                           |   10 +-
 config/common_linuxapp                         |   10 +-
 doc/api/doxy-api-index.md                      |    1 +
 doc/api/doxy-api.conf                          |    1 +
 lib/Makefile                                   |    1 +
 lib/librte_cryptodev/Makefile                  |   60 ++
 lib/librte_cryptodev/rte_crypto.h              |  613 +++++++++++++
 lib/librte_cryptodev/rte_cryptodev.c           | 1092 ++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h           |  647 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h       |  543 ++++++++++++
 lib/librte_cryptodev/rte_cryptodev_version.map |   41 +
 lib/librte_eal/common/include/rte_common.h     |   15 +
 lib/librte_eal/common/include/rte_eal.h        |   14 +
 lib/librte_eal/common/include/rte_log.h        |    1 +
 lib/librte_eal/common/include/rte_memory.h     |   14 +-
 lib/librte_ether/rte_ethdev.c                  |   30 -
 lib/librte_mbuf/rte_mbuf.h                     |   27 +
 mk/rte.app.mk                                  |    1 +
 18 files changed, 3087 insertions(+), 34 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index f202d2f..e017feb 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -147,6 +147,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index c1d4bbd..3cbe233 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -145,6 +145,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 9727b83..4c5c1b4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -40,6 +40,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..81fa3fc
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_cryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_cryptodev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
\ No newline at end of file
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..7cf0439
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,613 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_CCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the xform chain of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_GCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the xform chain of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	phys_addr_t phys_addr;	/**< physical address of key data */
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains data relating to Cipher (Encryption and Decryption)
+ * transforms used to create a session.
+ */
+struct rte_crypto_cipher_xform {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid.
+	 */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both
+	 * keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to total length of
+	 *    the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
+	 **/
+};
+
+/** Symmetric Authentication / Hash Algorithms */
+enum rte_crypto_auth_algorithm {
+	RTE_CRYPTO_AUTH_NULL = 1,
+	/**< NULL hash algorithm. */
+
+	RTE_CRYPTO_AUTH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_AUTH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related rte_crypto_cipher_xform structure in the
+	 * session context, or the corresponding cipher xform in the
+	 * crypto operation's xform chain MUST be set for a session-less
+	 * crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_AUTH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_xform structure in the session context, or
+	 * the corresponding cipher xform in the crypto operation's xform
+	 * chain MUST be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	* is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	* rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	* rte_crypto_cipher_xform structure in the session context, or
+	* the corresponding cipher xform in the crypto operation's xform
+	* chain MUST be set for a session-less crypto operation.
+	*/
+	RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_AUTH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_AUTH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_AUTH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_AUTH_SHA1,
+	/**< 160 bit SHA1 algorithm. */
+	RTE_CRYPTO_AUTH_SHA1_HMAC,
+	/**< HMAC using 160 bit SHA1 algorithm. */
+	RTE_CRYPTO_AUTH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_AUTH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Authentication / Hash Operations */
+enum rte_crypto_auth_operation {
+	RTE_CRYPTO_AUTH_OP_VERIFY,	/**< Verify authentication digest */
+	RTE_CRYPTO_AUTH_OP_GENERATE	/**< Generate authentication digest */
+};
+
+/**
+ * Authentication / Hash transform data.
+ *
+ * This structure contains data relating to an authentication/hash crypto
+ * transform. The fields op, algo and digest_length are common to all
+ * authentication transforms and MUST be set.
+ */
+struct rte_crypto_auth_xform {
+	enum rte_crypto_auth_operation op;
+	/**< Authentication operation type */
+	enum rte_crypto_auth_algorithm algo;
+	/**< Authentication algorithm selection */
+
+	struct rte_crypto_key key;		/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the caller's responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash then an error will be generated
+	 * by *rte_cryptodev_session_create* or by the
+	 * *rte_cryptodev_enqueue_burst* if using session-less APIs.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_AUTH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op_data structure.
+	 */
+};
+
+/** Crypto transformation types */
+enum rte_crypto_xform_type {
+	RTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
+	RTE_CRYPTO_XFORM_AUTH,			/**< Authentication xform */
+	RTE_CRYPTO_XFORM_CIPHER			/**< Cipher xform  */
+};
+
+/**
+ * Crypto transform structure.
+ *
+ * This is used to specify the crypto transforms required. Multiple transforms
+ * can be chained together to specify a chain of transforms such as
+ * authentication then cipher, or cipher then authentication. Each transform
+ * structure can hold a single transform; the type field is used to specify
+ * which transform is contained within the union.
+ */
+struct rte_crypto_xform {
+	struct rte_crypto_xform *next; /**< next xform in chain */
+
+	enum rte_crypto_xform_type type; /**< xform type */
+	union {
+		struct rte_crypto_auth_xform auth;
+		/**< Authentication / hash xform */
+		struct rte_crypto_cipher_xform cipher;
+		/**< Cipher xform */
+	};
+};
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has a session structure attached for immutable parameters or if
+ * all operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+/** Status of crypto operation */
+enum rte_crypto_op_status {
+	RTE_CRYPTO_OP_STATUS_SUCCESS,
+	/**< Operation completed successfully */
+	RTE_CRYPTO_OP_STATUS_NOT_SUBMITTED,
+	/**< Operation not yet submitted to a cryptodev */
+	RTE_CRYPTO_OP_STATUS_ENQUEUED,
+	/**< Operation is enqueued on device */
+	RTE_CRYPTO_OP_STATUS_AUTH_FAILED,
+	/**< Authentication verification failed */
+	RTE_CRYPTO_OP_STATUS_INVALID_ARGS,
+	/**< Operation failed due to invalid arguments in request */
+	RTE_CRYPTO_OP_STATUS_ERROR,
+	/**< Error handling operation */
+};
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. This request is used with the
+ * rte_cryptodev_enqueue_burst() call for performing cipher, hash, or
+ * combined hash and cipher operations.
+ */
+struct rte_crypto_op {
+	enum rte_crypto_op_sess_type type;
+	enum rte_crypto_op_status status;
+
+	struct {
+		struct rte_mbuf *m;	/**< Destination mbuf */
+		uint8_t offset;		/**< Data offset */
+	} dst;
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct rte_crypto_xform *xform;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_AUTH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC)
+			  * mode of operation, this field specifies the start
+			  * of the AAD data in the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source
+			  * buffer that the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;	/**< physical address of IV data */
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member is set, it is a
+		 * pointer to the location where the digest result should be
+		 * inserted (in the case of digest generation) or where the
+		 * purported digest exists (in the case of digest
+		 * verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_auth_xform structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up
+		 * for the session in the @ref rte_crypto_auth_xform structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_AUTH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by
+		 *   the implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written in starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   space to round this up to the nearest multiple of the
+		 *   block size (16 bytes).
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+		uint32_t length;	/**< Length of additional authenticated data */
+	} additional_auth;
+	/**< Additional authentication parameters */
+
+	struct rte_mempool *pool;
+	/**< mempool used to allocate crypto op */
+
+	void *user_data;
+	/**< opaque pointer for user data */
+};
+
+
+/**
+ * Reset the fields of a crypto operation to their default values.
+ *
+ * @param op
+ *   The crypto operation to be reset.
+ */
+static inline void
+__rte_crypto_op_reset(struct rte_crypto_op *op)
+{
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+	op->dst.m = NULL;
+	op->dst.offset = 0;
+}
+
+/** Attach a session to a crypto operation */
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
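
(Reviewer note: a minimal, hypothetical sketch of the two operation modes
defined in this header; op, sess and xform_chain are assumed to have been
allocated elsewhere, e.g. via the session APIs and the mbuf_offload
library in patch 2/6.)

	/* session based: attach a pre-created session to the op */
	rte_crypto_op_attach_session(op, sess);

	/* session-less: reset the op (type becomes
	 * RTE_CRYPTO_OP_SESSIONLESS) and attach the xform chain */
	__rte_crypto_op_reset(op);
	op->xform = xform_chain;
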
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..663065c
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,1092 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+#include "rte_cryptodev_pmd.h"
+
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+
+struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= { NULL },
+		.nb_devs		= 0,
+		.max_devs		= RTE_CRYPTO_MAX_DEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;		/**< Callback address */
+	void *cb_arg;				/**< Parameter for callback */
+	enum rte_cryptodev_event_type event;	/**< Interrupt event type */
+	uint32_t active;			/**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+int
+rte_cryptodev_get_dev_id(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if ((strcmp(rte_cryptodev_globals->devs[i].data->name, name)
+				== 0) &&
+				(rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED))
+			return i;
+
+	return -1;
+}
+
+uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
+uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached ==
+					RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	return dev->data->socket_id;
+}
+
+static inline int
+rte_cryptodev_data_alloc(uint8_t dev_id, struct rte_cryptodev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_cryptodev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_cryptodev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_cryptodev_data));
+
+	return 0;
+}
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached ==
+				RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_CRYPTO_MAX_DEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+	uint8_t dev_id;
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_CRYPTO_MAX_DEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (cryptodev->data == NULL) {
+		struct rte_cryptodev_data *cryptodev_data =
+				cryptodev_globals.data[dev_id];
+
+		int retval = rte_cryptodev_data_alloc(dev_id, &cryptodev_data,
+				socket_id);
+
+		if (retval < 0 || cryptodev_data == NULL)
+			return NULL;
+
+		cryptodev->data = cryptodev_data;
+
+		snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN,
+				"%s", name);
+
+		cryptodev->data->dev_id = dev_id;
+		cryptodev->data->socket_id = socket_id;
+		cryptodev->data->dev_started = 0;
+
+		cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+		cryptodev->pmd_type = type;
+
+		cryptodev_globals.nb_devs++;
+	}
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	int ret;
+
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	ret = rte_cryptodev_close(cryptodev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV,
+			rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket(
+						"cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/*
+	 * Register PCI driver for physical device initialisation during
+	 * PCI probing
+	 */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+
+	return 0;
+}
+
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs,
+		int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	void **qp;
+	unsigned i;
+
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	CDEV_LOG_DEBUG("Setup %d queues pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > (dev_info.max_queue_pairs)) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u",
+							nb_qpairs);
+			return -(ENOMEM);
+		}
+	} else { /* re-configure */
+		int ret;
+		uint16_t old_nb_queues = dev->data->nb_queue_pairs;
+
+		qp = dev->data->queue_pairs;
+
+		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release,
+				-ENOTSUP);
+
+		for (i = nb_qpairs; i < old_nb_queues; i++) {
+			ret = (*dev->dev_ops->queue_pair_release)(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
+
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	return rte_crypto_session_pool_create(dev, config->session_mp.nb_objs,
+			config->session_mp.cache_size, config->socket_id);
+}
+
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int retval;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	PROC_PRIMARY_OR_ERR_RET(-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -1;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		CDEV_LOG_ERR("Device %u must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	/* We can't close the device if there are outstanding sessions in use */
+	if (dev->data->session_pool != NULL) {
+		if (!rte_mempool_full(dev->data->session_pool)) {
+			CDEV_LOG_ERR("dev_id=%u close failed, session mempool "
+					"has sessions still in use, free "
+					"all sessions before calling close",
+					(unsigned)dev_id);
+			return -EBUSY;
+		}
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	retval = (*dev->dev_ops->dev_close)(dev);
+
+	if (retval < 0)
+		return retval;
+
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+static void
+rte_crypto_session_init(struct rte_mempool *mp,
+		void *opaque_arg,
+		void *_sess,
+		__rte_unused unsigned i)
+{
+	struct rte_cryptodev_session *sess = _sess;
+	struct rte_cryptodev *dev = opaque_arg;
+
+	memset(sess, 0, mp->elt_size);
+
+	sess->dev_id = dev->data->dev_id;
+	sess->type = dev->dev_type;
+	sess->mp = mp;
+
+	if (dev->dev_ops->session_initialize)
+		(*dev->dev_ops->session_initialize)(mp, sess->_private);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id)
+{
+	char mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	unsigned priv_sess_size;
+
+	unsigned n = snprintf(mp_name, sizeof(mp_name), "cdev_%d_sess_mp",
+			dev->data->dev_id);
+	if (n >= sizeof(mp_name)) {
+		CDEV_LOG_ERR("Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);
+	priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
+	if (priv_sess_size == 0) {
+		CDEV_LOG_ERR("%s returned and invalid private session size ",
+						dev->data->name);
+		return -ENOMEM;
+	}
+
+	unsigned elt_size = sizeof(struct rte_cryptodev_session) +
+			priv_sess_size;
+
+	dev->data->session_pool = rte_mempool_lookup(mp_name);
+	if (dev->data->session_pool != NULL) {
+		if ((dev->data->session_pool->elt_size != elt_size) ||
+				(dev->data->session_pool->cache_size <
+				obj_cache_size) ||
+				(dev->data->session_pool->size < nb_objs)) {
+
+			CDEV_LOG_ERR("%s mempool already exists with different"
+					" initialization parameters", mp_name);
+			dev->data->session_pool = NULL;
+			return -ENOMEM;
+		}
+	} else {
+		dev->data->session_pool = rte_mempool_create(
+				mp_name, /* mempool name */
+				nb_objs, /* number of elements*/
+				elt_size, /* element size*/
+				obj_cache_size, /* Cache size*/
+				0, /* private data size */
+				NULL, /* obj initialization constructor */
+				NULL, /* obj initialization constructor arg */
+				rte_crypto_session_init, /* obj constructor */
+				dev, /* obj constructor arg */
+				socket_id, /* socket id */
+				0); /* flags */
+
+		if (dev->data->session_pool == NULL) {
+			CDEV_LOG_ERR("%s mempool allocation failed", mp_name);
+			return -ENOMEM;
+		}
+	}
+
+	CDEV_LOG_DEBUG("%s mempool created!", mp_name);
+	return 0;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id, struct rte_crypto_xform *xform)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_session *sess;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Allocate a session structure from the session pool */
+	if (rte_mempool_get(dev->data->session_pool, (void **)&sess)) {
+		CDEV_LOG_ERR("Couldn't get object from session mempool");
+		return NULL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);
+	if (dev->dev_ops->session_configure(dev, xform, sess->_private) ==
+			NULL) {
+		CDEV_LOG_ERR("dev_id %d failed to configure session details",
+				dev_id);
+
+		/* Return session to mempool */
+		rte_mempool_put(sess->mp, (void *)sess);
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id, struct rte_cryptodev_session *sess)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return sess;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check the session belongs to this device type */
+	if (sess->type != dev->dev_type)
+		return sess;
+
+	/* Let device implementation clear session material */
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
+	dev->dev_ops->session_clear(dev, (void *)sess->_private);
+
+	/* Return session to mempool */
+	rte_mempool_put(sess->mp, (void *)sess);
+
+	return NULL;
+}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..b64ac28
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,647 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "stddef.h"
+
+#include "rte_crypto.h"
+#include "rte_dev.h"
+
+#define CRYPTODEV_NAME_NULL_PMD		("cryptodev_null_pmd")
+/**< Null crypto PMD device name */
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
+	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...)					\
+		RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...)				\
+		RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+				dev, __func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)		\
+
+#define CDEV_PMD_TRACE(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n",		\
+				dev, __func__, ## args)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+/**  Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+	uint16_t max_queue_pairs;		/**< Maximum number of queue
+						* pairs supported by device.
+						*/
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User-specified parameter to be passed to the
+ *			user's callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
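+
+/*
+ * Illustrative sketch of registering an event callback; the callback
+ * name and its behaviour below are hypothetical, not part of this API:
+ *
+ *	static void
+ *	app_event_cb(uint8_t dev_id, enum rte_cryptodev_event_type event,
+ *			__rte_unused void *cb_arg)
+ *	{
+ *		if (event == RTE_CRYPTODEV_EVENT_ERROR)
+ *			printf("error event on cryptodev %u\n", dev_id);
+ *	}
+ *
+ *	rte_cryptodev_callback_register(dev_id, RTE_CRYPTODEV_EVENT_ERROR,
+ *			app_event_cb, NULL);
+ */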
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure is
+ * used to hold RDTSC-based timing measurements of crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;	/**< Accumulated time of processed operations */
+	uint64_t t_min;		/**< Min processing time */
+	uint64_t t_max;		/**< Max processing time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;
+	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;
+	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;
+	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;
+	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf; /**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Options arguments for device.
+ *
+ * @return
+ * - On successful creation of the cryptodev the device index is returned,
+ *   which will be between 0 and rte_cryptodev_count().
+ * - In the case of a failure, returns -1.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
+
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+extern int
+rte_cryptodev_get_dev_id(const char *name);
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+extern uint8_t
+rte_cryptodev_count(void);
+
+/**
+ * Get the total number of crypto devices of the specified type that have
+ * been successfully initialised.
+ *
+ * @param	type	Crypto device type to count.
+ *
+ * @return
+ *   - The total number of usable crypto devices of the given type.
+ */
+extern uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+
+/**
+ * Return the NUMA socket to which a device is connected
+ *
+ * @param dev_id
+ *   The identifier of the device
+ * @return
+ *   The NUMA socket id to which the device is connected, or
+ *   a default of zero if the socket could not be determined.
+ *   -1 is returned if the dev_id value is out of range.
+ */
+extern int
+rte_cryptodev_socket_id(uint8_t dev_id);
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;
+	/**< Number of queue pairs to configure on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< lcore object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure to
+ *				apply to the device.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
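+
+/*
+ * Minimal configuration sketch; the values below are illustrative only:
+ *
+ *	struct rte_cryptodev_config conf = {
+ *		.socket_id = rte_cryptodev_socket_id(dev_id),
+ *		.nb_queue_pairs = 1,
+ *		.session_mp = { .nb_objs = 2048, .cache_size = 64 },
+ *	};
+ *
+ *	if (rte_cryptodev_configure(dev_id, &conf) < 0)
+ *		rte_exit(EXIT_FAILURE, "cryptodev %u configure failed\n",
+ *				dev_id);
+ */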
+
+/**
+ * Start a device.
+ *
+ * The device start step is the last one in device setup; it applies the
+ * configured features and starts the device's processing units.
+ * On success, all basic functions exported by the API (statistics,
+ * enqueue/dequeue, and so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start()
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ */
+extern int
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a queue pair for a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pair to set up. The
+ *				value must be in the range
+ *				[0, nb_queue_pairs - 1] previously supplied
+ *				to rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the DMA memory allocated for the queue
+ *				pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
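+
+/*
+ * Example queue pair setup following device configuration; the
+ * descriptor count is illustrative:
+ *
+ *	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };
+ *
+ *	if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
+ *			SOCKET_ID_ANY) < 0)
+ *		rte_exit(EXIT_FAILURE, "queue pair setup failed\n");
+ */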
+
+/**
+ * Start a specified queue pair of a device, typically one which was
+ * previously stopped with rte_cryptodev_queue_pair_stop().
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop specified queue pair of a device
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		The event type of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		The event type of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;
+	/**< Pointer to PMD receive function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/**< Pointer to PMD transmit function. */
+
+	const struct rte_cryptodev_driver *driver;
+	/**< Driver for this device */
+	struct rte_cryptodev_data *data;
+	/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;
+	/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;
+	/**< Crypto device type */
+	enum pmd_type pmd_type;
+	/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+} __rte_cache_aligned;
+
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+/**
+ *
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t socket_id;
+	/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;
+	/**< Session memory pool */
+	void **queue_pairs;
+	/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;
+	/**< Number of device queue pairs. */
+
+	void *dev_private;
+	/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+extern struct rte_cryptodev *rte_cryptodevs;
+/**
+ *
+ * Dequeue a burst of processed packets from a queue of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain on the queue. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_cryptodev_dequeue_burst()
+ * function until a value less than *nb_pkts* is returned (see the usage
+ * sketch after the function below).
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification, to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
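+
+/*
+ * Usage sketch of the drain policy described above; the burst size of
+ * 32 is illustrative:
+ *
+ *	struct rte_mbuf *pkts[32];
+ *	uint16_t nb;
+ *
+ *	do {
+ *		nb = rte_cryptodev_dequeue_burst(dev_id, 0, pkts, 32);
+ *		(process the nb dequeued packets here)
+ *	} while (nb == 32);
+ */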
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place packets
+ * on the queue pair *qp_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of packets
+ * it actually enqueued. A return value equal to *nb_pkts* means that all
+ * packets have been accepted for processing.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which the
+ *				packets are to be enqueued. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to be processed.
+ * @param	nb_pkts		The number of packets to enqueue.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue pair is full, and is 0 if the device has not been
+ * started.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return (*dev->enqueue_burst)(
+			dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
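+
+/*
+ * Illustrative enqueue sketch; packets not accepted because the queue
+ * pair is full must be retried or freed by the application:
+ *
+ *	uint16_t sent = rte_cryptodev_enqueue_burst(dev_id, 0, pkts, nb);
+ *
+ *	while (sent < nb)
+ *		rte_pktmbuf_free(pkts[sent++]);
+ */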
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialize the immutable
+ * parameters of a symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst function is
+ * used. Each mbuf should contain a reference to the session
+ * pointer returned from this function within its crypto_op if a
+ * session-based operation is being provisioned. Memory to contain the
+ * session information is allocated from a mempool managed by the cryptodev.
+ *
+ * The rte_cryptodev_session_free must be called to free allocated
+ * memory when the session is no longer required.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	xform		Crypto transform chain.
+ *
+ * @return
+ *  Pointer to the created session or NULL
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_xform *xform);
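+
+/*
+ * Illustrative session setup sketch; the xform contents are elided here
+ * and would be populated with cipher/auth parameters as defined in
+ * rte_crypto.h:
+ *
+ *	struct rte_crypto_xform cipher_xform = { ... };
+ *
+ *	struct rte_cryptodev_session *sess =
+ *		rte_cryptodev_session_create(dev_id, &cipher_xform);
+ *	if (sess == NULL)
+ *		rte_exit(EXIT_FAILURE, "session creation failed\n");
+ */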
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ *
+ * @return
+ *   NULL on successful freeing of session.
+ *   Session pointer on failure to free session.
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..db940d1
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,543 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These API are from crypto PMD only and user applications should not call
+ * them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+
+struct rte_cryptodev_session {
+	struct {
+		uint8_t dev_id;
+		enum rte_cryptodev_type type;
+		struct rte_mempool *mp;
+	} __rte_aligned(8);
+
+	char _private[];
+};
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ *			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
+
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;	/**< Device information array */
+	struct rte_cryptodev_data *data[RTE_CRYPTO_MAX_DEVS];
+	/**< Device private data */
+	uint8_t nb_devs;		/**< Number of devices found */
+	uint8_t max_devs;		/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the named device, or NULL
+ *     if no attached device with that name is found.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	struct rte_cryptodev *dev;
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		dev = &rte_cryptodev_globals->devs[i];
+
+		if ((dev->attached == RTE_CRYPTODEV_ATTACHED) &&
+				(strcmp(dev->data->name, name) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that the crypto device index refers to a valid, attached
+ * crypto device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - If the device index is valid (1) or not (0).
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures.
+ */
+extern struct rte_cryptodev *rte_cryptodevs;
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ * Function used to configure a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ * @return
+ * - 0 on success.
+ * - -EAGAIN if the device is busy and cannot be closed
+ */
+typedef int (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return
+ * - 0 on success.
+ * - -EAGAIN if the queue pair is busy and cannot be released
+ */
+typedef int (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		Number of session objects in the mempool
+ * @param	obj_cache_size	lcore object cache size, see *rte_ring_create*
+ * @param	socket_id	Socket Id to allocate the mempool on.
+ *
+ * @return
+ * - 0 on success
+ * - Negative errno value on failure
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+
+/**
+ * Get the size of a cryptodev session
+ *
+ * @param	dev		Crypto device pointer
+ *
+ * @return
+ *  - On success returns the size of the session structure for device
+ *  - On failure returns 0
+ */
+typedef unsigned (*cryptodev_get_session_private_size_t)(
+		struct rte_cryptodev *dev);
+
+/**
+ * Initialize the private data of a Crypto session object as it is
+ * allocated from the session mempool.
+ *
+ * @param	mempool		Session mempool the object belongs to
+ * @param	session_private	Pointer to cryptodev's private session
+ *				structure
+ */
+typedef void (*cryptodev_initialize_session_t)(struct rte_mempool *mempool,
+		void *session_private);
+
+/**
+ * Configure a Crypto session on a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	xform		Single or chain of crypto xforms
+ * @param	priv_sess	Pointer to cryptodev's private session structure
+ *
+ * @return
+ *  - Returns private session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef void * (*cryptodev_configure_session_t)(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+/**
+ * Free Crypto session.
+ * @param	session		Cryptodev session structure to free
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		void *session_private);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;
+	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;
+	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;
+	/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;
+	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;
+	/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;
+	/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;
+	/**< Get count of the queue pairs. */
+
+	cryptodev_get_session_private_size_t session_get_size;
+	/**< Return size of the private session data. */
+	cryptodev_initialize_session_t session_initialize;
+	/**< Initialization function for private session data */
+	cryptodev_configure_session_t session_configure;
+	/**< Configure a Crypto session. */
+	cryptodev_free_session_t session_clear;
+	/**< Clear a Crypto sessions private data. */
+};
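+
+/*
+ * Sketch of a PMD populating its ops table; the my_pmd_* symbols are
+ * hypothetical PMD-internal functions:
+ *
+ *	static struct rte_cryptodev_ops my_pmd_ops = {
+ *		.dev_configure = my_pmd_config,
+ *		.dev_start = my_pmd_start,
+ *		.session_get_size = my_pmd_session_size,
+ *		.session_configure = my_pmd_session_configure,
+ *	};
+ *
+ *	dev->dev_ops = &my_pmd_ops;
+ */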
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Slot in the *rte_crypto_devices* array for the new device;
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a - register itself as PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b - complete the initialization sequence if the device is a virtual
+ *		device by calling the rte_cryptodev_init() directly passing a
+ *		NULL parameter for the rte_pci_device structure.
+ *
+ *   @param crypto_drv	crypto_driver structure associated with the crypto
+ *					driver.
+ *   @param type		pmd type
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
new file mode 100644
index 0000000..31e04d2
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -0,0 +1,41 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodevs;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_close;
+	rte_cryptodev_count;
+	rte_cryptodev_count_devtype;
+	rte_cryptodev_configure;
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_enqueue_burst;
+	rte_cryptodev_dequeue_burst;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_info_get;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+	rte_cryptodev_socket_id;
+	rte_cryptodev_start;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_stop;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+	rte_cryptodev_queue_pair_count;
+
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_attach;
+	rte_cryptodev_pmd_callback_process;
+	rte_cryptodev_pmd_detach;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_get_dev;
+	rte_cryptodev_pmd_get_named_dev;
+	rte_cryptodev_pmd_is_valid_dev;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_socket_id;
+	rte_cryptodev_pmd_virtual_dev_init;
+
+	local: *;
+};
\ No newline at end of file
diff --git a/lib/librte_eal/common/include/rte_common.h b/lib/librte_eal/common/include/rte_common.h
index 3121314..bae4054 100644
--- a/lib/librte_eal/common/include/rte_common.h
+++ b/lib/librte_eal/common/include/rte_common.h
@@ -270,8 +270,23 @@ rte_align64pow2(uint64_t v)
 		_a > _b ? _a : _b; \
 	})
 
+
 /*********** Other general functions / macros ********/
 
+#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		RTE_LOG(ERR, PMD, "Function not supported\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		RTE_LOG(ERR, PMD, "Function not supported\n"); \
+		return; \
+	} \
+} while (0)
+
 #ifdef __SSE2__
 #include <emmintrin.h>
 /**
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index d2816a8..c54a792 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -118,6 +118,20 @@ enum rte_lcore_role_t rte_eal_lcore_role(unsigned lcore_id);
  */
 enum rte_proc_type_t rte_eal_process_type(void);
 
+#define PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_LOG(ERR, PMD, "Cannot run in secondary processes\n"); \
+		return; \
+	} \
+} while (0)
+
+#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_LOG(ERR, PMD, "Cannot run in secondary processes\n"); \
+		return retval; \
+	} \
+} while (0)
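+
+/*
+ * e.g. guarding a device-mutating function against execution in a
+ * secondary process (illustrative; the function name is hypothetical):
+ *
+ *	int
+ *	dev_configure(uint8_t dev_id)
+ *	{
+ *		PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+ *		...
+ *	}
+ */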
+
 /**
  * Request iopl privilege for all RPL.
  *
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..40e8d43 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment.
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -104,7 +114,7 @@ struct rte_memseg {
 	 /**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 58aaeb2..697e799 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -77,36 +77,6 @@
 #define PMD_DEBUG_TRACE(fmt, args...)
 #endif
 
-/* Macros for checking for restricting functions to primary instance only */
-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define PROC_PRIMARY_OR_RET() do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for invalid function pointers in dev_ops structure */
-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
-	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define FUNC_PTR_OR_RET(func) do { \
-	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
-		return; \
-	} \
-} while (0)
-
 /* Macros to check for valid port */
 #define VALID_PORTID_OR_ERR_RET(port_id, retval) do {		\
 	if (!rte_eth_dev_is_valid_port(port_id)) {		\
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4a93189..689ef77 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1622,6 +1622,33 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address of the data in the mbuf,
+ * at the given byte offset.
+ *
+ * Before using this macro, the user must ensure that the mbuf data is
+ * contiguous and large enough to cover the requested offset.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate the address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) ((phys_addr_t)((char *)(m)->buf_physaddr + (m)->data_off) + (o))
+
+/**
+ * A macro that returns the physical address of the start of the data in
+ * the mbuf.
+ *
+ * Before using this macro, the user must ensure that the mbuf data is
+ * contiguous.
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
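+
+/*
+ * e.g. obtaining the physical address of a payload at a given offset
+ * (illustrative; hdr_len is a hypothetical offset):
+ *
+ *	phys_addr_t pa = rte_pktmbuf_mtophys_offset(m, hdr_len);
+ */
+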
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 724efa7..5d382bb 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -118,6 +118,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v4 2/6] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-11-03 17:45       ` Declan Doherty
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
                         ` (5 subsequent siblings)
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-03 17:45 UTC (permalink / raw)
  To: dev

This library adds support for attaching a chain of offload operations to
an mbuf. It contains the definition of the rte_mbuf_offload structure as
well as helper functions for attaching offloads to mbufs and mempool
management functions.

This initial implementation supports attaching multiple offload
operations to a single mbuf, but only a single offload operation of a
specific type can be attached to that mbuf (see the sketch below).
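
As an illustrative sketch of the intended usage (the pool create call
matches the API added in this patch; the allocation step is shown
schematically, see rte_mbuf_offload.h for the exact helpers):

	struct rte_mempool *pool = rte_pktmbuf_offload_pool_create(
			"ol_pool", 8192, 128, priv_size, rte_socket_id());

	struct rte_mbuf_offload *ol = ...; /* allocated from pool */
	m->offload_ops = ol;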

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 config/common_bsdapp                               |   6 +
 config/common_linuxapp                             |   6 +
 lib/Makefile                                       |   1 +
 lib/librte_mbuf/rte_mbuf.h                         |   6 +
 lib/librte_mbuf_offload/Makefile                   |  52 ++++
 lib/librte_mbuf_offload/rte_mbuf_offload.c         | 100 +++++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         | 291 +++++++++++++++++++++
 .../rte_mbuf_offload_version.map                   |   7 +
 mk/rte.app.mk                                      |   1 +
 9 files changed, 470 insertions(+)
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index e017feb..fe90d94 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -331,6 +331,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 3cbe233..b4f9c88 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -339,6 +339,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/lib/Makefile b/lib/Makefile
index 4c5c1b4..ef172ea 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
+DIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += librte_mbuf_offload
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 689ef77..6b5c0c2 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -728,6 +728,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque rte_mbuf_offload  structure declarations */
+struct rte_mbuf_offload;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -841,6 +844,9 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+
+	/* Chain of off-load operations to perform on mbuf */
+	struct rte_mbuf_offload *offload_ops;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
diff --git a/lib/librte_mbuf_offload/Makefile b/lib/librte_mbuf_offload/Makefile
new file mode 100644
index 0000000..acdb449
--- /dev/null
+++ b/lib/librte_mbuf_offload/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mbuf_offload.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+EXPORT_MAP := rte_mbuf_offload_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) := rte_mbuf_offload.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)-include := rte_mbuf_offload.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.c b/lib/librte_mbuf_offload/rte_mbuf_offload.c
new file mode 100644
index 0000000..5c0c9dd
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.c
@@ -0,0 +1,100 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+
+#include "rte_mbuf_offload.h"
+
+/** Initialize rte_mbuf_offload structure */
+static void
+rte_pktmbuf_offload_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_mbuf_offload *ol = _op_data;
+
+	memset(_op_data, 0, mp->elt_size);
+
+	ol->type = RTE_PKTMBUF_OL_NOT_SPECIFIED;
+	ol->mp = mp;
+}
+
+
+struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id)
+{
+	struct rte_pktmbuf_offload_pool_private *priv;
+	unsigned elt_size = sizeof(struct rte_mbuf_offload) + priv_size;
+
+
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+
+	if (mp != NULL) {
+		priv = (struct rte_pktmbuf_offload_pool_private *)
+				rte_mempool_get_priv(mp);
+
+		if (priv->offload_priv_size < priv_size ||
+				mp->elt_size != elt_size ||
+				mp->cache_size < cache_size ||
+				mp->size < size)
+			return NULL;
+
+		return mp;
+	}
+
+	mp = rte_mempool_create(
+			name,
+			size,
+			elt_size,
+			cache_size,
+			sizeof(struct rte_pktmbuf_offload_pool_private),
+			NULL,
+			NULL,
+			rte_pktmbuf_offload_init,
+			NULL,
+			socket_id,
+			0);
+
+	if (mp == NULL)
+		return NULL;
+
+	priv = (struct rte_pktmbuf_offload_pool_private *)
+			rte_mempool_get_priv(mp);
+
+	priv->offload_priv_size = priv_size;
+	return mp;
+}
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
new file mode 100644
index 0000000..ea97d16
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -0,0 +1,291 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright 2014 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MBUF_OFFLOAD_H_
+#define _RTE_MBUF_OFFLOAD_H_
+
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** packet mbuf offload operation types */
+enum rte_mbuf_ol_op_type {
+	RTE_PKTMBUF_OL_NOT_SPECIFIED = 0,
+	/**< Off-load not specified */
+	RTE_PKTMBUF_OL_CRYPTO
+	/**< Crypto offload operation */
+};
+
+/**
+ * Generic packet mbuf offload
+ * This is used to specify an offload operation to be performed on an rte_mbuf.
+ * Multiple offload operations can be chained to the same mbuf, but only a
+ * single offload operation of a particular type can be in the chain.
+ */
+struct rte_mbuf_offload {
+	struct rte_mbuf_offload *next;	/**< next offload in chain */
+	struct rte_mbuf *m;		/**< mbuf offload is attached to */
+	struct rte_mempool *mp;		/**< mempool offload allocated from */
+
+	enum rte_mbuf_ol_op_type type;	/**< offload type */
+	union {
+		struct rte_crypto_op crypto;	/**< Crypto operation */
+	} op;
+};
+
+/** Private data structure belonging to a packet mbuf offload mempool */
+struct rte_pktmbuf_offload_pool_private {
+	uint16_t offload_priv_size;
+	/**< Size of private area in each mbuf_offload. */
+};
+
+
+/**
+ * Creates a mempool of rte_mbuf_offload objects
+ *
+ * @param	name		mempool name
+ * @param	size		number of objects in mempool
+ * @param	cache_size	cache size of objects for each core
+ * @param	priv_size	size of private data to be allocated with each
+ *				rte_mbuf_offload object
+ * @param	socket_id	Socket on which to allocate mempool objects
+ *
+ * @return
+ * - On success returns a valid mempool of rte_mbuf_offload objects
+ * - On failure return NULL
+ */
+extern struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id);
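+
+/*
+ * Example (an illustrative sketch only; the pool name and sizes below
+ * are arbitrary):
+ *
+ *	struct rte_mempool *ol_pool = rte_pktmbuf_offload_pool_create(
+ *			"OFFLOAD_POOL", 8192, 128,
+ *			2 * sizeof(struct rte_crypto_xform),
+ *			rte_socket_id());
+ *
+ * priv_size reserves space after each object, here enough for two
+ * crypto xforms, which rte_pktmbuf_offload_alloc_crypto_xforms()
+ * below can then carve up.
+ */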
+
+
+/**
+ * Returns private data size allocated with each rte_mbuf_offload object by
+ * the mempool
+ *
+ * @param	mpool	rte_mbuf_offload mempool
+ *
+ * @return	private data size
+ */
+static inline uint16_t
+__rte_pktmbuf_offload_priv_size(struct rte_mempool *mpool)
+{
+	struct rte_pktmbuf_offload_pool_private *priv =
+			rte_mempool_get_priv(mpool);
+
+	return priv->offload_priv_size;
+}
+
+/**
+ * Get the offload operation of the specified type from a mbuf.
+ *
+ * @param	m		packet mbuf.
+ * @param	type		offload operation type requested.
+ *
+ * @return
+ * - On success returns a pointer to the rte_mbuf_offload
+ * - On failure returns NULL
+ *
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol;
+
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
+		if (ol->type == type)
+			return ol;
+
+	return NULL;
+}
+
+/**
+ * Attach a rte_mbuf_offload to a mbuf. We only support a single offload of any
+ * one type in our chain of offloads.
+ *
+ * @param	m	packet mbuf.
+ * @param	ol	rte_mbuf_offload structure to be attached
+ *
+ * @returns
+ * - On success returns the pointer to the offload we just added
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
+{
+	struct rte_mbuf_offload **ol_last;
+
+	for (ol_last = &m->offload_ops;	ol_last[0] != NULL;
+			ol_last = &ol_last[0]->next)
+		if (ol_last[0]->type == ol->type)
+			return NULL;
+
+	ol_last[0] = ol;
+	ol_last[0]->m = m;
+	ol_last[0]->next = NULL;
+
+	return ol_last[0];
+}
+
+
+/** Resets a rte_mbuf_offload to its default parameters */
+static inline void
+__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
+		enum rte_mbuf_ol_op_type type)
+{
+	ol->m = NULL;
+	ol->type = type;
+
+	switch (type) {
+	case RTE_PKTMBUF_OL_CRYPTO:
+		__rte_crypto_op_reset(&ol->op.crypto);
+		break;
+	default:
+		break;
+	}
+}
+
+/** Allocate rte_mbuf_offload from mempool */
+static inline struct rte_mbuf_offload *
+__rte_pktmbuf_offload_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_mbuf_offload *)buf;
+}
+
+/**
+ * Allocate a rte_mbuf_offload with a specified operation type from
+ * rte_mbuf_offload mempool
+ *
+ * @param	mpool		rte_mbuf_offload mempool
+ * @param	type		offload operation type
+ *
+ * @returns
+ * - On success returns a valid rte_mbuf_offload structure
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_alloc(struct rte_mempool *mpool,
+		enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol = __rte_pktmbuf_offload_raw_alloc(mpool);
+
+	if (ol != NULL)
+		__rte_pktmbuf_offload_reset(ol, type);
+
+	return ol;
+}
+
+/**
+ * Free a rte_mbuf_offload structure back to its originating mempool.
+ */
+static inline void
+rte_pktmbuf_offload_free(struct rte_mbuf_offload *ol)
+{
+	if (ol->mp != NULL)
+		rte_mempool_put(ol->mp, ol);
+}
+
+/**
+ * Checks if the private data of a rte_mbuf_offload has enough capacity for
+ * requested size
+ *
+ * @returns
+ * - if sufficient space available returns pointer to start of private data
+ * - if insufficient space returns NULL
+ */
+static inline void *
+__rte_pktmbuf_offload_check_priv_data_size(struct rte_mbuf_offload *ol,
+		uint16_t size)
+{
+	uint16_t priv_size;
+
+	if (likely(ol->mp != NULL)) {
+		priv_size = __rte_pktmbuf_offload_priv_size(ol->mp);
+
+		if (likely(priv_size >= size))
+			return (void *)(ol + 1);
+	}
+	return NULL;
+}
+
+/**
+ * Allocate space for crypto xforms in the private data space of the
+ * rte_mbuf_offload. This also sets each crypto xform type to a default
+ * and links the xforms into a chain on the crypto operation.
+ *
+ * @return
+ * - On success returns pointer to first crypto xform in crypto operations chain
+ * - On failure returns NULL
+ */
+static inline struct rte_crypto_xform *
+rte_pktmbuf_offload_alloc_crypto_xforms(struct rte_mbuf_offload *ol,
+		unsigned nb_xforms)
+{
+	struct rte_crypto_xform *xform;
+	void *priv_data;
+	uint16_t size;
+
+	size = sizeof(struct rte_crypto_xform) * nb_xforms;
+	priv_data = __rte_pktmbuf_offload_check_priv_data_size(ol, size);
+
+	if (priv_data == NULL)
+		return NULL;
+
+	ol->op.crypto.xform = xform = (struct rte_crypto_xform *)priv_data;
+
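+	/* xforms are laid out contiguously in the private data area;
+	 * link each one to the next and NULL-terminate the chain */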
+	do {
+		xform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;
+		xform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;
+	} while (xform);
+
+	return ol->op.crypto.xform;
+}
+
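+/**
+ * Typical enqueue-side usage of this API (an illustrative sketch only:
+ * the mempool 'ol_pool', mbuf 'm' and session 'sess' are assumed to
+ * exist, the rte_crypto_op field and helper names approximate the
+ * definitions from patch 1, and error handling is omitted):
+ *
+ *	struct rte_mbuf_offload *ol =
+ *		rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);
+ *
+ *	// session-oriented mode: reference a pre-created session
+ *	rte_crypto_op_attach_session(&ol->op.crypto, sess);
+ *
+ *	// mutable, per-packet parameters live in the op itself
+ *	ol->op.crypto.data.to_cipher.offset = 0;
+ *	ol->op.crypto.data.to_cipher.length = rte_pktmbuf_data_len(m);
+ *
+ *	rte_pktmbuf_offload_attach(m, ol);
+ *	// ... enqueue m on the crypto device for processing ...
+ */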
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MBUF_OFFLOAD_H_ */
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload_version.map b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
new file mode 100644
index 0000000..3d3b06a
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
@@ -0,0 +1,7 @@
+DPDK_2.2 {
+	global:
+
+	rte_pktmbuf_offload_pool_create;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5d382bb..2b8ddce 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,7 @@ ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),n)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)   += -lrte_mbuf_offload
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v4 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-11-03 17:45       ` Declan Doherty
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
                         ` (4 subsequent siblings)
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-03 17:45 UTC (permalink / raw)
  To: dev

This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.

This patch depends on a QAT PF driver for device initialization. See
the file doc/guides/cryptodevs/qat.rst for configuration details.

This patch supports a limited subset of QAT device functionality,
currently supporting chaining of cipher and hash operations for the
following algorithms (see the illustrative sketch after the
limitations list below):

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC

Some limitations of this patchset, which shall be addressed in a
subsequent release:
 - Chained mbufs are not supported.
 - Hash only is not supported.
 - Cipher only is not supported.
 - Only in-place is currently supported (destination address is
   the same as source address).
 - Only supports session-oriented API implementation (session-less
   APIs are not supported).
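
For illustration, a cipher-then-hash chain for AES128-CBC with
SHA1-HMAC could be described to the device roughly as follows. This is
a sketch only: the algorithm identifiers are those listed above, while
the xform field names approximate the rte_crypto_xform definitions
from patch 1, and dev_id, the key storage and the digest size are
example values.

	struct rte_crypto_xform cipher_xform = {
		.type = RTE_CRYPTO_XFORM_CIPHER,
		.cipher = {
			.algo = RTE_CRYPTO_CIPHER_AES128_CBC,
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.key = { .data = aes_key, .length = 16 },
		},
	};

	struct rte_crypto_xform auth_xform = {
		.type = RTE_CRYPTO_XFORM_AUTH,
		.auth = {
			.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
			.key = { .data = hmac_key, .length = 64 },
			.digest_length = 20,
		},
	};

	/* chain the xforms: cipher first, then hash */
	cipher_xform.next = &auth_xform;
	auth_xform.next = NULL;

	/* session-oriented mode: create the session once, up front */
	struct rte_cryptodev_session *sess =
		rte_cryptodev_session_create(dev_id, &cipher_xform);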

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
---
 config/common_bsdapp                               |  14 +
 config/common_linuxapp                             |  14 +
 doc/guides/cryptodevs/index.rst                    |  42 ++
 doc/guides/cryptodevs/qat.rst                      | 194 +++++++
 doc/guides/index.rst                               |   1 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  37 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 174 ++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 +++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 ++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 306 +++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 125 +++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 601 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 557 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    | 119 ++++
 drivers/crypto/qat/qat_logs.h                      |  78 +++
 drivers/crypto/qat/qat_qp.c                        | 429 +++++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 131 +++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |   9 +-
 mk/rte.app.mk                                      |   3 +
 22 files changed, 3613 insertions(+), 8 deletions(-)
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index fe90d94..a0a5ea4 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -155,6 +155,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index b4f9c88..deb012f 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -153,6 +153,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..1c31697
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,42 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+====================================
+
+|today|
+
+
+**Contents**
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..9e24c07
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,194 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator. The QAT PMD has
+currently been tested on 64-bit Fedora 21 with gcc and on the 4.3
+kernel.org Linux kernel.
+
+
+Features
+--------
+
+The QAT PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+To use the DPDK QAT PMD an SRIOV-enabled QAT kernel driver is required.
+The VF devices exposed by this driver will be used by the QAT PMD.
+
+If you are running on kernel 4.3 or greater, see the instructions for
+"Installation using kernel.org driver". If you are on a kernel earlier
+than 4.3, see "Installation using 01.org QAT driver".
+
+Installation using 01.org QAT driver
+------------------------------------
+Download the latest QuickAssist Technology Driver from 01.org:
+https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches
+Consult the Getting Started Guide at the same URL for further information.
+
+The steps below assume:
+  * building on a platform with one DH895xCC device
+  * using package qatmux.l.2.3.0-34.tgz
+  * on Fedora21 kernel 3.17.4-301.fc21.x86_64
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, e.g. by running either:
+  *  "./installer.sh uninstall" in the directory where it was originally
+     installed, or
+  *  "rmmod qat_dh895xcc; rmmod intel_qat"
+
+Build and install the SRIOV-enabled QAT driver:
+
+.. code-block:: console
+
+    mkdir /QAT; cd /QAT
+    # copy qatmux.l.2.3.0-34.tgz to this location
+    tar zxof qatmux.l.2.3.0-34.tgz
+    export ICP_WITHOUT_IOMMU=1
+    ./installer.sh install QAT1.6 host
+
+You can use "cat /proc/icp_dh895xcc_dev0/version" to confirm the driver is correctly installed.
+You can use "lspci -d:443" to confirm the bdf of the 32 VF devices are available per DH895xCC device.
+
+To complete the installation, follow the instructions in "Binding the available VFs to the DPDK UIO driver" below.
+
+Notes on compiling the 01.org driver:
+If using a later kernel and the build fails with an error relating to
+strict_strtoul not being available, patch the following file:
+
+.. code-block:: console
+
+  /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+  + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+  + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  + #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+  #else
+  #define STR_TO_64(str, base, num, endPtr)                                 \
+       do {                                                               \
+             if (str[0] == '-')                                           \
+             {                                                            \
+                  *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+             }else {                                                      \
+                  *(num) = simple_strtoull((str), &(endPtr), (base));      \
+             }                                                            \
+       } while(0)
+  + #endif
+  #endif
+  #endif
+
+
+If the build fails due to missing header files, you may need to do the following:
+  *  sudo yum install zlib-devel
+  *  sudo yum install openssl-devel
+
+If the build or install fails due to mismatched kernel sources, you may need to do the following:
+  *  sudo yum install kernel-headers-`uname -r`
+  *  sudo yum install kernel-src-`uname -r`
+  *  sudo yum install kernel-devel-`uname -r`
+
+Installation using kernel.org driver
+------------------------------------
+
+Assuming you are running on at least a 4.3 kernel, you can use the stock kernel.org QAT
+driver to start the QAT hardware.
+
+The steps below assume:
+  * running DPDK on a platform with one DH895xCC device
+  * on a kernel at least version 4.3
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Ensure the QAT driver is loaded on your system by executing:
+
+.. code-block:: console
+
+    lsmod | grep qat
+
+You should see the following output:
+
+.. code-block:: console
+
+    qat_dh895xcc            5626  0
+    intel_qat              82336  1 qat_dh895xcc
+
+Next, you need to expose the VFs using the sysfs file system.
+
+First find the BDF of the DH895xCC device:
+
+.. code-block:: console
+
+    lspci -d:435
+
+You should see output similar to:
+
+.. code-block:: console
+
+    03:00.0 Co-processor: Intel Corporation Coleto Creek PCIe Endpoint
+
+Using the sysfs, enable the VFs:
+
+.. code-block:: console
+
+    echo 32 > /sys/bus/pci/drivers/dh895xcc/0000\:03\:00.0/sriov_numvfs
+
+If you get an error, it's likely you're using a QAT kernel driver earlier than kernel 4.3.
+
+To verify that the VFs are available for use, run "lspci -d:443" and confirm
+that the BDFs of the 32 VF devices per DH895xCC device are listed.
+
+To complete the installation, follow the instructions in "Binding the available VFs to the DPDK UIO driver" below.
+
+
+Binding the available VFs to the DPDK UIO driver
+------------------------------------------------
+The unbind command below assumes BDFs of 03:01.00-03:04.07; if yours differ, adjust the command accordingly.
+
+Make the VFs available to DPDK:
+
+.. code-block:: console
+
   cd $RTE_SDK   # see http://dpdk.org/doc/quick-start to install DPDK
   modprobe uio
   insmod ./build/kmod/igb_uio.ko
   for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n 0000:03:0${device}.${fn} > /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind; done; done
   echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id
+
+You can use "lspci -vvd:443" to confirm that all devices are now in use by igb_uio kernel driver
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..f6aecea
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,37 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
\ No newline at end of file
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..47f1c91
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,174 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) \
+			= (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
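+/* e.g. ADF_MSG_SIZE_TO_BYTES(ADF_MSG_SIZE_64) == 64 bytes, and
+ * ADF_SIZE_TO_RING_SIZE_IN_BYTES(ADF_RING_SIZE_16K) == 16384 bytes */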
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
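+/* e.g. a 16KB ring of 64B messages holds 256 entries, so
+ * ADF_MAX_INFLIGHTS(ADF_RING_SIZE_16K, ADF_MSG_SIZE_64) == 255 */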
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..498ee83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
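+
+/* Example: QAT_FIELD_SET/GET isolate a bit-field within a flags word,
+ * e.g. with bitpos 7 and mask 0x1 (the common-header valid flag below):
+ *
+ *	uint8_t flags = 0;
+ *	QAT_FIELD_SET(flags, 1, 7, 0x1);	flags is now 0x80
+ *	QAT_FIELD_GET(flags, 7, 0x1);		evaluates to 1
+ */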
+
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..fbf2b83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
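+/*
+ * serv_specif_flags layout, per the defines above: bit 12 ZUC 3G proto,
+ * bit 11 GCM IV length is 12 octets, bit 10 digest in buffer, bits 9:7
+ * protocol (none/CCM/GCM/Snow3G), bit 6 compare auth result, bit 5 return
+ * auth result, bit 4 update state, bit 3 cipher/auth config offset
+ * location, bit 2 cipher IV field format, bits 1:0 partial state.  Note
+ * that the third argument below (auth_rslt) populates the
+ * digest-in-buffer bit.  e.g. a complete request that returns the digest
+ * and carries a 16-byte IV in the message could be built as:
+ *	ICP_QAT_FW_LA_FLAGS_BUILD(0, ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS,
+ *		ICP_QAT_FW_LA_DIGEST_IN_BUFFER, ICP_QAT_FW_LA_NO_PROTO,
+ *		ICP_QAT_FW_LA_NO_CMP_AUTH_RES, ICP_QAT_FW_LA_RET_AUTH_RES,
+ *		ICP_QAT_FW_LA_NO_UPDATE_STATE, ICP_QAT_FW_CIPH_IV_16BYTE_DATA,
+ *		ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP,
+ *		ICP_QAT_FW_LA_PARTIAL_NONE)
+ */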
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
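+/*
+ * In a cipher+hash bulk request the cipher request parameters are placed
+ * at the start of serv_specif_rqpars and the hash request parameters
+ * directly after them, which is what the two offsets above encode.
+ */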
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
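+/*
+ * next_curr_id_cipher/next_curr_id_auth pack the current slice ID and the
+ * ID of the next slice in the processing chain into a single byte; the
+ * accessors below update one half via the COMN_{NEXT,CURR}_ID masks
+ * without disturbing the other.
+ */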
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..4d4d8e4
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,306 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	(((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	(((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	 (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	(algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
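+/*
+ * e.g. MODE1 HMAC-SHA256 with a full 32-byte compare length:
+ *	ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+ *		ICP_QAT_HW_AUTH_ALGO_SHA256, 32)
+ * packs the mode into bits 7:4, the algorithm into bits 3:0 and the
+ * compare length into bits 14:8.
+ */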
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & ~((n) - 1))
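+/* e.g. QAT_HW_ROUND_UP(20, 8) == 24; n must be a power of two */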
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	(((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..76c08c0
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,125 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
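+
+/*
+ * Note the decrypt template requests ICP_QAT_HW_CIPHER_KEY_CONVERT so the
+ * device can derive its AES decryption key schedule from the supplied
+ * cipher key; the encrypt template uses the key as-is.
+ */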
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf bufers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
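+	/* pre-built request template, copied per packet and then patched */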
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t add_auth_data_length,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..ceaffb7
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,601 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *	* Redistributions of source code must retain the above copyright
+ *	  notice, this list of conditions and the following disclaimer.
+ *	* Redistributions in binary form must reproduce the above copyright
+ *	  notice, this list of conditions and the following disclaimer in
+ *	  the documentation and/or other materials provided with the
+ *	  distribution.
+ *	* Neither the name of Intel Corporation nor the names of its
+ *	  contributors may be used to endorse or promote products derived
+ *	  from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+#include <openssl/aes.h>	/* Needed to calculate pre-compute values */
+
+/*
+ * Returns size in bytes per hash algo for state1 size field in cd_ctrl
+ * This is digest size rounded up to nearest quadword
+ */
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_GALOIS_128_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns digest size in bytes per hash algo */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns block size in bytes per hash algo */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+		return 16;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
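+/*
+ * The partial_hash_* helpers below run a single compression round over
+ * one block of input (SHA*_Transform) with no length padding or
+ * finalisation and copy out the raw intermediate state.  The firmware
+ * resumes hashing from exactly this state, which is how the HMAC
+ * pre-computes avoid re-hashing the padded key on every request.
+ */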
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
+	uint8_t digest[qat_hash_get_digest_size(
+			ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+#define HASH_XCBC_PRECOMP_KEY_NUM 3
+
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		static uint8_t qat_aes_xcbc_key_seed[
+					ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ] = {
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+		};
+
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		int x;
+		AES_KEY enc_key;
+
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ, 16);
+		if (in == NULL)
+			return -EFAULT;
+		rte_memcpy(in, qat_aes_xcbc_key_seed,
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+		for (x = 0; x < HASH_XCBC_PRECOMP_KEY_NUM; x++) {
+			if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+				&enc_key) != 0) {
+				rte_free(in -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ));
+				memset(out -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ),
+					0, ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+				return -EFAULT;
+			}
+			AES_encrypt(in, out, &enc_key);
+			in += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+			out += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+		}
+		*p_state_len = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		rte_free(in - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+		return 0;
+	} else if ((hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		AES_KEY enc_key;
+
+		memset(p_state_buf, 0, ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ);
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_GALOIS_H_SZ, 16);
+		if (in == NULL)
+			return -EFAULT;
+		/* rte_zmalloc() already zero-fills the buffer */
+		if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+			&enc_key) != 0) {
+			rte_free(in);
+			return -EFAULT;
+		}
+		AES_encrypt(in, out, &enc_key);
+		*p_state_len = ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		rte_free(in);
+		return 0;
+	}
+
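+	/*
+	 * Remaining algorithms are HMAC (RFC 2104): state1 holds the
+	 * intermediate hash of (key XOR ipad) and state2 the intermediate
+	 * hash of (key XOR opad), so per request the device only continues
+	 * the inner and outer hashes over the actual data.
+	 */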
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	}
+	rte_memcpy(ipad, auth_key, auth_keylen);
+	rte_memcpy(opad, auth_key, auth_keylen);
+
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/*
+	 * State len is a multiple of 8, so may be larger than the digest.
+	 * Put the partial hash of opad state_len bytes after state1
+	 */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/*  don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+			uint8_t *cipherkey, uint32_t cipherkeylen,
+			uint8_t *authkey, uint32_t authkeylen,
+			uint32_t add_auth_data_length,
+			uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t proto = ICP_QAT_FW_LA_NO_PROTO; /* no CCM/GCM/Snow3G */
+	uint16_t state1_size = 0;
+	uint16_t state2_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* CD setup */
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(
+			cdesc->qat_mode, cdesc->qat_cipher_alg, key_convert,
+			cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	/* Do precomputes */
+	if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(XCBC)precompute failed");
+			return -EFAULT;
+		}
+	} else if ((cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			cipherkey, cipherkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_GALOIS_128_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(GCM)precompute failed");
+			return -EFAULT;
+		}
+		/*
+		 * Write the AAD length into bytes 16-19 of state2 in
+		 * big-endian format; the field itself is 8 bytes.
+		 */
+		*(uint32_t *)&(hash->sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ]) =
+			rte_bswap32(add_auth_data_length);
+		proto = ICP_QAT_FW_LA_GCM_PROTO;
+	} else {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1),
+			&state1_size)) {
+			PMD_DRV_LOG(ERR, "(SHA)precompute failed");
+			return -EFAULT;
+		}
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	/* Configure the common header protocol flags */
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, proto);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+	hash_cd_ctrl->inner_state1_sz = state1_size;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		hash_cd_ctrl->inner_state2_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		hash_cd_ctrl->inner_state1_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_GALOIS_H_SZ +
+						ICP_QAT_HW_GALOIS_LEN_A_SZ +
+						ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_GALOIS_128_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_GALOIS_128_STATE1_SZ);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8))
+					>> 3);
+	auth_param->auth_res_sz = digestsize;
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+		PMD_DRV_LOG(ERR, "invalid param, only authenticated "
+				"encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..f8840e5
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,557 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+#include <rte_mbuf_offload.h>
+#include <rte_hexdump.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t shift);
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+
+void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
+		void *session)
+{
+	struct qat_session *sess = session;
+
+	PMD_INIT_FUNC_TRACE();
+	if (sess != NULL) {
+		/* keep the CD physical address across the session wipe */
+		phys_addr_t cd_paddr = sess->cd_paddr;
+
+		memset(sess, 0, qat_crypto_sym_get_session_private_size(dev));
+		sess->cd_paddr = cd_paddr;
+	}
+}
+
+static int
+qat_get_cmd_id(const struct rte_crypto_xform *xform)
+{
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_CIPHER; */
+
+	/* Authentication Only */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_AUTH; */
+
+	/* Cipher then Authenticate */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+			xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+
+	/* Authenticate then Cipher */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+
+	return -1;
+}
+
+static struct rte_crypto_auth_xform *
+qat_get_auth_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_AUTH)
+			return &xform->auth;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+qat_get_cipher_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_CIPHER)
+			return &xform->cipher;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	struct qat_session *session = session_private;
+
+	struct rte_crypto_auth_xform *auth_xform = NULL;
+	struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+	int qat_cmd_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get requested QAT command id */
+	qat_cmd_id = qat_get_cmd_id(xform);
+	if (qat_cmd_id < 0 || qat_cmd_id >= ICP_QAT_FW_LA_CMD_DELIMITER) {
+		PMD_DRV_LOG(ERR, "Unsupported xform chain requested");
+		goto error_out;
+	}
+	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+
+	/* Get cipher xform from crypto xform chain */
+	cipher_xform = qat_get_cipher_xform(xform);
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+				cipher_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Cipher specified %u\n",
+				cipher_xform->algo);
+		goto error_out;
+	}
+
+	if (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+	/* Get authentication xform from Crypto xform chain */
+	auth_xform = qat_get_auth_xform(xform);
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		break;
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				auth_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				auth_xform->algo);
+		goto error_out;
+	}
+
+	if (qat_alg_aead_session_create_content_desc(session,
+		cipher_xform->key.data,
+		cipher_xform->key.length,
+		auth_xform->key.data,
+		auth_xform->key.length,
+		auth_xform->add_auth_data_length,
+		auth_xform->digest_length))
+		goto error_out;
+
+	return (struct rte_cryptodev_session *)session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+unsigned qat_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev __rte_unused)
+{
+	return RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+}
+
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	register struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	register uint32_t nb_pkts_sent = 0;
+	register struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	register int ret;
+	uint16_t nb_pkts_possible = nb_pkts;
+	register uint8_t *base_addr;
+	register uint32_t tail;
+	int overflow;
+
+	/* read params used a lot in main loop into registers */
+	queue = &(tmp_qp->tx_q);
+	base_addr = (uint8_t *)queue->base_addr;
+	tail = queue->tail;
+
+	/*
+	 * Reserve ring space up front: atomically add the whole burst to
+	 * the in-flight count, then hand back any overflow beyond
+	 * max_inflights and shrink the burst to what actually fits.
+	 */
+	overflow = (rte_atomic16_add_return(&tmp_qp->inflights16, nb_pkts)
+				- queue->max_inflights);
+	if (overflow > 0) {
+		rte_atomic16_sub(&tmp_qp->inflights16, overflow);
+		nb_pkts_possible = nb_pkts - overflow;
+		if (nb_pkts_possible == 0)
+			return 0;
+	}
+
+	while (nb_pkts_sent != nb_pkts_possible) {
+
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			base_addr + tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			if (nb_pkts_sent == 0)
+				return 0;
+			goto kick_tail;
+		}
+
+		tail = adf_modulo(tail + queue->msg_size, queue->modulo);
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, tail);
+	queue->tail = tail;
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct rte_mbuf_offload *ol;
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
+	resp_msg = (struct icp_qat_fw_comn_resp *)
+			((uint8_t *)queue->base_addr + queue->head);
+
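+	/*
+	 * Each response carries back the mbuf pointer that was stashed in
+	 * opaque_data on enqueue; a slot whose first word still reads
+	 * ADF_RING_EMPTY_SIG has not been written by the device yet.
+	 */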
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG &&
+			msg_counter != nb_pkts) {
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		ol = rte_pktmbuf_offload_get(rx_mbuf, RTE_PKTMBUF_OL_CRYPTO);
+
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+					resp_msg->comn_hdr.comn_status)) {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		} else {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+				queue->msg_size,
+				ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)
+					((uint8_t *)queue->base_addr +
+							queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	register struct icp_qat_fw_la_bulk_req *qat_req;
+
+	ol = rte_pktmbuf_offload_get(mbuf, RTE_PKTMBUF_OL_CRYPTO);
+	if (unlikely(ol == NULL)) {
+		PMD_DRV_LOG(ERR, "No valid crypto off-load operation attached "
+				"to (%p) mbuf.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
+				" requests mbuf (%p) is sessionless.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.session->type != RTE_CRYPTODEV_QAT_PMD)) {
+		PMD_DRV_LOG(ERR, "Session was not created for this device");
+		return -EINVAL;
+	}
+
+	ctx = (struct qat_session *)ol->op.crypto.session->_private;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length =
+			qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr =
+			qat_req->comn_mid.src_data_addr =
+					rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = ol->op.crypto.data.to_cipher.length;
+	cipher_param->cipher_offset = ol->op.crypto.data.to_cipher.offset;
+	if (ol->op.crypto.iv.length &&
+		(ol->op.crypto.iv.length <=
+				sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array,
+				ol->op.crypto.iv.data, ol->op.crypto.iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = ol->op.crypto.iv.phys_addr;
+	}
+	if (ol->op.crypto.digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = ol->op.crypto.digest.phys_addr;
+	}
+	auth_param->auth_off = ol->op.crypto.data.to_hash.offset;
+	auth_param->auth_len = ol->op.crypto.data.to_hash.length;
+	auth_param->u1.aad_adr = ol->op.crypto.additional_auth.phys_addr;
+
+	/* For GCM, AAD length (240 max) is at this location after precompute */
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+		ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
+		auth_param->u2.aad_sz =
+		ALIGN_POW2_ROUNDUP(ctx->cd.hash.sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ + 3], 16);
+	}
+	auth_param->hash_state_sz = (auth_param->u2.aad_sz) >> 3;
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+	rte_hexdump(stdout, "qat_req:", qat_req,
+			sizeof(struct icp_qat_fw_la_bulk_req));
+#endif
+	return 0;
+}
+
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
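+	/* Computes data % (1 << shift); ring sizes are powers of two */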
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
+
+void qat_crypto_sym_session_init(struct rte_mempool *mp, void *priv_sess)
+{
+	struct qat_session *s = priv_sess;
+
+	PMD_INIT_FUNC_TRACE();
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return 0;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+int qat_dev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = qat_crypto_sym_qp_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void qat_dev_info_get(__rte_unused struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *info)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_queue_pairs =
+				ADF_NUM_SYM_QPS_PER_BUNDLE *
+				ADF_NUM_BUNDLES_PER_DEV;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..5d22b34
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,119 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev_pmd.h>
+#include <rte_memzone.h>
+
+/*
+ * This macro rounds up a number to be a multiple of
+ * the alignment when the alignment is a power of 2
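+ * e.g. ALIGN_POW2_ROUNDUP(10, 8) == 16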
+ */
+#define ALIGN_POW2_ROUNDUP(num, align) \
+	(((num) + (align) - 1) & ~((align) - 1))
+
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	modulo;
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;
+	/* HW queue aka ring offset on bundle */
+};
+
+struct qat_qp {
+	void			*mmap_bar_addr;
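+	/* Requests enqueued on tx_q but not yet dequeued from rx_q */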
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+int qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev,
+	uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+
+extern unsigned
+qat_crypto_sym_get_session_private_size(struct rte_cryptodev *dev);
+
+extern void
+qat_crypto_sym_session_init(struct rte_mempool *mempool, void *priv_sess);
+
+extern void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+extern void
+qat_crypto_sym_clear_session(struct rte_cryptodev *dev, void *session);
+
+
+uint16_t
+qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..a909f63
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..ec5852d
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,429 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+/* Offsets from bundle start to the first Sym Tx/Rx queue */
+#define ADF_SYM_TX_QUEUE_STARTOFF		2
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
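+/* Enable/disable the ring service arbiter for a ring within a bundle */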
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static void qat_queue_delete(struct qat_queue *queue);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+			int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
+		}
+
+		PMD_DRV_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
+	switch (ms[0].hugepage_sz) {
+	case RTE_PGSIZE_2M:
+		memzone_flags = RTE_MEMZONE_2MB;
+		break;
+	case RTE_PGSIZE_1G:
+		memzone_flags = RTE_MEMZONE_1GB;
+		break;
+	case RTE_PGSIZE_16M:
+		memzone_flags = RTE_MEMZONE_16MB;
+		break;
+	case RTE_PGSIZE_16G:
+		memzone_flags = RTE_MEMZONE_16GB;
+		break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+	}
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(queue_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(queue_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		ret = qat_crypto_sym_qp_release(dev, queue_pair_id);
+		if (ret < 0)
+			return ret;
+	}
+
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	if (dev->pci_dev->mem_resource[0].addr == NULL) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space "
+				"(UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >=
+			(ADF_NUM_SYM_QPS_PER_BUNDLE *
+					ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device",
+				queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp metadata",
+			sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_INIT_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%hu", queue_pair_id);
+		qat_queue_delete(&(qp->tx_q));
+		goto create_err;
+	}
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	return 0;
+
+create_err:
+	rte_free(qp);
+	return (-EFAULT);
+}
+
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_qp *qp =
+			(struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return 0;
+	}
+
+	/* Don't free memory if there are still responses to be processed */
+	if (rte_atomic16_read(&(qp->inflights16)) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+	rte_free(qp);
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+	return 0;
+}
+
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id / ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id % ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id / ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id % ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
+static void qat_queue_delete(struct qat_queue *queue)
+{
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		PMD_DRV_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			PMD_DRV_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		PMD_DRV_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc * desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+			socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr,
+			queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					" 0x%"PRIx64"\n",
+					queue->base_phys_addr);
+		return -EFAULT;
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc, &(queue->queue_size))
+			!= 0) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	queue->modulo = ADF_RING_SIZE_MODULO(queue->queue_size);
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u modulo %u",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights,
+				queue->modulo);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
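+	/* Ring base must be naturally aligned to the ring size (power of 2) */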
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
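+	/* Find the CSR ring-size encoding matching the requested size in bytes */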
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return (-EINVAL);
+}
+
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..85cbb56
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,3 @@
+DPDK_2.0 {
+	local: *;
+};
\ No newline at end of file
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..4772b55
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,131 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
+static struct rte_cryptodev_ops crypto_qat_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+		/* Crypto related operations */
+		.session_get_size	= qat_crypto_sym_get_session_private_size,
+		.session_configure	= qat_crypto_sym_configure_session,
+		.session_initialize	= qat_crypto_sym_session_init,
+		.session_clear		= qat_crypto_sym_clear_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
+		{
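+			/* Intel QAT DH895xCC virtual function */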
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_drv,
+			struct rte_cryptodev *cryptodev)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
index ea97d16..e903b98 100644
--- a/lib/librte_mbuf_offload/rte_mbuf_offload.h
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -123,17 +123,10 @@ rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
 {
 	struct rte_mbuf_offload *ol = m->offload_ops;
 
-	if (m->offload_ops != NULL && m->offload_ops->type == type)
-		return ol;
-
-	ol = m->offload_ops;
-	while (ol != NULL) {
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
 		if (ol->type == type)
 			return ol;
 
-		ol = ol->next;
-	}
-
 	return ol;
 }
 
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 2b8ddce..cfcb064 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -150,6 +150,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v4 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
                         ` (2 preceding siblings ...)
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-11-03 17:45       ` Declan Doherty
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
                         ` (3 subsequent siblings)
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-03 17:45 UTC (permalink / raw)
  To: dev

This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.

This PMD depends on Intel's multi-buffer library; see the white paper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors" (ref 1) for details of the library's design, and ref 2 to
download the library itself. This initial implementation is limited to
supporting the chained operations of "hash then cipher" or "cipher then
hash" for the following cipher and hash algorithms (an illustrative
xform chain sketch follows the lists below):

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC
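
For illustration, a minimal sketch of setting up such a chain, based on
the rte_crypto_xform definitions from the cryptodev patch; the key
buffers and the 12 byte truncated digest length below are placeholder
assumptions, not values mandated by the PMD:

  uint8_t cipher_key[16];	/* placeholder AES128 key */
  uint8_t hmac_key[20];		/* placeholder HMAC-SHA1 key */

  struct rte_crypto_xform auth_xform = {
  	.type = RTE_CRYPTO_XFORM_AUTH,
  	.next = NULL,
  	.auth = {
  		.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
  		.key = { .data = hmac_key, .length = sizeof(hmac_key) },
  		.digest_length = 12,	/* IPsec truncated digest */
  	},
  };

  struct rte_crypto_xform cipher_xform = {
  	.type = RTE_CRYPTO_XFORM_CIPHER,
  	.next = &auth_xform,	/* cipher then hash */
  	.cipher = {
  		.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
  		.algo = RTE_CRYPTO_CIPHER_AES_CBC,
  		.key = { .data = cipher_key, .length = sizeof(cipher_key) },
  	},
  };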

Important Note:
Because the multi-buffer library is designed for accelerating IPsec
crypto operations, the digests generated for the HMAC functions are
truncated to the lengths specified by the IPsec RFCs; e.g. RFC 2404,
which covers the use of HMAC-SHA-1 with IPsec, specifies that the
digest is truncated from 20 to 12 bytes.

Build instructions:
To build DPDK with the AESNI_MB_PMD the user is required to download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where you extracted and built the
multi-buffer library, and finally set CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y
in config/common_linuxapp.

Current status: it doesn't support crypto operations across chained
mbufs, or cipher-only or hash-only operations.

ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p

ref 2: https://downloadcenter.intel.com/download/22972

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |   7 +
 config/common_linuxapp                             |   6 +
 doc/guides/cryptodevs/aesni_mb.rst                 |  76 ++
 doc/guides/cryptodevs/index.rst                    |   1 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/aesni_mb/Makefile                   |  63 ++
 drivers/crypto/aesni_mb/aesni_mb_ops.h             | 212 ++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 798 +++++++++++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     | 297 ++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 232 ++++++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |   3 +
 mk/rte.app.mk                                      |   4 +
 12 files changed, 1700 insertions(+)
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map

diff --git a/config/common_bsdapp b/config/common_bsdapp
index a0a5ea4..fb2aabf 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -168,6 +168,13 @@ CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_MAX_QAT_SESSIONS=200
 
+
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+
 #
 # Support NIC bypass logic
 #
diff --git a/config/common_linuxapp b/config/common_linuxapp
index deb012f..bf12f01 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,6 +166,12 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=2048
 
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS=2048
+
 #
 # Support NIC bypass logic
 #
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
new file mode 100644
index 0000000..826b632
--- /dev/null
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -0,0 +1,76 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AES-NI Multi Buffer Crypto Poll Mode Driver
+============================================
+
+
+The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver
+support utilising Intel's multi-buffer library; see the white paper
+`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
+<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-paper.html?wapkw=multi+buffer>`_.
+
+The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+AESNI MB PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To build DPDK with the AESNI_MB_PMD the user is required to download the library
+from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on
+their system before building DPDK. The environment variable
+AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where you extracted
+and built the multi-buffer library and finally set
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in config/common_linuxapp.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 1c31697..8949fd0 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,4 +39,5 @@ Crypto Device Drivers
     :maxdepth: 2
     :numbered:
 
+    aesni_mb
     qat
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index f6aecea..d07ee96 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -31,6 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile
new file mode 100644
index 0000000..3bf83d1
--- /dev/null
+++ b/drivers/crypto/aesni_mb/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_aesni_mb.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_aesni_version.map
+
+# external library include paths
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
new file mode 100644
index 0000000..3d15a68
--- /dev/null
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -0,0 +1,212 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AESNI_MB_OPS_H_
+#define _AESNI_MB_OPS_H_
+
+#ifndef LINUX
+#define LINUX
+#endif
+
+#include <mb_mgr.h>
+#include <aux_funcs.h>
+#include <gcm_defines.h>
+
+enum aesni_mb_vector_mode {
+	RTE_AESNI_MB_NOT_SUPPORTED = 0,
+	RTE_AESNI_MB_SSE,
+	RTE_AESNI_MB_AVX,
+	RTE_AESNI_MB_AVX2
+};
+
+typedef void (*md5_one_block_t)(void *data, void *digest);
+typedef void (*sha1_one_block_t)(void *data, void *digest);
+typedef void (*sha224_one_block_t)(void *data, void *digest);
+typedef void (*sha256_one_block_t)(void *data, void *digest);
+typedef void (*sha384_one_block_t)(void *data, void *digest);
+typedef void (*sha512_one_block_t)(void *data, void *digest);
+
+typedef void (*aes_keyexp_128_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_128_enc_t)(void *key, void *enc_exp_keys);
+typedef void (*aes_keyexp_192_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_256_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+typedef void (*aes_xcbc_expand_key_t)(void *key, void *exp_k1, void *k2, void *k3);
+
+typedef void (*aesni_gcm_t)(gcm_data *my_ctx_data, u8 *out, const u8 *in,
+		u64 plaintext_len, u8 *iv, const u8 *aad, u64 aad_len,
+		u8 *auth_tag, u64 auth_tag_len);
+
+typedef void (*aesni_gcm_precomp_t)(gcm_data *my_ctx_data, u8 *hash_subkey);
+
+/** Multi-buffer library function pointer table */
+struct aesni_mb_ops {
+	struct {
+		init_mb_mgr_t init_mgr;		/**< Initialise scheduler  */
+		get_next_job_t get_next;	/**< Get next free job structure */
+		submit_job_t submit;		/**< Submit job to scheduler */
+		get_completed_job_t get_completed_job; /**< Get completed job */
+		flush_job_t flush_job;		/**< flush jobs from manager */
+	} job; /**< multi buffer manager functions */
+	struct {
+		struct {
+			md5_one_block_t md5;		/**< MD5 one block hash */
+			sha1_one_block_t sha1;		/**< SHA1 one block hash */
+			sha224_one_block_t sha224;	/**< SHA224 one block hash */
+			sha256_one_block_t sha256;	/**< SHA256 one block hash */
+			sha384_one_block_t sha384;	/**< SHA384 one block hash */
+			sha512_one_block_t sha512;	/**< SHA512 one block hash */
+		} one_block; /**< one block hash functions */
+		struct {
+			aes_keyexp_128_t aes128;	/**< AES128 key expansions */
+			aes_keyexp_128_enc_t aes128_enc;/**< AES128 enc key expansion */
+			aes_keyexp_192_t aes192;	/**< AES192 key expansions */
+			aes_keyexp_256_t aes256;	/**< AES256 key expansions */
+			aes_xcbc_expand_key_t aes_xcbc;	/**< AES XCBC key expansions */
+		} keyexp;	/**< Key expansion functions */
+	} aux; /**< Auxiliary functions */
+	struct {
+		aesni_gcm_t enc;		/**< GCM encode */
+		aesni_gcm_t dec;		/**< GCM decode */
+		aesni_gcm_precomp_t precomp;	/**< GCM pre-compute */
+	} gcm; /**< GCM functions */
+};
+
+
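+/** Function pointer tables, indexed by aesni_mb_vector_mode */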
+static const struct aesni_mb_ops job_ops[] = {
+		[RTE_AESNI_MB_NOT_SUPPORTED] = {
+			.job = { NULL },
+			.aux = {
+				.one_block = { NULL },
+				.keyexp = { NULL }
+			},
+			.gcm = { NULL }
+		},
+		[RTE_AESNI_MB_SSE] = {
+			.job = {
+				init_mb_mgr_sse,
+				get_next_job_sse,
+				submit_job_sse,
+				get_completed_job_sse,
+				flush_job_sse
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_sse,
+					sha1_one_block_sse,
+					sha224_one_block_sse,
+					sha256_one_block_sse,
+					sha384_one_block_sse,
+					sha512_one_block_sse
+				},
+				.keyexp = {
+					aes_keyexp_128_sse,
+					aes_keyexp_128_enc_sse,
+					aes_keyexp_192_sse,
+					aes_keyexp_256_sse,
+					aes_xcbc_expand_key_sse
+				}
+			},
+			.gcm = {
+				aesni_gcm_enc_sse,
+				aesni_gcm_dec_sse,
+				aesni_gcm_precomp_sse
+			}
+		},
+		[RTE_AESNI_MB_AVX] = {
+			.job = {
+				init_mb_mgr_avx,
+				get_next_job_avx,
+				submit_job_avx,
+				get_completed_job_avx,
+				flush_job_avx
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx,
+					sha1_one_block_avx,
+					sha224_one_block_avx,
+					sha256_one_block_avx,
+					sha384_one_block_avx,
+					sha512_one_block_avx
+				},
+				.keyexp = {
+					aes_keyexp_128_avx,
+					aes_keyexp_128_enc_avx,
+					aes_keyexp_192_avx,
+					aes_keyexp_256_avx,
+					aes_xcbc_expand_key_avx
+				}
+			},
+			.gcm = {
+				aesni_gcm_enc_avx_gen2,
+				aesni_gcm_dec_avx_gen2,
+				aesni_gcm_precomp_avx_gen2
+			}
+		},
+		[RTE_AESNI_MB_AVX2] = {
+			.job = {
+				init_mb_mgr_avx2,
+				get_next_job_avx2,
+				submit_job_avx2,
+				get_completed_job_avx2,
+				flush_job_avx2
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx2,
+					sha1_one_block_avx2,
+					sha224_one_block_avx2,
+					sha256_one_block_avx2,
+					sha384_one_block_avx2,
+					sha512_one_block_avx2
+				},
+				.keyexp = {
+					aes_keyexp_128_avx2,
+					aes_keyexp_128_enc_avx2,
+					aes_keyexp_192_avx2,
+					aes_keyexp_256_avx2,
+					aes_xcbc_expand_key_avx2
+				}
+			},
+			.gcm = {
+				aesni_gcm_enc_avx_gen4,
+				aesni_gcm_dec_avx_gen4,
+				aesni_gcm_precomp_avx_gen4
+			}
+		},
+};
+
+
+#endif /* _AESNI_MB_OPS_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
new file mode 100644
index 0000000..a008ece
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -0,0 +1,798 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_mbuf_offload.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/**
+ * Global static parameter used to create a unique name for each AES-NI multi
+ * buffer crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+typedef void (*hash_one_block_t)(void *data, void *digest);
+typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+/**
+ * Calculate the authentication pre-computes
+ *
+ * @param one_block_hash	Function pointer to calculate digest on ipad/opad
+ * @param ipad			Inner pad output byte array
+ * @param opad			Outer pad output byte array
+ * @param hkey			Authentication key
+ * @param hkey_len		Authentication key length
+ * @param blocksize		Block size of selected hash algo
+ */
+static void
+calculate_auth_precomputes(hash_one_block_t one_block_hash,
+		uint8_t *ipad, uint8_t *opad,
+		uint8_t *hkey, uint16_t hkey_len,
+		uint16_t blocksize)
+{
+	unsigned i, length;
+
+	uint8_t ipad_buf[blocksize] __rte_aligned(16);
+	uint8_t opad_buf[blocksize] __rte_aligned(16);
+
+	/* Setup inner and outer pads */
+	memset(ipad_buf, HMAC_IPAD_VALUE, blocksize);
+	memset(opad_buf, HMAC_OPAD_VALUE, blocksize);
+
+	/* XOR hash key with inner and outer pads */
+	length = hkey_len > blocksize ? blocksize : hkey_len;
+
+	for (i = 0; i < length; i++) {
+		ipad_buf[i] ^= hkey[i];
+		opad_buf[i] ^= hkey[i];
+	}
+
+	/* Compute partial hashes */
+	(*one_block_hash)(ipad_buf, ipad);
+	(*one_block_hash)(opad_buf, opad);
+
+	/* Clean up stack */
+	memset(ipad_buf, 0, blocksize);
+	memset(opad_buf, 0, blocksize);
+}
+
+/** Get xform chain order */
+static int
+aesni_mb_get_chain_order(const struct rte_crypto_xform *xform)
+{
+	/*
+	 * Multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained
+	 * operations, all other options are invalid, so we must have exactly
+	 * 2 xform structs chained together
+	 */
+	if (xform->next == NULL || xform->next->next != NULL)
+		return -1;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return HASH_CIPHER;
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+				xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return CIPHER_HASH;
+
+	return -1;
+}
+
+/** Set session authentication parameters */
+static int
+aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	hash_one_block_t hash_oneblock_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_AUTH) {
+		MB_LOG_ERR("Crypto xform struct not of type auth");
+		return -1;
+	}
+
+	/* Set Authentication Parameters */
+	if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC) {
+		sess->auth.algo = AES_XCBC;
+		(*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+				sess->auth.xcbc.k1_expanded,
+				sess->auth.xcbc.k2, sess->auth.xcbc.k3);
+		return 0;
+	}
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			xform->auth.key.data,
+			xform->auth.key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+/** Set session cipher parameters */
+static int
+aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	aes_keyexp_t aes_keyexp_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_CIPHER) {
+		MB_LOG_ERR("Crypto xform struct not of type cipher");
+		return -1;
+	}
+
+	/* Select cipher direction */
+	switch (xform->cipher.op) {
+	case RTE_CRYPTO_CIPHER_OP_ENCRYPT:
+		sess->cipher.direction = ENCRYPT;
+		break;
+	case RTE_CRYPTO_CIPHER_OP_DECRYPT:
+		sess->cipher.direction = DECRYPT;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher operation parameter");
+		return -1;
+	}
+
+
+	/* Select cipher mode */
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		sess->cipher.mode = CBC;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher mode parameter");
+		return -1;
+	}
+
+	/* Check key length and choose key expansion function */
+	switch (xform->cipher.key.length) {
+	case AES_128_BYTES:
+		sess->cipher.key_length_in_bytes = AES_128_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+		break;
+	case AES_192_BYTES:
+		sess->cipher.key_length_in_bytes = AES_192_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+		break;
+	case AES_256_BYTES:
+		sess->cipher.key_length_in_bytes = AES_256_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher key length");
+		return -1;
+	}
+
+	/* Expanded cipher keys */
+	(*aes_keyexp_fn)(xform->cipher.key.data,
+			sess->cipher.expanded_aes_keys.encode,
+			sess->cipher.expanded_aes_keys.decode);
+
+	return 0;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (aesni_mb_get_chain_order(xform)) {
+	case HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	case CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		auth_xform = xform->next;
+		cipher_xform = xform;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported operation chain order parameter");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
+		MB_LOG_ERR("Invalid/unsupported authentication parameters");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_cipher_parameters(mb_ops, sess, cipher_xform)) {
+		MB_LOG_ERR("Invalid/unsupported cipher parameters");
+		return -1;
+	}
+	return 0;
+}
+
+/** Get multi buffer session */
+static struct aesni_mb_session *
+aesni_mb_get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *crypto_op)
+{
+	struct aesni_mb_session *sess;
+
+	if (crypto_op->type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(crypto_op->session->type !=
+				RTE_CRYPTODEV_AESNI_MB_PMD))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)crypto_op->session->_private;
+	} else  {
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)c_sess->_private;
+
+		if (unlikely(aesni_mb_set_session_parameters(qp->mb_ops,
+				sess, crypto_op->xform) != 0))
+			return NULL;
+	}
+
+	return sess;
+}
+
+/**
+ * Process a crypto operation and complete a JOB_AES_HMAC job structure for
+ * submission to the multi buffer library for processing.
+ *
+ * @param	qp	queue pair
+ * @param	m	mbuf to process
+ * @param	c_op	crypto operation to process
+ * @param	session	session to process the operation with
+ *
+ * @return
+ * - Completed JOB_AES_HMAC structure pointer on success
+ * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible
+ */
+static JOB_AES_HMAC *
+process_mb_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	JOB_AES_HMAC *job;
+
+	job = (*qp->mb_ops->job.get_next)(&qp->mb_mgr);
+	if (unlikely(job == NULL))
+		return job;
+
+	/* Set crypto operation */
+	job->chain_order = session->chain_order;
+
+	/* Set cipher parameters */
+	job->cipher_direction = session->cipher.direction;
+	job->cipher_mode = session->cipher.mode;
+
+	job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes;
+	job->aes_enc_key_expanded = session->cipher.expanded_aes_keys.encode;
+	job->aes_dec_key_expanded = session->cipher.expanded_aes_keys.decode;
+
+
+	/* Set authentication parameters */
+	job->hash_alg = session->auth.algo;
+	if (job->hash_alg == AES_XCBC) {
+		job->_k1_expanded = session->auth.xcbc.k1_expanded;
+		job->_k2 = session->auth.xcbc.k2;
+		job->_k3 = session->auth.xcbc.k3;
+	} else {
+		job->hashed_auth_key_xor_ipad = session->auth.pads.inner;
+		job->hashed_auth_key_xor_opad = session->auth.pads.outer;
+	}
+
+	/* Mutable crypto operation parameters */
+
+	/* Set digest output location */
+	if (job->cipher_direction == DECRYPT) {
+		job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m,
+				get_digest_byte_length(job->hash_alg));
+
+		if (job->auth_tag_output == NULL)
+			return NULL;
+
+		memset(job->auth_tag_output, 0,
+				get_digest_byte_length(job->hash_alg));
+	} else {
+		job->auth_tag_output = c_op->digest.data;
+	}
+
+	/*
+	 * The multi-buffer library currently only supports returning a
+	 * truncated digest length as specified in the relevant IPsec RFCs
+	 */
+	job->auth_tag_output_len_in_bytes =
+			get_truncated_digest_byte_length(job->hash_alg);
+
+	/* Set IV parameters */
+	job->iv = c_op->iv.data;
+	job->iv_len_in_bytes = c_op->iv.length;
+
+	/* Data  Parameter */
+	job->src = rte_pktmbuf_mtod(m, uint8_t *);
+	job->dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset;
+	job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length;
+
+	job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset;
+	job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length;
+
+	/* Store the mbuf and crypto operation pointers as the job's user data */
+	job->user_data = m;
+	job->user_data2 = c_op;
+
+	return job;
+}
+
+/**
+ * Process a crypto operation through the multi buffer library's AES-GCM
+ * functions, which cipher and authenticate the data in a single pass.
+ *
+ * @param	qp	queue pair
+ * @param	m	mbuf to process
+ * @param	c_op	crypto operation to process
+ * @param	session	session to process the operation with
+ *
+ * @return
+ * - 0 on success
+ * - -1 if space for the digest cannot be appended to the mbuf
+ */
+static int
+process_gcm_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	uint8_t *src, *dst;
+
+	src = rte_pktmbuf_mtod(m, uint8_t *) + c_op->data.to_cipher.offset;
+	dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	if (session->cipher.direction == ENCRYPT) {
+
+		(*qp->mb_ops->gcm.enc)(&session->gdata, dst, src,
+				(uint64_t)c_op->data.to_cipher.length,
+				c_op->iv.data,
+				c_op->additional_auth.data,
+				(uint64_t)c_op->additional_auth.length,
+				c_op->digest.data,
+				(uint64_t)c_op->digest.length);
+	} else {
+		uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(m,
+				c_op->digest.length);
+
+		if (!auth_tag)
+			return -1;
+
+		(*qp->mb_ops->gcm.dec)(&session->gdata, dst, src,
+				(uint64_t)c_op->data.to_cipher.length,
+				c_op->iv.data,
+				c_op->additional_auth.data,
+				(uint64_t)c_op->additional_auth.length,
+				auth_tag,
+				(uint64_t)c_op->digest.length);
+	}
+
+	return 0;
+}
+
+/**
+ * Process a completed job and return the rte_mbuf which the job processed
+ *
+ * @param	qp	queue pair which processed the job
+ * @param	job	JOB_AES_HMAC job to process
+ *
+ * @return
+ * - Returns the processed mbuf, which is trimmed of the output digest used in
+ *   verification of the supplied digest in the case of a HASH_CIPHER operation
+ * - Returns NULL on invalid job
+ */
+static struct rte_mbuf *
+post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m;
+	struct rte_crypto_op *c_op;
+
+	if (job->user_data == NULL)
+		return NULL;
+
+	/* handle the retrieved job */
+	m = (struct rte_mbuf *)job->user_data;
+	c_op = (struct rte_crypto_op *)job->user_data2;
+
+	/* check if job has been processed  */
+	if (unlikely(job->status != STS_COMPLETED)) {
+		c_op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return m;
+	} else if (job->chain_order == HASH_CIPHER) {
+		/* Verify digest if required */
+		if (memcmp(job->auth_tag_output, c_op->digest.data,
+				job->auth_tag_output_len_in_bytes) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		else
+			c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg));
+	} else {
+		c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	}
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	return m;
+}
+
+/**
+ * Process a completed JOB_AES_HMAC job and keep processing jobs until
+ * get_completed_job returns NULL
+ *
+ * @param qp		Queue Pair to process
+ * @param job		JOB_AES_HMAC job
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_mb_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m = NULL;
+	unsigned processed_jobs = 0;
+
+	while (job) {
+		processed_jobs++;
+		m = post_process_mb_job(qp, job);
+		if (m)
+			rte_ring_enqueue(qp->processed_pkts, (void *)m);
+		else
+			qp->qp_stats.dequeue_err_count++;
+
+		job = (*qp->mb_ops->job.get_completed_job)(&qp->mb_mgr);
+	}
+
+	return processed_jobs;
+}
+
+/**
+ * Post-process a completed GCM crypto operation, verifying the digest on
+ * decrypt operations, and return the rte_mbuf which was processed
+ *
+ * @param	m	mbuf which was processed
+ * @param	c_op	crypto operation which was processed
+ *
+ * @return
+ * - Returns the processed mbuf, which is trimmed of the appended digest used
+ *   in verification of the supplied digest in the case of a decrypt operation
+ */
+static struct rte_mbuf *
+post_process_gcm_crypto_op(struct rte_mbuf *m, struct rte_crypto_op *c_op)
+{
+	struct aesni_mb_session *session =
+			(struct aesni_mb_session *)c_op->session->_private;
+
+	/* Verify digest if required */
+	if (session->cipher.direction == DECRYPT) {
+
+		uint8_t *auth_tag = rte_pktmbuf_mtod_offset(m, uint8_t *,
+				m->data_len - c_op->digest.length);
+
+		if (memcmp(auth_tag, c_op->digest.data,
+				c_op->digest.length) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		else
+			c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, c_op->digest.length);
+	} else {
+		c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	}
+
+	return m;
+}
+
+/**
+ * Complete a GCM request: free the session if the request was session-less
+ * and enqueue the processed mbuf on the queue pair's processed packets ring
+ *
+ * @param	qp	queue pair which processed the request
+ * @param	m	mbuf which was processed
+ * @param	c_op	crypto operation which was processed
+ *
+ * @return
+ * - Always 0, as GCM requests complete synchronously
+ */
+static unsigned
+handle_completed_gcm_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op)
+{
+	m = post_process_gcm_crypto_op(m, c_op);
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	rte_ring_enqueue(qp->processed_pkts, (void *)m);
+
+	return 0;
+}
+
+static uint16_t
+aesni_mb_pmd_enqueue_burst(void *queue_pair, struct rte_mbuf **bufs,
+		uint16_t nb_bufs)
+{
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *c_op;
+
+	struct aesni_mb_session *sess;
+	struct aesni_mb_qp *qp = queue_pair;
+	JOB_AES_HMAC *job = NULL;
+
+	int i, retval, processed_jobs = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+		ol = rte_pktmbuf_offload_get(bufs[i], RTE_PKTMBUF_OL_CRYPTO);
+		if (unlikely(ol == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+		c_op = &ol->op.crypto;
+
+		sess = aesni_mb_get_session(qp, c_op);
+		if (unlikely(sess == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		if (sess->gcm_session) {
+			retval = process_gcm_crypto_op(qp, bufs[i], c_op, sess);
+			if (retval < 0) {
+				qp->qp_stats.enqueue_err_count++;
+				goto flush_jobs;
+			}
+
+			handle_completed_gcm_crypto_op(qp, bufs[i], c_op);
+			processed_jobs++;
+		} else {
+			job = process_mb_crypto_op(qp, bufs[i], c_op, sess);
+			if (unlikely(job == NULL)) {
+				qp->qp_stats.enqueue_err_count++;
+				goto flush_jobs;
+			}
+
+			/* Submit Job */
+			job = (*qp->mb_ops->job.submit)(&qp->mb_mgr);
+
+			/*
+			 * If submit returns a processed job then handle it,
+			 * before submitting subsequent jobs
+			 */
+			if (job)
+				processed_jobs +=
+					handle_completed_mb_jobs(qp, job);
+		}
+	}
+
+	if (processed_jobs == 0)
+		goto flush_jobs;
+
+	qp->qp_stats.enqueued_count += processed_jobs;
+	return i;
+
+flush_jobs:
+	/*
+	 * If we haven't processed any jobs in submit loop, then flush jobs
+	 * queue to stop the output stalling
+	 */
+	job = (*qp->mb_ops->job.flush_job)(&qp->mb_mgr);
+	if (job)
+		qp->qp_stats.enqueued_count +=
+				handle_completed_mb_jobs(qp, job);
+
+	return i;
+}
+
+static uint16_t
+aesni_mb_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+
+static int cryptodev_aesni_mb_uninit(const char *name);
+
+static int
+cryptodev_aesni_mb_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct aesni_mb_private *internals;
+	enum aesni_mb_vector_mode vector_mode;
+
+	/* Check CPU for support for AES instruction set */
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
+		MB_LOG_ERR("AES instructions not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_MB_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_MB_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_MB_SSE;
+	else {
+		MB_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		MB_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct aesni_mb_private), socket_id);
+	if (dev == NULL) {
+		MB_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	dev->dev_ops = rte_aesni_mb_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
+	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+
+	/* Set vector instructions mode supported */
+	internals = dev->data->dev_private;
+
+	internals->vector_mode = vector_mode;
+	internals->max_nb_qpairs = AESNI_MB_MAX_NB_QUEUE_PAIRS;
+
+	return dev->data->dev_id;
+init_error:
+	MB_LOG_ERR("driver %s: cryptodev_aesni_create failed", name);
+
+	cryptodev_aesni_mb_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_aesni_mb_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_aesni_mb_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_aesni_mb_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_aesni_mb_pmd_drv = {
+	.name = CRYPTODEV_NAME_AESNI_MB_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_aesni_mb_init,
+	.uninit = cryptodev_aesni_mb_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv);
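
For context, a minimal sketch of how an application might bring up and drive
the virtual device registered above. This is illustrative only: error handling
is omitted, the attachment of the crypto offload and session to the mbuf is
elided, it uses only calls that appear elsewhere in this patch series, and it
assumes device ids are assigned sequentially so the newest device is
rte_cryptodev_count() - 1.

#include <rte_cryptodev.h>
#include <rte_mbuf.h>

static void
aesni_mb_usage_sketch(struct rte_mbuf *m)
{
	struct rte_cryptodev_config conf = {
		.nb_queue_pairs = 1,
		.socket_id = SOCKET_ID_ANY,
	};
	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };
	struct rte_mbuf *out = NULL;
	uint8_t dev_id;

	/* Instantiate a virtual device serviced by the driver above */
	rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
	dev_id = rte_cryptodev_count() - 1;

	conf.session_mp.nb_objs = 2048;	/* size of the session mempool */

	rte_cryptodev_configure(dev_id, &conf);
	rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
			rte_cryptodev_socket_id(dev_id));
	rte_cryptodev_start(dev_id);

	/* m must already carry an RTE_PKTMBUF_OL_CRYPTO offload with a
	 * session or a session-less xform chain attached (see the enqueue
	 * path above) */
	rte_cryptodev_enqueue_burst(dev_id, 0, &m, 1);
	while (rte_cryptodev_dequeue_burst(dev_id, 0, &out, 1) == 0)
		rte_pause();
}
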
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
new file mode 100644
index 0000000..9be2498
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -0,0 +1,297 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/** Configure device */
+static int
+aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+aesni_mb_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_queue_pairs = internals->max_nb_qpairs;
+	}
+}
+
+/** Release queue pair */
+static int
+aesni_mb_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair, based on the device id and qp id */
+static int
+aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct aesni_mb_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"aesni_mb_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed packets on */
+static struct rte_ring *
+aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			MB_LOG_INFO("Reusing existing ring %s for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		MB_LOG_ERR("Unable to reuse existing ring %s for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id)
+{
+	struct aesni_mb_qp *qp = NULL;
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		aesni_mb_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return -ENOMEM;
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (aesni_mb_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->mb_ops = &job_ops[internals->vector_mode];
+
+	qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	/* Initialise multi-buffer manager */
+	(*qp->mb_ops->job.init_mgr)(&qp->mb_mgr);
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+aesni_mb_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the aesni multi-buffer session structure */
+static unsigned
+aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct aesni_mb_session);
+}
+
+/** Configure an AES-NI multi-buffer session from a crypto xform chain */
+static void *
+aesni_mb_pmd_session_configure(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform,	void *sess)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (unlikely(sess == NULL)) {
+		MB_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+			sess, xform) != 0) {
+		MB_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+aesni_mb_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/*
+	 * Currently we simply reset the whole data structure; we should
+	 * investigate whether a more selective reset of only the key material
+	 * would be more performant
+	 */
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
+}
+
+struct rte_cryptodev_ops aesni_mb_pmd_ops = {
+		.dev_configure		= aesni_mb_pmd_config,
+		.dev_start		= aesni_mb_pmd_start,
+		.dev_stop		= aesni_mb_pmd_stop,
+		.dev_close		= aesni_mb_pmd_close,
+
+		.stats_get		= aesni_mb_pmd_stats_get,
+		.stats_reset		= aesni_mb_pmd_stats_reset,
+
+		.dev_infos_get		= aesni_mb_pmd_info_get,
+
+		.queue_pair_setup	= aesni_mb_pmd_qp_setup,
+		.queue_pair_release	= aesni_mb_pmd_qp_release,
+		.queue_pair_start	= aesni_mb_pmd_qp_start,
+		.queue_pair_stop	= aesni_mb_pmd_qp_stop,
+		.queue_pair_count	= aesni_mb_pmd_qp_count,
+
+		.session_get_size	= aesni_mb_pmd_session_get_size,
+		.session_configure	= aesni_mb_pmd_session_configure,
+		.session_clear		= aesni_mb_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
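
This ops table is not called directly by applications; it is dispatched
through the rte_cryptodev_* API of the framework patch. As an illustrative
sketch (using only calls that appear in the unit tests later in this series,
and assuming the stats counters are 64-bit), reading the per-queue-pair
counters aggregated by aesni_mb_pmd_stats_get() looks like:

#include <stdio.h>
#include <inttypes.h>
#include <rte_cryptodev.h>

static void
dump_aesni_mb_stats(void)
{
	struct rte_cryptodev_info info;
	struct rte_cryptodev_stats stats;
	uint8_t i, nb_devs = rte_cryptodev_count();

	for (i = 0; i < nb_devs; i++) {
		rte_cryptodev_info_get(i, &info);
		if (info.dev_type != RTE_CRYPTODEV_AESNI_MB_PMD)
			continue;

		/* dispatches to aesni_mb_pmd_stats_get() above */
		rte_cryptodev_stats_get(i, &stats);
		printf("dev %u: enq %" PRIu64 " (err %" PRIu64 "), "
				"deq %" PRIu64 " (err %" PRIu64 ")\n", i,
				stats.enqueued_count, stats.enqueue_err_count,
				stats.dequeued_count, stats.dequeue_err_count);
	}
}
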
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
new file mode 100644
index 0000000..3c0c9bc
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -0,0 +1,232 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_
+#define _RTE_AESNI_MB_PMD_PRIVATE_H_
+
+#include "aesni_mb_ops.h"
+
+#define MB_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_AESNI_MB_DEBUG
+#define MB_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#define MB_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define MB_LOG_INFO(fmt, args...)
+#define MB_LOG_DBG(fmt, args...)
+#endif
+
+#define AESNI_MB_NAME_MAX_LENGTH	(64)
+#define AESNI_MB_MAX_NB_QUEUE_PAIRS	(4)
+
+#define HMAC_IPAD_VALUE			(0x36)
+#define HMAC_OPAD_VALUE			(0x5C)
+
+static const unsigned auth_blocksize[] = {
+		[MD5]		= 64,
+		[SHA1]		= 64,
+		[SHA_224]	= 64,
+		[SHA_256]	= 64,
+		[SHA_384]	= 128,
+		[SHA_512]	= 128,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the blocksize in bytes for a specified authentication algorithm
+ *
+ * @Note: this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_auth_algo_blocksize(JOB_HASH_ALG algo)
+{
+	return auth_blocksize[algo];
+}
+
+static const unsigned auth_truncated_digest_byte_lengths[] = {
+		[MD5]		= 12,
+		[SHA1]		= 12,
+		[SHA_224]	= 14,
+		[SHA_256]	= 16,
+		[SHA_384]	= 24,
+		[SHA_512]	= 32,
+		[AES_XCBC]	= 12,
+};
+
+/**
+ * Get the IPsec specified truncated length in bytes of the HMAC digest for a
+ * specified authentication algorithm
+ *
+ * @Note: this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_truncated_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_truncated_digest_byte_lengths[algo];
+}
+
+static const unsigned auth_digest_byte_lengths[] = {
+		[MD5]		= 16,
+		[SHA1]		= 20,
+		[SHA_224]	= 28,
+		[SHA_256]	= 32,
+		[SHA_384]	= 48,
+		[SHA_512]	= 64,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the output digest size in bytes for a specified authentication algorithm
+ *
+ * @Note: this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_digest_byte_lengths[algo];
+}
+
+
+/** private data structure for each virtual AESNI device */
+struct aesni_mb_private {
+	enum aesni_mb_vector_mode vector_mode;
+
+	unsigned max_nb_qpairs;
+};
+
+struct aesni_mb_qp {
+	uint16_t id;				/**< Queue Pair Identifier */
+	char name[AESNI_MB_NAME_MAX_LENGTH];	/**< Unique Queue Pair Name */
+	const struct aesni_mb_ops *mb_ops;	/**< Architecture dependent
+						 * function pointer table of
+						 * the multi-buffer APIs
+						 */
+	MB_MGR mb_mgr;				/**< Multi-buffer instance */
+	struct rte_ring *processed_pkts;	/**< Ring for placing processed packets */
+
+	struct rte_mempool *sess_mp;		/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** AES-NI multi-buffer private session structure */
+struct aesni_mb_session {
+	JOB_CHAIN_ORDER chain_order;
+
+	unsigned gcm_session:1;
+
+	/** Cipher Parameters */
+	struct {
+		/** Cipher direction - encrypt / decrypt */
+		JOB_CIPHER_DIRECTION direction;
+		/** Cipher mode - CBC / Counter */
+		JOB_CIPHER_MODE mode;
+
+		uint64_t key_length_in_bytes;
+
+		struct {
+			uint32_t encode[60] __rte_aligned(16);
+			/**< encode key */
+			uint32_t decode[60] __rte_aligned(16);
+			/**< decode key */
+		} expanded_aes_keys;
+		/**< Expanded AES keys - Allocating space to
+		 * contain the maximum expanded key size which
+		 * is 240 bytes for 256 bit AES, calculate by:
+		 * ((key size (bytes)) *
+		 * ((number of rounds) + 1))
+		 */
+	} cipher;
+
+	union {
+		/** Authentication Parameters */
+		struct {
+			JOB_HASH_ALG algo; /**< Authentication Algorithm */
+			union {
+				struct {
+					uint8_t inner[128] __rte_aligned(16);
+					/**< inner pad */
+					uint8_t outer[128] __rte_aligned(16);
+					/**< outer pad */
+				} pads;
+				/**< HMAC Authentication pads -
+				 * allocating space for the maximum pad
+				 * size supported which is 128 bytes for
+				 * SHA512
+				 */
+
+				struct {
+				    uint32_t k1_expanded[44] __rte_aligned(16);
+				    /**< k1 (expanded key). */
+				    uint8_t k2[16] __rte_aligned(16);
+				    /**< k2. */
+				    uint8_t k3[16] __rte_aligned(16);
+				    /**< k3. */
+				} xcbc;
+				/**< Expanded XCBC authentication keys */
+			};
+		} auth;
+
+		/** GCM parameters */
+		struct gcm_data gdata;
+	};
+} __rte_cache_aligned;
+
+
+/** Parse a crypto xform chain and set the private session parameters */
+extern int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
+
+
+
+#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
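
To make the relationship between the three lookup tables above concrete: for
HMAC-SHA1 the session stores pads of the 64 byte block size, the decrypt path
grows the mbuf by the full 20 byte digest for verification, but only the
12 byte IPsec-truncated tag is compared. A small illustrative sketch (not part
of the patch, assuming RTE_VERIFY from rte_debug.h):

#include <rte_debug.h>

static void
sha1_digest_lengths_example(void)
{
	unsigned block = get_auth_algo_blocksize(SHA1);			/* 64 */
	unsigned full = get_digest_byte_length(SHA1);			/* 20 */
	unsigned trunc = get_truncated_digest_byte_length(SHA1);	/* 12 */

	/* process_mb_crypto_op() appends 'full' bytes to the mbuf on decrypt,
	 * while post_process_mb_job() compares only 'trunc' bytes */
	RTE_VERIFY(trunc <= full && full <= block);
}
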
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
new file mode 100644
index 0000000..ad607bb
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index cfcb064..4a660e6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -153,6 +153,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 
+# AESNI MULTI BUFFER is dependent on the IPSec_MB library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v4 5/6] app/test: add cryptodev unit and performance tests
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
                         ` (3 preceding siblings ...)
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-11-03 17:45       ` Declan Doherty
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 6/6] l2fwd-crypto: crypto Declan Doherty
                         ` (2 subsequent siblings)
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-03 17:45 UTC (permalink / raw)
  To: dev

Unit tests are run by using the cryptodev_qat_autotest or
cryptodev_aesni_autotest commands from the test app's interactive
console.

Performance tests are run by using the cryptodev_qat_perftest or
cryptodev_aesni_mb_perftest command from the test app's interactive
console.

If you wish to run the tests on a QAT device, there must be one bound
to the igb_uio kernel driver.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
---
 app/test/Makefile                  |    4 +
 app/test/test.c                    |   92 +-
 app/test/test.h                    |   34 +-
 app/test/test_cryptodev.c          | 1975 ++++++++++++++++++++++++++++++++++
 app/test/test_cryptodev.h          |   68 ++
 app/test/test_cryptodev_perf.c     | 2063 ++++++++++++++++++++++++++++++++++++
 app/test/test_link_bonding.c       |    6 +-
 app/test/test_link_bonding_mode4.c |    7 +-
 8 files changed, 4204 insertions(+), 45 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c

diff --git a/app/test/Makefile b/app/test/Makefile
index de63235..ec33e1a 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -149,6 +149,10 @@ endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 CFLAGS += -O3
diff --git a/app/test/test.c b/app/test/test.c
index e8992f4..e58f266 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -159,51 +159,81 @@ main(int argc, char **argv)
 int
 unit_test_suite_runner(struct unit_test_suite *suite)
 {
-	int retval, i = 0;
+	int test_success;
+	unsigned total = 0, executed = 0, skipped = 0, succeeded = 0, failed = 0;
 
-	if (suite->suite_name)
-		printf("Test Suite : %s\n", suite->suite_name);
+	if (suite->suite_name) {
+		printf(" + ------------------------------------------------------- +\n");
+		printf(" + Test Suite : %s\n", suite->suite_name);
+	}
 
 	if (suite->setup)
 		if (suite->setup() != 0)
-			return -1;
-
-	while (suite->unit_test_cases[i].testcase) {
-		/* Run test case setup */
-		if (suite->unit_test_cases[i].setup) {
-			retval = suite->unit_test_cases[i].setup();
-			if (retval != 0)
-				return retval;
-		}
+			goto suite_summary;
 
-		/* Run test case */
-		if (suite->unit_test_cases[i].testcase() == 0) {
-			printf("TestCase %2d: %s\n", i,
-					suite->unit_test_cases[i].success_msg ?
-					suite->unit_test_cases[i].success_msg :
-					"passed");
-		}
-		else {
-			printf("TestCase %2d: %s\n", i, suite->unit_test_cases[i].fail_msg ?
-					suite->unit_test_cases[i].fail_msg :
-					"failed");
-			return -1;
+	printf(" + ------------------------------------------------------- +\n");
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (!suite->unit_test_cases[total].enabled) {
+			skipped++;
+			total++;
+			continue;
+		} else {
+			executed++;
 		}
 
-		/* Run test case teardown */
-		if (suite->unit_test_cases[i].teardown) {
-			retval = suite->unit_test_cases[i].teardown();
-			if (retval != 0)
-				return retval;
+		/* run test case setup */
+		if (suite->unit_test_cases[total].setup)
+			test_success = suite->unit_test_cases[total].setup();
+		else
+			test_success = TEST_SUCCESS;
+
+		if (test_success == TEST_SUCCESS) {
+			/* run the test case */
+			test_success = suite->unit_test_cases[total].testcase();
+			if (test_success == TEST_SUCCESS)
+				succeeded++;
+			else
+				failed++;
+		} else {
+			failed++;
 		}
 
-		i++;
+		/* run the test case teardown */
+		if (suite->unit_test_cases[total].teardown)
+			suite->unit_test_cases[total].teardown();
+
+		if (test_success == TEST_SUCCESS)
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].success_msg ?
+					suite->unit_test_cases[total].success_msg :
+					"passed");
+		else
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].fail_msg ?
+					suite->unit_test_cases[total].fail_msg :
+					"failed");
+
+		total++;
 	}
 
 	/* Run test suite teardown */
 	if (suite->teardown)
-		if (suite->teardown() != 0)
-			return -1;
+		suite->teardown();
+
+suite_summary:
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary \n");
+	printf(" + Tests Total :       %2d\n", total);
+	printf(" + Tests Skipped :     %2d\n", skipped);
+	printf(" + Tests Executed :    %2d\n", executed);
+	printf(" + Tests Passed :      %2d\n", succeeded);
+	printf(" + Tests Failed :      %2d\n", failed);
+	printf(" + ------------------------------------------------------- +\n");
+
+	if (failed)
+		return -1;
 
 	return 0;
 }
diff --git a/app/test/test.h b/app/test/test.h
index 62eb51d..a2fba60 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -33,7 +33,7 @@
 
 #ifndef _TEST_H_
 #define _TEST_H_
-
+#include <stddef.h>
 #include <sys/queue.h>
 
 #define TEST_SUCCESS  (0)
@@ -64,6 +64,17 @@
 		}                                                        \
 } while (0)
 
+
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len, msg, ...) do {	\
+	if (memcmp(a, b, len)) {                                        \
+		printf("TestCase %s() line %d failed: "              \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__);    \
+		TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);    \
+		return TEST_FAILED;                                  \
+	}                                                        \
+} while (0)
+
+
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
 			printf("TestCase %s() line %d failed: "              \
@@ -113,27 +124,36 @@
 
 struct unit_test_case {
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	int (*testcase)(void);
 	const char *success_msg;
 	const char *fail_msg;
+	unsigned enabled;
 };
 
-#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed"}
+#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed", 1 }
 
 #define TEST_CASE_NAMED(name, fn) { NULL, NULL, fn, name " succeeded", \
-		name " failed"}
+		name " failed", 1 }
 
 #define TEST_CASE_ST(setup, teardown, testcase)         \
 		{ setup, teardown, testcase, #testcase " succeeded",    \
-		#testcase " failed "}
+		#testcase " failed ", 1 }
+
+
+#define TEST_CASE_DISABLED(fn) { NULL, NULL, fn, #fn " succeeded", \
+	#fn " failed", 0 }
+
+#define TEST_CASE_ST_DISABLED(setup, teardown, testcase)         \
+		{ setup, teardown, testcase, #testcase " succeeded",    \
+		#testcase " failed ", 0 }
 
-#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL }
+#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL, 0 }
 
 struct unit_test_suite {
 	const char *suite_name;
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	struct unit_test_case unit_test_cases[];
 };
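
With the new enabled field a test case can stay in the build but be skipped
by the runner, where it is counted under "Tests Skipped" in the new summary.
An illustrative suite definition, reusing setup/teardown/case names from the
cryptodev tests below:

static struct unit_test_suite example_testsuite = {
	.suite_name = "Example Test Suite",
	.setup = testsuite_setup,	/* int (*)(void) */
	.teardown = testsuite_teardown,	/* now void (*)(void) */
	.unit_test_cases = {
		TEST_CASE_ST(ut_setup, ut_teardown,
				test_device_configure_invalid_dev_id),
		/* compiled in, but skipped by unit_test_suite_runner() */
		TEST_CASE_ST_DISABLED(ut_setup, ut_teardown,
				test_queue_pair_descriptor_setup),
		TEST_CASES_END()	/* NULL testcase ends the array */
	}
};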
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
new file mode 100644
index 0000000..e65fbbd
--- /dev/null
+++ b/app/test/test_cryptodev.c
@@ -0,0 +1,1975 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mbuf_offload.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+static enum rte_cryptodev_type gbl_cryptodev_type;
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_ol_pool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *op;
+
+	struct rte_mbuf *obuf, *ibuf;
+
+	uint8_t *digest;
+};
+
+/*
+ * Forward declarations.
+ */
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_param);
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+
+	return m;
+}
+
+#if HEX_DUMP
+static void
+hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
+{
+	rte_hexdump(f, title, rte_pktmbuf_mtod(m, const void *), m->data_len);
+}
+#endif
+
+static struct rte_mbuf *
+process_crypto_request(uint8_t dev_id, struct rte_mbuf *ibuf)
+{
+	struct rte_mbuf *obuf = NULL;
+#if HEX_DUMP
+	hexdump_mbuf_data(stdout, "Enqueued Packet", ibuf);
+#endif
+
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &ibuf, 1) != 1) {
+		printf("Error sending packet for encryption");
+		return NULL;
+	}
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &obuf, 1) == 0)
+		rte_pause();
+
+#if HEX_DUMP
+	if (obuf)
+		hexdump_mbuf_data(stdout, "Dequeued Packet", obuf);
+#endif
+
+	return obuf;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, dev_id = 0;
+	uint16_t qp_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	if (ts_params->mbuf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+				"CRYPTO_MBUFPOOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+				rte_socket_id());
+		if (ts_params->mbuf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"MBUF_OFFLOAD_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS * sizeof(struct rte_crypto_xform),
+			rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(
+				RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int retval = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(retval >= 0,
+					"Failed to create instance %u of"
+					" pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?");
+		return TEST_FAILED;
+	}
+
+	/* Create list of valid crypto devs */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_type)
+			ts_params->valid_devs[ts_params->valid_dev_count++] = i;
+	}
+
+	if (ts_params->valid_dev_count < 1)
+		return TEST_FAILED;
+
+	/* Set up all the qps on the first of the valid devices found */
+	for (i = 0; i < 1; i++) {
+		dev_id = ts_params->valid_devs[i];
+
+		/*
+		 * Since we can't free and re-allocate queue memory always set
+		 * the queues on this device up to max size first so enough
+		 * memory is allocated for any later re-configures needed by
+		 * other tests
+		 */
+
+		ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+		ts_params->conf.socket_id = SOCKET_ID_ANY;
+		ts_params->conf.session_mp.nb_objs =
+				(gbl_cryptodev_type == RTE_CRYPTODEV_QAT_PMD) ?
+					RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+					RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+		TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+				&ts_params->conf),
+				"Failed to configure cryptodev %u with %u qps",
+				dev_id, ts_params->conf.nb_queue_pairs);
+
+		ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+		for (qp_id = 0; qp_id < MAX_NUM_QPS_PER_QAT_DEVICE; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+					dev_id, qp_id, &ts_params->qp_conf,
+					rte_cryptodev_socket_id(dev_id)),
+					"Failed to setup queue pair %u on "
+					"cryptodev %u",
+					qp_id, dev_id);
+		}
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_pool));
+	}
+
+
+	if (ts_params->mbuf_ol_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_ol_pool));
+	}
+
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	uint16_t qp_id;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	/*
+	 * Now reconfigure queues to size we actually want to use in this
+	 * test suite.
+	 */
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs ; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0], qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+	}
+
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	/* free crypto session structure */
+	if (ut_params->sess) {
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	/*
+	 * free mbuf - both obuf and ibuf are usually the same,
+	 * but rte copes even if we call free twice
+	 */
+	if (ut_params->obuf) {
+		rte_pktmbuf_free(ut_params->obuf);
+		ut_params->obuf = 0;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = 0;
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+
+	rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+static int
+test_device_configure_invalid_dev_id(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint16_t dev_id, num_devs = 0;
+
+	TEST_ASSERT((num_devs = rte_cryptodev_count()) >= 1,
+			"Need at least %d devices for test", 1);
+
+	/* valid dev_id values */
+	dev_id = ts_params->valid_devs[ts_params->valid_dev_count - 1];
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	/* invalid dev_id values */
+	dev_id = num_devs;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	dev_id = 0xff;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure:"
+			"invalid dev_num %u", dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_device_configure_invalid_queue_pair_ids(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	/* valid - one queue pairs */
+	ts_params->conf.nb_queue_pairs = 1;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* valid - max value queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - zero queue pairs */
+	ts_params->conf.nb_queue_pairs = 0;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value supported by field queue pairs */
+	ts_params->conf.nb_queue_pairs = UINT16_MAX;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value + 1 queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_queue_pair_descriptor_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_qp_conf qp_conf = {
+		.nb_descriptors = MAX_NUM_OPS_INFLIGHT
+	};
+
+	uint16_t qp_id;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	ts_params->conf.session_mp.nb_objs = RTE_LIBRTE_PMD_QAT_MAX_SESSIONS;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf), "Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+
+	/*
+	 * Test various ring sizes on this device. memzones can't be
+	 * freed so are re-used if ring is released and re-created.
+	 */
+	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights "
+				"%u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = (uint32_t)(MAX_NUM_OPS_INFLIGHT / 2);
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT; /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max supported + 2 */
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT + 2;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - near the max of the uint32_t field */
+	qp_conf.nb_descriptors = UINT32_MAX - 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - default inflight ops + 1 */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT + 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* test invalid queue pair id */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;	/* valid */
+
+	qp_id = DEFAULT_NUM_QPS_PER_QAT_DEVICE;		/* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	qp_id = 0xffff; /* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Plaintext data for tests ***** */
+
+const char catch_22_quote_1[] =
+		"There was only one catch and that was Catch-22, which "
+		"specified that a concern for one's safety in the face of "
+		"dangers that were real and immediate was the process of a "
+		"rational mind. Orr was crazy and could be grounded. All he "
+		"had to do was ask; and as soon as he did, he would no longer "
+		"be crazy and would have to fly more missions. Orr would be "
+		"crazy to fly more missions and sane if he didn't, but if he "
+		"was sane he had to fly them. If he flew them he was crazy "
+		"and didn't have to; but if he didn't want to he was sane and "
+		"had to. Yossarian was moved very deeply by the absolute "
+		"simplicity of this clause of Catch-22 and let out a "
+		"respectful whistle. \"That's some catch, that Catch-22\", he "
+		"observed. \"It's the best there is,\" Doc Daneeka agreed.";
+
+const char catch_22_quote[] =
+		"What a lousy earth! He wondered how many people were "
+		"destitute that same night even in his own prosperous country, "
+		"how many homes were shanties, how many husbands were drunk "
+		"and wives socked, and how many children were bullied, abused, "
+		"or abandoned. How many families hungered for food they could "
+		"not afford to buy? How many hearts were broken? How many "
+		"suicides would take place that same night, how many people "
+		"would go insane? How many cockroaches and landlords would "
+		"triumph? How many winners were losers, successes failures, "
+		"and rich men poor men? How many wise guys were stupid? How "
+		"many happy endings were unhappy endings? How many honest men "
+		"were liars, brave men cowards, loyal men traitors, how many "
+		"sainted men were corrupt, how many people in positions of "
+		"trust had sold their souls to bodyguards, how many had never "
+		"had souls? How many straight-and-narrow paths were crooked "
+		"paths? How many best families were worst families and how "
+		"many good people were bad people? When you added them all up "
+		"and then subtracted, you might be left with only the children, "
+		"and perhaps with Albert Einstein and an old violinist or "
+		"sculptor somewhere.";
+
+#define QUOTE_480_BYTES		(480)
+#define QUOTE_512_BYTES		(512)
+#define QUOTE_768_BYTES		(768)
+#define QUOTE_1024_BYTES	(1024)
+
+
+
+/* ***** SHA1 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA1	(DIGEST_BYTE_LENGTH_SHA1)
+
+static uint8_t hmac_sha1_key[] = {
+	0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+	0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+	0xDE, 0xF4, 0xDE, 0xAD };
+
+/* ***** SHA224 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA224	(DIGEST_BYTE_LENGTH_SHA224)
+
+
+/* ***** AES-CBC Cipher Tests ***** */
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+	0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+	0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A };
+
+static uint8_t aes_cbc_iv[] = {
+	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
+
+
+/* ***** AES-CBC / HMAC-SHA1 Hash Tests ***** */
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
+	0x8B, 0x4D, 0xDA, 0x1B, 0xCF, 0x04, 0xA0, 0x31,
+	0xB4, 0xBF, 0xBD, 0x68, 0x43, 0x20, 0x7E, 0x76,
+	0xB1, 0x96, 0x8B, 0xA2, 0x7C, 0xA2, 0x83, 0x9E,
+	0x39, 0x5A, 0x2F, 0x7E, 0x92, 0xB4, 0x48, 0x1A,
+	0x3F, 0x6B, 0x5D, 0xDF, 0x52, 0x85, 0x5F, 0x8E,
+	0x42, 0x3C, 0xFB, 0xE9, 0x1A, 0x24, 0xD6, 0x08,
+	0xDD, 0xFD, 0x16, 0xFB, 0xE9, 0x55, 0xEF, 0xF0,
+	0xA0, 0x8D, 0x13, 0xAB, 0x81, 0xC6, 0x90, 0x01,
+	0xB5, 0x18, 0x84, 0xB3, 0xF6, 0xE6, 0x11, 0x57,
+	0xD6, 0x71, 0xC6, 0x3C, 0x3F, 0x2F, 0x33, 0xEE,
+	0x24, 0x42, 0x6E, 0xAC, 0x0B, 0xCA, 0xEC, 0xF9,
+	0x84, 0xF8, 0x22, 0xAA, 0x60, 0xF0, 0x32, 0xA9,
+	0x75, 0x75, 0x3B, 0xCB, 0x70, 0x21, 0x0A, 0x8D,
+	0x0F, 0xE0, 0xC4, 0x78, 0x2B, 0xF8, 0x97, 0xE3,
+	0xE4, 0x26, 0x4B, 0x29, 0xDA, 0x88, 0xCD, 0x46,
+	0xEC, 0xAA, 0xF9, 0x7F, 0xF1, 0x15, 0xEA, 0xC3,
+	0x87, 0xE6, 0x31, 0xF2, 0xCF, 0xDE, 0x4D, 0x80,
+	0x70, 0x91, 0x7E, 0x0C, 0xF7, 0x26, 0x3A, 0x92,
+	0x4F, 0x18, 0x83, 0xC0, 0x8F, 0x59, 0x01, 0xA5,
+	0x88, 0xD1, 0xDB, 0x26, 0x71, 0x27, 0x16, 0xF5,
+	0xEE, 0x10, 0x82, 0xAC, 0x68, 0x26, 0x9B, 0xE2,
+	0x6D, 0xD8, 0x9A, 0x80, 0xDF, 0x04, 0x31, 0xD5,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x73, 0x02, 0x42, 0xC9, 0x23, 0x18, 0x8E, 0xB4,
+	0x6F, 0xB4, 0xA3, 0x54, 0x6E, 0x88, 0x3B, 0x62,
+	0x7C, 0x02, 0x8D, 0x4C, 0x9F, 0xC8, 0x45, 0xF4,
+	0xC9, 0xDE, 0x4F, 0xEB, 0x22, 0x83, 0x1B, 0xE4,
+	0x49, 0x37, 0xE4, 0xAD, 0xE7, 0xCD, 0x21, 0x54,
+	0xBC, 0x1C, 0xC2, 0x04, 0x97, 0xB4, 0x10, 0x61,
+	0xF0, 0xE4, 0xEF, 0x27, 0x63, 0x3A, 0xDA, 0x91,
+	0x41, 0x25, 0x62, 0x1C, 0x5C, 0xB6, 0x38, 0x4A,
+	0x88, 0x71, 0x59, 0x5A, 0x8D, 0xA0, 0x09, 0xAF,
+	0x72, 0x94, 0xD7, 0x79, 0x5C, 0x60, 0x7C, 0x8F,
+	0x4C, 0xF5, 0xD9, 0xA1, 0x39, 0x6D, 0x81, 0x28,
+	0xEF, 0x13, 0x28, 0xDF, 0xF5, 0x3E, 0xF7, 0x8E,
+	0x09, 0x9C, 0x78, 0x18, 0x79, 0xB8, 0x68, 0xD7,
+	0xA8, 0x29, 0x62, 0xAD, 0xDE, 0xE1, 0x61, 0x76,
+	0x1B, 0x05, 0x16, 0xCD, 0xBF, 0x02, 0x8E, 0xA6,
+	0x43, 0x6E, 0x92, 0x55, 0x4F, 0x60, 0x9C, 0x03,
+	0xB8, 0x4F, 0xA3, 0x02, 0xAC, 0xA8, 0xA7, 0x0C,
+	0x1E, 0xB5, 0x6B, 0xF8, 0xC8, 0x4D, 0xDE, 0xD2,
+	0xB0, 0x29, 0x6E, 0x40, 0xE6, 0xD6, 0xC9, 0xE6,
+	0xB9, 0x0F, 0xB6, 0x63, 0xF5, 0xAA, 0x2B, 0x96,
+	0xA7, 0x16, 0xAC, 0x4E, 0x0A, 0x33, 0x1C, 0xA6,
+	0xE6, 0xBD, 0x8A, 0xCF, 0x40, 0xA9, 0xB2, 0xFA,
+	0x63, 0x27, 0xFD, 0x9B, 0xD9, 0xFC, 0xD5, 0x87,
+	0x8D, 0x4C, 0xB6, 0xA4, 0xCB, 0xE7, 0x74, 0x55,
+	0xF4, 0xFB, 0x41, 0x25, 0xB5, 0x4B, 0x0A, 0x1B,
+	0xB1, 0xD6, 0xB7, 0xD9, 0x47, 0x2A, 0xC3, 0x98,
+	0x6A, 0xC4, 0x03, 0x73, 0x1F, 0x93, 0x6E, 0x53,
+	0x19, 0x25, 0x64, 0x15, 0x83, 0xF9, 0x73, 0x2A,
+	0x74, 0xB4, 0x93, 0x69, 0xC4, 0x72, 0xFC, 0x26,
+	0xA2, 0x9F, 0x43, 0x45, 0xDD, 0xB9, 0xEF, 0x36,
+	0xC8, 0x3A, 0xCD, 0x99, 0x9B, 0x54, 0x1A, 0x36,
+	0xC1, 0x59, 0xF8, 0x98, 0xA8, 0xCC, 0x28, 0x0D,
+	0x73, 0x4C, 0xEE, 0x98, 0xCB, 0x7C, 0x58, 0x7E,
+	0x20, 0x75, 0x1E, 0xB7, 0xC9, 0xF8, 0xF2, 0x0E,
+	0x63, 0x9E, 0x05, 0x78, 0x1A, 0xB6, 0xA8, 0x7A,
+	0xF9, 0x98, 0x6A, 0xA6, 0x46, 0x84, 0x2E, 0xF6,
+	0x4B, 0xDC, 0x9B, 0x8F, 0x9B, 0x8F, 0xEE, 0xB4,
+	0xAA, 0x3F, 0xEE, 0xC0, 0x37, 0x27, 0x76, 0xC7,
+	0x95, 0xBB, 0x26, 0x74, 0x69, 0x12, 0x7F, 0xF1,
+	0xBB, 0xFF, 0xAE, 0xB5, 0x99, 0x6E, 0xCB, 0x0C
+};
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60,
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0x18, 0x8c, 0x1d, 0x32 };
+
+
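+/*
+ * Session-mode encrypt-then-authenticate test: chain a cipher xform into
+ * an auth xform, create a session from the chain, then attach the session
+ * and the per-operation parameters (IV, digest placement, cipher/hash
+ * offsets and lengths) to a crypto offload on the mbuf before submitting
+ * it. The output is compared against the reference ciphertext and digest
+ * defined above.
+ */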
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
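+/*
+ * Session-less variant of the test above: no session is created; instead
+ * a two-entry xform chain is allocated directly on the operation with
+ * rte_pktmbuf_offload_alloc_crypto_xforms(), so the cipher and auth
+ * parameters travel with the operation itself.
+ */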
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	TEST_ASSERT_NOT_NULL(rte_pktmbuf_offload_alloc_crypto_xforms(
+			ut_params->ol, 2),
+			"failed to allocate space for crypto transforms");
+
+	/* Set crypto operation data parameters */
+	ut_params->op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
+
+	/* cipher parameters */
+	ut_params->op->xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->op->xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->op->xform->cipher.key.data = aes_cbc_key;
+	ut_params->op->xform->cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* hash parameters */
+	ut_params->op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+
+	ut_params->op->xform->next->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->op->xform->next->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->op->xform->next->auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->op->xform->next->auth.key.data = hmac_sha1_key;
+	ut_params->op->xform->next->auth.digest_length =
+			DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
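+/*
+ * Reverse direction: seed the mbuf with the reference ciphertext and
+ * digest, chain auth (verify) into cipher (decrypt), and check that the
+ * recovered plaintext matches the original quote and that the operation
+ * status reports a successful digest verification.
+ */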
+static int
+test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			DIGEST_BYTE_LENGTH_SHA1);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+
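+/*
+ * The HMAC-SHA256 and HMAC-SHA512 sections below mirror the HMAC-SHA1
+ * tests above, differing only in the HMAC key and digest lengths used.
+ */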
+/* ***** AES-CBC / HMAC-SHA256 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+static uint8_t hmac_sha256_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest[] = {
+	0xc8, 0x57, 0x57, 0x31, 0x03, 0xe0, 0x03, 0x55,
+	0x07, 0xc8, 0x9e, 0x7f, 0x48, 0x9a, 0x61, 0x9a,
+	0x68, 0xee, 0x03, 0x0e, 0x71, 0x75, 0xc7, 0xf4,
+	0x2e, 0x45, 0x26, 0x32, 0x7c, 0x12, 0x15, 0x15 };
+
+static int
+test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA256 :
+					DIGEST_BYTE_LENGTH_SHA256,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-SHA512 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA512  (DIGEST_BYTE_LENGTH_SHA512)
+
+static uint8_t hmac_sha512_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x65, 0x1C, 0x42, 0x50, 0x76,
+	0x9a, 0xaf, 0x88, 0x1b, 0xb6, 0x8f, 0xf8, 0x60,
+	0xa2, 0x5a, 0x7f, 0x3f, 0xf4, 0x72, 0x70, 0xf1,
+	0xF5, 0x35, 0x4C, 0x3B, 0xDD, 0x90, 0x65, 0xB0,
+	0x47, 0x3a, 0x75, 0x61, 0x5C, 0xa2, 0x10, 0x76,
+	0x9a, 0xaf, 0x77, 0x5b, 0xb6, 0x7f, 0xf7, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest[] = {
+	0x5D, 0x54, 0x66, 0xC1, 0x6E, 0xBC, 0x04, 0xB8,
+	0x46, 0xB8, 0x08, 0x6E, 0xE0, 0xF0, 0x43, 0x48,
+	0x37, 0x96, 0x9C, 0xC6, 0x9C, 0xC2, 0x1E, 0xE8,
+	0xF2, 0x0C, 0x0B, 0xEF, 0x86, 0xA2, 0xE3, 0x70,
+	0x95, 0xC8, 0xB3, 0x06, 0x47, 0xA9, 0x90, 0xE8,
+	0xA0, 0xC6, 0x72, 0x69, 0x05, 0xC0, 0x0D, 0x0E,
+	0x21, 0x96, 0x65, 0x93, 0x74, 0x43, 0x2A, 0x1D,
+	0x2E, 0xBF, 0xC2, 0xC2, 0xEE, 0xCC, 0x2F, 0x0A };
+
+static int
+test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 :
+					DIGEST_BYTE_LENGTH_SHA512,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+
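+/*
+ * The SHA512 decrypt flow is factored into two helpers, declared below:
+ * one builds the session transform chain and one performs the operation
+ * against a given session. This allows test_multi_session() and
+ * test_not_in_place_crypto() to re-use them with their own sessions.
+ */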
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT(test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+			ut_params) == TEST_SUCCESS,
+			"Failed to create session params");
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
+			ut_params, ts_params);
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params)
+{
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params)
+{
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / AES-XCBC-MAC Chain Tests ***** */
+
+static uint8_t aes_cbc_hmac_aes_xcbc_key[] = {
+	0x87, 0x61, 0x54, 0x53, 0xC4, 0x6D, 0xDD, 0x51,
+	0xE1, 0x9F, 0x86, 0x64, 0x39, 0x0A, 0xE6, 0x59
+	};
+
+static const uint8_t catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest[] = {
+	0xE0, 0xAC, 0x9A, 0xC4, 0x22, 0x64, 0x35, 0x89,
+	0x77, 0x1D, 0x8B, 0x75
+	};
+
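+/*
+ * AES-XCBC-MAC produces a 96-bit (12 byte) digest here, so the full
+ * DIGEST_BYTE_LENGTH_AES_XCBC is compared on both PMDs rather than a
+ * PMD-dependent truncated length as in the HMAC cases above.
+ */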
+static int
+test_AES_CBC_HMAC_AES_XCBC_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)
+		rte_pktmbuf_prepend(ut_params->ibuf,
+				CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+		QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** Crypto Device Statistics Tests ***** */
+
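+/*
+ * Statistics API test: first probe the error paths (out-of-range device
+ * id, NULL stats pointer, device with no stats_get op), then run a single
+ * encrypt/digest operation and verify the enqueue/dequeue counters, that
+ * a reset on an invalid device id is ignored, and that a valid reset
+ * clears the counters.
+ */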
+static int
+test_stats(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_stats stats;
+	struct rte_cryptodev *dev;
+	cryptodev_stats_get_t temp_pfn;
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0] + 600,
+			&stats) == -ENODEV),
+		"rte_cryptodev_stats_get invalid dev failed");
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			NULL) != 0),
+		"rte_cryptodev_stats_get invalid param failed");
+	dev = &rte_cryptodevs[ts_params->valid_devs[0]];
+	temp_pfn = dev->dev_ops->stats_get;
+	dev->dev_ops->stats_get = (cryptodev_stats_get_t)0;
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats)
+			== -ENOTSUP),
+		"rte_cryptodev_stats_get invalid Param failed");
+	dev->dev_ops->stats_get = temp_pfn;
+
+	/* Test expected values */
+	ut_setup();
+	test_AES_CBC_HMAC_SHA1_encrypt_digest();
+	ut_teardown();
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+	TEST_ASSERT((stats.enqueue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueue error stat");
+	TEST_ASSERT((stats.dequeue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeue error stat");
+
+	/* invalid device id; the reset should be ignored and not clear stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0] + 300);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	/* check that a valid reset clears stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+					  "rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	return TEST_SUCCESS;
+}
+
+
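+/*
+ * Create the PMD's maximum number of sessions, performing a decrypt with
+ * each one to prove it is usable, then verify that one further session
+ * creation attempt fails once the session store is exhausted.
+ */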
+static int
+test_multi_session(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	unsigned nb_sessions = gbl_cryptodev_type == RTE_CRYPTODEV_QAT_PMD ?
+			RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+			RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+	struct rte_cryptodev_session *sessions[nb_sessions + 1];
+	uint16_t i;
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create multiple crypto sessions*/
+	for (i = 0; i < nb_sessions; i++) {
+		sessions[i] = rte_cryptodev_session_create(
+				ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+		TEST_ASSERT_NOT_NULL(sessions[i],
+				"Session creation failed at session number %u",
+				i);
+
+		/* Attempt to send a request on each session */
+		TEST_ASSERT_SUCCESS(test_AES_CBC_HMAC_SHA512_decrypt_perform(
+				sessions[i], ut_params, ts_params),
+				"Failed to perform decrypt on request "
+				"number %u.", i);
+	}
+
+	/* Next session create should fail */
+	sessions[i] = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NULL(sessions[i],
+			"Session creation succeeded unexpectedly!");
+
+	for (i = 0; i < nb_sessions; i++)
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				sessions[i]);
+
+	return TEST_SUCCESS;
+}
+
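+/*
+ * Out-of-place operation: op->dst.m points at a second mbuf, so the
+ * decrypted payload is written to the destination mbuf instead of back
+ * into the source; the destination contents are validated below.
+ */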
+static int
+test_not_in_place_crypto(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_mbuf *dst_m = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create multiple crypto sessions*/
+
+	ut_params->sess = rte_cryptodev_session_create(
+			ts_params->valid_devs[0], &ut_params->auth_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	ut_params->op->dst.m = dst_m;
+	ut_params->op->dst.offset = 0;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->op->dst.m, char *),
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+
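+/*
+ * Test suite definitions. The generic suite is run against the QAT PMD
+ * and the AESNI suite against the multi-buffer PMD; they are invoked
+ * from the test application as "cryptodev_qat_autotest" and
+ * "cryptodev_aesni_autotest" via the commands registered at the end of
+ * this file.
+ */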
+static struct unit_test_suite cryptodev_testsuite = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_queue_pair_ids),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_queue_pair_descriptor_setup),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_stats),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite cryptodev_aesni_testsuite = {
+	.suite_name = "Crypto Device AESNI Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_not_in_place_crypto),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+
+static int
+test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_QAT_PMD;
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_qat_cmd = {
+	.command = "cryptodev_qat_autotest",
+	.callback = test_cryptodev_qat,
+};
+
+static int
+test_cryptodev_aesni(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_aesni_testsuite);
+}
+
+static struct test_command cryptodev_aesni_cmd = {
+	.command = "cryptodev_aesni_autotest",
+	.callback = test_cryptodev_aesni,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
+REGISTER_TEST_COMMAND(cryptodev_aesni_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
new file mode 100644
index 0000000..034393e
--- /dev/null
+++ b/app/test/test_cryptodev.h
@@ -0,0 +1,68 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef TEST_CRYPTODEV_H_
+#define TEST_CRYPTODEV_H_
+
+#define HEX_DUMP 0
+
+#define FALSE                           0
+#define TRUE                            1
+
+#define MAX_NUM_OPS_INFLIGHT            (4096)
+#define MIN_NUM_OPS_INFLIGHT            (128)
+#define DEFAULT_NUM_OPS_INFLIGHT        (128)
+
+#define MAX_NUM_QPS_PER_QAT_DEVICE      (2)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE  (2)
+#define DEFAULT_BURST_SIZE              (64)
+#define DEFAULT_NUM_XFORMS              (2)
+#define NUM_MBUFS                       (8191)
+#define MBUF_CACHE_SIZE                 (250)
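+/* Mempool element size: 2KB of data room plus space for the largest digest,
+ * the mbuf structure itself and the packet headroom.
+ */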
+#define MBUF_SIZE   (2048 + DIGEST_BYTE_LENGTH_SHA512 + \
+				sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+
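+/* Convert a length given in bits to a length in bytes */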
+#define BYTE_LENGTH(x)				((x) / 8)
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_MD5			(BYTE_LENGTH(128))
+#define DIGEST_BYTE_LENGTH_SHA1			(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA224		(BYTE_LENGTH(224))
+#define DIGEST_BYTE_LENGTH_SHA256		(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA384		(BYTE_LENGTH(384))
+#define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
+#define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
+#define AES_XCBC_MAC_KEY_SZ			(16)
+
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512		(32)
+
+#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
new file mode 100644
index 0000000..9564be5
--- /dev/null
+++ b/app/test/test_cryptodev_perf.c
@@ -0,0 +1,2063 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_hexdump.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+
+#define PERF_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_REQS_TO_SUBMIT	(10000000)
+
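+/* Configuration shared by every performance test: mempools, queue pair
+ * setup and the identifier of the device under test.
+ */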
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_mp;
+	struct rte_mempool *mbuf_ol_pool;
+
+	uint16_t nb_queue_pairs;
+
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+	uint8_t dev_id;
+};
+
+
+#define MAX_NUM_OF_OPS_PER_UT	(128)
+
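+/* Per-test state: the crypto transforms, session, operation handles and
+ * the mbufs used by a single test.
+ */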
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op *op;
+	struct rte_mbuf_offload *ol;
+
+	struct rte_mbuf *obuf[MAX_NUM_OF_OPS_PER_UT];
+	struct rte_mbuf *ibuf[MAX_NUM_OF_OPS_PER_UT];
+
+	uint8_t *digest;
+};
+
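+/*
+ * Allocate an mbuf from the given pool and fill it with the supplied string,
+ * truncated down to the nearest multiple of blocksize (when non-zero) so the
+ * payload stays cipher-block aligned. Returns NULL on allocation failure.
+ */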
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+	return m;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static enum rte_cryptodev_type gbl_cryptodev_preftest_devtype;
+
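+/*
+ * One-off suite setup: create (or look up) the mbuf and offload-operation
+ * mempools, instantiate AESNI MB vdevs if required, locate the first device
+ * of the requested type and configure its queue pairs.
+ */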
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, valid_dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_mp = rte_mempool_lookup("CRYPTO_PERF_MBUFPOOL");
+	if (ts_params->mbuf_mp == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_mp = rte_mempool_create("CRYPTO_PERF_MBUFPOOL", NUM_MBUFS,
+			MBUF_SIZE, MBUF_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
+			rte_socket_id(), 0);
+		if (ts_params->mbuf_mp == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create("CRYPTO_OP_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS * sizeof(struct rte_crypto_xform),
+			rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?");
+		return TEST_FAILED;
+	}
+
+	/* Search for the first device of the requested type */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_preftest_devtype) {
+			ts_params->dev_id = i;
+			valid_dev_id = 1;
+			break;
+		}
+	}
+
+	if (!valid_dev_id)
+		return TEST_FAILED;
+
+	/*
+	 * Since queue memory cannot be freed and re-allocated, first set the
+	 * queues on this device up to their maximum size so that enough memory
+	 * is allocated for any later re-configuration needed by other tests.
+	 */
+
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs =
+			(gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_QAT_PMD) ?
+					RTE_LIBRTE_PMD_QAT_MAX_SESSIONS :
+					RTE_LIBRTE_PMD_AESNI_MB_MAX_SESSIONS;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->dev_id);
+
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	/* Now reconfigure the queues to the size actually used by this test suite. */
+	ts_params->qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+}
+
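+/* Per-test setup: clear the unit test state, reset the device statistics
+ * and start the device under test.
+ */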
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	rte_cryptodev_stats_reset(ts_params->dev_id);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->dev_id),
+			"Failed to start cryptodev %u",
+			ts_params->dev_id);
+
+	return TEST_SUCCESS;
+}
+
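+/* Per-test teardown: release the session, offload operation and any mbufs
+ * still held, then stop the device.
+ */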
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	unsigned i;
+
+	/* free crypto session structure */
+	if (ut_params->sess)
+		rte_cryptodev_session_free(ts_params->dev_id,
+				ut_params->sess);
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	for (i = 0; i < MAX_NUM_OF_OPS_PER_UT; i++) {
+		if (ut_params->obuf[i])
+			rte_pktmbuf_free(ut_params->obuf[i]);
+		else if (ut_params->ibuf[i])
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+	}
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+
+	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->dev_id);
+}
+
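+/* Reference plaintext used as the source data for the cipher payloads */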
+const char plaintext_quote[] =
+		"THE COUNT OF MONTE CRISTO by Alexandre Dumas, Pere Chapter 1. "
+		"Marseilles--The Arrival. On the 24th of February, 1815, the "
+		"look-out at Notre-Dame de la Garde signalled the three-master,"
+		" the Pharaon from Smyrna, Trieste, and Naples. As usual, a "
+		"pilot put off immediately, and rounding the Chateau d'If, got "
+		"on board the vessel between Cape Morgion and Rion island. "
+		"Immediately, and according to custom, the ramparts of Fort "
+		"Saint-Jean were covered with spectators; it is always an event "
+		"at Marseilles for a ship to come into port, especially when "
+		"this ship, like the Pharaon, has been built, rigged, and laden"
+		" at the old Phocee docks, and belongs to an owner of the city."
+		" The ship drew on and had safely passed the strait, which some"
+		" volcanic shock has made between the Calasareigne and Jaros "
+		"islands; had doubled Pomegue, and approached the harbor under"
+		" topsails, jib, and spanker, but so slowly and sedately that"
+		" the idlers, with that instinct which is the forerunner of "
+		"evil, asked one another what misfortune could have happened "
+		"on board. However, those experienced in navigation saw plainly"
+		" that if any accident had occurred, it was not to the vessel "
+		"herself, for she bore down with all the evidence of being "
+		"skilfully handled, the anchor a-cockbill, the jib-boom guys "
+		"already eased off, and standing by the side of the pilot, who"
+		" was steering the Pharaon towards the narrow entrance of the"
+		" inner port, was a young man, who, with activity and vigilant"
+		" eye, watched every motion of the ship, and repeated each "
+		"direction of the pilot. The vague disquietude which prevailed "
+		"among the spectators had so much affected one of the crowd "
+		"that he did not await the arrival of the vessel in harbor, but"
+		" jumping into a small skiff, desired to be pulled alongside "
+		"the Pharaon, which he reached as she rounded into La Reserve "
+		"basin. When the young man on board saw this person approach, "
+		"he left his station by the pilot, and, hat in hand, leaned "
+		"over the ship's bulwarks. He was a fine, tall, slim young "
+		"fellow of eighteen or twenty, with black eyes, and hair as "
+		"dark as a raven's wing; and his whole appearance bespoke that "
+		"calmness and resolution peculiar to men accustomed from their "
+		"cradle to contend with danger. \"Ah, is it you, Dantes?\" "
+		"cried the man in the skiff. \"What's the matter? and why have "
+		"you such an air of sadness aboard?\" \"A great misfortune, M. "
+		"Morrel,\" replied the young man,--\"a great misfortune, for me"
+		" especially! Off Civita Vecchia we lost our brave Captain "
+		"Leclere.\" \"And the cargo?\" inquired the owner, eagerly. "
+		"\"Is all safe, M. Morrel; and I think you will be satisfied on"
+		" that head. But poor Captain Leclere--\" \"What happened to "
+		"him?\" asked the owner, with an air of considerable "
+		"resignation. \"What happened to the worthy captain?\" \"He "
+		"died.\" \"Fell into the sea?\" \"No, sir, he died of "
+		"brain-fever in dreadful agony.\" Then turning to the crew, "
+		"he said, \"Bear a hand there, to take in sail!\" All hands "
+		"obeyed, and at once the eight or ten seamen who composed the "
+		"crew, sprang to their respective stations at the spanker "
+		"brails and outhaul, topsail sheets and halyards, the jib "
+		"downhaul, and the topsail clewlines and buntlines. The young "
+		"sailor gave a look to see that his orders were promptly and "
+		"accurately obeyed, and then turned again to the owner. \"And "
+		"how did this misfortune occur?\" inquired the latter, resuming"
+		" the interrupted conversation. \"Alas, sir, in the most "
+		"unexpected manner. After a long talk with the harbor-master, "
+		"Captain Leclere left Naples greatly disturbed in mind. In "
+		"twenty-four hours he was attacked by a fever, and died three "
+		"days afterwards. We performed the usual burial service, and he"
+		" is at his rest, sewn up in his hammock with a thirty-six "
+		"pound shot at his head and his heels, off El Giglio island. "
+		"We bring to his widow his sword and cross of honor. It was "
+		"worth while, truly,\" added the young man with a melancholy "
+		"smile, \"to make war against the English for ten years, and "
+		"to die in his bed at last, like everybody else.";
+
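+/* Payload sizes (in bytes) exercised by the performance tests */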
+#define QUOTE_LEN_64B		(64)
+#define QUOTE_LEN_128B		(128)
+#define QUOTE_LEN_256B		(256)
+#define QUOTE_LEN_512B		(512)
+#define QUOTE_LEN_768B		(768)
+#define QUOTE_LEN_1024B		(1024)
+#define QUOTE_LEN_1280B		(1280)
+#define QUOTE_LEN_1536B		(1536)
+#define QUOTE_LEN_1792B		(1792)
+#define QUOTE_LEN_2048B		(2048)
+
+
+/* ***** AES-CBC / HMAC-SHA256 Performance Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+
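+/* Fixed test vectors: cipher key, IV and HMAC key reused for every run */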
+static uint8_t aes_cbc_key[] = {
+		0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+		0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA };
+
+static uint8_t aes_cbc_iv[] = {
+		0xf5, 0xd3, 0x89, 0x0f, 0x47, 0x00, 0xcb, 0x52,
+		0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1 };
+
+static uint8_t hmac_sha256_key[] = {
+		0xff, 0xcb, 0x37, 0x30, 0x1d, 0x4a, 0xc2, 0x41,
+		0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A,
+		0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+		0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+
+/* Expected AES-CBC ciphertexts for each payload size, generated with the
+ * key and IV above
+ */
+
+static const uint8_t AES_CBC_ciphertext_64B[] = {
+		0x05, 0x15, 0x77, 0x32, 0xc9, 0x66, 0x91, 0x50,
+		0x93, 0x9f, 0xbb, 0x4e, 0x2e, 0x5a, 0x02, 0xd0,
+		0x2d, 0x9d, 0x31, 0x5d, 0xc8, 0x9e, 0x86, 0x36,
+		0x54, 0x5c, 0x50, 0xe8, 0x75, 0x54, 0x74, 0x5e,
+		0xd5, 0xa2, 0x84, 0x21, 0x2d, 0xc5, 0xf8, 0x1c,
+		0x55, 0x1a, 0xba, 0x91, 0xce, 0xb5, 0xa3, 0x1e,
+		0x31, 0xbf, 0xe9, 0xa1, 0x97, 0x5c, 0x2b, 0xd6,
+		0x57, 0xa5, 0x9f, 0xab, 0xbd, 0xb0, 0x9b, 0x9c
+};
+
+static const uint8_t AES_CBC_ciphertext_128B[] = {
+		0x79, 0x92, 0x65, 0xc8, 0xfb, 0x0a, 0xc7, 0xc4,
+		0x9b, 0x3b, 0xbe, 0x69, 0x7f, 0x7c, 0xf4, 0x4e,
+		0xa5, 0x0d, 0xf6, 0x33, 0xc4, 0xdf, 0xf3, 0x0d,
+		0xdb, 0xb9, 0x68, 0x34, 0xb0, 0x0d, 0xbd, 0xb9,
+		0xa7, 0xf3, 0x86, 0x50, 0x2a, 0xbe, 0x50, 0x5d,
+		0xb3, 0xbe, 0x72, 0xf9, 0x02, 0xb1, 0x69, 0x0b,
+		0x8c, 0x96, 0x4c, 0x3c, 0x0c, 0x1e, 0x76, 0xe5,
+		0x7e, 0x75, 0xdd, 0xd0, 0xa9, 0x75, 0x00, 0x13,
+		0x6b, 0x1e, 0xc0, 0xad, 0xfc, 0x03, 0xb5, 0x99,
+		0xdc, 0x37, 0x35, 0xfc, 0x16, 0x34, 0xfd, 0xb4,
+		0xea, 0x1e, 0xb6, 0x51, 0xdf, 0xab, 0x87, 0xd6,
+		0x87, 0x41, 0xfa, 0x1c, 0xc6, 0x78, 0xa6, 0x3c,
+		0x1d, 0x76, 0xfe, 0xff, 0x65, 0xfc, 0x63, 0x1e,
+		0x1f, 0xe2, 0x7c, 0x9b, 0xa2, 0x72, 0xc3, 0x34,
+		0x23, 0xdf, 0x01, 0xf0, 0xfd, 0x02, 0x8b, 0x97,
+		0x00, 0x2b, 0x97, 0x4e, 0xab, 0x98, 0x21, 0x3c
+};
+
+static const uint8_t AES_CBC_ciphertext_256B[] = {
+		0xc7, 0x71, 0x2b, 0xed, 0x2c, 0x97, 0x59, 0xfa,
+		0xcf, 0x5a, 0xb9, 0x31, 0x92, 0xe0, 0xc9, 0x92,
+		0xc0, 0x2d, 0xd5, 0x9c, 0x84, 0xbf, 0x70, 0x36,
+		0x13, 0x48, 0xe0, 0xb1, 0xbf, 0x6c, 0xcd, 0x91,
+		0xa0, 0xc3, 0x57, 0x6c, 0x3f, 0x0e, 0x34, 0x41,
+		0xe7, 0x9c, 0xc0, 0xec, 0x18, 0x0c, 0x05, 0x52,
+		0x78, 0xe2, 0x3c, 0x6e, 0xdf, 0xa5, 0x49, 0xc7,
+		0xf2, 0x55, 0x00, 0x8f, 0x65, 0x6d, 0x4b, 0xd0,
+		0xcb, 0xd4, 0xd2, 0x0b, 0xea, 0xf4, 0xb0, 0x85,
+		0x61, 0x9e, 0x36, 0xc0, 0x71, 0xb7, 0x80, 0xad,
+		0x40, 0x78, 0xb4, 0x70, 0x2b, 0xe8, 0x80, 0xc5,
+		0x19, 0x35, 0x96, 0x55, 0x3b, 0x40, 0x03, 0xbb,
+		0x9f, 0xa6, 0xc2, 0x82, 0x92, 0x04, 0xc3, 0xa6,
+		0x96, 0xc4, 0x7f, 0x4c, 0x3e, 0x3c, 0x79, 0x82,
+		0x88, 0x8b, 0x3f, 0x8b, 0xc5, 0x9f, 0x44, 0xbe,
+		0x71, 0xe7, 0x09, 0xa2, 0x40, 0xa2, 0x23, 0x4e,
+		0x9f, 0x31, 0xab, 0x6f, 0xdf, 0x59, 0x40, 0xe1,
+		0x12, 0x15, 0x55, 0x4b, 0xea, 0x3f, 0xa1, 0x41,
+		0x4f, 0xaf, 0xcd, 0x27, 0x2a, 0x61, 0xa1, 0x9e,
+		0x82, 0x30, 0x05, 0x05, 0x55, 0xce, 0x99, 0xd3,
+		0x8f, 0x3f, 0x86, 0x79, 0xdc, 0x9f, 0x33, 0x07,
+		0x75, 0x26, 0xc8, 0x72, 0x81, 0x0f, 0x9b, 0xf7,
+		0xb1, 0xfb, 0xd3, 0x91, 0x36, 0x08, 0xab, 0x26,
+		0x70, 0x53, 0x0c, 0x99, 0xfd, 0xa9, 0x07, 0xb4,
+		0xe9, 0xce, 0xc1, 0xd6, 0xd2, 0x2c, 0x71, 0x80,
+		0xec, 0x59, 0x61, 0x0b, 0x24, 0xf0, 0x6d, 0x33,
+		0x73, 0x45, 0x6e, 0x80, 0x03, 0x45, 0xf2, 0x76,
+		0xa5, 0x8a, 0xc9, 0xcf, 0xaf, 0x4a, 0xed, 0x35,
+		0xc0, 0x97, 0x52, 0xc5, 0x00, 0xdf, 0xef, 0xc7,
+		0x9f, 0xf2, 0xe8, 0x15, 0x3e, 0xb3, 0x30, 0xe7,
+		0x00, 0xd0, 0x4e, 0xeb, 0x79, 0xf6, 0xf6, 0xcf,
+		0xf0, 0xe7, 0x61, 0xd5, 0x3d, 0x6a, 0x73, 0x9d
+};
+
+static const uint8_t AES_CBC_ciphertext_512B[] = {
+		0xb4, 0xc6, 0xc6, 0x5f, 0x7e, 0xca, 0x05, 0x70,
+		0x21, 0x7b, 0x92, 0x9e, 0x23, 0xe7, 0x92, 0xb8,
+		0x27, 0x3d, 0x20, 0x29, 0x57, 0xfa, 0x1f, 0x26,
+		0x0a, 0x04, 0x34, 0xa6, 0xf2, 0xdc, 0x44, 0xb6,
+		0x43, 0x40, 0x62, 0xde, 0x0c, 0xde, 0x1c, 0x30,
+		0x43, 0x85, 0x0b, 0xe8, 0x93, 0x1f, 0xa1, 0x2a,
+		0x8a, 0x27, 0x35, 0x39, 0x14, 0x9f, 0x37, 0x64,
+		0x59, 0xb5, 0x0e, 0x96, 0x82, 0x5d, 0x63, 0x45,
+		0xd6, 0x93, 0x89, 0x46, 0xe4, 0x71, 0x31, 0xeb,
+		0x0e, 0xd1, 0x7b, 0xda, 0x90, 0xb5, 0x81, 0xac,
+		0x76, 0x54, 0x54, 0x85, 0x0b, 0xa9, 0x46, 0x9c,
+		0xf0, 0xfd, 0xde, 0x5d, 0xa8, 0xe3, 0xee, 0xe9,
+		0xf4, 0x9d, 0x34, 0x76, 0x39, 0xe7, 0xc3, 0x4a,
+		0x84, 0x38, 0x92, 0x61, 0xf1, 0x12, 0x9f, 0x05,
+		0xda, 0xdb, 0xc1, 0xd4, 0xb0, 0xa0, 0x27, 0x19,
+		0xa0, 0x56, 0x5d, 0x9b, 0xcc, 0x47, 0x7c, 0x15,
+		0x1d, 0x52, 0x66, 0xd5, 0xff, 0xef, 0x12, 0x23,
+		0x86, 0xe2, 0xee, 0x81, 0x2c, 0x3d, 0x7d, 0x28,
+		0xd5, 0x42, 0xdf, 0xdb, 0x75, 0x1c, 0xeb, 0xdf,
+		0x13, 0x23, 0xd5, 0x17, 0x89, 0xea, 0xd7, 0x01,
+		0xff, 0x57, 0x6a, 0x44, 0x61, 0xf4, 0xea, 0xbe,
+		0x97, 0x9b, 0xc2, 0xb1, 0x9c, 0x5d, 0xff, 0x4f,
+		0x73, 0x2d, 0x3f, 0x57, 0x28, 0x38, 0xbf, 0x3d,
+		0x9f, 0xda, 0x49, 0x55, 0x8f, 0xb2, 0x77, 0xec,
+		0x0f, 0xbc, 0xce, 0xb8, 0xc6, 0xe1, 0x03, 0xed,
+		0x35, 0x9c, 0xf2, 0x4d, 0xa4, 0x29, 0x6c, 0xd6,
+		0x6e, 0x05, 0x53, 0x46, 0xc1, 0x41, 0x09, 0x36,
+		0x0b, 0x7d, 0xf4, 0x9e, 0x0f, 0xba, 0x86, 0x33,
+		0xdd, 0xf1, 0xa7, 0xf7, 0xd5, 0x29, 0xa8, 0xa7,
+		0x4d, 0xce, 0x0c, 0xf5, 0xb4, 0x6c, 0xd8, 0x27,
+		0xb0, 0x87, 0x2a, 0x6f, 0x7f, 0x3f, 0x8f, 0xc3,
+		0xe2, 0x3e, 0x94, 0xcf, 0x61, 0x4a, 0x09, 0x3d,
+		0xf9, 0x55, 0x19, 0x31, 0xf2, 0xd2, 0x4a, 0x3e,
+		0xc1, 0xf5, 0xed, 0x7c, 0x45, 0xb0, 0x0c, 0x7b,
+		0xdd, 0xa6, 0x0a, 0x26, 0x66, 0xec, 0x85, 0x49,
+		0x00, 0x38, 0x05, 0x7c, 0x9c, 0x1c, 0x92, 0xf5,
+		0xf7, 0xdb, 0x5d, 0xbd, 0x61, 0x0c, 0xc9, 0xaf,
+		0xfd, 0x57, 0x3f, 0xee, 0x2b, 0xad, 0x73, 0xef,
+		0xa3, 0xc1, 0x66, 0x26, 0x44, 0x5e, 0xf9, 0x12,
+		0x86, 0x66, 0xa9, 0x61, 0x75, 0xa1, 0xbc, 0x40,
+		0x7f, 0xa8, 0x08, 0x02, 0xc0, 0x76, 0x0e, 0x76,
+		0xb3, 0x26, 0x3d, 0x1c, 0x40, 0x65, 0xe4, 0x18,
+		0x0f, 0x62, 0x17, 0x8f, 0x1e, 0x61, 0xb8, 0x08,
+		0x83, 0x54, 0x42, 0x11, 0x03, 0x30, 0x8e, 0xb7,
+		0xc1, 0x9c, 0xec, 0x69, 0x52, 0x95, 0xfb, 0x7b,
+		0x1a, 0x0c, 0x20, 0x24, 0xf7, 0xb8, 0x38, 0x0c,
+		0xb8, 0x7b, 0xb6, 0x69, 0x70, 0xd0, 0x61, 0xb9,
+		0x70, 0x06, 0xc2, 0x5b, 0x20, 0x47, 0xf7, 0xd9,
+		0x32, 0xc2, 0xf2, 0x90, 0xb6, 0x4d, 0xcd, 0x3c,
+		0x6d, 0x74, 0xea, 0x82, 0x35, 0x1b, 0x08, 0x44,
+		0xba, 0xb7, 0x33, 0x82, 0x33, 0x27, 0x54, 0x77,
+		0x6e, 0x58, 0xfe, 0x46, 0x5a, 0xb4, 0x88, 0x53,
+		0x8d, 0x9b, 0xb1, 0xab, 0xdf, 0x04, 0xe1, 0xfb,
+		0xd7, 0x1e, 0xd7, 0x38, 0x64, 0x54, 0xba, 0xb0,
+		0x6c, 0x84, 0x7a, 0x0f, 0xa7, 0x80, 0x6b, 0x86,
+		0xd9, 0xc9, 0xc6, 0x31, 0x95, 0xfa, 0x8a, 0x2c,
+		0x14, 0xe1, 0x85, 0x66, 0x27, 0xfd, 0x63, 0x3e,
+		0xf0, 0xfa, 0x81, 0xc9, 0x89, 0x4f, 0xe2, 0x6a,
+		0x8c, 0x17, 0xb5, 0xc7, 0x9f, 0x5d, 0x3f, 0x6b,
+		0x3f, 0xcd, 0x13, 0x7a, 0x3c, 0xe6, 0x4e, 0xfa,
+		0x7a, 0x10, 0xb8, 0x7c, 0x40, 0xec, 0x93, 0x11,
+		0x1f, 0xd0, 0x9e, 0xc3, 0x56, 0xb9, 0xf5, 0x21,
+		0x18, 0x41, 0x31, 0xea, 0x01, 0x8d, 0xea, 0x1c,
+		0x95, 0x5e, 0x56, 0x33, 0xbc, 0x7a, 0x3f, 0x6f
+};
+
+static const uint8_t AES_CBC_ciphertext_768B[] = {
+		0x3e, 0x7f, 0x9e, 0x4c, 0x88, 0x15, 0x68, 0x69,
+		0x10, 0x09, 0xe1, 0xa7, 0x0f, 0x27, 0x88, 0x2d,
+		0x90, 0x73, 0x4f, 0x67, 0xd3, 0x8b, 0xaf, 0xa1,
+		0x2c, 0x37, 0xa5, 0x6c, 0x7c, 0xbd, 0x95, 0x4c,
+		0x82, 0xcf, 0x05, 0x49, 0x16, 0x5c, 0xe7, 0x06,
+		0xd4, 0xcb, 0x55, 0x65, 0x9a, 0xd0, 0xe1, 0x46,
+		0x3a, 0x37, 0x71, 0xad, 0xb0, 0xb4, 0x99, 0x1e,
+		0x23, 0x57, 0x48, 0x96, 0x9c, 0xc5, 0xc4, 0xdb,
+		0x64, 0x3e, 0xc9, 0x7f, 0x90, 0x5a, 0xa0, 0x08,
+		0x75, 0x4c, 0x09, 0x06, 0x31, 0x6e, 0x59, 0x29,
+		0xfc, 0x2f, 0x72, 0xde, 0xf2, 0x40, 0x5a, 0xfe,
+		0xd3, 0x66, 0x64, 0xb8, 0x9c, 0xc9, 0xa6, 0x1f,
+		0xc3, 0x52, 0xcd, 0xb5, 0xd1, 0x4f, 0x43, 0x3f,
+		0xf4, 0x59, 0x25, 0xc4, 0xdd, 0x3e, 0x58, 0x7c,
+		0x21, 0xd6, 0x21, 0xce, 0xa4, 0xbe, 0x08, 0x23,
+		0x46, 0x68, 0xc0, 0x00, 0x91, 0x47, 0xca, 0x9b,
+		0xe0, 0xb4, 0xe3, 0xab, 0xbf, 0xcf, 0x68, 0x26,
+		0x97, 0x23, 0x09, 0x93, 0x64, 0x8f, 0x57, 0x59,
+		0xe2, 0x41, 0x7c, 0xa2, 0x48, 0x7e, 0xd5, 0x2c,
+		0x54, 0x09, 0x1b, 0x07, 0x94, 0xca, 0x39, 0x83,
+		0xdd, 0xf4, 0x7a, 0x1d, 0x2d, 0xdd, 0x67, 0xf7,
+		0x3c, 0x30, 0x89, 0x3e, 0xc1, 0xdc, 0x1d, 0x8f,
+		0xfc, 0xb1, 0xe9, 0x13, 0x31, 0xb0, 0x16, 0xdb,
+		0x88, 0xf2, 0x32, 0x7e, 0x73, 0xa3, 0xdf, 0x08,
+		0x6b, 0x53, 0x92, 0x08, 0xc9, 0x9d, 0x98, 0xb2,
+		0xf4, 0x8c, 0xb1, 0x95, 0xdc, 0xb6, 0xfc, 0xec,
+		0xf1, 0xc9, 0x0d, 0x6d, 0x42, 0x2c, 0xf5, 0x38,
+		0x29, 0xf4, 0xd8, 0x98, 0x0f, 0xb0, 0x81, 0xa5,
+		0xaa, 0xe6, 0x1f, 0x6e, 0x87, 0x32, 0x1b, 0x02,
+		0x07, 0x57, 0x38, 0x83, 0xf3, 0xe4, 0x54, 0x7c,
+		0xa8, 0x43, 0xdf, 0x3f, 0x42, 0xfd, 0x67, 0x28,
+		0x06, 0x4d, 0xea, 0xce, 0x1f, 0x84, 0x4a, 0xcd,
+		0x8c, 0x61, 0x5e, 0x8f, 0x61, 0xed, 0x84, 0x03,
+		0x53, 0x6a, 0x9e, 0xbf, 0x68, 0x83, 0xa7, 0x42,
+		0x56, 0x57, 0xcd, 0x45, 0x29, 0xfc, 0x7b, 0x07,
+		0xfc, 0xe9, 0xb9, 0x42, 0xfd, 0x29, 0xd5, 0xfd,
+		0x98, 0x11, 0xd1, 0x8d, 0x67, 0x29, 0x47, 0x61,
+		0xd8, 0x27, 0x37, 0x79, 0x29, 0xd1, 0x94, 0x6f,
+		0x8d, 0xf3, 0x1b, 0x3d, 0x6a, 0xb1, 0x59, 0xef,
+		0x1b, 0xd4, 0x70, 0x0e, 0xac, 0xab, 0xa0, 0x2b,
+		0x1f, 0x5e, 0x04, 0xf0, 0x0e, 0x35, 0x72, 0x90,
+		0xfc, 0xcf, 0x86, 0x43, 0xea, 0x45, 0x6d, 0x22,
+		0x63, 0x06, 0x1a, 0x58, 0xd7, 0x2d, 0xc5, 0xb0,
+		0x60, 0x69, 0xe8, 0x53, 0xc2, 0xa2, 0x57, 0x83,
+		0xc4, 0x31, 0xb4, 0xc6, 0xb3, 0xa1, 0x77, 0xb3,
+		0x1c, 0xca, 0x89, 0x3f, 0xf5, 0x10, 0x3b, 0x36,
+		0x31, 0x7d, 0x00, 0x46, 0x00, 0x92, 0xa0, 0xa0,
+		0x34, 0xd8, 0x5e, 0x62, 0xa9, 0xe0, 0x23, 0x37,
+		0x50, 0x85, 0xc7, 0x3a, 0x20, 0xa3, 0x98, 0xc0,
+		0xac, 0x20, 0x06, 0x0f, 0x17, 0x3c, 0xfc, 0x43,
+		0x8c, 0x9d, 0xec, 0xf5, 0x9a, 0x35, 0x96, 0xf7,
+		0xb7, 0x4c, 0xf9, 0x69, 0xf8, 0xd4, 0x1e, 0x9e,
+		0xf9, 0x7c, 0xc4, 0xd2, 0x11, 0x14, 0x41, 0xb9,
+		0x89, 0xd6, 0x07, 0xd2, 0x37, 0x07, 0x5e, 0x5e,
+		0xae, 0x60, 0xdc, 0xe4, 0xeb, 0x38, 0x48, 0x6d,
+		0x95, 0x8d, 0x71, 0xf2, 0xba, 0xda, 0x5f, 0x08,
+		0x9d, 0x4a, 0x0f, 0x56, 0x90, 0x64, 0xab, 0xb6,
+		0x88, 0x22, 0xa8, 0x90, 0x1f, 0x76, 0x2c, 0x83,
+		0x43, 0xce, 0x32, 0x55, 0x45, 0x84, 0x57, 0x43,
+		0xf9, 0xa8, 0xd1, 0x4f, 0xe3, 0xc1, 0x72, 0x9c,
+		0xeb, 0x64, 0xf7, 0xe4, 0x61, 0x2b, 0x93, 0xd1,
+		0x1f, 0xbb, 0x5c, 0xff, 0xa1, 0x59, 0x69, 0xcf,
+		0xf7, 0xaf, 0x58, 0x45, 0xd5, 0x3e, 0x98, 0x7d,
+		0x26, 0x39, 0x5c, 0x75, 0x3c, 0x4a, 0xbf, 0x5e,
+		0x12, 0x10, 0xb0, 0x93, 0x0f, 0x86, 0x82, 0xcf,
+		0xb2, 0xec, 0x70, 0x5c, 0x0b, 0xad, 0x5d, 0x63,
+		0x65, 0x32, 0xa6, 0x04, 0x58, 0x03, 0x91, 0x2b,
+		0xdb, 0x8f, 0xd3, 0xa3, 0x2b, 0x3a, 0xf5, 0xa1,
+		0x62, 0x6c, 0xb6, 0xf0, 0x13, 0x3b, 0x8c, 0x07,
+		0x10, 0x82, 0xc9, 0x56, 0x24, 0x87, 0xfc, 0x56,
+		0xe8, 0xef, 0x90, 0x8b, 0xd6, 0x48, 0xda, 0x53,
+		0x04, 0x49, 0x41, 0xa4, 0x67, 0xe0, 0x33, 0x24,
+		0x6b, 0x9c, 0x07, 0x55, 0x4c, 0x5d, 0xe9, 0x35,
+		0xfa, 0xbd, 0xea, 0xa8, 0x3f, 0xe9, 0xf5, 0x20,
+		0x5c, 0x60, 0x0f, 0x0d, 0x24, 0xcb, 0x1a, 0xd6,
+		0xe8, 0x5c, 0xa8, 0x42, 0xae, 0xd0, 0xd2, 0xf2,
+		0xa8, 0xbe, 0xea, 0x0f, 0x8d, 0xfb, 0x81, 0xa3,
+		0xa4, 0xef, 0xb7, 0x3e, 0x91, 0xbd, 0x26, 0x0f,
+		0x8e, 0xf1, 0xb2, 0xa5, 0x47, 0x06, 0xfa, 0x40,
+		0x8b, 0x31, 0x7a, 0x5a, 0x74, 0x2a, 0x0a, 0x7c,
+		0x62, 0x5d, 0x39, 0xa4, 0xae, 0x14, 0x85, 0x08,
+		0x5b, 0x20, 0x85, 0xf1, 0x57, 0x6e, 0x71, 0x13,
+		0x4e, 0x2b, 0x49, 0x87, 0x01, 0xdf, 0x37, 0xed,
+		0x28, 0xee, 0x4d, 0xa1, 0xf4, 0xb3, 0x3b, 0xba,
+		0x2d, 0xb3, 0x46, 0x17, 0x84, 0x80, 0x9d, 0xd7,
+		0x93, 0x1f, 0x28, 0x7c, 0xf5, 0xf9, 0xd6, 0x85,
+		0x8c, 0xa5, 0x44, 0xe9, 0x2c, 0x65, 0x51, 0x5f,
+		0x53, 0x7a, 0x09, 0xd9, 0x30, 0x16, 0x95, 0x89,
+		0x9c, 0x0b, 0xef, 0x90, 0x6d, 0x23, 0xd3, 0x48,
+		0x57, 0x3b, 0x55, 0x69, 0x96, 0xfc, 0xf7, 0x52,
+		0x92, 0x38, 0x36, 0xbf, 0xa9, 0x0a, 0xbb, 0x68,
+		0x45, 0x08, 0x25, 0xee, 0x59, 0xfe, 0xee, 0xf2,
+		0x2c, 0xd4, 0x5f, 0x78, 0x59, 0x0d, 0x90, 0xf1,
+		0xd7, 0xe4, 0x39, 0x0e, 0x46, 0x36, 0xf5, 0x75,
+		0x03, 0x3c, 0x28, 0xfb, 0xfa, 0x8f, 0xef, 0xc9,
+		0x61, 0x00, 0x94, 0xc3, 0xd2, 0x0f, 0xd9, 0xda
+};
+
+static const uint8_t AES_CBC_ciphertext_1024B[] = {
+		0x7d, 0x01, 0x7e, 0x2f, 0x92, 0xb3, 0xea, 0x72,
+		0x4a, 0x3f, 0x10, 0xf9, 0x2b, 0xb0, 0xd5, 0xb9,
+		0x19, 0x68, 0x94, 0xe9, 0x93, 0xe9, 0xd5, 0x26,
+		0x20, 0x44, 0xe2, 0x47, 0x15, 0x8d, 0x75, 0x48,
+		0x8e, 0xe4, 0x40, 0x81, 0xb5, 0x06, 0xa8, 0xb8,
+		0x0e, 0x0f, 0x3b, 0xbc, 0x5b, 0xbe, 0x3b, 0xa2,
+		0x2a, 0x0c, 0x48, 0x98, 0x19, 0xdf, 0xe9, 0x25,
+		0x75, 0xab, 0x93, 0x44, 0xb1, 0x72, 0x70, 0xbb,
+		0x20, 0xcf, 0x78, 0xe9, 0x4d, 0xc6, 0xa9, 0xa9,
+		0x84, 0x78, 0xc5, 0xc0, 0xc4, 0xc9, 0x79, 0x1a,
+		0xbc, 0x61, 0x25, 0x5f, 0xac, 0x01, 0x03, 0xb7,
+		0xef, 0x07, 0xf2, 0x62, 0x98, 0xee, 0xe3, 0xad,
+		0x94, 0x75, 0x30, 0x67, 0xb9, 0x15, 0x00, 0xe7,
+		0x11, 0x32, 0x2e, 0x6b, 0x55, 0x9f, 0xac, 0x68,
+		0xde, 0x61, 0x05, 0x80, 0x01, 0xf3, 0xad, 0xab,
+		0xaf, 0x45, 0xe0, 0xf4, 0x68, 0x5c, 0xc0, 0x52,
+		0x92, 0xc8, 0x21, 0xb6, 0xf5, 0x8a, 0x1d, 0xbb,
+		0xfc, 0x4a, 0x11, 0x62, 0xa2, 0xc4, 0xf1, 0x2d,
+		0x0e, 0xb2, 0xc7, 0x17, 0x34, 0xb4, 0x2a, 0x54,
+		0x81, 0xc2, 0x1e, 0xcf, 0x51, 0x0a, 0x76, 0x54,
+		0xf1, 0x48, 0x0d, 0x5c, 0xcd, 0x38, 0x3e, 0x38,
+		0x3e, 0xf8, 0x46, 0x1d, 0x00, 0xf5, 0x62, 0xe1,
+		0x5c, 0xb7, 0x8d, 0xce, 0xd0, 0x3f, 0xbb, 0x22,
+		0xf1, 0xe5, 0xb1, 0xa0, 0x58, 0x5e, 0x3c, 0x0f,
+		0x15, 0xd1, 0xac, 0x3e, 0xc7, 0x72, 0xc4, 0xde,
+		0x8b, 0x95, 0x3e, 0x91, 0xf7, 0x1d, 0x04, 0x9a,
+		0xc8, 0xe4, 0xbf, 0xd3, 0x22, 0xca, 0x4a, 0xdc,
+		0xb6, 0x16, 0x79, 0x81, 0x75, 0x2f, 0x6b, 0xa7,
+		0x04, 0x98, 0xa7, 0x4e, 0xc1, 0x19, 0x90, 0x33,
+		0x33, 0x3c, 0x7f, 0xdd, 0xac, 0x09, 0x0c, 0xc3,
+		0x91, 0x34, 0x74, 0xab, 0xa5, 0x35, 0x0a, 0x13,
+		0xc3, 0x56, 0x67, 0x6d, 0x1a, 0x3e, 0xbf, 0x56,
+		0x06, 0x67, 0x15, 0x5f, 0xfc, 0x8b, 0xa2, 0x3c,
+		0x5e, 0xaf, 0x56, 0x1f, 0xe3, 0x2e, 0x9d, 0x0a,
+		0xf9, 0x9b, 0xc7, 0xb5, 0x03, 0x1c, 0x68, 0x99,
+		0xfa, 0x3c, 0x37, 0x59, 0xc1, 0xf7, 0x6a, 0x83,
+		0x22, 0xee, 0xca, 0x7f, 0x7d, 0x49, 0xe6, 0x48,
+		0x84, 0x54, 0x7a, 0xff, 0xb3, 0x72, 0x21, 0xd8,
+		0x7a, 0x5d, 0xb1, 0x4b, 0xcc, 0x01, 0x6f, 0x90,
+		0xc6, 0x68, 0x1c, 0x2c, 0xa1, 0xe2, 0x74, 0x40,
+		0x26, 0x9b, 0x57, 0x53, 0xa3, 0x7c, 0x0b, 0x0d,
+		0xcf, 0x05, 0x5d, 0x62, 0x4f, 0x75, 0x06, 0x62,
+		0x1f, 0x26, 0x32, 0xaa, 0x25, 0xcc, 0x26, 0x8d,
+		0xae, 0x01, 0x47, 0xa3, 0x00, 0x42, 0xe2, 0x4c,
+		0xee, 0x29, 0xa2, 0x81, 0xa0, 0xfd, 0xeb, 0xff,
+		0x9a, 0x66, 0x6e, 0x47, 0x5b, 0xab, 0x93, 0x5a,
+		0x02, 0x6d, 0x6f, 0xf2, 0x6e, 0x02, 0x9d, 0xb1,
+		0xab, 0x56, 0xdc, 0x8b, 0x9b, 0x17, 0xa8, 0xfb,
+		0x87, 0x42, 0x7c, 0x91, 0x1e, 0x14, 0xc6, 0x6f,
+		0xdc, 0xf0, 0x27, 0x30, 0xfa, 0x3f, 0xc4, 0xad,
+		0x57, 0x85, 0xd2, 0xc9, 0x32, 0x2c, 0x13, 0xa6,
+		0x04, 0x04, 0x50, 0x05, 0x2f, 0x72, 0xd9, 0x44,
+		0x55, 0x6e, 0x93, 0x40, 0xed, 0x7e, 0xd4, 0x40,
+		0x3e, 0x88, 0x3b, 0x8b, 0xb6, 0xeb, 0xc6, 0x5d,
+		0x9c, 0x99, 0xa1, 0xcf, 0x30, 0xb2, 0xdc, 0x48,
+		0x8a, 0x01, 0xa7, 0x61, 0x77, 0x50, 0x14, 0xf3,
+		0x0c, 0x49, 0x53, 0xb3, 0xb4, 0xb4, 0x28, 0x41,
+		0x4a, 0x2d, 0xd2, 0x4d, 0x2a, 0x30, 0x31, 0x83,
+		0x03, 0x5e, 0xaa, 0xd3, 0xa3, 0xd1, 0xa1, 0xca,
+		0x62, 0xf0, 0xe1, 0xf2, 0xff, 0xf0, 0x19, 0xa6,
+		0xde, 0x22, 0x47, 0xb5, 0x28, 0x7d, 0xf7, 0x07,
+		0x16, 0x0d, 0xb1, 0x55, 0x81, 0x95, 0xe5, 0x1d,
+		0x4d, 0x78, 0xa9, 0x3e, 0xce, 0xe3, 0x1c, 0xf9,
+		0x47, 0xc8, 0xec, 0xc5, 0xc5, 0x93, 0x4c, 0x34,
+		0x20, 0x6b, 0xee, 0x9a, 0xe6, 0x86, 0x57, 0x58,
+		0xd5, 0x58, 0xf1, 0x33, 0x10, 0x29, 0x9e, 0x93,
+		0x2f, 0xf5, 0x90, 0x00, 0x17, 0x67, 0x4f, 0x39,
+		0x18, 0xe1, 0xcf, 0x55, 0x78, 0xbb, 0xe6, 0x29,
+		0x3e, 0x77, 0xd5, 0x48, 0xb7, 0x42, 0x72, 0x53,
+		0x27, 0xfa, 0x5b, 0xe0, 0x36, 0x14, 0x97, 0xb8,
+		0x9b, 0x3c, 0x09, 0x77, 0xc1, 0x0a, 0xe4, 0xa2,
+		0x63, 0xfc, 0xbe, 0x5c, 0x17, 0xcf, 0x01, 0xf5,
+		0x03, 0x0f, 0x17, 0xbc, 0x93, 0xdd, 0x5f, 0xe2,
+		0xf3, 0x08, 0xa8, 0xb1, 0x85, 0xb6, 0x34, 0x3f,
+		0x87, 0x42, 0xa5, 0x42, 0x3b, 0x0e, 0xd6, 0x83,
+		0x6a, 0xfd, 0x5d, 0xc9, 0x67, 0xd5, 0x51, 0xc9,
+		0x2a, 0x4e, 0x91, 0xb0, 0x59, 0xb2, 0x0f, 0xa2,
+		0xe6, 0x47, 0x73, 0xc2, 0xa2, 0xae, 0xbb, 0xc8,
+		0x42, 0xa3, 0x2a, 0x27, 0x29, 0x48, 0x8c, 0x54,
+		0x6c, 0xec, 0x00, 0x2a, 0x42, 0xa3, 0x7a, 0x0f,
+		0x12, 0x66, 0x6b, 0x96, 0xf6, 0xd0, 0x56, 0x4f,
+		0x49, 0x5c, 0x47, 0xec, 0x05, 0x62, 0x54, 0xb2,
+		0x64, 0x5a, 0x69, 0x1f, 0x19, 0xb4, 0x84, 0x5c,
+		0xbe, 0x48, 0x8e, 0xfc, 0x58, 0x21, 0xce, 0xfa,
+		0xaa, 0x84, 0xd2, 0xc1, 0x08, 0xb3, 0x87, 0x0f,
+		0x4f, 0xa3, 0x3a, 0xb6, 0x44, 0xbe, 0x2e, 0x9a,
+		0xdd, 0xb5, 0x44, 0x80, 0xca, 0xf4, 0xc3, 0x6e,
+		0xba, 0x93, 0x77, 0xe0, 0x53, 0xfb, 0x37, 0xfb,
+		0x88, 0xc3, 0x1f, 0x25, 0xde, 0x3e, 0x11, 0xf4,
+		0x89, 0xe7, 0xd1, 0x3b, 0xb4, 0x23, 0xcb, 0x70,
+		0xba, 0x35, 0x97, 0x7c, 0xbe, 0x84, 0x13, 0xcf,
+		0xe0, 0x4d, 0x33, 0x91, 0x71, 0x85, 0xbb, 0x4b,
+		0x97, 0x32, 0x5d, 0xa0, 0xb9, 0x8f, 0xdc, 0x27,
+		0x5a, 0xeb, 0x71, 0xf1, 0xd5, 0x0d, 0x65, 0xb4,
+		0x22, 0x81, 0xde, 0xa7, 0x58, 0x20, 0x0b, 0x18,
+		0x11, 0x76, 0x5c, 0xe6, 0x6a, 0x2c, 0x99, 0x69,
+		0xdc, 0xed, 0x67, 0x08, 0x5d, 0x5e, 0xe9, 0x1e,
+		0x55, 0x70, 0xc1, 0x5a, 0x76, 0x1b, 0x8d, 0x2e,
+		0x0d, 0xf9, 0xcc, 0x30, 0x8c, 0x44, 0x0f, 0x63,
+		0x8c, 0x42, 0x8a, 0x9f, 0x4c, 0xd1, 0x48, 0x28,
+		0x8a, 0xf5, 0x56, 0x2e, 0x23, 0x12, 0xfe, 0x67,
+		0x9a, 0x13, 0x65, 0x75, 0x83, 0xf1, 0x3c, 0x98,
+		0x07, 0x6b, 0xb7, 0x27, 0x5b, 0xf0, 0x70, 0xda,
+		0x30, 0xf8, 0x74, 0x4e, 0x7a, 0x32, 0x84, 0xcc,
+		0x0e, 0xcd, 0x80, 0x8b, 0x82, 0x31, 0x9a, 0x48,
+		0xcf, 0x75, 0x00, 0x1f, 0x4f, 0xe0, 0x8e, 0xa3,
+		0x6a, 0x2c, 0xd4, 0x73, 0x4c, 0x63, 0x7c, 0xa6,
+		0x4d, 0x5e, 0xfd, 0x43, 0x3b, 0x27, 0xe1, 0x5e,
+		0xa3, 0xa9, 0x5c, 0x3b, 0x60, 0xdd, 0xc6, 0x8d,
+		0x5a, 0xf1, 0x3e, 0x89, 0x4b, 0x24, 0xcf, 0x01,
+		0x3a, 0x2d, 0x44, 0xe7, 0xda, 0xe7, 0xa1, 0xac,
+		0x11, 0x05, 0x0c, 0xa9, 0x7a, 0x82, 0x8c, 0x5c,
+		0x29, 0x68, 0x9c, 0x73, 0x13, 0xcc, 0x67, 0x32,
+		0x11, 0x5e, 0xe5, 0xcc, 0x8c, 0xf5, 0xa7, 0x52,
+		0x83, 0x9a, 0x70, 0xef, 0xde, 0x55, 0x9c, 0xc7,
+		0x8a, 0xed, 0xad, 0x28, 0x4a, 0xc5, 0x92, 0x6d,
+		0x8e, 0x47, 0xca, 0xe3, 0xf8, 0x77, 0xb5, 0x26,
+		0x64, 0x84, 0xc2, 0xf1, 0xd7, 0xae, 0x0c, 0xb9,
+		0x39, 0x0f, 0x43, 0x6b, 0xe9, 0xe0, 0x09, 0x4b,
+		0xe5, 0xe3, 0x17, 0xa6, 0x68, 0x69, 0x46, 0xf4,
+		0xf0, 0x68, 0x7f, 0x2f, 0x1c, 0x7e, 0x4c, 0xd2,
+		0xb5, 0xc6, 0x16, 0x85, 0xcf, 0x02, 0x4c, 0x89,
+		0x0b, 0x25, 0xb0, 0xeb, 0xf3, 0x77, 0x08, 0x6a,
+		0x46, 0x5c, 0xf6, 0x2f, 0xf1, 0x24, 0xc3, 0x4d,
+		0x80, 0x60, 0x4d, 0x69, 0x98, 0xde, 0xc7, 0xa1,
+		0xf6, 0x4e, 0x18, 0x0c, 0x2a, 0xb0, 0xb2, 0xe0,
+		0x46, 0xe7, 0x49, 0x37, 0xc8, 0x5a, 0x23, 0x24,
+		0xe3, 0x0f, 0xcc, 0x92, 0xb4, 0x8d, 0xdc, 0x9e
+};
+
+static const uint8_t AES_CBC_ciphertext_1280B[] = {
+		0x91, 0x99, 0x5e, 0x9e, 0x84, 0xff, 0x59, 0x45,
+		0xc1, 0xf4, 0xbc, 0x9c, 0xb9, 0x30, 0x6c, 0x51,
+		0x73, 0x52, 0xb4, 0x44, 0x09, 0x79, 0xe2, 0x89,
+		0x75, 0xeb, 0x54, 0x26, 0xce, 0xd8, 0x24, 0x98,
+		0xaa, 0xf8, 0x13, 0x16, 0x68, 0x58, 0xc4, 0x82,
+		0x0e, 0x31, 0xd3, 0x6a, 0x13, 0x58, 0x31, 0xe9,
+		0x3a, 0xc1, 0x8b, 0xc5, 0x3f, 0x50, 0x42, 0xd1,
+		0x93, 0xe4, 0x9b, 0x65, 0x2b, 0xf4, 0x1d, 0x9e,
+		0x2d, 0xdb, 0x48, 0xef, 0x9a, 0x01, 0x68, 0xb6,
+		0xea, 0x7a, 0x2b, 0xad, 0xfe, 0x77, 0x44, 0x7e,
+		0x5a, 0xc5, 0x64, 0xb4, 0xfe, 0x5c, 0x80, 0xf3,
+		0x20, 0x7e, 0xaf, 0x5b, 0xf8, 0xd1, 0x38, 0xa0,
+		0x8d, 0x09, 0x77, 0x06, 0xfe, 0xf5, 0xf4, 0xe4,
+		0xee, 0xb8, 0x95, 0x27, 0xed, 0x07, 0xb8, 0xaa,
+		0x25, 0xb4, 0xe1, 0x4c, 0xeb, 0x3f, 0xdb, 0x39,
+		0x66, 0x28, 0x1b, 0x60, 0x42, 0x8b, 0x99, 0xd9,
+		0x49, 0xd6, 0x8c, 0xa4, 0x9d, 0xd8, 0x93, 0x58,
+		0x8f, 0xfa, 0xd3, 0xf7, 0x37, 0x9c, 0x88, 0xab,
+		0x16, 0x50, 0xfe, 0x01, 0x1f, 0x88, 0x48, 0xbe,
+		0x21, 0xa9, 0x90, 0x9e, 0x73, 0xe9, 0x82, 0xf7,
+		0xbf, 0x4b, 0x43, 0xf4, 0xbf, 0x22, 0x3c, 0x45,
+		0x47, 0x95, 0x5b, 0x49, 0x71, 0x07, 0x1c, 0x8b,
+		0x49, 0xa4, 0xa3, 0x49, 0xc4, 0x5f, 0xb1, 0xf5,
+		0xe3, 0x6b, 0xf1, 0xdc, 0xea, 0x92, 0x7b, 0x29,
+		0x40, 0xc9, 0x39, 0x5f, 0xdb, 0xbd, 0xf3, 0x6a,
+		0x09, 0x9b, 0x2a, 0x5e, 0xc7, 0x0b, 0x25, 0x94,
+		0x55, 0x71, 0x9c, 0x7e, 0x0e, 0xb4, 0x08, 0x12,
+		0x8c, 0x6e, 0x77, 0xb8, 0x29, 0xf1, 0xc6, 0x71,
+		0x04, 0x40, 0x77, 0x18, 0x3f, 0x01, 0x09, 0x9c,
+		0x23, 0x2b, 0x5d, 0x2a, 0x88, 0x20, 0x23, 0x59,
+		0x74, 0x2a, 0x67, 0x8f, 0xb7, 0xba, 0x38, 0x9f,
+		0x0f, 0xcf, 0x94, 0xdf, 0xe1, 0x8f, 0x35, 0x5e,
+		0x34, 0x0c, 0x32, 0x92, 0x2b, 0x23, 0x81, 0xf4,
+		0x73, 0xa0, 0x5a, 0x2a, 0xbd, 0xa6, 0x6b, 0xae,
+		0x43, 0xe2, 0xdc, 0x01, 0xc1, 0xc6, 0xc3, 0x04,
+		0x06, 0xbb, 0xb0, 0x89, 0xb3, 0x4e, 0xbd, 0x81,
+		0x1b, 0x03, 0x63, 0x93, 0xed, 0x4e, 0xf6, 0xe5,
+		0x94, 0x6f, 0xd6, 0xf3, 0x20, 0xf3, 0xbc, 0x30,
+		0xc5, 0xd6, 0xbe, 0x1c, 0x05, 0x34, 0x26, 0x4d,
+		0x46, 0x5e, 0x56, 0x63, 0xfb, 0xdb, 0xcd, 0xed,
+		0xb0, 0x7f, 0x83, 0x94, 0x55, 0x54, 0x2f, 0xab,
+		0xc9, 0xb7, 0x16, 0x4f, 0x9e, 0x93, 0x25, 0xd7,
+		0x9f, 0x39, 0x2b, 0x63, 0xcf, 0x1e, 0xa3, 0x0e,
+		0x28, 0x47, 0x8a, 0x5f, 0x40, 0x02, 0x89, 0x1f,
+		0x83, 0xe7, 0x87, 0xd1, 0x90, 0x17, 0xb8, 0x27,
+		0x64, 0xe1, 0xe1, 0x48, 0x5a, 0x55, 0x74, 0x99,
+		0x27, 0x9d, 0x05, 0x67, 0xda, 0x70, 0x12, 0x8f,
+		0x94, 0x96, 0xfd, 0x36, 0xa4, 0x1d, 0x22, 0xe5,
+		0x0b, 0xe5, 0x2f, 0x38, 0x55, 0xa3, 0x5d, 0x0b,
+		0xcf, 0xd4, 0xa9, 0xb8, 0xd6, 0x9a, 0x16, 0x2e,
+		0x6c, 0x4a, 0x25, 0x51, 0x7a, 0x09, 0x48, 0xdd,
+		0xf0, 0xa3, 0x5b, 0x08, 0x1e, 0x2f, 0x03, 0x91,
+		0x80, 0xe8, 0x0f, 0xe9, 0x5a, 0x2f, 0x90, 0xd3,
+		0x64, 0xed, 0xd7, 0x51, 0x17, 0x66, 0x53, 0x40,
+		0x43, 0x74, 0xef, 0x0a, 0x0d, 0x49, 0x41, 0xf2,
+		0x67, 0x6e, 0xea, 0x14, 0xc8, 0x74, 0xd6, 0xa9,
+		0xb9, 0x6a, 0xe3, 0xec, 0x7d, 0xe8, 0x6a, 0x21,
+		0x3a, 0x52, 0x42, 0xfe, 0x9a, 0x15, 0x6d, 0x60,
+		0x64, 0x88, 0xc5, 0xb2, 0x8b, 0x15, 0x2c, 0xff,
+		0xe2, 0x35, 0xc3, 0xee, 0x9f, 0xcd, 0x82, 0xd9,
+		0x14, 0x35, 0x2a, 0xb7, 0xf5, 0x2f, 0x7b, 0xbc,
+		0x01, 0xfd, 0xa8, 0xe0, 0x21, 0x4e, 0x73, 0xf9,
+		0xf2, 0xb0, 0x79, 0xc9, 0x10, 0x52, 0x8f, 0xa8,
+		0x3e, 0x3b, 0xbe, 0xc5, 0xde, 0xf6, 0x53, 0xe3,
+		0x1c, 0x25, 0x3a, 0x1f, 0x13, 0xbf, 0x13, 0xbb,
+		0x94, 0xc2, 0x97, 0x43, 0x64, 0x47, 0x8f, 0x76,
+		0xd7, 0xaa, 0xeb, 0xa4, 0x03, 0x50, 0x0c, 0x10,
+		0x50, 0xd8, 0xf7, 0x75, 0x52, 0x42, 0xe2, 0x94,
+		0x67, 0xf4, 0x60, 0xfb, 0x21, 0x9b, 0x7a, 0x05,
+		0x50, 0x7c, 0x1b, 0x4a, 0x8b, 0x29, 0xe1, 0xac,
+		0xd7, 0x99, 0xfd, 0x0d, 0x65, 0x92, 0xcd, 0x23,
+		0xa7, 0x35, 0x8e, 0x13, 0xf2, 0xe4, 0x10, 0x74,
+		0xc6, 0x4f, 0x19, 0xf7, 0x01, 0x0b, 0x46, 0xab,
+		0xef, 0x8d, 0x4a, 0x4a, 0xfa, 0xda, 0xf3, 0xfb,
+		0x40, 0x28, 0x88, 0xa2, 0x65, 0x98, 0x4d, 0x88,
+		0xc7, 0xbf, 0x00, 0xc8, 0xd0, 0x91, 0xcb, 0x89,
+		0x2f, 0xb0, 0x85, 0xfc, 0xa1, 0xc1, 0x9e, 0x83,
+		0x88, 0xad, 0x95, 0xc0, 0x31, 0xa0, 0xad, 0xa2,
+		0x42, 0xb5, 0xe7, 0x55, 0xd4, 0x93, 0x5a, 0x74,
+		0x4e, 0x41, 0xc3, 0xcf, 0x96, 0x83, 0x46, 0xa1,
+		0xb7, 0x5b, 0xb1, 0x34, 0x67, 0x4e, 0xb1, 0xd7,
+		0x40, 0x20, 0x72, 0xe9, 0xc8, 0x74, 0xb7, 0xde,
+		0x72, 0x29, 0x77, 0x4c, 0x74, 0x7e, 0xcc, 0x18,
+		0xa5, 0x8d, 0x79, 0x8c, 0xd6, 0x6e, 0xcb, 0xd9,
+		0xe1, 0x61, 0xe7, 0x36, 0xbc, 0x37, 0xea, 0xee,
+		0xd8, 0x3c, 0x5e, 0x7c, 0x47, 0x50, 0xd5, 0xec,
+		0x37, 0xc5, 0x63, 0xc3, 0xc9, 0x99, 0x23, 0x9f,
+		0x64, 0x39, 0xdf, 0x13, 0x96, 0x6d, 0xea, 0x08,
+		0x0c, 0x27, 0x2d, 0xfe, 0x0f, 0xc2, 0xa3, 0x97,
+		0x04, 0x12, 0x66, 0x0d, 0x94, 0xbf, 0xbe, 0x3e,
+		0xb9, 0xcf, 0x8e, 0xc1, 0x9d, 0xb1, 0x64, 0x17,
+		0x54, 0x92, 0x3f, 0x0a, 0x51, 0xc8, 0xf5, 0x82,
+		0x98, 0x73, 0x03, 0xc0, 0x5a, 0x51, 0x01, 0x67,
+		0xb4, 0x01, 0x04, 0x06, 0xbc, 0x37, 0xde, 0x96,
+		0x23, 0x3c, 0xce, 0x98, 0x3f, 0xd6, 0x51, 0x1b,
+		0x01, 0x83, 0x0a, 0x1c, 0xf9, 0xeb, 0x7e, 0x72,
+		0xa9, 0x51, 0x23, 0xc8, 0xd7, 0x2f, 0x12, 0xbc,
+		0x08, 0xac, 0x07, 0xe7, 0xa7, 0xe6, 0x46, 0xae,
+		0x54, 0xa3, 0xc2, 0xf2, 0x05, 0x2d, 0x06, 0x5e,
+		0xfc, 0xe2, 0xa2, 0x23, 0xac, 0x86, 0xf2, 0x54,
+		0x83, 0x4a, 0xb6, 0x48, 0x93, 0xa1, 0x78, 0xc2,
+		0x07, 0xec, 0x82, 0xf0, 0x74, 0xa9, 0x18, 0xe9,
+		0x53, 0x44, 0x49, 0xc2, 0x94, 0xf8, 0x94, 0x92,
+		0x08, 0x3f, 0xbf, 0xa6, 0xe5, 0xc6, 0x03, 0x8a,
+		0xc6, 0x90, 0x48, 0x6c, 0xee, 0xbd, 0x44, 0x92,
+		0x1f, 0x2a, 0xce, 0x1d, 0xb8, 0x31, 0xa2, 0x9d,
+		0x24, 0x93, 0xa8, 0x9f, 0x36, 0x00, 0x04, 0x7b,
+		0xcb, 0x93, 0x59, 0xa1, 0x53, 0xdb, 0x13, 0x7a,
+		0x54, 0xb1, 0x04, 0xdb, 0xce, 0x48, 0x4f, 0xe5,
+		0x2f, 0xcb, 0xdf, 0x8f, 0x50, 0x7c, 0xfc, 0x76,
+		0x80, 0xb4, 0xdc, 0x3b, 0xc8, 0x98, 0x95, 0xf5,
+		0x50, 0xba, 0x70, 0x5a, 0x97, 0xd5, 0xfc, 0x98,
+		0x4d, 0xf3, 0x61, 0x0f, 0xcf, 0xac, 0x49, 0x0a,
+		0xdb, 0xc1, 0x42, 0x8f, 0xb6, 0x29, 0xd5, 0x65,
+		0xef, 0x83, 0xf1, 0x30, 0x4b, 0x84, 0xd0, 0x69,
+		0xde, 0xd2, 0x99, 0xe5, 0xec, 0xd3, 0x90, 0x86,
+		0x39, 0x2a, 0x6e, 0xd5, 0x32, 0xe3, 0x0d, 0x2d,
+		0x01, 0x8b, 0x17, 0x55, 0x1d, 0x65, 0x57, 0xbf,
+		0xd8, 0x75, 0xa4, 0x85, 0xb6, 0x4e, 0x35, 0x14,
+		0x58, 0xe4, 0x89, 0xb8, 0x7a, 0x58, 0x86, 0x0c,
+		0xbd, 0x8b, 0x05, 0x7b, 0x63, 0xc0, 0x86, 0x80,
+		0x33, 0x46, 0xd4, 0x9b, 0xb6, 0x0a, 0xeb, 0x6c,
+		0xae, 0xd6, 0x57, 0x7a, 0xc7, 0x59, 0x33, 0xa0,
+		0xda, 0xa4, 0x12, 0xbf, 0x52, 0x22, 0x05, 0x8d,
+		0xeb, 0xee, 0xd5, 0xec, 0xea, 0x29, 0x9b, 0x76,
+		0x95, 0x50, 0x6d, 0x99, 0xe1, 0x45, 0x63, 0x09,
+		0x16, 0x5f, 0xb0, 0xf2, 0x5b, 0x08, 0x33, 0xdd,
+		0x8f, 0xb7, 0x60, 0x7a, 0x8e, 0xc6, 0xfc, 0xac,
+		0xa9, 0x56, 0x2c, 0xa9, 0x8b, 0x74, 0x33, 0xad,
+		0x2a, 0x7e, 0x96, 0xb6, 0xba, 0x22, 0x28, 0xcf,
+		0x4d, 0x96, 0xb7, 0xd1, 0xfa, 0x99, 0x4a, 0x61,
+		0xe6, 0x84, 0xd1, 0x94, 0xca, 0xf5, 0x86, 0xb0,
+		0xba, 0x34, 0x7a, 0x04, 0xcc, 0xd4, 0x81, 0xcd,
+		0xd9, 0x86, 0xb6, 0xe0, 0x5a, 0x6f, 0x9b, 0x99,
+		0xf0, 0xdf, 0x49, 0xae, 0x6d, 0xc2, 0x54, 0x67,
+		0xe0, 0xb4, 0x34, 0x2d, 0x1c, 0x46, 0xdf, 0x73,
+		0x3b, 0x45, 0x43, 0xe7, 0x1f, 0xa3, 0x36, 0x35,
+		0x25, 0x33, 0xd9, 0xc0, 0x54, 0x38, 0x6e, 0x6b,
+		0x80, 0xcf, 0x50, 0xa4, 0xb6, 0x21, 0x17, 0xfd,
+		0x9b, 0x5c, 0x36, 0xca, 0xcc, 0x73, 0x73, 0xad,
+		0xe0, 0x57, 0x77, 0x90, 0x0e, 0x7f, 0x0f, 0x87,
+		0x7f, 0xdb, 0x73, 0xbf, 0xda, 0xc2, 0xb3, 0x05,
+		0x22, 0x06, 0xf5, 0xa3, 0xfc, 0x1e, 0x8f, 0xda,
+		0xcf, 0x49, 0xd6, 0xb3, 0x66, 0x2c, 0xb5, 0x00,
+		0xaf, 0x85, 0x6e, 0xb8, 0x5b, 0x8c, 0xa1, 0xa4,
+		0x21, 0xce, 0x40, 0xf3, 0x98, 0xac, 0xec, 0x88,
+		0x62, 0x43, 0x2a, 0xac, 0xca, 0xcf, 0xb9, 0x30,
+		0xeb, 0xfc, 0xef, 0xf0, 0x6e, 0x64, 0x6d, 0xe7,
+		0x54, 0x88, 0x6b, 0x22, 0x29, 0xbe, 0xa5, 0x8c,
+		0x31, 0x23, 0x3b, 0x4a, 0x80, 0x37, 0xe6, 0xd0,
+		0x05, 0xfc, 0x10, 0x0e, 0xdd, 0xbb, 0x00, 0xc5,
+		0x07, 0x20, 0x59, 0xd3, 0x41, 0x17, 0x86, 0x46,
+		0xab, 0x68, 0xf6, 0x48, 0x3c, 0xea, 0x5a, 0x06,
+		0x30, 0x21, 0x19, 0xed, 0x74, 0xbe, 0x0b, 0x97,
+		0xee, 0x91, 0x35, 0x94, 0x1f, 0xcb, 0x68, 0x7f,
+		0xe4, 0x48, 0xb0, 0x16, 0xfb, 0xf0, 0x74, 0xdb,
+		0x06, 0x59, 0x2e, 0x5a, 0x9c, 0xce, 0x8f, 0x7d,
+		0xba, 0x48, 0xd5, 0x3f, 0x5c, 0xb0, 0xc2, 0x33,
+		0x48, 0x60, 0x17, 0x08, 0x85, 0xba, 0xff, 0xb9,
+		0x34, 0x0a, 0x3d, 0x8f, 0x21, 0x13, 0x12, 0x1b
+};
+
+static const uint8_t AES_CBC_ciphertext_1536B[] = {
+		0x89, 0x93, 0x05, 0x99, 0xa9, 0xed, 0xea, 0x62,
+		0xc9, 0xda, 0x51, 0x15, 0xce, 0x42, 0x91, 0xc3,
+		0x80, 0xc8, 0x03, 0x88, 0xc2, 0x63, 0xda, 0x53,
+		0x1a, 0xf3, 0xeb, 0xd5, 0xba, 0x6f, 0x23, 0xb2,
+		0xed, 0x8f, 0x89, 0xb1, 0xb3, 0xca, 0x90, 0x7a,
+		0xdd, 0x3f, 0xf6, 0xca, 0x86, 0x58, 0x54, 0xbc,
+		0xab, 0x0f, 0xf4, 0xab, 0x6d, 0x5d, 0x42, 0xd0,
+		0x17, 0x49, 0x17, 0xd1, 0x93, 0xea, 0xe8, 0x22,
+		0xc1, 0x34, 0x9f, 0x3a, 0x3b, 0xaa, 0xe9, 0x1b,
+		0x93, 0xff, 0x6b, 0x68, 0xba, 0xe6, 0xd2, 0x39,
+		0x3d, 0x55, 0x34, 0x8f, 0x98, 0x86, 0xb4, 0xd8,
+		0x7c, 0x0d, 0x3e, 0x01, 0x63, 0x04, 0x01, 0xff,
+		0x16, 0x0f, 0x51, 0x5f, 0x73, 0x53, 0xf0, 0x3a,
+		0x38, 0xb4, 0x4d, 0x8d, 0xaf, 0xa3, 0xca, 0x2f,
+		0x6f, 0xdf, 0xc0, 0x41, 0x6c, 0x48, 0x60, 0x1a,
+		0xe4, 0xe7, 0x8a, 0x65, 0x6f, 0x8d, 0xd7, 0xe1,
+		0x10, 0xab, 0x78, 0x5b, 0xb9, 0x69, 0x1f, 0xe0,
+		0x5c, 0xf1, 0x19, 0x12, 0x21, 0xc7, 0x51, 0xbc,
+		0x61, 0x5f, 0xc0, 0x36, 0x17, 0xc0, 0x28, 0xd9,
+		0x51, 0xcb, 0x43, 0xd9, 0xfa, 0xd1, 0xad, 0x79,
+		0x69, 0x86, 0x49, 0xc5, 0xe5, 0x69, 0x27, 0xce,
+		0x22, 0xd0, 0xe1, 0x6a, 0xf9, 0x02, 0xca, 0x6c,
+		0x34, 0xc7, 0xb8, 0x02, 0xc1, 0x38, 0x7f, 0xd5,
+		0x15, 0xf5, 0xd6, 0xeb, 0xf9, 0x30, 0x40, 0x43,
+		0xea, 0x87, 0xde, 0x35, 0xf6, 0x83, 0x59, 0x09,
+		0x68, 0x62, 0x00, 0x87, 0xb8, 0xe7, 0xca, 0x05,
+		0x0f, 0xac, 0x42, 0x58, 0x45, 0xaa, 0xc9, 0x9b,
+		0xfd, 0x2a, 0xda, 0x65, 0x33, 0x93, 0x9d, 0xc6,
+		0x93, 0x8d, 0xe2, 0xc5, 0x71, 0xc1, 0x5c, 0x13,
+		0xde, 0x7b, 0xd4, 0xb9, 0x4c, 0x35, 0x61, 0x85,
+		0x90, 0x78, 0xf7, 0x81, 0x98, 0x45, 0x99, 0x24,
+		0x58, 0x73, 0x28, 0xf8, 0x31, 0xab, 0x54, 0x2e,
+		0xc0, 0x38, 0x77, 0x25, 0x5c, 0x06, 0x9c, 0xc3,
+		0x69, 0x21, 0x92, 0x76, 0xe1, 0x16, 0xdc, 0xa9,
+		0xee, 0xb6, 0x80, 0x66, 0x43, 0x11, 0x24, 0xb3,
+		0x07, 0x17, 0x89, 0x0f, 0xcb, 0xe0, 0x60, 0xa8,
+		0x9d, 0x06, 0x4b, 0x6e, 0x72, 0xb7, 0xbc, 0x4f,
+		0xb8, 0xc0, 0x80, 0xa2, 0xfb, 0x46, 0x5b, 0x8f,
+		0x11, 0x01, 0x92, 0x9d, 0x37, 0x09, 0x98, 0xc8,
+		0x0a, 0x46, 0xae, 0x12, 0xac, 0x61, 0x3f, 0xe7,
+		0x41, 0x1a, 0xaa, 0x2e, 0xdc, 0xd7, 0x2a, 0x47,
+		0xee, 0xdf, 0x08, 0xd1, 0xff, 0xea, 0x13, 0xc6,
+		0x05, 0xdb, 0x29, 0xcc, 0x03, 0xba, 0x7b, 0x6d,
+		0x40, 0xc1, 0xc9, 0x76, 0x75, 0x03, 0x7a, 0x71,
+		0xc9, 0x5f, 0xd9, 0xe0, 0x61, 0x69, 0x36, 0x8f,
+		0xb2, 0xbc, 0x28, 0xf3, 0x90, 0x71, 0xda, 0x5f,
+		0x08, 0xd5, 0x0d, 0xc1, 0xe6, 0xbd, 0x2b, 0xc6,
+		0x6c, 0x42, 0xfd, 0xbf, 0x10, 0xe8, 0x5f, 0x87,
+		0x3d, 0x21, 0x42, 0x85, 0x01, 0x0a, 0xbf, 0x8e,
+		0x49, 0xd3, 0x9c, 0x89, 0x3b, 0xea, 0xe1, 0xbf,
+		0xe9, 0x9b, 0x5e, 0x0e, 0xb8, 0xeb, 0xcd, 0x3a,
+		0xf6, 0x29, 0x41, 0x35, 0xdd, 0x9b, 0x13, 0x24,
+		0xe0, 0x1d, 0x8a, 0xcb, 0x20, 0xf8, 0x41, 0x51,
+		0x3e, 0x23, 0x8c, 0x67, 0x98, 0x39, 0x53, 0x77,
+		0x2a, 0x68, 0xf4, 0x3c, 0x7e, 0xd6, 0xc4, 0x6e,
+		0xf1, 0x53, 0xe9, 0xd8, 0x5c, 0xc1, 0xa9, 0x38,
+		0x6f, 0x5e, 0xe4, 0xd4, 0x29, 0x1c, 0x6c, 0xee,
+		0x2f, 0xea, 0xde, 0x61, 0x71, 0x5a, 0xea, 0xce,
+		0x23, 0x6e, 0x1b, 0x16, 0x43, 0xb7, 0xc0, 0xe3,
+		0x87, 0xa1, 0x95, 0x1e, 0x97, 0x4d, 0xea, 0xa6,
+		0xf7, 0x25, 0xac, 0x82, 0x2a, 0xd3, 0xa6, 0x99,
+		0x75, 0xdd, 0xc1, 0x55, 0x32, 0x6b, 0xea, 0x33,
+		0x88, 0xce, 0x06, 0xac, 0x15, 0x39, 0x19, 0xa3,
+		0x59, 0xaf, 0x7a, 0x1f, 0xd9, 0x72, 0x5e, 0xf7,
+		0x4c, 0xf3, 0x5d, 0x6b, 0xf2, 0x16, 0x92, 0xa8,
+		0x9e, 0x3d, 0xd4, 0x4c, 0x72, 0x55, 0x4e, 0x4a,
+		0xf7, 0x8b, 0x2f, 0x67, 0x5a, 0x90, 0xb7, 0xcf,
+		0x16, 0xd3, 0x7b, 0x5a, 0x9a, 0xc8, 0x9f, 0xbf,
+		0x01, 0x76, 0x3b, 0x86, 0x2c, 0x2a, 0x78, 0x10,
+		0x70, 0x05, 0x38, 0xf9, 0xdd, 0x2a, 0x1d, 0x00,
+		0x25, 0xb7, 0x10, 0xac, 0x3b, 0x3c, 0x4d, 0x3c,
+		0x01, 0x68, 0x3c, 0x5a, 0x29, 0xc2, 0xa0, 0x1b,
+		0x95, 0x67, 0xf9, 0x0a, 0x60, 0xb7, 0x11, 0x9c,
+		0x40, 0x45, 0xd7, 0xb0, 0xda, 0x49, 0x87, 0xcd,
+		0xb0, 0x9b, 0x61, 0x8c, 0xf4, 0x0d, 0x94, 0x1d,
+		0x79, 0x66, 0x13, 0x0b, 0xc6, 0x6b, 0x19, 0xee,
+		0xa0, 0x6b, 0x64, 0x7d, 0xc4, 0xff, 0x98, 0x72,
+		0x60, 0xab, 0x7f, 0x0f, 0x4d, 0x5d, 0x6b, 0xc3,
+		0xba, 0x5e, 0x0d, 0x04, 0xd9, 0x59, 0x17, 0xd0,
+		0x64, 0xbe, 0xfb, 0x58, 0xfc, 0xed, 0x18, 0xf6,
+		0xac, 0x19, 0xa4, 0xfd, 0x16, 0x59, 0x80, 0x58,
+		0xb8, 0x0f, 0x79, 0x24, 0x60, 0x18, 0x62, 0xa9,
+		0xa3, 0xa0, 0xe8, 0x81, 0xd6, 0xec, 0x5b, 0xfe,
+		0x5b, 0xb8, 0xa4, 0x00, 0xa9, 0xd0, 0x90, 0x17,
+		0xe5, 0x50, 0x3d, 0x2b, 0x12, 0x6e, 0x2a, 0x13,
+		0x65, 0x7c, 0xdf, 0xdf, 0xa7, 0xdd, 0x9f, 0x78,
+		0x5f, 0x8f, 0x4e, 0x90, 0xa6, 0x10, 0xe4, 0x7b,
+		0x68, 0x6b, 0xfd, 0xa9, 0x6d, 0x47, 0xfa, 0xec,
+		0x42, 0x35, 0x07, 0x12, 0x3e, 0x78, 0x23, 0x15,
+		0xff, 0xe2, 0x65, 0xc7, 0x47, 0x89, 0x2f, 0x97,
+		0x7c, 0xd7, 0x6b, 0x69, 0x35, 0x79, 0x6f, 0x85,
+		0xb4, 0xa9, 0x75, 0x04, 0x32, 0x9a, 0xfe, 0xf0,
+		0xce, 0xe3, 0xf1, 0xab, 0x15, 0x47, 0xe4, 0x9c,
+		0xc1, 0x48, 0x32, 0x3c, 0xbe, 0x44, 0x72, 0xc9,
+		0xaa, 0x50, 0x37, 0xa6, 0xbe, 0x41, 0xcf, 0xe8,
+		0x17, 0x4e, 0x37, 0xbe, 0xf1, 0x34, 0x2c, 0xd9,
+		0x60, 0x48, 0x09, 0xa5, 0x26, 0x00, 0x31, 0x77,
+		0x4e, 0xac, 0x7c, 0x89, 0x75, 0xe3, 0xde, 0x26,
+		0x4c, 0x32, 0x54, 0x27, 0x8e, 0x92, 0x26, 0x42,
+		0x85, 0x76, 0x01, 0x76, 0x62, 0x4c, 0x29, 0xe9,
+		0x38, 0x05, 0x51, 0x54, 0x97, 0xa3, 0x03, 0x59,
+		0x5e, 0xec, 0x0c, 0xe4, 0x96, 0xb7, 0x15, 0xa8,
+		0x41, 0x06, 0x2b, 0x78, 0x95, 0x24, 0xf6, 0x32,
+		0xc5, 0xec, 0xd7, 0x89, 0x28, 0x1e, 0xec, 0xb1,
+		0xc7, 0x21, 0x0c, 0xd3, 0x80, 0x7c, 0x5a, 0xe6,
+		0xb1, 0x3a, 0x52, 0x33, 0x84, 0x4e, 0x32, 0x6e,
+		0x7a, 0xf6, 0x43, 0x15, 0x5b, 0xa6, 0xba, 0xeb,
+		0xa8, 0xe4, 0xff, 0x4f, 0xbd, 0xbd, 0xa8, 0x5e,
+		0xbe, 0x27, 0xaf, 0xc5, 0xf7, 0x9e, 0xdf, 0x48,
+		0x22, 0xca, 0x6a, 0x0b, 0x3c, 0xd7, 0xe0, 0xdc,
+		0xf3, 0x71, 0x08, 0xdc, 0x28, 0x13, 0x08, 0xf2,
+		0x08, 0x1d, 0x9d, 0x7b, 0xd9, 0xde, 0x6f, 0xe6,
+		0xe8, 0x88, 0x18, 0xc2, 0xcd, 0x93, 0xc5, 0x38,
+		0x21, 0x68, 0x4c, 0x9a, 0xfb, 0xb6, 0x18, 0x16,
+		0x73, 0x2c, 0x1d, 0x6f, 0x95, 0xfb, 0x65, 0x4f,
+		0x7c, 0xec, 0x8d, 0x6c, 0xa8, 0xc0, 0x55, 0x28,
+		0xc6, 0xc3, 0xea, 0xeb, 0x05, 0xf5, 0x65, 0xeb,
+		0x53, 0xe1, 0x54, 0xef, 0xb8, 0x64, 0x98, 0x2d,
+		0x98, 0x9e, 0xc8, 0xfe, 0xa2, 0x07, 0x30, 0xf7,
+		0xf7, 0xae, 0xdb, 0x32, 0xf8, 0x71, 0x9d, 0x06,
+		0xdf, 0x9b, 0xda, 0x61, 0x7d, 0xdb, 0xae, 0x06,
+		0x24, 0x63, 0x74, 0xb6, 0xf3, 0x1b, 0x66, 0x09,
+		0x60, 0xff, 0x2b, 0x29, 0xf5, 0xa9, 0x9d, 0x61,
+		0x5d, 0x55, 0x10, 0x82, 0x21, 0xbb, 0x64, 0x0d,
+		0xef, 0x5c, 0xe3, 0x30, 0x1b, 0x60, 0x1e, 0x5b,
+		0xfe, 0x6c, 0xf5, 0x15, 0xa3, 0x86, 0x27, 0x58,
+		0x46, 0x00, 0x20, 0xcb, 0x86, 0x9a, 0x52, 0x29,
+		0x20, 0x68, 0x4d, 0x67, 0x88, 0x70, 0xc2, 0x31,
+		0xd8, 0xbb, 0xa5, 0xa7, 0x88, 0x7f, 0x66, 0xbc,
+		0xaa, 0x0f, 0xe1, 0x78, 0x7b, 0x97, 0x3c, 0xb7,
+		0xd7, 0xd8, 0x04, 0xe0, 0x09, 0x60, 0xc8, 0xd0,
+		0x9e, 0xe5, 0x6b, 0x31, 0x7f, 0x88, 0xfe, 0xc3,
+		0xfd, 0x89, 0xec, 0x76, 0x4b, 0xb3, 0xa7, 0x37,
+		0x03, 0xb7, 0xc6, 0x10, 0x7c, 0x9d, 0x0c, 0x75,
+		0xd3, 0x08, 0x14, 0x94, 0x03, 0x42, 0x25, 0x26,
+		0x85, 0xf7, 0xf0, 0x90, 0x06, 0x3e, 0x6f, 0x60,
+		0x52, 0x55, 0xd5, 0x0f, 0x79, 0x64, 0x69, 0x69,
+		0x46, 0xf9, 0x7f, 0x7f, 0x03, 0xf1, 0x1f, 0xdb,
+		0x39, 0x05, 0xba, 0x4a, 0x8f, 0x17, 0xe7, 0xba,
+		0xe2, 0x07, 0x7c, 0x1d, 0x9e, 0xbc, 0x94, 0xc0,
+		0x61, 0x59, 0x8e, 0x72, 0xaf, 0xfc, 0x99, 0xe4,
+		0xd5, 0xa8, 0xee, 0x0a, 0x48, 0x2d, 0x82, 0x8b,
+		0x34, 0x54, 0x8a, 0xce, 0xc7, 0xfa, 0xdd, 0xba,
+		0x54, 0xdf, 0xb3, 0x30, 0x33, 0x73, 0x2e, 0xd5,
+		0x52, 0xab, 0x49, 0x91, 0x4e, 0x0a, 0xd6, 0x2f,
+		0x67, 0xe4, 0xdd, 0x64, 0x48, 0x16, 0xd9, 0x85,
+		0xaa, 0x52, 0xa5, 0x0b, 0xd3, 0xb4, 0x2d, 0x77,
+		0x5e, 0x52, 0x77, 0x17, 0xcf, 0xbe, 0x88, 0x04,
+		0x01, 0x52, 0xe2, 0xf1, 0x46, 0xe2, 0x91, 0x30,
+		0x65, 0xcf, 0xc0, 0x65, 0x45, 0xc3, 0x7e, 0xf4,
+		0x2e, 0xb5, 0xaf, 0x6f, 0xab, 0x1a, 0xfa, 0x70,
+		0x35, 0xb8, 0x4f, 0x2d, 0x78, 0x90, 0x33, 0xb5,
+		0x9a, 0x67, 0xdb, 0x2f, 0x28, 0x32, 0xb6, 0x54,
+		0xab, 0x4c, 0x6b, 0x85, 0xed, 0x6c, 0x3e, 0x05,
+		0x2a, 0xc7, 0x32, 0xe8, 0xf5, 0xa3, 0x7b, 0x4e,
+		0x7b, 0x58, 0x24, 0x73, 0xf7, 0xfd, 0xc7, 0xc8,
+		0x6c, 0x71, 0x68, 0xb1, 0xf6, 0xc5, 0x9e, 0x1e,
+		0xe3, 0x5c, 0x25, 0xc0, 0x5b, 0x3e, 0x59, 0xa1,
+		0x18, 0x5a, 0xe8, 0xb5, 0xd1, 0x44, 0x13, 0xa3,
+		0xe6, 0x05, 0x76, 0xd2, 0x8d, 0x6e, 0x54, 0x68,
+		0x0c, 0xa4, 0x7b, 0x8b, 0xd3, 0x8c, 0x42, 0x13,
+		0x87, 0xda, 0xdf, 0x8f, 0xa5, 0x83, 0x7a, 0x42,
+		0x99, 0xb7, 0xeb, 0xe2, 0x79, 0xe0, 0xdb, 0xda,
+		0x33, 0xa8, 0x50, 0x3a, 0xd7, 0xe7, 0xd3, 0x61,
+		0x18, 0xb8, 0xaa, 0x2d, 0xc8, 0xd8, 0x2c, 0x28,
+		0xe5, 0x97, 0x0a, 0x7c, 0x6c, 0x7f, 0x09, 0xd7,
+		0x88, 0x80, 0xac, 0x12, 0xed, 0xf8, 0xc6, 0xb5,
+		0x2d, 0xd6, 0x63, 0x9b, 0x98, 0x35, 0x26, 0xde,
+		0xf6, 0x31, 0xee, 0x7e, 0xa0, 0xfb, 0x16, 0x98,
+		0xb1, 0x96, 0x1d, 0xee, 0xe3, 0x2f, 0xfb, 0x41,
+		0xdd, 0xea, 0x10, 0x1e, 0x03, 0x89, 0x18, 0xd2,
+		0x47, 0x0c, 0xa0, 0x57, 0xda, 0x76, 0x3a, 0x37,
+		0x2c, 0xe4, 0xf9, 0x77, 0xc8, 0x43, 0x5f, 0xcb,
+		0xd6, 0x85, 0xf7, 0x22, 0xe4, 0x32, 0x25, 0xa8,
+		0xdc, 0x21, 0xc0, 0xf5, 0x95, 0xb2, 0xf8, 0x83,
+		0xf0, 0x65, 0x61, 0x15, 0x48, 0x94, 0xb7, 0x03,
+		0x7f, 0x66, 0xa1, 0x39, 0x1f, 0xdd, 0xce, 0x96,
+		0xfe, 0x58, 0x81, 0x3d, 0x41, 0x11, 0x87, 0x13,
+		0x26, 0x1b, 0x6d, 0xf3, 0xca, 0x2e, 0x2c, 0x76,
+		0xd3, 0x2f, 0x6d, 0x49, 0x70, 0x53, 0x05, 0x96,
+		0xcc, 0x30, 0x2b, 0x83, 0xf2, 0xc6, 0xb2, 0x4b,
+		0x22, 0x13, 0x95, 0x42, 0xeb, 0x56, 0x4d, 0x22,
+		0xe6, 0x43, 0x6f, 0xba, 0xe7, 0x3b, 0xe5, 0x59,
+		0xce, 0x57, 0x88, 0x85, 0xb6, 0xbf, 0x15, 0x37,
+		0xb3, 0x7a, 0x7e, 0xc4, 0xbc, 0x99, 0xfc, 0xe4,
+		0x89, 0x00, 0x68, 0x39, 0xbc, 0x5a, 0xba, 0xab,
+		0x52, 0xab, 0xe6, 0x81, 0xfd, 0x93, 0x62, 0xe9,
+		0xb7, 0x12, 0xd1, 0x18, 0x1a, 0xb9, 0x55, 0x4a,
+		0x0f, 0xae, 0x35, 0x11, 0x04, 0x27, 0xf3, 0x42,
+		0x4e, 0xca, 0xdf, 0x9f, 0x12, 0x62, 0xea, 0x03,
+		0xc0, 0xa9, 0x22, 0x7b, 0x6c, 0x6c, 0xe3, 0xdf,
+		0x16, 0xad, 0x03, 0xc9, 0xfe, 0xa4, 0xdd, 0x4f
+};
+
+static const uint8_t AES_CBC_ciphertext_1792B[] = {
+		0x59, 0xcc, 0xfe, 0x8f, 0xb4, 0x9d, 0x0e, 0xd1,
+		0x85, 0xfc, 0x9b, 0x43, 0xc1, 0xb7, 0x54, 0x67,
+		0x01, 0xef, 0xb8, 0x71, 0x36, 0xdb, 0x50, 0x48,
+		0x7a, 0xea, 0xcf, 0xce, 0xba, 0x30, 0x10, 0x2e,
+		0x96, 0x2b, 0xfd, 0xcf, 0x00, 0xe3, 0x1f, 0xac,
+		0x66, 0x14, 0x30, 0x86, 0x49, 0xdb, 0x01, 0x8b,
+		0x07, 0xdd, 0x00, 0x9d, 0x0d, 0x5c, 0x19, 0x11,
+		0xe8, 0x44, 0x2b, 0x25, 0x70, 0xed, 0x7c, 0x33,
+		0x0d, 0xe3, 0x34, 0x93, 0x63, 0xad, 0x26, 0xb1,
+		0x11, 0x91, 0x34, 0x2e, 0x1d, 0x50, 0xaa, 0xd4,
+		0xef, 0x3a, 0x6d, 0xd7, 0x33, 0x20, 0x0d, 0x3f,
+		0x9b, 0xdd, 0xc3, 0xa5, 0xc5, 0xf1, 0x99, 0xdc,
+		0xea, 0x52, 0xda, 0x55, 0xea, 0xa2, 0x7a, 0xc5,
+		0x78, 0x44, 0x4a, 0x02, 0x33, 0x19, 0x62, 0x37,
+		0xf8, 0x8b, 0xd1, 0x0c, 0x21, 0xdf, 0x40, 0x19,
+		0x81, 0xea, 0xfb, 0x1c, 0xa7, 0xcc, 0x60, 0xfe,
+		0x63, 0x25, 0x8f, 0xf3, 0x73, 0x0f, 0x45, 0xe6,
+		0x6a, 0x18, 0xbf, 0xbe, 0xad, 0x92, 0x2a, 0x1e,
+		0x15, 0x65, 0x6f, 0xef, 0x92, 0xcd, 0x0e, 0x19,
+		0x3d, 0x42, 0xa8, 0xfc, 0x0d, 0x32, 0x58, 0xe0,
+		0x56, 0x9f, 0xd6, 0x9b, 0x8b, 0xec, 0xe0, 0x45,
+		0x4d, 0x7e, 0x73, 0x87, 0xff, 0x74, 0x92, 0x59,
+		0x60, 0x13, 0x93, 0xda, 0xec, 0xbf, 0xfa, 0x20,
+		0xb6, 0xe7, 0xdf, 0xc7, 0x10, 0xf5, 0x79, 0xb4,
+		0xd7, 0xac, 0xaf, 0x2b, 0x37, 0x52, 0x30, 0x1d,
+		0xbe, 0x0f, 0x60, 0x77, 0x3d, 0x03, 0x63, 0xa9,
+		0xae, 0xb1, 0xf3, 0xca, 0xca, 0xb4, 0x21, 0xd7,
+		0x6f, 0x2e, 0x5e, 0x9b, 0x68, 0x53, 0x80, 0xab,
+		0x30, 0x23, 0x0a, 0x72, 0x6b, 0xb1, 0xd8, 0x25,
+		0x5d, 0x3a, 0x62, 0x9b, 0x4f, 0x59, 0x3b, 0x79,
+		0xa8, 0x9e, 0x08, 0x6d, 0x37, 0xb0, 0xfc, 0x42,
+		0x51, 0x25, 0x86, 0xbd, 0x54, 0x5a, 0x95, 0x20,
+		0x6c, 0xac, 0xb9, 0x30, 0x1c, 0x03, 0xc9, 0x49,
+		0x38, 0x55, 0x31, 0x49, 0xed, 0xa9, 0x0e, 0xc3,
+		0x65, 0xb4, 0x68, 0x6b, 0x07, 0x4c, 0x0a, 0xf9,
+		0x21, 0x69, 0x7c, 0x9f, 0x28, 0x80, 0xe9, 0x49,
+		0x22, 0x7c, 0xec, 0x97, 0xf7, 0x70, 0xb4, 0xb8,
+		0x25, 0xe7, 0x80, 0x2c, 0x43, 0x24, 0x8a, 0x2e,
+		0xac, 0xa2, 0x84, 0x20, 0xe7, 0xf4, 0x6b, 0x86,
+		0x37, 0x05, 0xc7, 0x59, 0x04, 0x49, 0x2a, 0x99,
+		0x80, 0x46, 0x32, 0x19, 0xe6, 0x30, 0xce, 0xc0,
+		0xef, 0x6e, 0xec, 0xe5, 0x2f, 0x24, 0xc1, 0x78,
+		0x45, 0x02, 0xd3, 0x64, 0x99, 0xf5, 0xc7, 0xbc,
+		0x8f, 0x8c, 0x75, 0xb1, 0x0a, 0xc8, 0xc3, 0xbd,
+		0x5e, 0x7e, 0xbd, 0x0e, 0xdf, 0x4b, 0x96, 0x6a,
+		0xfd, 0x03, 0xdb, 0xd1, 0x31, 0x1e, 0x27, 0xf9,
+		0xe5, 0x83, 0x9a, 0xfc, 0x13, 0x4c, 0xd3, 0x04,
+		0xdb, 0xdb, 0x3f, 0x35, 0x93, 0x4e, 0x14, 0x6b,
+		0x00, 0x5c, 0xb6, 0x11, 0x50, 0xee, 0x61, 0x5c,
+		0x10, 0x5c, 0xd0, 0x90, 0x02, 0x2e, 0x12, 0xe0,
+		0x50, 0x44, 0xad, 0x75, 0xcd, 0x94, 0xcf, 0x92,
+		0xcb, 0xe3, 0xe8, 0x77, 0x4b, 0xd7, 0x1a, 0x7c,
+		0xdd, 0x6b, 0x49, 0x21, 0x7c, 0xe8, 0x2c, 0x25,
+		0x49, 0x86, 0x1e, 0x54, 0xae, 0xfc, 0x0e, 0x80,
+		0xb1, 0xd5, 0xa5, 0x23, 0xcf, 0xcc, 0x0e, 0x11,
+		0xe2, 0x7c, 0x3c, 0x25, 0x78, 0x64, 0x03, 0xa1,
+		0xdd, 0x9f, 0x74, 0x12, 0x7b, 0x21, 0xb5, 0x73,
+		0x15, 0x3c, 0xed, 0xad, 0x07, 0x62, 0x21, 0x79,
+		0xd4, 0x2f, 0x0d, 0x72, 0xe9, 0x7c, 0x6b, 0x96,
+		0x6e, 0xe5, 0x36, 0x4a, 0xd2, 0x38, 0xe1, 0xff,
+		0x6e, 0x26, 0xa4, 0xac, 0x83, 0x07, 0xe6, 0x67,
+		0x74, 0x6c, 0xec, 0x8b, 0x4b, 0x79, 0x33, 0x50,
+		0x2f, 0x8f, 0xa0, 0x8f, 0xfa, 0x38, 0x6a, 0xa2,
+		0x3a, 0x42, 0x85, 0x15, 0x90, 0xd0, 0xb3, 0x0d,
+		0x8a, 0xe4, 0x60, 0x03, 0xef, 0xf9, 0x65, 0x8a,
+		0x4e, 0x50, 0x8c, 0x65, 0xba, 0x61, 0x16, 0xc3,
+		0x93, 0xb7, 0x75, 0x21, 0x98, 0x25, 0x60, 0x6e,
+		0x3d, 0x68, 0xba, 0x7c, 0xe4, 0xf3, 0xd9, 0x9b,
+		0xfb, 0x7a, 0xed, 0x1f, 0xb3, 0x4b, 0x88, 0x74,
+		0x2c, 0xb8, 0x8c, 0x22, 0x95, 0xce, 0x90, 0xf1,
+		0xdb, 0x80, 0xa6, 0x39, 0xae, 0x82, 0xa1, 0xef,
+		0x75, 0xec, 0xfe, 0xf1, 0xe8, 0x04, 0xfd, 0x99,
+		0x1b, 0x5f, 0x45, 0x87, 0x4f, 0xfa, 0xa2, 0x3e,
+		0x3e, 0xb5, 0x01, 0x4b, 0x46, 0xeb, 0x13, 0x9a,
+		0xe4, 0x7d, 0x03, 0x87, 0xb1, 0x59, 0x91, 0x8e,
+		0x37, 0xd3, 0x16, 0xce, 0xef, 0x4b, 0xe9, 0x46,
+		0x8d, 0x2a, 0x50, 0x2f, 0x41, 0xd3, 0x7b, 0xcf,
+		0xf0, 0xb7, 0x8b, 0x65, 0x0f, 0xa3, 0x27, 0x10,
+		0xe9, 0xa9, 0xe9, 0x2c, 0xbe, 0xbb, 0x82, 0xe3,
+		0x7b, 0x0b, 0x81, 0x3e, 0xa4, 0x6a, 0x4f, 0x3b,
+		0xd5, 0x61, 0xf8, 0x47, 0x04, 0x99, 0x5b, 0xff,
+		0xf3, 0x14, 0x6e, 0x57, 0x5b, 0xbf, 0x1b, 0xb4,
+		0x3f, 0xf9, 0x31, 0xf6, 0x95, 0xd5, 0x10, 0xa9,
+		0x72, 0x28, 0x23, 0xa9, 0x6a, 0xa2, 0xcf, 0x7d,
+		0xe3, 0x18, 0x95, 0xda, 0xbc, 0x6f, 0xe9, 0xd8,
+		0xef, 0x49, 0x3f, 0xd3, 0xef, 0x1f, 0xe1, 0x50,
+		0xe8, 0x8a, 0xc0, 0xce, 0xcc, 0xb7, 0x5e, 0x0e,
+		0x8b, 0x95, 0x80, 0xfd, 0x58, 0x2a, 0x9b, 0xc8,
+		0xb4, 0x17, 0x04, 0x46, 0x74, 0xd4, 0x68, 0x91,
+		0x33, 0xc8, 0x31, 0x15, 0x84, 0x16, 0x35, 0x03,
+		0x64, 0x6d, 0xa9, 0x4e, 0x20, 0xeb, 0xa9, 0x3f,
+		0x21, 0x5e, 0x9b, 0x09, 0xc3, 0x45, 0xf8, 0x7c,
+		0x59, 0x62, 0x29, 0x9a, 0x5c, 0xcf, 0xb4, 0x27,
+		0x5e, 0x13, 0xea, 0xb3, 0xef, 0xd9, 0x01, 0x2a,
+		0x65, 0x5f, 0x14, 0xf4, 0xbf, 0x28, 0x89, 0x3d,
+		0xdd, 0x9d, 0x52, 0xbd, 0x9e, 0x5b, 0x3b, 0xd2,
+		0xc2, 0x81, 0x35, 0xb6, 0xac, 0xdd, 0x27, 0xc3,
+		0x7b, 0x01, 0x5a, 0x6d, 0x4c, 0x5e, 0x2c, 0x30,
+		0xcb, 0x3a, 0xfa, 0xc1, 0xd7, 0x31, 0x67, 0x3e,
+		0x08, 0x6a, 0xe8, 0x8c, 0x75, 0xac, 0x1a, 0x6a,
+		0x52, 0xf7, 0x51, 0xcd, 0x85, 0x3f, 0x3c, 0xa7,
+		0xea, 0xbc, 0xd7, 0x18, 0x9e, 0x27, 0x73, 0xe6,
+		0x2b, 0x58, 0xb6, 0xd2, 0x29, 0x68, 0xd5, 0x8f,
+		0x00, 0x4d, 0x55, 0xf6, 0x61, 0x5a, 0xcc, 0x51,
+		0xa6, 0x5e, 0x85, 0xcb, 0x0b, 0xfd, 0x06, 0xca,
+		0xf5, 0xbf, 0x0d, 0x13, 0x74, 0x78, 0x6d, 0x9e,
+		0x20, 0x11, 0x84, 0x3e, 0x78, 0x17, 0x04, 0x4f,
+		0x64, 0x2c, 0x3b, 0x3e, 0x93, 0x7b, 0x58, 0x33,
+		0x07, 0x52, 0xf7, 0x60, 0x6a, 0xa8, 0x3b, 0x19,
+		0x27, 0x7a, 0x93, 0xc5, 0x53, 0xad, 0xec, 0xf6,
+		0xc8, 0x94, 0xee, 0x92, 0xea, 0xee, 0x7e, 0xea,
+		0xb9, 0x5f, 0xac, 0x59, 0x5d, 0x2e, 0x78, 0x53,
+		0x72, 0x81, 0x92, 0xdd, 0x1c, 0x63, 0xbe, 0x02,
+		0xeb, 0xa8, 0x1b, 0x2a, 0x6e, 0x72, 0xe3, 0x2d,
+		0x84, 0x0d, 0x8a, 0x22, 0xf6, 0xba, 0xab, 0x04,
+		0x8e, 0x04, 0x24, 0xdb, 0xcc, 0xe2, 0x69, 0xeb,
+		0x4e, 0xfa, 0x6b, 0x5b, 0xc8, 0xc0, 0xd9, 0x25,
+		0xcb, 0x40, 0x8d, 0x4b, 0x8e, 0xa0, 0xd4, 0x72,
+		0x98, 0x36, 0x46, 0x3b, 0x4f, 0x5f, 0x96, 0x84,
+		0x03, 0x28, 0x86, 0x4d, 0xa1, 0x8a, 0xd7, 0xb2,
+		0x5b, 0x27, 0x01, 0x80, 0x62, 0x49, 0x56, 0xb9,
+		0xa0, 0xa1, 0xe3, 0x6e, 0x22, 0x2a, 0x5d, 0x03,
+		0x86, 0x40, 0x36, 0x22, 0x5e, 0xd2, 0xe5, 0xc0,
+		0x6b, 0xfa, 0xac, 0x80, 0x4e, 0x09, 0x99, 0xbc,
+		0x2f, 0x9b, 0xcc, 0xf3, 0x4e, 0xf7, 0x99, 0x98,
+		0x11, 0x6e, 0x6f, 0x62, 0x22, 0x6b, 0x92, 0x95,
+		0x3b, 0xc3, 0xd2, 0x8e, 0x0f, 0x07, 0xc2, 0x51,
+		0x5c, 0x4d, 0xb2, 0x6e, 0xc0, 0x27, 0x73, 0xcd,
+		0x57, 0xb7, 0xf0, 0xe9, 0x2e, 0xc8, 0xe2, 0x0c,
+		0xd1, 0xb5, 0x0f, 0xff, 0xf9, 0xec, 0x38, 0xba,
+		0x97, 0xd6, 0x94, 0x9b, 0xd1, 0x79, 0xb6, 0x6a,
+		0x01, 0x17, 0xe4, 0x7e, 0xa6, 0xd5, 0x86, 0x19,
+		0xae, 0xf3, 0xf0, 0x62, 0x73, 0xc0, 0xf0, 0x0a,
+		0x7a, 0x96, 0x93, 0x72, 0x89, 0x7e, 0x25, 0x57,
+		0xf8, 0xf7, 0xd5, 0x1e, 0xe5, 0xac, 0xd6, 0x38,
+		0x4f, 0xe8, 0x81, 0xd1, 0x53, 0x41, 0x07, 0x2d,
+		0x58, 0x34, 0x1c, 0xef, 0x74, 0x2e, 0x61, 0xca,
+		0xd3, 0xeb, 0xd6, 0x93, 0x0a, 0xf2, 0xf2, 0x86,
+		0x9c, 0xe3, 0x7a, 0x52, 0xf5, 0x42, 0xf1, 0x8b,
+		0x10, 0xf2, 0x25, 0x68, 0x7e, 0x61, 0xb1, 0x19,
+		0xcf, 0x8f, 0x5a, 0x53, 0xb7, 0x68, 0x4f, 0x1a,
+		0x71, 0xe9, 0x83, 0x91, 0x3a, 0x78, 0x0f, 0xf7,
+		0xd4, 0x74, 0xf5, 0x06, 0xd2, 0x88, 0xb0, 0x06,
+		0xe5, 0xc0, 0xfb, 0xb3, 0x91, 0xad, 0xc0, 0x84,
+		0x31, 0xf2, 0x3a, 0xcf, 0x63, 0xe6, 0x4a, 0xd3,
+		0x78, 0xbe, 0xde, 0x73, 0x3e, 0x02, 0x8e, 0xb8,
+		0x3a, 0xf6, 0x55, 0xa7, 0xf8, 0x5a, 0xb5, 0x0e,
+		0x0c, 0xc5, 0xe5, 0x66, 0xd5, 0xd2, 0x18, 0xf3,
+		0xef, 0xa5, 0xc9, 0x68, 0x69, 0xe0, 0xcd, 0x00,
+		0x33, 0x99, 0x6e, 0xea, 0xcb, 0x06, 0x7a, 0xe1,
+		0xe1, 0x19, 0x0b, 0xe7, 0x08, 0xcd, 0x09, 0x1b,
+		0x85, 0xec, 0xc4, 0xd4, 0x75, 0xf0, 0xd6, 0xfb,
+		0x84, 0x95, 0x07, 0x44, 0xca, 0xa5, 0x2a, 0x6c,
+		0xc2, 0x00, 0x58, 0x08, 0x87, 0x9e, 0x0a, 0xd4,
+		0x06, 0xe2, 0x91, 0x5f, 0xb7, 0x1b, 0x11, 0xfa,
+		0x85, 0xfc, 0x7c, 0xf2, 0x0f, 0x6e, 0x3c, 0x8a,
+		0xe1, 0x0f, 0xa0, 0x33, 0x84, 0xce, 0x81, 0x4d,
+		0x32, 0x4d, 0xeb, 0x41, 0xcf, 0x5a, 0x05, 0x60,
+		0x47, 0x6c, 0x2a, 0xc4, 0x17, 0xd5, 0x16, 0x3a,
+		0xe4, 0xe7, 0xab, 0x84, 0x94, 0x22, 0xff, 0x56,
+		0xb0, 0x0c, 0x92, 0x6c, 0x19, 0x11, 0x4c, 0xb3,
+		0xed, 0x58, 0x48, 0x84, 0x2a, 0xe2, 0x19, 0x2a,
+		0xe1, 0xc0, 0x56, 0x82, 0x3c, 0x83, 0xb4, 0x58,
+		0x2d, 0xf0, 0xb5, 0x1e, 0x76, 0x85, 0x51, 0xc2,
+		0xe4, 0x95, 0x27, 0x96, 0xd1, 0x90, 0xc3, 0x17,
+		0x75, 0xa1, 0xbb, 0x46, 0x5f, 0xa6, 0xf2, 0xef,
+		0x71, 0x56, 0x92, 0xc5, 0x8a, 0x85, 0x52, 0xe4,
+		0x63, 0x21, 0x6f, 0x55, 0x85, 0x2b, 0x6b, 0x0d,
+		0xc9, 0x92, 0x77, 0x67, 0xe3, 0xff, 0x2a, 0x2b,
+		0x90, 0x01, 0x3d, 0x74, 0x63, 0x04, 0x61, 0x3c,
+		0x8e, 0xf8, 0xfc, 0x04, 0xdd, 0x21, 0x85, 0x92,
+		0x1e, 0x4d, 0x51, 0x8d, 0xb5, 0x6b, 0xf1, 0xda,
+		0x96, 0xf5, 0x8e, 0x3c, 0x38, 0x5a, 0xac, 0x9b,
+		0xba, 0x0c, 0x84, 0x5d, 0x50, 0x12, 0xc7, 0xc5,
+		0x7a, 0xcb, 0xb1, 0xfa, 0x16, 0x93, 0xdf, 0x98,
+		0xda, 0x3f, 0x49, 0xa3, 0x94, 0x78, 0x70, 0xc7,
+		0x0b, 0xb6, 0x91, 0xa6, 0x16, 0x2e, 0xcf, 0xfd,
+		0x51, 0x6a, 0x5b, 0xad, 0x7a, 0xdd, 0xa9, 0x48,
+		0x48, 0xac, 0xd6, 0x45, 0xbc, 0x23, 0x31, 0x1d,
+		0x86, 0x54, 0x8a, 0x7f, 0x04, 0x97, 0x71, 0x9e,
+		0xbc, 0x2e, 0x6b, 0xd9, 0x33, 0xc8, 0x20, 0xc9,
+		0xe0, 0x25, 0x86, 0x59, 0x15, 0xcf, 0x63, 0xe5,
+		0x99, 0xf1, 0x24, 0xf1, 0xba, 0xc4, 0x15, 0x02,
+		0xe2, 0xdb, 0xfe, 0x4a, 0xf8, 0x3b, 0x91, 0x13,
+		0x8d, 0x03, 0x81, 0x9f, 0xb3, 0x3f, 0x04, 0x03,
+		0x58, 0xc0, 0xef, 0x27, 0x82, 0x14, 0xd2, 0x7f,
+		0x93, 0x70, 0xb7, 0xb2, 0x02, 0x21, 0xb3, 0x07,
+		0x7f, 0x1c, 0xef, 0x88, 0xee, 0x29, 0x7a, 0x0b,
+		0x3d, 0x75, 0x5a, 0x93, 0xfe, 0x7f, 0x14, 0xf7,
+		0x4e, 0x4b, 0x7f, 0x21, 0x02, 0xad, 0xf9, 0x43,
+		0x29, 0x1a, 0xe8, 0x1b, 0xf5, 0x32, 0xb2, 0x96,
+		0xe6, 0xe8, 0x96, 0x20, 0x9b, 0x96, 0x8e, 0x7b,
+		0xfe, 0xd8, 0xc9, 0x9c, 0x65, 0x16, 0xd6, 0x68,
+		0x95, 0xf8, 0x22, 0xe2, 0xae, 0x84, 0x03, 0xfd,
+		0x87, 0xa2, 0x72, 0x79, 0x74, 0x95, 0xfa, 0xe1,
+		0xfe, 0xd0, 0x4e, 0x3d, 0x39, 0x2e, 0x67, 0x55,
+		0x71, 0x6c, 0x89, 0x33, 0x49, 0x0c, 0x1b, 0x46,
+		0x92, 0x31, 0x6f, 0xa6, 0xf0, 0x09, 0xbd, 0x2d,
+		0xe2, 0xca, 0xda, 0x18, 0x33, 0xce, 0x67, 0x37,
+		0xfd, 0x6f, 0xcb, 0x9d, 0xbd, 0x42, 0xbc, 0xb2,
+		0x9c, 0x28, 0xcd, 0x65, 0x3c, 0x61, 0xbc, 0xde,
+		0x9d, 0xe1, 0x2a, 0x3e, 0xbf, 0xee, 0x3c, 0xcb,
+		0xb1, 0x50, 0xa9, 0x2c, 0xbe, 0xb5, 0x43, 0xd0,
+		0xec, 0x29, 0xf9, 0x16, 0x6f, 0x31, 0xd9, 0x9b,
+		0x92, 0xb1, 0x32, 0xae, 0x0f, 0xb6, 0x9d, 0x0e,
+		0x25, 0x7f, 0x89, 0x1f, 0x1d, 0x01, 0x68, 0xab,
+		0x3d, 0xd1, 0x74, 0x5b, 0x4c, 0x38, 0x7f, 0x3d,
+		0x33, 0xa5, 0xa2, 0x9f, 0xda, 0x84, 0xa5, 0x82,
+		0x2d, 0x16, 0x66, 0x46, 0x08, 0x30, 0x14, 0x48,
+		0x5e, 0xca, 0xe3, 0xf4, 0x8c, 0xcb, 0x32, 0xc6,
+		0xf1, 0x43, 0x62, 0xc6, 0xef, 0x16, 0xfa, 0x43,
+		0xae, 0x9c, 0x53, 0xe3, 0x49, 0x45, 0x80, 0xfd,
+		0x1d, 0x8c, 0xa9, 0x6d, 0x77, 0x76, 0xaa, 0x40,
+		0xc4, 0x4e, 0x7b, 0x78, 0x6b, 0xe0, 0x1d, 0xce,
+		0x56, 0x3d, 0xf0, 0x11, 0xfe, 0x4f, 0x6a, 0x6d,
+		0x0f, 0x4f, 0x90, 0x38, 0x92, 0x17, 0xfa, 0x56,
+		0x12, 0xa6, 0xa1, 0x0a, 0xea, 0x2f, 0x50, 0xf9,
+		0x60, 0x66, 0x6c, 0x7d, 0x5a, 0x08, 0x8e, 0x3c,
+		0xf3, 0xf0, 0x33, 0x02, 0x11, 0x02, 0xfe, 0x4c,
+		0x56, 0x2b, 0x9f, 0x0c, 0xbd, 0x65, 0x8a, 0x83,
+		0xde, 0x7c, 0x05, 0x26, 0x93, 0x19, 0xcc, 0xf3,
+		0x71, 0x0e, 0xad, 0x2f, 0xb3, 0xc9, 0x38, 0x50,
+		0x64, 0xd5, 0x4c, 0x60, 0x5f, 0x02, 0x13, 0x34,
+		0xc9, 0x75, 0xc4, 0x60, 0xab, 0x2e, 0x17, 0x7d
+};
+
+static const uint8_t AES_CBC_ciphertext_2048B[] = {
+		0x8b, 0x55, 0xbd, 0xfd, 0x2b, 0x35, 0x76, 0x5c,
+		0xd1, 0x90, 0xd7, 0x6a, 0x63, 0x1e, 0x39, 0x71,
+		0x0d, 0x5c, 0xd8, 0x03, 0x00, 0x75, 0xf1, 0x07,
+		0x03, 0x8d, 0x76, 0xeb, 0x3b, 0x00, 0x1e, 0x33,
+		0x88, 0xfc, 0x8f, 0x08, 0x4d, 0x33, 0xf1, 0x3c,
+		0xee, 0xd0, 0x5d, 0x19, 0x8b, 0x3c, 0x50, 0x86,
+		0xfd, 0x8d, 0x58, 0x21, 0xb4, 0xae, 0x0f, 0x81,
+		0xe9, 0x9f, 0xc9, 0xc0, 0x90, 0xf7, 0x04, 0x6f,
+		0x39, 0x1d, 0x8a, 0x3f, 0x8d, 0x32, 0x23, 0xb5,
+		0x1f, 0xcc, 0x8a, 0x12, 0x2d, 0x46, 0x82, 0x5e,
+		0x6a, 0x34, 0x8c, 0xb1, 0x93, 0x70, 0x3b, 0xde,
+		0x55, 0xaf, 0x16, 0x35, 0x99, 0x84, 0xd5, 0x88,
+		0xc9, 0x54, 0xb1, 0xb2, 0xd3, 0xeb, 0x9e, 0x55,
+		0x9a, 0xa9, 0xa7, 0xf5, 0xda, 0x29, 0xcf, 0xe1,
+		0x98, 0x64, 0x45, 0x77, 0xf2, 0x12, 0x69, 0x8f,
+		0x78, 0xd8, 0x82, 0x41, 0xb2, 0x9f, 0xe2, 0x1c,
+		0x63, 0x9b, 0x24, 0x81, 0x67, 0x95, 0xa2, 0xff,
+		0x26, 0x9d, 0x65, 0x48, 0x61, 0x30, 0x66, 0x41,
+		0x68, 0x84, 0xbb, 0x59, 0x14, 0x8e, 0x9a, 0x62,
+		0xb6, 0xca, 0xda, 0xbe, 0x7c, 0x41, 0x52, 0x6e,
+		0x1b, 0x86, 0xbf, 0x08, 0xeb, 0x37, 0x84, 0x60,
+		0xe4, 0xc4, 0x1e, 0xa8, 0x4c, 0x84, 0x60, 0x2f,
+		0x70, 0x90, 0xf2, 0x26, 0xe7, 0x65, 0x0c, 0xc4,
+		0x58, 0x36, 0x8e, 0x4d, 0xdf, 0xff, 0x9a, 0x39,
+		0x93, 0x01, 0xcf, 0x6f, 0x6d, 0xde, 0xef, 0x79,
+		0xb0, 0xce, 0xe2, 0x98, 0xdb, 0x85, 0x8d, 0x62,
+		0x9d, 0xb9, 0x63, 0xfd, 0xf0, 0x35, 0xb5, 0xa9,
+		0x1b, 0xf9, 0xe5, 0xd4, 0x2e, 0x22, 0x2d, 0xcc,
+		0x42, 0xbf, 0x0e, 0x51, 0xf7, 0x15, 0x07, 0x32,
+		0x75, 0x5b, 0x74, 0xbb, 0x00, 0xef, 0xd4, 0x66,
+		0x8b, 0xad, 0x71, 0x53, 0x94, 0xd7, 0x7d, 0x2c,
+		0x40, 0x3e, 0x69, 0xa0, 0x4c, 0x86, 0x5e, 0x06,
+		0xed, 0xdf, 0x22, 0xe2, 0x24, 0x25, 0x4e, 0x9b,
+		0x5f, 0x49, 0x74, 0xba, 0xed, 0xb1, 0xa6, 0xeb,
+		0xae, 0x3f, 0xc6, 0x9e, 0x0b, 0x29, 0x28, 0x9a,
+		0xb6, 0xb2, 0x74, 0x58, 0xec, 0xa6, 0x4a, 0xed,
+		0xe5, 0x10, 0x00, 0x85, 0xe1, 0x63, 0x41, 0x61,
+		0x30, 0x7c, 0x97, 0xcf, 0x75, 0xcf, 0xb6, 0xf3,
+		0xf7, 0xda, 0x35, 0x3f, 0x85, 0x8c, 0x64, 0xca,
+		0xb7, 0xea, 0x7f, 0xe4, 0xa3, 0x4d, 0x30, 0x84,
+		0x8c, 0x9c, 0x80, 0x5a, 0x50, 0xa5, 0x64, 0xae,
+		0x26, 0xd3, 0xb5, 0x01, 0x73, 0x36, 0x8a, 0x92,
+		0x49, 0xc4, 0x1a, 0x94, 0x81, 0x9d, 0xf5, 0x6c,
+		0x50, 0xe1, 0x58, 0x0b, 0x75, 0xdd, 0x6b, 0x6a,
+		0xca, 0x69, 0xea, 0xc3, 0x33, 0x90, 0x9f, 0x3b,
+		0x65, 0x5d, 0x5e, 0xee, 0x31, 0xb7, 0x32, 0xfd,
+		0x56, 0x83, 0xb6, 0xfb, 0xa8, 0x04, 0xfc, 0x1e,
+		0x11, 0xfb, 0x02, 0x23, 0x53, 0x49, 0x45, 0xb1,
+		0x07, 0xfc, 0xba, 0xe7, 0x5f, 0x5d, 0x2d, 0x7f,
+		0x9e, 0x46, 0xba, 0xe9, 0xb0, 0xdb, 0x32, 0x04,
+		0xa4, 0xa7, 0x98, 0xab, 0x91, 0xcd, 0x02, 0x05,
+		0xf5, 0x74, 0x31, 0x98, 0x83, 0x3d, 0x33, 0x11,
+		0x0e, 0xe3, 0x8d, 0xa8, 0xc9, 0x0e, 0xf3, 0xb9,
+		0x47, 0x67, 0xe9, 0x79, 0x2b, 0x34, 0xcd, 0x9b,
+		0x45, 0x75, 0x29, 0xf0, 0xbf, 0xcc, 0xda, 0x3a,
+		0x91, 0xb2, 0x15, 0x27, 0x7a, 0xe5, 0xf5, 0x6a,
+		0x5e, 0xbe, 0x2c, 0x98, 0xe8, 0x40, 0x96, 0x4f,
+		0x8a, 0x09, 0xfd, 0xf6, 0xb2, 0xe7, 0x45, 0xb6,
+		0x08, 0xc1, 0x69, 0xe1, 0xb3, 0xc4, 0x24, 0x34,
+		0x07, 0x85, 0xd5, 0xa9, 0x78, 0xca, 0xfa, 0x4b,
+		0x01, 0x19, 0x4d, 0x95, 0xdc, 0xa5, 0xc1, 0x9c,
+		0xec, 0x27, 0x5b, 0xa6, 0x54, 0x25, 0xbd, 0xc8,
+		0x0a, 0xb7, 0x11, 0xfb, 0x4e, 0xeb, 0x65, 0x2e,
+		0xe1, 0x08, 0x9c, 0x3a, 0x45, 0x44, 0x33, 0xef,
+		0x0d, 0xb9, 0xff, 0x3e, 0x68, 0x9c, 0x61, 0x2b,
+		0x11, 0xb8, 0x5c, 0x47, 0x0f, 0x94, 0xf2, 0xf8,
+		0x0b, 0xbb, 0x99, 0x18, 0x85, 0xa3, 0xba, 0x44,
+		0xf3, 0x79, 0xb3, 0x63, 0x2c, 0x1f, 0x2a, 0x35,
+		0x3b, 0x23, 0x98, 0xab, 0xf4, 0x16, 0x36, 0xf8,
+		0xde, 0x86, 0xa4, 0xd4, 0x75, 0xff, 0x51, 0xf9,
+		0xeb, 0x42, 0x5f, 0x55, 0xe2, 0xbe, 0xd1, 0x5b,
+		0xb5, 0x38, 0xeb, 0xb4, 0x4d, 0xec, 0xec, 0x99,
+		0xe1, 0x39, 0x43, 0xaa, 0x64, 0xf7, 0xc9, 0xd8,
+		0xf2, 0x9a, 0x71, 0x43, 0x39, 0x17, 0xe8, 0xa8,
+		0xa2, 0xe2, 0xa4, 0x2c, 0x18, 0x11, 0x49, 0xdf,
+		0x18, 0xdd, 0x85, 0x6e, 0x65, 0x96, 0xe2, 0xba,
+		0xa1, 0x0a, 0x2c, 0xca, 0xdc, 0x5f, 0xe4, 0xf4,
+		0x35, 0x03, 0xb2, 0xa9, 0xda, 0xcf, 0xb7, 0x6d,
+		0x65, 0x82, 0x82, 0x67, 0x9d, 0x0e, 0xf3, 0xe8,
+		0x85, 0x6c, 0x69, 0xb8, 0x4c, 0xa6, 0xc6, 0x2e,
+		0x40, 0xb5, 0x54, 0x28, 0x95, 0xe4, 0x57, 0xe0,
+		0x5b, 0xf8, 0xde, 0x59, 0xe0, 0xfd, 0x89, 0x48,
+		0xac, 0x56, 0x13, 0x54, 0xb9, 0x1b, 0xf5, 0x59,
+		0x97, 0xb6, 0xb3, 0xe8, 0xac, 0x2d, 0xfc, 0xd2,
+		0xea, 0x57, 0x96, 0x57, 0xa8, 0x26, 0x97, 0x2c,
+		0x01, 0x89, 0x56, 0xea, 0xec, 0x8c, 0x53, 0xd5,
+		0xd7, 0x9e, 0xc9, 0x98, 0x0b, 0xad, 0x03, 0x75,
+		0xa0, 0x6e, 0x98, 0x8b, 0x97, 0x8d, 0x8d, 0x85,
+		0x7d, 0x74, 0xa7, 0x2d, 0xde, 0x67, 0x0c, 0xcd,
+		0x54, 0xb8, 0x15, 0x7b, 0xeb, 0xf5, 0x84, 0xb9,
+		0x78, 0xab, 0xd8, 0x68, 0x91, 0x1f, 0x6a, 0xa6,
+		0x28, 0x22, 0xf7, 0x00, 0x49, 0x00, 0xbe, 0x41,
+		0x71, 0x0a, 0xf5, 0xe7, 0x9f, 0xb4, 0x11, 0x41,
+		0x3f, 0xcd, 0xa9, 0xa9, 0x01, 0x8b, 0x6a, 0xeb,
+		0x54, 0x4c, 0x58, 0x92, 0x68, 0x02, 0x0e, 0xe9,
+		0xed, 0x65, 0x4c, 0xfb, 0x95, 0x48, 0x58, 0xa2,
+		0xaa, 0x57, 0x69, 0x13, 0x82, 0x0c, 0x2c, 0x4b,
+		0x5d, 0x4e, 0x18, 0x30, 0xef, 0x1c, 0xb1, 0x9d,
+		0x05, 0x05, 0x02, 0x1c, 0x97, 0xc9, 0x48, 0xfe,
+		0x5e, 0x7b, 0x77, 0xa3, 0x1f, 0x2a, 0x81, 0x42,
+		0xf0, 0x4b, 0x85, 0x12, 0x9c, 0x1f, 0x44, 0xb1,
+		0x14, 0x91, 0x92, 0x65, 0x77, 0xb1, 0x87, 0xa2,
+		0xfc, 0xa4, 0xe7, 0xd2, 0x9b, 0xf2, 0x17, 0xf0,
+		0x30, 0x1c, 0x8d, 0x33, 0xbc, 0x25, 0x28, 0x48,
+		0xfd, 0x30, 0x79, 0x0a, 0x99, 0x3e, 0xb4, 0x0f,
+		0x1e, 0xa6, 0x68, 0x76, 0x19, 0x76, 0x29, 0xac,
+		0x5d, 0xb8, 0x1e, 0x42, 0xd6, 0x85, 0x04, 0xbf,
+		0x64, 0x1c, 0x2d, 0x53, 0xe9, 0x92, 0x78, 0xf8,
+		0xc3, 0xda, 0x96, 0x92, 0x10, 0x6f, 0x45, 0x85,
+		0xaf, 0x5e, 0xcc, 0xa8, 0xc0, 0xc6, 0x2e, 0x73,
+		0x51, 0x3f, 0x5e, 0xd7, 0x52, 0x33, 0x71, 0x12,
+		0x6d, 0x85, 0xee, 0xea, 0x85, 0xa8, 0x48, 0x2b,
+		0x40, 0x64, 0x6d, 0x28, 0x73, 0x16, 0xd7, 0x82,
+		0xd9, 0x90, 0xed, 0x1f, 0xa7, 0x5c, 0xb1, 0x5c,
+		0x27, 0xb9, 0x67, 0x8b, 0xb4, 0x17, 0x13, 0x83,
+		0x5f, 0x09, 0x72, 0x0a, 0xd7, 0xa0, 0xec, 0x81,
+		0x59, 0x19, 0xb9, 0xa6, 0x5a, 0x37, 0x34, 0x14,
+		0x47, 0xf6, 0xe7, 0x6c, 0xd2, 0x09, 0x10, 0xe7,
+		0xdd, 0xbb, 0x02, 0xd1, 0x28, 0xfa, 0x01, 0x2c,
+		0x93, 0x64, 0x2e, 0x1b, 0x4c, 0x02, 0x52, 0xcb,
+		0x07, 0xa1, 0xb6, 0x46, 0x02, 0x80, 0xd9, 0x8f,
+		0x5c, 0x62, 0xbe, 0x78, 0x9e, 0x75, 0xc4, 0x97,
+		0x91, 0x39, 0x12, 0x65, 0xb9, 0x3b, 0xc2, 0xd1,
+		0xaf, 0xf2, 0x1f, 0x4e, 0x4d, 0xd1, 0xf0, 0x9f,
+		0xb7, 0x12, 0xfd, 0xe8, 0x75, 0x18, 0xc0, 0x9d,
+		0x8c, 0x70, 0xff, 0x77, 0x05, 0xb6, 0x1a, 0x1f,
+		0x96, 0x48, 0xf6, 0xfe, 0xd5, 0x5d, 0x98, 0xa5,
+		0x72, 0x1c, 0x84, 0x76, 0x3e, 0xb8, 0x87, 0x37,
+		0xdd, 0xd4, 0x3a, 0x45, 0xdd, 0x09, 0xd8, 0xe7,
+		0x09, 0x2f, 0x3e, 0x33, 0x9e, 0x7b, 0x8c, 0xe4,
+		0x85, 0x12, 0x4e, 0xf8, 0x06, 0xb7, 0xb1, 0x85,
+		0x24, 0x96, 0xd8, 0xfe, 0x87, 0x92, 0x81, 0xb1,
+		0xa3, 0x38, 0xb9, 0x56, 0xe1, 0xf6, 0x36, 0x41,
+		0xbb, 0xd6, 0x56, 0x69, 0x94, 0x57, 0xb3, 0xa4,
+		0xca, 0xa4, 0xe1, 0x02, 0x3b, 0x96, 0x71, 0xe0,
+		0xb2, 0x2f, 0x85, 0x48, 0x1b, 0x4a, 0x41, 0x80,
+		0x4b, 0x9c, 0xe0, 0xc9, 0x39, 0xb8, 0xb1, 0xca,
+		0x64, 0x77, 0x46, 0x58, 0xe6, 0x84, 0xd5, 0x2b,
+		0x65, 0xce, 0xe9, 0x09, 0xa3, 0xaa, 0xfb, 0x83,
+		0xa9, 0x28, 0x68, 0xfd, 0xcd, 0xfd, 0x76, 0x83,
+		0xe1, 0x20, 0x22, 0x77, 0x3a, 0xa3, 0xb2, 0x93,
+		0x14, 0x91, 0xfc, 0xe2, 0x17, 0x63, 0x2b, 0xa6,
+		0x29, 0x38, 0x7b, 0x9b, 0x8b, 0x15, 0x77, 0xd6,
+		0xaa, 0x92, 0x51, 0x53, 0x50, 0xff, 0xa0, 0x35,
+		0xa0, 0x59, 0x7d, 0xf0, 0x11, 0x23, 0x49, 0xdf,
+		0x5a, 0x21, 0xc2, 0xfe, 0x35, 0xa0, 0x1d, 0xe2,
+		0xae, 0xa2, 0x8a, 0x61, 0x5b, 0xf7, 0xf1, 0x1c,
+		0x1c, 0xec, 0xc4, 0xf6, 0xdc, 0xaa, 0xc8, 0xc2,
+		0xe5, 0xa1, 0x2e, 0x14, 0xe5, 0xc6, 0xc9, 0x73,
+		0x03, 0x78, 0xeb, 0xed, 0xe0, 0x3e, 0xc5, 0xf4,
+		0xf1, 0x50, 0xb2, 0x01, 0x91, 0x96, 0xf5, 0xbb,
+		0xe1, 0x32, 0xcd, 0xa8, 0x66, 0xbf, 0x73, 0x85,
+		0x94, 0xd6, 0x7e, 0x68, 0xc5, 0xe4, 0xed, 0xd5,
+		0xe3, 0x67, 0x4c, 0xa5, 0xb3, 0x1f, 0xdf, 0xf8,
+		0xb3, 0x73, 0x5a, 0xac, 0xeb, 0x46, 0x16, 0x24,
+		0xab, 0xca, 0xa4, 0xdd, 0x87, 0x0e, 0x24, 0x83,
+		0x32, 0x04, 0x4c, 0xd8, 0xda, 0x7d, 0xdc, 0xe3,
+		0x01, 0x93, 0xf3, 0xc1, 0x5b, 0xbd, 0xc3, 0x1d,
+		0x40, 0x62, 0xde, 0x94, 0x03, 0x85, 0x91, 0x2a,
+		0xa0, 0x25, 0x10, 0xd3, 0x32, 0x9f, 0x93, 0x00,
+		0xa7, 0x8a, 0xfa, 0x77, 0x7c, 0xaf, 0x4d, 0xc8,
+		0x7a, 0xf3, 0x16, 0x2b, 0xba, 0xeb, 0x74, 0x51,
+		0xb8, 0xdd, 0x32, 0xad, 0x68, 0x7d, 0xdd, 0xca,
+		0x60, 0x98, 0xc9, 0x9b, 0xb6, 0x5d, 0x4d, 0x3a,
+		0x66, 0x8a, 0xbe, 0x05, 0xf9, 0x0c, 0xc5, 0xba,
+		0x52, 0x82, 0x09, 0x1f, 0x5a, 0x66, 0x89, 0x69,
+		0xa3, 0x5d, 0x93, 0x50, 0x7d, 0x44, 0xc3, 0x2a,
+		0xb8, 0xab, 0xec, 0xa6, 0x5a, 0xae, 0x4a, 0x6a,
+		0xcd, 0xfd, 0xb6, 0xff, 0x3d, 0x98, 0x05, 0xd9,
+		0x5b, 0x29, 0xc4, 0x6f, 0xe0, 0x76, 0xe2, 0x3f,
+		0xec, 0xd7, 0xa4, 0x91, 0x63, 0xf5, 0x4e, 0x4b,
+		0xab, 0x20, 0x8c, 0x3a, 0x41, 0xed, 0x8b, 0x4b,
+		0xb9, 0x01, 0x21, 0xc0, 0x6d, 0xfd, 0x70, 0x5b,
+		0x20, 0x92, 0x41, 0x89, 0x74, 0xb7, 0xe9, 0x8b,
+		0xfc, 0x6d, 0x17, 0x3f, 0x7f, 0x89, 0x3d, 0x6b,
+		0x8f, 0xbc, 0xd2, 0x57, 0xe9, 0xc9, 0x6e, 0xa7,
+		0x19, 0x26, 0x18, 0xad, 0xef, 0xb5, 0x87, 0xbf,
+		0xb8, 0xa8, 0xd6, 0x7d, 0xdd, 0x5f, 0x94, 0x54,
+		0x09, 0x92, 0x2b, 0xf5, 0x04, 0xf7, 0x36, 0x69,
+		0x8e, 0xf4, 0xdc, 0x1d, 0x6e, 0x55, 0xbb, 0xe9,
+		0x13, 0x05, 0x83, 0x35, 0x9c, 0xed, 0xcf, 0x8c,
+		0x26, 0x8c, 0x7b, 0xc7, 0x0b, 0xba, 0xfd, 0xe2,
+		0x84, 0x5c, 0x2a, 0x79, 0x43, 0x99, 0xb2, 0xc3,
+		0x82, 0x87, 0xc8, 0xcd, 0x37, 0x6d, 0xa1, 0x2b,
+		0x39, 0xb2, 0x38, 0x99, 0xd9, 0xfc, 0x02, 0x15,
+		0x55, 0x21, 0x62, 0x59, 0xeb, 0x00, 0x86, 0x08,
+		0x20, 0xbe, 0x1a, 0x62, 0x4d, 0x7e, 0xdf, 0x68,
+		0x73, 0x5b, 0x5f, 0xaf, 0x84, 0x96, 0x2e, 0x1f,
+		0x6b, 0x03, 0xc9, 0xa6, 0x75, 0x18, 0xe9, 0xd4,
+		0xbd, 0xc8, 0xec, 0x9a, 0x5a, 0xb3, 0x99, 0xab,
+		0x5f, 0x7c, 0x08, 0x7f, 0x69, 0x4d, 0x52, 0xa2,
+		0x30, 0x17, 0x3b, 0x16, 0x15, 0x1b, 0x11, 0x62,
+		0x3e, 0x80, 0x4b, 0x85, 0x7c, 0x9c, 0xd1, 0x3a,
+		0x13, 0x01, 0x5e, 0x45, 0xf1, 0xc8, 0x5f, 0xcd,
+		0x0e, 0x21, 0xf5, 0x82, 0xd4, 0x7b, 0x5c, 0x45,
+		0x27, 0x6b, 0xef, 0xfe, 0xb8, 0xc0, 0x6f, 0xdc,
+		0x60, 0x7b, 0xe4, 0xd5, 0x75, 0x71, 0xe6, 0xe8,
+		0x7d, 0x6b, 0x6d, 0x80, 0xaf, 0x76, 0x41, 0x58,
+		0xb7, 0xac, 0xb7, 0x13, 0x2f, 0x81, 0xcc, 0xf9,
+		0x19, 0x97, 0xe8, 0xee, 0x40, 0x91, 0xfc, 0x89,
+		0x13, 0x1e, 0x67, 0x9a, 0xdb, 0x8f, 0x8f, 0xc7,
+		0x4a, 0xc9, 0xaf, 0x2f, 0x67, 0x01, 0x3c, 0xb8,
+		0xa8, 0x3e, 0x78, 0x93, 0x1b, 0xdf, 0xbb, 0x34,
+		0x0b, 0x1a, 0xfa, 0xc2, 0x2d, 0xc5, 0x1c, 0xec,
+		0x97, 0x4f, 0x48, 0x41, 0x15, 0x0e, 0x75, 0xed,
+		0x66, 0x8c, 0x17, 0x7f, 0xb1, 0x48, 0x13, 0xc1,
+		0xfb, 0x60, 0x06, 0xf9, 0x72, 0x41, 0x3e, 0xcf,
+		0x6e, 0xb6, 0xc8, 0xeb, 0x4b, 0x5a, 0xd2, 0x0c,
+		0x28, 0xda, 0x02, 0x7a, 0x46, 0x21, 0x42, 0xb5,
+		0x34, 0xda, 0xcb, 0x5e, 0xbd, 0x66, 0x5c, 0xca,
+		0xff, 0x52, 0x43, 0x89, 0xf9, 0x10, 0x9a, 0x9e,
+		0x9b, 0xe3, 0xb0, 0x51, 0xe9, 0xf3, 0x0a, 0x35,
+		0x77, 0x54, 0xcc, 0xac, 0xa6, 0xf1, 0x2e, 0x36,
+		0x89, 0xac, 0xc5, 0xc6, 0x62, 0x5a, 0xc0, 0x6d,
+		0xc4, 0xe1, 0xf7, 0x64, 0x30, 0xff, 0x11, 0x40,
+		0x13, 0x89, 0xd8, 0xd7, 0x73, 0x3f, 0x93, 0x08,
+		0x68, 0xab, 0x66, 0x09, 0x1a, 0xea, 0x78, 0xc9,
+		0x52, 0xf2, 0xfd, 0x93, 0x1b, 0x94, 0xbe, 0x5c,
+		0xe5, 0x00, 0x6e, 0x00, 0xb9, 0xea, 0x27, 0xaa,
+		0xb3, 0xee, 0xe3, 0xc8, 0x6a, 0xb0, 0xc1, 0x8e,
+		0x9b, 0x54, 0x40, 0x10, 0x96, 0x06, 0xe8, 0xb3,
+		0xf5, 0x55, 0x77, 0xd7, 0x5c, 0x94, 0xc1, 0x74,
+		0xf3, 0x07, 0x64, 0xac, 0x1c, 0xde, 0xc7, 0x22,
+		0xb0, 0xbf, 0x2a, 0x5a, 0xc0, 0x8f, 0x8a, 0x83,
+		0x50, 0xc2, 0x5e, 0x97, 0xa0, 0xbe, 0x49, 0x7e,
+		0x47, 0xaf, 0xa7, 0x20, 0x02, 0x35, 0xa4, 0x57,
+		0xd9, 0x26, 0x63, 0xdb, 0xf1, 0x34, 0x42, 0x89,
+		0x36, 0xd1, 0x77, 0x6f, 0xb1, 0xea, 0x79, 0x7e,
+		0x95, 0x10, 0x5a, 0xee, 0xa3, 0xae, 0x6f, 0xba,
+		0xa9, 0xef, 0x5a, 0x7e, 0x34, 0x03, 0x04, 0x07,
+		0x92, 0xd6, 0x07, 0x79, 0xaa, 0x14, 0x90, 0x97,
+		0x05, 0x4d, 0xa6, 0x27, 0x10, 0x5c, 0x25, 0x24,
+		0xcb, 0xcc, 0xf6, 0x77, 0x9e, 0x43, 0x23, 0xd4,
+		0x98, 0xef, 0x22, 0xa8, 0xad, 0xf2, 0x26, 0x08,
+		0x59, 0x69, 0xa4, 0xc3, 0x97, 0xe0, 0x5c, 0x6f,
+		0xeb, 0x3d, 0xd4, 0x62, 0x6e, 0x80, 0x61, 0x02,
+		0xf4, 0xfc, 0x94, 0x79, 0xbb, 0x4e, 0x6d, 0xd7,
+		0x30, 0x5b, 0x10, 0x11, 0x5a, 0x3d, 0xa7, 0x50,
+		0x1d, 0x9a, 0x13, 0x5f, 0x4f, 0xa8, 0xa7, 0xb6,
+		0x39, 0xc7, 0xea, 0xe6, 0x19, 0x61, 0x69, 0xc7,
+		0x9a, 0x3a, 0xeb, 0x9d, 0xdc, 0xf7, 0x06, 0x37,
+		0xbd, 0xac, 0xe3, 0x18, 0xff, 0xfe, 0x11, 0xdb,
+		0x67, 0x42, 0xb4, 0xea, 0xa8, 0xbd, 0xb0, 0x76,
+		0xd2, 0x74, 0x32, 0xc2, 0xa4, 0x9c, 0xe7, 0x60,
+		0xc5, 0x30, 0x9a, 0x57, 0x66, 0xcd, 0x0f, 0x02,
+		0x4c, 0xea, 0xe9, 0xd3, 0x2a, 0x5c, 0x09, 0xc2,
+		0xff, 0x6a, 0xde, 0x5d, 0xb7, 0xe9, 0x75, 0x6b,
+		0x29, 0x94, 0xd6, 0xf7, 0xc3, 0xdf, 0xfb, 0x70,
+		0xec, 0xb5, 0x8c, 0xb0, 0x78, 0x7a, 0xee, 0x52,
+		0x5f, 0x8c, 0xae, 0x85, 0xe5, 0x98, 0xa2, 0xb7,
+		0x7c, 0x02, 0x2a, 0xcc, 0x9e, 0xde, 0x99, 0x5f,
+		0x84, 0x20, 0xbb, 0xdc, 0xf2, 0xd2, 0x13, 0x46,
+		0x3c, 0xd6, 0x4d, 0xe7, 0x50, 0xef, 0x55, 0xc3,
+		0x96, 0x9f, 0xec, 0x6c, 0xd8, 0xe2, 0xea, 0xed,
+		0xc7, 0x33, 0xc9, 0xb3, 0x1c, 0x4f, 0x1d, 0x83,
+		0x1d, 0xe4, 0xdd, 0xb2, 0x24, 0x8f, 0xf9, 0xf5
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_64B_digest[] = {
+		0xc5, 0x6d, 0x4f, 0x29, 0xf4, 0xd2, 0xcc, 0x87,
+		0x3c, 0x81, 0x02, 0x6d, 0x38, 0x7a, 0x67, 0x3e,
+		0x95, 0x9c, 0x5c, 0x8f, 0xda, 0x5c, 0x06, 0xe0,
+		0x65, 0xf1, 0x6c, 0x51, 0x52, 0x49, 0x3e, 0x5f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_128B_digest[] = {
+		0x76, 0x64, 0x2d, 0x69, 0x71, 0x5d, 0x6a, 0xd8,
+		0x9f, 0x74, 0x11, 0x2f, 0x58, 0xe0, 0x4a, 0x2f,
+		0x6c, 0x88, 0x5e, 0x4d, 0x9c, 0x79, 0x83, 0x1c,
+		0x8a, 0x14, 0xd0, 0x07, 0xfb, 0xbf, 0x6c, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_256B_digest[] = {
+		0x05, 0xa7, 0x44, 0xcd, 0x91, 0x8c, 0x95, 0xcf,
+		0x7b, 0x8f, 0xd3, 0x90, 0x86, 0x7e, 0x7b, 0xb9,
+		0x05, 0xd6, 0x6e, 0x7a, 0xc1, 0x7b, 0x26, 0xff,
+		0xd3, 0x4b, 0xe0, 0x22, 0x8b, 0xa8, 0x47, 0x52
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_512B_digest[] = {
+		0x08, 0xb7, 0x29, 0x54, 0x18, 0x7e, 0x97, 0x49,
+		0xc6, 0x7c, 0x9f, 0x94, 0xa5, 0x4f, 0xa2, 0x25,
+		0xd0, 0xe2, 0x30, 0x7b, 0xad, 0x93, 0xc9, 0x12,
+		0x0f, 0xf0, 0xf0, 0x71, 0xc2, 0xf6, 0x53, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_768B_digest[] = {
+		0xe4, 0x3e, 0x73, 0x93, 0x03, 0xaf, 0x6f, 0x9c,
+		0xca, 0x57, 0x3b, 0x4a, 0x6e, 0x83, 0x58, 0xf5,
+		0x66, 0xc2, 0xb4, 0xa7, 0xe0, 0xee, 0x63, 0x6b,
+		0x48, 0xb7, 0x50, 0x45, 0x69, 0xdf, 0x5c, 0x5b
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1024B_digest[] = {
+		0x03, 0xb9, 0x96, 0x26, 0xdc, 0x1c, 0xab, 0xe2,
+		0xf5, 0x70, 0x55, 0x15, 0x67, 0x6e, 0x48, 0x11,
+		0xe7, 0x67, 0xea, 0xfa, 0x5c, 0x6b, 0x28, 0x22,
+		0xc9, 0x0e, 0x67, 0x04, 0xb3, 0x71, 0x7f, 0x88
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1280B_digest[] = {
+		0x01, 0x91, 0xb8, 0x78, 0xd3, 0x21, 0x74, 0xa5,
+		0x1c, 0x8b, 0xd4, 0xd2, 0xc0, 0x49, 0xd7, 0xd2,
+		0x16, 0x46, 0x66, 0x85, 0x50, 0x6d, 0x08, 0xcc,
+		0xc7, 0x0a, 0xa3, 0x71, 0xcc, 0xde, 0xee, 0xdc
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1536B_digest[] = {
+		0xf2, 0xe5, 0xe9, 0x57, 0x53, 0xd7, 0x69, 0x28,
+		0x7b, 0x69, 0xb5, 0x49, 0xa3, 0x31, 0x56, 0x5f,
+		0xa4, 0xe9, 0x87, 0x26, 0x2f, 0xe0, 0x2d, 0xd6,
+		0x08, 0x44, 0x01, 0x71, 0x0c, 0x93, 0x85, 0x84
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1792B_digest[] = {
+		0xf6, 0x57, 0x62, 0x01, 0xbf, 0x2d, 0xea, 0x4a,
+		0xef, 0x43, 0x85, 0x60, 0x18, 0xdf, 0x8b, 0xb4,
+		0x60, 0xc0, 0xfd, 0x2f, 0x90, 0x15, 0xe6, 0x91,
+		0x56, 0x61, 0x68, 0x7f, 0x5e, 0x92, 0xa8, 0xdd
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_2048B_digest[] = {
+		0x81, 0x1a, 0x29, 0xbc, 0x6b, 0x9f, 0xbb, 0xb8,
+		0xef, 0x71, 0x7b, 0x1f, 0x6f, 0xd4, 0x7e, 0x68,
+		0x3a, 0x9c, 0xb9, 0x98, 0x22, 0x81, 0xfa, 0x95,
+		0xee, 0xbc, 0x7f, 0x23, 0x29, 0x88, 0x76, 0xb8
+};
+
+struct crypto_data_params {
+	const char *name;
+	uint16_t length;
+	const char *plaintext;
+	struct crypto_expected_output {
+		const uint8_t *ciphertext;
+		const uint8_t *digest;
+	} expected;
+};
+
+#define MAX_PACKET_SIZE_INDEX	10
+
+struct crypto_data_params aes_cbc_hmac_sha256_output[MAX_PACKET_SIZE_INDEX] = {
+	{ "64B", 64, &plaintext_quote[sizeof(plaintext_quote) - 1 - 64],
+		{ AES_CBC_ciphertext_64B, HMAC_SHA256_ciphertext_64B_digest } },
+	{ "128B", 128, &plaintext_quote[sizeof(plaintext_quote) - 1 - 128],
+		{ AES_CBC_ciphertext_128B, HMAC_SHA256_ciphertext_128B_digest } },
+	{ "256B", 256, &plaintext_quote[sizeof(plaintext_quote) - 1 - 256],
+		{ AES_CBC_ciphertext_256B, HMAC_SHA256_ciphertext_256B_digest } },
+	{ "512B", 512, &plaintext_quote[sizeof(plaintext_quote) - 1 - 512],
+		{ AES_CBC_ciphertext_512B, HMAC_SHA256_ciphertext_512B_digest } },
+	{ "768B", 768, &plaintext_quote[sizeof(plaintext_quote) - 1 - 768],
+		{ AES_CBC_ciphertext_768B, HMAC_SHA256_ciphertext_768B_digest } },
+	{ "1024B", 1024, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1024],
+		{ AES_CBC_ciphertext_1024B, HMAC_SHA256_ciphertext_1024B_digest } },
+	{ "1280B", 1280, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1280],
+		{ AES_CBC_ciphertext_1280B, HMAC_SHA256_ciphertext_1280B_digest } },
+	{ "1536B", 1536, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1536],
+		{ AES_CBC_ciphertext_1536B, HMAC_SHA256_ciphertext_1536B_digest } },
+	{ "1792B", 1792, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1792],
+		{ AES_CBC_ciphertext_1792B, HMAC_SHA256_ciphertext_1792B_digest } },
+	{ "2048B", 2048, &plaintext_quote[sizeof(plaintext_quote) - 1 - 2048],
+		{ AES_CBC_ciphertext_2048B, HMAC_SHA256_ciphertext_2048B_digest } }
+};
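+
+/*
+ * Each entry above points at the tail of plaintext_quote, so one source
+ * buffer covers every payload size. Verifying a result is then just a
+ * table lookup and two compares, e.g. (illustrative sketch only; ct and
+ * digest stand for wherever the device wrote its output):
+ *
+ *	const struct crypto_data_params *p = &aes_cbc_hmac_sha256_output[0];
+ *
+ *	if (memcmp(ct, p->expected.ciphertext, p->length) ||
+ *	    memcmp(digest, p->expected.digest, DIGEST_BYTE_LENGTH_SHA256))
+ *		return TEST_FAILED;
+ */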
+
+static int
+test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
+{
+	uint32_t num_to_submit = 2048;
+	struct rte_mbuf *rx_mbufs[num_to_submit], *tx_mbufs[num_to_submit];
+	uint64_t failed_polls, retries, start_cycles, end_cycles, total_cycles = 0;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, burst_size, num_sent, num_received;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is the kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
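+	/*
+	 * Each mbuf below ends up laid out as [IV | ciphertext | digest]:
+	 * the digest is appended and the IV prepended, which is why the
+	 * to_cipher/to_hash offsets start CIPHER_IV_LENGTH_AES_CBC bytes in.
+	 */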
+	/* Generate Crypto op data structure(s) */
+	for (b = 0; b < num_to_submit ; b++) {
+		tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+				(const char *)data_params[0].expected.ciphertext,
+				data_params[0].length, 0);
+		TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+		ut_params->digest = (uint8_t *)rte_pktmbuf_append(tx_mbufs[b],
+				DIGEST_BYTE_LENGTH_SHA256);
+		TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+		rte_memcpy(ut_params->digest, data_params[0].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+		struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+				ts_params->mbuf_ol_pool, RTE_PKTMBUF_OL_CRYPTO);
+		TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+		struct rte_crypto_op *cop = &ol->op.crypto;
+
+		rte_crypto_op_attach_session(cop, ut_params->sess);
+
+		cop->digest.data = ut_params->digest;
+		cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(tx_mbufs[b],
+				data_params[0].length);
+		cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+		cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+				CIPHER_IV_LENGTH_AES_CBC);
+		cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+		cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+		rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+		cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_cipher.length = data_params[0].length;
+
+		cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_hash.length = data_params[0].length;
+
+		rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+	}
+
+	printf("\nTest to measure the IA cycle cost of the AES128_CBC_SHA256_HMAC "
+			"algorithm with a constant request size of %u.",
+			data_params[0].length);
+	printf("\nThis test will keep retries at 0 and only measure IA cycle "
+			"cost for each request.");
+	printf("\nDev No\tQP No\tNum Sent\tNum Received\tTx/Rx burst");
+	printf("\tRetries (Device Busy)\tAverage IA cycle cost "
+			"(assuming 0 retries)");
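+	/*
+	 * Only the cycles spent inside the enqueue and dequeue calls are
+	 * added to total_cycles; the 1ms delay between them is untimed and
+	 * simply gives the device time to drain, so the dequeue polls are
+	 * less likely to come back empty.
+	 */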
+	for (b = 2; b <= 128 ; b *= 2) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+		burst_size = b;
+		total_cycles = 0;
+		while (num_sent < num_to_submit) {
+			start_cycles = rte_rdtsc_precise();
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0,
+					&tx_mbufs[num_sent],
+					((num_to_submit-num_sent) < burst_size) ?
+					num_to_submit-num_sent : burst_size);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += (end_cycles - start_cycles);
+			/*
+			 * Wait until requests have been sent.
+			 */
+			rte_delay_ms(1);
+
+			start_cycles = rte_rdtsc_precise();
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += end_cycles - start_cycles;
+		}
+		while (num_received != num_to_submit) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+
+		printf("\n%u\t%u\t%u\t\t%u\t\t%u", dev_num, 0,
+					num_sent, num_received, burst_size);
+		printf("\t\t%"PRIu64, retries);
+		printf("\t\t\t%"PRIu64, total_cycles/num_received);
+	}
+	printf("\n");
+
+	for (b = 0; b < num_to_submit; b++) {
+		struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+		/* walk the offload chain, saving the next pointer before
+		 * each element is freed */
+		while (ol != NULL) {
+			struct rte_mbuf_offload *next = ol->next;
+
+			rte_pktmbuf_offload_free(ol);
+			ol = next;
+		}
+		rte_pktmbuf_free(tx_mbufs[b]);
+	}
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(uint16_t dev_num)
+{
+	uint16_t index;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, num_sent, num_received, throughput;
+	uint64_t failed_polls, retries, start_cycles, end_cycles;
+	const uint64_t mhz = rte_get_tsc_hz()/1000000;
+	double mmps;
+	struct rte_mbuf *rx_mbufs[DEFAULT_BURST_SIZE], *tx_mbufs[DEFAULT_BURST_SIZE];
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is the kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	printf("\nThroughput test which will continually attempt to send "
+			"AES128_CBC_SHA256_HMAC requests with a constant burst "
+			"size of %u while varying payload sizes", DEFAULT_BURST_SIZE);
+	printf("\nDev No\tQP No\tReq Size(B)\tNum Sent\tNum Received\t"
+			"Mrps\tThroughput(Mbps)");
+	printf("\tRetries (Attempted a burst, but the device was busy)");
+	for (index = 0; index < MAX_PACKET_SIZE_INDEX; index++) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+
+		/* Generate Crypto op data structure(s) */
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+					data_params[index].plaintext,
+					data_params[index].length,
+					0);
+
+			ut_params->digest = (uint8_t *)rte_pktmbuf_append(
+				tx_mbufs[b], DIGEST_BYTE_LENGTH_SHA256);
+			TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+			rte_memcpy(ut_params->digest, data_params[index].expected.digest,
+					DIGEST_BYTE_LENGTH_SHA256);
+
+			struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+						ts_params->mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+			TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+			struct rte_crypto_op *cop = &ol->op.crypto;
+
+			rte_crypto_op_attach_session(cop, ut_params->sess);
+
+			cop->digest.data = ut_params->digest;
+			cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+				tx_mbufs[b], data_params[index].length);
+			cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+			cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+					CIPHER_IV_LENGTH_AES_CBC);
+			cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+			cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+			rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+			cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_cipher.length = data_params[index].length;
+
+			cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_hash.length = data_params[index].length;
+
+			rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+		}
+		start_cycles = rte_rdtsc_precise();
+		while (num_sent < DEFAULT_NUM_REQS_TO_SUBMIT) {
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+				((DEFAULT_NUM_REQS_TO_SUBMIT-num_sent) < DEFAULT_BURST_SIZE) ?
+				DEFAULT_NUM_REQS_TO_SUBMIT-num_sent : DEFAULT_BURST_SIZE);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+					0, rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		while (num_received != DEFAULT_NUM_REQS_TO_SUBMIT) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0,
+						rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		end_cycles = rte_rdtsc_precise();
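+		/*
+		 * mhz is TSC ticks per microsecond, so received * mhz /
+		 * elapsed ticks gives millions of requests per second, and
+		 * multiplying by the payload size in bits gives Mbps.
+		 */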
+		mmps = (double)num_received*mhz/(end_cycles - start_cycles);
+		throughput = mmps*data_params[index].length*8;
+		printf("\n%u\t%u\t%u\t\t%u\t%u", dev_num, 0,
+				data_params[index].length, num_sent, num_received);
+		printf("\t%.2f\t%u", mmps, throughput);
+		printf("\t\t%"PRIu64, retries);
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+			/* save the next pointer before each element is freed */
+			while (ol != NULL) {
+				struct rte_mbuf_offload *next = ol->next;
+
+				rte_pktmbuf_offload_free(ol);
+				ol = next;
+			}
+			rte_pktmbuf_free(tx_mbufs[b]);
+		}
+	}
+	printf("\n");
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_encrypt_digest_vary_req_size(void)
+{
+	return test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(
+			testsuite_params.dev_id);
+}
+
+static int
+test_perf_vary_burst_size(void)
+{
+	return test_perf_crypto_qp_vary_burst_size(testsuite_params.dev_id);
+}
+
+static struct unit_test_suite cryptodev_testsuite = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_encrypt_digest_vary_req_size),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_vary_burst_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static int
+perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_QAT_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_perf_cmd = {
+	.command = "cryptodev_aesni_mb_perftest",
+	.callback = perftest_aesni_mb_cryptodev,
+};
+
+static struct test_command cryptodev_qat_perf_cmd = {
+	.command = "cryptodev_qat_perftest",
+	.callback = perftest_qat_cryptodev,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perf_cmd);
+REGISTER_TEST_COMMAND(cryptodev_qat_perf_cmd);
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 388cf11..2d98958 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -4020,7 +4020,7 @@ test_close_bonded_device(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	if (test_params->pkt_eth_hdr != NULL) {
@@ -4029,7 +4029,7 @@ testsuite_teardown(void)
 	}
 
 	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	remove_slaves_and_stop_bonded_device();
 }
 
 static void
@@ -4993,7 +4993,7 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 460539d..713368d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -453,7 +453,7 @@ test_setup(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -467,8 +467,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 /*
@@ -1390,7 +1388,8 @@ static struct unit_test_suite link_bonding_mode4_test_suite  = {
 		TEST_CASE_NAMED("test_mode4_tx_burst", test_mode4_tx_burst_wrapper),
 		TEST_CASE_NAMED("test_mode4_marker", test_mode4_marker_wrapper),
 		TEST_CASE_NAMED("test_mode4_expired", test_mode4_expired_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v4 6/6] l2fwd-crypto: crypto
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
                         ` (4 preceding siblings ...)
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
@ 2015-11-03 17:45       ` Declan Doherty
  2015-11-03 21:20       ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Sergio Gonzalez Monroy
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
  7 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-03 17:45 UTC (permalink / raw)
  To: dev

This patch creates a new sample application, based on the l2fwd
application, which performs the specified crypto operations on the IP
payload of the packets being forwarded.
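
A typical invocation might look like the following (illustrative only;
the exact algorithm option values are whatever the parsers in main.c
accept):

  ./build/l2fwd-crypto -c 0x3 -n 4 -- -p 0x3 --cdev AESNI_MB \
      --chain CIPHER_HASH --cipher_op ENCRYPT --auth_op GENERATE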

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 examples/l2fwd-crypto/Makefile |   50 ++
 examples/l2fwd-crypto/main.c   | 1473 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1523 insertions(+)
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c

diff --git a/examples/l2fwd-crypto/Makefile b/examples/l2fwd-crypto/Makefile
new file mode 100644
index 0000000..e8224ca
--- /dev/null
+++ b/examples/l2fwd-crypto/Makefile
@@ -0,0 +1,50 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = l2fwd-crypto
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
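+
+# A minimal build sketch (assumption: RTE_SDK points at a built DPDK
+# tree; the target below is this Makefile's own default):
+#   export RTE_SDK=/path/to/dpdk
+#   export RTE_TARGET=x86_64-native-linuxapp-gcc
+#   make -C examples/l2fwd-crypto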
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
new file mode 100644
index 0000000..10ec513
--- /dev/null
+++ b/examples/l2fwd-crypto/main.c
@@ -0,0 +1,1473 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_ip.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_per_lcore.h>
+#include <rte_prefetch.h>
+#include <rte_random.h>
+#include <rte_ring.h>
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define NB_MBUF   8192
+
+#define MAX_PKT_BURST 32
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 128
+#define RTE_TEST_TX_DESC_DEFAULT 512
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint64_t l2fwd_enabled_port_mask;
+static uint64_t l2fwd_enabled_crypto_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+struct pkt_buffer {
+	unsigned len;
+	struct rte_mbuf *buffer[MAX_PKT_BURST];
+};
+
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+enum l2fwd_crypto_xform_chain {
+	L2FWD_CRYPTO_CIPHER_HASH,
+	L2FWD_CRYPTO_HASH_CIPHER
+};
+
+/** l2fwd crypto application command line options */
+struct l2fwd_crypto_options {
+	unsigned portmask;
+	unsigned nb_ports_per_lcore;
+	unsigned refresh_period;
+	unsigned single_lcore:1;
+	unsigned no_stats_printing:1;
+
+	enum rte_cryptodev_type cdev_type;
+	unsigned sessionless:1;
+
+	enum l2fwd_crypto_xform_chain xform_chain;
+
+	struct rte_crypto_xform cipher_xform;
+	uint8_t ckey_data[32];
+
+	struct rte_crypto_key iv_key;
+	uint8_t ivkey_data[16];
+
+	struct rte_crypto_xform auth_xform;
+	uint8_t akey_data[128];
+};
+
+/** l2fwd crypto lcore params */
+struct l2fwd_crypto_params {
+	uint8_t dev_id;
+	uint8_t qp_id;
+
+	unsigned digest_length;
+	unsigned block_size;
+
+	struct rte_crypto_key iv_key;
+	struct rte_cryptodev_session *session;
+};
+
+/** lcore configuration */
+struct lcore_queue_conf {
+	unsigned nb_rx_ports;
+	unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+
+	unsigned nb_crypto_devs;
+	unsigned cryptodev_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct pkt_buffer crypto_pkt_buf[RTE_MAX_ETHPORTS];
+	struct pkt_buffer tx_pkt_buf[RTE_MAX_ETHPORTS];
+} __rte_cache_aligned;
+
+struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static const struct rte_eth_conf port_conf = {
+	.rxmode = {
+		.split_hdr_size = 0,
+		.header_split   = 0, /**< Header Split disabled */
+		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
+		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
+		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
+		.hw_strip_crc   = 0, /**< CRC stripping by hardware disabled */
+	},
+	.txmode = {
+		.mq_mode = ETH_MQ_TX_NONE,
+	},
+};
+
+struct rte_mempool *l2fwd_pktmbuf_pool;
+struct rte_mempool *l2fwd_mbuf_ol_pool;
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+	uint64_t tx;
+	uint64_t rx;
+
+	uint64_t crypto_enqueued;
+	uint64_t crypto_dequeued;
+
+	uint64_t dropped;
+} __rte_cache_aligned;
+
+struct l2fwd_crypto_statistics {
+	uint64_t enqueued;
+	uint64_t dequeued;
+
+	uint64_t errors;
+} __rte_cache_aligned;
+
+struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+struct l2fwd_crypto_statistics crypto_statistics[RTE_MAX_ETHPORTS];
+
+/* A tsc-based timer responsible for triggering statistics printout */
+#define TIMER_MILLISECOND 2000000ULL /* around 1ms at 2 GHz */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+
+/* default period is 10 seconds */
+static int64_t timer_period = 10 * TIMER_MILLISECOND * 1000;
+
+uint64_t total_packets_dropped = 0, total_packets_tx = 0, total_packets_rx = 0,
+	total_packets_enqueued = 0, total_packets_dequeued = 0,
+	total_packets_errors = 0;
+
+/* Print out statistics on packets dropped */
+static void
+print_stats(void)
+{
+	unsigned portid;
+	uint64_t cdevid;
+
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+	/* Clear screen and move to top left */
+	printf("%s%s", clr, topLeft);
+
+	printf("\nPort statistics ====================================");
+
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+			continue;
+		printf("\nStatistics for port %u ------------------------------"
+			   "\nPackets sent: %32"PRIu64
+			   "\nPackets received: %28"PRIu64
+			   "\nPackets dropped: %29"PRIu64,
+			   portid,
+			   port_statistics[portid].tx,
+			   port_statistics[portid].rx,
+			   port_statistics[portid].dropped);
+
+		total_packets_dropped += port_statistics[portid].dropped;
+		total_packets_tx += port_statistics[portid].tx;
+		total_packets_rx += port_statistics[portid].rx;
+	}
+	printf("\nCrypto statistics ==================================");
+
+	for (cdevid = 0; cdevid < RTE_CRYPTO_MAX_DEVS; cdevid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_crypto_mask & (1lu << cdevid)) == 0)
+			continue;
+		printf("\nStatistics for cryptodev %lu -------------------------"
+			   "\nPackets enqueued: %28"PRIu64
+			   "\nPackets dequeued: %28"PRIu64
+			   "\nPackets errors: %30"PRIu64,
+			   cdevid,
+			   crypto_statistics[cdevid].enqueued,
+			   crypto_statistics[cdevid].dequeued,
+			   crypto_statistics[cdevid].errors);
+
+		total_packets_enqueued += crypto_statistics[cdevid].enqueued;
+		total_packets_dequeued += crypto_statistics[cdevid].dequeued;
+		total_packets_errors += crypto_statistics[cdevid].errors;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %22"PRIu64
+		   "\nTotal packets enqueued: %22"PRIu64
+		   "\nTotal packets dequeued: %22"PRIu64
+		   "\nTotal packets sent: %26"PRIu64
+		   "\nTotal packets dropped: %23"PRIu64
+		   "\nTotal packets crypto errors: %17"PRIu64,
+		   total_packets_rx,
+		   total_packets_enqueued,
+		   total_packets_dequeued,
+		   total_packets_tx,
+		   total_packets_dropped,
+		   total_packets_errors);
+	printf("\n====================================================\n");
+}
+
+static int
+l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+
+	pkt_buffer = (struct rte_mbuf **)
+			qconf->crypto_pkt_buf[cparams->dev_id].buffer;
+
+	ret = rte_cryptodev_enqueue_burst(cparams->dev_id, cparams->qp_id,
+			pkt_buffer, (uint16_t) n);
+	crypto_statistics[cparams->dev_id].enqueued += ret;
+	if (unlikely(ret < n)) {
+		crypto_statistics[cparams->dev_id].errors += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+static int
+l2fwd_crypto_enqueue(struct rte_mbuf *m, struct l2fwd_crypto_params *cparams)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->crypto_pkt_buf[cparams->dev_id].len;
+	qconf->crypto_pkt_buf[cparams->dev_id].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (len == MAX_PKT_BURST) {
+		l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
+		len = 0;
+	}
+
+	qconf->crypto_pkt_buf[cparams->dev_id].len = len;
+	return 0;
+}
+
+static int
+l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
+		struct rte_mbuf_offload *ol,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct ether_hdr *eth_hdr;
+	struct ipv4_hdr *ip_hdr;
+
+	unsigned ipdata_offset, pad_len, data_len;
+	char *padding;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (eth_hdr->ether_type != rte_cpu_to_be_16(ETHER_TYPE_IPv4))
+		return -1;
+
+	ipdata_offset = sizeof(struct ether_hdr);
+
+	ip_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
+			ipdata_offset);
+
+	ipdata_offset += (ip_hdr->version_ihl & IPV4_HDR_IHL_MASK)
+			* IPV4_IHL_MULTIPLIER;
+
+	/* Zero pad data to be crypto'd so it is block aligned */
+	data_len  = rte_pktmbuf_data_len(m) - ipdata_offset;
+	pad_len = data_len % cparams->block_size ? cparams->block_size -
+			(data_len % cparams->block_size) : 0;
+
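+	/* e.g. with the 64-byte block size this application configures
+	 * below, a 100-byte payload picks up 28 bytes of zero padding */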
+	if (pad_len) {
+		padding = rte_pktmbuf_append(m, pad_len);
+		if (unlikely(!padding))
+			return -1;
+
+		data_len += pad_len;
+		memset(padding, 0, pad_len);
+	}
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(&ol->op.crypto, cparams->session);
+
+	/* Append space for digest to end of packet */
+	ol->op.crypto.digest.data = (uint8_t *)rte_pktmbuf_append(m,
+			cparams->digest_length);
+	ol->op.crypto.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
+			rte_pktmbuf_pkt_len(m) - cparams->digest_length);
+	ol->op.crypto.digest.length = cparams->digest_length;
+
+	ol->op.crypto.iv.data = cparams->iv_key.data;
+	ol->op.crypto.iv.phys_addr = cparams->iv_key.phys_addr;
+	ol->op.crypto.iv.length = cparams->iv_key.length;
+
+	ol->op.crypto.data.to_cipher.offset = ipdata_offset;
+	ol->op.crypto.data.to_cipher.length = data_len;
+
+	ol->op.crypto.data.to_hash.offset = ipdata_offset;
+	ol->op.crypto.data.to_hash.length = data_len;
+
+	rte_pktmbuf_offload_attach(m, ol);
+
+	return l2fwd_crypto_enqueue(m, cparams);
+}
+
+/* Send the burst of packets on an output interface */
+static int
+l2fwd_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint8_t port)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+	unsigned queueid = 0;
+
+	pkt_buffer = (struct rte_mbuf **)qconf->tx_pkt_buf[port].buffer;
+
+	ret = rte_eth_tx_burst(port, (uint16_t) queueid, pkt_buffer,
+			(uint16_t)n);
+	port_statistics[port].tx += ret;
+	if (unlikely(ret < n)) {
+		port_statistics[port].dropped += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+/* Enqueue packets for TX and prepare them to be sent */
+static int
+l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->tx_pkt_buf[port].len;
+	qconf->tx_pkt_buf[port].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (unlikely(len == MAX_PKT_BURST)) {
+		l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
+		len = 0;
+	}
+
+	qconf->tx_pkt_buf[port].len = len;
+	return 0;
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
+{
+	struct ether_hdr *eth;
+	void *tmp;
+	unsigned dst_port;
+
+	dst_port = l2fwd_dst_ports[portid];
+	eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	/* 02:00:00:00:00:xx */
+	tmp = &eth->d_addr.addr_bytes[0];
+	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+	/* src addr */
+	ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+
+	l2fwd_send_packet(m, (uint8_t) dst_port);
+}
+
+/** Generate random key */
+static void
+generate_random_key(uint8_t *key, unsigned length)
+{
+	unsigned i;
+
+	for (i = 0; i < length; i++)
+		key[i] = rand() & 0xff;
+}
+
+static struct rte_cryptodev_session *
+initialize_crypto_session(struct l2fwd_crypto_options *options,
+		uint8_t cdev_id)
+{
+	struct rte_crypto_xform *first_xform;
+
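+	/* The chain order decides what the device applies first: cipher
+	 * then hash, or hash then cipher. */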
+	if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
+		first_xform = &options->cipher_xform;
+		first_xform->next = &options->auth_xform;
+	} else {
+		first_xform = &options->auth_xform;
+		first_xform->next = &options->cipher_xform;
+	}
+
+	/* Create the session from the assembled transform chain */
+	return rte_cryptodev_session_create(cdev_id, first_xform);
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options);
+
+/* main processing loop */
+static void
+l2fwd_main_loop(struct l2fwd_crypto_options *options)
+{
+	struct rte_mbuf *m, *pkts_burst[MAX_PKT_BURST];
+	unsigned lcore_id = rte_lcore_id();
+	uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+	unsigned i, j, portid, nb_rx;
+	struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+	struct l2fwd_crypto_params *cparams;
+	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
+
+	if (qconf->nb_rx_ports == 0) {
+		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+	l2fwd_crypto_options_print(options);
+
+	for (i = 0; i < qconf->nb_rx_ports; i++) {
+
+		portid = qconf->rx_port_list[i];
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+			portid);
+	}
+
+	for (i = 0; i < qconf->nb_crypto_devs; i++) {
+		port_cparams[i].dev_id = qconf->cryptodev_list[i];
+		port_cparams[i].qp_id = 0;
+
+		port_cparams[i].block_size = 64;
+		port_cparams[i].digest_length = 20;
+
+		port_cparams[i].iv_key.data =
+				(uint8_t *)rte_malloc(NULL, 16, 8);
+		port_cparams[i].iv_key.length = 16;
+		port_cparams[i].iv_key.phys_addr = rte_malloc_virt2phy(
+				(void *)port_cparams[i].iv_key.data);
+		generate_random_key(port_cparams[i].iv_key.data,
+				port_cparams[i].iv_key.length);
+
+		port_cparams[i].session = initialize_crypto_session(options,
+				port_cparams[i].dev_id);
+
+		if (port_cparams[i].session == NULL)
+			return;
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u cryptoid=%u\n", lcore_id,
+				port_cparams[i].dev_id);
+	}
+
+	while (1) {
+
+		cur_tsc = rte_rdtsc();
+
+		/*
+		 * TX burst queue drain
+		 */
+		diff_tsc = cur_tsc - prev_tsc;
+		if (unlikely(diff_tsc > drain_tsc)) {
+
+			for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+				if (qconf->tx_pkt_buf[portid].len == 0)
+					continue;
+				l2fwd_send_burst(&lcore_queue_conf[lcore_id],
+						 qconf->tx_pkt_buf[portid].len,
+						 (uint8_t) portid);
+				qconf->tx_pkt_buf[portid].len = 0;
+			}
+
+			/* if timer is enabled */
+			if (timer_period > 0) {
+
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >=
+						(uint64_t)timer_period)) {
+
+					/* do this only on master core */
+					if (lcore_id == rte_get_master_lcore() &&
+							!options->no_stats_printing) {
+						print_stats();
+						/* reset the timer */
+						timer_tsc = 0;
+					}
+				}
+			}
+
+			prev_tsc = cur_tsc;
+		}
+
+		/*
+		 * Read packet from RX queues
+		 */
+		for (i = 0; i < qconf->nb_rx_ports; i++) {
+			struct rte_mbuf_offload *ol;
+
+			portid = qconf->rx_port_list[i];
+
+			cparams = &port_cparams[i];
+
+			nb_rx = rte_eth_rx_burst((uint8_t) portid, 0,
+						 pkts_burst, MAX_PKT_BURST);
+
+			port_statistics[portid].rx += nb_rx;
+
+			/* Enqueue packets on the Crypto device */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				ol = rte_pktmbuf_offload_alloc(
+						l2fwd_mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				rte_prefetch0((void *)ol);
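+				/* set up the crypto op and submit the packet to the cdev */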
+				l2fwd_simple_crypto_enqueue(m, ol, cparams);
+			}
+
+			/* Dequeue packets from Crypto device */
+			nb_rx = rte_cryptodev_dequeue_burst(
+					cparams->dev_id, cparams->qp_id,
+					pkts_burst, MAX_PKT_BURST);
+			crypto_statistics[cparams->dev_id].dequeued += nb_rx;
+
+			/* Forward crypto'd packets */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				rte_pktmbuf_offload_free(m->offload_ops);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				l2fwd_simple_forward(m, portid);
+			}
+		}
+	}
+}
+
+static int
+l2fwd_launch_one_lcore(void *arg)
+{
+	l2fwd_main_loop((struct l2fwd_crypto_options *)arg);
+	return 0;
+}
+
+/* Display command line arguments usage */
+static void
+l2fwd_crypto_usage(const char *prgname)
+{
+	printf("%s [EAL options] -- --cdev TYPE [optional parameters]\n"
+		"  -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+		"  -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+		"  -s manage all ports from single lcore"
+		"  -t PERIOD: statistics will be refreshed each PERIOD seconds"
+		" (0 to disable, 10 default, 86400 maximum)\n"
+
+		"  --cdev AESNI_MB / QAT\n"
+		"  --chain HASH_CIPHER / CIPHER_HASH\n"
+
+		"  --cipher_algo ALGO\n"
+		"  --cipher_op ENCRYPT / DECRYPT\n"
+		"  --cipher_key KEY\n"
+
+		"  --auth ALGO\n"
+		"  --auth_op GENERATE / VERIFY\n"
+		"  --auth_key KEY\n"
+
+		"  --sessionless\n",
+	       prgname);
+}
+
+/** Parse crypto device type command line argument */
+static int
+parse_cryptodev_type(enum rte_cryptodev_type *type, char *optarg)
+{
+	if (strcmp("AESNI_MB", optarg) == 0) {
+		*type = RTE_CRYPTODEV_AESNI_MB_PMD;
+		return 0;
+	} else if (strcmp("QAT", optarg) == 0) {
+		*type = RTE_CRYPTODEV_QAT_PMD;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto chain xform command line argument */
+static int
+parse_crypto_opt_chain(struct l2fwd_crypto_options *options, char *optarg)
+{
+	if (strcmp("CIPHER_HASH", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+		return 0;
+	} else if (strcmp("HASH_CIPHER", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_HASH_CIPHER;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto cipher algo option command line argument */
+static int
+parse_cipher_algo(enum rte_crypto_cipher_algorithm *algo, char *optarg)
+{
+	if (strcmp("AES_CBC", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_CBC;
+		return 0;
+	} else if (strcmp("AES_GCM", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_GCM;
+		return 0;
+	}
+
+	printf("Cipher algorithm  not supported!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_cipher_op(enum rte_crypto_cipher_operation *op, char *optarg)
+{
+	if (strcmp("ENCRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+		return 0;
+	} else if (strcmp("DECRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+		return 0;
+	}
+
+	printf("Cipher operation not supported!\n");
+	return -1;
+}
+
+/** Parse crypto key command line argument */
+static int
+parse_key(struct rte_crypto_key *key __rte_unused,
+		unsigned length __rte_unused, char *arg __rte_unused)
+{
+	printf("Currently an unsupported argument!\n");
+	return -1;
+}
+
+/** Parse crypto authentication algorithm command line argument */
+static int
+parse_auth_algo(enum rte_crypto_auth_algorithm *algo, char *optarg)
+{
+	if (strcmp("SHA1", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1;
+		return 0;
+	} else if (strcmp("SHA1_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		return 0;
+	} else if (strcmp("SHA224", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224;
+		return 0;
+	} else if (strcmp("SHA224_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		return 0;
+	} else if (strcmp("SHA256", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256;
+		return 0;
+	} else if (strcmp("SHA256_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		return 0;
+	} else if (strcmp("SHA512", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512;
+		return 0;
+	} else if (strcmp("SHA512_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		return 0;
+	}
+
+	printf("Authentication algorithm specified not supported!\n");
+	return -1;
+}
+
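+/** Parse crypto authentication operation command line argument */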
+static int
+parse_auth_op(enum rte_crypto_auth_operation *op, char *optarg)
+{
+	if (strcmp("VERIFY", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_VERIFY;
+		return 0;
+	} else if (strcmp("GENERATE", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_GENERATE;
+		return 0;
+	}
+
+	printf("Authentication operation specified not supported!\n");
+	return -1;
+}
+
+/** Parse long options */
+static int
+l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
+		struct option *lgopts, int option_index)
+{
+	if (strcmp(lgopts[option_index].name, "no_stats") == 0) {
+		options->no_stats_printing = 1;
+		return 0;
+	}
+
+	if (strcmp(lgopts[option_index].name, "cdev_type") == 0)
+		return parse_cryptodev_type(&options->cdev_type, optarg);
+
+	else if (strcmp(lgopts[option_index].name, "chain") == 0)
+		return parse_crypto_opt_chain(options, optarg);
+
+	/* Cipher options */
+	else if (strcmp(lgopts[option_index].name, "cipher_algo") == 0)
+		return parse_cipher_algo(&options->cipher_xform.cipher.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_op") == 0)
+		return parse_cipher_op(&options->cipher_xform.cipher.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_key") == 0)
+		return parse_key(&options->cipher_xform.cipher.key,
+				sizeof(options->ckey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "iv") == 0)
+		return parse_key(&options->iv_key, sizeof(options->ivkey_data),
+				optarg);
+
+	/* Authentication options */
+	else if (strcmp(lgopts[option_index].name, "auth_algo") == 0)
+		return parse_auth_algo(&options->cipher_xform.auth.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_op") == 0)
+		return parse_auth_op(&options->cipher_xform.auth.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_key") == 0)
+		return parse_key(&options->auth_xform.auth.key,
+				sizeof(options->akey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "sessionless") == 0) {
+		options->sessionless = 1;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse port mask */
+static int
+l2fwd_crypto_parse_portmask(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long pm;
+
+	/* parse hexadecimal string */
+	pm = strtoul(q_arg, &end, 16);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		pm = 0;
+
+	options->portmask = pm;
+	if (options->portmask == 0) {
+		printf("invalid portmask specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse number of queues */
+static int
+l2fwd_crypto_parse_nqueue(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long n;
+
+	/* parse decimal string */
+	n = strtoul(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+	else if (n >= MAX_RX_QUEUE_PER_LCORE)
+		n = 0;
+
+	options->nb_ports_per_lcore = n;
+	if (options->nb_ports_per_lcore == 0) {
+		printf("invalid number of ports selected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse timer period */
+static int
+l2fwd_crypto_parse_timer_period(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	int n;
+
+	/* parse number string */
+	n = strtol(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+
+	if (n >= MAX_TIMER_PERIOD)
+		n = 0;
+
+	options->refresh_period = n * 1000 * TIMER_MILLISECOND;
+	if (options->refresh_period == 0) {
+		printf("invalid refresh period specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Generate default options for application */
+static void
+l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
+{
+	srand(time(NULL));
+
+	options->portmask = 0xffffffff;
+	options->nb_ports_per_lcore = 1;
+	options->refresh_period = 10000;
+	options->single_lcore = 0;
+	options->no_stats_printing = 0;
+
+	options->cdev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	options->sessionless = 0;
+	options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+
+	/* Cipher Data */
+	options->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	options->cipher_xform.next = NULL;
+
+	options->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	options->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+
+	generate_random_key(options->ckey_data, sizeof(options->ckey_data));
+
+	options->cipher_xform.cipher.key.data = options->ckey_data;
+	options->cipher_xform.cipher.key.phys_addr = 0;
+	options->cipher_xform.cipher.key.length = 16;
+
+
+	/* Authentication Data */
+	options->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	options->auth_xform.next = NULL;
+
+	options->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	options->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+
+	options->auth_xform.auth.add_auth_data_length = 0;
+	options->auth_xform.auth.digest_length = 20;
+
+	generate_random_key(options->akey_data, sizeof(options->akey_data));
+
+	options->auth_xform.auth.key.data = options->akey_data;
+	options->auth_xform.auth.key.phys_addr = 0;
+	options->auth_xform.auth.key.length = 20;
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
+{
+	printf("Options:-\nn");
+	printf("portmask: %x\n", options->portmask);
+	printf("ports per lcore: %u\n", options->nb_ports_per_lcore);
+	printf("refresh period : %u\n", options->refresh_period);
+	printf("single lcore mode: %s\n",
+			options->single_lcore ? "enabled" : "disabled");
+	printf("stats_printing: %s\n",
+			options->no_stats_printing ? "disabled" : "enabled");
+
+	switch (options->cdev_type) {
+	case RTE_CRYPTODEV_AESNI_MB_PMD:
+		printf("crytpodev type: AES-NI MB PMD\n"); break;
+	case RTE_CRYPTODEV_QAT_PMD:
+		printf("crytpodev type: QAT PMD\n"); break;
+	default:
+		break;
+	}
+
+	printf("sessionless crypto: %s\n",
+			options->sessionless ? "enabled" : "disabled");
+}
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
+		int argc, char **argv)
+{
+	int opt, retval, option_index;
+	char **argvopt = argv, *prgname = argv[0];
+
+	static struct option lgopts[] = {
+			{ "no_stats", no_argument, 0, 0 },
+			{ "sessionless", no_argument, 0, 0 },
+
+			{ "cdev_type", required_argument, 0, 0 },
+			{ "chain", required_argument, 0, 0 },
+
+			{ "cipher_algo", required_argument, 0, 0 },
+			{ "cipher_op", required_argument, 0, 0 },
+			{ "cipher_key", required_argument, 0, 0 },
+
+			{ "auth_algo", required_argument, 0, 0 },
+			{ "auth_op", required_argument, 0, 0 },
+			{ "auth_key", required_argument, 0, 0 },
+
+			{ "iv", required_argument, 0, 0 },
+
+			{ "sessionless", no_argument, 0, 0 },
+			{ NULL, 0, 0, 0 }
+	};
+
+	l2fwd_crypto_default_options(options);
+
+	while ((opt = getopt_long(argc, argvopt, "p:q:st:", lgopts,
+			&option_index)) != EOF) {
+		switch (opt) {
+		/* long options */
+		case 0:
+			retval = l2fwd_crypto_parse_args_long_options(options,
+					lgopts, option_index);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* portmask */
+		case 'p':
+			retval = l2fwd_crypto_parse_portmask(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* nqueue */
+		case 'q':
+			retval = l2fwd_crypto_parse_nqueue(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* single  */
+		case 's':
+			options->single_lcore = 1;
+
+			break;
+
+		/* timer period */
+		case 't':
+			retval = l2fwd_crypto_parse_timer_period(options,
+					optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		default:
+			l2fwd_crypto_usage(prgname);
+			return -1;
+		}
+	}
+
+
+	if (optind >= 0)
+		argv[optind-1] = prgname;
+
+	retval = optind-1;
+	optind = 0; /* reset getopt lib */
+
+	return retval;
+}
+
+/* Check the link status of all ports for up to 9s, then print the final status */
+static void
+check_all_ports_link_status(uint8_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+	uint8_t portid, count, all_ports_up, print_flag = 0;
+	struct rte_eth_link link;
+
+	printf("\nChecking link status");
+	fflush(stdout);
+	for (count = 0; count <= MAX_CHECK_TIME; count++) {
+		all_ports_up = 1;
+		for (portid = 0; portid < port_num; portid++) {
+			if ((port_mask & (1 << portid)) == 0)
+				continue;
+			memset(&link, 0, sizeof(link));
+			rte_eth_link_get_nowait(portid, &link);
+			/* print link status if flag set */
+			if (print_flag == 1) {
+				if (link.link_status)
+					printf("Port %d Link Up - speed %u "
+						"Mbps - %s\n", (uint8_t)portid,
+						(unsigned)link.link_speed,
+				(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+					("full-duplex") : ("half-duplex\n"));
+				else
+					printf("Port %d Link Down\n",
+						(uint8_t)portid);
+				continue;
+			}
+			/* clear all_ports_up flag if any link down */
+			if (link.link_status == 0) {
+				all_ports_up = 0;
+				break;
+			}
+		}
+		/* after finally printing all link status, get out */
+		if (print_flag == 1)
+			break;
+
+		if (all_ports_up == 0) {
+			printf(".");
+			fflush(stdout);
+			rte_delay_ms(CHECK_INTERVAL);
+		}
+
+		/* set the print_flag if all ports up or timeout */
+		if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+			print_flag = 1;
+			printf("done\n");
+		}
+	}
+}
+
+static int
+initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports)
+{
+	unsigned i, cdev_id, cdev_count, enabled_cdev_count = 0;
+	int retval;
+
+	if (options->cdev_type == RTE_CRYPTODEV_QAT_PMD) {
+		if (rte_cryptodev_count() < nb_ports)
+			return -1;
+	} else if (options->cdev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		for (i = 0; i < nb_ports; i++) {
+			int id = rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD,
+					NULL);
+			if (id < 0)
+				return -1;
+		}
+	}
+
+	cdev_count = rte_cryptodev_count();
+	for (cdev_id = 0;
+			cdev_id < cdev_count && enabled_cdev_count < nb_ports;
+			cdev_id++) {
+		struct rte_cryptodev_qp_conf qp_conf;
+		struct rte_cryptodev_info dev_info;
+
+		struct rte_cryptodev_config conf = {
+			.nb_queue_pairs = 1,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 64
+			}
+		};
+
+		rte_cryptodev_info_get(cdev_id, &dev_info);
+
+		if (dev_info.dev_type != options->cdev_type)
+			continue;
+
+
+		retval = rte_cryptodev_configure(cdev_id, &conf);
+		if (retval < 0) {
+			printf("Failed to configure cryptodev %u", cdev_id);
+			return -1;
+		}
+
+		qp_conf.nb_descriptors = 2048;
+
+		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
+				SOCKET_ID_ANY);
+		if (retval < 0) {
+			printf("Failed to setup queue pair %u on cryptodev %u",
+					0, cdev_id);
+			return -1;
+		}
+
+		l2fwd_enabled_crypto_mask |= (1 << cdev_id);
+
+		enabled_cdev_count++;
+	}
+
+	return enabled_cdev_count;
+}
+
+static int
+initialize_ports(struct l2fwd_crypto_options *options)
+{
+	uint8_t last_portid, portid;
+	unsigned enabled_portcount = 0;
+	unsigned nb_ports = rte_eth_dev_count();
+
+	if (nb_ports == 0) {
+		printf("No Ethernet ports - bye\n");
+		return -1;
+	}
+
+	if (nb_ports > RTE_MAX_ETHPORTS)
+		nb_ports = RTE_MAX_ETHPORTS;
+
+	/* Reset l2fwd_dst_ports */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+		l2fwd_dst_ports[portid] = 0;
+
+	for (last_portid = 0, portid = 0; portid < nb_ports; portid++) {
+		int retval;
+
+		/* Skip ports that are not enabled */
+		if ((options->portmask & (1 << portid)) == 0)
+			continue;
+
+		/* init port */
+		printf("Initializing port %u... ", (unsigned) portid);
+		fflush(stdout);
+		retval = rte_eth_dev_configure(portid, 1, 1, &port_conf);
+		if (retval < 0) {
+			printf("Cannot configure device: err=%d, port=%u\n",
+				  retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one RX queue */
+		fflush(stdout);
+		retval = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+					     rte_eth_dev_socket_id(portid),
+					     NULL, l2fwd_pktmbuf_pool);
+		if (retval < 0) {
+			printf("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one TX queue on each port */
+		fflush(stdout);
+		retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+				rte_eth_dev_socket_id(portid),
+				NULL);
+		if (retval < 0) {
+			printf("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+				retval, (unsigned) portid);
+
+			return -1;
+		}
+
+		/* Start device */
+		retval = rte_eth_dev_start(portid);
+		if (retval < 0) {
+			printf("rte_eth_dev_start:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		rte_eth_promiscuous_enable(portid);
+
+		rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+		printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+				(unsigned) portid,
+				l2fwd_ports_eth_addr[portid].addr_bytes[0],
+				l2fwd_ports_eth_addr[portid].addr_bytes[1],
+				l2fwd_ports_eth_addr[portid].addr_bytes[2],
+				l2fwd_ports_eth_addr[portid].addr_bytes[3],
+				l2fwd_ports_eth_addr[portid].addr_bytes[4],
+				l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+		/* initialize port stats */
+		memset(&port_statistics, 0, sizeof(port_statistics));
+
+		/* Setup port forwarding table */
+		if (enabled_portcount % 2) {
+			l2fwd_dst_ports[portid] = last_portid;
+			l2fwd_dst_ports[last_portid] = portid;
+		} else {
+			last_portid = portid;
+		}
+
+		l2fwd_enabled_port_mask |= (1 << portid);
+		enabled_portcount++;
+	}
+
+	if (enabled_portcount == 1) {
+		l2fwd_dst_ports[last_portid] = last_portid;
+	} else if (enabled_portcount % 2) {
+		printf("odd number of ports in portmask- bye\n");
+		return -1;
+	}
+
+	check_all_ports_link_status(nb_ports, l2fwd_enabled_port_mask);
+
+	return enabled_portcount;
+}
+
+int
+main(int argc, char **argv)
+{
+	struct lcore_queue_conf *qconf;
+	struct l2fwd_crypto_options options;
+
+	uint8_t nb_ports, nb_cryptodevs, portid, cdev_id;
+	unsigned lcore_id, rx_lcore_id;
+	int ret, enabled_cdevcount, enabled_portcount;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+	argc -= ret;
+	argv += ret;
+
+	/* parse application arguments (after the EAL ones) */
+	ret = l2fwd_crypto_parse_args(&options, argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L2FWD-CRYPTO arguments\n");
+
+	/* create the mbuf pool */
+	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 128,
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+	if (l2fwd_pktmbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+	/* create crypto op pool */
+	l2fwd_mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"mbuf_offload_pool", NB_MBUF, 128, 0, rte_socket_id());
+	if (l2fwd_mbuf_ol_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");
+
+	/* Enable Ethernet ports */
+	enabled_portcount = initialize_ports(&options);
+	if (enabled_portcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial Ethernet ports\n");
+
+	nb_ports = rte_eth_dev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, portid = 0;
+			portid < nb_ports; portid++) {
+
+		/* skip ports that are not enabled */
+		if ((options.portmask & (1 << portid)) == 0)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_rx_ports ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->rx_port_list[qconf->nb_rx_ports] = portid;
+		qconf->nb_rx_ports++;
+
+		printf("Lcore %u: RX port %u\n", rx_lcore_id, (unsigned)portid);
+	}
+
+
+	/* Enable Crypto devices */
+	enabled_cdevcount = initialize_cryptodevs(&options, enabled_portcount);
+	if (enabled_cdevcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial crypto devices\n");
+
+	nb_cryptodevs = rte_cryptodev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, cdev_id = 0;
+			cdev_id < nb_cryptodevs && enabled_cdevcount;
+			cdev_id++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(cdev_id, &info);
+
+		/* skip devices of the wrong type */
+		if (options.cdev_type != info.dev_type)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_crypto_devs ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->cryptodev_list[qconf->nb_crypto_devs] = cdev_id;
+		qconf->nb_crypto_devs++;
+
+		enabled_cdevcount--;
+
+		printf("Lcore %u: cryptodev %u\n", rx_lcore_id,
+				(unsigned)cdev_id);
+	}
+
+
+
+	/* launch per-lcore init on every lcore */
+	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)&options,
+			CALL_MASTER);
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (rte_eal_wait_lcore(lcore_id) < 0)
+			return -1;
+	}
+
+	return 0;
+}
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
                         ` (5 preceding siblings ...)
  2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 6/6] l2fwd-crypto: crypto Declan Doherty
@ 2015-11-03 21:20       ` Sergio Gonzalez Monroy
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
  7 siblings, 0 replies; 115+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-11-03 21:20 UTC (permalink / raw)
  To: Declan Doherty, dev

On 03/11/2015 17:45, Declan Doherty wrote:
> This series of patches defines a set of application burst oriented APIs for
> asynchronous symmetric cryptographic functions within DPDK. It also contains a
> poll mode driver cryptographic device framework for the implementation of
> crypto devices within DPDK.
>
> In the patch set we also have included 2 reference implementations of crypto
> PMDs. Currently both implementations  support AES128-CBC with
> HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a purely
>   software PMD based on Intel's multi-buffer library, which utilises both
> AES-NI instructions and vector operations to accelerate crypto operations and
> the second PMD utilises Intel's Quick Assist Technology (on DH895xxC) to provide
> hardware accelerated crypto operations.
>
> The API set supports two functional modes of operation:
>
> 1, A session oriented mode. In this mode the user creates a crypto session
> which defines all the immutable data required to perform a particular crypto
> operation in advance, including cipher/hash algorithms and operations to be
> performed as well as the keys to used etc. The session is then referenced by
> the crypto operation data structure which is a data structure specific to each
> mbuf. It is contains all mutable data about the cryto operation to be
> performed, such as data offsets and lengths into the mbuf's data payload for
> cipher and hash operations to be performed.
>
> 2, A session-less mode. In this mode the user is able to provision crypto
> operations on an mbuf without the need to have a cached session created in
> advance, but at the cost of entailing the overhead of calculating
> authentication pre-computes and preforming key expansions in-line with the
> crypto operation. The crypto xform chain is directly attached to the op struct
> in this mode, so the op struct now contains all of the immutable crypto operation
> parameters that would be normally set within a session. Once all mutable and
> immutable parameters are set the crypto operation data structure can be attached
> to the specified mbuf and enqueued on a specified crypto device for processing.
>
> The patch set contains the following features:
> - Crypto device APIs and device framework
> - Implementation of a software crypto PMD based on multi-buffer library
> - Implementation of a hardware crypto PMD baed on Intel QAT(DH895xxC)
> - Unit and performance test's which give and example of utilising the crypto API's.
> - Sample application which performs crypto operations on the IP payload of the
>    packets being forwarded
>
> Current Status:
> There is no support for chained mbuf's and as mentioned above the PMD's
> have currently implemented support for AES128-CBC/AES256-CBC/AES512-CBC
> and HMAC_SHA1/SHA256/SHA512.
>
> v4:
>   - Some more EOF whitespace and checkpatch fixes
>
> v3:
>   - Fixes a document build error, which I missed in the V2
>   - Fixes for remaining checkpatch errors
>   - Disables QAT and AESNI_MB PMD being build by default as they have external
>     library dependences
>
> v2:
>   - Introduces a new library to support attaching offload operations to a mbuf
>   - Remove unused APIs from cryptodev
>   - PMD code refactor due to new rte_mbuf_offload structure
>   - General bug fixes and code tidy up
>
>
> Declan Doherty (6):
>    cryptodev: Initial DPDK Crypto APIs and device framework release
>    mbuf_offload: library to support attaching offloads to a mbuf
>    qat_crypto_pmd: Addition of a new QAT DPDK PMD.
>    aesni_mb_pmd: Initial implementation of multi buffer based crypto
>      device
>    app/test: add cryptodev unit and performance tests
>    l2fwd-crypto: crypto
>
>
Series Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 00/10] Crypto API and device framework
  2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
                         ` (6 preceding siblings ...)
  2015-11-03 21:20       ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Sergio Gonzalez Monroy
@ 2015-11-09 20:34       ` Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
                           ` (10 more replies)
  7 siblings, 11 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

This series of patches defines a set of application burst oriented APIs for
asynchronous symmetric cryptographic functions within DPDK. It also contains a
poll mode driver cryptographic device framework for the implementation of
crypto devices within DPDK.

In the patch set we also have included 2 reference implementations of crypto
PMDs. Currently both implementations support AES128-CBC with
HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a
purely software PMD based on Intel's multi-buffer library, which utilises both
AES-NI instructions and vector operations to accelerate crypto operations and
the second PMD utilises Intel's Quick Assist Technology (on DH895xxC) to provide
hardware accelerated crypto operations.

The API set supports two functional modes of operation:

1, A session oriented mode. In this mode the user creates a crypto session
which defines all the immutable data required to perform a particular crypto
operation in advance, including cipher/hash algorithms and operations to be
performed as well as the keys to be used etc. The session is then referenced by
the crypto operation data structure, which is a data structure specific to each
mbuf. It contains all mutable data about the crypto operation to be
performed, such as data offsets and lengths into the mbuf's data payload for
cipher and hash operations to be performed.

2, A session-less mode. In this mode the user is able to provision crypto
operations on an mbuf without the need to have a cached session created in
advance, but at the cost of entailing the overhead of calculating
authentication pre-computes and performing key expansions in-line with the
crypto operation. The crypto xform chain is directly attached to the op struct
in this mode, so the op struct now contains all of the immutable crypto operation
parameters that would be normally set within a session. Once all mutable and
immutable parameters are set the crypto operation data structure can be attached
to the specified mbuf and enqueued on a specified crypto device for processing.
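
As a rough illustration of the session oriented mode, the fragment below
builds an AES128-CBC + HMAC-SHA1 xform chain and creates a session with it.
This is a minimal sketch using the structures from this patch set; cdev_id
and the cipher_key/auth_key buffers are assumed to be set up by the caller,
and error handling is omitted:

	struct rte_crypto_xform cipher_xform, auth_xform;
	struct rte_cryptodev_session *session;

	/* immutable cipher parameters: AES128-CBC encryption */
	cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
	cipher_xform.next = &auth_xform;
	cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
	cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
	cipher_xform.cipher.key.data = cipher_key;	/* 16 byte AES key */
	cipher_xform.cipher.key.length = 16;

	/* immutable auth parameters: HMAC-SHA1 digest generation */
	auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
	auth_xform.next = NULL;
	auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
	auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
	auth_xform.auth.key.data = auth_key;		/* 20 byte HMAC key */
	auth_xform.auth.key.length = 20;
	auth_xform.auth.digest_length = 20;

	/* key expansions and HMAC pre-computes are performed once here */
	session = rte_cryptodev_session_create(cdev_id, &cipher_xform);

Each crypto op then only needs to reference this session and fill in the
per-packet offsets and lengths before being enqueued.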

The patch set contains the following features:
- Crypto device APIs and device framework
- Implementation of a software crypto PMD based on multi-buffer library
- Implementation of a hardware crypto PMD based on Intel QAT (DH895xxC)
- Unit and performance tests which give an example of utilising the crypto APIs.
- Sample application which performs crypto operations on the IP payload of the
  packets being forwarded

Current Status:
There is no support for chained mbufs and, as mentioned above, the PMDs
currently implement support for AES128-CBC/AES256-CBC/AES512-CBC
and HMAC_SHA1/SHA256/SHA512.

v5:
 - Making ethdev macros for function pointer and port id checking public and
   available for use by the cryptodev. The initial two patches combine changes
   from the original cryptodev patch and the discussion in
   http://dpdk.org/ml/archives/dev/2015-November/027871.html
 - Split out changes to create new __rte_packed and __rte_aligned macros
   into separate patches from the main cryptodev patch set for clarity
 - Further code cleaning, removal of currently unsupported GCM code from
   the aesni_mb PMD
v4:
 - Some more EOF whitespace and checkpatch fixes

v3:
 - Fixes a document build error, which I missed in the V2
 - Fixes for remaining checkpatch errors
 - Disables QAT and AESNI_MB PMDs being built by default as they have external
   library dependencies

v2: 
 - Introduces a new library to support attaching offload operations to a mbuf
 - Remove unused APIs from cryptodev
 - PMD code refactor due to new rte_mbuf_offload structure
 - General bug fixes and code tidy up


Declan Doherty (10):
  ethdev: rename macros to have RTE_ prefix
  ethdev: make error checking macros public
  eal: add __rte_packed/__rte_aligned macros
  mbuf: add new macros to get the physical address of data
  cryptodev: Initial DPDK Crypto APIs and device framework release
  mbuf_offload: library to support attaching offloads to a mbuf
  qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  aesni_mb_pmd: Initial implementation of multi buffer based crypto
    device
  app/test: add cryptodev unit and performance tests
  l2fwd-crypto: crypto

 MAINTAINERS                                        |   14 +
 app/test/Makefile                                  |    4 +
 app/test/test.c                                    |   92 +-
 app/test/test.h                                    |   34 +-
 app/test/test_cryptodev.c                          | 1986 +++++++++++++++++++
 app/test/test_cryptodev.h                          |   68 +
 app/test/test_cryptodev_perf.c                     | 2062 ++++++++++++++++++++
 app/test/test_link_bonding.c                       |    6 +-
 app/test/test_link_bonding_mode4.c                 |    7 +-
 app/test/test_link_bonding_rssconf.c               |    7 +-
 config/common_bsdapp                               |   37 +-
 config/common_linuxapp                             |   37 +-
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 doc/guides/cryptodevs/aesni_mb.rst                 |   76 +
 doc/guides/cryptodevs/index.rst                    |   43 +
 doc/guides/cryptodevs/qat.rst                      |  194 ++
 doc/guides/index.rst                               |    1 +
 drivers/Makefile                                   |    1 +
 drivers/crypto/Makefile                            |   38 +
 drivers/crypto/aesni_mb/Makefile                   |   63 +
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |  210 ++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |  669 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  298 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  229 +++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |    3 +
 drivers/crypto/qat/Makefile                        |   63 +
 .../qat/qat_adf/adf_transport_access_macros.h      |  174 ++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            |  316 +++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         |  404 ++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            |  306 +++
 drivers/crypto/qat/qat_adf/qat_algs.h              |  125 ++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   |  601 ++++++
 drivers/crypto/qat/qat_crypto.c                    |  561 ++++++
 drivers/crypto/qat/qat_crypto.h                    |  124 ++
 drivers/crypto/qat/qat_logs.h                      |   78 +
 drivers/crypto/qat/qat_qp.c                        |  429 ++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |    3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             |  137 ++
 examples/l2fwd-crypto/Makefile                     |   50 +
 examples/l2fwd-crypto/main.c                       | 1473 ++++++++++++++
 lib/Makefile                                       |    2 +
 lib/librte_cryptodev/Makefile                      |   60 +
 lib/librte_cryptodev/rte_crypto.h                  |  613 ++++++
 lib/librte_cryptodev/rte_cryptodev.c               | 1092 +++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |  649 ++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  549 ++++++
 lib/librte_cryptodev/rte_cryptodev_version.map     |   41 +
 lib/librte_eal/common/include/rte_dev.h            |   52 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eal/common/include/rte_memory.h         |   12 +-
 lib/librte_ether/rte_ethdev.c                      |  607 +++---
 lib/librte_ether/rte_ethdev.h                      |   26 +
 lib/librte_mbuf/rte_mbuf.h                         |   29 +
 lib/librte_mbuf_offload/Makefile                   |   52 +
 lib/librte_mbuf_offload/rte_mbuf_offload.c         |  100 +
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |  284 +++
 .../rte_mbuf_offload_version.map                   |    7 +
 mk/rte.app.mk                                      |    9 +
 59 files changed, 14828 insertions(+), 382 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 01/10] ethdev: rename macros to have RTE_ prefix
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-10 10:30           ` Bruce Richardson
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 02/10] ethdev: make error checking macros public Declan Doherty
                           ` (9 subsequent siblings)
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

The macros to check that the function pointers and port ids are valid
for an ethdev are potentially useful to have in a common header for
use with all PMDs. However, since they would then become externally
visible, we apply the RTE_ & RTE_ETH_ prefixes to them as appropriate.
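
For illustration only, once the macros are exported a device-level helper
could validate its arguments as follows (ethdev_helper is a hypothetical
function, not part of this patch):

	int
	ethdev_helper(uint8_t port_id)
	{
		struct rte_eth_dev *dev;

		RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
		dev = &rte_eth_devices[port_id];

		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
		return (*dev->dev_ops->link_update)(dev, 0);
	}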

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ether/rte_ethdev.c | 595 +++++++++++++++++++++---------------------
 1 file changed, 298 insertions(+), 297 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index e0e1dca..7387f65 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -70,58 +70,59 @@
 #include "rte_ethdev.h"
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
-#define PMD_DEBUG_TRACE(fmt, args...) do {                        \
+#define RTE_PMD_DEBUG_TRACE(fmt, args...) do { \
 		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
 	} while (0)
 #else
-#define PMD_DEBUG_TRACE(fmt, args...)
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
 #endif
 
 /* Macros for checking for restricting functions to primary instance only */
-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define PROC_PRIMARY_OR_RET() do { \
+#define RTE_PROC_PRIMARY_OR_RET() do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for invalid function pointers in dev_ops structure */
-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define FUNC_PTR_OR_RET(func) do { \
+#define RTE_FUNC_PTR_OR_RET(func) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for valid port */
-#define VALID_PORTID_OR_ERR_RET(port_id, retval) do {		\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return retval;					\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) {  \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return retval; \
+	} \
 } while (0)
 
-#define VALID_PORTID_OR_RET(port_id) do {			\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return;						\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return; \
+	} \
 } while (0)
 
+
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 static struct rte_eth_dev_data *rte_eth_dev_data;
@@ -244,7 +245,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 
 	port_id = rte_eth_dev_find_free_port();
 	if (port_id == RTE_MAX_ETHPORTS) {
-		PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
+		RTE_PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
 		return NULL;
 	}
 
@@ -252,7 +253,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 		rte_eth_dev_data_alloc();
 
 	if (rte_eth_dev_allocated(name) != NULL) {
-		PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
+		RTE_PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
 				name);
 		return NULL;
 	}
@@ -339,7 +340,7 @@ rte_eth_dev_init(struct rte_pci_driver *pci_drv,
 	if (diag == 0)
 		return 0;
 
-	PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
+	RTE_PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
 			pci_drv->name,
 			(unsigned) pci_dev->id.vendor_id,
 			(unsigned) pci_dev->id.device_id);
@@ -447,10 +448,10 @@ rte_eth_dev_get_device_type(uint8_t port_id)
 static int
 rte_eth_dev_get_addr_by_port(uint8_t port_id, struct rte_pci_addr *addr)
 {
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -463,10 +464,10 @@ rte_eth_dev_get_name_by_port(uint8_t port_id, char *name)
 {
 	char *tmp;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -483,7 +484,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id)
 	int i;
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -509,7 +510,7 @@ rte_eth_dev_get_port_by_addr(const struct rte_pci_addr *addr, uint8_t *port_id)
 	struct rte_pci_device *pci_dev = NULL;
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -536,7 +537,7 @@ rte_eth_dev_is_detachable(uint8_t port_id)
 	uint32_t dev_flags;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
@@ -735,7 +736,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
 		rxq = dev->data->rx_queues;
 
@@ -766,20 +767,20 @@ rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -796,20 +797,20 @@ rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -826,20 +827,20 @@ rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -856,20 +857,20 @@ rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -895,7 +896,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
 		txq = dev->data->tx_queues;
 
@@ -929,19 +930,19 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of RX queues requested (%u) is greater than max supported(%d)\n",
 			nb_rx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of TX queues requested (%u) is greater than max supported(%d)\n",
 			nb_tx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
@@ -949,11 +950,11 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
@@ -965,22 +966,22 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
 	if (nb_rx_q > dev_info.max_rx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
 				port_id, nb_rx_q, dev_info.max_rx_queues);
 		return -EINVAL;
 	}
 	if (nb_rx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
 				port_id, nb_tx_q, dev_info.max_tx_queues);
 		return -EINVAL;
 	}
 	if (nb_tx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
@@ -993,7 +994,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	if ((dev_conf->intr_conf.lsc == 1) &&
 		(!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))) {
-			PMD_DEBUG_TRACE("driver %s does not support lsc\n",
+			RTE_PMD_DEBUG_TRACE("driver %s does not support lsc\n",
 					dev->data->drv_name);
 			return -EINVAL;
 	}
@@ -1005,14 +1006,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	if (dev_conf->rxmode.jumbo_frame == 1) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" > max valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
 				(unsigned)dev_info.max_rx_pktlen);
 			return -EINVAL;
 		} else if (dev_conf->rxmode.max_rx_pkt_len < ETHER_MIN_LEN) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" < min valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
@@ -1032,14 +1033,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	diag = rte_eth_dev_rx_queue_config(dev, nb_rx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
 				port_id, diag);
 		return diag;
 	}
 
 	diag = rte_eth_dev_tx_queue_config(dev, nb_tx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		return diag;
@@ -1047,7 +1048,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		rte_eth_dev_tx_queue_config(dev, 0);
@@ -1086,7 +1087,7 @@ rte_eth_dev_config_restore(uint8_t port_id)
 			(dev->data->mac_pool_sel[i] & (1ULL << pool)))
 			(*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
 		else {
-			PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
 					port_id);
 			/* exit the loop but not return an error */
 			break;
@@ -1114,16 +1115,16 @@ rte_eth_dev_start(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
 
 	if (dev->data->dev_started != 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already started\n",
 			port_id);
 		return 0;
@@ -1138,7 +1139,7 @@ rte_eth_dev_start(uint8_t port_id)
 	rte_eth_dev_config_restore(port_id);
 
 	if (dev->data->dev_conf.intr_conf.lsc == 0) {
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 	return 0;
@@ -1151,15 +1152,15 @@ rte_eth_dev_stop(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
 
 	if (dev->data->dev_started == 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already stopped\n",
 			port_id);
 		return;
@@ -1176,13 +1177,13 @@ rte_eth_dev_set_link_up(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_up)(dev);
 }
 
@@ -1193,13 +1194,13 @@ rte_eth_dev_set_link_down(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_down)(dev);
 }
 
@@ -1210,12 +1211,12 @@ rte_eth_dev_close(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_close)(dev);
 
@@ -1238,24 +1239,24 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
 
 	/*
 	 * Check the size of the mbuf data buffer.
@@ -1264,7 +1265,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	 */
 	rte_eth_dev_info_get(port_id, &dev_info);
 	if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) {
-		PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
+		RTE_PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
 				mp->name, (int) mp->private_data_size,
 				(int) sizeof(struct rte_pktmbuf_pool_private));
 		return -ENOSPC;
@@ -1272,7 +1273,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	mbp_buf_size = rte_pktmbuf_data_room_size(mp);
 
 	if ((mbp_buf_size - RTE_PKTMBUF_HEADROOM) < dev_info.min_rx_bufsize) {
-		PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
+		RTE_PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
 				"(RTE_PKTMBUF_HEADROOM=%d + min_rx_bufsize(dev)"
 				"=%d)\n",
 				mp->name,
@@ -1288,7 +1289,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 			nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
 			nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
 
-		PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+		RTE_PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
 			"should be: <= %hu, = %hu, and a product of %hu\n",
 			nb_rx_desc,
 			dev_info.rx_desc_lim.nb_max,
@@ -1321,24 +1322,24 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
 
 	rte_eth_dev_info_get(port_id, &dev_info);
 
@@ -1354,10 +1355,10 @@ rte_eth_promiscuous_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
 	(*dev->dev_ops->promiscuous_enable)(dev);
 	dev->data->promiscuous = 1;
 }
@@ -1367,10 +1368,10 @@ rte_eth_promiscuous_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
 	dev->data->promiscuous = 0;
 	(*dev->dev_ops->promiscuous_disable)(dev);
 }
@@ -1380,7 +1381,7 @@ rte_eth_promiscuous_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->promiscuous;
@@ -1391,10 +1392,10 @@ rte_eth_allmulticast_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
 	(*dev->dev_ops->allmulticast_enable)(dev);
 	dev->data->all_multicast = 1;
 }
@@ -1404,10 +1405,10 @@ rte_eth_allmulticast_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
 	dev->data->all_multicast = 0;
 	(*dev->dev_ops->allmulticast_disable)(dev);
 }
@@ -1417,7 +1418,7 @@ rte_eth_allmulticast_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->all_multicast;
@@ -1442,13 +1443,13 @@ rte_eth_link_get(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 1);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1459,13 +1460,13 @@ rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 0);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1476,12 +1477,12 @@ rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	memset(stats, 0, sizeof(*stats));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
 	(*dev->dev_ops->stats_get)(dev, stats);
 	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
 	return 0;
@@ -1492,10 +1493,10 @@ rte_eth_stats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
 	(*dev->dev_ops->stats_reset)(dev);
 }
 
@@ -1510,7 +1511,7 @@ rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstats *xstats,
 	signed xcount = 0;
 	uint64_t val, *stats_ptr;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
@@ -1590,7 +1591,7 @@ rte_eth_xstats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	/* implemented by the driver */
@@ -1609,11 +1610,11 @@ set_queue_stats_mapping(uint8_t port_id, uint16_t queue_id, uint8_t stat_idx,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
 	return (*dev->dev_ops->queue_stats_mapping_set)
 			(dev, queue_id, stat_idx, is_rx);
 }
@@ -1647,14 +1648,14 @@ rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
 		.nb_align = 1,
 	};
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
 	dev_info->rx_desc_lim = lim;
 	dev_info->tx_desc_lim = lim;
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
 	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
 	dev_info->pci_dev = dev->pci_dev;
 	dev_info->driver_name = dev->data->drv_name;
@@ -1665,7 +1666,7 @@ rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 	ether_addr_copy(&dev->data->mac_addrs[0], mac_addr);
 }
@@ -1676,7 +1677,7 @@ rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	*mtu = dev->data->mtu;
@@ -1689,9 +1690,9 @@ rte_eth_dev_set_mtu(uint8_t port_id, uint16_t mtu)
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
 
 	ret = (*dev->dev_ops->mtu_set)(dev, mtu);
 	if (!ret)
@@ -1705,19 +1706,19 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
-		PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
 
 	if (vlan_id > 4095) {
-		PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
+		RTE_PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
 				port_id, (unsigned) vlan_id);
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
 
 	return (*dev->dev_ops->vlan_filter_set)(dev, vlan_id, on);
 }
@@ -1727,14 +1728,14 @@ rte_eth_dev_set_vlan_strip_on_queue(uint8_t port_id, uint16_t rx_queue_id, int o
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_strip_queue_set)(dev, rx_queue_id, on);
 
 	return 0;
@@ -1745,9 +1746,9 @@ rte_eth_dev_set_vlan_ether_type(uint8_t port_id, uint16_t tpid)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_tpid_set)(dev, tpid);
 
 	return 0;
@@ -1761,7 +1762,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	int mask = 0;
 	int cur, org = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	/*check which option changed by application*/
@@ -1790,7 +1791,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	if (mask == 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -1802,7 +1803,7 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	struct rte_eth_dev *dev;
 	int ret = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
@@ -1822,9 +1823,9 @@ rte_eth_dev_set_vlan_pvid(uint8_t port_id, uint16_t pvid, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_pvid_set)(dev, pvid, on);
 
 	return 0;
@@ -1835,9 +1836,9 @@ rte_eth_dev_flow_ctrl_get(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
 	memset(fc_conf, 0, sizeof(*fc_conf));
 	return (*dev->dev_ops->flow_ctrl_get)(dev, fc_conf);
 }
@@ -1847,14 +1848,14 @@ rte_eth_dev_flow_ctrl_set(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if ((fc_conf->send_xon != 0) && (fc_conf->send_xon != 1)) {
-		PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
 	return (*dev->dev_ops->flow_ctrl_set)(dev, fc_conf);
 }
 
@@ -1863,9 +1864,9 @@ rte_eth_dev_priority_flow_ctrl_set(uint8_t port_id, struct rte_eth_pfc_conf *pfc
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
-		PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
 
@@ -1886,7 +1887,7 @@ rte_eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (reta_size != RTE_ALIGN(reta_size, RTE_RETA_GROUP_SIZE)) {
-		PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
+		RTE_PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
 							RTE_RETA_GROUP_SIZE);
 		return -EINVAL;
 	}
@@ -1911,7 +1912,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (max_rxq == 0) {
-		PMD_DEBUG_TRACE("No receive queue is available\n");
+		RTE_PMD_DEBUG_TRACE("No receive queue is available\n");
 		return -EINVAL;
 	}
 
@@ -1920,7 +1921,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
-			PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
+			RTE_PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
 				"the maximum rxq index: %u\n", idx, shift,
 				reta_conf[idx].reta[shift], max_rxq);
 			return -EINVAL;
@@ -1938,7 +1939,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	/* Check mask bits */
 	ret = rte_eth_check_reta_mask(reta_conf, reta_size);
 	if (ret < 0)
@@ -1952,7 +1953,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	if (ret < 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
 	return (*dev->dev_ops->reta_update)(dev, reta_conf, reta_size);
 }
 
@@ -1965,7 +1966,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 	int ret;
 
 	if (port_id >= nb_ports) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
@@ -1975,7 +1976,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 		return ret;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
 	return (*dev->dev_ops->reta_query)(dev, reta_conf, reta_size);
 }
 
@@ -1985,16 +1986,16 @@ rte_eth_dev_rss_hash_update(uint8_t port_id, struct rte_eth_rss_conf *rss_conf)
 	struct rte_eth_dev *dev;
 	uint16_t rss_hash_protos;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	rss_hash_protos = rss_conf->rss_hf;
 	if ((rss_hash_protos != 0) &&
 	    ((rss_hash_protos & ETH_RSS_PROTO_MASK) == 0)) {
-		PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
+		RTE_PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
 				rss_hash_protos);
 		return -EINVAL;
 	}
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_update)(dev, rss_conf);
 }
 
@@ -2004,9 +2005,9 @@ rte_eth_dev_rss_hash_conf_get(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_conf_get)(dev, rss_conf);
 }
 
@@ -2016,19 +2017,19 @@ rte_eth_dev_udp_tunnel_add(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_add)(dev, udp_tunnel);
 }
 
@@ -2038,20 +2039,20 @@ rte_eth_dev_udp_tunnel_delete(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_del)(dev, udp_tunnel);
 }
 
@@ -2060,9 +2061,9 @@ rte_eth_led_on(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_on)(dev);
 }
 
@@ -2071,9 +2072,9 @@ rte_eth_led_off(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_off)(dev);
 }
 
@@ -2107,17 +2108,17 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	int index;
 	uint64_t pool_mask;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
 
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
 	if (pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
+		RTE_PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -2125,7 +2126,7 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	if (index < 0) {
 		index = get_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 				port_id);
 			return -ENOSPC;
 		}
@@ -2155,13 +2156,13 @@ rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr)
 	struct rte_eth_dev *dev;
 	int index;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
 
 	index = get_mac_addr_index(port_id, addr);
 	if (index == 0) {
-		PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
 		return -EADDRINUSE;
 	} else if (index < 0)
 		return 0;  /* Do nothing if address wasn't found */
@@ -2183,13 +2184,13 @@ rte_eth_dev_default_mac_addr_set(uint8_t port_id, struct ether_addr *addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (!is_valid_assigned_ether_addr(addr))
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
 
 	/* Update default address in NIC data structure */
 	ether_addr_copy(addr, &dev->data->mac_addrs[0]);
@@ -2207,22 +2208,22 @@ rte_eth_dev_set_vf_rxmode(uint8_t port_id,  uint16_t vf,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
 		return -EINVAL;
 	}
 
 	if (rx_mode == 0) {
-		PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx_mode)(dev, vf, rx_mode, on);
 }
 
@@ -2257,11 +2258,11 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
@@ -2273,20 +2274,20 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 
 	if (index < 0) {
 		if (!on) {
-			PMD_DEBUG_TRACE("port %d: the MAC address was not "
+			RTE_PMD_DEBUG_TRACE("port %d: the MAC address was not "
 				"set in UTA\n", port_id);
 			return -EINVAL;
 		}
 
 		index = get_hash_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 					port_id);
 			return -ENOSPC;
 		}
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
 	ret = (*dev->dev_ops->uc_hash_table_set)(dev, addr, on);
 	if (ret == 0) {
 		/* Update address in NIC data structure */
@@ -2306,11 +2307,11 @@ rte_eth_dev_uc_all_hash_table_set(uint8_t port_id, uint8_t on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
 	return (*dev->dev_ops->uc_all_hash_table_set)(dev, on);
 }
 
@@ -2321,18 +2322,18 @@ rte_eth_dev_set_vf_rx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx)(dev, vf, on);
 }
 
@@ -2343,18 +2344,18 @@ rte_eth_dev_set_vf_tx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_tx)(dev, vf, on);
 }
 
@@ -2364,22 +2365,22 @@ rte_eth_dev_set_vf_vlan_filter(uint8_t port_id, uint16_t vlan_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
 	if (vlan_id > ETHER_MAX_VLAN_ID) {
-		PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
 			vlan_id);
 		return -EINVAL;
 	}
 
 	if (vf_mask == 0) {
-		PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_vlan_filter)(dev, vlan_id,
 						   vf_mask, vlan_on);
 }
@@ -2391,26 +2392,26 @@ int rte_eth_set_queue_rate_limit(uint8_t port_id, uint16_t queue_idx,
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_link link;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (queue_idx > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("set queue rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:port %d: "
 				"invalid queue id=%d\n", port_id, queue_idx);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 			tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_queue_rate_limit)(dev, queue_idx, tx_rate);
 }
 
@@ -2424,26 +2425,26 @@ int rte_eth_set_vf_rate_limit(uint8_t port_id, uint16_t vf, uint16_t tx_rate,
 	if (q_msk == 0)
 		return 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (vf > dev_info.max_vfs) {
-		PMD_DEBUG_TRACE("set VF rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:port %d: "
 				"invalid vf id=%d\n", port_id, vf);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 				tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rate_limit)(dev, vf, tx_rate, q_msk);
 }
 
@@ -2454,14 +2455,14 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (mirror_conf->rule_type == 0) {
-		PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if (mirror_conf->dst_pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
+		RTE_PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
 				ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
@@ -2469,18 +2470,18 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 	if ((mirror_conf->rule_type & (ETH_MIRROR_VIRTUAL_POOL_UP |
 	     ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
 	    (mirror_conf->pool_mask == 0)) {
-		PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if ((mirror_conf->rule_type & ETH_MIRROR_VLAN) &&
 	    mirror_conf->vlan.vlan_mask == 0) {
-		PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_set)(dev, mirror_conf, rule_id, on);
 }
@@ -2490,10 +2491,10 @@ rte_eth_mirror_rule_reset(uint8_t port_id, uint8_t rule_id)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_reset)(dev, rule_id);
 }
@@ -2505,12 +2506,12 @@ rte_eth_rx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
@@ -2523,13 +2524,13 @@ rte_eth_tx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id],
@@ -2541,10 +2542,10 @@ rte_eth_rx_queue_count(uint8_t port_id, uint16_t queue_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
 	return (*dev->dev_ops->rx_queue_count)(dev, queue_id);
 }
 
@@ -2553,10 +2554,10 @@ rte_eth_rx_descriptor_done(uint8_t port_id, uint16_t queue_id, uint16_t offset)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
 	return (*dev->dev_ops->rx_descriptor_done)(dev->data->rx_queues[queue_id],
 						   offset);
 }
@@ -2573,7 +2574,7 @@ rte_eth_dev_callback_register(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2613,7 +2614,7 @@ rte_eth_dev_callback_unregister(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2676,14 +2677,14 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
@@ -2691,7 +2692,7 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 		vec = intr_handle->intr_vec[qid];
 		rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 		if (rc && rc != -EEXIST) {
-			PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+			RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 					" op %d epfd %d vec %u\n",
 					port_id, qid, op, epfd, vec);
 		}
@@ -2710,26 +2711,26 @@ rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id,
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
 		return -EINVAL;
 	}
 
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
 	vec = intr_handle->intr_vec[queue_id];
 	rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 	if (rc && rc != -EEXIST) {
-		PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+		RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 				" op %d epfd %d vec %u\n",
 				port_id, queue_id, op, epfd, vec);
 		return rc;
@@ -2745,13 +2746,13 @@ rte_eth_dev_rx_intr_enable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_enable)(dev, queue_id);
 }
 
@@ -2762,13 +2763,13 @@ rte_eth_dev_rx_intr_disable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_disable)(dev, queue_id);
 }
 
@@ -2777,10 +2778,10 @@ int rte_eth_dev_bypass_init(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
 	(*dev->dev_ops->bypass_init)(dev);
 	return 0;
 }
@@ -2790,10 +2791,10 @@ rte_eth_dev_bypass_state_show(uint8_t port_id, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_show)(dev, state);
 	return 0;
 }
@@ -2803,10 +2804,10 @@ rte_eth_dev_bypass_state_set(uint8_t port_id, uint32_t *new_state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_set)(dev, new_state);
 	return 0;
 }
@@ -2816,10 +2817,10 @@ rte_eth_dev_bypass_event_show(uint8_t port_id, uint32_t event, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_show)(dev, event, state);
 	return 0;
 }
@@ -2829,11 +2830,11 @@ rte_eth_dev_bypass_event_store(uint8_t port_id, uint32_t event, uint32_t state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_set)(dev, event, state);
 	return 0;
 }
@@ -2843,11 +2844,11 @@ rte_eth_dev_wd_timeout_store(uint8_t port_id, uint32_t timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_set)(dev, timeout);
 	return 0;
 }
@@ -2857,11 +2858,11 @@ rte_eth_dev_bypass_ver_show(uint8_t port_id, uint32_t *ver)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_ver_show)(dev, ver);
 	return 0;
 }
@@ -2871,11 +2872,11 @@ rte_eth_dev_bypass_wd_timeout_show(uint8_t port_id, uint32_t *wd_timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_show)(dev, wd_timeout);
 	return 0;
 }
@@ -2885,11 +2886,11 @@ rte_eth_dev_bypass_wd_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_reset)(dev);
 	return 0;
 }
@@ -2900,10 +2901,10 @@ rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type,
 				RTE_ETH_FILTER_NOP, NULL);
 }
@@ -2914,10 +2915,10 @@ rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type, filter_op, arg);
 }
 
@@ -3087,18 +3088,18 @@ rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
@@ -3111,18 +3112,18 @@ rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
@@ -3136,10 +3137,10 @@ rte_eth_dev_set_mc_addr_list(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
 	return dev->dev_ops->set_mc_addr_list(dev, mc_addr_set, nb_mc_addr);
 }
 
@@ -3148,10 +3149,10 @@ rte_eth_timesync_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_enable)(dev);
 }
 
@@ -3160,10 +3161,10 @@ rte_eth_timesync_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_disable)(dev);
 }
 
@@ -3173,10 +3174,10 @@ rte_eth_timesync_read_rx_timestamp(uint8_t port_id, struct timespec *timestamp,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_rx_timestamp)(dev, timestamp, flags);
 }
 
@@ -3185,10 +3186,10 @@ rte_eth_timesync_read_tx_timestamp(uint8_t port_id, struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_tx_timestamp)(dev, timestamp);
 }
 
@@ -3197,10 +3198,10 @@ rte_eth_dev_get_reg_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
 	return (*dev->dev_ops->get_reg_length)(dev);
 }
 
@@ -3209,10 +3210,10 @@ rte_eth_dev_get_reg_info(uint8_t port_id, struct rte_dev_reg_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
 	return (*dev->dev_ops->get_reg)(dev, info);
 }
 
@@ -3221,10 +3222,10 @@ rte_eth_dev_get_eeprom_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom_length)(dev);
 }
 
@@ -3233,10 +3234,10 @@ rte_eth_dev_get_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom)(dev, info);
 }
 
@@ -3245,10 +3246,10 @@ rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->set_eeprom)(dev, info);
 }
 
@@ -3259,14 +3260,14 @@ rte_eth_dev_get_dcb_info(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	memset(dcb_info, 0, sizeof(struct rte_eth_dcb_info));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
 	return (*dev->dev_ops->get_dcb_info)(dev, dcb_info);
 }
 
@@ -3274,7 +3275,7 @@ void
 rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev)
 {
 	if ((eth_dev == NULL) || (pci_dev == NULL)) {
-		PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
+		RTE_PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
 				eth_dev, pci_dev);
 	}
 
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 02/10] ethdev: make error checking macros public
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-10 10:32           ` Bruce Richardson
  2015-11-10 15:50           ` Adrien Mazarguil
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
                           ` (8 subsequent siblings)
  10 siblings, 2 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

Move the function pointer and port id checking macros to the rte_ethdev
and rte_dev header files, so that they can be used in the static inline
functions there. Also replace the RTE_LOG call within RTE_PMD_DEBUG_TRACE
so that this macro can be built with the -pedantic flag.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_eal/common/include/rte_dev.h | 52 +++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.c           | 54 ---------------------------------
 lib/librte_ether/rte_ethdev.h           | 26 ++++++++++++++++
 3 files changed, 78 insertions(+), 54 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_dev.h b/lib/librte_eal/common/include/rte_dev.h
index f601d21..fd09b3d 100644
--- a/lib/librte_eal/common/include/rte_dev.h
+++ b/lib/librte_eal/common/include/rte_dev.h
@@ -46,8 +46,60 @@
 extern "C" {
 #endif
 
+#include <stdio.h>
 #include <sys/queue.h>
 
+#include <rte_log.h>
+
+__attribute__((format(printf, 2, 0)))
+static inline void
+rte_pmd_debug_trace(const char *func_name, const char *fmt, ...)
+{
+	va_list ap;
+
+	va_start(ap, fmt);
+	char buffer[vsnprintf(NULL, 0, fmt, ap) + 1];
+
+	va_end(ap);
+
+	va_start(ap, fmt);
+	vsnprintf(buffer, sizeof(buffer), fmt, ap);
+	va_end(ap);
+
+	rte_log(RTE_LOG_ERR, RTE_LOGTYPE_PMD, "%s: %s", func_name, buffer);
+}
+
+/* Macros for checking for restricting functions to primary instance only */
+#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		return; \
+	} \
+} while (0)
+
+/* Macros to check for invalid function pointers */
+#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
+		return; \
+	} \
+} while (0)
+
+
 /** Double linked list of device drivers. */
 TAILQ_HEAD(rte_driver_list, rte_driver);
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 7387f65..d3c8aba 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -69,60 +69,6 @@
 #include "rte_ether.h"
 #include "rte_ethdev.h"
 
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-#define RTE_PMD_DEBUG_TRACE(fmt, args...) do { \
-		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
-	} while (0)
-#else
-#define RTE_PMD_DEBUG_TRACE(fmt, args...)
-#endif
-
-/* Macros for checking for restricting functions to primary instance only */
-#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define RTE_PROC_PRIMARY_OR_RET() do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for invalid function pointers in dev_ops structure */
-#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
-	if ((func) == NULL) { \
-		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define RTE_FUNC_PTR_OR_RET(func) do { \
-	if ((func) == NULL) { \
-		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for valid port */
-#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
-	if (!rte_eth_dev_is_valid_port(port_id)) {  \
-		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return retval; \
-	} \
-} while (0)
-
-#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
-	if (!rte_eth_dev_is_valid_port(port_id)) { \
-		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return; \
-	} \
-} while (0)
-
-
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 static struct rte_eth_dev_data *rte_eth_dev_data;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 48a540d..9b07a0b 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -172,6 +172,8 @@ extern "C" {
 
 #include <stdint.h>
 
+#include <rte_dev.h>
+
 /* Use this macro to check if LRO API is supported */
 #define RTE_ETHDEV_HAS_LRO_SUPPORT
 
@@ -931,6 +933,30 @@ struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
 TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
+#endif
+
+
+/* Macros to check for valid port */
+#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return; \
+	} \
+} while (0)
+
 /*
  * Definitions of all functions exported by an Ethernet driver through the
  * the generic structure of type *eth_dev_ops* supplied in the *rte_eth_dev*
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 03/10] eal: add __rte_packed /__rte_aligned macros
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 02/10] ethdev: make error checking macros public Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 04/10] mbuf: add new macros to get the physical address of data Declan Doherty
                           ` (7 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

Add a new macro for specifying the __aligned__ attribute, and update the
existing __rte_cache_aligned macro to use it.

Also add a new macro to specify the __packed__ attribute.
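
For illustration, a standalone sketch of how the two attributes might be
used; the structs are hypothetical and the macro definitions are repeated
here so the snippet compiles on its own:

    #include <stdint.h>

    #define __rte_aligned(a) __attribute__((__aligned__(a)))
    #define __rte_packed __attribute__((__packed__))

    /* Wire-format header: __rte_packed removes padding so the in-memory
     * layout matches the 6-byte on-the-wire format exactly. */
    struct proto_hdr {
            uint8_t  type;
            uint8_t  flags;
            uint32_t session_id;
    } __rte_packed;

    /* Per-lcore counters aligned to a 64-byte boundary so each instance
     * sits on its own cache line, avoiding false sharing. */
    struct lcore_stats {
            uint64_t rx_pkts;
            uint64_t tx_pkts;
    } __rte_aligned(64);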

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_eal/common/include/rte_memory.h | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..af688ef 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 04/10] mbuf: add new macros to get the physical address of data
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
                           ` (2 preceding siblings ...)
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
                           ` (6 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

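For illustration, a minimal usage sketch of the macros added below; the
helper is hypothetical and assumes a single-segment mbuf whose payload is
physically contiguous:

    #include <rte_mbuf.h>

    /* Return the physical address of the mbuf payload at byte 'offset',
     * e.g. to program a DMA descriptor for a hardware crypto engine.
     * rte_pktmbuf_mtophys(m) is shorthand for offset 0. */
    static inline phys_addr_t
    payload_phys(struct rte_mbuf *m, uint32_t offset)
    {
            return rte_pktmbuf_mtophys_offset(m, offset);
    }
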
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4a93189..e203c55 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1622,6 +1622,29 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address that points to an offset of the
+ * start of the data in the mbuf
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) \
+	((phys_addr_t)((char *)(m)->buf_physaddr + (m)->data_off) + (o))
+
+/**
+ * A macro that returns the physical address that points to the start of the
+ * data in the mbuf
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
+
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
                           ` (3 preceding siblings ...)
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 04/10] mbuf: add new macros to get the physical address of data Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
                           ` (5 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

Features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs for allocation of the crypto
   operation structure used to specify the crypto operations to be
   performed on a particular mbuf.
 - Extension of mbuf to contain crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations.

Changes from RFC:
 - Session management API changes to support specification of crypto
   transform (xform) chains using a linked list of xforms (see the sketch
   after this list).
 - Changes to the crypto operation struct as a result of session
   management changes.
 - Movement of common macros shared by cryptodevs and ethdevs into common
   headers.
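
For illustration, a minimal sketch of chaining two transforms with the
structures added in this patch; the helper name, key buffers and lengths
are hypothetical:

    #include <stdint.h>
    #include <string.h>
    #include <rte_crypto.h>

    /* Build a cipher-then-auth chain: AES-128-CBC encryption followed by
     * SHA1-HMAC digest generation. Key buffers are caller-provided. */
    static void
    build_cipher_auth_chain(struct rte_crypto_xform *cipher_xform,
                    struct rte_crypto_xform *auth_xform,
                    uint8_t *cipher_key, uint8_t *auth_key)
    {
            memset(cipher_xform, 0, sizeof(*cipher_xform));
            memset(auth_xform, 0, sizeof(*auth_xform));

            cipher_xform->type = RTE_CRYPTO_XFORM_CIPHER;
            cipher_xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
            cipher_xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
            cipher_xform->cipher.key.data = cipher_key; /* 16-byte AES key */
            cipher_xform->cipher.key.length = 16;
            cipher_xform->next = auth_xform;        /* cipher, then auth */

            auth_xform->type = RTE_CRYPTO_XFORM_AUTH;
            auth_xform->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
            auth_xform->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
            auth_xform->auth.key.data = auth_key;
            auth_xform->auth.key.length = 64;   /* <= SHA-1 block size */
            auth_xform->auth.digest_length = 20;    /* SHA-1 digest bytes */
            auth_xform->next = NULL;                /* end of chain */
    }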

Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 MAINTAINERS                                    |    4 +
 config/common_bsdapp                           |   10 +-
 config/common_linuxapp                         |   10 +-
 doc/api/doxy-api-index.md                      |    1 +
 doc/api/doxy-api.conf                          |    1 +
 lib/Makefile                                   |    1 +
 lib/librte_cryptodev/Makefile                  |   60 ++
 lib/librte_cryptodev/rte_crypto.h              |  613 +++++++++++++
 lib/librte_cryptodev/rte_cryptodev.c           | 1092 ++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h           |  649 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h       |  549 ++++++++++++
 lib/librte_cryptodev/rte_cryptodev_version.map |   41 +
 lib/librte_eal/common/include/rte_log.h        |    1 +
 mk/rte.app.mk                                  |    1 +
 14 files changed, 3031 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index c8be5d2..68c6d74 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -196,6 +196,10 @@ M: Thomas Monjalon <thomas.monjalon@6wind.com>
 F: lib/librte_ether/
 F: scripts/test-null.sh
 
+Crypto API
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_cryptodev
+F: doc/guides/cryptodevs
 
 Drivers
 -------
diff --git a/config/common_bsdapp b/config/common_bsdapp
index fba29e5..8803350 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -147,6 +147,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 7248262..815bea3 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -145,6 +145,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 9727b83..4c5c1b4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -40,6 +40,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..81fa3fc
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_cryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_cryptodev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
\ No newline at end of file
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..7cf0439
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,613 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_CCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_GCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	phys_addr_t phys_addr;
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains data relating to Cipher (Encryption and Decryption)
+ *  use to create a session.
+ */
+struct rte_crypto_cipher_xform {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid.
+	 */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both
+	 * keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to the total
+	 *    length of the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
+	 **/
+};
+
+/** Symmetric Authentication / Hash Algorithms */
+enum rte_crypto_auth_algorithm {
+	RTE_CRYPTO_AUTH_NULL = 1,
+	/**< NULL hash algorithm. */
+
+	RTE_CRYPTO_AUTH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_AUTH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related rte_crypto_cipher_xform structure in the
+	 * session context, or the op_params parameter of the crypto
+	 * operation data structure MUST be set for a session-less crypto
+	 * operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_AUTH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_xform structure in the session context, or the
+	 * op_params parameter of the crypto operation data structure MUST
+	 * be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	 * is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_xform structure in the session context, or the
+	 * op_params parameter of the crypto operation data structure MUST
+	 * be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_AUTH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_AUTH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_AUTH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_AUTH_SHA1,
+	/**< 160 bit SHA-1 algorithm. */
+	RTE_CRYPTO_AUTH_SHA1_HMAC,
+	/**< HMAC using 160 bit SHA-1 algorithm. */
+	RTE_CRYPTO_AUTH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_AUTH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Authentication / Hash Operations */
+enum rte_crypto_auth_operation {
+	RTE_CRYPTO_AUTH_OP_VERIFY,	/**< Verify authentication digest */
+	RTE_CRYPTO_AUTH_OP_GENERATE	/**< Generate authentication digest */
+};
+
+/**
+ * Authentication / Hash transform data.
+ *
+ * This structure contains data relating to an authentication/hash crypto
+ * transform. The fields op, algo and digest_length are common to all
+ * authentication transforms and MUST be set.
+ */
+struct rte_crypto_auth_xform {
+	enum rte_crypto_auth_operation op;
+	/**< Authentication operation type */
+	enum rte_crypto_auth_algorithm algo;
+	/**< Authentication algorithm selection */
+
+	struct rte_crypto_key key;		/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the caller's responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash then an error will be generated
+	 * by *rte_cryptodev_session_create* or by the
+	 * *rte_cryptodev_enqueue_burst* if using session-less APIs.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_AUTH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op structure.
+	 */
+};
+
+/** Crypto transformation types */
+enum rte_crypto_xform_type {
+	RTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
+	RTE_CRYPTO_XFORM_AUTH,			/**< Authentication xform */
+	RTE_CRYPTO_XFORM_CIPHER			/**< Cipher xform  */
+};
+
+/**
+ * Crypto transform structure.
+ *
+ * This is used to specify the crypto transforms required. Multiple transforms
+ * can be chained together to specify a chain of transforms such as
+ * authentication then cipher, or cipher then authentication. Each transform
+ * structure can hold a single transform; the type field is used to specify
+ * which transform is contained within the union.
+ */
+struct rte_crypto_xform {
+	struct rte_crypto_xform *next; /**< next xform in chain */
+
+	enum rte_crypto_xform_type type; /**< xform type */
+	union {
+		struct rte_crypto_auth_xform auth;
+		/**< Authentication / hash xform */
+		struct rte_crypto_cipher_xform cipher;
+		/**< Cipher xform */
+	};
+};
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has a session structure attached for immutable parameters or if
+ * all operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+/** Status of crypto operation */
+enum rte_crypto_op_status {
+	RTE_CRYPTO_OP_STATUS_SUCCESS,
+	/**< Operation completed successfully */
+	RTE_CRYPTO_OP_STATUS_NO_SUBMITTED,
+	/**< Operation not yet submitted to a cryptodev */
+	RTE_CRYPTO_OP_STATUS_ENQUEUED,
+	/**< Operation is enqueued on device */
+	RTE_CRYPTO_OP_STATUS_AUTH_FAILED,
+	/**< Authentication verification failed */
+	RTE_CRYPTO_OP_STATUS_INVALID_ARGS,
+	/**< Operation failed due to invalid arguments in request */
+	RTE_CRYPTO_OP_STATUS_ERROR,
+	/**< Error handling operation */
+};
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. This request is used with the rte_cryptodev_enqueue_burst()
+ * call to perform cipher, hash, or combined hash and cipher operations.
+ */
+struct rte_crypto_op {
+	enum rte_crypto_op_sess_type type;
+	enum rte_crypto_op_status status;
+
+	struct {
+		struct rte_mbuf *m;	/**< Destination mbuf */
+		uint8_t offset;		/**< Data offset */
+	} dst;
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct rte_crypto_xform *xform;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_AUTH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC)
+			  * mode of operation, this field specifies the start
+			  * of the AAD data in the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source
+			  * buffer that the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member of this structure is set this is a
+		 * pointer to the location where the digest result should be
+		 * inserted (in the case of digest generation) or where the
+		 * purported digest exists (in the case of digest
+		 * verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_auth_xform structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up
+		 * for the session in the @ref rte_crypto_auth_xform structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_AUTH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by
+		 *   the implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written in starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   space to round this up to the nearest multiple of the
+		 *   block size (16 bytes).
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+		uint32_t length;	/**< Length of additional data */
+	} additional_auth;
+	/**< Additional authentication parameters */
+
+	struct rte_mempool *pool;
+	/**< mempool used to allocate crypto op */
+
+	void *user_data;
+	/**< opaque pointer for user data */
+};
+
+
+/**
+ * Reset the fields of a crypto operation to their default values.
+ *
+ * @param op
+ *   The crypto operation to be reset.
+ */
+static inline void
+__rte_crypto_op_reset(struct rte_crypto_op *op)
+{
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+	op->dst.m = NULL;
+	op->dst.offset = 0;
+}
+
+/** Attach a session to a crypto operation */
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..edd1320
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,1092 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+#include "rte_cryptodev_pmd.h"
+
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+
+struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= { NULL },
+		.nb_devs		= 0,
+		.max_devs		= RTE_CRYPTO_MAX_DEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;		/**< Callback address */
+	void *cb_arg;				/**< Parameter for callback */
+	enum rte_cryptodev_event_type event;	/**< Interrupt event type */
+	uint32_t active;			/**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+int
+rte_cryptodev_get_dev_id(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if ((strcmp(rte_cryptodev_globals->devs[i].data->name, name)
+				== 0) &&
+				(rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED))
+			return i;
+
+	return -1;
+}
+
+uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
+uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached ==
+					RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	return dev->data->socket_id;
+}
+
+static inline int
+rte_cryptodev_data_alloc(uint8_t dev_id, struct rte_cryptodev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_cryptodev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_cryptodev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_cryptodev_data));
+
+	return 0;
+}
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached ==
+				RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_CRYPTO_MAX_DEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+	uint8_t dev_id;
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_CRYPTO_MAX_DEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (cryptodev->data == NULL) {
+		struct rte_cryptodev_data *cryptodev_data =
+				cryptodev_globals.data[dev_id];
+
+		int retval = rte_cryptodev_data_alloc(dev_id, &cryptodev_data,
+				socket_id);
+
+		if (retval < 0 || cryptodev_data == NULL)
+			return NULL;
+
+		cryptodev->data = cryptodev_data;
+
+		snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN,
+				"%s", name);
+
+		cryptodev->data->dev_id = dev_id;
+		cryptodev->data->socket_id = socket_id;
+		cryptodev->data->dev_started = 0;
+
+		cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+		cryptodev->pmd_type = type;
+
+		cryptodev_globals.nb_devs++;
+	}
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	int ret;
+
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	ret = rte_cryptodev_close(cryptodev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV,
+			rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket(
+						"cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/*
+	 * Register PCI driver for physical device intialisation during
+	 * PCI probing
+	 */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+
+	return 0;
+}
+
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs,
+		int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	void **qp;
+	unsigned i;
+
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	CDEV_LOG_DEBUG("Setup %d queues pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > (dev_info.max_nb_queue_pairs)) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u",
+							nb_qpairs);
+			return -(ENOMEM);
+		}
+	} else { /* re-configure */
+		int ret;
+		uint16_t old_nb_queues = dev->data->nb_queue_pairs;
+
+		qp = dev->data->queue_pairs;
+
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release,
+				-ENOTSUP);
+
+		for (i = nb_qpairs; i < old_nb_queues; i++) {
+			ret = (*dev->dev_ops->queue_pair_release)(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
+
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	return rte_crypto_session_pool_create(dev, config->session_mp.nb_objs,
+			config->session_mp.cache_size, config->socket_id);
+}
+
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int retval;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -1;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		CDEV_LOG_ERR("Device %u must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	/* We can't close the device if there are outstanding sessions in use */
+	if (dev->data->session_pool != NULL) {
+		if (!rte_mempool_full(dev->data->session_pool)) {
+			CDEV_LOG_ERR("dev_id=%u close failed, session mempool "
+					"has sessions still in use, free "
+					"all sessions before calling close",
+					(unsigned)dev_id);
+			return -EBUSY;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	retval = (*dev->dev_ops->dev_close)(dev);
+
+	if (retval < 0)
+		return retval;
+
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (dev_id >= cryptodev_globals.nb_devs) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+static void
+rte_crypto_session_init(struct rte_mempool *mp,
+		void *opaque_arg,
+		void *_sess,
+		__rte_unused unsigned i)
+{
+	struct rte_cryptodev_session *sess = _sess;
+	struct rte_cryptodev *dev = opaque_arg;
+
+	memset(sess, 0, mp->elt_size);
+
+	sess->dev_id = dev->data->dev_id;
+	sess->type = dev->dev_type;
+	sess->mp = mp;
+
+	if (dev->dev_ops->session_initialize)
+		(*dev->dev_ops->session_initialize)(mp, sess->_private);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id)
+{
+	char mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	unsigned priv_sess_size;
+
+	unsigned n = snprintf(mp_name, sizeof(mp_name), "cdev_%d_sess_mp",
+			dev->data->dev_id);
+	if (n >= sizeof(mp_name)) {
+		CDEV_LOG_ERR("Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);
+	priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
+	if (priv_sess_size == 0) {
+		CDEV_LOG_ERR("%s returned and invalid private session size ",
+						dev->data->name);
+		return -ENOMEM;
+	}
+
+	unsigned elt_size = sizeof(struct rte_cryptodev_session) +
+			priv_sess_size;
+
+	dev->data->session_pool = rte_mempool_lookup(mp_name);
+	if (dev->data->session_pool != NULL) {
+		if ((dev->data->session_pool->elt_size != elt_size) ||
+				(dev->data->session_pool->cache_size <
+				obj_cache_size) ||
+				(dev->data->session_pool->size < nb_objs)) {
+
+			CDEV_LOG_ERR("%s mempool already exists with different"
+					" initialization parameters", mp_name);
+			dev->data->session_pool = NULL;
+			return -ENOMEM;
+		}
+	} else {
+		dev->data->session_pool = rte_mempool_create(
+				mp_name, /* mempool name */
+				nb_objs, /* number of elements*/
+				elt_size, /* element size*/
+				obj_cache_size, /* Cache size*/
+				0, /* private data size */
+				NULL, /* obj initialization constructor */
+				NULL, /* obj initialization constructor arg */
+				rte_crypto_session_init, /* obj constructor */
+				dev, /* obj constructor arg */
+				socket_id, /* socket id */
+				0); /* flags */
+
+		if (dev->data->session_pool == NULL) {
+			CDEV_LOG_ERR("%s mempool allocation failed", mp_name);
+			return -ENOMEM;
+		}
+	}
+
+	CDEV_LOG_DEBUG("%s mempool created!", mp_name);
+	return 0;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id, struct rte_crypto_xform *xform)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_session *sess;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Allocate a session structure from the session pool */
+	if (rte_mempool_get(dev->data->session_pool, (void **)&sess)) {
+		CDEV_LOG_ERR("Couldn't get object from session mempool");
+		return NULL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);
+	if (dev->dev_ops->session_configure(dev, xform, sess->_private) ==
+			NULL) {
+		CDEV_LOG_ERR("dev_id %d failed to configure session details",
+				dev_id);
+
+		/* Return session to mempool */
+		rte_mempool_put(sess->mp, (void *)sess);
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id, struct rte_cryptodev_session *sess)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return sess;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check that the session belongs to this device type */
+	if (sess->type != dev->dev_type)
+		return sess;
+
+	/* Let device implementation clear session material */
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
+	dev->dev_ops->session_clear(dev, (void *)sess->_private);
+
+	/* Return session to mempool */
+	rte_mempool_put(sess->mp, (void *)sess);
+
+	return NULL;
+}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..e799447
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,649 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "stddef.h"
+
+#include "rte_crypto.h"
+#include "rte_dev.h"
+
+#define CRYPTODEV_NAME_NULL_PMD		("cryptodev_null_pmd")
+/**< Null crypto PMD device name */
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
+	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...)					\
+		RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...)				\
+		RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+				dev, __func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_TRACE(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n",		\
+				dev, __func__, ## args)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+/**  Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+
+	unsigned max_nb_queue_pairs;
+	/**< Maximum number of queue pairs supported by device. */
+	unsigned max_nb_sessions;
+	/**< Maximum number of sessions supported by device. */
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User specified parameter to be passed to the user's
+ *			callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure is
+ * used for RDTSC counters for measuring crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;	/**< Accumulated time processing operations */
+	uint64_t t_min;		/**< Min time */
+	uint64_t t_max;		/**< Max time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;
+	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;
+	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;
+	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;
+	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf; /**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Options arguments for device.
+ *
+ * @return
+ * - On successful creation of the cryptodev the device index is returned,
+ *   which will be between 0 and rte_cryptodev_count().
+ * - In the case of a failure, returns -1.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
+
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+extern int
+rte_cryptodev_get_dev_id(const char *name);
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+extern uint8_t
+rte_cryptodev_count(void);
+
+/**
+ * Get the number of crypto devices of a given device type that have been
+ * successfully initialised.
+ *
+ * @param	type	Crypto device type.
+ *
+ * @return
+ *   - The number of usable crypto devices of the given type.
+ */
+extern uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+
+/**
+ * Return the NUMA socket to which a device is connected
+ *
+ * @param dev_id
+ *   The identifier of the device
+ * @return
+ *   The NUMA socket id to which the device is connected or
+ *   a default of zero if the socket could not be determined.
+ *   A value of -1 is returned if the dev_id value is out of range.
+ */
+extern int
+rte_cryptodev_socket_id(uint8_t dev_id);
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;
+	/**< Number of queue pairs to configure on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< l-core object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure
+ *				to apply to the device.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
+
+/**
+ * Start a device.
+ *
+ * The device start step is the last one and consists of setting the configured
+ * offload features and starting the transmit and receive units of the
+ * device.
+ * On success, all basic functions exported by the API (link status,
+ * receive/transmit, and so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start().
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ */
+extern int
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a receive queue pair for a device.
+ *
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pair to set up. The
+ *				value must be in the range [0, nb_queue_pairs
+ *				- 1] previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the DMA memory allocated for the receive
+ *				queue pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+/**
+ * Start a specified queue pair of a device.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_crypto_dev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop specified queue pair of a device
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		Event interested.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		Event interested.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;
+	/**< Pointer to PMD receive function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/**< Pointer to PMD transmit function. */
+
+	const struct rte_cryptodev_driver *driver;
+	/**< Driver for this device */
+	struct rte_cryptodev_data *data;
+	/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;
+	/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;
+	/**< Crypto device type */
+	enum pmd_type pmd_type;
+	/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+} __rte_cache_aligned;
+
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+/**
+ *
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t socket_id;
+	/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;
+	/**< Session memory pool */
+	void **queue_pairs;
+	/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;
+	/**< Number of device queue pairs. */
+
+	void *dev_private;
+	/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+extern struct rte_cryptodev *rte_cryptodevs;
+/**
+ *
+ * Dequeue a burst of processed packets from a queue of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain in the queue. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_cryptodev_dequeue_burst() function
+ * until a value less than *nb_pkts* is returned.
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place packets
+ * on the queue pair *qp_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of packets it
+ * actually enqueued. A return value equal to *nb_pkts* means that all packets
+ * have been enqueued.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which the
+ *				packets are to be enqueued. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to be processed.
+ * @param	nb_pkts		The number of packets to enqueue.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue is full.
+ * The number of packets is 0 if the device hasn't been started.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return (*dev->enqueue_burst)(
+			dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialize immutable
+ * parameters of symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst function is
+ * used. Each mbuf should contain a reference to the session
+ * pointer returned from this function within its crypto_op if a
+ * session-based operation is being provisioned. Memory to contain the session
+ * information is allocated from within a mempool managed by the cryptodev.
+ *
+ * The rte_cryptodev_session_free must be called to free allocated
+ * memory when the session is no longer required.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	xform		Crypto transform chain.
+ *
+ * @return
+ *  Pointer to the created session or NULL
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_xform *xform);
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ *
+ * @return
+ *   NULL on successful freeing of session.
+ *   Session pointer on failure to free session.
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
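
(Illustrative sketch, not part of this patch: a minimal application-side
flow through the API above. The queue pair sizing, the session pool
sizing and the xform chain construction - rte_crypto_xform is defined in
rte_crypto.h - are assumptions.)

#include "rte_cryptodev.h"

/* Configure a device, set up one queue pair, start the device and
 * create a session from a caller-supplied xform chain */
static struct rte_cryptodev_session *
crypto_dev_init(uint8_t dev_id, struct rte_crypto_xform *xform)
{
	struct rte_cryptodev_config conf = {
		.socket_id = rte_cryptodev_socket_id(dev_id),
		.nb_queue_pairs = 1,
		.session_mp = { .nb_objs = 2048, .cache_size = 64 },
	};
	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 4096 };

	if (rte_cryptodev_configure(dev_id, &conf) < 0 ||
			rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
					conf.socket_id) < 0 ||
			rte_cryptodev_start(dev_id) < 0)
		return NULL;

	return rte_cryptodev_session_create(dev_id, xform);
}

/* Datapath: submit prepared mbufs on queue pair 0 and poll until all
 * of them have been processed and returned */
static void
crypto_burst(uint8_t dev_id, struct rte_mbuf **mbufs, uint16_t nb)
{
	uint16_t sent, done = 0;

	sent = rte_cryptodev_enqueue_burst(dev_id, 0, mbufs, nb);
	while (done < sent)
		done += rte_cryptodev_dequeue_burst(dev_id, 0,
				&mbufs[done], sent - done);
}
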
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..1fbfc18
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,549 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These APIs are for crypto PMDs only and user applications should not
+ * call them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
+#endif
+
+struct rte_cryptodev_session {
+	struct {
+		uint8_t dev_id;
+		enum rte_cryptodev_type type;
+		struct rte_mempool *mp;
+	} __rte_aligned(8);
+
+	char _private[];
+};
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ *			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver  *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
+
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;	/**< Device information array */
+	struct rte_cryptodev_data *data[RTE_CRYPTO_MAX_DEVS];
+	/**< Device private data */
+	uint8_t nb_devs;		/**< Number of devices found */
+	uint8_t max_devs;		/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	struct rte_cryptodev *dev;
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		dev = &rte_cryptodev_globals->devs[i];
+
+		if ((dev->attached == RTE_CRYPTODEV_ATTACHED) &&
+				(strcmp(dev->data->name, name) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that the crypto device index refers to a valid, attached
+ * crypto device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - If the device index is valid (1) or not (0).
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures.
+ */
+extern struct rte_cryptodev *rte_cryptodevs;
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ * Function used to configure a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ * @return
+ * - 0 on success.
+ * - EAGAIN if can't close as device is busy
+ */
+typedef int (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return
+ * - 0 on success.
+ * - EAGAIN if can't close as device is busy
+ */
+typedef int (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		Number of session objects in mempool
+ * @param	obj_cache_size	l-core object cache size, see *rte_ring_create*
+ * @param	socket_id	Socket Id to allocate mempool on.
+ *
+ * @return
+ * - 0 on success
+ * - Negative errno value on failure
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+
+/**
+ * Get the size of a cryptodev's private session data
+ *
+ * @param	dev		Crypto device pointer
+ *
+ * @return
+ *  - On success returns the size of the session structure for device
+ *  - On failure returns 0
+ */
+typedef unsigned (*cryptodev_get_session_private_size_t)(
+		struct rte_cryptodev *dev);
+
+/**
+ * Initialize a Crypto session on a device.
+ *
+ * @param	mempool		Mempool the private session data is
+ *				allocated from
+ * @param	session_private	Pointer to cryptodev's private session
+ *				structure to initialize
+ */
+typedef void (*cryptodev_initialize_session_t)(struct rte_mempool *mempool,
+		void *session_private);
+
+/**
+ * Configure a Crypto session on a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	xform		Single or chain of crypto xforms
+ * @param	priv_sess	Pointer to cryptodev's private session structure
+ *
+ * @return
+ *  - Returns private session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef void * (*cryptodev_configure_session_t)(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+/**
+ * Clear a Crypto session's private data.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	session_private	Pointer to cryptodev's private session
+ *				data to clear
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		void *session_private);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;
+	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;
+	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;
+	/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;
+	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;
+	/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;
+	/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;
+	/**< Get count of the queue pairs. */
+
+	cryptodev_get_session_private_size_t session_get_size;
+	/**< Return size of private session data. */
+	cryptodev_initialize_session_t session_initialize;
+	/**< Initialization function for private session data */
+	cryptodev_configure_session_t session_configure;
+	/**< Configure a Crypto session. */
+	cryptodev_free_session_t session_clear;
+	/**< Clear a Crypto sessions private data. */
+};
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Pointer to the slot in the rte_cryptodevs array for the new device.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a - register itself as PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b - complete the initialization sequence if the device is a virtual
+ *		device by calling the rte_cryptodev_init() directly passing a
+ *		NULL parameter for the rte_pci_device structure.
+ *
+ *   @param crypto_drv	crypto_driver structure associated with the crypto
+ *					driver.
+ *   @param type		pmd type
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
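
(Illustrative sketch, not part of this patch: the skeleton a PMD built
on the hooks above might use. All my_pmd_* names are hypothetical
stand-ins and the callback bodies are omitted; each must match the
corresponding typedef in the header above.)

static struct rte_cryptodev_ops my_pmd_ops = {
	.dev_configure		= my_pmd_config,
	.dev_start		= my_pmd_start,
	.dev_stop		= my_pmd_stop,
	.dev_close		= my_pmd_close,
	.dev_infos_get		= my_pmd_info_get,
	.stats_get		= my_pmd_stats_get,
	.stats_reset		= my_pmd_stats_reset,
	.queue_pair_setup	= my_pmd_qp_setup,
	.queue_pair_release	= my_pmd_qp_release,
	.queue_pair_start	= my_pmd_qp_start,
	.queue_pair_stop	= my_pmd_qp_stop,
	.queue_pair_count	= my_pmd_qp_count,
	.session_get_size	= my_pmd_session_get_size,
	.session_initialize	= my_pmd_session_init,
	.session_configure	= my_pmd_session_configure,
	.session_clear		= my_pmd_session_clear,
};

/* cryptodev_init_t implementation, invoked once per matching device
 * during the probing phase; wires the ops table and the burst
 * functions into the allocated rte_cryptodev slot */
static int
my_pmd_dev_init(struct rte_cryptodev_driver *drv __rte_unused,
		struct rte_cryptodev *dev)
{
	dev->dev_ops = &my_pmd_ops;
	dev->enqueue_burst = my_pmd_enqueue_burst;
	dev->dequeue_burst = my_pmd_dequeue_burst;
	return 0;
}
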
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
new file mode 100644
index 0000000..31e04d2
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -0,0 +1,41 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodevs;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_close;
+	rte_cryptodev_count;
+	rte_cryptodev_count_devtype;
+	rte_cryptodev_configure;
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_enqueue_burst;
+	rte_cryptodev_dequeue_burst;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_info_get;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+	rte_cryptodev_socket_id;
+	rte_cryptodev_start;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_stop;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+	rte_cryptodev_queue_pair_count;
+
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_attach;
+	rte_cryptodev_pmd_callback_process;
+	rte_cryptodev_pmd_detach;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_get_dev;
+	rte_cryptodev_pmd_get_named_dev;
+	rte_cryptodev_pmd_is_valid_dev;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_socket_id;
+	rte_cryptodev_pmd_virtual_dev_init;
+
+	local: *;
+};
\ No newline at end of file
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 724efa7..5d382bb 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -118,6 +118,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
                           ` (4 preceding siblings ...)
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
                           ` (4 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

This library adds support for attaching a chain of offload operations to
an mbuf. It contains the definition of the rte_mbuf_offload structure as
well as helper functions for attaching offloads to mbufs and mempool
management functions.

This initial implementation supports attaching multiple offload
operations to a single mbuf, but only a single offload operation of a
specific type can be attached to that mbuf.
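
As an illustrative sketch (not part of the patch itself), usage is
expected to follow the pattern below. Only
rte_pktmbuf_offload_pool_create() is visible in this excerpt; the
rte_pktmbuf_offload_attach() helper name is an assumption, the shipped
helpers live in rte_mbuf_offload.h.

#include <rte_mbuf.h>
#include "rte_mbuf_offload.h"

/* pool of rte_mbuf_offload structures, created once at init */
static struct rte_mempool *
create_ol_pool(void)
{
	return rte_pktmbuf_offload_pool_create("crypto_ol_pool",
			8192 /* nb elts */, 128 /* cache */,
			0 /* priv size */, rte_socket_id());
}

/* per packet: take an offload operation from the pool, populate it for
 * the desired offload type and chain it onto the mbuf; the attach
 * helper name is assumed */
static void
attach_offload(struct rte_mempool *ol_pool, struct rte_mbuf *m)
{
	struct rte_mbuf_offload *ol;

	if (rte_mempool_get(ol_pool, (void **)&ol) == 0)
		rte_pktmbuf_offload_attach(m, ol);
}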

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 MAINTAINERS                                        |   4 +
 config/common_bsdapp                               |   6 +
 config/common_linuxapp                             |   6 +
 lib/Makefile                                       |   1 +
 lib/librte_mbuf/rte_mbuf.h                         |   6 +
 lib/librte_mbuf_offload/Makefile                   |  52 ++++
 lib/librte_mbuf_offload/rte_mbuf_offload.c         | 100 +++++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         | 291 +++++++++++++++++++++
 .../rte_mbuf_offload_version.map                   |   7 +
 mk/rte.app.mk                                      |   1 +
 10 files changed, 474 insertions(+)
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 68c6d74..73d9578 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -191,6 +191,10 @@ F: lib/librte_mbuf/
 F: doc/guides/prog_guide/mbuf_lib.rst
 F: app/test/test_mbuf.c
 
+Packet buffer offload
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_mbuf_offload/
+
 Ethernet API
 M: Thomas Monjalon <thomas.monjalon@6wind.com>
 F: lib/librte_ether/
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 8803350..ba2533a 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -332,6 +332,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 815bea3..4c52f78 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -340,6 +340,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/lib/Makefile b/lib/Makefile
index 4c5c1b4..ef172ea 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
+DIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += librte_mbuf_offload
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index e203c55..732516d 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -728,6 +728,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque rte_mbuf_offload structure declaration */
+struct rte_mbuf_offload;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -841,6 +844,9 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+
+	/** Chain of offload operations to perform on mbuf */
+	struct rte_mbuf_offload *offload_ops;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
diff --git a/lib/librte_mbuf_offload/Makefile b/lib/librte_mbuf_offload/Makefile
new file mode 100644
index 0000000..acdb449
--- /dev/null
+++ b/lib/librte_mbuf_offload/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mbuf_offload.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+EXPORT_MAP := rte_mbuf_offload_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) := rte_mbuf_offload.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)-include := rte_mbuf_offload.h
+
+# this lib depends on librte_mbuf and librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.c b/lib/librte_mbuf_offload/rte_mbuf_offload.c
new file mode 100644
index 0000000..5c0c9dd
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.c
@@ -0,0 +1,100 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+
+#include "rte_mbuf_offload.h"
+
+/** Initialize rte_mbuf_offload structure */
+static void
+rte_pktmbuf_offload_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_mbuf_offload *ol = _op_data;
+
+	memset(_op_data, 0, mp->elt_size);
+
+	ol->type = RTE_PKTMBUF_OL_NOT_SPECIFIED;
+	ol->mp = mp;
+}
+
+
+struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id)
+{
+	struct rte_pktmbuf_offload_pool_private *priv;
+	unsigned elt_size = sizeof(struct rte_mbuf_offload) + priv_size;
+
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+
+	if (mp != NULL) {
+		priv = (struct rte_pktmbuf_offload_pool_private *)
+				rte_mempool_get_priv(mp);
+
+		if (priv->offload_priv_size < priv_size ||
+				mp->elt_size != elt_size ||
+				mp->cache_size < cache_size ||
+				mp->size < size)
+			return NULL;
+
+		return mp;
+	}
+
+	mp = rte_mempool_create(
+			name,
+			size,
+			elt_size,
+			cache_size,
+			sizeof(struct rte_pktmbuf_offload_pool_private),
+			NULL,
+			NULL,
+			rte_pktmbuf_offload_init,
+			NULL,
+			socket_id,
+			0);
+
+	if (mp == NULL)
+		return NULL;
+
+	priv = (struct rte_pktmbuf_offload_pool_private *)
+			rte_mempool_get_priv(mp);
+
+	priv->offload_priv_size = priv_size;
+	return mp;
+}
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
new file mode 100644
index 0000000..ea97d16
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -0,0 +1,291 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright 2014 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MBUF_OFFLOAD_H_
+#define _RTE_MBUF_OFFLOAD_H_
+
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** packet mbuf offload operation types */
+enum rte_mbuf_ol_op_type {
+	RTE_PKTMBUF_OL_NOT_SPECIFIED = 0,
+	/**< Off-load not specified */
+	RTE_PKTMBUF_OL_CRYPTO
+	/**< Crypto offload operation */
+};
+
+/**
+ * Generic packet mbuf offload.
+ * This is used to specify an offload operation to be performed on a rte_mbuf.
+ * Multiple offload operations can be chained to the same mbuf, but only a
+ * single offload operation of a particular type can be in the chain.
+struct rte_mbuf_offload {
+	struct rte_mbuf_offload *next;	/**< next offload in chain */
+	struct rte_mbuf *m;		/**< mbuf offload is attached to */
+	struct rte_mempool *mp;		/**< mempool offload allocated from */
+
+	enum rte_mbuf_ol_op_type type;	/**< offload type */
+	union {
+		struct rte_crypto_op crypto;	/**< Crypto operation */
+	} op;
+};
+
+/** Private data structure belonging to a packet mbuf offload mempool */
+struct rte_pktmbuf_offload_pool_private {
+	uint16_t offload_priv_size;
+	/**< Size of private area in each mbuf_offload. */
+};
+
+
+/**
+ * Creates a mempool of rte_mbuf_offload objects
+ *
+ * @param	name		mempool name
+ * @param	size		number of objects in mempool
+ * @param	cache_size	cache size of objects for each core
+ * @param	priv_size	size of private data to be allocated with each
+ *				rte_mbuf_offload object
+ * @param	socket_id	Socket on which to allocate mempool objects
+ *
+ * @return
+ * - On success returns a valid mempool of rte_mbuf_offload objects
+ * - On failure return NULL
+ */
+extern struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id);
+
+
+/**
+ * Returns private data size allocated with each rte_mbuf_offload object by
+ * the mempool
+ *
+ * @param	mpool	rte_mbuf_offload mempool
+ *
+ * @return	private data size
+ */
+static inline uint16_t
+__rte_pktmbuf_offload_priv_size(struct rte_mempool *mpool)
+{
+	struct rte_pktmbuf_offload_pool_private *priv =
+			rte_mempool_get_priv(mpool);
+
+	return priv->offload_priv_size;
+}
+
+/**
+ * Get specified off-load operation type from mbuf.
+ *
+ * @param	m		packet mbuf.
+ * @param	type		offload operation type requested.
+ *
+ * @return
+ * - On success returns rte_mbuf_offload pointer
+ * - On failure returns NULL
+ *
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol;
+
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
+		if (ol->type == type)
+			return ol;
+
+	return NULL;
+}
+
+/**
+ * Attach a rte_mbuf_offload to a mbuf. We only support a single offload of any
+ * one type in our chain of offloads.
+ *
+ * @param	m	packet mbuf.
+ * @param	ol	rte_mbuf_offload structure to be attached
+ *
+ * @returns
+ * - On success returns the pointer to the offload we just added
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
+{
+	struct rte_mbuf_offload **ol_last;
+
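+	/* walk to the end of the chain, failing if this type already exists */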
+	for (ol_last = &m->offload_ops; ol_last[0] != NULL;
+			ol_last = &ol_last[0]->next)
+		if (ol_last[0]->type == ol->type)
+			return NULL;
+
+	ol_last[0] = ol;
+	ol_last[0]->m = m;
+	ol_last[0]->next = NULL;
+
+	return ol_last[0];
+}
+
+
+/** Resets a rte_mbuf_offload structure to its default parameters */
+static inline void
+__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
+		enum rte_mbuf_ol_op_type type)
+{
+	ol->m = NULL;
+	ol->type = type;
+
+	switch (type) {
+	case RTE_PKTMBUF_OL_CRYPTO:
+		__rte_crypto_op_reset(&ol->op.crypto);
+		break;
+	default:
+		break;
+	}
+}
+
+/** Allocate rte_mbuf_offload from mempool */
+static inline struct rte_mbuf_offload *
+__rte_pktmbuf_offload_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_mbuf_offload *)buf;
+}
+
+/**
+ * Allocate a rte_mbuf_offload with a specified operation type from
+ * rte_mbuf_offload mempool
+ *
+ * @param	mpool		rte_mbuf_offload mempool
+ * @param	type		offload operation type
+ *
+ * @returns
+ * - On success returns a valid rte_mbuf_offload structure
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_alloc(struct rte_mempool *mpool,
+		enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol = __rte_pktmbuf_offload_raw_alloc(mpool);
+
+	if (ol != NULL)
+		__rte_pktmbuf_offload_reset(ol, type);
+
+	return ol;
+}
+
+/**
+ * Free a rte_mbuf_offload structure back to its originating mempool
+ */
+static inline void
+rte_pktmbuf_offload_free(struct rte_mbuf_offload *ol)
+{
+	if (ol->mp != NULL)
+		rte_mempool_put(ol->mp, ol);
+}
+
+/**
+ * Checks if the private data of a rte_mbuf_offload has enough capacity for
+ * the requested size
+ *
+ * @returns
+ * - if sufficient space available returns pointer to start of private data
+ * - if insufficient space returns NULL
+ */
+static inline void *
+__rte_pktmbuf_offload_check_priv_data_size(struct rte_mbuf_offload *ol,
+		uint16_t size)
+{
+	uint16_t priv_size;
+
+	if (likely(ol->mp != NULL)) {
+		priv_size = __rte_pktmbuf_offload_priv_size(ol->mp);
+
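+		/* private data area starts directly after the struct */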
+		if (likely(priv_size >= size))
+			return (void *)(ol + 1);
+	}
+	return NULL;
+}
+
+/**
+ * Allocate space for crypto xforms in the private data space of the
+ * rte_mbuf_offload. This also sets each crypto xform type to its default and
+ * configures the chaining of the xforms in the crypto operation.
+ *
+ * @return
+ * - On success returns pointer to first crypto xform in crypto operations chain
+ * - On failure returns NULL
+ */
+static inline struct rte_crypto_xform *
+rte_pktmbuf_offload_alloc_crypto_xforms(struct rte_mbuf_offload *ol,
+		unsigned nb_xforms)
+{
+	struct rte_crypto_xform *xform;
+	void *priv_data;
+	uint16_t size;
+
+	size = sizeof(struct rte_crypto_xform) * nb_xforms;
+	priv_data = __rte_pktmbuf_offload_check_priv_data_size(ol, size);
+
+	if (priv_data == NULL)
+		return NULL;
+
+	ol->op.crypto.xform = xform = (struct rte_crypto_xform *)priv_data;
+
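+	/* chain the xforms contiguously in priv data; the last next is NULL */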
+	do {
+		xform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;
+		xform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;
+	} while (xform);
+
+	return ol->op.crypto.xform;
+}
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MBUF_OFFLOAD_H_ */
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload_version.map b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
new file mode 100644
index 0000000..3d3b06a
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
@@ -0,0 +1,7 @@
+DPDK_2.2 {
+	global:
+
+	rte_pktmbuf_offload_pool_create;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5d382bb..2b8ddce 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,7 @@ ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),n)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)   += -lrte_mbuf_offload
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
-- 
2.4.3
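
For illustration, a minimal usage sketch of the librte_mbuf_offload API
added above (a sketch only, not part of the patch: the pool sizing values
are arbitrary, "m" is assumed to be a valid rte_mbuf, and filling in the
cipher/auth fields of the xforms is defined by the cryptodev patches
earlier in this series):

#include <rte_mbuf_offload.h>

static struct rte_mempool *ol_pool;

/* Attach a crypto offload to an mbuf, with room for a 2 xform
 * (cipher + auth) chain in the per-object private data area. */
int
example_attach_crypto_op(struct rte_mbuf *m)
{
	struct rte_mbuf_offload *ol;
	struct rte_crypto_xform *xform;

	if (ol_pool == NULL) {
		ol_pool = rte_pktmbuf_offload_pool_create("ol_pool",
				8192, 128,
				2 * sizeof(struct rte_crypto_xform), 0);
		if (ol_pool == NULL)
			return -1;
	}

	ol = rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);
	if (ol == NULL)
		return -1;

	/* two chained xforms, each reset to RTE_CRYPTO_XFORM_NOT_SPECIFIED */
	xform = rte_pktmbuf_offload_alloc_crypto_xforms(ol, 2);
	if (xform == NULL) {
		rte_pktmbuf_offload_free(ol);
		return -1;
	}

	/* ... fill in cipher parameters in xform and auth parameters
	 * in xform->next ... */

	/* attach fails if an offload of this type is already chained */
	if (rte_pktmbuf_offload_attach(m, ol) == NULL) {
		rte_pktmbuf_offload_free(ol);
		return -1;
	}
	return 0;
}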

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
                           ` (5 preceding siblings ...)
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
                           ` (3 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.

This patch depends on a QAT PF driver for device initialization. See
the file doc/guides/cryptodevs/qat.rst for configuration details.

This patch supports a limited subset of QAT device functionality,
currently supporting chaining of cipher and hash operations for the
following algorithms:

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC

Some limitations of this patchset, which shall be addressed in a
subsequent release:
 - Chained mbufs are not supported.
 - Hash only is not supported.
 - Cipher only is not supported.
 - Only in-place is currently supported (destination address is
   the same as source address).
 - Only supports session-oriented API implementation (session-less
   APIs are not supported).

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
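As an illustration of the cipher/hash chaining this PMD accepts, below is
a sketch of an AES128-CBC plus SHA1-HMAC session setup. This is a sketch
only: rte_cryptodev_session_create() and the rte_crypto_xform cipher/auth
field names come from the cryptodev patches earlier in this series and
should be treated as indicative here; the key buffers are
application-supplied.

	static uint8_t cipher_key[16];	/* AES128 key material */
	static uint8_t hmac_key[64];	/* HMAC-SHA1 key material */

	struct rte_crypto_xform auth_xform = {
		.next = NULL,
		.type = RTE_CRYPTO_XFORM_AUTH,
		.auth = {
			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
			.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
			.key = { .data = hmac_key, .length = 64 },
			.digest_length = 20,	/* SHA1 digest is 20 bytes */
		}
	};

	struct rte_crypto_xform cipher_xform = {
		.next = &auth_xform,	/* cipher first, then hash */
		.type = RTE_CRYPTO_XFORM_CIPHER,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES128_CBC,
			.key = { .data = cipher_key, .length = 16 },
		}
	};

	/* session = rte_cryptodev_session_create(dev_id, &cipher_xform); */
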
 config/common_bsdapp                               |  14 +
 config/common_linuxapp                             |  14 +
 doc/guides/cryptodevs/index.rst                    |  42 ++
 doc/guides/cryptodevs/qat.rst                      | 194 +++++++
 doc/guides/index.rst                               |   1 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  37 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 174 ++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 +++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 ++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 306 +++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 125 +++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 601 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 561 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    | 124 +++++
 drivers/crypto/qat/qat_logs.h                      |  78 +++
 drivers/crypto/qat/qat_qp.c                        | 429 +++++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 137 +++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |   9 +-
 mk/rte.app.mk                                      |   3 +
 22 files changed, 3628 insertions(+), 8 deletions(-)
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index ba2533a..0068b20 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -155,6 +155,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 4c52f78..b29d3dd 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -153,6 +153,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..1c31697
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,42 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+====================================
+
+|today|
+
+
+**Contents**
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..9e24c07
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,194 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator. The QAT PMD has
+currently been tested on Fedora 21 64-bit with gcc and on the 4.3 kernel.org
+Linux kernel.
+
+
+Features
+--------
+
+The QAT PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+To use the DPDK QAT PMD an SRIOV-enabled QAT kernel driver is required.
+The VF devices exposed by this driver will be used by QAT PMD.
+
+If you are running on kernel 4.3 or greater, see the instructions for
+"Installation using kernel.org driver" below. If you are on a kernel earlier
+than 4.3, see "Installation using 01.org QAT driver".
+
+Installation using 01.org QAT driver
+------------------------------------
+Download the latest QuickAssist Technology Driver from 01.org
+https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches
+Consult the Getting Started Guide at the same URL for further information.
+
+The steps below assume:
+  * building on a platform with one DH895xCC device
+  * using package qatmux.l.2.3.0-34.tgz
+  * on Fedora21 kernel 3.17.4-301.fc21.x86_64
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, e.g. by running
+  *  "./installer.sh uninstall" in the directory where originally installed
+     or
+  *  "rmmod qat_dh895xcc; rmmod intel_qat"
+
+Build and install the SRIOV-enabled QAT driver
+
+.. code-block:: console
+
+    "mkdir /QAT; cd /QAT"
+    copy qatmux.l.2.3.0-34.tgz to this location
+    "tar zxof qatmux.l.2.3.0-34.tgz"
+    "export ICP_WITHOUT_IOMMU=1"
+    "./installer.sh install QAT1.6 host"
+
+You can use "cat /proc/icp_dh895xcc_dev0/version" to confirm the driver is correctly installed.
+You can use "lspci -d:443" to confirm the bdf of the 32 VF devices are available per DH895xCC device.
+
+To complete the installation, follow the instructions in "Binding the available VFs to the DPDK UIO driver".
+
+Compiling the 01.org driver - notes:
+If using a later kernel and the build fails with an error relating to
+strict_strtoul not being available, patch the following file:
+
+.. code-block:: console
+
+  /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+  + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+  + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  + #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+  #else
+  #define STR_TO_64(str, base, num, endPtr)                                 \
+       do {                                                               \
+             if (str[0] == '-')                                           \
+             {                                                            \
+                  *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+             }else {                                                      \
+                  *(num) = simple_strtoull((str), &(endPtr), (base));      \
+             }                                                            \
+       } while(0)
+  + #endif
+  #endif
+  #endif
+
+
+If the build fails due to missing header files, you may need to do the following:
+  *  sudo yum install zlib-devel
+  *  sudo yum install openssl-devel
+
+If the build or install fails due to mismatching kernel sources, you may need to do the following:
+  *  sudo yum install kernel-headers-`uname -r`
+  *  sudo yum install kernel-src-`uname -r`
+  *  sudo yum install kernel-devel-`uname -r`
+
+Installation using kernel.org driver
+------------------------------------
+
+Assuming you are running on at least a 4.3 kernel, you can use the stock kernel.org QAT
+driver to start the QAT hardware.
+
+The steps below assume:
+  * running DPDK on a platform with one DH895xCC device
+  * on a kernel at least version 4.3
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Ensure the QAT driver is loaded on your system, by executing::
+
+    lsmod | grep qat
+
+You should see the following output::
+
+    qat_dh895xcc            5626  0
+    intel_qat              82336  1 qat_dh895xcc
+
+Next, you need to expose the VFs using the sysfs file system.
+
+First find the bdf of the DH895xCC device::
+
+    lspci -d:435
+
+You should see output similar to::
+
+    03:00.0 Co-processor: Intel Corporation Coleto Creek PCIe Endpoint
+
+Using the sysfs, enable the VFs::
+
+    echo 32 > /sys/bus/pci/drivers/dh895xcc/0000\:03\:00.0/sriov_numvfs
+
+If you get an error, it's likely you're using a QAT kernel driver earlier than kernel 4.3.
+
+To verify that the VFs are available for use, use "lspci -d:443" to confirm
+that the bdfs of the 32 VF devices per DH895xCC device are available.
+
+To complete the installation, follow the instructions in "Binding the available VFs to the DPDK UIO driver".
+
+
+Binding the available VFs to the DPDK UIO driver
+------------------------------------------------
+The unbind command below assumes bdfs of 03:01.00-03:04.07; if yours are different, adjust the unbind command accordingly.
+
+Make the VFs available to DPDK:
+
+.. code-block:: console
+
+   cd $RTE_SDK  # see http://dpdk.org/doc/quick-start to install DPDK
+   modprobe uio
+   insmod ./build/kmod/igb_uio.ko
+   for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n 0000:03:0${device}.${fn} > /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind; done; done
+   echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id
+
+You can use "lspci -vvd:443" to confirm that all devices are now in use by igb_uio kernel driver
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..f6aecea
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,37 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
\ No newline at end of file
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..47f1c91
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,174 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) \
+			= (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
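+/*
+ * e.g. with ADF_RING_SIZE_16K (0x08) and ADF_MSG_SIZE_64 (0x02): the ring
+ * is 16384 bytes, each message 64 bytes, so the ring holds 256 messages
+ * and ADF_MAX_INFLIGHTS evaluates to 256 - 1 = 255.
+ */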
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..498ee83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
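+/*
+ * e.g. with bitpos 7 and mask 0x1 (the common header VALID flag below),
+ * QAT_FIELD_SET(flags, 1, 7, 0x1) sets bit 7 of flags and
+ * QAT_FIELD_GET(flags, 7, 0x1) reads it back.
+ */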
+
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..fbf2b83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..4d4d8e4
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,306 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	(((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	(((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	 (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	(algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
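+
+/*
+ * Worked example (illustrative): for HMAC-SHA1 with a 20-byte digest,
+ * ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+ * ICP_QAT_HW_AUTH_ALGO_SHA1, 20) packs mode (1) into bits 4-7,
+ * algo (1) into bits 0-3 and cmp_len (20) into bits 8-14,
+ * giving a config word of 0x1411.
+ */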
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & (~(n - 1)))
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	(((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
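+/*
+ * Worked example (illustrative): ICP_QAT_HW_CIPHER_CONFIG_BUILD(
+ *	ICP_QAT_HW_CIPHER_CBC_MODE, ICP_QAT_HW_CIPHER_ALGO_AES128,
+ *	ICP_QAT_HW_CIPHER_NO_CONVERT, ICP_QAT_HW_CIPHER_ENCRYPT)
+ * packs the mode (1) into bits 4-7 and the algo (3) into bits 0-3,
+ * with convert and direction clear, giving a config word of 0x13.
+ */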
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..76c08c0
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,125 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf bufers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t add_auth_data_length,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
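+
+/*
+ * Typical call sequence (illustrative sketch; sess, cipher_key and
+ * auth_key are placeholders) for an AES-128-CBC + HMAC-SHA1 session:
+ *
+ *	enum icp_qat_hw_cipher_algo alg;
+ *
+ *	if (qat_alg_validate_aes_key(16, &alg) == 0) {
+ *		sess->qat_cipher_alg = alg;
+ *		sess->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+ *		sess->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+ *		sess->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+ *		sess->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+ *		qat_alg_aead_session_create_content_desc(sess,
+ *			cipher_key, 16, auth_key, 20, 0, 20);
+ *	}
+ */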
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..ceaffb7
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,601 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *	* Redistributions of source code must retain the above copyright
+ *	  notice, this list of conditions and the following disclaimer.
+ *	* Redistributions in binary form must reproduce the above copyright
+ *	  notice, this list of conditions and the following disclaimer in
+ *	  the documentation and/or other materials provided with the
+ *	  distribution.
+ *	* Neither the name of Intel Corporation nor the names of its
+ *	  contributors may be used to endorse or promote products derived
+ *	  from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+#include <openssl/aes.h>	/* Needed to calculate pre-compute values */
+
+
+/*
+ * Returns the size in bytes of the state1 field in cd_ctrl for the
+ * given hash algorithm: the digest size rounded up to the nearest
+ * quadword (8 bytes).
+ */
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_GALOIS_128_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
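+
+/*
+ * Worked example: the SHA1 digest is 20 bytes, so
+ * QAT_HW_ROUND_UP(20, 8) = (20 + 7) & ~7 = 24 bytes of state1.
+ */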
+
+/* returns digest size in bytes per hash algo */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns block size in bytes per hash algo */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+		return 16;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
+	uint8_t digest[qat_hash_get_digest_size(
+			ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
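+
+/*
+ * The byte swaps above are needed because OpenSSL keeps the SHA state
+ * words in CPU-native order, while QAT expects the hash mid-state in
+ * big-endian order.
+ */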
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+#define HASH_XCBC_PRECOMP_KEY_NUM 3
+
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		static uint8_t qat_aes_xcbc_key_seed[
+					ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ] = {
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+		};
+
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		int x;
+		AES_KEY enc_key;
+
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ, 16);
+		if (in == NULL)
+			return -EFAULT;
+		rte_memcpy(in, qat_aes_xcbc_key_seed,
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+		for (x = 0; x < HASH_XCBC_PRECOMP_KEY_NUM; x++) {
+			if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+				&enc_key) != 0) {
+				rte_free(in -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ));
+				memset(out -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ),
+					0, ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+				return -EFAULT;
+			}
+			AES_encrypt(in, out, &enc_key);
+			in += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+			out += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+		}
+		*p_state_len = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		rte_free(in - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+		return 0;
+	} else if ((hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		AES_KEY enc_key;
+
+		memset(p_state_buf, 0, ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ);
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_GALOIS_H_SZ, 16);
+		if (in == NULL)
+			return -EFAULT;
+		/* rte_zmalloc() already zeroes the working buffer */
+		if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+			&enc_key) != 0) {
+			rte_free(in);
+			return -EFAULT;
+		}
+		AES_encrypt(in, out, &enc_key);
+		*p_state_len = ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		rte_free(in);
+		return 0;
+	}
+
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	}
+	rte_memcpy(ipad, auth_key, auth_keylen);
+	rte_memcpy(opad, auth_key, auth_keylen);
+
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/*
+	 * state1 is padded to a multiple of 8 bytes, so it may be larger
+	 * than the digest; the opad partial hash is therefore written
+	 * state_len bytes after the start of state1.
+	 */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/* don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
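+
+/*
+ * Note: the ipad/opad partial hashes computed above are the standard
+ * HMAC mid-states, HMAC(K, m) = H((K ^ opad) || H((K ^ ipad) || m)).
+ * Caching H(K ^ ipad) and H(K ^ opad) in the content descriptor lets
+ * the hardware resume the hash without re-deriving them per request.
+ */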
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+			uint8_t *cipherkey, uint32_t cipherkeylen,
+			uint8_t *authkey, uint32_t authkeylen,
+			uint32_t add_auth_data_length,
+			uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t proto = ICP_QAT_FW_LA_NO_PROTO; /* no CCM/GCM/Snow3G */
+	uint16_t state1_size = 0;
+	uint16_t state2_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* CD setup */
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(
+			cdesc->qat_mode, cdesc->qat_cipher_alg, key_convert,
+			cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	/* Do precomputes */
+	if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(XCBC)precompute failed");
+			return -EFAULT;
+		}
+	} else if ((cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			cipherkey, cipherkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_GALOIS_128_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(GCM)precompute failed");
+			return -EFAULT;
+		}
+		/*
+		 * Write the AAD length into bytes 16-19 of state2 in
+		 * big-endian format; the len_a field itself is 8 bytes.
+		 */
+		*(uint32_t *)&(hash->sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ]) =
+			rte_bswap32(add_auth_data_length);
+		proto = ICP_QAT_FW_LA_GCM_PROTO;
+	} else {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1),
+			&state1_size)) {
+			PMD_DRV_LOG(ERR, "(SHA)precompute failed");
+			return -EFAULT;
+		}
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	/* Configure the common header protocol flags */
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, proto);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+	hash_cd_ctrl->inner_state1_sz = state1_size;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		hash_cd_ctrl->inner_state2_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		hash_cd_ctrl->inner_state1_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_GALOIS_H_SZ +
+						ICP_QAT_HW_GALOIS_LEN_A_SZ +
+						ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_GALOIS_128_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_GALOIS_128_STATE1_SZ);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8))
+					>> 3);
+	auth_param->auth_res_sz = digestsize;
+
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+		PMD_DRV_LOG(ERR, "invalid param, only authenticated "
+				"encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..47b257f
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,561 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+#include <rte_mbuf_offload.h>
+#include <rte_hexdump.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t shift);
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+
+void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
+		void *session)
+{
+	struct qat_session *sess = session;
+
+	PMD_INIT_FUNC_TRACE();
+	if (sess) {
+		/* read cd_paddr only after the NULL check */
+		phys_addr_t cd_paddr = sess->cd_paddr;
+
+		memset(sess, 0, qat_crypto_sym_get_session_private_size(dev));
+		sess->cd_paddr = cd_paddr;
+	}
+}
+
+static int
+qat_get_cmd_id(const struct rte_crypto_xform *xform)
+{
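+	/*
+	 * Cipher-only and auth-only commands are not yet enabled, so a
+	 * chained xform is required up front; the single-operation
+	 * checks below are unreachable placeholders for future support.
+	 */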
+	if (xform->next == NULL)
+		return -1;
+
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_CIPHER; */
+
+	/* Authentication Only */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_AUTH; */
+
+	/* Cipher then Authenticate */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+			xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+
+	/* Authenticate then Cipher */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+
+	return -1;
+}
+
+static struct rte_crypto_auth_xform *
+qat_get_auth_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_AUTH)
+			return &xform->auth;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+qat_get_cipher_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_CIPHER)
+			return &xform->cipher;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+
+void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	struct qat_session *session = session_private;
+
+	struct rte_crypto_auth_xform *auth_xform = NULL;
+	struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+	int qat_cmd_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get requested QAT command id */
+	qat_cmd_id = qat_get_cmd_id(xform);
+	if (qat_cmd_id < 0 || qat_cmd_id >= ICP_QAT_FW_LA_CMD_DELIMITER) {
+		PMD_DRV_LOG(ERR, "Unsupported xform chain requested");
+		goto error_out;
+	}
+	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+
+	/* Get cipher xform from crypto xform chain */
+	cipher_xform = qat_get_cipher_xform(xform);
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+				cipher_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Cipher specified %u\n",
+				cipher_xform->algo);
+		goto error_out;
+	}
+
+	if (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+
+	/* Get authentication xform from Crypto xform chain */
+	auth_xform = qat_get_auth_xform(xform);
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		break;
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				auth_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				auth_xform->algo);
+		goto error_out;
+	}
+
+	if (qat_alg_aead_session_create_content_desc(session,
+		cipher_xform->key.data,
+		cipher_xform->key.length,
+		auth_xform->key.data,
+		auth_xform->key.length,
+		auth_xform->add_auth_data_length,
+		auth_xform->digest_length))
+		goto error_out;
+
+	return (struct rte_cryptodev_session *)session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+unsigned qat_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev __rte_unused)
+{
+	return RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+}
+
+
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	register struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	register uint32_t nb_pkts_sent = 0;
+	register struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	register int ret;
+	uint16_t nb_pkts_possible = nb_pkts;
+	register uint8_t *base_addr;
+	register uint32_t tail;
+	int overflow;
+
+	/* read params used a lot in main loop into registers */
+	queue = &(tmp_qp->tx_q);
+	base_addr = (uint8_t *)queue->base_addr;
+	tail = queue->tail;
+
+	/*
+	 * Reserve ring space by atomically bumping the in-flight count;
+	 * if the reservation overshoots max_inflights, hand back the
+	 * excess and only enqueue the requests that actually fit.
+	 */
+	overflow = (rte_atomic16_add_return(&tmp_qp->inflights16, nb_pkts)
+				- queue->max_inflights);
+	if (overflow > 0) {
+		rte_atomic16_sub(&tmp_qp->inflights16, overflow);
+		nb_pkts_possible = nb_pkts - overflow;
+		if (nb_pkts_possible == 0)
+			return 0;
+	}
+
+	while (nb_pkts_sent != nb_pkts_possible) {
+
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			base_addr + tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			if (nb_pkts_sent == 0)
+				return 0;
+			goto kick_tail;
+		}
+
+		tail = adf_modulo(tail + queue->msg_size, queue->modulo);
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, tail);
+	queue->tail = tail;
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct rte_mbuf_offload *ol;
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
+	resp_msg = (struct icp_qat_fw_comn_resp *)
+			((uint8_t *)queue->base_addr + queue->head);
+
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG &&
+			msg_counter != nb_pkts) {
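+		/* opaque_data holds the mbuf pointer stored at enqueue
+		 * time by qat_alg_write_mbuf_entry() */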
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		ol = rte_pktmbuf_offload_get(rx_mbuf, RTE_PKTMBUF_OL_CRYPTO);
+
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+					resp_msg->comn_hdr.comn_status)) {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		} else {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+				queue->msg_size,
+				ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)
+					((uint8_t *)queue->base_addr +
+							queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	register struct icp_qat_fw_la_bulk_req *qat_req;
+
+	ol = rte_pktmbuf_offload_get(mbuf, RTE_PKTMBUF_OL_CRYPTO);
+	if (unlikely(ol == NULL)) {
+		PMD_DRV_LOG(ERR, "No valid crypto off-load operation attached "
+				"to mbuf (%p).", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
+				" requests; mbuf (%p) is sessionless.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.session->type != RTE_CRYPTODEV_QAT_PMD)) {
+		PMD_DRV_LOG(ERR, "Session was not created for this device");
+		return -EINVAL;
+	}
+
+	ctx = (struct qat_session *)ol->op.crypto.session->_private;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length =
+			qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr =
+			qat_req->comn_mid.src_data_addr =
+					rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = ol->op.crypto.data.to_cipher.length;
+	cipher_param->cipher_offset = ol->op.crypto.data.to_cipher.offset;
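+	/* An IV small enough to fit is copied inline into the request;
+	 * otherwise it is passed to the hardware by physical address. */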
+	if (ol->op.crypto.iv.length &&
+		(ol->op.crypto.iv.length <=
+				sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array,
+				ol->op.crypto.iv.data, ol->op.crypto.iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = ol->op.crypto.iv.phys_addr;
+	}
+	if (ol->op.crypto.digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = ol->op.crypto.digest.phys_addr;
+	}
+	auth_param->auth_off = ol->op.crypto.data.to_hash.offset;
+	auth_param->auth_len = ol->op.crypto.data.to_hash.length;
+	auth_param->u1.aad_adr = ol->op.crypto.additional_auth.phys_addr;
+
+	/* For GCM, the AAD length (240 max) is found at this location
+	 * in state1 after the precompute */
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+		ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
+		auth_param->u2.aad_sz =
+		ALIGN_POW2_ROUNDUP(ctx->cd.hash.sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ + 3], 16);
+	}
+	auth_param->hash_state_sz = (auth_param->u2.aad_sz) >> 3;
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+	rte_hexdump(stdout, "qat_req:", qat_req,
+			sizeof(struct icp_qat_fw_la_bulk_req));
+#endif
+	return 0;
+}
+
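+/*
+ * Fast x % 2^shift: subtracts the largest multiple of 2^shift not
+ * exceeding x, e.g. adf_modulo(300, 8) == 300 - 256 == 44.
+ */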
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
+
+void qat_crypto_sym_session_init(struct rte_mempool *mp, void *priv_sess)
+{
+	struct qat_session *s = priv_sess;
+
+	PMD_INIT_FUNC_TRACE();
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return 0;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+int qat_dev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = qat_crypto_sym_qp_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void qat_dev_info_get(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *info)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+				ADF_NUM_SYM_QPS_PER_BUNDLE *
+				ADF_NUM_BUNDLES_PER_DEV;
+
+		info->max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..d680364
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,124 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev_pmd.h>
+#include <rte_memzone.h>
+
+/*
+ * This macro rounds up a number to be a multiple of
+ * the alignment when the alignment is a power of 2
+ */
+#define ALIGN_POW2_ROUNDUP(num, align) \
+	(((num) + (align) - 1) & ~((align) - 1))
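+/* For example, ALIGN_POW2_ROUNDUP(10, 8) == 16 and
+ * ALIGN_POW2_ROUNDUP(16, 8) == 16; only valid for power-of-2 align */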
+
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	modulo;
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;
+	/* HW queue number, i.e. ring offset within the bundle */
+};
+
+struct qat_qp {
+	void			*mmap_bar_addr;
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+int qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev,
+	uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+
+extern unsigned
+qat_crypto_sym_get_session_private_size(struct rte_cryptodev *dev);
+
+extern void
+qat_crypto_sym_session_init(struct rte_mempool *mempool, void *priv_sess);
+
+extern void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+extern void
+qat_crypto_sym_clear_session(struct rte_cryptodev *dev, void *session);
+
+
+uint16_t
+qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..a909f63
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..ec5852d
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,429 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+#define ADF_SYM_TX_QUEUE_STARTOFF		2
+/* Offset from bundle start to 1st Sym Tx queue */
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static void qat_queue_delete(struct qat_queue *queue);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+			int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
+		}
+
+		PMD_DRV_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %d",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %d",
+					queue_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
+	switch (ms[0].hugepage_sz) {
+	case(RTE_PGSIZE_2M):
+		memzone_flags = RTE_MEMZONE_2MB;
+	break;
+	case(RTE_PGSIZE_1G):
+		memzone_flags = RTE_MEMZONE_1GB;
+	break;
+	case(RTE_PGSIZE_16M):
+		memzone_flags = RTE_MEMZONE_16MB;
+	break;
+	case(RTE_PGSIZE_16G):
+		memzone_flags = RTE_MEMZONE_16GB;
+	break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+	}
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(queue_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(queue_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use, free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		ret = qat_crypto_sym_qp_release(dev, queue_pair_id);
+		if (ret < 0)
+			return ret;
+	}
+
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	if (dev->pci_dev->mem_resource[0].addr == NULL) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space "
+				"(UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >=
+			(ADF_NUM_SYM_QPS_PER_BUNDLE *
+					ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device",
+				queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp metadata",
+			sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_INIT_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%hu", queue_pair_id);
+		qat_queue_delete(&(qp->tx_q));
+		goto create_err;
+	}
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	return 0;
+
+create_err:
+	rte_free(qp);
+	return (-EFAULT);
+}
+
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_qp *qp =
+			(struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return 0;
+	}
+
+	/* Don't free memory if there are still responses to be processed */
+	if (rte_atomic16_read(&(qp->inflights16)) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+	rte_free(qp);
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+	return 0;
+}
+
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
+static void qat_queue_delete(struct qat_queue *queue)
+{
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		PMD_DRV_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			PMD_DRV_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		PMD_DRV_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc*desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+			socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr,
+			queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					" 0x%"PRIx64"\n",
+					queue->base_phys_addr);
+		return -EFAULT;
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc, &(queue->queue_size))
+			!= 0) {
+		PMD_DRV_LOG(ERR, "Invalid queue size");
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	queue->modulo = ADF_RING_SIZE_MODULO(queue->queue_size);
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u modulo %u",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights,
+				queue->modulo);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %u", msg_size * msg_num);
+	return (-EINVAL);
+}
+
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	/* Clear (rather than toggle) this queue's arbitration bit */
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..bbaf1c8
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
\ No newline at end of file
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..e500c1e
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,137 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
+static struct rte_cryptodev_ops crypto_qat_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+		/* Crypto related operations */
+		.session_get_size	= qat_crypto_sym_get_session_private_size,
+		.session_configure	= qat_crypto_sym_configure_session,
+		.session_initialize	= qat_crypto_sym_session_init,
+		.session_clear		= qat_crypto_sym_clear_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
+		{
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(__rte_unused struct rte_cryptodev_driver *crypto_drv,
+			struct rte_cryptodev *cryptodev)
+{
+	struct qat_pmd_private *internals;
+
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_QAT_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as the
+	 * primary has already done this work.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
index ea97d16..e903b98 100644
--- a/lib/librte_mbuf_offload/rte_mbuf_offload.h
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -123,17 +123,10 @@ rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
 {
 	struct rte_mbuf_offload *ol = m->offload_ops;
 
-	if (m->offload_ops != NULL && m->offload_ops->type == type)
-		return ol;
-
-	ol = m->offload_ops;
-	while (ol != NULL) {
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
 		if (ol->type == type)
 			return ol;
 
-		ol = ol->next;
-	}
-
 	return ol;
 }
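
For illustration only (not part of the patch): with this linear search a
caller can retrieve the crypto offload context attached to an mbuf using
only names already visible in this series, e.g.

	struct rte_mbuf_offload *ol =
		rte_pktmbuf_offload_get(m, RTE_PKTMBUF_OL_CRYPTO);

	if (ol != NULL &&
			ol->op.crypto.status == RTE_CRYPTO_OP_STATUS_SUCCESS)
		; /* the crypto operation on m completed successfully */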
 
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 2b8ddce..cfcb064 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -150,6 +150,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
                           ` (6 preceding siblings ...)
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
                           ` (2 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.

This PMD depends on Intel's multi-buffer library; see the whitepaper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors", ref 1, for details on the library's design, and ref 2 to
download the library itself. This initial implementation is limited to
supporting the chained operations of "hash then cipher" or "cipher then
hash" (a sketch of such a chain follows the lists below) for the
following cipher and hash algorithms:

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC

Important Note:
Because the multi-buffer library is designed for accelerating IPsec
crypto operations, the digests generated for the HMAC functions are
truncated to the lengths specified by the IPsec RFCs, e.g. RFC 2404
specifies that the HMAC-SHA-1 digest used with IPsec is truncated from
20 to 12 bytes.

Build instructions:
To build DPDK with the AESNI_MB_PMD the user is required to download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where you extracted and built the multi
buffer library, and finally CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y must be set
in config/common_linuxapp.

Current status: it does not support crypto operations
across chained mbufs, nor cipher-only or hash-only operations.

ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p

ref 2: https://downloadcenter.intel.com/download/22972

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                                        |   3 +
 config/common_bsdapp                               |   7 +
 config/common_linuxapp                             |   7 +
 doc/guides/cryptodevs/aesni_mb.rst                 |  76 +++
 doc/guides/cryptodevs/index.rst                    |   1 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/aesni_mb/Makefile                   |  63 ++
 drivers/crypto/aesni_mb/aesni_mb_ops.h             | 210 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 669 +++++++++++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     | 298 +++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 229 +++++++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |   3 +
 mk/rte.app.mk                                      |   4 +
 13 files changed, 1571 insertions(+)
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 73d9578..2d5808c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -303,6 +303,9 @@ Null PMD
 M: Tetsuya Mukawa <mukawa@igel.co.jp>
 F: drivers/net/null/
 
+Crypto AES-NI Multi-Buffer PMD
+M: Declan Doherty <declan.doherty@intel.com>
+F: drivers/crypto/aesni_mb/
 
 Packet processing
 -----------------
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 0068b20..a18e817 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -168,6 +168,13 @@ CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_MAX_QAT_SESSIONS=200
 
+
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+
 #
 # Support NIC bypass logic
 #
diff --git a/config/common_linuxapp b/config/common_linuxapp
index b29d3dd..d9c8c5c 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,6 +166,13 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
 
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n
+CONFIG_RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS=8
+CONFIG_RTE_AESNI_MB_PMD_MAX_NB_SESSIONS=2048
+
 #
 # Support NIC bypass logic
 #
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
new file mode 100644
index 0000000..826b632
--- /dev/null
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -0,0 +1,76 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AES-NI Multi Buffer Crypto Poll Mode Driver
+============================================
+
+
+The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver
+support utilising Intel's multi-buffer library, see the white paper
+`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
+<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-paper.html?wapkw=multi+buffer>`_.
+
+The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+AESNI MB PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_CIPHER_AES128_CBC
+* RTE_CRYPTO_CIPHER_AES256_CBC
+* RTE_CRYPTO_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_AUTH_SHA1_HMAC
+* RTE_CRYPTO_AUTH_SHA256_HMAC
+* RTE_CRYPTO_AUTH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To build DPDK with the AESNI_MB_PMD the user is required to download the library
+from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on
+their system before building DPDK. The environment variable
+AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where you extracted
+and built the multi buffer library and finally set
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in config/common_linuxapp.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 1c31697..8949fd0 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,4 +39,5 @@ Crypto Device Drivers
     :maxdepth: 2
     :numbered:
 
+    aesni_mb
     qat
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index f6aecea..d07ee96 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -31,6 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile
new file mode 100644
index 0000000..3bf83d1
--- /dev/null
+++ b/drivers/crypto/aesni_mb/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_aesni_mb.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_aesni_version.map
+
+# external library include paths
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
new file mode 100644
index 0000000..0c119bf
--- /dev/null
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -0,0 +1,210 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AESNI_MB_OPS_H_
+#define _AESNI_MB_OPS_H_
+
+#ifndef LINUX
+#define LINUX
+#endif
+
+#include <mb_mgr.h>
+#include <aux_funcs.h>
+
+enum aesni_mb_vector_mode {
+	RTE_AESNI_MB_NOT_SUPPORTED = 0,
+	RTE_AESNI_MB_SSE,
+	RTE_AESNI_MB_AVX,
+	RTE_AESNI_MB_AVX2
+};
+
+typedef void (*md5_one_block_t)(void *data, void *digest);
+
+typedef void (*sha1_one_block_t)(void *data, void *digest);
+typedef void (*sha224_one_block_t)(void *data, void *digest);
+typedef void (*sha256_one_block_t)(void *data, void *digest);
+typedef void (*sha384_one_block_t)(void *data, void *digest);
+typedef void (*sha512_one_block_t)(void *data, void *digest);
+
+typedef void (*aes_keyexp_128_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_192_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_256_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+typedef void (*aes_xcbc_expand_key_t)
+		(void *key, void *exp_k1, void *k2, void *k3);
+
+/** Multi-buffer library function pointer table */
+struct aesni_mb_ops {
+	struct {
+		init_mb_mgr_t init_mgr;
+		/**< Initialise scheduler  */
+		get_next_job_t get_next;
+		/**< Get next free job structure */
+		submit_job_t submit;
+		/**< Submit job to scheduler */
+		get_completed_job_t get_completed_job;
+		/**< Get completed job */
+		flush_job_t flush_job;
+		/**< flush jobs from manager */
+	} job;
+	/**< multi buffer manager functions */
+
+	struct {
+		struct {
+			md5_one_block_t md5;
+			/**< MD5 one block hash */
+			sha1_one_block_t sha1;
+			/**< SHA1 one block hash */
+			sha224_one_block_t sha224;
+			/**< SHA224 one block hash */
+			sha256_one_block_t sha256;
+			/**< SHA256 one block hash */
+			sha384_one_block_t sha384;
+			/**< SHA384 one block hash */
+			sha512_one_block_t sha512;
+			/**< SHA512 one block hash */
+		} one_block;
+		/**< one block hash functions */
+
+		struct {
+			aes_keyexp_128_t aes128;
+			/**< AES128 key expansions */
+			aes_keyexp_192_t aes192;
+			/**< AES192 key expansions */
+			aes_keyexp_256_t aes256;
+			/**< AES256 key expansions */
+
+			aes_xcbc_expand_key_t aes_xcbc;
+			/**< AES XCBC key expansions */
+		} keyexp;
+		/**< Key expansion functions */
+	} aux;
+	/**< Auxiliary functions */
+};
+
+
+static const struct aesni_mb_ops job_ops[] = {
+		[RTE_AESNI_MB_NOT_SUPPORTED] = {
+			.job = {
+				NULL
+			},
+			.aux = {
+				.one_block = {
+					NULL
+				},
+				.keyexp = {
+					NULL
+				}
+			}
+		},
+		[RTE_AESNI_MB_SSE] = {
+			.job = {
+				init_mb_mgr_sse,
+				get_next_job_sse,
+				submit_job_sse,
+				get_completed_job_sse,
+				flush_job_sse
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_sse,
+					sha1_one_block_sse,
+					sha224_one_block_sse,
+					sha256_one_block_sse,
+					sha384_one_block_sse,
+					sha512_one_block_sse
+				},
+				.keyexp = {
+					aes_keyexp_128_sse,
+					aes_keyexp_192_sse,
+					aes_keyexp_256_sse,
+					aes_xcbc_expand_key_sse
+				}
+			}
+		},
+		[RTE_AESNI_MB_AVX] = {
+			.job = {
+				init_mb_mgr_avx,
+				get_next_job_avx,
+				submit_job_avx,
+				get_completed_job_avx,
+				flush_job_avx
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx,
+					sha1_one_block_avx,
+					sha224_one_block_avx,
+					sha256_one_block_avx,
+					sha384_one_block_avx,
+					sha512_one_block_avx
+				},
+				.keyexp = {
+					aes_keyexp_128_avx,
+					aes_keyexp_192_avx,
+					aes_keyexp_256_avx,
+					aes_xcbc_expand_key_avx
+				}
+			}
+		},
+		[RTE_AESNI_MB_AVX2] = {
+			.job = {
+				init_mb_mgr_avx2,
+				get_next_job_avx2,
+				submit_job_avx2,
+				get_completed_job_avx2,
+				flush_job_avx2
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx2,
+					sha1_one_block_avx2,
+					sha224_one_block_avx2,
+					sha256_one_block_avx2,
+					sha384_one_block_avx2,
+					sha512_one_block_avx2
+				},
+				.keyexp = {
+					aes_keyexp_128_avx2,
+					aes_keyexp_192_avx2,
+					aes_keyexp_256_avx2,
+					aes_xcbc_expand_key_avx2
+				}
+			}
+		}
+};
+
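+/* A PMD instance would typically index this table once at init, based
+ * on the vector mode the CPU supports, e.g.
+ *	const struct aesni_mb_ops *ops = &job_ops[RTE_AESNI_MB_SSE];
+ * and dispatch through ops->job.* and ops->aux.* thereafter. */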
+
+#endif /* _AESNI_MB_OPS_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
new file mode 100644
index 0000000..d8ccf05
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -0,0 +1,669 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_mbuf_offload.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/**
+ * Global static parameter used to create a unique name for each AES-NI multi
+ * buffer crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	if (ret >= (int)size)
+		return -EINVAL;
+	return 0;
+}
+
+typedef void (*hash_one_block_t)(void *data, void *digest);
+typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+/**
+ * Calculate the authentication pre-computes
+ *
+ * @param one_block_hash	Function pointer to calculate digest on ipad/opad
+ * @param ipad			Inner pad output byte array
+ * @param opad			Outer pad output byte array
+ * @param hkey			Authentication key
+ * @param hkey_len		Authentication key length
+ * @param blocksize		Block size of selected hash algo
+ */
+static void
+calculate_auth_precomputes(hash_one_block_t one_block_hash,
+		uint8_t *ipad, uint8_t *opad,
+		uint8_t *hkey, uint16_t hkey_len,
+		uint16_t blocksize)
+{
+	unsigned i, length;
+
+	uint8_t ipad_buf[blocksize] __rte_aligned(16);
+	uint8_t opad_buf[blocksize] __rte_aligned(16);
+
+	/* Setup inner and outer pads */
+	memset(ipad_buf, HMAC_IPAD_VALUE, blocksize);
+	memset(opad_buf, HMAC_OPAD_VALUE, blocksize);
+
+	/* XOR hash key with inner and outer pads */
+	length = hkey_len > blocksize ? blocksize : hkey_len;
+
+	for (i = 0; i < length; i++) {
+		ipad_buf[i] ^= hkey[i];
+		opad_buf[i] ^= hkey[i];
+	}
+
+	/* Compute partial hashes */
+	(*one_block_hash)(ipad_buf, ipad);
+	(*one_block_hash)(opad_buf, opad);
+
+	/* Clean up stack */
+	memset(ipad_buf, 0, blocksize);
+	memset(opad_buf, 0, blocksize);
+}
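/*
 * (Background note, not part of the patch.) The precomputes follow the
 * standard HMAC construction of RFC 2104, with the key K zero-padded to
 * the block length:
 *
 *     HMAC(K, m) = H((K ^ opad) || H((K ^ ipad) || m))
 *
 * Caching the one-block hashes of (K ^ ipad) and (K ^ opad) in the
 * session saves two compression-function invocations on every operation
 * that reuses the same authentication key.
 */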
+
+/** Get xform chain order */
+static int
+aesni_mb_get_chain_order(const struct rte_crypto_xform *xform)
+{
+	/*
+	 * Multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained
+	 * operations, all other options are invalid, so we must have exactly
+	 * 2 xform structs chained together
+	 */
+	if (xform->next == NULL || xform->next->next != NULL)
+		return -1;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return HASH_CIPHER;
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+				xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return CIPHER_HASH;
+
+	return -1;
+}
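/*
 * (Illustrative sketch, not part of the patch.) A chain that
 * aesni_mb_get_chain_order() accepts must contain exactly two xforms
 * linked through ->next; anything shorter or longer returns -1. For
 * example, a CIPHER_HASH chain:
 *
 *     struct rte_crypto_xform auth = {
 *             .type = RTE_CRYPTO_XFORM_AUTH,
 *             .next = NULL,            /* chain ends after two entries */
 *     };
 *     struct rte_crypto_xform cipher = {
 *             .type = RTE_CRYPTO_XFORM_CIPHER,
 *             .next = &auth,           /* cipher first, then hash */
 *     };
 */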
+
+/** Set session authentication parameters */
+static int
+aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	hash_one_block_t hash_oneblock_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_AUTH) {
+		MB_LOG_ERR("Crypto xform struct not of type auth");
+		return -1;
+	}
+
+	/* Set Authentication Parameters */
+	if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC) {
+		sess->auth.algo = AES_XCBC;
+		(*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+				sess->auth.xcbc.k1_expanded,
+				sess->auth.xcbc.k2, sess->auth.xcbc.k3);
+		return 0;
+	}
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			xform->auth.key.data,
+			xform->auth.key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+/** Set session cipher parameters */
+static int
+aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	aes_keyexp_t aes_keyexp_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_CIPHER) {
+		MB_LOG_ERR("Crypto xform struct not of type cipher");
+		return -1;
+	}
+
+	/* Select cipher direction */
+	switch (xform->cipher.op) {
+	case RTE_CRYPTO_CIPHER_OP_ENCRYPT:
+		sess->cipher.direction = ENCRYPT;
+		break;
+	case RTE_CRYPTO_CIPHER_OP_DECRYPT:
+		sess->cipher.direction = DECRYPT;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher operation parameter");
+		return -1;
+	}
+
+	/* Select cipher mode */
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		sess->cipher.mode = CBC;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher mode parameter");
+		return -1;
+	}
+
+	/* Check key length and choose key expansion function */
+	switch (xform->cipher.key.length) {
+	case AES_128_BYTES:
+		sess->cipher.key_length_in_bytes = AES_128_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+		break;
+	case AES_192_BYTES:
+		sess->cipher.key_length_in_bytes = AES_192_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+		break;
+	case AES_256_BYTES:
+		sess->cipher.key_length_in_bytes = AES_256_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher key length");
+		return -1;
+	}
+
+	/* Expand cipher keys */
+	(*aes_keyexp_fn)(xform->cipher.key.data,
+			sess->cipher.expanded_aes_keys.encode,
+			sess->cipher.expanded_aes_keys.decode);
+
+	return 0;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (aesni_mb_get_chain_order(xform)) {
+	case HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	case CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		auth_xform = xform->next;
+		cipher_xform = xform;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported operation chain order parameter");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
+		MB_LOG_ERR("Invalid/unsupported authentication parameters");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_cipher_parameters(mb_ops, sess,
+			cipher_xform)) {
+		MB_LOG_ERR("Invalid/unsupported cipher parameters");
+		return -1;
+	}
+	return 0;
+}
+
+/** Get multi buffer session */
+static struct aesni_mb_session *
+get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *crypto_op)
+{
+	struct aesni_mb_session *sess;
+
+	if (crypto_op->type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(crypto_op->session->type !=
+				RTE_CRYPTODEV_AESNI_MB_PMD))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)crypto_op->session->_private;
+	} else  {
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)c_sess->_private;
+
+		if (unlikely(aesni_mb_set_session_parameters(qp->ops,
+				sess, crypto_op->xform) != 0)) {
+			/* return unused session to the mempool on failure */
+			rte_mempool_put(qp->sess_mp, (void *)c_sess);
+			return NULL;
+		}
+	}
+
+	return sess;
+}
+
+/**
+ * Process a crypto operation and complete a JOB_AES_HMAC job structure for
+ * submission to the multi buffer library for processing.
+ *
+ * @param	qp	queue pair
+ * @param	m	mbuf to process
+ * @param	c_op	crypto operation to process
+ * @param	session	session associated with the crypto operation
+ *
+ * @return
+ * - Completed JOB_AES_HMAC structure pointer on success
+ * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible
+ */
+static JOB_AES_HMAC *
+process_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	JOB_AES_HMAC *job;
+
+	job = (*qp->ops->job.get_next)(&qp->mb_mgr);
+	if (unlikely(job == NULL))
+		return job;
+
+	/* Set crypto operation */
+	job->chain_order = session->chain_order;
+
+	/* Set cipher parameters */
+	job->cipher_direction = session->cipher.direction;
+	job->cipher_mode = session->cipher.mode;
+
+	job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes;
+	job->aes_enc_key_expanded = session->cipher.expanded_aes_keys.encode;
+	job->aes_dec_key_expanded = session->cipher.expanded_aes_keys.decode;
+
+	/* Set authentication parameters */
+	job->hash_alg = session->auth.algo;
+	if (job->hash_alg == AES_XCBC) {
+		job->_k1_expanded = session->auth.xcbc.k1_expanded;
+		job->_k2 = session->auth.xcbc.k2;
+		job->_k3 = session->auth.xcbc.k3;
+	} else {
+		job->hashed_auth_key_xor_ipad = session->auth.pads.inner;
+		job->hashed_auth_key_xor_opad = session->auth.pads.outer;
+	}
+
+	/* Mutable crypto operation parameters */
+
+	/* Set digest output location */
+	if (job->cipher_direction == DECRYPT) {
+		job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m,
+				get_digest_byte_length(job->hash_alg));
+
+		if (job->auth_tag_output)
+			memset(job->auth_tag_output, 0,
+				get_digest_byte_length(job->hash_alg));
+		else
+			return NULL;
+	} else {
+		job->auth_tag_output = c_op->digest.data;
+	}
+
+	/*
+	 * The multi-buffer library currently only supports returning a
+	 * truncated digest length as specified in the relevant IPsec RFCs
+	 */
+	job->auth_tag_output_len_in_bytes =
+			get_truncated_digest_byte_length(job->hash_alg);
+
+	/* Set IV parameters */
+	job->iv = c_op->iv.data;
+	job->iv_len_in_bytes = c_op->iv.length;
+
+	/* Data parameters */
+	job->src = rte_pktmbuf_mtod(m, uint8_t *);
+	job->dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset;
+	job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length;
+
+	job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset;
+	job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length;
+
+	/* Set user data to be crypto operation data struct */
+	job->user_data = m;
+	job->user_data2 = c_op;
+
+	return job;
+}
+
+/**
+ * Process a completed job and return rte_mbuf which job processed
+ *
+ * @param job	JOB_AES_HMAC job to process
+ *
+ * @return
+ * - Returns processed mbuf which is trimmed of output digest used in
+ * verification of supplied digest in the case of a HASH_CIPHER operation
+ * - Returns NULL on invalid job
+ */
+static struct rte_mbuf *
+post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m;
+	struct rte_crypto_op *c_op;
+
+	if (job->user_data == NULL)
+		return NULL;
+
+	/* handle retrieved job */
+	m = (struct rte_mbuf *)job->user_data;
+	c_op = (struct rte_crypto_op *)job->user_data2;
+
+	/* set status as successful by default */
+	c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	/* check if job has been processed  */
+	if (unlikely(job->status != STS_COMPLETED)) {
+		c_op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return m;
+	} else if (job->chain_order == HASH_CIPHER) {
+		/* Verify digest if required */
+		if (memcmp(job->auth_tag_output, c_op->digest.data,
+				job->auth_tag_output_len_in_bytes) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg));
+	}
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	return m;
+}
+
+/**
+ * Process a completed JOB_AES_HMAC job and keep processing jobs until
+ * get_completed_job returns NULL
+ *
+ * @param qp		Queue Pair to process
+ * @param job		JOB_AES_HMAC job
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m = NULL;
+	unsigned processed_jobs = 0;
+
+	while (job) {
+		processed_jobs++;
+		m = post_process_mb_job(qp, job);
+		if (m)
+			rte_ring_enqueue(qp->processed_pkts, (void *)m);
+		else
+			qp->qp_stats.dequeue_err_count++;
+
+		job = (*qp->ops->job.get_completed_job)(&qp->mb_mgr);
+	}
+
+	return processed_jobs;
+}
+
+static uint16_t
+aesni_mb_pmd_enqueue_burst(void *queue_pair, struct rte_mbuf **bufs,
+		uint16_t nb_bufs)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct aesni_mb_session *sess;
+	struct aesni_mb_qp *qp = queue_pair;
+
+	JOB_AES_HMAC *job = NULL;
+
+	int i, processed_jobs = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+		ol = rte_pktmbuf_offload_get(bufs[i], RTE_PKTMBUF_OL_CRYPTO);
+		if (unlikely(ol == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		sess = get_session(qp, &ol->op.crypto);
+		if (unlikely(sess == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		job = process_crypto_op(qp, bufs[i], &ol->op.crypto, sess);
+		if (unlikely(job == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		/* Submit Job */
+		job = (*qp->ops->job.submit)(&qp->mb_mgr);
+
+		/*
+		 * If submit returns a processed job then handle it,
+		 * before submitting subsequent jobs
+		 */
+		if (job)
+			processed_jobs += handle_completed_jobs(qp, job);
+	}
+
+	if (processed_jobs == 0)
+		goto flush_jobs;
+
+	qp->qp_stats.enqueued_count += processed_jobs;
+	return i;
+
+flush_jobs:
+	/*
+	 * If we haven't processed any jobs in submit loop, then flush jobs
+	 * queue to stop the output stalling
+	 */
+	job = (*qp->ops->job.flush_job)(&qp->mb_mgr);
+	if (job)
+		qp->qp_stats.enqueued_count += handle_completed_jobs(qp, job);
+
+	return i;
+}
+
+static uint16_t
+aesni_mb_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
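/*
 * (Illustrative, not part of the patch.) From the application's side the
 * two handlers above are reached through the generic burst API, e.g.:
 *
 *     uint16_t n = rte_cryptodev_enqueue_burst(dev_id, 0, pkts, nb_pkts);
 *     ...
 *     n = rte_cryptodev_dequeue_burst(dev_id, 0, pkts, nb_pkts);
 *
 * as exercised by process_crypto_request() in the unit tests later in
 * this series.
 */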
+
+
+static int cryptodev_aesni_mb_uninit(const char *name);
+
+static int
+cryptodev_aesni_mb_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct aesni_mb_private *internals;
+	enum aesni_mb_vector_mode vector_mode;
+
+	/* Check CPU for support for AES instruction set */
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
+		MB_LOG_ERR("AES instructions not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_MB_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_MB_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_MB_SSE;
+	else {
+		MB_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		MB_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct aesni_mb_private), socket_id);
+	if (dev == NULL) {
+		MB_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	dev->dev_ops = rte_aesni_mb_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
+	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+
+	/* Set vector instructions mode supported */
+	internals = dev->data->dev_private;
+
+	internals->vector_mode = vector_mode;
+	internals->max_nb_queue_pairs = RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS;
+	internals->max_nb_sessions = RTE_AESNI_MB_PMD_MAX_NB_SESSIONS;
+
+	return dev->data->dev_id;
+init_error:
+	MB_LOG_ERR("driver %s: cryptodev_aesni_mb_create failed", name);
+
+	cryptodev_aesni_mb_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_aesni_mb_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_aesni_mb_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_aesni_mb_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_aesni_mb_pmd_drv = {
+	.name = CRYPTODEV_NAME_AESNI_MB_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_aesni_mb_init,
+	.uninit = cryptodev_aesni_mb_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv);
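
(Illustrative, not part of the patch.) The virtual device registered above
can be instantiated either from the EAL command line via the --vdev option
or programmatically, as the unit tests later in this series do:

  rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD, NULL);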
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
new file mode 100644
index 0000000..96d22f6
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -0,0 +1,298 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/** Configure device */
+static int
+aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+aesni_mb_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		dev_info->max_nb_sessions = internals->max_nb_sessions;
+	}
+}
+
+/** Release queue pair */
+static int
+aesni_mb_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair based on the device id and qp id */
+static int
+aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct aesni_mb_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"aesni_mb_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed packets on */
+static struct rte_ring *
+aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			MB_LOG_INFO("Reusing existing ring %s for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		MB_LOG_ERR("Unable to reuse existing ring %s for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct aesni_mb_qp *qp = NULL;
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		aesni_mb_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return -ENOMEM;
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (aesni_mb_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->ops = &job_ops[internals->vector_mode];
+
+	qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	/* Initialise multi-buffer manager */
+	(*qp->ops->job.init_mgr)(&qp->mb_mgr);
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+aesni_mb_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the aesni multi-buffer session structure */
+static unsigned
+aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct aesni_mb_session);
+}
+
+/** Configure an aesni multi-buffer session from a crypto xform chain */
+static void *
+aesni_mb_pmd_session_configure(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform,	void *sess)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (unlikely(sess == NULL)) {
+		MB_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+			sess, xform) != 0) {
+		MB_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+aesni_mb_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/*
+	 * Currently we just reset the whole data structure; it needs to be
+	 * investigated whether a more selective reset of the keys would be
+	 * more performant
+	 */
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
+}
+
+struct rte_cryptodev_ops aesni_mb_pmd_ops = {
+		.dev_configure		= aesni_mb_pmd_config,
+		.dev_start		= aesni_mb_pmd_start,
+		.dev_stop		= aesni_mb_pmd_stop,
+		.dev_close		= aesni_mb_pmd_close,
+
+		.stats_get		= aesni_mb_pmd_stats_get,
+		.stats_reset		= aesni_mb_pmd_stats_reset,
+
+		.dev_infos_get		= aesni_mb_pmd_info_get,
+
+		.queue_pair_setup	= aesni_mb_pmd_qp_setup,
+		.queue_pair_release	= aesni_mb_pmd_qp_release,
+		.queue_pair_start	= aesni_mb_pmd_qp_start,
+		.queue_pair_stop	= aesni_mb_pmd_qp_stop,
+		.queue_pair_count	= aesni_mb_pmd_qp_count,
+
+		.session_get_size	= aesni_mb_pmd_session_get_size,
+		.session_configure	= aesni_mb_pmd_session_configure,
+		.session_clear		= aesni_mb_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
new file mode 100644
index 0000000..2f98609
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -0,0 +1,229 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_
+#define _RTE_AESNI_MB_PMD_PRIVATE_H_
+
+#include "aesni_mb_ops.h"
+
+#define MB_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_AESNI_MB_DEBUG
+#define MB_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#define MB_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define MB_LOG_INFO(fmt, args...)
+#define MB_LOG_DBG(fmt, args...)
+#endif
+
+#define HMAC_IPAD_VALUE			(0x36)
+#define HMAC_OPAD_VALUE			(0x5C)
+
+static const unsigned auth_blocksize[] = {
+		[MD5]		= 64,
+		[SHA1]		= 64,
+		[SHA_224]	= 64,
+		[SHA_256]	= 64,
+		[SHA_384]	= 128,
+		[SHA_512]	= 128,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the blocksize in bytes for a specified authentication algorithm
+ *
+ * @note This function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_auth_algo_blocksize(JOB_HASH_ALG algo)
+{
+	return auth_blocksize[algo];
+}
+
+static const unsigned auth_truncated_digest_byte_lengths[] = {
+		[MD5]		= 12,
+		[SHA1]		= 12,
+		[SHA_224]	= 14,
+		[SHA_256]	= 16,
+		[SHA_384]	= 24,
+		[SHA_512]	= 32,
+		[AES_XCBC]	= 12,
+};
+
+/**
+ * Get the IPsec-specified truncated length in bytes of the HMAC digest for a
+ * given authentication algorithm
+ *
+ * @note This function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_truncated_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_truncated_digest_byte_lengths[algo];
+}
+
+static const unsigned auth_digest_byte_lengths[] = {
+		[MD5]		= 16,
+		[SHA1]		= 20,
+		[SHA_224]	= 28,
+		[SHA_256]	= 32,
+		[SHA_384]	= 48,
+		[SHA_512]	= 64,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the output digest size in bytes for a specified authentication algorithm
+ *
+ * @note This function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_digest_byte_lengths[algo];
+}
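/*
 * (Illustrative, not part of the patch.) Reading the three tables above
 * for HMAC-SHA1, for example:
 *
 *     get_auth_algo_blocksize(SHA1)           == 64 bytes
 *     get_digest_byte_length(SHA1)            == 20 bytes
 *     get_truncated_digest_byte_length(SHA1)  == 12 bytes (RFC 2404)
 */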
+
+
+/** private data structure for each virtual AESNI device */
+struct aesni_mb_private {
+	enum aesni_mb_vector_mode vector_mode;
+	/**< CPU vector instruction set mode */
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+/** AESNI Multi buffer queue pair */
+struct aesni_mb_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	const struct aesni_mb_ops *ops;
+	/**< Vector mode dependent pointer table of the multi-buffer APIs */
+	MB_MGR mb_mgr;
+	/**< Multi-buffer instance */
+	struct rte_ring *processed_pkts;
+	/**< Ring for placing processed packets */
+	struct rte_mempool *sess_mp;
+	/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;
+	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** AES-NI multi-buffer private session structure */
+struct aesni_mb_session {
+	JOB_CHAIN_ORDER chain_order;
+
+	/** Cipher Parameters */
+	struct {
+		/** Cipher direction - encrypt / decrypt */
+		JOB_CIPHER_DIRECTION direction;
+		/** Cipher mode - CBC / Counter */
+		JOB_CIPHER_MODE mode;
+
+		uint64_t key_length_in_bytes;
+
+		struct {
+			uint32_t encode[60] __rte_aligned(16);
+			/**< encode key */
+			uint32_t decode[60] __rte_aligned(16);
+			/**< decode key */
+		} expanded_aes_keys;
+		/**< Expanded AES keys - Allocating space to
+		 * contain the maximum expanded key size which
+		 * is 240 bytes for 256 bit AES, calculate by:
+		 * ((key size (bytes)) *
+		 * ((number of rounds) + 1))
+		 */
+	} cipher;
+
+	/** Authentication Parameters */
+	struct {
+		JOB_HASH_ALG algo; /**< Authentication Algorithm */
+		union {
+			struct {
+				uint8_t inner[128] __rte_aligned(16);
+				/**< inner pad */
+				uint8_t outer[128] __rte_aligned(16);
+				/**< outer pad */
+			} pads;
+			/**< HMAC Authentication pads -
+			 * allocating space for the maximum pad
+			 * size supported which is 128 bytes for
+			 * SHA512
+			 */
+
+			struct {
+			    uint32_t k1_expanded[44] __rte_aligned(16);
+			    /**< k1 (expanded key). */
+			    uint8_t k2[16] __rte_aligned(16);
+			    /**< k2. */
+			    uint8_t k3[16] __rte_aligned(16);
+			    /**< k3. */
+			} xcbc;
+			/**< Expanded XCBC authentication keys */
+		};
+	} auth;
+} __rte_cache_aligned;
+
+
+/** Parse a crypto xform chain and set the private session parameters */
+extern int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
+
+#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
new file mode 100644
index 0000000..ad607bb
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index cfcb064..4a660e6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -153,6 +153,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 
+# AESNI MULTI BUFFER is dependent on the IPSec_MB library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
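
(Reference sketch, not part of the patch.) The extra link flags above
assume AESNI_MULTI_BUFFER_LIB_PATH points at a built copy of the IPSec_MB
library; the path below is purely illustrative:

  export AESNI_MULTI_BUFFER_LIB_PATH=/path/to/ipsec_mb
  make config T=x86_64-native-linuxapp-gcc
  make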
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 09/10] app/test: add cryptodev unit and performance tests
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
                           ` (7 preceding siblings ...)
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 10/10] l2fwd-crypto: crypto Declan Doherty
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

Unit tests are run by using the cryptodev_qat_autotest or
cryptodev_aesni_autotest command from the test app's interactive console.

Performance tests are run by using the cryptodev_qat_perftest or
cryptodev_aesni_mb_perftest command from the test app's interactive
console.

If you wish to run the tests on a QAT device there must be one
bound to the igb_uio kernel driver.
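
For example, a console session might look like this (the RTE>> prompt is
the test app's interactive console; only the command names are defined by
this patch):

  RTE>>cryptodev_aesni_autotest
  RTE>>cryptodev_aesni_mb_perftest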

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                          |    2 +
 app/test/Makefile                    |    4 +
 app/test/test.c                      |   92 +-
 app/test/test.h                      |   34 +-
 app/test/test_cryptodev.c            | 1986 ++++++++++++++++++++++++++++++++
 app/test/test_cryptodev.h            |   68 ++
 app/test/test_cryptodev_perf.c       | 2062 ++++++++++++++++++++++++++++++++++
 app/test/test_link_bonding.c         |    6 +-
 app/test/test_link_bonding_mode4.c   |    7 +-
 app/test/test_link_bonding_rssconf.c |    7 +-
 10 files changed, 4219 insertions(+), 49 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 2d5808c..1f72f8c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -204,6 +204,8 @@ Crypto API
 M: Declan Doherty <declan.doherty@intel.com>
 F: lib/librte_cryptodev
 F: docs/guides/cryptodevs
+F: app/test/test_cryptodev.c
+F: app/test/test_cryptodev_perf.c
 
 Drivers
 -------
diff --git a/app/test/Makefile b/app/test/Makefile
index de63235..ec33e1a 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -149,6 +149,10 @@ endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 CFLAGS += -O3
diff --git a/app/test/test.c b/app/test/test.c
index b94199a..f35b304 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -159,51 +159,81 @@ main(int argc, char **argv)
 int
 unit_test_suite_runner(struct unit_test_suite *suite)
 {
-	int retval, i = 0;
+	int test_success;
+	unsigned total = 0, executed = 0, skipped = 0, succeeded = 0, failed = 0;
 
 	if (suite->suite_name)
-		printf("Test Suite : %s\n", suite->suite_name);
+		printf(" + ------------------------------------------------------- +\n"
+			" + Test Suite : %s\n", suite->suite_name);
 
 	if (suite->setup)
 		if (suite->setup() != 0)
-			return -1;
-
-	while (suite->unit_test_cases[i].testcase) {
-		/* Run test case setup */
-		if (suite->unit_test_cases[i].setup) {
-			retval = suite->unit_test_cases[i].setup();
-			if (retval != 0)
-				return retval;
-		}
+			goto suite_summary;
 
-		/* Run test case */
-		if (suite->unit_test_cases[i].testcase() == 0) {
-			printf("TestCase %2d: %s\n", i,
-					suite->unit_test_cases[i].success_msg ?
-					suite->unit_test_cases[i].success_msg :
-					"passed");
-		}
-		else {
-			printf("TestCase %2d: %s\n", i, suite->unit_test_cases[i].fail_msg ?
-					suite->unit_test_cases[i].fail_msg :
-					"failed");
-			return -1;
+	printf(" + ------------------------------------------------------- +\n");
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (!suite->unit_test_cases[total].enabled) {
+			skipped++;
+			total++;
+			continue;
+		} else {
+			executed++;
 		}
 
-		/* Run test case teardown */
-		if (suite->unit_test_cases[i].teardown) {
-			retval = suite->unit_test_cases[i].teardown();
-			if (retval != 0)
-				return retval;
+		/* run test case setup */
+		if (suite->unit_test_cases[total].setup)
+			test_success = suite->unit_test_cases[total].setup();
+		else
+			test_success = TEST_SUCCESS;
+
+		if (test_success == TEST_SUCCESS) {
+			/* run the test case */
+			test_success = suite->unit_test_cases[total].testcase();
+			if (test_success == TEST_SUCCESS)
+				succeeded++;
+			else
+				failed++;
+		} else {
+			failed++;
 		}
 
-		i++;
+		/* run the test case teardown */
+		if (suite->unit_test_cases[total].teardown)
+			suite->unit_test_cases[total].teardown();
+
+		if (test_success == TEST_SUCCESS)
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].success_msg ?
+					suite->unit_test_cases[total].success_msg :
+					"passed");
+		else
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].fail_msg ?
+					suite->unit_test_cases[total].fail_msg :
+					"failed");
+
+		total++;
 	}
 
 	/* Run test suite teardown */
 	if (suite->teardown)
-		if (suite->teardown() != 0)
-			return -1;
+		suite->teardown();
+
+suite_summary:
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary\n");
+	printf(" + Tests Total :       %2u\n", total);
+	printf(" + Tests Skipped :     %2u\n", skipped);
+	printf(" + Tests Executed :    %2u\n", executed);
+	printf(" + Tests Passed :      %2u\n", succeeded);
+	printf(" + Tests Failed :      %2u\n", failed);
+	printf(" + ------------------------------------------------------- +\n");
+
+	if (failed)
+		return -1;
 
 	return 0;
 }
diff --git a/app/test/test.h b/app/test/test.h
index 62eb51d..a2fba60 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -33,7 +33,7 @@
 
 #ifndef _TEST_H_
 #define _TEST_H_
-
+#include <stddef.h>
 #include <sys/queue.h>
 
 #define TEST_SUCCESS  (0)
@@ -64,6 +64,17 @@
 		}                                                        \
 } while (0)
 
+
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len, msg, ...) do {	\
+	if (memcmp(a, b, len)) {                                        \
+		printf("TestCase %s() line %d failed: "              \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__);    \
+		TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);    \
+		return TEST_FAILED;                                  \
+	}                                                        \
+} while (0)
+
+
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
 			printf("TestCase %s() line %d failed: "              \
@@ -113,27 +124,36 @@
 
 struct unit_test_case {
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	int (*testcase)(void);
 	const char *success_msg;
 	const char *fail_msg;
+	unsigned enabled;
 };
 
-#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed"}
+#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed", 1 }
 
 #define TEST_CASE_NAMED(name, fn) { NULL, NULL, fn, name " succeeded", \
-		name " failed"}
+		name " failed", 1 }
 
 #define TEST_CASE_ST(setup, teardown, testcase)         \
 		{ setup, teardown, testcase, #testcase " succeeded",    \
-		#testcase " failed "}
+		#testcase " failed ", 1 }
+
+
+#define TEST_CASE_DISABLED(fn) { NULL, NULL, fn, #fn " succeeded", \
+	#fn " failed", 0 }
+
+#define TEST_CASE_ST_DISABLED(setup, teardown, testcase)         \
+		{ setup, teardown, testcase, #testcase " succeeded",    \
+		#testcase " failed ", 0 }
 
-#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL }
+#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL, 0 }
 
 struct unit_test_suite {
 	const char *suite_name;
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	struct unit_test_case unit_test_cases[];
 };
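
As an aside, the new 'enabled' field and the TEST_CASE*_DISABLED macros
compose as in the following sketch (the suite and test function names here
are hypothetical):

  static struct unit_test_suite example_testsuite = {
  	.suite_name = "example test suite",
  	.setup = NULL,
  	.teardown = NULL,
  	.unit_test_cases = {
  		TEST_CASE(test_basic_case),
  		TEST_CASE_DISABLED(test_wip_case), /* counted as skipped */
  		TEST_CASES_END()
  	}
  };

unit_test_suite_runner() reports such disabled cases in the "Tests
Skipped" line of the suite summary without running their setup, body or
teardown.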
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
new file mode 100644
index 0000000..fd5b7ec
--- /dev/null
+++ b/app/test/test_cryptodev.c
@@ -0,0 +1,1986 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mbuf_offload.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+static enum rte_cryptodev_type gbl_cryptodev_type;
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_ol_pool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *op;
+
+	struct rte_mbuf *obuf, *ibuf;
+
+	uint8_t *digest;
+};
+
+/*
+ * Forward declarations.
+ */
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_param);
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst;
+
+		memset(m->buf_addr, 0, m->buf_len);
+		dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+
+	return m;
+}
+
+#if HEX_DUMP
+static void
+hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
+{
+	rte_hexdump(f, title, rte_pktmbuf_mtod(m, const void *), m->data_len);
+}
+#endif
+
+static struct rte_mbuf *
+process_crypto_request(uint8_t dev_id, struct rte_mbuf *ibuf)
+{
+	struct rte_mbuf *obuf = NULL;
+#if HEX_DUMP
+	hexdump_mbuf_data(stdout, "Enqueued Packet", ibuf);
+#endif
+
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &ibuf, 1) != 1) {
+		printf("Error sending packet for encryption\n");
+		return NULL;
+	}
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &obuf, 1) == 0)
+		rte_pause();
+
+#if HEX_DUMP
+	if (obuf)
+		hexdump_mbuf_data(stdout, "Dequeued Packet", obuf);
+#endif
+
+	return obuf;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, dev_id = 0;
+	uint16_t qp_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	if (ts_params->mbuf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+				"CRYPTO_MBUFPOOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+				rte_socket_id());
+		if (ts_params->mbuf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"MBUF_OFFLOAD_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS * sizeof(struct rte_crypto_xform),
+			rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(
+				RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of"
+					" pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found\n");
+		return TEST_FAILED;
+	}
+
+	/* Create list of valid crypto devs */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_type)
+			ts_params->valid_devs[ts_params->valid_dev_count++] = i;
+	}
+
+	if (ts_params->valid_dev_count < 1)
+		return TEST_FAILED;
+
+	/* Set up all the qps on the first of the valid devices found */
+	for (i = 0; i < 1; i++) {
+		dev_id = ts_params->valid_devs[i];
+
+		rte_cryptodev_info_get(dev_id, &info);
+
+		/*
+		 * Since we can't free and re-allocate queue memory always set
+		 * the queues on this device up to max size first so enough
+		 * memory is allocated for any later re-configures needed by
+		 * other tests
+		 */
+
+		ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+		ts_params->conf.socket_id = SOCKET_ID_ANY;
+		ts_params->conf.session_mp.nb_objs = info.max_nb_sessions;
+
+		TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+				&ts_params->conf),
+				"Failed to configure cryptodev %u with %u qps",
+				dev_id, ts_params->conf.nb_queue_pairs);
+
+		ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+		for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+					dev_id, qp_id, &ts_params->qp_conf,
+					rte_cryptodev_socket_id(dev_id)),
+					"Failed to setup queue pair %u on "
+					"cryptodev %u",
+					qp_id, dev_id);
+		}
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_pool));
+	}
+
+
+	if (ts_params->mbuf_ol_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_ol_pool));
+	}
+
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	uint16_t qp_id;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	/*
+	 * Now reconfigure queues to size we actually want to use in this
+	 * test suite.
+	 */
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs ; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0], qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+	}
+
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	/* free crypto session structure */
+	if (ut_params->sess) {
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	/*
+	 * free mbuf - both obuf and ibuf are usually the same,
+	 * but rte copes even if we call free twice
+	 */
+	if (ut_params->obuf) {
+		rte_pktmbuf_free(ut_params->obuf);
+		ut_params->obuf = 0;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = 0;
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+
+	rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+static int
+test_device_configure_invalid_dev_id(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint16_t dev_id, num_devs = 0;
+
+	TEST_ASSERT((num_devs = rte_cryptodev_count()) >= 1,
+			"Need at least %d devices for test", 1);
+
+	/* valid dev_id values */
+	dev_id = ts_params->valid_devs[ts_params->valid_dev_count - 1];
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	/* invalid dev_id values */
+	dev_id = num_devs;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	dev_id = 0xff;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure:"
+			"invalid dev_num %u", dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_device_configure_invalid_queue_pair_ids(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	/* valid - one queue pairs */
+	ts_params->conf.nb_queue_pairs = 1;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* valid - max value queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - zero queue pairs */
+	ts_params->conf.nb_queue_pairs = 0;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value supported by field queue pairs */
+	ts_params->conf.nb_queue_pairs = UINT16_MAX;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value + 1 queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_queue_pair_descriptor_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info dev_info;
+	struct rte_cryptodev_qp_conf qp_conf = {
+		.nb_descriptors = MAX_NUM_OPS_INFLIGHT
+	};
+
+	uint16_t qp_id;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+
+	ts_params->conf.session_mp.nb_objs = dev_info.max_nb_sessions;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf), "Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+
+	/*
+	 * Test various ring sizes on this device. memzones can't be
+	 * freed so are re-used if ring is released and re-created.
+	 */
+	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights "
+				"%u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = (uint32_t)(MAX_NUM_OPS_INFLIGHT / 2);
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT; /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max supported + 2 */
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT + 2;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup: "
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - near the max value of the field */
+	qp_conf.nb_descriptors = UINT32_MAX - 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup: "
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: "
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - default + 1, not a supported size */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT + 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup: "
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* test invalid queue pair id */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;	/* valid */
+
+	qp_id = DEFAULT_NUM_QPS_PER_QAT_DEVICE;		/* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup: "
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	qp_id = 0xffff; /* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup: "
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Plaintext data for tests ***** */
+
+const char catch_22_quote_1[] =
+		"There was only one catch and that was Catch-22, which "
+		"specified that a concern for one's safety in the face of "
+		"dangers that were real and immediate was the process of a "
+		"rational mind. Orr was crazy and could be grounded. All he "
+		"had to do was ask; and as soon as he did, he would no longer "
+		"be crazy and would have to fly more missions. Orr would be "
+		"crazy to fly more missions and sane if he didn't, but if he "
+		"was sane he had to fly them. If he flew them he was crazy "
+		"and didn't have to; but if he didn't want to he was sane and "
+		"had to. Yossarian was moved very deeply by the absolute "
+		"simplicity of this clause of Catch-22 and let out a "
+		"respectful whistle. \"That's some catch, that Catch-22\", he "
+		"observed. \"It's the best there is,\" Doc Daneeka agreed.";
+
+const char catch_22_quote[] =
+		"What a lousy earth! He wondered how many people were "
+		"destitute that same night even in his own prosperous country, "
+		"how many homes were shanties, how many husbands were drunk "
+		"and wives socked, and how many children were bullied, abused, "
+		"or abandoned. How many families hungered for food they could "
+		"not afford to buy? How many hearts were broken? How many "
+		"suicides would take place that same night, how many people "
+		"would go insane? How many cockroaches and landlords would "
+		"triumph? How many winners were losers, successes failures, "
+		"and rich men poor men? How many wise guys were stupid? How "
+		"many happy endings were unhappy endings? How many honest men "
+		"were liars, brave men cowards, loyal men traitors, how many "
+		"sainted men were corrupt, how many people in positions of "
+		"trust had sold their souls to bodyguards, how many had never "
+		"had souls? How many straight-and-narrow paths were crooked "
+		"paths? How many best families were worst families and how "
+		"many good people were bad people? When you added them all up "
+		"and then subtracted, you might be left with only the children, "
+		"and perhaps with Albert Einstein and an old violinist or "
+		"sculptor somewhere.";
+
+#define QUOTE_480_BYTES		(480)
+#define QUOTE_512_BYTES		(512)
+#define QUOTE_768_BYTES		(768)
+#define QUOTE_1024_BYTES	(1024)
+
+
+
+/* ***** SHA1 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA1	(DIGEST_BYTE_LENGTH_SHA1)
+
+static uint8_t hmac_sha1_key[] = {
+	0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+	0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+	0xDE, 0xF4, 0xDE, 0xAD };
+
+/* ***** SHA224 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA224	(DIGEST_BYTE_LENGTH_SHA224)
+
+
+/* ***** AES-CBC Cipher Tests ***** */
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+	0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+	0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A };
+
+static uint8_t aes_cbc_iv[] = {
+	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
+
+
+/* ***** AES-CBC / HMAC-SHA1 Hash Tests ***** */
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
+	0x8B, 0x4D, 0xDA, 0x1B, 0xCF, 0x04, 0xA0, 0x31,
+	0xB4, 0xBF, 0xBD, 0x68, 0x43, 0x20, 0x7E, 0x76,
+	0xB1, 0x96, 0x8B, 0xA2, 0x7C, 0xA2, 0x83, 0x9E,
+	0x39, 0x5A, 0x2F, 0x7E, 0x92, 0xB4, 0x48, 0x1A,
+	0x3F, 0x6B, 0x5D, 0xDF, 0x52, 0x85, 0x5F, 0x8E,
+	0x42, 0x3C, 0xFB, 0xE9, 0x1A, 0x24, 0xD6, 0x08,
+	0xDD, 0xFD, 0x16, 0xFB, 0xE9, 0x55, 0xEF, 0xF0,
+	0xA0, 0x8D, 0x13, 0xAB, 0x81, 0xC6, 0x90, 0x01,
+	0xB5, 0x18, 0x84, 0xB3, 0xF6, 0xE6, 0x11, 0x57,
+	0xD6, 0x71, 0xC6, 0x3C, 0x3F, 0x2F, 0x33, 0xEE,
+	0x24, 0x42, 0x6E, 0xAC, 0x0B, 0xCA, 0xEC, 0xF9,
+	0x84, 0xF8, 0x22, 0xAA, 0x60, 0xF0, 0x32, 0xA9,
+	0x75, 0x75, 0x3B, 0xCB, 0x70, 0x21, 0x0A, 0x8D,
+	0x0F, 0xE0, 0xC4, 0x78, 0x2B, 0xF8, 0x97, 0xE3,
+	0xE4, 0x26, 0x4B, 0x29, 0xDA, 0x88, 0xCD, 0x46,
+	0xEC, 0xAA, 0xF9, 0x7F, 0xF1, 0x15, 0xEA, 0xC3,
+	0x87, 0xE6, 0x31, 0xF2, 0xCF, 0xDE, 0x4D, 0x80,
+	0x70, 0x91, 0x7E, 0x0C, 0xF7, 0x26, 0x3A, 0x92,
+	0x4F, 0x18, 0x83, 0xC0, 0x8F, 0x59, 0x01, 0xA5,
+	0x88, 0xD1, 0xDB, 0x26, 0x71, 0x27, 0x16, 0xF5,
+	0xEE, 0x10, 0x82, 0xAC, 0x68, 0x26, 0x9B, 0xE2,
+	0x6D, 0xD8, 0x9A, 0x80, 0xDF, 0x04, 0x31, 0xD5,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x73, 0x02, 0x42, 0xC9, 0x23, 0x18, 0x8E, 0xB4,
+	0x6F, 0xB4, 0xA3, 0x54, 0x6E, 0x88, 0x3B, 0x62,
+	0x7C, 0x02, 0x8D, 0x4C, 0x9F, 0xC8, 0x45, 0xF4,
+	0xC9, 0xDE, 0x4F, 0xEB, 0x22, 0x83, 0x1B, 0xE4,
+	0x49, 0x37, 0xE4, 0xAD, 0xE7, 0xCD, 0x21, 0x54,
+	0xBC, 0x1C, 0xC2, 0x04, 0x97, 0xB4, 0x10, 0x61,
+	0xF0, 0xE4, 0xEF, 0x27, 0x63, 0x3A, 0xDA, 0x91,
+	0x41, 0x25, 0x62, 0x1C, 0x5C, 0xB6, 0x38, 0x4A,
+	0x88, 0x71, 0x59, 0x5A, 0x8D, 0xA0, 0x09, 0xAF,
+	0x72, 0x94, 0xD7, 0x79, 0x5C, 0x60, 0x7C, 0x8F,
+	0x4C, 0xF5, 0xD9, 0xA1, 0x39, 0x6D, 0x81, 0x28,
+	0xEF, 0x13, 0x28, 0xDF, 0xF5, 0x3E, 0xF7, 0x8E,
+	0x09, 0x9C, 0x78, 0x18, 0x79, 0xB8, 0x68, 0xD7,
+	0xA8, 0x29, 0x62, 0xAD, 0xDE, 0xE1, 0x61, 0x76,
+	0x1B, 0x05, 0x16, 0xCD, 0xBF, 0x02, 0x8E, 0xA6,
+	0x43, 0x6E, 0x92, 0x55, 0x4F, 0x60, 0x9C, 0x03,
+	0xB8, 0x4F, 0xA3, 0x02, 0xAC, 0xA8, 0xA7, 0x0C,
+	0x1E, 0xB5, 0x6B, 0xF8, 0xC8, 0x4D, 0xDE, 0xD2,
+	0xB0, 0x29, 0x6E, 0x40, 0xE6, 0xD6, 0xC9, 0xE6,
+	0xB9, 0x0F, 0xB6, 0x63, 0xF5, 0xAA, 0x2B, 0x96,
+	0xA7, 0x16, 0xAC, 0x4E, 0x0A, 0x33, 0x1C, 0xA6,
+	0xE6, 0xBD, 0x8A, 0xCF, 0x40, 0xA9, 0xB2, 0xFA,
+	0x63, 0x27, 0xFD, 0x9B, 0xD9, 0xFC, 0xD5, 0x87,
+	0x8D, 0x4C, 0xB6, 0xA4, 0xCB, 0xE7, 0x74, 0x55,
+	0xF4, 0xFB, 0x41, 0x25, 0xB5, 0x4B, 0x0A, 0x1B,
+	0xB1, 0xD6, 0xB7, 0xD9, 0x47, 0x2A, 0xC3, 0x98,
+	0x6A, 0xC4, 0x03, 0x73, 0x1F, 0x93, 0x6E, 0x53,
+	0x19, 0x25, 0x64, 0x15, 0x83, 0xF9, 0x73, 0x2A,
+	0x74, 0xB4, 0x93, 0x69, 0xC4, 0x72, 0xFC, 0x26,
+	0xA2, 0x9F, 0x43, 0x45, 0xDD, 0xB9, 0xEF, 0x36,
+	0xC8, 0x3A, 0xCD, 0x99, 0x9B, 0x54, 0x1A, 0x36,
+	0xC1, 0x59, 0xF8, 0x98, 0xA8, 0xCC, 0x28, 0x0D,
+	0x73, 0x4C, 0xEE, 0x98, 0xCB, 0x7C, 0x58, 0x7E,
+	0x20, 0x75, 0x1E, 0xB7, 0xC9, 0xF8, 0xF2, 0x0E,
+	0x63, 0x9E, 0x05, 0x78, 0x1A, 0xB6, 0xA8, 0x7A,
+	0xF9, 0x98, 0x6A, 0xA6, 0x46, 0x84, 0x2E, 0xF6,
+	0x4B, 0xDC, 0x9B, 0x8F, 0x9B, 0x8F, 0xEE, 0xB4,
+	0xAA, 0x3F, 0xEE, 0xC0, 0x37, 0x27, 0x76, 0xC7,
+	0x95, 0xBB, 0x26, 0x74, 0x69, 0x12, 0x7F, 0xF1,
+	0xBB, 0xFF, 0xAE, 0xB5, 0x99, 0x6E, 0xCB, 0x0C
+};
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60,
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0x18, 0x8c, 0x1d, 0x32 };
+
+
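+/*
+ * Encrypt 512 bytes of the quote with AES-128-CBC and generate an
+ * HMAC-SHA1 digest over the same region in one chained session-based
+ * operation, then compare both outputs to the reference vectors above.
+ */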
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
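+/*
+ * Same cipher/auth chain as above, but session-less: the xform chain is
+ * attached directly to the crypto op rather than to a pre-created session.
+ */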
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	TEST_ASSERT_NOT_NULL(rte_pktmbuf_offload_alloc_crypto_xforms(
+			ut_params->ol, 2),
+			"failed to allocate space for crypto transforms");
+
+	/* Set crypto operation data parameters */
+	ut_params->op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
+
+	/* cipher parameters */
+	ut_params->op->xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->op->xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->op->xform->cipher.key.data = aes_cbc_key;
+	ut_params->op->xform->cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* hash parameters */
+	ut_params->op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+
+	ut_params->op->xform->next->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->op->xform->next->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->op->xform->next->auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->op->xform->next->auth.key.data = hmac_sha1_key;
+	ut_params->op->xform->next->auth.digest_length =
+			DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
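+/*
+ * Reverse direction: decrypt the reference ciphertext and verify the
+ * appended HMAC-SHA1 digest; the op status reports the verification
+ * result.
+ */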
+static int
+test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			DIGEST_BYTE_LENGTH_SHA1);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** AES-CBC / HMAC-SHA256 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+static uint8_t hmac_sha256_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest[] = {
+	0xc8, 0x57, 0x57, 0x31, 0x03, 0xe0, 0x03, 0x55,
+	0x07, 0xc8, 0x9e, 0x7f, 0x48, 0x9a, 0x61, 0x9a,
+	0x68, 0xee, 0x03, 0x0e, 0x71, 0x75, 0xc7, 0xf4,
+	0x2e, 0x45, 0x26, 0x32, 0x7c, 0x12, 0x15, 0x15 };
+
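+/* Same chained encrypt + digest flow as the SHA1 case, using HMAC-SHA256 */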
+static int
+test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA256 :
+					DIGEST_BYTE_LENGTH_SHA256,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-SHA512 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA512  (DIGEST_BYTE_LENGTH_SHA512)
+
+static uint8_t hmac_sha512_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x65, 0x1C, 0x42, 0x50, 0x76,
+	0x9a, 0xaf, 0x88, 0x1b, 0xb6, 0x8f, 0xf8, 0x60,
+	0xa2, 0x5a, 0x7f, 0x3f, 0xf4, 0x72, 0x70, 0xf1,
+	0xF5, 0x35, 0x4C, 0x3B, 0xDD, 0x90, 0x65, 0xB0,
+	0x47, 0x3a, 0x75, 0x61, 0x5C, 0xa2, 0x10, 0x76,
+	0x9a, 0xaf, 0x77, 0x5b, 0xb6, 0x7f, 0xf7, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest[] = {
+	0x5D, 0x54, 0x66, 0xC1, 0x6E, 0xBC, 0x04, 0xB8,
+	0x46, 0xB8, 0x08, 0x6E, 0xE0, 0xF0, 0x43, 0x48,
+	0x37, 0x96, 0x9C, 0xC6, 0x9C, 0xC2, 0x1E, 0xE8,
+	0xF2, 0x0C, 0x0B, 0xEF, 0x86, 0xA2, 0xE3, 0x70,
+	0x95, 0xC8, 0xB3, 0x06, 0x47, 0xA9, 0x90, 0xE8,
+	0xA0, 0xC6, 0x72, 0x69, 0x05, 0xC0, 0x0D, 0x0E,
+	0x21, 0x96, 0x65, 0x93, 0x74, 0x43, 0x2A, 0x1D,
+	0x2E, 0xBF, 0xC2, 0xC2, 0xEE, 0xCC, 0x2F, 0x0A };
+
+static int
+test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 :
+					DIGEST_BYTE_LENGTH_SHA512,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+
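+/*
+ * The SHA512 decrypt test is split into session-parameter setup and a
+ * perform step so that test_multi_session() and test_not_in_place_crypto()
+ * can re-use the same transforms and processing path with their own
+ * sessions.
+ */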
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT(test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+			ut_params) == TEST_SUCCESS,
+			"Failed to create session params");
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
+			ut_params, ts_params);
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params)
+{
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params)
+{
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-AES_XCBC Chain Tests ***** */
+
+static uint8_t aes_cbc_hmac_aes_xcbc_key[] = {
+	0x87, 0x61, 0x54, 0x53, 0xC4, 0x6D, 0xDD, 0x51,
+	0xE1, 0x9F, 0x86, 0x64, 0x39, 0x0A, 0xE6, 0x59
+	};
+
+static const uint8_t catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest[] = {
+	0xE0, 0xAC, 0x9A, 0xC4, 0x22, 0x64, 0x35, 0x89,
+	0x77, 0x1D, 0x8B, 0x75
+	};
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+		QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** Device Statistics Tests ***** */
+
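+/*
+ * Exercise the stats API error paths (invalid dev_id, NULL stats pointer,
+ * missing stats_get dev_op), then check the counts reported after a single
+ * enqueue/dequeue and after a reset.
+ */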
+static int
+test_stats(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_stats stats;
+	struct rte_cryptodev *dev;
+	cryptodev_stats_get_t temp_pfn;
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0] + 600,
+			&stats) == -ENODEV),
+		"rte_cryptodev_stats_get invalid dev failed");
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			NULL) != 0),
+		"rte_cryptodev_stats_get with NULL stats pointer failed");
+	dev = &rte_cryptodevs[ts_params->valid_devs[0]];
+	temp_pfn = dev->dev_ops->stats_get;
+	dev->dev_ops->stats_get = NULL;
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats)
+			== -ENOTSUP),
+		"rte_cryptodev_stats_get with stats_get op unset failed");
+	dev->dev_ops->stats_get = temp_pfn;
+
+	/* Test expected values */
+	ut_setup();
+	test_AES_CBC_HMAC_SHA1_encrypt_digest();
+	ut_teardown();
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+	TEST_ASSERT((stats.enqueue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueue error stat");
+	TEST_ASSERT((stats.dequeue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeue error stat");
+
+	/* invalid device id: the call should be ignored and stats not reset */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0] + 300);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	/* check that a valid reset clears stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+
+	return TEST_SUCCESS;
+}
+
+
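+/*
+ * Create as many sessions as the device reports it can hold, run a
+ * decrypt on each of them, and check that one further create attempt
+ * fails cleanly.
+ */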
+static int
+test_multi_session(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	struct rte_cryptodev_info dev_info;
+	struct rte_cryptodev_session **sessions;
+
+	uint16_t i;
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+
+	sessions = rte_malloc(NULL, sizeof(struct rte_cryptodev_session *) *
+			(dev_info.max_nb_sessions + 1), 0);
+	TEST_ASSERT_NOT_NULL(sessions, "Failed to allocate session array");
+
+	/* Create multiple crypto sessions*/
+	for (i = 0; i < dev_info.max_nb_sessions; i++) {
+		sessions[i] = rte_cryptodev_session_create(
+				ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+		TEST_ASSERT_NOT_NULL(sessions[i],
+				"Session creation failed at session number %u",
+				i);
+
+		/* Attempt to send a request on each session */
+		TEST_ASSERT_SUCCESS(test_AES_CBC_HMAC_SHA512_decrypt_perform(
+				sessions[i], ut_params, ts_params),
+				"Failed to perform decrypt on request "
+				"number %u.", i);
+	}
+
+	/* Next session create should fail */
+	sessions[i] = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NULL(sessions[i],
+			"Session creation succeeded unexpectedly!");
+
+	for (i = 0; i < dev_info.max_nb_sessions; i++)
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				sessions[i]);
+
+	rte_free(sessions);
+
+	return TEST_SUCCESS;
+}
+
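+/*
+ * Out-of-place operation: the source data stays in ibuf while the
+ * plaintext result is written to a separately allocated destination
+ * mbuf referenced by op->dst.m.
+ */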
+static int
+test_not_in_place_crypto(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_mbuf *dst_m = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	TEST_ASSERT_NOT_NULL(dst_m, "Failed to allocate destination mbuf");
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create crypto session */
+
+	ut_params->sess = rte_cryptodev_session_create(
+			ts_params->valid_devs[0], &ut_params->auth_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	ut_params->op->dst.m = dst_m;
+	ut_params->op->dst.offset = 0;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate the out-of-place destination buffer */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->op->dst.m, char *),
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+
+static struct unit_test_suite cryptodev_qat_testsuite = {
+	.suite_name = "Crypto QAT Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_queue_pair_ids),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_queue_pair_descriptor_setup),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_stats),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite cryptodev_aesni_mb_testsuite = {
+	.suite_name = "Crypto Device AESNI MB Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_not_in_place_crypto),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_QAT_PMD;
+	return unit_test_suite_runner(&cryptodev_qat_testsuite);
+}
+
+static struct test_command cryptodev_qat_cmd = {
+	.command = "cryptodev_qat_autotest",
+	.callback = test_cryptodev_qat,
+};
+
+static int
+test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_aesni_mb_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_cmd = {
+	.command = "cryptodev_aesni_mb_autotest",
+	.callback = test_cryptodev_aesni_mb,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
new file mode 100644
index 0000000..034393e
--- /dev/null
+++ b/app/test/test_cryptodev.h
@@ -0,0 +1,68 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef TEST_CRYPTODEV_H_
+#define TEST_CRYPTODEV_H_
+
+#define HEX_DUMP 0
+
+#define FALSE                           0
+#define TRUE                            1
+
+#define MAX_NUM_OPS_INFLIGHT            (4096)
+#define MIN_NUM_OPS_INFLIGHT            (128)
+#define DEFAULT_NUM_OPS_INFLIGHT        (128)
+
+#define MAX_NUM_QPS_PER_QAT_DEVICE      (2)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE  (2)
+#define DEFAULT_BURST_SIZE              (64)
+#define DEFAULT_NUM_XFORMS              (2)
+#define NUM_MBUFS                       (8191)
+#define MBUF_CACHE_SIZE                 (250)
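+/*
+ * Each mbuf must be large enough to hold a 2KB payload plus the largest
+ * digest produced in the tests (SHA512), the rte_mbuf structure itself
+ * and the standard packet headroom.
+ */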
+#define MBUF_SIZE   (2048 + DIGEST_BYTE_LENGTH_SHA512 + \
+				sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+
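+/* Convert a length given in bits into a length in bytes */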
+#define BYTE_LENGTH(x)				((x) / 8)
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_MD5			(BYTE_LENGTH(128))
+#define DIGEST_BYTE_LENGTH_SHA1			(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA224		(BYTE_LENGTH(224))
+#define DIGEST_BYTE_LENGTH_SHA256		(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA384		(BYTE_LENGTH(384))
+#define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
+#define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
+#define AES_XCBC_MAC_KEY_SZ			(16)
+
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512		(32)
+
+#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
new file mode 100644
index 0000000..f0cca8b
--- /dev/null
+++ b/app/test/test_cryptodev_perf.c
@@ -0,0 +1,2070 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_hexdump.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+
+#define PERF_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_REQS_TO_SUBMIT	(10000000)
+
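+/* Parameters shared across the whole performance test suite */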
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_mp;
+	struct rte_mempool *mbuf_ol_pool;
+
+	uint16_t nb_queue_pairs;
+
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+	uint8_t dev_id;
+};
+
+
+#define MAX_NUM_OF_OPS_PER_UT	(128)
+
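+/* Per test case state, cleared by ut_setup() before each test */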
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op *op;
+	struct rte_mbuf_offload *ol;
+
+	struct rte_mbuf *obuf[MAX_NUM_OF_OPS_PER_UT];
+	struct rte_mbuf *ibuf[MAX_NUM_OF_OPS_PER_UT];
+
+	uint8_t *digest;
+};
+
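+/*
+ * Allocate an mbuf and fill it with the given string, truncated to a
+ * whole number of blocks when a non-zero block size is given.
+ */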
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+	return m;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static enum rte_cryptodev_type gbl_cryptodev_preftest_devtype;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, valid_dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_mp = rte_mempool_lookup("CRYPTO_PERF_MBUFPOOL");
+	if (ts_params->mbuf_mp == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_mp = rte_mempool_create("CRYPTO_PERF_MBUFPOOL", NUM_MBUFS,
+			MBUF_SIZE, MBUF_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
+			rte_socket_id(), 0);
+		if (ts_params->mbuf_mp == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create("CRYPTO_OP_POOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE,
+				DEFAULT_NUM_XFORMS *
+				sizeof(struct rte_crypto_xform),
+				rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of PMD: %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found\n");
+		return TEST_FAILED;
+	}
+
+	/* Search for the first device of the requested type */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_preftest_devtype) {
+			ts_params->dev_id = i;
+			valid_dev_id = 1;
+			break;
+		}
+	}
+
+	if (!valid_dev_id)
+		return TEST_FAILED;
+
+	/*
+	 * Since queue memory cannot be freed and re-allocated, first set
+	 * the queues on this device up to the maximum size so that enough
+	 * memory is allocated for any later re-configuration needed by
+	 * other tests.
+	 */
+
+	rte_cryptodev_info_get(ts_params->dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = info.max_nb_sessions;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->dev_id);
+
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	/* Now reconfigure the queues to the size this testsuite actually uses. */
+	ts_params->qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_mp));
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	rte_cryptodev_stats_reset(ts_params->dev_id);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->dev_id),
+			"Failed to start cryptodev %u",
+			ts_params->dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	unsigned i;
+
+	/* free crypto session structure */
+	if (ut_params->sess)
+		rte_cryptodev_session_free(ts_params->dev_id,
+				ut_params->sess);
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	for (i = 0; i < MAX_NUM_OF_OPS_PER_UT; i++) {
+		if (ut_params->obuf[i])
+			rte_pktmbuf_free(ut_params->obuf[i]);
+		else if (ut_params->ibuf[i])
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+	}
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+
+	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->dev_id);
+}
+
+const char plaintext_quote[] =
+		"THE COUNT OF MONTE CRISTO by Alexandre Dumas, Pere Chapter 1. "
+		"Marseilles--The Arrival. On the 24th of February, 1815, the "
+		"look-out at Notre-Dame de la Garde signalled the three-master,"
+		" the Pharaon from Smyrna, Trieste, and Naples. As usual, a "
+		"pilot put off immediately, and rounding the Chateau d'If, got "
+		"on board the vessel between Cape Morgion and Rion island. "
+		"Immediately, and according to custom, the ramparts of Fort "
+		"Saint-Jean were covered with spectators; it is always an event "
+		"at Marseilles for a ship to come into port, especially when "
+		"this ship, like the Pharaon, has been built, rigged, and laden"
+		" at the old Phocee docks, and belongs to an owner of the city."
+		" The ship drew on and had safely passed the strait, which some"
+		" volcanic shock has made between the Calasareigne and Jaros "
+		"islands; had doubled Pomegue, and approached the harbor under"
+		" topsails, jib, and spanker, but so slowly and sedately that"
+		" the idlers, with that instinct which is the forerunner of "
+		"evil, asked one another what misfortune could have happened "
+		"on board. However, those experienced in navigation saw plainly"
+		" that if any accident had occurred, it was not to the vessel "
+		"herself, for she bore down with all the evidence of being "
+		"skilfully handled, the anchor a-cockbill, the jib-boom guys "
+		"already eased off, and standing by the side of the pilot, who"
+		" was steering the Pharaon towards the narrow entrance of the"
+		" inner port, was a young man, who, with activity and vigilant"
+		" eye, watched every motion of the ship, and repeated each "
+		"direction of the pilot. The vague disquietude which prevailed "
+		"among the spectators had so much affected one of the crowd "
+		"that he did not await the arrival of the vessel in harbor, but"
+		" jumping into a small skiff, desired to be pulled alongside "
+		"the Pharaon, which he reached as she rounded into La Reserve "
+		"basin. When the young man on board saw this person approach, "
+		"he left his station by the pilot, and, hat in hand, leaned "
+		"over the ship's bulwarks. He was a fine, tall, slim young "
+		"fellow of eighteen or twenty, with black eyes, and hair as "
+		"dark as a raven's wing; and his whole appearance bespoke that "
+		"calmness and resolution peculiar to men accustomed from their "
+		"cradle to contend with danger. \"Ah, is it you, Dantes?\" "
+		"cried the man in the skiff. \"What's the matter? and why have "
+		"you such an air of sadness aboard?\" \"A great misfortune, M. "
+		"Morrel,\" replied the young man,--\"a great misfortune, for me"
+		" especially! Off Civita Vecchia we lost our brave Captain "
+		"Leclere.\" \"And the cargo?\" inquired the owner, eagerly. "
+		"\"Is all safe, M. Morrel; and I think you will be satisfied on"
+		" that head. But poor Captain Leclere--\" \"What happened to "
+		"him?\" asked the owner, with an air of considerable "
+		"resignation. \"What happened to the worthy captain?\" \"He "
+		"died.\" \"Fell into the sea?\" \"No, sir, he died of "
+		"brain-fever in dreadful agony.\" Then turning to the crew, "
+		"he said, \"Bear a hand there, to take in sail!\" All hands "
+		"obeyed, and at once the eight or ten seamen who composed the "
+		"crew, sprang to their respective stations at the spanker "
+		"brails and outhaul, topsail sheets and halyards, the jib "
+		"downhaul, and the topsail clewlines and buntlines. The young "
+		"sailor gave a look to see that his orders were promptly and "
+		"accurately obeyed, and then turned again to the owner. \"And "
+		"how did this misfortune occur?\" inquired the latter, resuming"
+		" the interrupted conversation. \"Alas, sir, in the most "
+		"unexpected manner. After a long talk with the harbor-master, "
+		"Captain Leclere left Naples greatly disturbed in mind. In "
+		"twenty-four hours he was attacked by a fever, and died three "
+		"days afterwards. We performed the usual burial service, and he"
+		" is at his rest, sewn up in his hammock with a thirty-six "
+		"pound shot at his head and his heels, off El Giglio island. "
+		"We bring to his widow his sword and cross of honor. It was "
+		"worth while, truly,\" added the young man with a melancholy "
+		"smile, \"to make war against the English for ten years, and "
+		"to die in his bed at last, like everybody else.";
+
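+
+/* Buffer sizes (in bytes) of the quote used as perf test payloads */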
+#define QUOTE_LEN_64B		(64)
+#define QUOTE_LEN_128B		(128)
+#define QUOTE_LEN_256B		(256)
+#define QUOTE_LEN_512B		(512)
+#define QUOTE_LEN_768B		(768)
+#define QUOTE_LEN_1024B		(1024)
+#define QUOTE_LEN_1280B		(1280)
+#define QUOTE_LEN_1536B		(1536)
+#define QUOTE_LEN_1792B		(1792)
+#define QUOTE_LEN_2048B		(2048)
+
+
+/* ***** AES-CBC / HMAC-SHA256 Performance Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+
+static uint8_t aes_cbc_key[] = {
+		0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+		0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA };
+
+static uint8_t aes_cbc_iv[] = {
+		0xf5, 0xd3, 0x89, 0x0f, 0x47, 0x00, 0xcb, 0x52,
+		0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1 };
+
+static uint8_t hmac_sha256_key[] = {
+		0xff, 0xcb, 0x37, 0x30, 0x1d, 0x4a, 0xc2, 0x41,
+		0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A,
+		0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+		0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+
+/* Reference AES-CBC ciphertext outputs for each payload size */
+
+static const uint8_t AES_CBC_ciphertext_64B[] = {
+		0x05, 0x15, 0x77, 0x32, 0xc9, 0x66, 0x91, 0x50,
+		0x93, 0x9f, 0xbb, 0x4e, 0x2e, 0x5a, 0x02, 0xd0,
+		0x2d, 0x9d, 0x31, 0x5d, 0xc8, 0x9e, 0x86, 0x36,
+		0x54, 0x5c, 0x50, 0xe8, 0x75, 0x54, 0x74, 0x5e,
+		0xd5, 0xa2, 0x84, 0x21, 0x2d, 0xc5, 0xf8, 0x1c,
+		0x55, 0x1a, 0xba, 0x91, 0xce, 0xb5, 0xa3, 0x1e,
+		0x31, 0xbf, 0xe9, 0xa1, 0x97, 0x5c, 0x2b, 0xd6,
+		0x57, 0xa5, 0x9f, 0xab, 0xbd, 0xb0, 0x9b, 0x9c
+};
+
+static const uint8_t AES_CBC_ciphertext_128B[] = {
+		0x79, 0x92, 0x65, 0xc8, 0xfb, 0x0a, 0xc7, 0xc4,
+		0x9b, 0x3b, 0xbe, 0x69, 0x7f, 0x7c, 0xf4, 0x4e,
+		0xa5, 0x0d, 0xf6, 0x33, 0xc4, 0xdf, 0xf3, 0x0d,
+		0xdb, 0xb9, 0x68, 0x34, 0xb0, 0x0d, 0xbd, 0xb9,
+		0xa7, 0xf3, 0x86, 0x50, 0x2a, 0xbe, 0x50, 0x5d,
+		0xb3, 0xbe, 0x72, 0xf9, 0x02, 0xb1, 0x69, 0x0b,
+		0x8c, 0x96, 0x4c, 0x3c, 0x0c, 0x1e, 0x76, 0xe5,
+		0x7e, 0x75, 0xdd, 0xd0, 0xa9, 0x75, 0x00, 0x13,
+		0x6b, 0x1e, 0xc0, 0xad, 0xfc, 0x03, 0xb5, 0x99,
+		0xdc, 0x37, 0x35, 0xfc, 0x16, 0x34, 0xfd, 0xb4,
+		0xea, 0x1e, 0xb6, 0x51, 0xdf, 0xab, 0x87, 0xd6,
+		0x87, 0x41, 0xfa, 0x1c, 0xc6, 0x78, 0xa6, 0x3c,
+		0x1d, 0x76, 0xfe, 0xff, 0x65, 0xfc, 0x63, 0x1e,
+		0x1f, 0xe2, 0x7c, 0x9b, 0xa2, 0x72, 0xc3, 0x34,
+		0x23, 0xdf, 0x01, 0xf0, 0xfd, 0x02, 0x8b, 0x97,
+		0x00, 0x2b, 0x97, 0x4e, 0xab, 0x98, 0x21, 0x3c
+};
+
+static const uint8_t AES_CBC_ciphertext_256B[] = {
+		0xc7, 0x71, 0x2b, 0xed, 0x2c, 0x97, 0x59, 0xfa,
+		0xcf, 0x5a, 0xb9, 0x31, 0x92, 0xe0, 0xc9, 0x92,
+		0xc0, 0x2d, 0xd5, 0x9c, 0x84, 0xbf, 0x70, 0x36,
+		0x13, 0x48, 0xe0, 0xb1, 0xbf, 0x6c, 0xcd, 0x91,
+		0xa0, 0xc3, 0x57, 0x6c, 0x3f, 0x0e, 0x34, 0x41,
+		0xe7, 0x9c, 0xc0, 0xec, 0x18, 0x0c, 0x05, 0x52,
+		0x78, 0xe2, 0x3c, 0x6e, 0xdf, 0xa5, 0x49, 0xc7,
+		0xf2, 0x55, 0x00, 0x8f, 0x65, 0x6d, 0x4b, 0xd0,
+		0xcb, 0xd4, 0xd2, 0x0b, 0xea, 0xf4, 0xb0, 0x85,
+		0x61, 0x9e, 0x36, 0xc0, 0x71, 0xb7, 0x80, 0xad,
+		0x40, 0x78, 0xb4, 0x70, 0x2b, 0xe8, 0x80, 0xc5,
+		0x19, 0x35, 0x96, 0x55, 0x3b, 0x40, 0x03, 0xbb,
+		0x9f, 0xa6, 0xc2, 0x82, 0x92, 0x04, 0xc3, 0xa6,
+		0x96, 0xc4, 0x7f, 0x4c, 0x3e, 0x3c, 0x79, 0x82,
+		0x88, 0x8b, 0x3f, 0x8b, 0xc5, 0x9f, 0x44, 0xbe,
+		0x71, 0xe7, 0x09, 0xa2, 0x40, 0xa2, 0x23, 0x4e,
+		0x9f, 0x31, 0xab, 0x6f, 0xdf, 0x59, 0x40, 0xe1,
+		0x12, 0x15, 0x55, 0x4b, 0xea, 0x3f, 0xa1, 0x41,
+		0x4f, 0xaf, 0xcd, 0x27, 0x2a, 0x61, 0xa1, 0x9e,
+		0x82, 0x30, 0x05, 0x05, 0x55, 0xce, 0x99, 0xd3,
+		0x8f, 0x3f, 0x86, 0x79, 0xdc, 0x9f, 0x33, 0x07,
+		0x75, 0x26, 0xc8, 0x72, 0x81, 0x0f, 0x9b, 0xf7,
+		0xb1, 0xfb, 0xd3, 0x91, 0x36, 0x08, 0xab, 0x26,
+		0x70, 0x53, 0x0c, 0x99, 0xfd, 0xa9, 0x07, 0xb4,
+		0xe9, 0xce, 0xc1, 0xd6, 0xd2, 0x2c, 0x71, 0x80,
+		0xec, 0x59, 0x61, 0x0b, 0x24, 0xf0, 0x6d, 0x33,
+		0x73, 0x45, 0x6e, 0x80, 0x03, 0x45, 0xf2, 0x76,
+		0xa5, 0x8a, 0xc9, 0xcf, 0xaf, 0x4a, 0xed, 0x35,
+		0xc0, 0x97, 0x52, 0xc5, 0x00, 0xdf, 0xef, 0xc7,
+		0x9f, 0xf2, 0xe8, 0x15, 0x3e, 0xb3, 0x30, 0xe7,
+		0x00, 0xd0, 0x4e, 0xeb, 0x79, 0xf6, 0xf6, 0xcf,
+		0xf0, 0xe7, 0x61, 0xd5, 0x3d, 0x6a, 0x73, 0x9d
+};
+
+static const uint8_t AES_CBC_ciphertext_512B[] = {
+		0xb4, 0xc6, 0xc6, 0x5f, 0x7e, 0xca, 0x05, 0x70,
+		0x21, 0x7b, 0x92, 0x9e, 0x23, 0xe7, 0x92, 0xb8,
+		0x27, 0x3d, 0x20, 0x29, 0x57, 0xfa, 0x1f, 0x26,
+		0x0a, 0x04, 0x34, 0xa6, 0xf2, 0xdc, 0x44, 0xb6,
+		0x43, 0x40, 0x62, 0xde, 0x0c, 0xde, 0x1c, 0x30,
+		0x43, 0x85, 0x0b, 0xe8, 0x93, 0x1f, 0xa1, 0x2a,
+		0x8a, 0x27, 0x35, 0x39, 0x14, 0x9f, 0x37, 0x64,
+		0x59, 0xb5, 0x0e, 0x96, 0x82, 0x5d, 0x63, 0x45,
+		0xd6, 0x93, 0x89, 0x46, 0xe4, 0x71, 0x31, 0xeb,
+		0x0e, 0xd1, 0x7b, 0xda, 0x90, 0xb5, 0x81, 0xac,
+		0x76, 0x54, 0x54, 0x85, 0x0b, 0xa9, 0x46, 0x9c,
+		0xf0, 0xfd, 0xde, 0x5d, 0xa8, 0xe3, 0xee, 0xe9,
+		0xf4, 0x9d, 0x34, 0x76, 0x39, 0xe7, 0xc3, 0x4a,
+		0x84, 0x38, 0x92, 0x61, 0xf1, 0x12, 0x9f, 0x05,
+		0xda, 0xdb, 0xc1, 0xd4, 0xb0, 0xa0, 0x27, 0x19,
+		0xa0, 0x56, 0x5d, 0x9b, 0xcc, 0x47, 0x7c, 0x15,
+		0x1d, 0x52, 0x66, 0xd5, 0xff, 0xef, 0x12, 0x23,
+		0x86, 0xe2, 0xee, 0x81, 0x2c, 0x3d, 0x7d, 0x28,
+		0xd5, 0x42, 0xdf, 0xdb, 0x75, 0x1c, 0xeb, 0xdf,
+		0x13, 0x23, 0xd5, 0x17, 0x89, 0xea, 0xd7, 0x01,
+		0xff, 0x57, 0x6a, 0x44, 0x61, 0xf4, 0xea, 0xbe,
+		0x97, 0x9b, 0xc2, 0xb1, 0x9c, 0x5d, 0xff, 0x4f,
+		0x73, 0x2d, 0x3f, 0x57, 0x28, 0x38, 0xbf, 0x3d,
+		0x9f, 0xda, 0x49, 0x55, 0x8f, 0xb2, 0x77, 0xec,
+		0x0f, 0xbc, 0xce, 0xb8, 0xc6, 0xe1, 0x03, 0xed,
+		0x35, 0x9c, 0xf2, 0x4d, 0xa4, 0x29, 0x6c, 0xd6,
+		0x6e, 0x05, 0x53, 0x46, 0xc1, 0x41, 0x09, 0x36,
+		0x0b, 0x7d, 0xf4, 0x9e, 0x0f, 0xba, 0x86, 0x33,
+		0xdd, 0xf1, 0xa7, 0xf7, 0xd5, 0x29, 0xa8, 0xa7,
+		0x4d, 0xce, 0x0c, 0xf5, 0xb4, 0x6c, 0xd8, 0x27,
+		0xb0, 0x87, 0x2a, 0x6f, 0x7f, 0x3f, 0x8f, 0xc3,
+		0xe2, 0x3e, 0x94, 0xcf, 0x61, 0x4a, 0x09, 0x3d,
+		0xf9, 0x55, 0x19, 0x31, 0xf2, 0xd2, 0x4a, 0x3e,
+		0xc1, 0xf5, 0xed, 0x7c, 0x45, 0xb0, 0x0c, 0x7b,
+		0xdd, 0xa6, 0x0a, 0x26, 0x66, 0xec, 0x85, 0x49,
+		0x00, 0x38, 0x05, 0x7c, 0x9c, 0x1c, 0x92, 0xf5,
+		0xf7, 0xdb, 0x5d, 0xbd, 0x61, 0x0c, 0xc9, 0xaf,
+		0xfd, 0x57, 0x3f, 0xee, 0x2b, 0xad, 0x73, 0xef,
+		0xa3, 0xc1, 0x66, 0x26, 0x44, 0x5e, 0xf9, 0x12,
+		0x86, 0x66, 0xa9, 0x61, 0x75, 0xa1, 0xbc, 0x40,
+		0x7f, 0xa8, 0x08, 0x02, 0xc0, 0x76, 0x0e, 0x76,
+		0xb3, 0x26, 0x3d, 0x1c, 0x40, 0x65, 0xe4, 0x18,
+		0x0f, 0x62, 0x17, 0x8f, 0x1e, 0x61, 0xb8, 0x08,
+		0x83, 0x54, 0x42, 0x11, 0x03, 0x30, 0x8e, 0xb7,
+		0xc1, 0x9c, 0xec, 0x69, 0x52, 0x95, 0xfb, 0x7b,
+		0x1a, 0x0c, 0x20, 0x24, 0xf7, 0xb8, 0x38, 0x0c,
+		0xb8, 0x7b, 0xb6, 0x69, 0x70, 0xd0, 0x61, 0xb9,
+		0x70, 0x06, 0xc2, 0x5b, 0x20, 0x47, 0xf7, 0xd9,
+		0x32, 0xc2, 0xf2, 0x90, 0xb6, 0x4d, 0xcd, 0x3c,
+		0x6d, 0x74, 0xea, 0x82, 0x35, 0x1b, 0x08, 0x44,
+		0xba, 0xb7, 0x33, 0x82, 0x33, 0x27, 0x54, 0x77,
+		0x6e, 0x58, 0xfe, 0x46, 0x5a, 0xb4, 0x88, 0x53,
+		0x8d, 0x9b, 0xb1, 0xab, 0xdf, 0x04, 0xe1, 0xfb,
+		0xd7, 0x1e, 0xd7, 0x38, 0x64, 0x54, 0xba, 0xb0,
+		0x6c, 0x84, 0x7a, 0x0f, 0xa7, 0x80, 0x6b, 0x86,
+		0xd9, 0xc9, 0xc6, 0x31, 0x95, 0xfa, 0x8a, 0x2c,
+		0x14, 0xe1, 0x85, 0x66, 0x27, 0xfd, 0x63, 0x3e,
+		0xf0, 0xfa, 0x81, 0xc9, 0x89, 0x4f, 0xe2, 0x6a,
+		0x8c, 0x17, 0xb5, 0xc7, 0x9f, 0x5d, 0x3f, 0x6b,
+		0x3f, 0xcd, 0x13, 0x7a, 0x3c, 0xe6, 0x4e, 0xfa,
+		0x7a, 0x10, 0xb8, 0x7c, 0x40, 0xec, 0x93, 0x11,
+		0x1f, 0xd0, 0x9e, 0xc3, 0x56, 0xb9, 0xf5, 0x21,
+		0x18, 0x41, 0x31, 0xea, 0x01, 0x8d, 0xea, 0x1c,
+		0x95, 0x5e, 0x56, 0x33, 0xbc, 0x7a, 0x3f, 0x6f
+};
+
+static const uint8_t AES_CBC_ciphertext_768B[] = {
+		0x3e, 0x7f, 0x9e, 0x4c, 0x88, 0x15, 0x68, 0x69,
+		0x10, 0x09, 0xe1, 0xa7, 0x0f, 0x27, 0x88, 0x2d,
+		0x90, 0x73, 0x4f, 0x67, 0xd3, 0x8b, 0xaf, 0xa1,
+		0x2c, 0x37, 0xa5, 0x6c, 0x7c, 0xbd, 0x95, 0x4c,
+		0x82, 0xcf, 0x05, 0x49, 0x16, 0x5c, 0xe7, 0x06,
+		0xd4, 0xcb, 0x55, 0x65, 0x9a, 0xd0, 0xe1, 0x46,
+		0x3a, 0x37, 0x71, 0xad, 0xb0, 0xb4, 0x99, 0x1e,
+		0x23, 0x57, 0x48, 0x96, 0x9c, 0xc5, 0xc4, 0xdb,
+		0x64, 0x3e, 0xc9, 0x7f, 0x90, 0x5a, 0xa0, 0x08,
+		0x75, 0x4c, 0x09, 0x06, 0x31, 0x6e, 0x59, 0x29,
+		0xfc, 0x2f, 0x72, 0xde, 0xf2, 0x40, 0x5a, 0xfe,
+		0xd3, 0x66, 0x64, 0xb8, 0x9c, 0xc9, 0xa6, 0x1f,
+		0xc3, 0x52, 0xcd, 0xb5, 0xd1, 0x4f, 0x43, 0x3f,
+		0xf4, 0x59, 0x25, 0xc4, 0xdd, 0x3e, 0x58, 0x7c,
+		0x21, 0xd6, 0x21, 0xce, 0xa4, 0xbe, 0x08, 0x23,
+		0x46, 0x68, 0xc0, 0x00, 0x91, 0x47, 0xca, 0x9b,
+		0xe0, 0xb4, 0xe3, 0xab, 0xbf, 0xcf, 0x68, 0x26,
+		0x97, 0x23, 0x09, 0x93, 0x64, 0x8f, 0x57, 0x59,
+		0xe2, 0x41, 0x7c, 0xa2, 0x48, 0x7e, 0xd5, 0x2c,
+		0x54, 0x09, 0x1b, 0x07, 0x94, 0xca, 0x39, 0x83,
+		0xdd, 0xf4, 0x7a, 0x1d, 0x2d, 0xdd, 0x67, 0xf7,
+		0x3c, 0x30, 0x89, 0x3e, 0xc1, 0xdc, 0x1d, 0x8f,
+		0xfc, 0xb1, 0xe9, 0x13, 0x31, 0xb0, 0x16, 0xdb,
+		0x88, 0xf2, 0x32, 0x7e, 0x73, 0xa3, 0xdf, 0x08,
+		0x6b, 0x53, 0x92, 0x08, 0xc9, 0x9d, 0x98, 0xb2,
+		0xf4, 0x8c, 0xb1, 0x95, 0xdc, 0xb6, 0xfc, 0xec,
+		0xf1, 0xc9, 0x0d, 0x6d, 0x42, 0x2c, 0xf5, 0x38,
+		0x29, 0xf4, 0xd8, 0x98, 0x0f, 0xb0, 0x81, 0xa5,
+		0xaa, 0xe6, 0x1f, 0x6e, 0x87, 0x32, 0x1b, 0x02,
+		0x07, 0x57, 0x38, 0x83, 0xf3, 0xe4, 0x54, 0x7c,
+		0xa8, 0x43, 0xdf, 0x3f, 0x42, 0xfd, 0x67, 0x28,
+		0x06, 0x4d, 0xea, 0xce, 0x1f, 0x84, 0x4a, 0xcd,
+		0x8c, 0x61, 0x5e, 0x8f, 0x61, 0xed, 0x84, 0x03,
+		0x53, 0x6a, 0x9e, 0xbf, 0x68, 0x83, 0xa7, 0x42,
+		0x56, 0x57, 0xcd, 0x45, 0x29, 0xfc, 0x7b, 0x07,
+		0xfc, 0xe9, 0xb9, 0x42, 0xfd, 0x29, 0xd5, 0xfd,
+		0x98, 0x11, 0xd1, 0x8d, 0x67, 0x29, 0x47, 0x61,
+		0xd8, 0x27, 0x37, 0x79, 0x29, 0xd1, 0x94, 0x6f,
+		0x8d, 0xf3, 0x1b, 0x3d, 0x6a, 0xb1, 0x59, 0xef,
+		0x1b, 0xd4, 0x70, 0x0e, 0xac, 0xab, 0xa0, 0x2b,
+		0x1f, 0x5e, 0x04, 0xf0, 0x0e, 0x35, 0x72, 0x90,
+		0xfc, 0xcf, 0x86, 0x43, 0xea, 0x45, 0x6d, 0x22,
+		0x63, 0x06, 0x1a, 0x58, 0xd7, 0x2d, 0xc5, 0xb0,
+		0x60, 0x69, 0xe8, 0x53, 0xc2, 0xa2, 0x57, 0x83,
+		0xc4, 0x31, 0xb4, 0xc6, 0xb3, 0xa1, 0x77, 0xb3,
+		0x1c, 0xca, 0x89, 0x3f, 0xf5, 0x10, 0x3b, 0x36,
+		0x31, 0x7d, 0x00, 0x46, 0x00, 0x92, 0xa0, 0xa0,
+		0x34, 0xd8, 0x5e, 0x62, 0xa9, 0xe0, 0x23, 0x37,
+		0x50, 0x85, 0xc7, 0x3a, 0x20, 0xa3, 0x98, 0xc0,
+		0xac, 0x20, 0x06, 0x0f, 0x17, 0x3c, 0xfc, 0x43,
+		0x8c, 0x9d, 0xec, 0xf5, 0x9a, 0x35, 0x96, 0xf7,
+		0xb7, 0x4c, 0xf9, 0x69, 0xf8, 0xd4, 0x1e, 0x9e,
+		0xf9, 0x7c, 0xc4, 0xd2, 0x11, 0x14, 0x41, 0xb9,
+		0x89, 0xd6, 0x07, 0xd2, 0x37, 0x07, 0x5e, 0x5e,
+		0xae, 0x60, 0xdc, 0xe4, 0xeb, 0x38, 0x48, 0x6d,
+		0x95, 0x8d, 0x71, 0xf2, 0xba, 0xda, 0x5f, 0x08,
+		0x9d, 0x4a, 0x0f, 0x56, 0x90, 0x64, 0xab, 0xb6,
+		0x88, 0x22, 0xa8, 0x90, 0x1f, 0x76, 0x2c, 0x83,
+		0x43, 0xce, 0x32, 0x55, 0x45, 0x84, 0x57, 0x43,
+		0xf9, 0xa8, 0xd1, 0x4f, 0xe3, 0xc1, 0x72, 0x9c,
+		0xeb, 0x64, 0xf7, 0xe4, 0x61, 0x2b, 0x93, 0xd1,
+		0x1f, 0xbb, 0x5c, 0xff, 0xa1, 0x59, 0x69, 0xcf,
+		0xf7, 0xaf, 0x58, 0x45, 0xd5, 0x3e, 0x98, 0x7d,
+		0x26, 0x39, 0x5c, 0x75, 0x3c, 0x4a, 0xbf, 0x5e,
+		0x12, 0x10, 0xb0, 0x93, 0x0f, 0x86, 0x82, 0xcf,
+		0xb2, 0xec, 0x70, 0x5c, 0x0b, 0xad, 0x5d, 0x63,
+		0x65, 0x32, 0xa6, 0x04, 0x58, 0x03, 0x91, 0x2b,
+		0xdb, 0x8f, 0xd3, 0xa3, 0x2b, 0x3a, 0xf5, 0xa1,
+		0x62, 0x6c, 0xb6, 0xf0, 0x13, 0x3b, 0x8c, 0x07,
+		0x10, 0x82, 0xc9, 0x56, 0x24, 0x87, 0xfc, 0x56,
+		0xe8, 0xef, 0x90, 0x8b, 0xd6, 0x48, 0xda, 0x53,
+		0x04, 0x49, 0x41, 0xa4, 0x67, 0xe0, 0x33, 0x24,
+		0x6b, 0x9c, 0x07, 0x55, 0x4c, 0x5d, 0xe9, 0x35,
+		0xfa, 0xbd, 0xea, 0xa8, 0x3f, 0xe9, 0xf5, 0x20,
+		0x5c, 0x60, 0x0f, 0x0d, 0x24, 0xcb, 0x1a, 0xd6,
+		0xe8, 0x5c, 0xa8, 0x42, 0xae, 0xd0, 0xd2, 0xf2,
+		0xa8, 0xbe, 0xea, 0x0f, 0x8d, 0xfb, 0x81, 0xa3,
+		0xa4, 0xef, 0xb7, 0x3e, 0x91, 0xbd, 0x26, 0x0f,
+		0x8e, 0xf1, 0xb2, 0xa5, 0x47, 0x06, 0xfa, 0x40,
+		0x8b, 0x31, 0x7a, 0x5a, 0x74, 0x2a, 0x0a, 0x7c,
+		0x62, 0x5d, 0x39, 0xa4, 0xae, 0x14, 0x85, 0x08,
+		0x5b, 0x20, 0x85, 0xf1, 0x57, 0x6e, 0x71, 0x13,
+		0x4e, 0x2b, 0x49, 0x87, 0x01, 0xdf, 0x37, 0xed,
+		0x28, 0xee, 0x4d, 0xa1, 0xf4, 0xb3, 0x3b, 0xba,
+		0x2d, 0xb3, 0x46, 0x17, 0x84, 0x80, 0x9d, 0xd7,
+		0x93, 0x1f, 0x28, 0x7c, 0xf5, 0xf9, 0xd6, 0x85,
+		0x8c, 0xa5, 0x44, 0xe9, 0x2c, 0x65, 0x51, 0x5f,
+		0x53, 0x7a, 0x09, 0xd9, 0x30, 0x16, 0x95, 0x89,
+		0x9c, 0x0b, 0xef, 0x90, 0x6d, 0x23, 0xd3, 0x48,
+		0x57, 0x3b, 0x55, 0x69, 0x96, 0xfc, 0xf7, 0x52,
+		0x92, 0x38, 0x36, 0xbf, 0xa9, 0x0a, 0xbb, 0x68,
+		0x45, 0x08, 0x25, 0xee, 0x59, 0xfe, 0xee, 0xf2,
+		0x2c, 0xd4, 0x5f, 0x78, 0x59, 0x0d, 0x90, 0xf1,
+		0xd7, 0xe4, 0x39, 0x0e, 0x46, 0x36, 0xf5, 0x75,
+		0x03, 0x3c, 0x28, 0xfb, 0xfa, 0x8f, 0xef, 0xc9,
+		0x61, 0x00, 0x94, 0xc3, 0xd2, 0x0f, 0xd9, 0xda
+};
+
+static const uint8_t AES_CBC_ciphertext_1024B[] = {
+		0x7d, 0x01, 0x7e, 0x2f, 0x92, 0xb3, 0xea, 0x72,
+		0x4a, 0x3f, 0x10, 0xf9, 0x2b, 0xb0, 0xd5, 0xb9,
+		0x19, 0x68, 0x94, 0xe9, 0x93, 0xe9, 0xd5, 0x26,
+		0x20, 0x44, 0xe2, 0x47, 0x15, 0x8d, 0x75, 0x48,
+		0x8e, 0xe4, 0x40, 0x81, 0xb5, 0x06, 0xa8, 0xb8,
+		0x0e, 0x0f, 0x3b, 0xbc, 0x5b, 0xbe, 0x3b, 0xa2,
+		0x2a, 0x0c, 0x48, 0x98, 0x19, 0xdf, 0xe9, 0x25,
+		0x75, 0xab, 0x93, 0x44, 0xb1, 0x72, 0x70, 0xbb,
+		0x20, 0xcf, 0x78, 0xe9, 0x4d, 0xc6, 0xa9, 0xa9,
+		0x84, 0x78, 0xc5, 0xc0, 0xc4, 0xc9, 0x79, 0x1a,
+		0xbc, 0x61, 0x25, 0x5f, 0xac, 0x01, 0x03, 0xb7,
+		0xef, 0x07, 0xf2, 0x62, 0x98, 0xee, 0xe3, 0xad,
+		0x94, 0x75, 0x30, 0x67, 0xb9, 0x15, 0x00, 0xe7,
+		0x11, 0x32, 0x2e, 0x6b, 0x55, 0x9f, 0xac, 0x68,
+		0xde, 0x61, 0x05, 0x80, 0x01, 0xf3, 0xad, 0xab,
+		0xaf, 0x45, 0xe0, 0xf4, 0x68, 0x5c, 0xc0, 0x52,
+		0x92, 0xc8, 0x21, 0xb6, 0xf5, 0x8a, 0x1d, 0xbb,
+		0xfc, 0x4a, 0x11, 0x62, 0xa2, 0xc4, 0xf1, 0x2d,
+		0x0e, 0xb2, 0xc7, 0x17, 0x34, 0xb4, 0x2a, 0x54,
+		0x81, 0xc2, 0x1e, 0xcf, 0x51, 0x0a, 0x76, 0x54,
+		0xf1, 0x48, 0x0d, 0x5c, 0xcd, 0x38, 0x3e, 0x38,
+		0x3e, 0xf8, 0x46, 0x1d, 0x00, 0xf5, 0x62, 0xe1,
+		0x5c, 0xb7, 0x8d, 0xce, 0xd0, 0x3f, 0xbb, 0x22,
+		0xf1, 0xe5, 0xb1, 0xa0, 0x58, 0x5e, 0x3c, 0x0f,
+		0x15, 0xd1, 0xac, 0x3e, 0xc7, 0x72, 0xc4, 0xde,
+		0x8b, 0x95, 0x3e, 0x91, 0xf7, 0x1d, 0x04, 0x9a,
+		0xc8, 0xe4, 0xbf, 0xd3, 0x22, 0xca, 0x4a, 0xdc,
+		0xb6, 0x16, 0x79, 0x81, 0x75, 0x2f, 0x6b, 0xa7,
+		0x04, 0x98, 0xa7, 0x4e, 0xc1, 0x19, 0x90, 0x33,
+		0x33, 0x3c, 0x7f, 0xdd, 0xac, 0x09, 0x0c, 0xc3,
+		0x91, 0x34, 0x74, 0xab, 0xa5, 0x35, 0x0a, 0x13,
+		0xc3, 0x56, 0x67, 0x6d, 0x1a, 0x3e, 0xbf, 0x56,
+		0x06, 0x67, 0x15, 0x5f, 0xfc, 0x8b, 0xa2, 0x3c,
+		0x5e, 0xaf, 0x56, 0x1f, 0xe3, 0x2e, 0x9d, 0x0a,
+		0xf9, 0x9b, 0xc7, 0xb5, 0x03, 0x1c, 0x68, 0x99,
+		0xfa, 0x3c, 0x37, 0x59, 0xc1, 0xf7, 0x6a, 0x83,
+		0x22, 0xee, 0xca, 0x7f, 0x7d, 0x49, 0xe6, 0x48,
+		0x84, 0x54, 0x7a, 0xff, 0xb3, 0x72, 0x21, 0xd8,
+		0x7a, 0x5d, 0xb1, 0x4b, 0xcc, 0x01, 0x6f, 0x90,
+		0xc6, 0x68, 0x1c, 0x2c, 0xa1, 0xe2, 0x74, 0x40,
+		0x26, 0x9b, 0x57, 0x53, 0xa3, 0x7c, 0x0b, 0x0d,
+		0xcf, 0x05, 0x5d, 0x62, 0x4f, 0x75, 0x06, 0x62,
+		0x1f, 0x26, 0x32, 0xaa, 0x25, 0xcc, 0x26, 0x8d,
+		0xae, 0x01, 0x47, 0xa3, 0x00, 0x42, 0xe2, 0x4c,
+		0xee, 0x29, 0xa2, 0x81, 0xa0, 0xfd, 0xeb, 0xff,
+		0x9a, 0x66, 0x6e, 0x47, 0x5b, 0xab, 0x93, 0x5a,
+		0x02, 0x6d, 0x6f, 0xf2, 0x6e, 0x02, 0x9d, 0xb1,
+		0xab, 0x56, 0xdc, 0x8b, 0x9b, 0x17, 0xa8, 0xfb,
+		0x87, 0x42, 0x7c, 0x91, 0x1e, 0x14, 0xc6, 0x6f,
+		0xdc, 0xf0, 0x27, 0x30, 0xfa, 0x3f, 0xc4, 0xad,
+		0x57, 0x85, 0xd2, 0xc9, 0x32, 0x2c, 0x13, 0xa6,
+		0x04, 0x04, 0x50, 0x05, 0x2f, 0x72, 0xd9, 0x44,
+		0x55, 0x6e, 0x93, 0x40, 0xed, 0x7e, 0xd4, 0x40,
+		0x3e, 0x88, 0x3b, 0x8b, 0xb6, 0xeb, 0xc6, 0x5d,
+		0x9c, 0x99, 0xa1, 0xcf, 0x30, 0xb2, 0xdc, 0x48,
+		0x8a, 0x01, 0xa7, 0x61, 0x77, 0x50, 0x14, 0xf3,
+		0x0c, 0x49, 0x53, 0xb3, 0xb4, 0xb4, 0x28, 0x41,
+		0x4a, 0x2d, 0xd2, 0x4d, 0x2a, 0x30, 0x31, 0x83,
+		0x03, 0x5e, 0xaa, 0xd3, 0xa3, 0xd1, 0xa1, 0xca,
+		0x62, 0xf0, 0xe1, 0xf2, 0xff, 0xf0, 0x19, 0xa6,
+		0xde, 0x22, 0x47, 0xb5, 0x28, 0x7d, 0xf7, 0x07,
+		0x16, 0x0d, 0xb1, 0x55, 0x81, 0x95, 0xe5, 0x1d,
+		0x4d, 0x78, 0xa9, 0x3e, 0xce, 0xe3, 0x1c, 0xf9,
+		0x47, 0xc8, 0xec, 0xc5, 0xc5, 0x93, 0x4c, 0x34,
+		0x20, 0x6b, 0xee, 0x9a, 0xe6, 0x86, 0x57, 0x58,
+		0xd5, 0x58, 0xf1, 0x33, 0x10, 0x29, 0x9e, 0x93,
+		0x2f, 0xf5, 0x90, 0x00, 0x17, 0x67, 0x4f, 0x39,
+		0x18, 0xe1, 0xcf, 0x55, 0x78, 0xbb, 0xe6, 0x29,
+		0x3e, 0x77, 0xd5, 0x48, 0xb7, 0x42, 0x72, 0x53,
+		0x27, 0xfa, 0x5b, 0xe0, 0x36, 0x14, 0x97, 0xb8,
+		0x9b, 0x3c, 0x09, 0x77, 0xc1, 0x0a, 0xe4, 0xa2,
+		0x63, 0xfc, 0xbe, 0x5c, 0x17, 0xcf, 0x01, 0xf5,
+		0x03, 0x0f, 0x17, 0xbc, 0x93, 0xdd, 0x5f, 0xe2,
+		0xf3, 0x08, 0xa8, 0xb1, 0x85, 0xb6, 0x34, 0x3f,
+		0x87, 0x42, 0xa5, 0x42, 0x3b, 0x0e, 0xd6, 0x83,
+		0x6a, 0xfd, 0x5d, 0xc9, 0x67, 0xd5, 0x51, 0xc9,
+		0x2a, 0x4e, 0x91, 0xb0, 0x59, 0xb2, 0x0f, 0xa2,
+		0xe6, 0x47, 0x73, 0xc2, 0xa2, 0xae, 0xbb, 0xc8,
+		0x42, 0xa3, 0x2a, 0x27, 0x29, 0x48, 0x8c, 0x54,
+		0x6c, 0xec, 0x00, 0x2a, 0x42, 0xa3, 0x7a, 0x0f,
+		0x12, 0x66, 0x6b, 0x96, 0xf6, 0xd0, 0x56, 0x4f,
+		0x49, 0x5c, 0x47, 0xec, 0x05, 0x62, 0x54, 0xb2,
+		0x64, 0x5a, 0x69, 0x1f, 0x19, 0xb4, 0x84, 0x5c,
+		0xbe, 0x48, 0x8e, 0xfc, 0x58, 0x21, 0xce, 0xfa,
+		0xaa, 0x84, 0xd2, 0xc1, 0x08, 0xb3, 0x87, 0x0f,
+		0x4f, 0xa3, 0x3a, 0xb6, 0x44, 0xbe, 0x2e, 0x9a,
+		0xdd, 0xb5, 0x44, 0x80, 0xca, 0xf4, 0xc3, 0x6e,
+		0xba, 0x93, 0x77, 0xe0, 0x53, 0xfb, 0x37, 0xfb,
+		0x88, 0xc3, 0x1f, 0x25, 0xde, 0x3e, 0x11, 0xf4,
+		0x89, 0xe7, 0xd1, 0x3b, 0xb4, 0x23, 0xcb, 0x70,
+		0xba, 0x35, 0x97, 0x7c, 0xbe, 0x84, 0x13, 0xcf,
+		0xe0, 0x4d, 0x33, 0x91, 0x71, 0x85, 0xbb, 0x4b,
+		0x97, 0x32, 0x5d, 0xa0, 0xb9, 0x8f, 0xdc, 0x27,
+		0x5a, 0xeb, 0x71, 0xf1, 0xd5, 0x0d, 0x65, 0xb4,
+		0x22, 0x81, 0xde, 0xa7, 0x58, 0x20, 0x0b, 0x18,
+		0x11, 0x76, 0x5c, 0xe6, 0x6a, 0x2c, 0x99, 0x69,
+		0xdc, 0xed, 0x67, 0x08, 0x5d, 0x5e, 0xe9, 0x1e,
+		0x55, 0x70, 0xc1, 0x5a, 0x76, 0x1b, 0x8d, 0x2e,
+		0x0d, 0xf9, 0xcc, 0x30, 0x8c, 0x44, 0x0f, 0x63,
+		0x8c, 0x42, 0x8a, 0x9f, 0x4c, 0xd1, 0x48, 0x28,
+		0x8a, 0xf5, 0x56, 0x2e, 0x23, 0x12, 0xfe, 0x67,
+		0x9a, 0x13, 0x65, 0x75, 0x83, 0xf1, 0x3c, 0x98,
+		0x07, 0x6b, 0xb7, 0x27, 0x5b, 0xf0, 0x70, 0xda,
+		0x30, 0xf8, 0x74, 0x4e, 0x7a, 0x32, 0x84, 0xcc,
+		0x0e, 0xcd, 0x80, 0x8b, 0x82, 0x31, 0x9a, 0x48,
+		0xcf, 0x75, 0x00, 0x1f, 0x4f, 0xe0, 0x8e, 0xa3,
+		0x6a, 0x2c, 0xd4, 0x73, 0x4c, 0x63, 0x7c, 0xa6,
+		0x4d, 0x5e, 0xfd, 0x43, 0x3b, 0x27, 0xe1, 0x5e,
+		0xa3, 0xa9, 0x5c, 0x3b, 0x60, 0xdd, 0xc6, 0x8d,
+		0x5a, 0xf1, 0x3e, 0x89, 0x4b, 0x24, 0xcf, 0x01,
+		0x3a, 0x2d, 0x44, 0xe7, 0xda, 0xe7, 0xa1, 0xac,
+		0x11, 0x05, 0x0c, 0xa9, 0x7a, 0x82, 0x8c, 0x5c,
+		0x29, 0x68, 0x9c, 0x73, 0x13, 0xcc, 0x67, 0x32,
+		0x11, 0x5e, 0xe5, 0xcc, 0x8c, 0xf5, 0xa7, 0x52,
+		0x83, 0x9a, 0x70, 0xef, 0xde, 0x55, 0x9c, 0xc7,
+		0x8a, 0xed, 0xad, 0x28, 0x4a, 0xc5, 0x92, 0x6d,
+		0x8e, 0x47, 0xca, 0xe3, 0xf8, 0x77, 0xb5, 0x26,
+		0x64, 0x84, 0xc2, 0xf1, 0xd7, 0xae, 0x0c, 0xb9,
+		0x39, 0x0f, 0x43, 0x6b, 0xe9, 0xe0, 0x09, 0x4b,
+		0xe5, 0xe3, 0x17, 0xa6, 0x68, 0x69, 0x46, 0xf4,
+		0xf0, 0x68, 0x7f, 0x2f, 0x1c, 0x7e, 0x4c, 0xd2,
+		0xb5, 0xc6, 0x16, 0x85, 0xcf, 0x02, 0x4c, 0x89,
+		0x0b, 0x25, 0xb0, 0xeb, 0xf3, 0x77, 0x08, 0x6a,
+		0x46, 0x5c, 0xf6, 0x2f, 0xf1, 0x24, 0xc3, 0x4d,
+		0x80, 0x60, 0x4d, 0x69, 0x98, 0xde, 0xc7, 0xa1,
+		0xf6, 0x4e, 0x18, 0x0c, 0x2a, 0xb0, 0xb2, 0xe0,
+		0x46, 0xe7, 0x49, 0x37, 0xc8, 0x5a, 0x23, 0x24,
+		0xe3, 0x0f, 0xcc, 0x92, 0xb4, 0x8d, 0xdc, 0x9e
+};
+
+static const uint8_t AES_CBC_ciphertext_1280B[] = {
+		0x91, 0x99, 0x5e, 0x9e, 0x84, 0xff, 0x59, 0x45,
+		0xc1, 0xf4, 0xbc, 0x9c, 0xb9, 0x30, 0x6c, 0x51,
+		0x73, 0x52, 0xb4, 0x44, 0x09, 0x79, 0xe2, 0x89,
+		0x75, 0xeb, 0x54, 0x26, 0xce, 0xd8, 0x24, 0x98,
+		0xaa, 0xf8, 0x13, 0x16, 0x68, 0x58, 0xc4, 0x82,
+		0x0e, 0x31, 0xd3, 0x6a, 0x13, 0x58, 0x31, 0xe9,
+		0x3a, 0xc1, 0x8b, 0xc5, 0x3f, 0x50, 0x42, 0xd1,
+		0x93, 0xe4, 0x9b, 0x65, 0x2b, 0xf4, 0x1d, 0x9e,
+		0x2d, 0xdb, 0x48, 0xef, 0x9a, 0x01, 0x68, 0xb6,
+		0xea, 0x7a, 0x2b, 0xad, 0xfe, 0x77, 0x44, 0x7e,
+		0x5a, 0xc5, 0x64, 0xb4, 0xfe, 0x5c, 0x80, 0xf3,
+		0x20, 0x7e, 0xaf, 0x5b, 0xf8, 0xd1, 0x38, 0xa0,
+		0x8d, 0x09, 0x77, 0x06, 0xfe, 0xf5, 0xf4, 0xe4,
+		0xee, 0xb8, 0x95, 0x27, 0xed, 0x07, 0xb8, 0xaa,
+		0x25, 0xb4, 0xe1, 0x4c, 0xeb, 0x3f, 0xdb, 0x39,
+		0x66, 0x28, 0x1b, 0x60, 0x42, 0x8b, 0x99, 0xd9,
+		0x49, 0xd6, 0x8c, 0xa4, 0x9d, 0xd8, 0x93, 0x58,
+		0x8f, 0xfa, 0xd3, 0xf7, 0x37, 0x9c, 0x88, 0xab,
+		0x16, 0x50, 0xfe, 0x01, 0x1f, 0x88, 0x48, 0xbe,
+		0x21, 0xa9, 0x90, 0x9e, 0x73, 0xe9, 0x82, 0xf7,
+		0xbf, 0x4b, 0x43, 0xf4, 0xbf, 0x22, 0x3c, 0x45,
+		0x47, 0x95, 0x5b, 0x49, 0x71, 0x07, 0x1c, 0x8b,
+		0x49, 0xa4, 0xa3, 0x49, 0xc4, 0x5f, 0xb1, 0xf5,
+		0xe3, 0x6b, 0xf1, 0xdc, 0xea, 0x92, 0x7b, 0x29,
+		0x40, 0xc9, 0x39, 0x5f, 0xdb, 0xbd, 0xf3, 0x6a,
+		0x09, 0x9b, 0x2a, 0x5e, 0xc7, 0x0b, 0x25, 0x94,
+		0x55, 0x71, 0x9c, 0x7e, 0x0e, 0xb4, 0x08, 0x12,
+		0x8c, 0x6e, 0x77, 0xb8, 0x29, 0xf1, 0xc6, 0x71,
+		0x04, 0x40, 0x77, 0x18, 0x3f, 0x01, 0x09, 0x9c,
+		0x23, 0x2b, 0x5d, 0x2a, 0x88, 0x20, 0x23, 0x59,
+		0x74, 0x2a, 0x67, 0x8f, 0xb7, 0xba, 0x38, 0x9f,
+		0x0f, 0xcf, 0x94, 0xdf, 0xe1, 0x8f, 0x35, 0x5e,
+		0x34, 0x0c, 0x32, 0x92, 0x2b, 0x23, 0x81, 0xf4,
+		0x73, 0xa0, 0x5a, 0x2a, 0xbd, 0xa6, 0x6b, 0xae,
+		0x43, 0xe2, 0xdc, 0x01, 0xc1, 0xc6, 0xc3, 0x04,
+		0x06, 0xbb, 0xb0, 0x89, 0xb3, 0x4e, 0xbd, 0x81,
+		0x1b, 0x03, 0x63, 0x93, 0xed, 0x4e, 0xf6, 0xe5,
+		0x94, 0x6f, 0xd6, 0xf3, 0x20, 0xf3, 0xbc, 0x30,
+		0xc5, 0xd6, 0xbe, 0x1c, 0x05, 0x34, 0x26, 0x4d,
+		0x46, 0x5e, 0x56, 0x63, 0xfb, 0xdb, 0xcd, 0xed,
+		0xb0, 0x7f, 0x83, 0x94, 0x55, 0x54, 0x2f, 0xab,
+		0xc9, 0xb7, 0x16, 0x4f, 0x9e, 0x93, 0x25, 0xd7,
+		0x9f, 0x39, 0x2b, 0x63, 0xcf, 0x1e, 0xa3, 0x0e,
+		0x28, 0x47, 0x8a, 0x5f, 0x40, 0x02, 0x89, 0x1f,
+		0x83, 0xe7, 0x87, 0xd1, 0x90, 0x17, 0xb8, 0x27,
+		0x64, 0xe1, 0xe1, 0x48, 0x5a, 0x55, 0x74, 0x99,
+		0x27, 0x9d, 0x05, 0x67, 0xda, 0x70, 0x12, 0x8f,
+		0x94, 0x96, 0xfd, 0x36, 0xa4, 0x1d, 0x22, 0xe5,
+		0x0b, 0xe5, 0x2f, 0x38, 0x55, 0xa3, 0x5d, 0x0b,
+		0xcf, 0xd4, 0xa9, 0xb8, 0xd6, 0x9a, 0x16, 0x2e,
+		0x6c, 0x4a, 0x25, 0x51, 0x7a, 0x09, 0x48, 0xdd,
+		0xf0, 0xa3, 0x5b, 0x08, 0x1e, 0x2f, 0x03, 0x91,
+		0x80, 0xe8, 0x0f, 0xe9, 0x5a, 0x2f, 0x90, 0xd3,
+		0x64, 0xed, 0xd7, 0x51, 0x17, 0x66, 0x53, 0x40,
+		0x43, 0x74, 0xef, 0x0a, 0x0d, 0x49, 0x41, 0xf2,
+		0x67, 0x6e, 0xea, 0x14, 0xc8, 0x74, 0xd6, 0xa9,
+		0xb9, 0x6a, 0xe3, 0xec, 0x7d, 0xe8, 0x6a, 0x21,
+		0x3a, 0x52, 0x42, 0xfe, 0x9a, 0x15, 0x6d, 0x60,
+		0x64, 0x88, 0xc5, 0xb2, 0x8b, 0x15, 0x2c, 0xff,
+		0xe2, 0x35, 0xc3, 0xee, 0x9f, 0xcd, 0x82, 0xd9,
+		0x14, 0x35, 0x2a, 0xb7, 0xf5, 0x2f, 0x7b, 0xbc,
+		0x01, 0xfd, 0xa8, 0xe0, 0x21, 0x4e, 0x73, 0xf9,
+		0xf2, 0xb0, 0x79, 0xc9, 0x10, 0x52, 0x8f, 0xa8,
+		0x3e, 0x3b, 0xbe, 0xc5, 0xde, 0xf6, 0x53, 0xe3,
+		0x1c, 0x25, 0x3a, 0x1f, 0x13, 0xbf, 0x13, 0xbb,
+		0x94, 0xc2, 0x97, 0x43, 0x64, 0x47, 0x8f, 0x76,
+		0xd7, 0xaa, 0xeb, 0xa4, 0x03, 0x50, 0x0c, 0x10,
+		0x50, 0xd8, 0xf7, 0x75, 0x52, 0x42, 0xe2, 0x94,
+		0x67, 0xf4, 0x60, 0xfb, 0x21, 0x9b, 0x7a, 0x05,
+		0x50, 0x7c, 0x1b, 0x4a, 0x8b, 0x29, 0xe1, 0xac,
+		0xd7, 0x99, 0xfd, 0x0d, 0x65, 0x92, 0xcd, 0x23,
+		0xa7, 0x35, 0x8e, 0x13, 0xf2, 0xe4, 0x10, 0x74,
+		0xc6, 0x4f, 0x19, 0xf7, 0x01, 0x0b, 0x46, 0xab,
+		0xef, 0x8d, 0x4a, 0x4a, 0xfa, 0xda, 0xf3, 0xfb,
+		0x40, 0x28, 0x88, 0xa2, 0x65, 0x98, 0x4d, 0x88,
+		0xc7, 0xbf, 0x00, 0xc8, 0xd0, 0x91, 0xcb, 0x89,
+		0x2f, 0xb0, 0x85, 0xfc, 0xa1, 0xc1, 0x9e, 0x83,
+		0x88, 0xad, 0x95, 0xc0, 0x31, 0xa0, 0xad, 0xa2,
+		0x42, 0xb5, 0xe7, 0x55, 0xd4, 0x93, 0x5a, 0x74,
+		0x4e, 0x41, 0xc3, 0xcf, 0x96, 0x83, 0x46, 0xa1,
+		0xb7, 0x5b, 0xb1, 0x34, 0x67, 0x4e, 0xb1, 0xd7,
+		0x40, 0x20, 0x72, 0xe9, 0xc8, 0x74, 0xb7, 0xde,
+		0x72, 0x29, 0x77, 0x4c, 0x74, 0x7e, 0xcc, 0x18,
+		0xa5, 0x8d, 0x79, 0x8c, 0xd6, 0x6e, 0xcb, 0xd9,
+		0xe1, 0x61, 0xe7, 0x36, 0xbc, 0x37, 0xea, 0xee,
+		0xd8, 0x3c, 0x5e, 0x7c, 0x47, 0x50, 0xd5, 0xec,
+		0x37, 0xc5, 0x63, 0xc3, 0xc9, 0x99, 0x23, 0x9f,
+		0x64, 0x39, 0xdf, 0x13, 0x96, 0x6d, 0xea, 0x08,
+		0x0c, 0x27, 0x2d, 0xfe, 0x0f, 0xc2, 0xa3, 0x97,
+		0x04, 0x12, 0x66, 0x0d, 0x94, 0xbf, 0xbe, 0x3e,
+		0xb9, 0xcf, 0x8e, 0xc1, 0x9d, 0xb1, 0x64, 0x17,
+		0x54, 0x92, 0x3f, 0x0a, 0x51, 0xc8, 0xf5, 0x82,
+		0x98, 0x73, 0x03, 0xc0, 0x5a, 0x51, 0x01, 0x67,
+		0xb4, 0x01, 0x04, 0x06, 0xbc, 0x37, 0xde, 0x96,
+		0x23, 0x3c, 0xce, 0x98, 0x3f, 0xd6, 0x51, 0x1b,
+		0x01, 0x83, 0x0a, 0x1c, 0xf9, 0xeb, 0x7e, 0x72,
+		0xa9, 0x51, 0x23, 0xc8, 0xd7, 0x2f, 0x12, 0xbc,
+		0x08, 0xac, 0x07, 0xe7, 0xa7, 0xe6, 0x46, 0xae,
+		0x54, 0xa3, 0xc2, 0xf2, 0x05, 0x2d, 0x06, 0x5e,
+		0xfc, 0xe2, 0xa2, 0x23, 0xac, 0x86, 0xf2, 0x54,
+		0x83, 0x4a, 0xb6, 0x48, 0x93, 0xa1, 0x78, 0xc2,
+		0x07, 0xec, 0x82, 0xf0, 0x74, 0xa9, 0x18, 0xe9,
+		0x53, 0x44, 0x49, 0xc2, 0x94, 0xf8, 0x94, 0x92,
+		0x08, 0x3f, 0xbf, 0xa6, 0xe5, 0xc6, 0x03, 0x8a,
+		0xc6, 0x90, 0x48, 0x6c, 0xee, 0xbd, 0x44, 0x92,
+		0x1f, 0x2a, 0xce, 0x1d, 0xb8, 0x31, 0xa2, 0x9d,
+		0x24, 0x93, 0xa8, 0x9f, 0x36, 0x00, 0x04, 0x7b,
+		0xcb, 0x93, 0x59, 0xa1, 0x53, 0xdb, 0x13, 0x7a,
+		0x54, 0xb1, 0x04, 0xdb, 0xce, 0x48, 0x4f, 0xe5,
+		0x2f, 0xcb, 0xdf, 0x8f, 0x50, 0x7c, 0xfc, 0x76,
+		0x80, 0xb4, 0xdc, 0x3b, 0xc8, 0x98, 0x95, 0xf5,
+		0x50, 0xba, 0x70, 0x5a, 0x97, 0xd5, 0xfc, 0x98,
+		0x4d, 0xf3, 0x61, 0x0f, 0xcf, 0xac, 0x49, 0x0a,
+		0xdb, 0xc1, 0x42, 0x8f, 0xb6, 0x29, 0xd5, 0x65,
+		0xef, 0x83, 0xf1, 0x30, 0x4b, 0x84, 0xd0, 0x69,
+		0xde, 0xd2, 0x99, 0xe5, 0xec, 0xd3, 0x90, 0x86,
+		0x39, 0x2a, 0x6e, 0xd5, 0x32, 0xe3, 0x0d, 0x2d,
+		0x01, 0x8b, 0x17, 0x55, 0x1d, 0x65, 0x57, 0xbf,
+		0xd8, 0x75, 0xa4, 0x85, 0xb6, 0x4e, 0x35, 0x14,
+		0x58, 0xe4, 0x89, 0xb8, 0x7a, 0x58, 0x86, 0x0c,
+		0xbd, 0x8b, 0x05, 0x7b, 0x63, 0xc0, 0x86, 0x80,
+		0x33, 0x46, 0xd4, 0x9b, 0xb6, 0x0a, 0xeb, 0x6c,
+		0xae, 0xd6, 0x57, 0x7a, 0xc7, 0x59, 0x33, 0xa0,
+		0xda, 0xa4, 0x12, 0xbf, 0x52, 0x22, 0x05, 0x8d,
+		0xeb, 0xee, 0xd5, 0xec, 0xea, 0x29, 0x9b, 0x76,
+		0x95, 0x50, 0x6d, 0x99, 0xe1, 0x45, 0x63, 0x09,
+		0x16, 0x5f, 0xb0, 0xf2, 0x5b, 0x08, 0x33, 0xdd,
+		0x8f, 0xb7, 0x60, 0x7a, 0x8e, 0xc6, 0xfc, 0xac,
+		0xa9, 0x56, 0x2c, 0xa9, 0x8b, 0x74, 0x33, 0xad,
+		0x2a, 0x7e, 0x96, 0xb6, 0xba, 0x22, 0x28, 0xcf,
+		0x4d, 0x96, 0xb7, 0xd1, 0xfa, 0x99, 0x4a, 0x61,
+		0xe6, 0x84, 0xd1, 0x94, 0xca, 0xf5, 0x86, 0xb0,
+		0xba, 0x34, 0x7a, 0x04, 0xcc, 0xd4, 0x81, 0xcd,
+		0xd9, 0x86, 0xb6, 0xe0, 0x5a, 0x6f, 0x9b, 0x99,
+		0xf0, 0xdf, 0x49, 0xae, 0x6d, 0xc2, 0x54, 0x67,
+		0xe0, 0xb4, 0x34, 0x2d, 0x1c, 0x46, 0xdf, 0x73,
+		0x3b, 0x45, 0x43, 0xe7, 0x1f, 0xa3, 0x36, 0x35,
+		0x25, 0x33, 0xd9, 0xc0, 0x54, 0x38, 0x6e, 0x6b,
+		0x80, 0xcf, 0x50, 0xa4, 0xb6, 0x21, 0x17, 0xfd,
+		0x9b, 0x5c, 0x36, 0xca, 0xcc, 0x73, 0x73, 0xad,
+		0xe0, 0x57, 0x77, 0x90, 0x0e, 0x7f, 0x0f, 0x87,
+		0x7f, 0xdb, 0x73, 0xbf, 0xda, 0xc2, 0xb3, 0x05,
+		0x22, 0x06, 0xf5, 0xa3, 0xfc, 0x1e, 0x8f, 0xda,
+		0xcf, 0x49, 0xd6, 0xb3, 0x66, 0x2c, 0xb5, 0x00,
+		0xaf, 0x85, 0x6e, 0xb8, 0x5b, 0x8c, 0xa1, 0xa4,
+		0x21, 0xce, 0x40, 0xf3, 0x98, 0xac, 0xec, 0x88,
+		0x62, 0x43, 0x2a, 0xac, 0xca, 0xcf, 0xb9, 0x30,
+		0xeb, 0xfc, 0xef, 0xf0, 0x6e, 0x64, 0x6d, 0xe7,
+		0x54, 0x88, 0x6b, 0x22, 0x29, 0xbe, 0xa5, 0x8c,
+		0x31, 0x23, 0x3b, 0x4a, 0x80, 0x37, 0xe6, 0xd0,
+		0x05, 0xfc, 0x10, 0x0e, 0xdd, 0xbb, 0x00, 0xc5,
+		0x07, 0x20, 0x59, 0xd3, 0x41, 0x17, 0x86, 0x46,
+		0xab, 0x68, 0xf6, 0x48, 0x3c, 0xea, 0x5a, 0x06,
+		0x30, 0x21, 0x19, 0xed, 0x74, 0xbe, 0x0b, 0x97,
+		0xee, 0x91, 0x35, 0x94, 0x1f, 0xcb, 0x68, 0x7f,
+		0xe4, 0x48, 0xb0, 0x16, 0xfb, 0xf0, 0x74, 0xdb,
+		0x06, 0x59, 0x2e, 0x5a, 0x9c, 0xce, 0x8f, 0x7d,
+		0xba, 0x48, 0xd5, 0x3f, 0x5c, 0xb0, 0xc2, 0x33,
+		0x48, 0x60, 0x17, 0x08, 0x85, 0xba, 0xff, 0xb9,
+		0x34, 0x0a, 0x3d, 0x8f, 0x21, 0x13, 0x12, 0x1b
+};
+
+static const uint8_t AES_CBC_ciphertext_1536B[] = {
+		0x89, 0x93, 0x05, 0x99, 0xa9, 0xed, 0xea, 0x62,
+		0xc9, 0xda, 0x51, 0x15, 0xce, 0x42, 0x91, 0xc3,
+		0x80, 0xc8, 0x03, 0x88, 0xc2, 0x63, 0xda, 0x53,
+		0x1a, 0xf3, 0xeb, 0xd5, 0xba, 0x6f, 0x23, 0xb2,
+		0xed, 0x8f, 0x89, 0xb1, 0xb3, 0xca, 0x90, 0x7a,
+		0xdd, 0x3f, 0xf6, 0xca, 0x86, 0x58, 0x54, 0xbc,
+		0xab, 0x0f, 0xf4, 0xab, 0x6d, 0x5d, 0x42, 0xd0,
+		0x17, 0x49, 0x17, 0xd1, 0x93, 0xea, 0xe8, 0x22,
+		0xc1, 0x34, 0x9f, 0x3a, 0x3b, 0xaa, 0xe9, 0x1b,
+		0x93, 0xff, 0x6b, 0x68, 0xba, 0xe6, 0xd2, 0x39,
+		0x3d, 0x55, 0x34, 0x8f, 0x98, 0x86, 0xb4, 0xd8,
+		0x7c, 0x0d, 0x3e, 0x01, 0x63, 0x04, 0x01, 0xff,
+		0x16, 0x0f, 0x51, 0x5f, 0x73, 0x53, 0xf0, 0x3a,
+		0x38, 0xb4, 0x4d, 0x8d, 0xaf, 0xa3, 0xca, 0x2f,
+		0x6f, 0xdf, 0xc0, 0x41, 0x6c, 0x48, 0x60, 0x1a,
+		0xe4, 0xe7, 0x8a, 0x65, 0x6f, 0x8d, 0xd7, 0xe1,
+		0x10, 0xab, 0x78, 0x5b, 0xb9, 0x69, 0x1f, 0xe0,
+		0x5c, 0xf1, 0x19, 0x12, 0x21, 0xc7, 0x51, 0xbc,
+		0x61, 0x5f, 0xc0, 0x36, 0x17, 0xc0, 0x28, 0xd9,
+		0x51, 0xcb, 0x43, 0xd9, 0xfa, 0xd1, 0xad, 0x79,
+		0x69, 0x86, 0x49, 0xc5, 0xe5, 0x69, 0x27, 0xce,
+		0x22, 0xd0, 0xe1, 0x6a, 0xf9, 0x02, 0xca, 0x6c,
+		0x34, 0xc7, 0xb8, 0x02, 0xc1, 0x38, 0x7f, 0xd5,
+		0x15, 0xf5, 0xd6, 0xeb, 0xf9, 0x30, 0x40, 0x43,
+		0xea, 0x87, 0xde, 0x35, 0xf6, 0x83, 0x59, 0x09,
+		0x68, 0x62, 0x00, 0x87, 0xb8, 0xe7, 0xca, 0x05,
+		0x0f, 0xac, 0x42, 0x58, 0x45, 0xaa, 0xc9, 0x9b,
+		0xfd, 0x2a, 0xda, 0x65, 0x33, 0x93, 0x9d, 0xc6,
+		0x93, 0x8d, 0xe2, 0xc5, 0x71, 0xc1, 0x5c, 0x13,
+		0xde, 0x7b, 0xd4, 0xb9, 0x4c, 0x35, 0x61, 0x85,
+		0x90, 0x78, 0xf7, 0x81, 0x98, 0x45, 0x99, 0x24,
+		0x58, 0x73, 0x28, 0xf8, 0x31, 0xab, 0x54, 0x2e,
+		0xc0, 0x38, 0x77, 0x25, 0x5c, 0x06, 0x9c, 0xc3,
+		0x69, 0x21, 0x92, 0x76, 0xe1, 0x16, 0xdc, 0xa9,
+		0xee, 0xb6, 0x80, 0x66, 0x43, 0x11, 0x24, 0xb3,
+		0x07, 0x17, 0x89, 0x0f, 0xcb, 0xe0, 0x60, 0xa8,
+		0x9d, 0x06, 0x4b, 0x6e, 0x72, 0xb7, 0xbc, 0x4f,
+		0xb8, 0xc0, 0x80, 0xa2, 0xfb, 0x46, 0x5b, 0x8f,
+		0x11, 0x01, 0x92, 0x9d, 0x37, 0x09, 0x98, 0xc8,
+		0x0a, 0x46, 0xae, 0x12, 0xac, 0x61, 0x3f, 0xe7,
+		0x41, 0x1a, 0xaa, 0x2e, 0xdc, 0xd7, 0x2a, 0x47,
+		0xee, 0xdf, 0x08, 0xd1, 0xff, 0xea, 0x13, 0xc6,
+		0x05, 0xdb, 0x29, 0xcc, 0x03, 0xba, 0x7b, 0x6d,
+		0x40, 0xc1, 0xc9, 0x76, 0x75, 0x03, 0x7a, 0x71,
+		0xc9, 0x5f, 0xd9, 0xe0, 0x61, 0x69, 0x36, 0x8f,
+		0xb2, 0xbc, 0x28, 0xf3, 0x90, 0x71, 0xda, 0x5f,
+		0x08, 0xd5, 0x0d, 0xc1, 0xe6, 0xbd, 0x2b, 0xc6,
+		0x6c, 0x42, 0xfd, 0xbf, 0x10, 0xe8, 0x5f, 0x87,
+		0x3d, 0x21, 0x42, 0x85, 0x01, 0x0a, 0xbf, 0x8e,
+		0x49, 0xd3, 0x9c, 0x89, 0x3b, 0xea, 0xe1, 0xbf,
+		0xe9, 0x9b, 0x5e, 0x0e, 0xb8, 0xeb, 0xcd, 0x3a,
+		0xf6, 0x29, 0x41, 0x35, 0xdd, 0x9b, 0x13, 0x24,
+		0xe0, 0x1d, 0x8a, 0xcb, 0x20, 0xf8, 0x41, 0x51,
+		0x3e, 0x23, 0x8c, 0x67, 0x98, 0x39, 0x53, 0x77,
+		0x2a, 0x68, 0xf4, 0x3c, 0x7e, 0xd6, 0xc4, 0x6e,
+		0xf1, 0x53, 0xe9, 0xd8, 0x5c, 0xc1, 0xa9, 0x38,
+		0x6f, 0x5e, 0xe4, 0xd4, 0x29, 0x1c, 0x6c, 0xee,
+		0x2f, 0xea, 0xde, 0x61, 0x71, 0x5a, 0xea, 0xce,
+		0x23, 0x6e, 0x1b, 0x16, 0x43, 0xb7, 0xc0, 0xe3,
+		0x87, 0xa1, 0x95, 0x1e, 0x97, 0x4d, 0xea, 0xa6,
+		0xf7, 0x25, 0xac, 0x82, 0x2a, 0xd3, 0xa6, 0x99,
+		0x75, 0xdd, 0xc1, 0x55, 0x32, 0x6b, 0xea, 0x33,
+		0x88, 0xce, 0x06, 0xac, 0x15, 0x39, 0x19, 0xa3,
+		0x59, 0xaf, 0x7a, 0x1f, 0xd9, 0x72, 0x5e, 0xf7,
+		0x4c, 0xf3, 0x5d, 0x6b, 0xf2, 0x16, 0x92, 0xa8,
+		0x9e, 0x3d, 0xd4, 0x4c, 0x72, 0x55, 0x4e, 0x4a,
+		0xf7, 0x8b, 0x2f, 0x67, 0x5a, 0x90, 0xb7, 0xcf,
+		0x16, 0xd3, 0x7b, 0x5a, 0x9a, 0xc8, 0x9f, 0xbf,
+		0x01, 0x76, 0x3b, 0x86, 0x2c, 0x2a, 0x78, 0x10,
+		0x70, 0x05, 0x38, 0xf9, 0xdd, 0x2a, 0x1d, 0x00,
+		0x25, 0xb7, 0x10, 0xac, 0x3b, 0x3c, 0x4d, 0x3c,
+		0x01, 0x68, 0x3c, 0x5a, 0x29, 0xc2, 0xa0, 0x1b,
+		0x95, 0x67, 0xf9, 0x0a, 0x60, 0xb7, 0x11, 0x9c,
+		0x40, 0x45, 0xd7, 0xb0, 0xda, 0x49, 0x87, 0xcd,
+		0xb0, 0x9b, 0x61, 0x8c, 0xf4, 0x0d, 0x94, 0x1d,
+		0x79, 0x66, 0x13, 0x0b, 0xc6, 0x6b, 0x19, 0xee,
+		0xa0, 0x6b, 0x64, 0x7d, 0xc4, 0xff, 0x98, 0x72,
+		0x60, 0xab, 0x7f, 0x0f, 0x4d, 0x5d, 0x6b, 0xc3,
+		0xba, 0x5e, 0x0d, 0x04, 0xd9, 0x59, 0x17, 0xd0,
+		0x64, 0xbe, 0xfb, 0x58, 0xfc, 0xed, 0x18, 0xf6,
+		0xac, 0x19, 0xa4, 0xfd, 0x16, 0x59, 0x80, 0x58,
+		0xb8, 0x0f, 0x79, 0x24, 0x60, 0x18, 0x62, 0xa9,
+		0xa3, 0xa0, 0xe8, 0x81, 0xd6, 0xec, 0x5b, 0xfe,
+		0x5b, 0xb8, 0xa4, 0x00, 0xa9, 0xd0, 0x90, 0x17,
+		0xe5, 0x50, 0x3d, 0x2b, 0x12, 0x6e, 0x2a, 0x13,
+		0x65, 0x7c, 0xdf, 0xdf, 0xa7, 0xdd, 0x9f, 0x78,
+		0x5f, 0x8f, 0x4e, 0x90, 0xa6, 0x10, 0xe4, 0x7b,
+		0x68, 0x6b, 0xfd, 0xa9, 0x6d, 0x47, 0xfa, 0xec,
+		0x42, 0x35, 0x07, 0x12, 0x3e, 0x78, 0x23, 0x15,
+		0xff, 0xe2, 0x65, 0xc7, 0x47, 0x89, 0x2f, 0x97,
+		0x7c, 0xd7, 0x6b, 0x69, 0x35, 0x79, 0x6f, 0x85,
+		0xb4, 0xa9, 0x75, 0x04, 0x32, 0x9a, 0xfe, 0xf0,
+		0xce, 0xe3, 0xf1, 0xab, 0x15, 0x47, 0xe4, 0x9c,
+		0xc1, 0x48, 0x32, 0x3c, 0xbe, 0x44, 0x72, 0xc9,
+		0xaa, 0x50, 0x37, 0xa6, 0xbe, 0x41, 0xcf, 0xe8,
+		0x17, 0x4e, 0x37, 0xbe, 0xf1, 0x34, 0x2c, 0xd9,
+		0x60, 0x48, 0x09, 0xa5, 0x26, 0x00, 0x31, 0x77,
+		0x4e, 0xac, 0x7c, 0x89, 0x75, 0xe3, 0xde, 0x26,
+		0x4c, 0x32, 0x54, 0x27, 0x8e, 0x92, 0x26, 0x42,
+		0x85, 0x76, 0x01, 0x76, 0x62, 0x4c, 0x29, 0xe9,
+		0x38, 0x05, 0x51, 0x54, 0x97, 0xa3, 0x03, 0x59,
+		0x5e, 0xec, 0x0c, 0xe4, 0x96, 0xb7, 0x15, 0xa8,
+		0x41, 0x06, 0x2b, 0x78, 0x95, 0x24, 0xf6, 0x32,
+		0xc5, 0xec, 0xd7, 0x89, 0x28, 0x1e, 0xec, 0xb1,
+		0xc7, 0x21, 0x0c, 0xd3, 0x80, 0x7c, 0x5a, 0xe6,
+		0xb1, 0x3a, 0x52, 0x33, 0x84, 0x4e, 0x32, 0x6e,
+		0x7a, 0xf6, 0x43, 0x15, 0x5b, 0xa6, 0xba, 0xeb,
+		0xa8, 0xe4, 0xff, 0x4f, 0xbd, 0xbd, 0xa8, 0x5e,
+		0xbe, 0x27, 0xaf, 0xc5, 0xf7, 0x9e, 0xdf, 0x48,
+		0x22, 0xca, 0x6a, 0x0b, 0x3c, 0xd7, 0xe0, 0xdc,
+		0xf3, 0x71, 0x08, 0xdc, 0x28, 0x13, 0x08, 0xf2,
+		0x08, 0x1d, 0x9d, 0x7b, 0xd9, 0xde, 0x6f, 0xe6,
+		0xe8, 0x88, 0x18, 0xc2, 0xcd, 0x93, 0xc5, 0x38,
+		0x21, 0x68, 0x4c, 0x9a, 0xfb, 0xb6, 0x18, 0x16,
+		0x73, 0x2c, 0x1d, 0x6f, 0x95, 0xfb, 0x65, 0x4f,
+		0x7c, 0xec, 0x8d, 0x6c, 0xa8, 0xc0, 0x55, 0x28,
+		0xc6, 0xc3, 0xea, 0xeb, 0x05, 0xf5, 0x65, 0xeb,
+		0x53, 0xe1, 0x54, 0xef, 0xb8, 0x64, 0x98, 0x2d,
+		0x98, 0x9e, 0xc8, 0xfe, 0xa2, 0x07, 0x30, 0xf7,
+		0xf7, 0xae, 0xdb, 0x32, 0xf8, 0x71, 0x9d, 0x06,
+		0xdf, 0x9b, 0xda, 0x61, 0x7d, 0xdb, 0xae, 0x06,
+		0x24, 0x63, 0x74, 0xb6, 0xf3, 0x1b, 0x66, 0x09,
+		0x60, 0xff, 0x2b, 0x29, 0xf5, 0xa9, 0x9d, 0x61,
+		0x5d, 0x55, 0x10, 0x82, 0x21, 0xbb, 0x64, 0x0d,
+		0xef, 0x5c, 0xe3, 0x30, 0x1b, 0x60, 0x1e, 0x5b,
+		0xfe, 0x6c, 0xf5, 0x15, 0xa3, 0x86, 0x27, 0x58,
+		0x46, 0x00, 0x20, 0xcb, 0x86, 0x9a, 0x52, 0x29,
+		0x20, 0x68, 0x4d, 0x67, 0x88, 0x70, 0xc2, 0x31,
+		0xd8, 0xbb, 0xa5, 0xa7, 0x88, 0x7f, 0x66, 0xbc,
+		0xaa, 0x0f, 0xe1, 0x78, 0x7b, 0x97, 0x3c, 0xb7,
+		0xd7, 0xd8, 0x04, 0xe0, 0x09, 0x60, 0xc8, 0xd0,
+		0x9e, 0xe5, 0x6b, 0x31, 0x7f, 0x88, 0xfe, 0xc3,
+		0xfd, 0x89, 0xec, 0x76, 0x4b, 0xb3, 0xa7, 0x37,
+		0x03, 0xb7, 0xc6, 0x10, 0x7c, 0x9d, 0x0c, 0x75,
+		0xd3, 0x08, 0x14, 0x94, 0x03, 0x42, 0x25, 0x26,
+		0x85, 0xf7, 0xf0, 0x90, 0x06, 0x3e, 0x6f, 0x60,
+		0x52, 0x55, 0xd5, 0x0f, 0x79, 0x64, 0x69, 0x69,
+		0x46, 0xf9, 0x7f, 0x7f, 0x03, 0xf1, 0x1f, 0xdb,
+		0x39, 0x05, 0xba, 0x4a, 0x8f, 0x17, 0xe7, 0xba,
+		0xe2, 0x07, 0x7c, 0x1d, 0x9e, 0xbc, 0x94, 0xc0,
+		0x61, 0x59, 0x8e, 0x72, 0xaf, 0xfc, 0x99, 0xe4,
+		0xd5, 0xa8, 0xee, 0x0a, 0x48, 0x2d, 0x82, 0x8b,
+		0x34, 0x54, 0x8a, 0xce, 0xc7, 0xfa, 0xdd, 0xba,
+		0x54, 0xdf, 0xb3, 0x30, 0x33, 0x73, 0x2e, 0xd5,
+		0x52, 0xab, 0x49, 0x91, 0x4e, 0x0a, 0xd6, 0x2f,
+		0x67, 0xe4, 0xdd, 0x64, 0x48, 0x16, 0xd9, 0x85,
+		0xaa, 0x52, 0xa5, 0x0b, 0xd3, 0xb4, 0x2d, 0x77,
+		0x5e, 0x52, 0x77, 0x17, 0xcf, 0xbe, 0x88, 0x04,
+		0x01, 0x52, 0xe2, 0xf1, 0x46, 0xe2, 0x91, 0x30,
+		0x65, 0xcf, 0xc0, 0x65, 0x45, 0xc3, 0x7e, 0xf4,
+		0x2e, 0xb5, 0xaf, 0x6f, 0xab, 0x1a, 0xfa, 0x70,
+		0x35, 0xb8, 0x4f, 0x2d, 0x78, 0x90, 0x33, 0xb5,
+		0x9a, 0x67, 0xdb, 0x2f, 0x28, 0x32, 0xb6, 0x54,
+		0xab, 0x4c, 0x6b, 0x85, 0xed, 0x6c, 0x3e, 0x05,
+		0x2a, 0xc7, 0x32, 0xe8, 0xf5, 0xa3, 0x7b, 0x4e,
+		0x7b, 0x58, 0x24, 0x73, 0xf7, 0xfd, 0xc7, 0xc8,
+		0x6c, 0x71, 0x68, 0xb1, 0xf6, 0xc5, 0x9e, 0x1e,
+		0xe3, 0x5c, 0x25, 0xc0, 0x5b, 0x3e, 0x59, 0xa1,
+		0x18, 0x5a, 0xe8, 0xb5, 0xd1, 0x44, 0x13, 0xa3,
+		0xe6, 0x05, 0x76, 0xd2, 0x8d, 0x6e, 0x54, 0x68,
+		0x0c, 0xa4, 0x7b, 0x8b, 0xd3, 0x8c, 0x42, 0x13,
+		0x87, 0xda, 0xdf, 0x8f, 0xa5, 0x83, 0x7a, 0x42,
+		0x99, 0xb7, 0xeb, 0xe2, 0x79, 0xe0, 0xdb, 0xda,
+		0x33, 0xa8, 0x50, 0x3a, 0xd7, 0xe7, 0xd3, 0x61,
+		0x18, 0xb8, 0xaa, 0x2d, 0xc8, 0xd8, 0x2c, 0x28,
+		0xe5, 0x97, 0x0a, 0x7c, 0x6c, 0x7f, 0x09, 0xd7,
+		0x88, 0x80, 0xac, 0x12, 0xed, 0xf8, 0xc6, 0xb5,
+		0x2d, 0xd6, 0x63, 0x9b, 0x98, 0x35, 0x26, 0xde,
+		0xf6, 0x31, 0xee, 0x7e, 0xa0, 0xfb, 0x16, 0x98,
+		0xb1, 0x96, 0x1d, 0xee, 0xe3, 0x2f, 0xfb, 0x41,
+		0xdd, 0xea, 0x10, 0x1e, 0x03, 0x89, 0x18, 0xd2,
+		0x47, 0x0c, 0xa0, 0x57, 0xda, 0x76, 0x3a, 0x37,
+		0x2c, 0xe4, 0xf9, 0x77, 0xc8, 0x43, 0x5f, 0xcb,
+		0xd6, 0x85, 0xf7, 0x22, 0xe4, 0x32, 0x25, 0xa8,
+		0xdc, 0x21, 0xc0, 0xf5, 0x95, 0xb2, 0xf8, 0x83,
+		0xf0, 0x65, 0x61, 0x15, 0x48, 0x94, 0xb7, 0x03,
+		0x7f, 0x66, 0xa1, 0x39, 0x1f, 0xdd, 0xce, 0x96,
+		0xfe, 0x58, 0x81, 0x3d, 0x41, 0x11, 0x87, 0x13,
+		0x26, 0x1b, 0x6d, 0xf3, 0xca, 0x2e, 0x2c, 0x76,
+		0xd3, 0x2f, 0x6d, 0x49, 0x70, 0x53, 0x05, 0x96,
+		0xcc, 0x30, 0x2b, 0x83, 0xf2, 0xc6, 0xb2, 0x4b,
+		0x22, 0x13, 0x95, 0x42, 0xeb, 0x56, 0x4d, 0x22,
+		0xe6, 0x43, 0x6f, 0xba, 0xe7, 0x3b, 0xe5, 0x59,
+		0xce, 0x57, 0x88, 0x85, 0xb6, 0xbf, 0x15, 0x37,
+		0xb3, 0x7a, 0x7e, 0xc4, 0xbc, 0x99, 0xfc, 0xe4,
+		0x89, 0x00, 0x68, 0x39, 0xbc, 0x5a, 0xba, 0xab,
+		0x52, 0xab, 0xe6, 0x81, 0xfd, 0x93, 0x62, 0xe9,
+		0xb7, 0x12, 0xd1, 0x18, 0x1a, 0xb9, 0x55, 0x4a,
+		0x0f, 0xae, 0x35, 0x11, 0x04, 0x27, 0xf3, 0x42,
+		0x4e, 0xca, 0xdf, 0x9f, 0x12, 0x62, 0xea, 0x03,
+		0xc0, 0xa9, 0x22, 0x7b, 0x6c, 0x6c, 0xe3, 0xdf,
+		0x16, 0xad, 0x03, 0xc9, 0xfe, 0xa4, 0xdd, 0x4f
+};
+
+static const uint8_t AES_CBC_ciphertext_1792B[] = {
+		0x59, 0xcc, 0xfe, 0x8f, 0xb4, 0x9d, 0x0e, 0xd1,
+		0x85, 0xfc, 0x9b, 0x43, 0xc1, 0xb7, 0x54, 0x67,
+		0x01, 0xef, 0xb8, 0x71, 0x36, 0xdb, 0x50, 0x48,
+		0x7a, 0xea, 0xcf, 0xce, 0xba, 0x30, 0x10, 0x2e,
+		0x96, 0x2b, 0xfd, 0xcf, 0x00, 0xe3, 0x1f, 0xac,
+		0x66, 0x14, 0x30, 0x86, 0x49, 0xdb, 0x01, 0x8b,
+		0x07, 0xdd, 0x00, 0x9d, 0x0d, 0x5c, 0x19, 0x11,
+		0xe8, 0x44, 0x2b, 0x25, 0x70, 0xed, 0x7c, 0x33,
+		0x0d, 0xe3, 0x34, 0x93, 0x63, 0xad, 0x26, 0xb1,
+		0x11, 0x91, 0x34, 0x2e, 0x1d, 0x50, 0xaa, 0xd4,
+		0xef, 0x3a, 0x6d, 0xd7, 0x33, 0x20, 0x0d, 0x3f,
+		0x9b, 0xdd, 0xc3, 0xa5, 0xc5, 0xf1, 0x99, 0xdc,
+		0xea, 0x52, 0xda, 0x55, 0xea, 0xa2, 0x7a, 0xc5,
+		0x78, 0x44, 0x4a, 0x02, 0x33, 0x19, 0x62, 0x37,
+		0xf8, 0x8b, 0xd1, 0x0c, 0x21, 0xdf, 0x40, 0x19,
+		0x81, 0xea, 0xfb, 0x1c, 0xa7, 0xcc, 0x60, 0xfe,
+		0x63, 0x25, 0x8f, 0xf3, 0x73, 0x0f, 0x45, 0xe6,
+		0x6a, 0x18, 0xbf, 0xbe, 0xad, 0x92, 0x2a, 0x1e,
+		0x15, 0x65, 0x6f, 0xef, 0x92, 0xcd, 0x0e, 0x19,
+		0x3d, 0x42, 0xa8, 0xfc, 0x0d, 0x32, 0x58, 0xe0,
+		0x56, 0x9f, 0xd6, 0x9b, 0x8b, 0xec, 0xe0, 0x45,
+		0x4d, 0x7e, 0x73, 0x87, 0xff, 0x74, 0x92, 0x59,
+		0x60, 0x13, 0x93, 0xda, 0xec, 0xbf, 0xfa, 0x20,
+		0xb6, 0xe7, 0xdf, 0xc7, 0x10, 0xf5, 0x79, 0xb4,
+		0xd7, 0xac, 0xaf, 0x2b, 0x37, 0x52, 0x30, 0x1d,
+		0xbe, 0x0f, 0x60, 0x77, 0x3d, 0x03, 0x63, 0xa9,
+		0xae, 0xb1, 0xf3, 0xca, 0xca, 0xb4, 0x21, 0xd7,
+		0x6f, 0x2e, 0x5e, 0x9b, 0x68, 0x53, 0x80, 0xab,
+		0x30, 0x23, 0x0a, 0x72, 0x6b, 0xb1, 0xd8, 0x25,
+		0x5d, 0x3a, 0x62, 0x9b, 0x4f, 0x59, 0x3b, 0x79,
+		0xa8, 0x9e, 0x08, 0x6d, 0x37, 0xb0, 0xfc, 0x42,
+		0x51, 0x25, 0x86, 0xbd, 0x54, 0x5a, 0x95, 0x20,
+		0x6c, 0xac, 0xb9, 0x30, 0x1c, 0x03, 0xc9, 0x49,
+		0x38, 0x55, 0x31, 0x49, 0xed, 0xa9, 0x0e, 0xc3,
+		0x65, 0xb4, 0x68, 0x6b, 0x07, 0x4c, 0x0a, 0xf9,
+		0x21, 0x69, 0x7c, 0x9f, 0x28, 0x80, 0xe9, 0x49,
+		0x22, 0x7c, 0xec, 0x97, 0xf7, 0x70, 0xb4, 0xb8,
+		0x25, 0xe7, 0x80, 0x2c, 0x43, 0x24, 0x8a, 0x2e,
+		0xac, 0xa2, 0x84, 0x20, 0xe7, 0xf4, 0x6b, 0x86,
+		0x37, 0x05, 0xc7, 0x59, 0x04, 0x49, 0x2a, 0x99,
+		0x80, 0x46, 0x32, 0x19, 0xe6, 0x30, 0xce, 0xc0,
+		0xef, 0x6e, 0xec, 0xe5, 0x2f, 0x24, 0xc1, 0x78,
+		0x45, 0x02, 0xd3, 0x64, 0x99, 0xf5, 0xc7, 0xbc,
+		0x8f, 0x8c, 0x75, 0xb1, 0x0a, 0xc8, 0xc3, 0xbd,
+		0x5e, 0x7e, 0xbd, 0x0e, 0xdf, 0x4b, 0x96, 0x6a,
+		0xfd, 0x03, 0xdb, 0xd1, 0x31, 0x1e, 0x27, 0xf9,
+		0xe5, 0x83, 0x9a, 0xfc, 0x13, 0x4c, 0xd3, 0x04,
+		0xdb, 0xdb, 0x3f, 0x35, 0x93, 0x4e, 0x14, 0x6b,
+		0x00, 0x5c, 0xb6, 0x11, 0x50, 0xee, 0x61, 0x5c,
+		0x10, 0x5c, 0xd0, 0x90, 0x02, 0x2e, 0x12, 0xe0,
+		0x50, 0x44, 0xad, 0x75, 0xcd, 0x94, 0xcf, 0x92,
+		0xcb, 0xe3, 0xe8, 0x77, 0x4b, 0xd7, 0x1a, 0x7c,
+		0xdd, 0x6b, 0x49, 0x21, 0x7c, 0xe8, 0x2c, 0x25,
+		0x49, 0x86, 0x1e, 0x54, 0xae, 0xfc, 0x0e, 0x80,
+		0xb1, 0xd5, 0xa5, 0x23, 0xcf, 0xcc, 0x0e, 0x11,
+		0xe2, 0x7c, 0x3c, 0x25, 0x78, 0x64, 0x03, 0xa1,
+		0xdd, 0x9f, 0x74, 0x12, 0x7b, 0x21, 0xb5, 0x73,
+		0x15, 0x3c, 0xed, 0xad, 0x07, 0x62, 0x21, 0x79,
+		0xd4, 0x2f, 0x0d, 0x72, 0xe9, 0x7c, 0x6b, 0x96,
+		0x6e, 0xe5, 0x36, 0x4a, 0xd2, 0x38, 0xe1, 0xff,
+		0x6e, 0x26, 0xa4, 0xac, 0x83, 0x07, 0xe6, 0x67,
+		0x74, 0x6c, 0xec, 0x8b, 0x4b, 0x79, 0x33, 0x50,
+		0x2f, 0x8f, 0xa0, 0x8f, 0xfa, 0x38, 0x6a, 0xa2,
+		0x3a, 0x42, 0x85, 0x15, 0x90, 0xd0, 0xb3, 0x0d,
+		0x8a, 0xe4, 0x60, 0x03, 0xef, 0xf9, 0x65, 0x8a,
+		0x4e, 0x50, 0x8c, 0x65, 0xba, 0x61, 0x16, 0xc3,
+		0x93, 0xb7, 0x75, 0x21, 0x98, 0x25, 0x60, 0x6e,
+		0x3d, 0x68, 0xba, 0x7c, 0xe4, 0xf3, 0xd9, 0x9b,
+		0xfb, 0x7a, 0xed, 0x1f, 0xb3, 0x4b, 0x88, 0x74,
+		0x2c, 0xb8, 0x8c, 0x22, 0x95, 0xce, 0x90, 0xf1,
+		0xdb, 0x80, 0xa6, 0x39, 0xae, 0x82, 0xa1, 0xef,
+		0x75, 0xec, 0xfe, 0xf1, 0xe8, 0x04, 0xfd, 0x99,
+		0x1b, 0x5f, 0x45, 0x87, 0x4f, 0xfa, 0xa2, 0x3e,
+		0x3e, 0xb5, 0x01, 0x4b, 0x46, 0xeb, 0x13, 0x9a,
+		0xe4, 0x7d, 0x03, 0x87, 0xb1, 0x59, 0x91, 0x8e,
+		0x37, 0xd3, 0x16, 0xce, 0xef, 0x4b, 0xe9, 0x46,
+		0x8d, 0x2a, 0x50, 0x2f, 0x41, 0xd3, 0x7b, 0xcf,
+		0xf0, 0xb7, 0x8b, 0x65, 0x0f, 0xa3, 0x27, 0x10,
+		0xe9, 0xa9, 0xe9, 0x2c, 0xbe, 0xbb, 0x82, 0xe3,
+		0x7b, 0x0b, 0x81, 0x3e, 0xa4, 0x6a, 0x4f, 0x3b,
+		0xd5, 0x61, 0xf8, 0x47, 0x04, 0x99, 0x5b, 0xff,
+		0xf3, 0x14, 0x6e, 0x57, 0x5b, 0xbf, 0x1b, 0xb4,
+		0x3f, 0xf9, 0x31, 0xf6, 0x95, 0xd5, 0x10, 0xa9,
+		0x72, 0x28, 0x23, 0xa9, 0x6a, 0xa2, 0xcf, 0x7d,
+		0xe3, 0x18, 0x95, 0xda, 0xbc, 0x6f, 0xe9, 0xd8,
+		0xef, 0x49, 0x3f, 0xd3, 0xef, 0x1f, 0xe1, 0x50,
+		0xe8, 0x8a, 0xc0, 0xce, 0xcc, 0xb7, 0x5e, 0x0e,
+		0x8b, 0x95, 0x80, 0xfd, 0x58, 0x2a, 0x9b, 0xc8,
+		0xb4, 0x17, 0x04, 0x46, 0x74, 0xd4, 0x68, 0x91,
+		0x33, 0xc8, 0x31, 0x15, 0x84, 0x16, 0x35, 0x03,
+		0x64, 0x6d, 0xa9, 0x4e, 0x20, 0xeb, 0xa9, 0x3f,
+		0x21, 0x5e, 0x9b, 0x09, 0xc3, 0x45, 0xf8, 0x7c,
+		0x59, 0x62, 0x29, 0x9a, 0x5c, 0xcf, 0xb4, 0x27,
+		0x5e, 0x13, 0xea, 0xb3, 0xef, 0xd9, 0x01, 0x2a,
+		0x65, 0x5f, 0x14, 0xf4, 0xbf, 0x28, 0x89, 0x3d,
+		0xdd, 0x9d, 0x52, 0xbd, 0x9e, 0x5b, 0x3b, 0xd2,
+		0xc2, 0x81, 0x35, 0xb6, 0xac, 0xdd, 0x27, 0xc3,
+		0x7b, 0x01, 0x5a, 0x6d, 0x4c, 0x5e, 0x2c, 0x30,
+		0xcb, 0x3a, 0xfa, 0xc1, 0xd7, 0x31, 0x67, 0x3e,
+		0x08, 0x6a, 0xe8, 0x8c, 0x75, 0xac, 0x1a, 0x6a,
+		0x52, 0xf7, 0x51, 0xcd, 0x85, 0x3f, 0x3c, 0xa7,
+		0xea, 0xbc, 0xd7, 0x18, 0x9e, 0x27, 0x73, 0xe6,
+		0x2b, 0x58, 0xb6, 0xd2, 0x29, 0x68, 0xd5, 0x8f,
+		0x00, 0x4d, 0x55, 0xf6, 0x61, 0x5a, 0xcc, 0x51,
+		0xa6, 0x5e, 0x85, 0xcb, 0x0b, 0xfd, 0x06, 0xca,
+		0xf5, 0xbf, 0x0d, 0x13, 0x74, 0x78, 0x6d, 0x9e,
+		0x20, 0x11, 0x84, 0x3e, 0x78, 0x17, 0x04, 0x4f,
+		0x64, 0x2c, 0x3b, 0x3e, 0x93, 0x7b, 0x58, 0x33,
+		0x07, 0x52, 0xf7, 0x60, 0x6a, 0xa8, 0x3b, 0x19,
+		0x27, 0x7a, 0x93, 0xc5, 0x53, 0xad, 0xec, 0xf6,
+		0xc8, 0x94, 0xee, 0x92, 0xea, 0xee, 0x7e, 0xea,
+		0xb9, 0x5f, 0xac, 0x59, 0x5d, 0x2e, 0x78, 0x53,
+		0x72, 0x81, 0x92, 0xdd, 0x1c, 0x63, 0xbe, 0x02,
+		0xeb, 0xa8, 0x1b, 0x2a, 0x6e, 0x72, 0xe3, 0x2d,
+		0x84, 0x0d, 0x8a, 0x22, 0xf6, 0xba, 0xab, 0x04,
+		0x8e, 0x04, 0x24, 0xdb, 0xcc, 0xe2, 0x69, 0xeb,
+		0x4e, 0xfa, 0x6b, 0x5b, 0xc8, 0xc0, 0xd9, 0x25,
+		0xcb, 0x40, 0x8d, 0x4b, 0x8e, 0xa0, 0xd4, 0x72,
+		0x98, 0x36, 0x46, 0x3b, 0x4f, 0x5f, 0x96, 0x84,
+		0x03, 0x28, 0x86, 0x4d, 0xa1, 0x8a, 0xd7, 0xb2,
+		0x5b, 0x27, 0x01, 0x80, 0x62, 0x49, 0x56, 0xb9,
+		0xa0, 0xa1, 0xe3, 0x6e, 0x22, 0x2a, 0x5d, 0x03,
+		0x86, 0x40, 0x36, 0x22, 0x5e, 0xd2, 0xe5, 0xc0,
+		0x6b, 0xfa, 0xac, 0x80, 0x4e, 0x09, 0x99, 0xbc,
+		0x2f, 0x9b, 0xcc, 0xf3, 0x4e, 0xf7, 0x99, 0x98,
+		0x11, 0x6e, 0x6f, 0x62, 0x22, 0x6b, 0x92, 0x95,
+		0x3b, 0xc3, 0xd2, 0x8e, 0x0f, 0x07, 0xc2, 0x51,
+		0x5c, 0x4d, 0xb2, 0x6e, 0xc0, 0x27, 0x73, 0xcd,
+		0x57, 0xb7, 0xf0, 0xe9, 0x2e, 0xc8, 0xe2, 0x0c,
+		0xd1, 0xb5, 0x0f, 0xff, 0xf9, 0xec, 0x38, 0xba,
+		0x97, 0xd6, 0x94, 0x9b, 0xd1, 0x79, 0xb6, 0x6a,
+		0x01, 0x17, 0xe4, 0x7e, 0xa6, 0xd5, 0x86, 0x19,
+		0xae, 0xf3, 0xf0, 0x62, 0x73, 0xc0, 0xf0, 0x0a,
+		0x7a, 0x96, 0x93, 0x72, 0x89, 0x7e, 0x25, 0x57,
+		0xf8, 0xf7, 0xd5, 0x1e, 0xe5, 0xac, 0xd6, 0x38,
+		0x4f, 0xe8, 0x81, 0xd1, 0x53, 0x41, 0x07, 0x2d,
+		0x58, 0x34, 0x1c, 0xef, 0x74, 0x2e, 0x61, 0xca,
+		0xd3, 0xeb, 0xd6, 0x93, 0x0a, 0xf2, 0xf2, 0x86,
+		0x9c, 0xe3, 0x7a, 0x52, 0xf5, 0x42, 0xf1, 0x8b,
+		0x10, 0xf2, 0x25, 0x68, 0x7e, 0x61, 0xb1, 0x19,
+		0xcf, 0x8f, 0x5a, 0x53, 0xb7, 0x68, 0x4f, 0x1a,
+		0x71, 0xe9, 0x83, 0x91, 0x3a, 0x78, 0x0f, 0xf7,
+		0xd4, 0x74, 0xf5, 0x06, 0xd2, 0x88, 0xb0, 0x06,
+		0xe5, 0xc0, 0xfb, 0xb3, 0x91, 0xad, 0xc0, 0x84,
+		0x31, 0xf2, 0x3a, 0xcf, 0x63, 0xe6, 0x4a, 0xd3,
+		0x78, 0xbe, 0xde, 0x73, 0x3e, 0x02, 0x8e, 0xb8,
+		0x3a, 0xf6, 0x55, 0xa7, 0xf8, 0x5a, 0xb5, 0x0e,
+		0x0c, 0xc5, 0xe5, 0x66, 0xd5, 0xd2, 0x18, 0xf3,
+		0xef, 0xa5, 0xc9, 0x68, 0x69, 0xe0, 0xcd, 0x00,
+		0x33, 0x99, 0x6e, 0xea, 0xcb, 0x06, 0x7a, 0xe1,
+		0xe1, 0x19, 0x0b, 0xe7, 0x08, 0xcd, 0x09, 0x1b,
+		0x85, 0xec, 0xc4, 0xd4, 0x75, 0xf0, 0xd6, 0xfb,
+		0x84, 0x95, 0x07, 0x44, 0xca, 0xa5, 0x2a, 0x6c,
+		0xc2, 0x00, 0x58, 0x08, 0x87, 0x9e, 0x0a, 0xd4,
+		0x06, 0xe2, 0x91, 0x5f, 0xb7, 0x1b, 0x11, 0xfa,
+		0x85, 0xfc, 0x7c, 0xf2, 0x0f, 0x6e, 0x3c, 0x8a,
+		0xe1, 0x0f, 0xa0, 0x33, 0x84, 0xce, 0x81, 0x4d,
+		0x32, 0x4d, 0xeb, 0x41, 0xcf, 0x5a, 0x05, 0x60,
+		0x47, 0x6c, 0x2a, 0xc4, 0x17, 0xd5, 0x16, 0x3a,
+		0xe4, 0xe7, 0xab, 0x84, 0x94, 0x22, 0xff, 0x56,
+		0xb0, 0x0c, 0x92, 0x6c, 0x19, 0x11, 0x4c, 0xb3,
+		0xed, 0x58, 0x48, 0x84, 0x2a, 0xe2, 0x19, 0x2a,
+		0xe1, 0xc0, 0x56, 0x82, 0x3c, 0x83, 0xb4, 0x58,
+		0x2d, 0xf0, 0xb5, 0x1e, 0x76, 0x85, 0x51, 0xc2,
+		0xe4, 0x95, 0x27, 0x96, 0xd1, 0x90, 0xc3, 0x17,
+		0x75, 0xa1, 0xbb, 0x46, 0x5f, 0xa6, 0xf2, 0xef,
+		0x71, 0x56, 0x92, 0xc5, 0x8a, 0x85, 0x52, 0xe4,
+		0x63, 0x21, 0x6f, 0x55, 0x85, 0x2b, 0x6b, 0x0d,
+		0xc9, 0x92, 0x77, 0x67, 0xe3, 0xff, 0x2a, 0x2b,
+		0x90, 0x01, 0x3d, 0x74, 0x63, 0x04, 0x61, 0x3c,
+		0x8e, 0xf8, 0xfc, 0x04, 0xdd, 0x21, 0x85, 0x92,
+		0x1e, 0x4d, 0x51, 0x8d, 0xb5, 0x6b, 0xf1, 0xda,
+		0x96, 0xf5, 0x8e, 0x3c, 0x38, 0x5a, 0xac, 0x9b,
+		0xba, 0x0c, 0x84, 0x5d, 0x50, 0x12, 0xc7, 0xc5,
+		0x7a, 0xcb, 0xb1, 0xfa, 0x16, 0x93, 0xdf, 0x98,
+		0xda, 0x3f, 0x49, 0xa3, 0x94, 0x78, 0x70, 0xc7,
+		0x0b, 0xb6, 0x91, 0xa6, 0x16, 0x2e, 0xcf, 0xfd,
+		0x51, 0x6a, 0x5b, 0xad, 0x7a, 0xdd, 0xa9, 0x48,
+		0x48, 0xac, 0xd6, 0x45, 0xbc, 0x23, 0x31, 0x1d,
+		0x86, 0x54, 0x8a, 0x7f, 0x04, 0x97, 0x71, 0x9e,
+		0xbc, 0x2e, 0x6b, 0xd9, 0x33, 0xc8, 0x20, 0xc9,
+		0xe0, 0x25, 0x86, 0x59, 0x15, 0xcf, 0x63, 0xe5,
+		0x99, 0xf1, 0x24, 0xf1, 0xba, 0xc4, 0x15, 0x02,
+		0xe2, 0xdb, 0xfe, 0x4a, 0xf8, 0x3b, 0x91, 0x13,
+		0x8d, 0x03, 0x81, 0x9f, 0xb3, 0x3f, 0x04, 0x03,
+		0x58, 0xc0, 0xef, 0x27, 0x82, 0x14, 0xd2, 0x7f,
+		0x93, 0x70, 0xb7, 0xb2, 0x02, 0x21, 0xb3, 0x07,
+		0x7f, 0x1c, 0xef, 0x88, 0xee, 0x29, 0x7a, 0x0b,
+		0x3d, 0x75, 0x5a, 0x93, 0xfe, 0x7f, 0x14, 0xf7,
+		0x4e, 0x4b, 0x7f, 0x21, 0x02, 0xad, 0xf9, 0x43,
+		0x29, 0x1a, 0xe8, 0x1b, 0xf5, 0x32, 0xb2, 0x96,
+		0xe6, 0xe8, 0x96, 0x20, 0x9b, 0x96, 0x8e, 0x7b,
+		0xfe, 0xd8, 0xc9, 0x9c, 0x65, 0x16, 0xd6, 0x68,
+		0x95, 0xf8, 0x22, 0xe2, 0xae, 0x84, 0x03, 0xfd,
+		0x87, 0xa2, 0x72, 0x79, 0x74, 0x95, 0xfa, 0xe1,
+		0xfe, 0xd0, 0x4e, 0x3d, 0x39, 0x2e, 0x67, 0x55,
+		0x71, 0x6c, 0x89, 0x33, 0x49, 0x0c, 0x1b, 0x46,
+		0x92, 0x31, 0x6f, 0xa6, 0xf0, 0x09, 0xbd, 0x2d,
+		0xe2, 0xca, 0xda, 0x18, 0x33, 0xce, 0x67, 0x37,
+		0xfd, 0x6f, 0xcb, 0x9d, 0xbd, 0x42, 0xbc, 0xb2,
+		0x9c, 0x28, 0xcd, 0x65, 0x3c, 0x61, 0xbc, 0xde,
+		0x9d, 0xe1, 0x2a, 0x3e, 0xbf, 0xee, 0x3c, 0xcb,
+		0xb1, 0x50, 0xa9, 0x2c, 0xbe, 0xb5, 0x43, 0xd0,
+		0xec, 0x29, 0xf9, 0x16, 0x6f, 0x31, 0xd9, 0x9b,
+		0x92, 0xb1, 0x32, 0xae, 0x0f, 0xb6, 0x9d, 0x0e,
+		0x25, 0x7f, 0x89, 0x1f, 0x1d, 0x01, 0x68, 0xab,
+		0x3d, 0xd1, 0x74, 0x5b, 0x4c, 0x38, 0x7f, 0x3d,
+		0x33, 0xa5, 0xa2, 0x9f, 0xda, 0x84, 0xa5, 0x82,
+		0x2d, 0x16, 0x66, 0x46, 0x08, 0x30, 0x14, 0x48,
+		0x5e, 0xca, 0xe3, 0xf4, 0x8c, 0xcb, 0x32, 0xc6,
+		0xf1, 0x43, 0x62, 0xc6, 0xef, 0x16, 0xfa, 0x43,
+		0xae, 0x9c, 0x53, 0xe3, 0x49, 0x45, 0x80, 0xfd,
+		0x1d, 0x8c, 0xa9, 0x6d, 0x77, 0x76, 0xaa, 0x40,
+		0xc4, 0x4e, 0x7b, 0x78, 0x6b, 0xe0, 0x1d, 0xce,
+		0x56, 0x3d, 0xf0, 0x11, 0xfe, 0x4f, 0x6a, 0x6d,
+		0x0f, 0x4f, 0x90, 0x38, 0x92, 0x17, 0xfa, 0x56,
+		0x12, 0xa6, 0xa1, 0x0a, 0xea, 0x2f, 0x50, 0xf9,
+		0x60, 0x66, 0x6c, 0x7d, 0x5a, 0x08, 0x8e, 0x3c,
+		0xf3, 0xf0, 0x33, 0x02, 0x11, 0x02, 0xfe, 0x4c,
+		0x56, 0x2b, 0x9f, 0x0c, 0xbd, 0x65, 0x8a, 0x83,
+		0xde, 0x7c, 0x05, 0x26, 0x93, 0x19, 0xcc, 0xf3,
+		0x71, 0x0e, 0xad, 0x2f, 0xb3, 0xc9, 0x38, 0x50,
+		0x64, 0xd5, 0x4c, 0x60, 0x5f, 0x02, 0x13, 0x34,
+		0xc9, 0x75, 0xc4, 0x60, 0xab, 0x2e, 0x17, 0x7d
+};
+
+static const uint8_t AES_CBC_ciphertext_2048B[] = {
+		0x8b, 0x55, 0xbd, 0xfd, 0x2b, 0x35, 0x76, 0x5c,
+		0xd1, 0x90, 0xd7, 0x6a, 0x63, 0x1e, 0x39, 0x71,
+		0x0d, 0x5c, 0xd8, 0x03, 0x00, 0x75, 0xf1, 0x07,
+		0x03, 0x8d, 0x76, 0xeb, 0x3b, 0x00, 0x1e, 0x33,
+		0x88, 0xfc, 0x8f, 0x08, 0x4d, 0x33, 0xf1, 0x3c,
+		0xee, 0xd0, 0x5d, 0x19, 0x8b, 0x3c, 0x50, 0x86,
+		0xfd, 0x8d, 0x58, 0x21, 0xb4, 0xae, 0x0f, 0x81,
+		0xe9, 0x9f, 0xc9, 0xc0, 0x90, 0xf7, 0x04, 0x6f,
+		0x39, 0x1d, 0x8a, 0x3f, 0x8d, 0x32, 0x23, 0xb5,
+		0x1f, 0xcc, 0x8a, 0x12, 0x2d, 0x46, 0x82, 0x5e,
+		0x6a, 0x34, 0x8c, 0xb1, 0x93, 0x70, 0x3b, 0xde,
+		0x55, 0xaf, 0x16, 0x35, 0x99, 0x84, 0xd5, 0x88,
+		0xc9, 0x54, 0xb1, 0xb2, 0xd3, 0xeb, 0x9e, 0x55,
+		0x9a, 0xa9, 0xa7, 0xf5, 0xda, 0x29, 0xcf, 0xe1,
+		0x98, 0x64, 0x45, 0x77, 0xf2, 0x12, 0x69, 0x8f,
+		0x78, 0xd8, 0x82, 0x41, 0xb2, 0x9f, 0xe2, 0x1c,
+		0x63, 0x9b, 0x24, 0x81, 0x67, 0x95, 0xa2, 0xff,
+		0x26, 0x9d, 0x65, 0x48, 0x61, 0x30, 0x66, 0x41,
+		0x68, 0x84, 0xbb, 0x59, 0x14, 0x8e, 0x9a, 0x62,
+		0xb6, 0xca, 0xda, 0xbe, 0x7c, 0x41, 0x52, 0x6e,
+		0x1b, 0x86, 0xbf, 0x08, 0xeb, 0x37, 0x84, 0x60,
+		0xe4, 0xc4, 0x1e, 0xa8, 0x4c, 0x84, 0x60, 0x2f,
+		0x70, 0x90, 0xf2, 0x26, 0xe7, 0x65, 0x0c, 0xc4,
+		0x58, 0x36, 0x8e, 0x4d, 0xdf, 0xff, 0x9a, 0x39,
+		0x93, 0x01, 0xcf, 0x6f, 0x6d, 0xde, 0xef, 0x79,
+		0xb0, 0xce, 0xe2, 0x98, 0xdb, 0x85, 0x8d, 0x62,
+		0x9d, 0xb9, 0x63, 0xfd, 0xf0, 0x35, 0xb5, 0xa9,
+		0x1b, 0xf9, 0xe5, 0xd4, 0x2e, 0x22, 0x2d, 0xcc,
+		0x42, 0xbf, 0x0e, 0x51, 0xf7, 0x15, 0x07, 0x32,
+		0x75, 0x5b, 0x74, 0xbb, 0x00, 0xef, 0xd4, 0x66,
+		0x8b, 0xad, 0x71, 0x53, 0x94, 0xd7, 0x7d, 0x2c,
+		0x40, 0x3e, 0x69, 0xa0, 0x4c, 0x86, 0x5e, 0x06,
+		0xed, 0xdf, 0x22, 0xe2, 0x24, 0x25, 0x4e, 0x9b,
+		0x5f, 0x49, 0x74, 0xba, 0xed, 0xb1, 0xa6, 0xeb,
+		0xae, 0x3f, 0xc6, 0x9e, 0x0b, 0x29, 0x28, 0x9a,
+		0xb6, 0xb2, 0x74, 0x58, 0xec, 0xa6, 0x4a, 0xed,
+		0xe5, 0x10, 0x00, 0x85, 0xe1, 0x63, 0x41, 0x61,
+		0x30, 0x7c, 0x97, 0xcf, 0x75, 0xcf, 0xb6, 0xf3,
+		0xf7, 0xda, 0x35, 0x3f, 0x85, 0x8c, 0x64, 0xca,
+		0xb7, 0xea, 0x7f, 0xe4, 0xa3, 0x4d, 0x30, 0x84,
+		0x8c, 0x9c, 0x80, 0x5a, 0x50, 0xa5, 0x64, 0xae,
+		0x26, 0xd3, 0xb5, 0x01, 0x73, 0x36, 0x8a, 0x92,
+		0x49, 0xc4, 0x1a, 0x94, 0x81, 0x9d, 0xf5, 0x6c,
+		0x50, 0xe1, 0x58, 0x0b, 0x75, 0xdd, 0x6b, 0x6a,
+		0xca, 0x69, 0xea, 0xc3, 0x33, 0x90, 0x9f, 0x3b,
+		0x65, 0x5d, 0x5e, 0xee, 0x31, 0xb7, 0x32, 0xfd,
+		0x56, 0x83, 0xb6, 0xfb, 0xa8, 0x04, 0xfc, 0x1e,
+		0x11, 0xfb, 0x02, 0x23, 0x53, 0x49, 0x45, 0xb1,
+		0x07, 0xfc, 0xba, 0xe7, 0x5f, 0x5d, 0x2d, 0x7f,
+		0x9e, 0x46, 0xba, 0xe9, 0xb0, 0xdb, 0x32, 0x04,
+		0xa4, 0xa7, 0x98, 0xab, 0x91, 0xcd, 0x02, 0x05,
+		0xf5, 0x74, 0x31, 0x98, 0x83, 0x3d, 0x33, 0x11,
+		0x0e, 0xe3, 0x8d, 0xa8, 0xc9, 0x0e, 0xf3, 0xb9,
+		0x47, 0x67, 0xe9, 0x79, 0x2b, 0x34, 0xcd, 0x9b,
+		0x45, 0x75, 0x29, 0xf0, 0xbf, 0xcc, 0xda, 0x3a,
+		0x91, 0xb2, 0x15, 0x27, 0x7a, 0xe5, 0xf5, 0x6a,
+		0x5e, 0xbe, 0x2c, 0x98, 0xe8, 0x40, 0x96, 0x4f,
+		0x8a, 0x09, 0xfd, 0xf6, 0xb2, 0xe7, 0x45, 0xb6,
+		0x08, 0xc1, 0x69, 0xe1, 0xb3, 0xc4, 0x24, 0x34,
+		0x07, 0x85, 0xd5, 0xa9, 0x78, 0xca, 0xfa, 0x4b,
+		0x01, 0x19, 0x4d, 0x95, 0xdc, 0xa5, 0xc1, 0x9c,
+		0xec, 0x27, 0x5b, 0xa6, 0x54, 0x25, 0xbd, 0xc8,
+		0x0a, 0xb7, 0x11, 0xfb, 0x4e, 0xeb, 0x65, 0x2e,
+		0xe1, 0x08, 0x9c, 0x3a, 0x45, 0x44, 0x33, 0xef,
+		0x0d, 0xb9, 0xff, 0x3e, 0x68, 0x9c, 0x61, 0x2b,
+		0x11, 0xb8, 0x5c, 0x47, 0x0f, 0x94, 0xf2, 0xf8,
+		0x0b, 0xbb, 0x99, 0x18, 0x85, 0xa3, 0xba, 0x44,
+		0xf3, 0x79, 0xb3, 0x63, 0x2c, 0x1f, 0x2a, 0x35,
+		0x3b, 0x23, 0x98, 0xab, 0xf4, 0x16, 0x36, 0xf8,
+		0xde, 0x86, 0xa4, 0xd4, 0x75, 0xff, 0x51, 0xf9,
+		0xeb, 0x42, 0x5f, 0x55, 0xe2, 0xbe, 0xd1, 0x5b,
+		0xb5, 0x38, 0xeb, 0xb4, 0x4d, 0xec, 0xec, 0x99,
+		0xe1, 0x39, 0x43, 0xaa, 0x64, 0xf7, 0xc9, 0xd8,
+		0xf2, 0x9a, 0x71, 0x43, 0x39, 0x17, 0xe8, 0xa8,
+		0xa2, 0xe2, 0xa4, 0x2c, 0x18, 0x11, 0x49, 0xdf,
+		0x18, 0xdd, 0x85, 0x6e, 0x65, 0x96, 0xe2, 0xba,
+		0xa1, 0x0a, 0x2c, 0xca, 0xdc, 0x5f, 0xe4, 0xf4,
+		0x35, 0x03, 0xb2, 0xa9, 0xda, 0xcf, 0xb7, 0x6d,
+		0x65, 0x82, 0x82, 0x67, 0x9d, 0x0e, 0xf3, 0xe8,
+		0x85, 0x6c, 0x69, 0xb8, 0x4c, 0xa6, 0xc6, 0x2e,
+		0x40, 0xb5, 0x54, 0x28, 0x95, 0xe4, 0x57, 0xe0,
+		0x5b, 0xf8, 0xde, 0x59, 0xe0, 0xfd, 0x89, 0x48,
+		0xac, 0x56, 0x13, 0x54, 0xb9, 0x1b, 0xf5, 0x59,
+		0x97, 0xb6, 0xb3, 0xe8, 0xac, 0x2d, 0xfc, 0xd2,
+		0xea, 0x57, 0x96, 0x57, 0xa8, 0x26, 0x97, 0x2c,
+		0x01, 0x89, 0x56, 0xea, 0xec, 0x8c, 0x53, 0xd5,
+		0xd7, 0x9e, 0xc9, 0x98, 0x0b, 0xad, 0x03, 0x75,
+		0xa0, 0x6e, 0x98, 0x8b, 0x97, 0x8d, 0x8d, 0x85,
+		0x7d, 0x74, 0xa7, 0x2d, 0xde, 0x67, 0x0c, 0xcd,
+		0x54, 0xb8, 0x15, 0x7b, 0xeb, 0xf5, 0x84, 0xb9,
+		0x78, 0xab, 0xd8, 0x68, 0x91, 0x1f, 0x6a, 0xa6,
+		0x28, 0x22, 0xf7, 0x00, 0x49, 0x00, 0xbe, 0x41,
+		0x71, 0x0a, 0xf5, 0xe7, 0x9f, 0xb4, 0x11, 0x41,
+		0x3f, 0xcd, 0xa9, 0xa9, 0x01, 0x8b, 0x6a, 0xeb,
+		0x54, 0x4c, 0x58, 0x92, 0x68, 0x02, 0x0e, 0xe9,
+		0xed, 0x65, 0x4c, 0xfb, 0x95, 0x48, 0x58, 0xa2,
+		0xaa, 0x57, 0x69, 0x13, 0x82, 0x0c, 0x2c, 0x4b,
+		0x5d, 0x4e, 0x18, 0x30, 0xef, 0x1c, 0xb1, 0x9d,
+		0x05, 0x05, 0x02, 0x1c, 0x97, 0xc9, 0x48, 0xfe,
+		0x5e, 0x7b, 0x77, 0xa3, 0x1f, 0x2a, 0x81, 0x42,
+		0xf0, 0x4b, 0x85, 0x12, 0x9c, 0x1f, 0x44, 0xb1,
+		0x14, 0x91, 0x92, 0x65, 0x77, 0xb1, 0x87, 0xa2,
+		0xfc, 0xa4, 0xe7, 0xd2, 0x9b, 0xf2, 0x17, 0xf0,
+		0x30, 0x1c, 0x8d, 0x33, 0xbc, 0x25, 0x28, 0x48,
+		0xfd, 0x30, 0x79, 0x0a, 0x99, 0x3e, 0xb4, 0x0f,
+		0x1e, 0xa6, 0x68, 0x76, 0x19, 0x76, 0x29, 0xac,
+		0x5d, 0xb8, 0x1e, 0x42, 0xd6, 0x85, 0x04, 0xbf,
+		0x64, 0x1c, 0x2d, 0x53, 0xe9, 0x92, 0x78, 0xf8,
+		0xc3, 0xda, 0x96, 0x92, 0x10, 0x6f, 0x45, 0x85,
+		0xaf, 0x5e, 0xcc, 0xa8, 0xc0, 0xc6, 0x2e, 0x73,
+		0x51, 0x3f, 0x5e, 0xd7, 0x52, 0x33, 0x71, 0x12,
+		0x6d, 0x85, 0xee, 0xea, 0x85, 0xa8, 0x48, 0x2b,
+		0x40, 0x64, 0x6d, 0x28, 0x73, 0x16, 0xd7, 0x82,
+		0xd9, 0x90, 0xed, 0x1f, 0xa7, 0x5c, 0xb1, 0x5c,
+		0x27, 0xb9, 0x67, 0x8b, 0xb4, 0x17, 0x13, 0x83,
+		0x5f, 0x09, 0x72, 0x0a, 0xd7, 0xa0, 0xec, 0x81,
+		0x59, 0x19, 0xb9, 0xa6, 0x5a, 0x37, 0x34, 0x14,
+		0x47, 0xf6, 0xe7, 0x6c, 0xd2, 0x09, 0x10, 0xe7,
+		0xdd, 0xbb, 0x02, 0xd1, 0x28, 0xfa, 0x01, 0x2c,
+		0x93, 0x64, 0x2e, 0x1b, 0x4c, 0x02, 0x52, 0xcb,
+		0x07, 0xa1, 0xb6, 0x46, 0x02, 0x80, 0xd9, 0x8f,
+		0x5c, 0x62, 0xbe, 0x78, 0x9e, 0x75, 0xc4, 0x97,
+		0x91, 0x39, 0x12, 0x65, 0xb9, 0x3b, 0xc2, 0xd1,
+		0xaf, 0xf2, 0x1f, 0x4e, 0x4d, 0xd1, 0xf0, 0x9f,
+		0xb7, 0x12, 0xfd, 0xe8, 0x75, 0x18, 0xc0, 0x9d,
+		0x8c, 0x70, 0xff, 0x77, 0x05, 0xb6, 0x1a, 0x1f,
+		0x96, 0x48, 0xf6, 0xfe, 0xd5, 0x5d, 0x98, 0xa5,
+		0x72, 0x1c, 0x84, 0x76, 0x3e, 0xb8, 0x87, 0x37,
+		0xdd, 0xd4, 0x3a, 0x45, 0xdd, 0x09, 0xd8, 0xe7,
+		0x09, 0x2f, 0x3e, 0x33, 0x9e, 0x7b, 0x8c, 0xe4,
+		0x85, 0x12, 0x4e, 0xf8, 0x06, 0xb7, 0xb1, 0x85,
+		0x24, 0x96, 0xd8, 0xfe, 0x87, 0x92, 0x81, 0xb1,
+		0xa3, 0x38, 0xb9, 0x56, 0xe1, 0xf6, 0x36, 0x41,
+		0xbb, 0xd6, 0x56, 0x69, 0x94, 0x57, 0xb3, 0xa4,
+		0xca, 0xa4, 0xe1, 0x02, 0x3b, 0x96, 0x71, 0xe0,
+		0xb2, 0x2f, 0x85, 0x48, 0x1b, 0x4a, 0x41, 0x80,
+		0x4b, 0x9c, 0xe0, 0xc9, 0x39, 0xb8, 0xb1, 0xca,
+		0x64, 0x77, 0x46, 0x58, 0xe6, 0x84, 0xd5, 0x2b,
+		0x65, 0xce, 0xe9, 0x09, 0xa3, 0xaa, 0xfb, 0x83,
+		0xa9, 0x28, 0x68, 0xfd, 0xcd, 0xfd, 0x76, 0x83,
+		0xe1, 0x20, 0x22, 0x77, 0x3a, 0xa3, 0xb2, 0x93,
+		0x14, 0x91, 0xfc, 0xe2, 0x17, 0x63, 0x2b, 0xa6,
+		0x29, 0x38, 0x7b, 0x9b, 0x8b, 0x15, 0x77, 0xd6,
+		0xaa, 0x92, 0x51, 0x53, 0x50, 0xff, 0xa0, 0x35,
+		0xa0, 0x59, 0x7d, 0xf0, 0x11, 0x23, 0x49, 0xdf,
+		0x5a, 0x21, 0xc2, 0xfe, 0x35, 0xa0, 0x1d, 0xe2,
+		0xae, 0xa2, 0x8a, 0x61, 0x5b, 0xf7, 0xf1, 0x1c,
+		0x1c, 0xec, 0xc4, 0xf6, 0xdc, 0xaa, 0xc8, 0xc2,
+		0xe5, 0xa1, 0x2e, 0x14, 0xe5, 0xc6, 0xc9, 0x73,
+		0x03, 0x78, 0xeb, 0xed, 0xe0, 0x3e, 0xc5, 0xf4,
+		0xf1, 0x50, 0xb2, 0x01, 0x91, 0x96, 0xf5, 0xbb,
+		0xe1, 0x32, 0xcd, 0xa8, 0x66, 0xbf, 0x73, 0x85,
+		0x94, 0xd6, 0x7e, 0x68, 0xc5, 0xe4, 0xed, 0xd5,
+		0xe3, 0x67, 0x4c, 0xa5, 0xb3, 0x1f, 0xdf, 0xf8,
+		0xb3, 0x73, 0x5a, 0xac, 0xeb, 0x46, 0x16, 0x24,
+		0xab, 0xca, 0xa4, 0xdd, 0x87, 0x0e, 0x24, 0x83,
+		0x32, 0x04, 0x4c, 0xd8, 0xda, 0x7d, 0xdc, 0xe3,
+		0x01, 0x93, 0xf3, 0xc1, 0x5b, 0xbd, 0xc3, 0x1d,
+		0x40, 0x62, 0xde, 0x94, 0x03, 0x85, 0x91, 0x2a,
+		0xa0, 0x25, 0x10, 0xd3, 0x32, 0x9f, 0x93, 0x00,
+		0xa7, 0x8a, 0xfa, 0x77, 0x7c, 0xaf, 0x4d, 0xc8,
+		0x7a, 0xf3, 0x16, 0x2b, 0xba, 0xeb, 0x74, 0x51,
+		0xb8, 0xdd, 0x32, 0xad, 0x68, 0x7d, 0xdd, 0xca,
+		0x60, 0x98, 0xc9, 0x9b, 0xb6, 0x5d, 0x4d, 0x3a,
+		0x66, 0x8a, 0xbe, 0x05, 0xf9, 0x0c, 0xc5, 0xba,
+		0x52, 0x82, 0x09, 0x1f, 0x5a, 0x66, 0x89, 0x69,
+		0xa3, 0x5d, 0x93, 0x50, 0x7d, 0x44, 0xc3, 0x2a,
+		0xb8, 0xab, 0xec, 0xa6, 0x5a, 0xae, 0x4a, 0x6a,
+		0xcd, 0xfd, 0xb6, 0xff, 0x3d, 0x98, 0x05, 0xd9,
+		0x5b, 0x29, 0xc4, 0x6f, 0xe0, 0x76, 0xe2, 0x3f,
+		0xec, 0xd7, 0xa4, 0x91, 0x63, 0xf5, 0x4e, 0x4b,
+		0xab, 0x20, 0x8c, 0x3a, 0x41, 0xed, 0x8b, 0x4b,
+		0xb9, 0x01, 0x21, 0xc0, 0x6d, 0xfd, 0x70, 0x5b,
+		0x20, 0x92, 0x41, 0x89, 0x74, 0xb7, 0xe9, 0x8b,
+		0xfc, 0x6d, 0x17, 0x3f, 0x7f, 0x89, 0x3d, 0x6b,
+		0x8f, 0xbc, 0xd2, 0x57, 0xe9, 0xc9, 0x6e, 0xa7,
+		0x19, 0x26, 0x18, 0xad, 0xef, 0xb5, 0x87, 0xbf,
+		0xb8, 0xa8, 0xd6, 0x7d, 0xdd, 0x5f, 0x94, 0x54,
+		0x09, 0x92, 0x2b, 0xf5, 0x04, 0xf7, 0x36, 0x69,
+		0x8e, 0xf4, 0xdc, 0x1d, 0x6e, 0x55, 0xbb, 0xe9,
+		0x13, 0x05, 0x83, 0x35, 0x9c, 0xed, 0xcf, 0x8c,
+		0x26, 0x8c, 0x7b, 0xc7, 0x0b, 0xba, 0xfd, 0xe2,
+		0x84, 0x5c, 0x2a, 0x79, 0x43, 0x99, 0xb2, 0xc3,
+		0x82, 0x87, 0xc8, 0xcd, 0x37, 0x6d, 0xa1, 0x2b,
+		0x39, 0xb2, 0x38, 0x99, 0xd9, 0xfc, 0x02, 0x15,
+		0x55, 0x21, 0x62, 0x59, 0xeb, 0x00, 0x86, 0x08,
+		0x20, 0xbe, 0x1a, 0x62, 0x4d, 0x7e, 0xdf, 0x68,
+		0x73, 0x5b, 0x5f, 0xaf, 0x84, 0x96, 0x2e, 0x1f,
+		0x6b, 0x03, 0xc9, 0xa6, 0x75, 0x18, 0xe9, 0xd4,
+		0xbd, 0xc8, 0xec, 0x9a, 0x5a, 0xb3, 0x99, 0xab,
+		0x5f, 0x7c, 0x08, 0x7f, 0x69, 0x4d, 0x52, 0xa2,
+		0x30, 0x17, 0x3b, 0x16, 0x15, 0x1b, 0x11, 0x62,
+		0x3e, 0x80, 0x4b, 0x85, 0x7c, 0x9c, 0xd1, 0x3a,
+		0x13, 0x01, 0x5e, 0x45, 0xf1, 0xc8, 0x5f, 0xcd,
+		0x0e, 0x21, 0xf5, 0x82, 0xd4, 0x7b, 0x5c, 0x45,
+		0x27, 0x6b, 0xef, 0xfe, 0xb8, 0xc0, 0x6f, 0xdc,
+		0x60, 0x7b, 0xe4, 0xd5, 0x75, 0x71, 0xe6, 0xe8,
+		0x7d, 0x6b, 0x6d, 0x80, 0xaf, 0x76, 0x41, 0x58,
+		0xb7, 0xac, 0xb7, 0x13, 0x2f, 0x81, 0xcc, 0xf9,
+		0x19, 0x97, 0xe8, 0xee, 0x40, 0x91, 0xfc, 0x89,
+		0x13, 0x1e, 0x67, 0x9a, 0xdb, 0x8f, 0x8f, 0xc7,
+		0x4a, 0xc9, 0xaf, 0x2f, 0x67, 0x01, 0x3c, 0xb8,
+		0xa8, 0x3e, 0x78, 0x93, 0x1b, 0xdf, 0xbb, 0x34,
+		0x0b, 0x1a, 0xfa, 0xc2, 0x2d, 0xc5, 0x1c, 0xec,
+		0x97, 0x4f, 0x48, 0x41, 0x15, 0x0e, 0x75, 0xed,
+		0x66, 0x8c, 0x17, 0x7f, 0xb1, 0x48, 0x13, 0xc1,
+		0xfb, 0x60, 0x06, 0xf9, 0x72, 0x41, 0x3e, 0xcf,
+		0x6e, 0xb6, 0xc8, 0xeb, 0x4b, 0x5a, 0xd2, 0x0c,
+		0x28, 0xda, 0x02, 0x7a, 0x46, 0x21, 0x42, 0xb5,
+		0x34, 0xda, 0xcb, 0x5e, 0xbd, 0x66, 0x5c, 0xca,
+		0xff, 0x52, 0x43, 0x89, 0xf9, 0x10, 0x9a, 0x9e,
+		0x9b, 0xe3, 0xb0, 0x51, 0xe9, 0xf3, 0x0a, 0x35,
+		0x77, 0x54, 0xcc, 0xac, 0xa6, 0xf1, 0x2e, 0x36,
+		0x89, 0xac, 0xc5, 0xc6, 0x62, 0x5a, 0xc0, 0x6d,
+		0xc4, 0xe1, 0xf7, 0x64, 0x30, 0xff, 0x11, 0x40,
+		0x13, 0x89, 0xd8, 0xd7, 0x73, 0x3f, 0x93, 0x08,
+		0x68, 0xab, 0x66, 0x09, 0x1a, 0xea, 0x78, 0xc9,
+		0x52, 0xf2, 0xfd, 0x93, 0x1b, 0x94, 0xbe, 0x5c,
+		0xe5, 0x00, 0x6e, 0x00, 0xb9, 0xea, 0x27, 0xaa,
+		0xb3, 0xee, 0xe3, 0xc8, 0x6a, 0xb0, 0xc1, 0x8e,
+		0x9b, 0x54, 0x40, 0x10, 0x96, 0x06, 0xe8, 0xb3,
+		0xf5, 0x55, 0x77, 0xd7, 0x5c, 0x94, 0xc1, 0x74,
+		0xf3, 0x07, 0x64, 0xac, 0x1c, 0xde, 0xc7, 0x22,
+		0xb0, 0xbf, 0x2a, 0x5a, 0xc0, 0x8f, 0x8a, 0x83,
+		0x50, 0xc2, 0x5e, 0x97, 0xa0, 0xbe, 0x49, 0x7e,
+		0x47, 0xaf, 0xa7, 0x20, 0x02, 0x35, 0xa4, 0x57,
+		0xd9, 0x26, 0x63, 0xdb, 0xf1, 0x34, 0x42, 0x89,
+		0x36, 0xd1, 0x77, 0x6f, 0xb1, 0xea, 0x79, 0x7e,
+		0x95, 0x10, 0x5a, 0xee, 0xa3, 0xae, 0x6f, 0xba,
+		0xa9, 0xef, 0x5a, 0x7e, 0x34, 0x03, 0x04, 0x07,
+		0x92, 0xd6, 0x07, 0x79, 0xaa, 0x14, 0x90, 0x97,
+		0x05, 0x4d, 0xa6, 0x27, 0x10, 0x5c, 0x25, 0x24,
+		0xcb, 0xcc, 0xf6, 0x77, 0x9e, 0x43, 0x23, 0xd4,
+		0x98, 0xef, 0x22, 0xa8, 0xad, 0xf2, 0x26, 0x08,
+		0x59, 0x69, 0xa4, 0xc3, 0x97, 0xe0, 0x5c, 0x6f,
+		0xeb, 0x3d, 0xd4, 0x62, 0x6e, 0x80, 0x61, 0x02,
+		0xf4, 0xfc, 0x94, 0x79, 0xbb, 0x4e, 0x6d, 0xd7,
+		0x30, 0x5b, 0x10, 0x11, 0x5a, 0x3d, 0xa7, 0x50,
+		0x1d, 0x9a, 0x13, 0x5f, 0x4f, 0xa8, 0xa7, 0xb6,
+		0x39, 0xc7, 0xea, 0xe6, 0x19, 0x61, 0x69, 0xc7,
+		0x9a, 0x3a, 0xeb, 0x9d, 0xdc, 0xf7, 0x06, 0x37,
+		0xbd, 0xac, 0xe3, 0x18, 0xff, 0xfe, 0x11, 0xdb,
+		0x67, 0x42, 0xb4, 0xea, 0xa8, 0xbd, 0xb0, 0x76,
+		0xd2, 0x74, 0x32, 0xc2, 0xa4, 0x9c, 0xe7, 0x60,
+		0xc5, 0x30, 0x9a, 0x57, 0x66, 0xcd, 0x0f, 0x02,
+		0x4c, 0xea, 0xe9, 0xd3, 0x2a, 0x5c, 0x09, 0xc2,
+		0xff, 0x6a, 0xde, 0x5d, 0xb7, 0xe9, 0x75, 0x6b,
+		0x29, 0x94, 0xd6, 0xf7, 0xc3, 0xdf, 0xfb, 0x70,
+		0xec, 0xb5, 0x8c, 0xb0, 0x78, 0x7a, 0xee, 0x52,
+		0x5f, 0x8c, 0xae, 0x85, 0xe5, 0x98, 0xa2, 0xb7,
+		0x7c, 0x02, 0x2a, 0xcc, 0x9e, 0xde, 0x99, 0x5f,
+		0x84, 0x20, 0xbb, 0xdc, 0xf2, 0xd2, 0x13, 0x46,
+		0x3c, 0xd6, 0x4d, 0xe7, 0x50, 0xef, 0x55, 0xc3,
+		0x96, 0x9f, 0xec, 0x6c, 0xd8, 0xe2, 0xea, 0xed,
+		0xc7, 0x33, 0xc9, 0xb3, 0x1c, 0x4f, 0x1d, 0x83,
+		0x1d, 0xe4, 0xdd, 0xb2, 0x24, 0x8f, 0xf9, 0xf5
+};
+
+
+static const uint8_t HMAC_SHA256_ciphertext_64B_digest[] = {
+		0xc5, 0x6d, 0x4f, 0x29, 0xf4, 0xd2, 0xcc, 0x87,
+		0x3c, 0x81, 0x02, 0x6d, 0x38, 0x7a, 0x67, 0x3e,
+		0x95, 0x9c, 0x5c, 0x8f, 0xda, 0x5c, 0x06, 0xe0,
+		0x65, 0xf1, 0x6c, 0x51, 0x52, 0x49, 0x3e, 0x5f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_128B_digest[] = {
+		0x76, 0x64, 0x2d, 0x69, 0x71, 0x5d, 0x6a, 0xd8,
+		0x9f, 0x74, 0x11, 0x2f, 0x58, 0xe0, 0x4a, 0x2f,
+		0x6c, 0x88, 0x5e, 0x4d, 0x9c, 0x79, 0x83, 0x1c,
+		0x8a, 0x14, 0xd0, 0x07, 0xfb, 0xbf, 0x6c, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_256B_digest[] = {
+		0x05, 0xa7, 0x44, 0xcd, 0x91, 0x8c, 0x95, 0xcf,
+		0x7b, 0x8f, 0xd3, 0x90, 0x86, 0x7e, 0x7b, 0xb9,
+		0x05, 0xd6, 0x6e, 0x7a, 0xc1, 0x7b, 0x26, 0xff,
+		0xd3, 0x4b, 0xe0, 0x22, 0x8b, 0xa8, 0x47, 0x52
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_512B_digest[] = {
+		0x08, 0xb7, 0x29, 0x54, 0x18, 0x7e, 0x97, 0x49,
+		0xc6, 0x7c, 0x9f, 0x94, 0xa5, 0x4f, 0xa2, 0x25,
+		0xd0, 0xe2, 0x30, 0x7b, 0xad, 0x93, 0xc9, 0x12,
+		0x0f, 0xf0, 0xf0, 0x71, 0xc2, 0xf6, 0x53, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_768B_digest[] = {
+		0xe4, 0x3e, 0x73, 0x93, 0x03, 0xaf, 0x6f, 0x9c,
+		0xca, 0x57, 0x3b, 0x4a, 0x6e, 0x83, 0x58, 0xf5,
+		0x66, 0xc2, 0xb4, 0xa7, 0xe0, 0xee, 0x63, 0x6b,
+		0x48, 0xb7, 0x50, 0x45, 0x69, 0xdf, 0x5c, 0x5b
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1024B_digest[] = {
+		0x03, 0xb9, 0x96, 0x26, 0xdc, 0x1c, 0xab, 0xe2,
+		0xf5, 0x70, 0x55, 0x15, 0x67, 0x6e, 0x48, 0x11,
+		0xe7, 0x67, 0xea, 0xfa, 0x5c, 0x6b, 0x28, 0x22,
+		0xc9, 0x0e, 0x67, 0x04, 0xb3, 0x71, 0x7f, 0x88
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1280B_digest[] = {
+		0x01, 0x91, 0xb8, 0x78, 0xd3, 0x21, 0x74, 0xa5,
+		0x1c, 0x8b, 0xd4, 0xd2, 0xc0, 0x49, 0xd7, 0xd2,
+		0x16, 0x46, 0x66, 0x85, 0x50, 0x6d, 0x08, 0xcc,
+		0xc7, 0x0a, 0xa3, 0x71, 0xcc, 0xde, 0xee, 0xdc
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1536B_digest[] = {
+		0xf2, 0xe5, 0xe9, 0x57, 0x53, 0xd7, 0x69, 0x28,
+		0x7b, 0x69, 0xb5, 0x49, 0xa3, 0x31, 0x56, 0x5f,
+		0xa4, 0xe9, 0x87, 0x26, 0x2f, 0xe0, 0x2d, 0xd6,
+		0x08, 0x44, 0x01, 0x71, 0x0c, 0x93, 0x85, 0x84
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1792B_digest[] = {
+		0xf6, 0x57, 0x62, 0x01, 0xbf, 0x2d, 0xea, 0x4a,
+		0xef, 0x43, 0x85, 0x60, 0x18, 0xdf, 0x8b, 0xb4,
+		0x60, 0xc0, 0xfd, 0x2f, 0x90, 0x15, 0xe6, 0x91,
+		0x56, 0x61, 0x68, 0x7f, 0x5e, 0x92, 0xa8, 0xdd
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_2048B_digest[] = {
+		0x81, 0x1a, 0x29, 0xbc, 0x6b, 0x9f, 0xbb, 0xb8,
+		0xef, 0x71, 0x7b, 0x1f, 0x6f, 0xd4, 0x7e, 0x68,
+		0x3a, 0x9c, 0xb9, 0x98, 0x22, 0x81, 0xfa, 0x95,
+		0xee, 0xbc, 0x7f, 0x23, 0x29, 0x88, 0x76, 0xb8
+};
+
+struct crypto_data_params {
+	const char *name;
+	uint16_t length;
+	const char *plaintext;
+	struct crypto_expected_output {
+		const uint8_t *ciphertext;
+		const uint8_t *digest;
+	} expected;
+};
+
+#define MAX_PACKET_SIZE_INDEX	10
+
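+/*
+ * Each entry points at the trailing <length> bytes of plaintext_quote
+ * (excluding its NUL terminator), together with the expected AES-CBC
+ * ciphertext and HMAC-SHA256 digest for that payload size.
+ */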
+struct crypto_data_params aes_cbc_hmac_sha256_output[MAX_PACKET_SIZE_INDEX] = {
+	{ "64B", 64, &plaintext_quote[sizeof(plaintext_quote) - 1 - 64],
+		{ AES_CBC_ciphertext_64B, HMAC_SHA256_ciphertext_64B_digest } },
+	{ "128B", 128, &plaintext_quote[sizeof(plaintext_quote) - 1 - 128],
+		{ AES_CBC_ciphertext_128B, HMAC_SHA256_ciphertext_128B_digest } },
+	{ "256B", 256, &plaintext_quote[sizeof(plaintext_quote) - 1 - 256],
+		{ AES_CBC_ciphertext_256B, HMAC_SHA256_ciphertext_256B_digest } },
+	{ "512B", 512, &plaintext_quote[sizeof(plaintext_quote) - 1 - 512],
+		{ AES_CBC_ciphertext_512B, HMAC_SHA256_ciphertext_512B_digest } },
+	{ "768B", 768, &plaintext_quote[sizeof(plaintext_quote) - 1 - 768],
+		{ AES_CBC_ciphertext_768B, HMAC_SHA256_ciphertext_768B_digest } },
+	{ "1024B", 1024, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1024],
+		{ AES_CBC_ciphertext_1024B, HMAC_SHA256_ciphertext_1024B_digest } },
+	{ "1280B", 1280, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1280],
+		{ AES_CBC_ciphertext_1280B, HMAC_SHA256_ciphertext_1280B_digest } },
+	{ "1536B", 1536, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1536],
+		{ AES_CBC_ciphertext_1536B, HMAC_SHA256_ciphertext_1536B_digest } },
+	{ "1792B", 1792, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1792],
+		{ AES_CBC_ciphertext_1792B, HMAC_SHA256_ciphertext_1792B_digest } },
+	{ "2048B", 2048, &plaintext_quote[sizeof(plaintext_quote) - 1 - 2048],
+		{ AES_CBC_ciphertext_2048B, HMAC_SHA256_ciphertext_2048B_digest } }
+};
+
+
+static int
+test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
+{
+	uint32_t num_to_submit = 2048;
+	struct rte_mbuf *rx_mbufs[num_to_submit], *tx_mbufs[num_to_submit];
+	uint64_t failed_polls, retries, start_cycles, end_cycles, total_cycles = 0;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, burst_size, num_sent, num_received;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
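+	/* Chain cipher -> auth; the session below is created from this list */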
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
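+	/* For AES-128-CBC the key and IV are both 16 bytes, so the IV
+	 * length macro is reused for the key length below. */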
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure(s) */
+	for (b = 0; b < num_to_submit ; b++) {
+		tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+				(const char *)data_params[0].expected.ciphertext,
+				data_params[0].length, 0);
+		TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
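+		/* Append room for the digest and pre-load the expected value;
+		 * the auth transform set up above runs in VERIFY mode. */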
+		ut_params->digest = (uint8_t *)rte_pktmbuf_append(tx_mbufs[b],
+				DIGEST_BYTE_LENGTH_SHA256);
+		TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+		rte_memcpy(ut_params->digest, data_params[0].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+		struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+				ts_params->mbuf_ol_pool, RTE_PKTMBUF_OL_CRYPTO);
+		TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+		struct rte_crypto_op *cop = &ol->op.crypto;
+
+		rte_crypto_op_attach_session(cop, ut_params->sess);
+
+		cop->digest.data = ut_params->digest;
+		cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(tx_mbufs[b],
+				data_params[0].length);
+		cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+		cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+				CIPHER_IV_LENGTH_AES_CBC);
+		cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+		cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+		rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
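+		/* Cipher and hash cover the payload after the prepended IV */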
+		cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_cipher.length = data_params[0].length;
+
+		cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_hash.length = data_params[0].length;
+
+		rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+	}
+
+	printf("\nTest to measure the IA cycle cost using AES128_CBC_SHA256_HMAC "
+			"algorithm with a constant request size of %u.",
+			data_params[0].length);
+	printf("\nThis test will keep retries at 0 and only measure IA cycle "
+			"cost for each request.");
+	printf("\nDev No\tQP No\tNum Sent\tNum Received\tTx/Rx burst");
+	printf("\tRetries (Device Busy)\tAverage IA cycle cost "
+			"(assuming 0 retries)");
+	for (b = 2; b <= 128 ; b *= 2) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+		burst_size = b;
+		total_cycles = 0;
+		while (num_sent < num_to_submit) {
+			start_cycles = rte_rdtsc_precise();
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0,
+					&tx_mbufs[num_sent],
+					((num_to_submit-num_sent) < burst_size) ?
+					num_to_submit-num_sent : burst_size);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += (end_cycles - start_cycles);
+			/*
+			 * Wait until requests have been sent.
+			 */
+			rte_delay_ms(1);
+
+			start_cycles = rte_rdtsc_precise();
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += end_cycles - start_cycles;
+		}
+		while (num_received != num_to_submit) {
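+			/*
+			 * A zero-length enqueue lets the AESNI MB PMD flush
+			 * operations it has buffered internally so they can
+			 * be dequeued below.
+			 */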
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+
+		printf("\n%u\t%u\t\%u\t\t%u\t\t%u", dev_num, 0,
+					num_sent, num_received, burst_size);
+		printf("\t\t%"PRIu64, retries);
+		printf("\t\t\t%"PRIu64, total_cycles/num_received);
+	}
+	printf("\n");
+
+	for (b = 0; b < num_to_submit ; b++) {
+		struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+		if (ol) {
+			do {
+				struct rte_mbuf_offload *next = ol->next;
+
+				rte_pktmbuf_offload_free(ol);
+				ol = next;
+			} while (ol != NULL);
+		}
+		rte_pktmbuf_free(tx_mbufs[b]);
+	}
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(uint16_t dev_num)
+{
+	uint16_t index;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, num_sent, num_received, throughput;
+	uint64_t failed_polls, retries, start_cycles, end_cycles;
+	const uint64_t mhz = rte_get_tsc_hz()/1000000;
+	double mmps;
+	struct rte_mbuf *rx_mbufs[DEFAULT_BURST_SIZE], *tx_mbufs[DEFAULT_BURST_SIZE];
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	printf("\nThroughput test which will continually attempt to send "
+			"AES128_CBC_SHA256_HMAC requests with a constant burst "
+			"size of %u while varying payload sizes", DEFAULT_BURST_SIZE);
+	printf("\nDev No\tQP No\tReq Size(B)\tNum Sent\tNum Received\t"
+			"Mrps\tThoughput(Mbps)");
+	printf("\tRetries (Attempted a burst, but the device was busy)");
+	for (index = 0; index < MAX_PACKET_SIZE_INDEX; index++) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+
+		/* Generate Crypto op data structure(s) */
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+					data_params[index].plaintext,
+					data_params[index].length,
+					0);
+
+			ut_params->digest = (uint8_t *)rte_pktmbuf_append(
+				tx_mbufs[b], DIGEST_BYTE_LENGTH_SHA256);
+			TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+			rte_memcpy(ut_params->digest,
+					data_params[index].expected.digest,
+					DIGEST_BYTE_LENGTH_SHA256);
+
+			struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+						ts_params->mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+			TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+			struct rte_crypto_op *cop = &ol->op.crypto;
+
+			rte_crypto_op_attach_session(cop, ut_params->sess);
+
+			cop->digest.data = ut_params->digest;
+			cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+				tx_mbufs[b], data_params[index].length);
+			cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+			cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+					CIPHER_IV_LENGTH_AES_CBC);
+			cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+			cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+			rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+			cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_cipher.length = data_params[index].length;
+
+			cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_hash.length = data_params[index].length;
+
+			rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+		}
+		start_cycles = rte_rdtsc_precise();
+		while (num_sent < DEFAULT_NUM_REQS_TO_SUBMIT) {
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+				((DEFAULT_NUM_REQS_TO_SUBMIT-num_sent) < DEFAULT_BURST_SIZE) ?
+				DEFAULT_NUM_REQS_TO_SUBMIT-num_sent : DEFAULT_BURST_SIZE);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+					0, rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		while (num_received != DEFAULT_NUM_REQS_TO_SUBMIT) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0,
+						rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		end_cycles = rte_rdtsc_precise();
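+		/*
+		 * mhz is TSC ticks per microsecond, so received/cycles * mhz
+		 * gives millions of requests per second; multiplying by the
+		 * payload size in bits gives throughput in Mbps.
+		 */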
+		mmps = (double)num_received*mhz/(end_cycles - start_cycles);
+		throughput = mmps*data_params[index].length*8;
+		printf("\n%u\t%u\t%u\t\t%u\t%u", dev_num, 0,
+				data_params[index].length, num_sent, num_received);
+		printf("\t%.2f\t%u", mmps, throughput);
+		printf("\t\t%"PRIu64, retries);
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+			if (ol) {
+				do {
+					struct rte_mbuf_offload *next = ol->next;
+
+					rte_pktmbuf_offload_free(ol);
+					ol = next;
+				} while (ol != NULL);
+			}
+			rte_pktmbuf_free(tx_mbufs[b]);
+		}
+	}
+	printf("\n");
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_encrypt_digest_vary_req_size(void)
+{
+	return test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(
+			testsuite_params.dev_id);
+}
+
+static int
+test_perf_vary_burst_size(void)
+{
+	return test_perf_crypto_qp_vary_burst_size(testsuite_params.dev_id);
+}
+
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_encrypt_digest_vary_req_size),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_vary_burst_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static int
+perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_QAT_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_perf_cmd = {
+	.command = "cryptodev_aesni_mb_perftest",
+	.callback = perftest_aesni_mb_cryptodev,
+};
+
+static struct test_command cryptodev_qat_perf_cmd = {
+	.command = "cryptodev_qat_perftest",
+	.callback = perftest_qat_cryptodev,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perf_cmd);
+REGISTER_TEST_COMMAND(cryptodev_qat_perf_cmd);
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 388cf11..2d98958 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -4020,7 +4020,7 @@ test_close_bonded_device(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	if (test_params->pkt_eth_hdr != NULL) {
@@ -4029,7 +4029,7 @@ testsuite_teardown(void)
 	}
 
 	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	remove_slaves_and_stop_bonded_device();
 }
 
 static void
@@ -4993,7 +4993,7 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 460539d..713368d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -453,7 +453,7 @@ test_setup(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -467,8 +467,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 /*
@@ -1390,7 +1388,8 @@ static struct unit_test_suite link_bonding_mode4_test_suite  = {
 		TEST_CASE_NAMED("test_mode4_tx_burst", test_mode4_tx_burst_wrapper),
 		TEST_CASE_NAMED("test_mode4_marker", test_mode4_marker_wrapper),
 		TEST_CASE_NAMED("test_mode4_expired", test_mode4_expired_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e6714b4..0a3162e 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -586,7 +586,7 @@ test_setup(void)
 	return TEST_SUCCESS;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -600,8 +600,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 static int
@@ -661,7 +659,8 @@ static struct unit_test_suite link_bonding_rssconf_test_suite  = {
 		TEST_CASE_NAMED("test_setup", test_setup_wrapper),
 		TEST_CASE_NAMED("test_rss", test_rss_wrapper),
 		TEST_CASE_NAMED("test_rss_lazy", test_rss_lazy_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END()
 	}
 };
 
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v5 10/10] l2fwd-crypto: crypto
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
                           ` (8 preceding siblings ...)
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
@ 2015-11-09 20:34         ` Declan Doherty
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-09 20:34 UTC (permalink / raw)
  To: dev

This patch creates a new sample application, based on the l2fwd
application, which performs the specified crypto operations on the IP
payload of the packets it forwards.
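
In outline, the data path pairs each received packet with a crypto
operation before it is forwarded. A minimal sketch of the per-packet
enqueue, using only APIs from this series (error handling and the
surrounding burst loop are omitted; variable names are illustrative):

	struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
			l2fwd_mbuf_ol_pool, RTE_PKTMBUF_OL_CRYPTO);

	/* Bind the pre-created session and describe the IP payload */
	rte_crypto_op_attach_session(&ol->op.crypto, session);
	ol->op.crypto.data.to_cipher.offset = ipdata_offset;
	ol->op.crypto.data.to_cipher.length = data_len;
	ol->op.crypto.data.to_hash.offset = ipdata_offset;
	ol->op.crypto.data.to_hash.length = data_len;

	/* Attach the op to the mbuf and hand both to the crypto PMD */
	rte_pktmbuf_offload_attach(m, ol);
	rte_cryptodev_enqueue_burst(cdev_id, qp_id, &m, 1);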

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                    |    1 +
 examples/l2fwd-crypto/Makefile |   50 ++
 examples/l2fwd-crypto/main.c   | 1473 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 1524 insertions(+)
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 1f72f8c..fa85e55 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -206,6 +206,7 @@ F: lib/librte_cryptodev
 F: docs/guides/cryptodevs
 F: app/test/test_cryptodev.c
 F: app/test/test_cryptodev_perf.c
+F: examples/l2fwd-crypto
 
 Drivers
 -------
diff --git a/examples/l2fwd-crypto/Makefile b/examples/l2fwd-crypto/Makefile
new file mode 100644
index 0000000..e8224ca
--- /dev/null
+++ b/examples/l2fwd-crypto/Makefile
@@ -0,0 +1,50 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = l2fwd-crypto
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
new file mode 100644
index 0000000..10ec513
--- /dev/null
+++ b/examples/l2fwd-crypto/main.c
@@ -0,0 +1,1473 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_ip.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_per_lcore.h>
+#include <rte_prefetch.h>
+#include <rte_random.h>
+#include <rte_ring.h>
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define NB_MBUF   8192
+
+#define MAX_PKT_BURST 32
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 128
+#define RTE_TEST_TX_DESC_DEFAULT 512
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint64_t l2fwd_enabled_port_mask;
+static uint64_t l2fwd_enabled_crypto_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+
+struct pkt_buffer {
+	unsigned len;
+	struct rte_mbuf *buffer[MAX_PKT_BURST];
+};
+
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+enum l2fwd_crypto_xform_chain {
+	L2FWD_CRYPTO_CIPHER_HASH,
+	L2FWD_CRYPTO_HASH_CIPHER
+};
+
+/** l2fwd crypto application command line options */
+struct l2fwd_crypto_options {
+	unsigned portmask;
+	unsigned nb_ports_per_lcore;
+	unsigned refresh_period;
+	unsigned single_lcore:1;
+	unsigned no_stats_printing:1;
+
+	enum rte_cryptodev_type cdev_type;
+	unsigned sessionless:1;
+
+	enum l2fwd_crypto_xform_chain xform_chain;
+
+	struct rte_crypto_xform cipher_xform;
+	uint8_t ckey_data[32];
+
+	struct rte_crypto_key iv_key;
+	uint8_t ivkey_data[16];
+
+	struct rte_crypto_xform auth_xform;
+	uint8_t akey_data[128];
+};
+
+/** l2fwd crypto lcore params */
+struct l2fwd_crypto_params {
+	uint8_t dev_id;
+	uint8_t qp_id;
+
+	unsigned digest_length;
+	unsigned block_size;
+
+	struct rte_crypto_key iv_key;
+	struct rte_cryptodev_session *session;
+};
+
+/** lcore configuration */
+struct lcore_queue_conf {
+	unsigned nb_rx_ports;
+	unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+
+	unsigned nb_crypto_devs;
+	unsigned cryptodev_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct pkt_buffer crypto_pkt_buf[RTE_MAX_ETHPORTS];
+	struct pkt_buffer tx_pkt_buf[RTE_MAX_ETHPORTS];
+} __rte_cache_aligned;
+
+struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static const struct rte_eth_conf port_conf = {
+	.rxmode = {
+		.split_hdr_size = 0,
+		.header_split   = 0, /**< Header Split disabled */
+		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
+		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
+		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
+		.hw_strip_crc   = 0, /**< CRC stripped by hardware */
+	},
+	.txmode = {
+		.mq_mode = ETH_MQ_TX_NONE,
+	},
+};
+
+struct rte_mempool *l2fwd_pktmbuf_pool;
+struct rte_mempool *l2fwd_mbuf_ol_pool;
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+	uint64_t tx;
+	uint64_t rx;
+
+	uint64_t crypto_enqueued;
+	uint64_t crypto_dequeued;
+
+	uint64_t dropped;
+} __rte_cache_aligned;
+
+struct l2fwd_crypto_statistics {
+	uint64_t enqueued;
+	uint64_t dequeued;
+
+	uint64_t errors;
+} __rte_cache_aligned;
+
+struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+struct l2fwd_crypto_statistics crypto_statistics[RTE_MAX_ETHPORTS];
+
+/* A tsc-based timer responsible for triggering statistics printout */
+#define TIMER_MILLISECOND 2000000ULL /* around 1 ms at 2 GHz */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+
+/* default period is 10 seconds */
+static int64_t timer_period = 10 * TIMER_MILLISECOND * 1000;
+
+uint64_t total_packets_dropped = 0, total_packets_tx = 0, total_packets_rx = 0,
+	total_packets_enqueued = 0, total_packets_dequeued = 0,
+	total_packets_errors = 0;
+
+/* Print out statistics on packets dropped */
+static void
+print_stats(void)
+{
+	unsigned portid;
+	uint64_t cdevid;
+
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+	/* Clear screen and move to top left */
+	printf("%s%s", clr, topLeft);
+
+	printf("\nPort statistics ====================================");
+
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+			continue;
+		printf("\nStatistics for port %u ------------------------------"
+			   "\nPackets sent: %32"PRIu64
+			   "\nPackets received: %28"PRIu64
+			   "\nPackets dropped: %29"PRIu64,
+			   portid,
+			   port_statistics[portid].tx,
+			   port_statistics[portid].rx,
+			   port_statistics[portid].dropped);
+
+		total_packets_dropped += port_statistics[portid].dropped;
+		total_packets_tx += port_statistics[portid].tx;
+		total_packets_rx += port_statistics[portid].rx;
+	}
+	printf("\nCrypto statistics ==================================");
+
+	for (cdevid = 0; cdevid < RTE_CRYPTO_MAX_DEVS; cdevid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_crypto_mask & (1lu << cdevid)) == 0)
+			continue;
+		printf("\nStatistics for cryptodev %lu -------------------------"
+			   "\nPackets enqueued: %28"PRIu64
+			   "\nPackets dequeued: %28"PRIu64
+			   "\nPackets errors: %30"PRIu64,
+			   cdevid,
+			   crypto_statistics[cdevid].enqueued,
+			   crypto_statistics[cdevid].dequeued,
+			   crypto_statistics[cdevid].errors);
+
+		total_packets_enqueued += crypto_statistics[cdevid].enqueued;
+		total_packets_dequeued += crypto_statistics[cdevid].dequeued;
+		total_packets_errors += crypto_statistics[cdevid].errors;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %22"PRIu64
+		   "\nTotal packets enqueued: %22"PRIu64
+		   "\nTotal packets dequeued: %22"PRIu64
+		   "\nTotal packets sent: %26"PRIu64
+		   "\nTotal packets dropped: %23"PRIu64
+		   "\nTotal packets crypto errors: %17"PRIu64,
+		   total_packets_rx,
+		   total_packets_enqueued,
+		   total_packets_dequeued,
+		   total_packets_tx,
+		   total_packets_dropped,
+		   total_packets_errors);
+	printf("\n====================================================\n");
+}
+
+
+
+static int
+l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+
+	pkt_buffer = (struct rte_mbuf **)
+			qconf->crypto_pkt_buf[cparams->dev_id].buffer;
+
+	ret = rte_cryptodev_enqueue_burst(cparams->dev_id, cparams->qp_id,
+			pkt_buffer, (uint16_t) n);
+	crypto_statistics[cparams->dev_id].enqueued += ret;
+	if (unlikely(ret < n)) {
+		crypto_statistics[cparams->dev_id].errors += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+static int
+l2fwd_crypto_enqueue(struct rte_mbuf *m, struct l2fwd_crypto_params *cparams)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->crypto_pkt_buf[cparams->dev_id].len;
+	qconf->crypto_pkt_buf[cparams->dev_id].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (len == MAX_PKT_BURST) {
+		l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
+		len = 0;
+	}
+
+	qconf->crypto_pkt_buf[cparams->dev_id].len = len;
+	return 0;
+}
+
+static int
+l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
+		struct rte_mbuf_offload *ol,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct ether_hdr *eth_hdr;
+	struct ipv4_hdr *ip_hdr;
+
+	unsigned ipdata_offset, pad_len, data_len;
+	char *padding;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (eth_hdr->ether_type != rte_cpu_to_be_16(ETHER_TYPE_IPv4))
+		return -1;
+
+	ipdata_offset = sizeof(struct ether_hdr);
+
+	ip_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
+			ipdata_offset);
+
+	ipdata_offset += (ip_hdr->version_ihl & IPV4_HDR_IHL_MASK)
+			* IPV4_IHL_MULTIPLIER;
+
+
+	/* Zero-pad the data to be ciphered/hashed so it is block aligned */
+	data_len  = rte_pktmbuf_data_len(m) - ipdata_offset;
+	pad_len = data_len % cparams->block_size ? cparams->block_size -
+			(data_len % cparams->block_size) : 0;
+
+	if (pad_len) {
+		padding = rte_pktmbuf_append(m, pad_len);
+		if (unlikely(!padding))
+			return -1;
+
+		data_len += pad_len;
+		memset(padding, 0, pad_len);
+	}
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(&ol->op.crypto, cparams->session);
+
+	/* Append space for digest to end of packet */
+	ol->op.crypto.digest.data = (uint8_t *)rte_pktmbuf_append(m,
+			cparams->digest_length);
+	ol->op.crypto.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
+			rte_pktmbuf_pkt_len(m) - cparams->digest_length);
+	ol->op.crypto.digest.length = cparams->digest_length;
+
+	ol->op.crypto.iv.data = cparams->iv_key.data;
+	ol->op.crypto.iv.phys_addr = cparams->iv_key.phys_addr;
+	ol->op.crypto.iv.length = cparams->iv_key.length;
+
+	ol->op.crypto.data.to_cipher.offset = ipdata_offset;
+	ol->op.crypto.data.to_cipher.length = data_len;
+
+	ol->op.crypto.data.to_hash.offset = ipdata_offset;
+	ol->op.crypto.data.to_hash.length = data_len;
+
+	rte_pktmbuf_offload_attach(m, ol);
+
+	return l2fwd_crypto_enqueue(m, cparams);
+}
+
+
+/* Send the burst of packets on an output interface */
+static int
+l2fwd_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint8_t port)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+	unsigned queueid = 0;
+
+	pkt_buffer = (struct rte_mbuf **)qconf->tx_pkt_buf[port].buffer;
+
+	ret = rte_eth_tx_burst(port, (uint16_t) queueid, pkt_buffer,
+			(uint16_t)n);
+	port_statistics[port].tx += ret;
+	if (unlikely(ret < n)) {
+		port_statistics[port].dropped += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+/* Enqueue packets for TX and prepare them to be sent */
+static int
+l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->tx_pkt_buf[port].len;
+	qconf->tx_pkt_buf[port].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (unlikely(len == MAX_PKT_BURST)) {
+		l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
+		len = 0;
+	}
+
+	qconf->tx_pkt_buf[port].len = len;
+	return 0;
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
+{
+	struct ether_hdr *eth;
+	void *tmp;
+	unsigned dst_port;
+
+	dst_port = l2fwd_dst_ports[portid];
+	eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	/* 02:00:00:00:00:xx */
+	tmp = &eth->d_addr.addr_bytes[0];
+	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+	/* src addr */
+	ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+
+	l2fwd_send_packet(m, (uint8_t) dst_port);
+}
+
+/** Generate random key */
+static void
+generate_random_key(uint8_t *key, unsigned length)
+{
+	unsigned i;
+
+	for (i = 0; i < length; i++)
+		key[i] = rand() & 0xff; /* % 0xff would never yield 0xff */
+}
+
+static struct rte_cryptodev_session *
+initialize_crypto_session(struct l2fwd_crypto_options *options,
+		uint8_t cdev_id)
+{
+	struct rte_crypto_xform *first_xform;
+
+	if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
+		first_xform = &options->cipher_xform;
+		first_xform->next = &options->auth_xform;
+	} else {
+		first_xform = &options->auth_xform;
+		first_xform->next = &options->cipher_xform;
+	}
+
+	/* Create the crypto session for the chosen xform chain */
+	return rte_cryptodev_session_create(cdev_id, first_xform);
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options);
+
+/* main processing loop */
+static void
+l2fwd_main_loop(struct l2fwd_crypto_options *options)
+{
+	struct rte_mbuf *m, *pkts_burst[MAX_PKT_BURST];
+	unsigned lcore_id = rte_lcore_id();
+	uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+	unsigned i, j, portid, nb_rx;
+	struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];
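+	/* TSC cycles in a BURST_TX_DRAIN_US microsecond interval */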
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+	struct l2fwd_crypto_params *cparams;
+	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
+
+	if (qconf->nb_rx_ports == 0) {
+		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+	l2fwd_crypto_options_print(options);
+
+	for (i = 0; i < qconf->nb_rx_ports; i++) {
+
+		portid = qconf->rx_port_list[i];
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+			portid);
+	}
+
+	for (i = 0; i < qconf->nb_crypto_devs; i++) {
+		port_cparams[i].dev_id = qconf->cryptodev_list[i];
+		port_cparams[i].qp_id = 0;
+
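+		/* defaults below match SHA1-HMAC: 64 byte block, 20 byte digest */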
+		port_cparams[i].block_size = 64;
+		port_cparams[i].digest_length = 20;
+
+		port_cparams[i].iv_key.data =
+				(uint8_t *)rte_malloc(NULL, 16, 8);
+		port_cparams[i].iv_key.length = 16;
+		port_cparams[i].iv_key.phys_addr = rte_malloc_virt2phy(
+				(void *)port_cparams[i].iv_key.data);
+		generate_random_key(port_cparams[i].iv_key.data,
+				port_cparams[i].iv_key.length);
+
+		port_cparams[i].session = initialize_crypto_session(options,
+				port_cparams[i].dev_id);
+
+		if (port_cparams[i].session == NULL)
+			return;
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u cryptoid=%u\n", lcore_id,
+				port_cparams[i].dev_id);
+	}
+
+	while (1) {
+
+		cur_tsc = rte_rdtsc();
+
+		/*
+		 * TX burst queue drain
+		 */
+		diff_tsc = cur_tsc - prev_tsc;
+		if (unlikely(diff_tsc > drain_tsc)) {
+
+			for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+				if (qconf->tx_pkt_buf[portid].len == 0)
+					continue;
+				l2fwd_send_burst(&lcore_queue_conf[lcore_id],
+						 qconf->tx_pkt_buf[portid].len,
+						 (uint8_t) portid);
+				qconf->tx_pkt_buf[portid].len = 0;
+			}
+
+			/* if timer is enabled */
+			if (timer_period > 0) {
+
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >=
+						(uint64_t)timer_period)) {
+
+					/* do this only on master core */
+					if (lcore_id == rte_get_master_lcore() &&
+							!options->no_stats_printing) {
+						print_stats();
+						/* reset the timer */
+						timer_tsc = 0;
+					}
+				}
+			}
+
+			prev_tsc = cur_tsc;
+		}
+
+		/*
+		 * Read packet from RX queues
+		 */
+		for (i = 0; i < qconf->nb_rx_ports; i++) {
+			struct rte_mbuf_offload *ol;
+
+			portid = qconf->rx_port_list[i];
+
+			cparams = &port_cparams[i];
+
+			nb_rx = rte_eth_rx_burst((uint8_t) portid, 0,
+						 pkts_burst, MAX_PKT_BURST);
+
+			port_statistics[portid].rx += nb_rx;
+
+			/* Enqueue packets on Crypto device */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				ol = rte_pktmbuf_offload_alloc(
+						l2fwd_mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+				if (unlikely(ol == NULL)) {
+					rte_pktmbuf_free(m);
+					continue;
+				}
+
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				rte_prefetch0((void *)ol);
+				l2fwd_simple_crypto_enqueue(m, ol, cparams);
+			}
+
+			/* Dequeue packets from Crypto device */
+			nb_rx = rte_cryptodev_dequeue_burst(
+					cparams->dev_id, cparams->qp_id,
+					pkts_burst, MAX_PKT_BURST);
+			crypto_statistics[cparams->dev_id].dequeued += nb_rx;
+
+			/* Forward crypto'd packets */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				rte_pktmbuf_offload_free(m->offload_ops);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				l2fwd_simple_forward(m, portid);
+			}
+		}
+	}
+}
+
+static int
+l2fwd_launch_one_lcore(void *arg)
+{
+	l2fwd_main_loop((struct l2fwd_crypto_options *)arg);
+	return 0;
+}
+
+/* Display command line arguments usage */
+static void
+l2fwd_crypto_usage(const char *prgname)
+{
+	printf("%s [EAL options] -- --cdev TYPE [optional parameters]\n"
+		"  -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+		"  -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+		"  -s manage all ports from single lcore"
+		"  -t PERIOD: statistics will be refreshed each PERIOD seconds"
+		" (0 to disable, 10 default, 86400 maximum)\n"
+
+		"  --cdev AESNI_MB / QAT\n"
+		"  --chain HASH_CIPHER / CIPHER_HASH\n"
+
+		"  --cipher_algo ALGO\n"
+		"  --cipher_op ENCRYPT / DECRYPT\n"
+		"  --cipher_key KEY\n"
+
+		"  --auth ALGO\n"
+		"  --auth_op GENERATE / VERIFY\n"
+		"  --auth_key KEY\n"
+
+		"  --sessionless\n",
+	       prgname);
+}
+
+/** Parse crypto device type command line argument */
+static int
+parse_cryptodev_type(enum rte_cryptodev_type *type, char *optarg)
+{
+	if (strcmp("AESNI_MB", optarg) == 0) {
+		*type = RTE_CRYPTODEV_AESNI_MB_PMD;
+		return 0;
+	} else if (strcmp("QAT", optarg) == 0) {
+		*type = RTE_CRYPTODEV_QAT_PMD;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto chain xform command line argument */
+static int
+parse_crypto_opt_chain(struct l2fwd_crypto_options *options, char *optarg)
+{
+	if (strcmp("CIPHER_HASH", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+		return 0;
+	} else if (strcmp("HASH_CIPHER", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_HASH_CIPHER;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto cipher algo option command line argument */
+static int
+parse_cipher_algo(enum rte_crypto_cipher_algorithm *algo, char *optarg)
+{
+	if (strcmp("AES_CBC", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_CBC;
+		return 0;
+	} else if (strcmp("AES_GCM", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_GCM;
+		return 0;
+	}
+
+	printf("Cipher algorithm  not supported!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_cipher_op(enum rte_crypto_cipher_operation *op, char *optarg)
+{
+	if (strcmp("ENCRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+		return 0;
+	} else if (strcmp("DECRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+		return 0;
+	}
+
+	printf("Cipher operation not supported!\n");
+	return -1;
+}
+
+/** Parse crypto key command line argument */
+static int
+parse_key(struct rte_crypto_key *key __rte_unused,
+		unsigned length __rte_unused, char *arg __rte_unused)
+{
+	printf("Currently an unsupported argument!\n");
+	return -1;
+}
+
+/** Parse crypto authentication algorithm command line argument */
+static int
+parse_auth_algo(enum rte_crypto_auth_algorithm *algo, char *optarg)
+{
+	if (strcmp("SHA1", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1;
+		return 0;
+	} else if (strcmp("SHA1_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		return 0;
+	} else if (strcmp("SHA224", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224;
+		return 0;
+	} else if (strcmp("SHA224_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		return 0;
+	} else if (strcmp("SHA256", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256;
+		return 0;
+	} else if (strcmp("SHA256_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		return 0;
+	} else if (strcmp("SHA512", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512;
+		return 0;
+	} else if (strcmp("SHA512_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		return 0;
+	}
+
+	printf("Authentication algorithm specified not supported!\n");
+	return -1;
+}
+
+static int
+parse_auth_op(enum rte_crypto_auth_operation *op, char *optarg)
+{
+	if (strcmp("VERIFY", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_VERIFY;
+		return 0;
+	} else if (strcmp("GENERATE", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_GENERATE;
+		return 0;
+	}
+
+	printf("Authentication operation specified not supported!\n");
+	return -1;
+}
+
+/** Parse long options */
+static int
+l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
+		struct option *lgopts, int option_index)
+{
+	if (strcmp(lgopts[option_index].name, "no_stats") == 0) {
+		options->no_stats_printing = 1;
+		return 0;
+	}
+
+	if (strcmp(lgopts[option_index].name, "cdev_type") == 0)
+		return parse_cryptodev_type(&options->cdev_type, optarg);
+
+	else if (strcmp(lgopts[option_index].name, "chain") == 0)
+		return parse_crypto_opt_chain(options, optarg);
+
+	/* Cipher options */
+	else if (strcmp(lgopts[option_index].name, "cipher_algo") == 0)
+		return parse_cipher_algo(&options->cipher_xform.cipher.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_op") == 0)
+		return parse_cipher_op(&options->cipher_xform.cipher.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_key") == 0)
+		return parse_key(&options->cipher_xform.cipher.key,
+				sizeof(options->ckey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "iv") == 0)
+		return parse_key(&options->iv_key, sizeof(options->ivkey_data),
+				optarg);
+
+	/* Authentication options */
+	else if (strcmp(lgopts[option_index].name, "auth_algo") == 0)
+		return parse_auth_algo(&options->auth_xform.auth.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_op") == 0)
+		return parse_auth_op(&options->auth_xform.auth.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_key") == 0)
+		return parse_key(&options->auth_xform.auth.key,
+				sizeof(options->akey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "sessionless") == 0) {
+		options->sessionless = 1;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse port mask */
+static int
+l2fwd_crypto_parse_portmask(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long pm;
+
+	/* parse hexadecimal string */
+	pm = strtoul(q_arg, &end, 16);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		pm = 0;
+
+	options->portmask = pm;
+	if (options->portmask == 0) {
+		printf("invalid portmask specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse number of queues */
+static int
+l2fwd_crypto_parse_nqueue(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long n;
+
+	/* parse decimal string */
+	n = strtoul(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+	else if (n >= MAX_RX_QUEUE_PER_LCORE)
+		n = 0;
+
+	options->nb_ports_per_lcore = n;
+	if (options->nb_ports_per_lcore == 0) {
+		printf("invalid number of ports selected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse timer period */
+static int
+l2fwd_crypto_parse_timer_period(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	int n;
+
+	/* parse number string */
+	n = strtol(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+
+	if (n >= MAX_TIMER_PERIOD)
+		n = 0;
+
+	options->refresh_period = n * 1000 * TIMER_MILLISECOND;
+	if (options->refresh_period == 0) {
+		printf("invalid refresh period specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Generate default options for application */
+static void
+l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
+{
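+	/* seed rand(); demo-quality randomness only, not suitable for real keys */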
+	srand(time(NULL));
+
+	options->portmask = 0xffffffff;
+	options->nb_ports_per_lcore = 1;
+	options->refresh_period = 10000;
+	options->single_lcore = 0;
+	options->no_stats_printing = 0;
+
+	options->cdev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	options->sessionless = 0;
+	options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+
+	/* Cipher Data */
+	options->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	options->cipher_xform.next = NULL;
+
+	options->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	options->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+
+	generate_random_key(options->ckey_data, sizeof(options->ckey_data));
+
+	options->cipher_xform.cipher.key.data = options->ckey_data;
+	options->cipher_xform.cipher.key.phys_addr = 0;
+	options->cipher_xform.cipher.key.length = 16;
+
+
+	/* Authentication Data */
+	options->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	options->auth_xform.next = NULL;
+
+	options->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	options->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+
+	options->auth_xform.auth.add_auth_data_length = 0;
+	options->auth_xform.auth.digest_length = 20;
+
+	generate_random_key(options->akey_data, sizeof(options->akey_data));
+
+	options->auth_xform.auth.key.data = options->akey_data;
+	options->auth_xform.auth.key.phys_addr = 0;
+	options->auth_xform.auth.key.length = 20;
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
+{
+	printf("Options:-\nn");
+	printf("portmask: %x\n", options->portmask);
+	printf("ports per lcore: %u\n", options->nb_ports_per_lcore);
+	printf("refresh period : %u\n", options->refresh_period);
+	printf("single lcore mode: %s\n",
+			options->single_lcore ? "enabled" : "disabled");
+	printf("stats_printing: %s\n",
+			options->no_stats_printing ? "disabled" : "enabled");
+
+	switch (options->cdev_type) {
+	case RTE_CRYPTODEV_AESNI_MB_PMD:
+		printf("crytpodev type: AES-NI MB PMD\n"); break;
+	case RTE_CRYPTODEV_QAT_PMD:
+		printf("crytpodev type: QAT PMD\n"); break;
+	default:
+		break;
+	}
+
+	printf("sessionless crypto: %s\n",
+			options->sessionless ? "enabled" : "disabled");
+}
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
+		int argc, char **argv)
+{
+	int opt, retval, option_index;
+	char **argvopt = argv, *prgname = argv[0];
+
+	static struct option lgopts[] = {
+			{ "no_stats", no_argument, 0, 0 },
+			{ "sessionless", no_argument, 0, 0 },
+
+			{ "cdev_type", required_argument, 0, 0 },
+			{ "chain", required_argument, 0, 0 },
+
+			{ "cipher_algo", required_argument, 0, 0 },
+			{ "cipher_op", required_argument, 0, 0 },
+			{ "cipher_key", required_argument, 0, 0 },
+
+			{ "auth_algo", required_argument, 0, 0 },
+			{ "auth_op", required_argument, 0, 0 },
+			{ "auth_key", required_argument, 0, 0 },
+
+			{ "iv", required_argument, 0, 0 },
+
+			{ "sessionless", no_argument, 0, 0 },
+			{ NULL, 0, 0, 0 }
+	};
+
+	l2fwd_crypto_default_options(options);
+
+	while ((opt = getopt_long(argc, argvopt, "p:q:st:", lgopts,
+			&option_index)) != EOF) {
+		switch (opt) {
+		/* long options */
+		case 0:
+			retval = l2fwd_crypto_parse_args_long_options(options,
+					lgopts, option_index);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* portmask */
+		case 'p':
+			retval = l2fwd_crypto_parse_portmask(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* nqueue */
+		case 'q':
+			retval = l2fwd_crypto_parse_nqueue(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* single  */
+		case 's':
+			options->single_lcore = 1;
+
+			break;
+
+		/* timer period */
+		case 't':
+			retval = l2fwd_crypto_parse_timer_period(options,
+					optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		default:
+			l2fwd_crypto_usage(prgname);
+			return -1;
+		}
+	}
+
+
+	if (optind >= 0)
+		argv[optind-1] = prgname;
+
+	retval = optind-1;
+	optind = 0; /* reset getopt lib */
+
+	return retval;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint8_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+	uint8_t portid, count, all_ports_up, print_flag = 0;
+	struct rte_eth_link link;
+
+	printf("\nChecking link status");
+	fflush(stdout);
+	for (count = 0; count <= MAX_CHECK_TIME; count++) {
+		all_ports_up = 1;
+		for (portid = 0; portid < port_num; portid++) {
+			if ((port_mask & (1 << portid)) == 0)
+				continue;
+			memset(&link, 0, sizeof(link));
+			rte_eth_link_get_nowait(portid, &link);
+			/* print link status if flag set */
+			if (print_flag == 1) {
+				if (link.link_status)
+					printf("Port %d Link Up - speed %u "
+						"Mbps - %s\n", (uint8_t)portid,
+						(unsigned)link.link_speed,
+				(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+					("full-duplex") : ("half-duplex\n"));
+				else
+					printf("Port %d Link Down\n",
+						(uint8_t)portid);
+				continue;
+			}
+			/* clear all_ports_up flag if any link down */
+			if (link.link_status == 0) {
+				all_ports_up = 0;
+				break;
+			}
+		}
+		/* after finally printing all link status, get out */
+		if (print_flag == 1)
+			break;
+
+		if (all_ports_up == 0) {
+			printf(".");
+			fflush(stdout);
+			rte_delay_ms(CHECK_INTERVAL);
+		}
+
+		/* set the print_flag if all ports up or timeout */
+		if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+			print_flag = 1;
+			printf("done\n");
+		}
+	}
+}
+
+static int
+initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports)
+{
+	unsigned i, cdev_id, cdev_count, enabled_cdev_count = 0;
+	int retval;
+
+	if (options->cdev_type == RTE_CRYPTODEV_QAT_PMD) {
+		if (rte_cryptodev_count() < nb_ports)
+			return -1;
+	} else if (options->cdev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		for (i = 0; i < nb_ports; i++) {
+			int id = rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD,
+					NULL);
+			if (id < 0)
+				return -1;
+		}
+	}
+
+	cdev_count = rte_cryptodev_count();
+	for (cdev_id = 0;
+			cdev_id < cdev_count && enabled_cdev_count < nb_ports;
+			cdev_id++) {
+		struct rte_cryptodev_qp_conf qp_conf;
+		struct rte_cryptodev_info dev_info;
+
+		struct rte_cryptodev_config conf = {
+			.nb_queue_pairs = 1,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 64
+			}
+		};
+
+		rte_cryptodev_info_get(cdev_id, &dev_info);
+
+		if (dev_info.dev_type != options->cdev_type)
+			continue;
+
+
+		retval = rte_cryptodev_configure(cdev_id, &conf);
+		if (retval < 0) {
+			printf("Failed to configure cryptodev %u", cdev_id);
+			return -1;
+		}
+
+		qp_conf.nb_descriptors = 2048;
+
+		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
+				SOCKET_ID_ANY);
+		if (retval < 0) {
+			printf("Failed to setup queue pair %u on cryptodev %u",
+					0, cdev_id);
+			return -1;
+		}
+
+		l2fwd_enabled_crypto_mask |= (1 << cdev_id);
+
+		enabled_cdev_count++;
+	}
+
+	return enabled_cdev_count;
+}
+
+static int
+initialize_ports(struct l2fwd_crypto_options *options)
+{
+	uint8_t last_portid, portid;
+	unsigned enabled_portcount = 0;
+	unsigned nb_ports = rte_eth_dev_count();
+
+	if (nb_ports == 0) {
+		printf("No Ethernet ports - bye\n");
+		return -1;
+	}
+
+	if (nb_ports > RTE_MAX_ETHPORTS)
+		nb_ports = RTE_MAX_ETHPORTS;
+
+	/* Reset l2fwd_dst_ports */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+		l2fwd_dst_ports[portid] = 0;
+
+	for (last_portid = 0, portid = 0; portid < nb_ports; portid++) {
+		int retval;
+
+		/* Skip ports that are not enabled */
+		if ((options->portmask & (1 << portid)) == 0)
+			continue;
+
+		/* init port */
+		printf("Initializing port %u... ", (unsigned) portid);
+		fflush(stdout);
+		retval = rte_eth_dev_configure(portid, 1, 1, &port_conf);
+		if (retval < 0) {
+			printf("Cannot configure device: err=%d, port=%u\n",
+				  retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one RX queue */
+		fflush(stdout);
+		retval = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+					     rte_eth_dev_socket_id(portid),
+					     NULL, l2fwd_pktmbuf_pool);
+		if (retval < 0) {
+			printf("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one TX queue on each port */
+		fflush(stdout);
+		retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+				rte_eth_dev_socket_id(portid),
+				NULL);
+		if (retval < 0) {
+			printf("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+				retval, (unsigned) portid);
+
+			return -1;
+		}
+
+		/* Start device */
+		retval = rte_eth_dev_start(portid);
+		if (retval < 0) {
+			printf("rte_eth_dev_start:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		rte_eth_promiscuous_enable(portid);
+
+		rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+		printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+				(unsigned) portid,
+				l2fwd_ports_eth_addr[portid].addr_bytes[0],
+				l2fwd_ports_eth_addr[portid].addr_bytes[1],
+				l2fwd_ports_eth_addr[portid].addr_bytes[2],
+				l2fwd_ports_eth_addr[portid].addr_bytes[3],
+				l2fwd_ports_eth_addr[portid].addr_bytes[4],
+				l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+		/* initialize port stats */
+		memset(&port_statistics, 0, sizeof(port_statistics));
+
+		/* Setup port forwarding table */
+		if (enabled_portcount % 2) {
+			l2fwd_dst_ports[portid] = last_portid;
+			l2fwd_dst_ports[last_portid] = portid;
+		} else {
+			last_portid = portid;
+		}
+
+		l2fwd_enabled_port_mask |= (1 << portid);
+		enabled_portcount++;
+	}
+
+	if (enabled_portcount == 1) {
+		l2fwd_dst_ports[last_portid] = last_portid;
+	} else if (enabled_portcount % 2) {
+		printf("odd number of ports in portmask- bye\n");
+		return -1;
+	}
+
+	check_all_ports_link_status(nb_ports, l2fwd_enabled_port_mask);
+
+	return enabled_portcount;
+}
+
+int
+main(int argc, char **argv)
+{
+	struct lcore_queue_conf *qconf;
+	struct l2fwd_crypto_options options;
+
+	uint8_t nb_ports, nb_cryptodevs, portid, cdev_id;
+	unsigned lcore_id, rx_lcore_id;
+	int ret, enabled_cdevcount, enabled_portcount;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+	argc -= ret;
+	argv += ret;
+
+	/* parse application arguments (after the EAL ones) */
+	ret = l2fwd_crypto_parse_args(&options, argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L2FWD-CRYPTO arguments\n");
+
+	/* create the mbuf pool */
+	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 128,
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+	if (l2fwd_pktmbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+	/* create crypto op pool */
+	l2fwd_mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"mbuf_offload_pool", NB_MBUF, 128, 0, rte_socket_id());
+	if (l2fwd_mbuf_ol_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");
+
+	/* Enable Ethernet ports */
+	enabled_portcount = initialize_ports(&options);
+	if (enabled_portcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial Ethernet ports\n");
+
+	nb_ports = rte_eth_dev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, portid = 0;
+			portid < nb_ports; portid++) {
+
+		/* skip ports that are not enabled */
+		if ((options.portmask & (1 << portid)) == 0)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_rx_ports ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->rx_port_list[qconf->nb_rx_ports] = portid;
+		qconf->nb_rx_ports++;
+
+		printf("Lcore %u: RX port %u\n", rx_lcore_id, (unsigned)portid);
+	}
+
+
+	/* Enable Crypto devices */
+	enabled_cdevcount = initialize_cryptodevs(&options, enabled_portcount);
+	if (enabled_cdevcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial crypto devices\n");
+
+	nb_cryptodevs = rte_cryptodev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, cdev_id = 0;
+			cdev_id < nb_cryptodevs && enabled_cdevcount;
+			cdev_id++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(cdev_id, &info);
+
+		/* skip devices of the wrong type */
+		if (options.cdev_type != info.dev_type)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_crypto_devs ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->cryptodev_list[qconf->nb_crypto_devs] = cdev_id;
+		qconf->nb_crypto_devs++;
+
+		enabled_cdevcount--;
+
+		printf("Lcore %u: cryptodev %u\n", rx_lcore_id,
+				(unsigned)cdev_id);
+	}
+
+
+
+	/* launch per-lcore init on every lcore */
+	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)&options,
+			CALL_MASTER);
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (rte_eal_wait_lcore(lcore_id) < 0)
+			return -1;
+	}
+
+	return 0;
+}
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v5 01/10] ethdev: rename macros to have RTE_ prefix
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
@ 2015-11-10 10:30           ` Bruce Richardson
  0 siblings, 0 replies; 115+ messages in thread
From: Bruce Richardson @ 2015-11-10 10:30 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

On Mon, Nov 09, 2015 at 08:34:10PM +0000, Declan Doherty wrote:
> The macros to check that the function pointers and port ids are valid
> for an ethdev are potentially useful to have in a common headers for
> use with all PMDs. However, since they would then become externally
> visible, we apply the RTE_ & RTE_ETH_ prefix to them as approtiate.

Typo: appropriate

> 
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>

Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v5 02/10] ethdev: make error checking macros public
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 02/10] ethdev: make error checking macros public Declan Doherty
@ 2015-11-10 10:32           ` Bruce Richardson
  2015-11-10 15:50           ` Adrien Mazarguil
  1 sibling, 0 replies; 115+ messages in thread
From: Bruce Richardson @ 2015-11-10 10:32 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

On Mon, Nov 09, 2015 at 08:34:11PM +0000, Declan Doherty wrote:
> Move the function pointer and port id checking macros to rte_ethdev and
> rte_dev header files, so that they can be used in the static inline
> functions there. Also replace the RTE_LOG call within
> RTE_PMD_DEBUG_TRACE so this macro can be built with the -pedantic flag
> 
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>

Thanks, Declan. I'll rebase my other patchset that was very similar to this on
top of this patch and your previous one.

Acked-by: Bruce Richardson <bruce.richardson@intel.com>
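
To illustrate the motivation: once these are public, a static inline in an
ethdev header can guard itself with the same checks. A hypothetical sketch
(the function name and dev_ops member below are invented for illustration,
they are not part of this patch):

	static inline int
	rte_eth_dummy_op(uint8_t port_id)
	{
		struct rte_eth_dev *dev;

		RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);

		dev = &rte_eth_devices[port_id];
		RTE_FUNC_PTR_OR_ERR_RET(dev->dev_ops->dummy_op, -ENOTSUP);

		return (*dev->dev_ops->dummy_op)(dev);
	}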

> ---
>  lib/librte_eal/common/include/rte_dev.h | 52 +++++++++++++++++++++++++++++++
>  lib/librte_ether/rte_ethdev.c           | 54 ---------------------------------
>  lib/librte_ether/rte_ethdev.h           | 26 ++++++++++++++++
>  3 files changed, 78 insertions(+), 54 deletions(-)
> 
> diff --git a/lib/librte_eal/common/include/rte_dev.h b/lib/librte_eal/common/include/rte_dev.h
> index f601d21..fd09b3d 100644
> --- a/lib/librte_eal/common/include/rte_dev.h
> +++ b/lib/librte_eal/common/include/rte_dev.h
> @@ -46,8 +46,60 @@
>  extern "C" {
>  #endif
>  
> +#include <stdio.h>
>  #include <sys/queue.h>
>  
> +#include <rte_log.h>
> +
> +__attribute__((format(printf, 2, 0)))
> +static inline void
> +rte_pmd_debug_trace(const char *func_name, const char *fmt, ...)
> +{
> +	va_list ap;
> +
> +	va_start(ap, fmt);
> +	char buffer[vsnprintf(NULL, 0, fmt, ap)];
> +
> +	va_end(ap);
> +
> +	va_start(ap, fmt);
> +	vsnprintf(buffer, sizeof(buffer), fmt, ap);
> +	va_end(ap);
> +
> +	rte_log(RTE_LOG_ERR, RTE_LOGTYPE_PMD, "%s: %s", func_name, buffer);
> +}
> +
> +/* Macros for checking for restricting functions to primary instance only */
> +#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> +		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
> +		return retval; \
> +	} \
> +} while (0)
> +
> +#define RTE_PROC_PRIMARY_OR_RET() do { \
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> +		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
> +		return; \
> +	} \
> +} while (0)
> +
> +/* Macros to check for invalid function pointers */
> +#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
> +	if ((func) == NULL) { \
> +		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
> +		return retval; \
> +	} \
> +} while (0)
> +
> +#define RTE_FUNC_PTR_OR_RET(func) do { \
> +	if ((func) == NULL) { \
> +		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
> +		return; \
> +	} \
> +} while (0)
> +
> +
>  /** Double linked list of device drivers. */
>  TAILQ_HEAD(rte_driver_list, rte_driver);
>  
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 7387f65..d3c8aba 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -69,60 +69,6 @@
>  #include "rte_ether.h"
>  #include "rte_ethdev.h"
>  
> -#ifdef RTE_LIBRTE_ETHDEV_DEBUG
> -#define RTE_PMD_DEBUG_TRACE(fmt, args...) do {do { \
> -		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
> -	} while (0)
> -#else
> -#define RTE_PMD_DEBUG_TRACE(fmt, args...)
> -#endif
> -
> -/* Macros for checking for restricting functions to primary instance only */
> -#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
> -	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> -		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
> -		return (retval); \
> -	} \
> -} while (0)
> -
> -#define RTE_PROC_PRIMARY_OR_RET() do { \
> -	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> -		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
> -		return; \
> -	} \
> -} while (0)
> -
> -/* Macros to check for invalid function pointers in dev_ops structure */
> -#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
> -	if ((func) == NULL) { \
> -		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
> -		return (retval); \
> -	} \
> -} while (0)
> -
> -#define RTE_FUNC_PTR_OR_RET(func) do { \
> -	if ((func) == NULL) { \
> -		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
> -		return; \
> -	} \
> -} while (0)
> -
> -/* Macros to check for valid port */
> -#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
> -	if (!rte_eth_dev_is_valid_port(port_id)) {  \
> -		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
> -		return retval; \
> -	} \
> -} while (0)
> -
> -#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
> -	if (!rte_eth_dev_is_valid_port(port_id)) { \
> -		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
> -		return; \
> -	} \
> -} while (0)
> -
> -
>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>  static struct rte_eth_dev_data *rte_eth_dev_data;
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 48a540d..9b07a0b 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -172,6 +172,8 @@ extern "C" {
>  
>  #include <stdint.h>
>  
> +#include <rte_dev.h>
> +
>  /* Use this macro to check if LRO API is supported */
>  #define RTE_ETHDEV_HAS_LRO_SUPPORT
>  
> @@ -931,6 +933,30 @@ struct rte_eth_dev_callback;
>  /** @internal Structure to keep track of registered callbacks */
>  TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
>  
> +
> +#ifdef RTE_LIBRTE_ETHDEV_DEBUG
> +#define RTE_PMD_DEBUG_TRACE(...) \
> +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
> +#else
> +#define RTE_PMD_DEBUG_TRACE(fmt, args...)
> +#endif
> +
> +
> +/* Macros to check for valid port */
> +#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
> +	if (!rte_eth_dev_is_valid_port(port_id)) { \
> +		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
> +		return retval; \
> +	} \
> +} while (0)
> +
> +#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
> +	if (!rte_eth_dev_is_valid_port(port_id)) { \
> +		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
> +		return; \
> +	} \
> +} while (0)
> +
>  /*
>   * Definitions of all functions exported by an Ethernet driver through the
>   * the generic structure of type *eth_dev_ops* supplied in the *rte_eth_dev*
> -- 
> 2.4.3
> 

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v5 02/10] ethdev: make error checking macros public
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 02/10] ethdev: make error checking macros public Declan Doherty
  2015-11-10 10:32           ` Bruce Richardson
@ 2015-11-10 15:50           ` Adrien Mazarguil
  2015-11-10 17:00             ` Declan Doherty
  1 sibling, 1 reply; 115+ messages in thread
From: Adrien Mazarguil @ 2015-11-10 15:50 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

On Mon, Nov 09, 2015 at 08:34:11PM +0000, Declan Doherty wrote:
> Move the function pointer and port id checking macros to rte_ethdev and
> rte_dev header files, so that they can be used in the static inline
> functions there. Also replace the RTE_LOG call within
> RTE_PMD_DEBUG_TRACE so this macro can be built with the -pedantic flag
> 
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
>  lib/librte_eal/common/include/rte_dev.h | 52 +++++++++++++++++++++++++++++++
>  lib/librte_ether/rte_ethdev.c           | 54 ---------------------------------
>  lib/librte_ether/rte_ethdev.h           | 26 ++++++++++++++++
>  3 files changed, 78 insertions(+), 54 deletions(-)
> 
> diff --git a/lib/librte_eal/common/include/rte_dev.h b/lib/librte_eal/common/include/rte_dev.h
> index f601d21..fd09b3d 100644
> --- a/lib/librte_eal/common/include/rte_dev.h
> +++ b/lib/librte_eal/common/include/rte_dev.h
> @@ -46,8 +46,60 @@
>  extern "C" {
>  #endif
>  
> +#include <stdio.h>
>  #include <sys/queue.h>
>  
> +#include <rte_log.h>
> +
> +__attribute__((format(printf, 2, 0)))
> +static inline void
> +rte_pmd_debug_trace(const char *func_name, const char *fmt, ...)
> +{
> +	va_list ap;
> +
> +	va_start(ap, fmt);

I suggest adding an empty line here since we're mixing code and
declarations.

> +	char buffer[vsnprintf(NULL, 0, fmt, ap)];

I forgot an extra byte for trailing '\0' in my original comment, the above
line should read:

 char buffer[vsnprintf(NULL, 0, fmt, ap) + 1];

Otherwise the last character will be missing. Did you test that function?

> +
> +	va_end(ap);
> +
> +	va_start(ap, fmt);
> +	vsnprintf(buffer, sizeof(buffer), fmt, ap);
> +	va_end(ap);
> +
> +	rte_log(RTE_LOG_ERR, RTE_LOGTYPE_PMD, "%s: %s", func_name, buffer);
> +}
> +
> +/* Macros for checking for restricting functions to primary instance only */
> +#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> +		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
> +		return retval; \
> +	} \
> +} while (0)
> +
> +#define RTE_PROC_PRIMARY_OR_RET() do { \
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> +		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
> +		return; \
> +	} \
> +} while (0)
> +
> +/* Macros to check for invalid function pointers */
> +#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
> +	if ((func) == NULL) { \
> +		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
> +		return retval; \
> +	} \
> +} while (0)
> +
> +#define RTE_FUNC_PTR_OR_RET(func) do { \
> +	if ((func) == NULL) { \
> +		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
> +		return; \
> +	} \
> +} while (0)
> +
> +
>  /** Double linked list of device drivers. */
>  TAILQ_HEAD(rte_driver_list, rte_driver);
>  
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 7387f65..d3c8aba 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -69,60 +69,6 @@
>  #include "rte_ether.h"
>  #include "rte_ethdev.h"
>  
> -#ifdef RTE_LIBRTE_ETHDEV_DEBUG
> -#define RTE_PMD_DEBUG_TRACE(fmt, args...) do {do { \
> -		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
> -	} while (0)
> -#else
> -#define RTE_PMD_DEBUG_TRACE(fmt, args...)
> -#endif
> -
> -/* Macros for checking for restricting functions to primary instance only */
> -#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
> -	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> -		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
> -		return (retval); \
> -	} \
> -} while (0)
> -
> -#define RTE_PROC_PRIMARY_OR_RET() do { \
> -	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
> -		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
> -		return; \
> -	} \
> -} while (0)
> -
> -/* Macros to check for invalid function pointers in dev_ops structure */
> -#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
> -	if ((func) == NULL) { \
> -		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
> -		return (retval); \
> -	} \
> -} while (0)
> -
> -#define RTE_FUNC_PTR_OR_RET(func) do { \
> -	if ((func) == NULL) { \
> -		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
> -		return; \
> -	} \
> -} while (0)
> -
> -/* Macros to check for valid port */
> -#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
> -	if (!rte_eth_dev_is_valid_port(port_id)) {  \
> -		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
> -		return retval; \
> -	} \
> -} while (0)
> -
> -#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
> -	if (!rte_eth_dev_is_valid_port(port_id)) { \
> -		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
> -		return; \
> -	} \
> -} while (0)
> -
> -
>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>  static struct rte_eth_dev_data *rte_eth_dev_data;
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 48a540d..9b07a0b 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -172,6 +172,8 @@ extern "C" {
>  
>  #include <stdint.h>
>  
> +#include <rte_dev.h>
> +
>  /* Use this macro to check if LRO API is supported */
>  #define RTE_ETHDEV_HAS_LRO_SUPPORT
>  
> @@ -931,6 +933,30 @@ struct rte_eth_dev_callback;
>  /** @internal Structure to keep track of registered callbacks */
>  TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
>  
> +
> +#ifdef RTE_LIBRTE_ETHDEV_DEBUG
> +#define RTE_PMD_DEBUG_TRACE(...) \
> +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
> +#else
> +#define RTE_PMD_DEBUG_TRACE(fmt, args...)
> +#endif
> +
> +
> +/* Macros to check for valid port */
> +#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
> +	if (!rte_eth_dev_is_valid_port(port_id)) { \
> +		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
> +		return retval; \
> +	} \
> +} while (0)
> +
> +#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
> +	if (!rte_eth_dev_is_valid_port(port_id)) { \
> +		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
> +		return; \
> +	} \
> +} while (0)
> +
>  /*
>   * Definitions of all functions exported by an Ethernet driver through the
>   * the generic structure of type *eth_dev_ops* supplied in the *rte_eth_dev*
> -- 
> 2.4.3
> 

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v5 02/10] ethdev: make error checking macros public
  2015-11-10 15:50           ` Adrien Mazarguil
@ 2015-11-10 17:00             ` Declan Doherty
  0 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:00 UTC (permalink / raw)
  To: dev, Bruce Richardson

On 10/11/15 15:50, Adrien Mazarguil wrote:
> On Mon, Nov 09, 2015 at 08:34:11PM +0000, Declan Doherty wrote:
>> Move the function pointer and port id checking macros to rte_ethdev and
>> rte_dev header files, so that they can be used in the static inline
>> functions there. Also replace the RTE_LOG call within
>> RTE_PMD_DEBUG_TRACE so this macro can be built with the -pedantic flag
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> ---
>>   lib/librte_eal/common/include/rte_dev.h | 52 +++++++++++++++++++++++++++++++
>>   lib/librte_ether/rte_ethdev.c           | 54 ---------------------------------
>>   lib/librte_ether/rte_ethdev.h           | 26 ++++++++++++++++
>>   3 files changed, 78 insertions(+), 54 deletions(-)
>>
>> diff --git a/lib/librte_eal/common/include/rte_dev.h b/lib/librte_eal/common/include/rte_dev.h
>> index f601d21..fd09b3d 100644
>> --- a/lib/librte_eal/common/include/rte_dev.h
>> +++ b/lib/librte_eal/common/include/rte_dev.h
>> @@ -46,8 +46,60 @@
>>   extern "C" {
>>   #endif
>>
>> +#include <stdio.h>
>>   #include <sys/queue.h>
>>
>> +#include <rte_log.h>
>> +
>> +__attribute__((format(printf, 2, 0)))
>> +static inline void
>> +rte_pmd_debug_trace(const char *func_name, const char *fmt, ...)
>> +{
>> +	va_list ap;
>> +
>> +	va_start(ap, fmt);
>
> I suggest adding an empty line here since we're mixing code and
> declarations.
>

sure no problem.

>> +	char buffer[vsnprintf(NULL, 0, fmt, ap)];
>
> I forgot an extra byte for trailing '\0' in my original comment, the above
> line should read:
>
>   char buffer[vsnprintf(NULL, 0, fmt, ap) + 1];
>
> Otherwise the last character will be missing. Did you test that function?
>

I didn't notice the truncation in the log message; the only character
missing was a "!", so it was easy to overlook. I'll push a new version
with this fix.
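
For reference, the helper with both of the suggested changes applied would
read as follows (a sketch assembled from the code quoted above, with the
extra byte for the terminating '\0'):

	__attribute__((format(printf, 2, 0)))
	static inline void
	rte_pmd_debug_trace(const char *func_name, const char *fmt, ...)
	{
		va_list ap;

		va_start(ap, fmt);

		/* size the buffer first; +1 leaves room for the trailing '\0' */
		char buffer[vsnprintf(NULL, 0, fmt, ap) + 1];

		va_end(ap);

		va_start(ap, fmt);
		vsnprintf(buffer, sizeof(buffer), fmt, ap);
		va_end(ap);

		rte_log(RTE_LOG_ERR, RTE_LOGTYPE_PMD, "%s: %s", func_name, buffer);
	}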

>> +
>> +	va_end(ap);
>> +
>> +	va_start(ap, fmt);
>> +	vsnprintf(buffer, sizeof(buffer), fmt, ap);
>> +	va_end(ap);
>> +
>> +	rte_log(RTE_LOG_ERR, RTE_LOGTYPE_PMD, "%s: %s", func_name, buffer);
>> +}
>> +
<snip>
>> --
>> 2.4.3
>>
>

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework
  2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
                           ` (9 preceding siblings ...)
  2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 10/10] l2fwd-crypto: crypto Declan Doherty
@ 2015-11-10 17:32         ` Declan Doherty
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
                             ` (10 more replies)
  10 siblings, 11 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

This series of patches defines a set of application burst oriented APIs for
asynchronous symmetric cryptographic functions within DPDK. It also contains a
poll mode driver cryptographic device framework for the implementation of
crypto devices within DPDK.

In the patch set we also have included 2 reference implementations of crypto
PMDs. Currently both implementations  support AES128-CBC with
HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a purely
 software PMD based on Intel's multi-buffer library, which utilises both
AES-NI instructions and vector operations to accelerate crypto operations and
the second PMD utilises Intel's Quick Assist Technology (on DH895xxC) to provide
hardware accelerated crypto operations.

The API set supports two functional modes of operation:

1, A session oriented mode. In this mode the user creates a crypto session
which defines all the immutable data required to perform a particular crypto
operation in advance, including cipher/hash algorithms and operations to be
performed as well as the keys to be used etc. The session is then referenced by
the crypto operation data structure which is a data structure specific to each
mbuf. It contains all mutable data about the crypto operation to be
performed, such as data offsets and lengths into the mbuf's data payload for
cipher and hash operations to be performed.

2, A session-less mode. In this mode the user is able to provision crypto
operations on an mbuf without the need to have a cached session created in
advance, but at the cost of entailing the overhead of calculating
authentication pre-computes and performing key expansions in-line with the
crypto operation. The crypto xform chain is directly attached to the op struct
in this mode, so the op struct now contains all of the immutable crypto operation
parameters that would be normally set within a session. Once all mutable and
immutable parameters are set the crypto operation data structure can be attached
to the specified mbuf and enqueued on a specified crypto device for processing.
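
As a rough sketch of the session oriented path (function names as used by
the l2fwd-crypto sample in this set; cdev_id, qp_id, ol_pool, cipher_xform,
hdr_len and payload_len are placeholder variables, and all error handling
is omitted):

	/* immutable parameters, set up once */
	struct rte_cryptodev_session *session =
			rte_cryptodev_session_create(cdev_id, &cipher_xform);

	/* per-mbuf mutable parameters */
	struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(ol_pool,
			RTE_PKTMBUF_OL_CRYPTO);
	rte_crypto_op_attach_session(&ol->op.crypto, session);
	ol->op.crypto.data.to_cipher.offset = hdr_len;
	ol->op.crypto.data.to_cipher.length = payload_len;
	rte_pktmbuf_offload_attach(m, ol);

	rte_cryptodev_enqueue_burst(cdev_id, qp_id, &m, 1);

In the session-less case the xform chain would instead be attached directly
to the op struct before the same attach and enqueue steps.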

The patch set contains the following features:
- Crypto device APIs and device framework
- Implementation of a software crypto PMD based on multi-buffer library
- Implementation of a hardware crypto PMD based on Intel QAT(DH895xxC)
- Unit and performance tests which give an example of utilising the crypto APIs.
- Sample application which performs crypto operations on the IP payload of the
  packets being forwarded

Current Status:
There is no support for chained mbufs and, as mentioned above, the PMDs
have currently implemented support for AES128-CBC/AES256-CBC
and HMAC_SHA1/SHA256/SHA512.

v6:
 - Fix 32-bit build issue caused by casting in new rte_pktmbuf_mtophys_offset macro
 - Fix truncation of log message by new rte_pmd_debug_trace inline function

v5:
 - Making ethdev macros for function pointer and port id checking public and
   available for use by the cryptodev. The initial two patches combine changes
   from the original cryptodev patch and the discussion in
   http://dpdk.org/ml/archives/dev/2015-November/027871.html
 - Split out changes to create new __rte_packed and __rte_aligned macros
   into separate patches from the main cryptodev patch set for clarity
 - further code cleaning, removal of currently unsupported gcm code from
   aesni_mb pmd
v4:
 - Some more EOF whitespace and checkpatch fixes

v3:
 - Fixes a document build error, which I missed in the V2
 - Fixes for remaining checkpatch errors
 - Disables QAT and AESNI_MB PMD being built by default as they have external
   library dependencies

v2: 
 - Introduces a new library to support attaching offload operations to a mbuf
 - Remove unused APIs from cryptodev
 - PMD code refactor due to new rte_mbuf_offload structure
 - General bug fixes and code tidy up


Declan Doherty (10):
  ethdev: rename macros to have RTE_ prefix
  ethdev: make error checking macros public
  eal: add __rte_packed /__rte_aligned macros
  mbuf: add new macros to get the physical address of data
  cryptodev: Initial DPDK Crypto APIs and device framework release
  mbuf_offload: library to support attaching offloads to a mbuf
  qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  aesni_mb_pmd: Initial implementation of multi buffer based crypto
    device
  app/test: add cryptodev unit and performance tests
  l2fwd-crypto: crypto

 MAINTAINERS                                        |   14 +
 app/test/Makefile                                  |    4 +
 app/test/test.c                                    |   92 +-
 app/test/test.h                                    |   34 +-
 app/test/test_cryptodev.c                          | 1986 +++++++++++++++++++
 app/test/test_cryptodev.h                          |   68 +
 app/test/test_cryptodev_perf.c                     | 2062 ++++++++++++++++++++
 app/test/test_link_bonding.c                       |    6 +-
 app/test/test_link_bonding_mode4.c                 |    7 +-
 app/test/test_link_bonding_rssconf.c               |    7 +-
 config/common_bsdapp                               |   37 +-
 config/common_linuxapp                             |   37 +-
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 doc/guides/cryptodevs/aesni_mb.rst                 |   76 +
 doc/guides/cryptodevs/index.rst                    |   43 +
 doc/guides/cryptodevs/qat.rst                      |  194 ++
 doc/guides/index.rst                               |    1 +
 drivers/Makefile                                   |    1 +
 drivers/crypto/Makefile                            |   38 +
 drivers/crypto/aesni_mb/Makefile                   |   63 +
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |  210 ++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |  669 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  298 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  229 +++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |    3 +
 drivers/crypto/qat/Makefile                        |   63 +
 .../qat/qat_adf/adf_transport_access_macros.h      |  174 ++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            |  316 +++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         |  404 ++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            |  306 +++
 drivers/crypto/qat/qat_adf/qat_algs.h              |  125 ++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   |  601 ++++++
 drivers/crypto/qat/qat_crypto.c                    |  561 ++++++
 drivers/crypto/qat/qat_crypto.h                    |  124 ++
 drivers/crypto/qat/qat_logs.h                      |   78 +
 drivers/crypto/qat/qat_qp.c                        |  429 ++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |    3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             |  137 ++
 examples/l2fwd-crypto/Makefile                     |   50 +
 examples/l2fwd-crypto/main.c                       | 1473 ++++++++++++++
 lib/Makefile                                       |    2 +
 lib/librte_cryptodev/Makefile                      |   60 +
 lib/librte_cryptodev/rte_crypto.h                  |  613 ++++++
 lib/librte_cryptodev/rte_cryptodev.c               | 1092 +++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |  649 ++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  549 ++++++
 lib/librte_cryptodev/rte_cryptodev_version.map     |   41 +
 lib/librte_eal/common/include/rte_dev.h            |   53 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eal/common/include/rte_memory.h         |   14 +-
 lib/librte_ether/rte_ethdev.c                      |  607 +++---
 lib/librte_ether/rte_ethdev.h                      |   26 +
 lib/librte_mbuf/rte_mbuf.h                         |   29 +
 lib/librte_mbuf_offload/Makefile                   |   52 +
 lib/librte_mbuf_offload/rte_mbuf_offload.c         |  100 +
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |  284 +++
 .../rte_mbuf_offload_version.map                   |    7 +
 mk/rte.app.mk                                      |    9 +
 59 files changed, 14830 insertions(+), 383 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 01/10] ethdev: rename macros to have RTE_ prefix
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 02/10] ethdev: make error checking macros public Declan Doherty
                             ` (9 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

The macros to check that the function pointers and port ids are valid
for an ethdev are potentially useful to have in a common headers for
use with all PMDs. However, since they would then become externally
visible, we apply the RTE_ & RTE_ETH_ prefix to them as appropriate.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_ether/rte_ethdev.c | 595 +++++++++++++++++++++---------------------
 1 file changed, 298 insertions(+), 297 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index e0e1dca..3bb25e4 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -70,58 +70,59 @@
 #include "rte_ethdev.h"
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
-#define PMD_DEBUG_TRACE(fmt, args...) do {                        \
+#define RTE_PMD_DEBUG_TRACE(fmt, args...) do { \
 		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
 	} while (0)
 #else
-#define PMD_DEBUG_TRACE(fmt, args...)
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
 #endif
 
 /* Macros for checking for restricting functions to primary instance only */
-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define PROC_PRIMARY_OR_RET() do { \
+#define RTE_PROC_PRIMARY_OR_RET() do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for invalid function pointers in dev_ops structure */
-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define FUNC_PTR_OR_RET(func) do { \
+#define RTE_FUNC_PTR_OR_RET(func) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for valid port */
-#define VALID_PORTID_OR_ERR_RET(port_id, retval) do {		\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return retval;					\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) {  \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return retval; \
+	} \
 } while (0)
 
-#define VALID_PORTID_OR_RET(port_id) do {			\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return;						\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return; \
+	} \
 } while (0)
 
+
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 static struct rte_eth_dev_data *rte_eth_dev_data;
@@ -244,7 +245,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 
 	port_id = rte_eth_dev_find_free_port();
 	if (port_id == RTE_MAX_ETHPORTS) {
-		PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
+		RTE_PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
 		return NULL;
 	}
 
@@ -252,7 +253,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 		rte_eth_dev_data_alloc();
 
 	if (rte_eth_dev_allocated(name) != NULL) {
-		PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
+		RTE_PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
 				name);
 		return NULL;
 	}
@@ -339,7 +340,7 @@ rte_eth_dev_init(struct rte_pci_driver *pci_drv,
 	if (diag == 0)
 		return 0;
 
-	PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
+	RTE_PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
 			pci_drv->name,
 			(unsigned) pci_dev->id.vendor_id,
 			(unsigned) pci_dev->id.device_id);
@@ -447,10 +448,10 @@ rte_eth_dev_get_device_type(uint8_t port_id)
 static int
 rte_eth_dev_get_addr_by_port(uint8_t port_id, struct rte_pci_addr *addr)
 {
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -463,10 +464,10 @@ rte_eth_dev_get_name_by_port(uint8_t port_id, char *name)
 {
 	char *tmp;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -483,7 +484,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id)
 	int i;
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -509,7 +510,7 @@ rte_eth_dev_get_port_by_addr(const struct rte_pci_addr *addr, uint8_t *port_id)
 	struct rte_pci_device *pci_dev = NULL;
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -536,7 +537,7 @@ rte_eth_dev_is_detachable(uint8_t port_id)
 	uint32_t dev_flags;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
@@ -735,7 +736,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
 		rxq = dev->data->rx_queues;
 
@@ -766,20 +767,20 @@ rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -796,20 +797,20 @@ rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -826,20 +827,20 @@ rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -856,20 +857,20 @@ rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -895,7 +896,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
 		txq = dev->data->tx_queues;
 
@@ -929,19 +930,19 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of RX queues requested (%u) is greater than max supported(%d)\n",
 			nb_rx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of TX queues requested (%u) is greater than max supported(%d)\n",
 			nb_tx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
@@ -949,11 +950,11 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
@@ -965,22 +966,22 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
 	if (nb_rx_q > dev_info.max_rx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
 				port_id, nb_rx_q, dev_info.max_rx_queues);
 		return -EINVAL;
 	}
 	if (nb_rx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
 				port_id, nb_tx_q, dev_info.max_tx_queues);
 		return -EINVAL;
 	}
 	if (nb_tx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
@@ -993,7 +994,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	if ((dev_conf->intr_conf.lsc == 1) &&
 		(!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))) {
-			PMD_DEBUG_TRACE("driver %s does not support lsc\n",
+			RTE_PMD_DEBUG_TRACE("driver %s does not support lsc\n",
 					dev->data->drv_name);
 			return -EINVAL;
 	}
@@ -1005,14 +1006,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	if (dev_conf->rxmode.jumbo_frame == 1) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" > max valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
 				(unsigned)dev_info.max_rx_pktlen);
 			return -EINVAL;
 		} else if (dev_conf->rxmode.max_rx_pkt_len < ETHER_MIN_LEN) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" < min valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
@@ -1032,14 +1033,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	diag = rte_eth_dev_rx_queue_config(dev, nb_rx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
 				port_id, diag);
 		return diag;
 	}
 
 	diag = rte_eth_dev_tx_queue_config(dev, nb_tx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		return diag;
@@ -1047,7 +1048,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		rte_eth_dev_tx_queue_config(dev, 0);
@@ -1086,7 +1087,7 @@ rte_eth_dev_config_restore(uint8_t port_id)
 			(dev->data->mac_pool_sel[i] & (1ULL << pool)))
 			(*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
 		else {
-			PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
 					port_id);
 			/* exit the loop but not return an error */
 			break;
@@ -1114,16 +1115,16 @@ rte_eth_dev_start(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
 
 	if (dev->data->dev_started != 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already started\n",
 			port_id);
 		return 0;
@@ -1138,7 +1139,7 @@ rte_eth_dev_start(uint8_t port_id)
 	rte_eth_dev_config_restore(port_id);
 
 	if (dev->data->dev_conf.intr_conf.lsc == 0) {
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 	return 0;
@@ -1151,15 +1152,15 @@ rte_eth_dev_stop(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
 
 	if (dev->data->dev_started == 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already stopped\n",
 			port_id);
 		return;
@@ -1176,13 +1177,13 @@ rte_eth_dev_set_link_up(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_up)(dev);
 }
 
@@ -1193,13 +1194,13 @@ rte_eth_dev_set_link_down(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_down)(dev);
 }
 
@@ -1210,12 +1211,12 @@ rte_eth_dev_close(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_close)(dev);
 
@@ -1238,24 +1239,24 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
 
 	/*
 	 * Check the size of the mbuf data buffer.
@@ -1264,7 +1265,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	 */
 	rte_eth_dev_info_get(port_id, &dev_info);
 	if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) {
-		PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
+		RTE_PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
 				mp->name, (int) mp->private_data_size,
 				(int) sizeof(struct rte_pktmbuf_pool_private));
 		return -ENOSPC;
@@ -1272,7 +1273,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	mbp_buf_size = rte_pktmbuf_data_room_size(mp);
 
 	if ((mbp_buf_size - RTE_PKTMBUF_HEADROOM) < dev_info.min_rx_bufsize) {
-		PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
+		RTE_PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
 				"(RTE_PKTMBUF_HEADROOM=%d + min_rx_bufsize(dev)"
 				"=%d)\n",
 				mp->name,
@@ -1288,7 +1289,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 			nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
 			nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
 
-		PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+		RTE_PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
 			"should be: <= %hu, = %hu, and a product of %hu\n",
 			nb_rx_desc,
 			dev_info.rx_desc_lim.nb_max,
@@ -1321,24 +1322,24 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
 
 	rte_eth_dev_info_get(port_id, &dev_info);
 
@@ -1354,10 +1355,10 @@ rte_eth_promiscuous_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
 	(*dev->dev_ops->promiscuous_enable)(dev);
 	dev->data->promiscuous = 1;
 }
@@ -1367,10 +1368,10 @@ rte_eth_promiscuous_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
 	dev->data->promiscuous = 0;
 	(*dev->dev_ops->promiscuous_disable)(dev);
 }
@@ -1380,7 +1381,7 @@ rte_eth_promiscuous_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->promiscuous;
@@ -1391,10 +1392,10 @@ rte_eth_allmulticast_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
 	(*dev->dev_ops->allmulticast_enable)(dev);
 	dev->data->all_multicast = 1;
 }
@@ -1404,10 +1405,10 @@ rte_eth_allmulticast_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
 	dev->data->all_multicast = 0;
 	(*dev->dev_ops->allmulticast_disable)(dev);
 }
@@ -1417,7 +1418,7 @@ rte_eth_allmulticast_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->all_multicast;
@@ -1442,13 +1443,13 @@ rte_eth_link_get(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 1);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1459,13 +1460,13 @@ rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 0);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1476,12 +1477,12 @@ rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	memset(stats, 0, sizeof(*stats));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
 	(*dev->dev_ops->stats_get)(dev, stats);
 	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
 	return 0;
@@ -1492,10 +1493,10 @@ rte_eth_stats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
 	(*dev->dev_ops->stats_reset)(dev);
 }
 
@@ -1510,7 +1511,7 @@ rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstats *xstats,
 	signed xcount = 0;
 	uint64_t val, *stats_ptr;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
@@ -1590,7 +1591,7 @@ rte_eth_xstats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	/* implemented by the driver */
@@ -1609,11 +1610,11 @@ set_queue_stats_mapping(uint8_t port_id, uint16_t queue_id, uint8_t stat_idx,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
 	return (*dev->dev_ops->queue_stats_mapping_set)
 			(dev, queue_id, stat_idx, is_rx);
 }
@@ -1647,14 +1648,14 @@ rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
 		.nb_align = 1,
 	};
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
 	dev_info->rx_desc_lim = lim;
 	dev_info->tx_desc_lim = lim;
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
 	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
 	dev_info->pci_dev = dev->pci_dev;
 	dev_info->driver_name = dev->data->drv_name;
@@ -1665,7 +1666,7 @@ rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 	ether_addr_copy(&dev->data->mac_addrs[0], mac_addr);
 }
@@ -1676,7 +1677,7 @@ rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	*mtu = dev->data->mtu;
@@ -1689,9 +1690,9 @@ rte_eth_dev_set_mtu(uint8_t port_id, uint16_t mtu)
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
 
 	ret = (*dev->dev_ops->mtu_set)(dev, mtu);
 	if (!ret)
@@ -1705,19 +1706,19 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
-		PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
 
 	if (vlan_id > 4095) {
-		PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
+		RTE_PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
 				port_id, (unsigned) vlan_id);
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
 
 	return (*dev->dev_ops->vlan_filter_set)(dev, vlan_id, on);
 }
@@ -1727,14 +1728,14 @@ rte_eth_dev_set_vlan_strip_on_queue(uint8_t port_id, uint16_t rx_queue_id, int o
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_strip_queue_set)(dev, rx_queue_id, on);
 
 	return 0;
@@ -1745,9 +1746,9 @@ rte_eth_dev_set_vlan_ether_type(uint8_t port_id, uint16_t tpid)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_tpid_set)(dev, tpid);
 
 	return 0;
@@ -1761,7 +1762,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	int mask = 0;
 	int cur, org = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	/*check which option changed by application*/
@@ -1790,7 +1791,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	if (mask == 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -1802,7 +1803,7 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	struct rte_eth_dev *dev;
 	int ret = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
@@ -1822,9 +1823,9 @@ rte_eth_dev_set_vlan_pvid(uint8_t port_id, uint16_t pvid, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_pvid_set)(dev, pvid, on);
 
 	return 0;
@@ -1835,9 +1836,9 @@ rte_eth_dev_flow_ctrl_get(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
 	memset(fc_conf, 0, sizeof(*fc_conf));
 	return (*dev->dev_ops->flow_ctrl_get)(dev, fc_conf);
 }
@@ -1847,14 +1848,14 @@ rte_eth_dev_flow_ctrl_set(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if ((fc_conf->send_xon != 0) && (fc_conf->send_xon != 1)) {
-		PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
 	return (*dev->dev_ops->flow_ctrl_set)(dev, fc_conf);
 }
 
@@ -1863,9 +1864,9 @@ rte_eth_dev_priority_flow_ctrl_set(uint8_t port_id, struct rte_eth_pfc_conf *pfc
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
-		PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
 
@@ -1886,7 +1887,7 @@ rte_eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (reta_size != RTE_ALIGN(reta_size, RTE_RETA_GROUP_SIZE)) {
-		PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
+		RTE_PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
 							RTE_RETA_GROUP_SIZE);
 		return -EINVAL;
 	}
@@ -1911,7 +1912,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (max_rxq == 0) {
-		PMD_DEBUG_TRACE("No receive queue is available\n");
+		RTE_PMD_DEBUG_TRACE("No receive queue is available\n");
 		return -EINVAL;
 	}
 
@@ -1920,7 +1921,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
-			PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
+			RTE_PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
 				"the maximum rxq index: %u\n", idx, shift,
 				reta_conf[idx].reta[shift], max_rxq);
 			return -EINVAL;
@@ -1938,7 +1939,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	/* Check mask bits */
 	ret = rte_eth_check_reta_mask(reta_conf, reta_size);
 	if (ret < 0)
@@ -1952,7 +1953,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	if (ret < 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
 	return (*dev->dev_ops->reta_update)(dev, reta_conf, reta_size);
 }
 
@@ -1965,7 +1966,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 	int ret;
 
 	if (port_id >= nb_ports) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
@@ -1975,7 +1976,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 		return ret;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
 	return (*dev->dev_ops->reta_query)(dev, reta_conf, reta_size);
 }
 
@@ -1985,16 +1986,16 @@ rte_eth_dev_rss_hash_update(uint8_t port_id, struct rte_eth_rss_conf *rss_conf)
 	struct rte_eth_dev *dev;
 	uint16_t rss_hash_protos;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	rss_hash_protos = rss_conf->rss_hf;
 	if ((rss_hash_protos != 0) &&
 	    ((rss_hash_protos & ETH_RSS_PROTO_MASK) == 0)) {
-		PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
+		RTE_PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
 				rss_hash_protos);
 		return -EINVAL;
 	}
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_update)(dev, rss_conf);
 }
 
@@ -2004,9 +2005,9 @@ rte_eth_dev_rss_hash_conf_get(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_conf_get)(dev, rss_conf);
 }
 
@@ -2016,19 +2017,19 @@ rte_eth_dev_udp_tunnel_add(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_add)(dev, udp_tunnel);
 }
 
@@ -2038,20 +2039,20 @@ rte_eth_dev_udp_tunnel_delete(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_del)(dev, udp_tunnel);
 }
 
@@ -2060,9 +2061,9 @@ rte_eth_led_on(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_on)(dev);
 }
 
@@ -2071,9 +2072,9 @@ rte_eth_led_off(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_off)(dev);
 }
 
@@ -2107,17 +2108,17 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	int index;
 	uint64_t pool_mask;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
 
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
 	if (pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
+		RTE_PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -2125,7 +2126,7 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	if (index < 0) {
 		index = get_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 				port_id);
 			return -ENOSPC;
 		}
@@ -2155,13 +2156,13 @@ rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr)
 	struct rte_eth_dev *dev;
 	int index;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
 
 	index = get_mac_addr_index(port_id, addr);
 	if (index == 0) {
-		PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
 		return -EADDRINUSE;
 	} else if (index < 0)
 		return 0;  /* Do nothing if address wasn't found */
@@ -2183,13 +2184,13 @@ rte_eth_dev_default_mac_addr_set(uint8_t port_id, struct ether_addr *addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (!is_valid_assigned_ether_addr(addr))
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
 
 	/* Update default address in NIC data structure */
 	ether_addr_copy(addr, &dev->data->mac_addrs[0]);
@@ -2207,22 +2208,22 @@ rte_eth_dev_set_vf_rxmode(uint8_t port_id,  uint16_t vf,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
 		return -EINVAL;
 	}
 
 	if (rx_mode == 0) {
-		PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx_mode)(dev, vf, rx_mode, on);
 }
 
@@ -2257,11 +2258,11 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
@@ -2273,20 +2274,20 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 
 	if (index < 0) {
 		if (!on) {
-			PMD_DEBUG_TRACE("port %d: the MAC address was not "
+			RTE_PMD_DEBUG_TRACE("port %d: the MAC address was not "
 				"set in UTA\n", port_id);
 			return -EINVAL;
 		}
 
 		index = get_hash_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 					port_id);
 			return -ENOSPC;
 		}
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
 	ret = (*dev->dev_ops->uc_hash_table_set)(dev, addr, on);
 	if (ret == 0) {
 		/* Update address in NIC data structure */
@@ -2306,11 +2307,11 @@ rte_eth_dev_uc_all_hash_table_set(uint8_t port_id, uint8_t on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
 	return (*dev->dev_ops->uc_all_hash_table_set)(dev, on);
 }
 
@@ -2321,18 +2322,18 @@ rte_eth_dev_set_vf_rx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx)(dev, vf, on);
 }
 
@@ -2343,18 +2344,18 @@ rte_eth_dev_set_vf_tx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_tx)(dev, vf, on);
 }
 
@@ -2364,22 +2365,22 @@ rte_eth_dev_set_vf_vlan_filter(uint8_t port_id, uint16_t vlan_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
 	if (vlan_id > ETHER_MAX_VLAN_ID) {
-		PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
 			vlan_id);
 		return -EINVAL;
 	}
 
 	if (vf_mask == 0) {
-		PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_vlan_filter)(dev, vlan_id,
 						   vf_mask, vlan_on);
 }
@@ -2391,26 +2392,26 @@ int rte_eth_set_queue_rate_limit(uint8_t port_id, uint16_t queue_idx,
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_link link;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (queue_idx > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("set queue rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:port %d: "
 				"invalid queue id=%d\n", port_id, queue_idx);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 			tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_queue_rate_limit)(dev, queue_idx, tx_rate);
 }
 
@@ -2424,26 +2425,26 @@ int rte_eth_set_vf_rate_limit(uint8_t port_id, uint16_t vf, uint16_t tx_rate,
 	if (q_msk == 0)
 		return 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (vf > dev_info.max_vfs) {
-		PMD_DEBUG_TRACE("set VF rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:port %d: "
 				"invalid vf id=%d\n", port_id, vf);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 				tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rate_limit)(dev, vf, tx_rate, q_msk);
 }
 
@@ -2454,14 +2455,14 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (mirror_conf->rule_type == 0) {
-		PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if (mirror_conf->dst_pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
+		RTE_PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
 				ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
@@ -2469,18 +2470,18 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 	if ((mirror_conf->rule_type & (ETH_MIRROR_VIRTUAL_POOL_UP |
 	     ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
 	    (mirror_conf->pool_mask == 0)) {
-		PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if ((mirror_conf->rule_type & ETH_MIRROR_VLAN) &&
 	    mirror_conf->vlan.vlan_mask == 0) {
-		PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_set)(dev, mirror_conf, rule_id, on);
 }
@@ -2490,10 +2491,10 @@ rte_eth_mirror_rule_reset(uint8_t port_id, uint8_t rule_id)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_reset)(dev, rule_id);
 }
@@ -2505,12 +2506,12 @@ rte_eth_rx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
@@ -2523,13 +2524,13 @@ rte_eth_tx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id],
@@ -2541,10 +2542,10 @@ rte_eth_rx_queue_count(uint8_t port_id, uint16_t queue_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
 	return (*dev->dev_ops->rx_queue_count)(dev, queue_id);
 }
 
@@ -2553,10 +2554,10 @@ rte_eth_rx_descriptor_done(uint8_t port_id, uint16_t queue_id, uint16_t offset)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
 	return (*dev->dev_ops->rx_descriptor_done)(dev->data->rx_queues[queue_id],
 						   offset);
 }
@@ -2573,7 +2574,7 @@ rte_eth_dev_callback_register(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2613,7 +2614,7 @@ rte_eth_dev_callback_unregister(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2676,14 +2677,14 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
@@ -2691,7 +2692,7 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 		vec = intr_handle->intr_vec[qid];
 		rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 		if (rc && rc != -EEXIST) {
-			PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+			RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 					" op %d epfd %d vec %u\n",
 					port_id, qid, op, epfd, vec);
 		}
@@ -2710,26 +2711,26 @@ rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id,
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
 		return -EINVAL;
 	}
 
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
 	vec = intr_handle->intr_vec[queue_id];
 	rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 	if (rc && rc != -EEXIST) {
-		PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+		RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 				" op %d epfd %d vec %u\n",
 				port_id, queue_id, op, epfd, vec);
 		return rc;
@@ -2745,13 +2746,13 @@ rte_eth_dev_rx_intr_enable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_enable)(dev, queue_id);
 }
 
@@ -2762,13 +2763,13 @@ rte_eth_dev_rx_intr_disable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_disable)(dev, queue_id);
 }
 
@@ -2777,10 +2778,10 @@ int rte_eth_dev_bypass_init(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
 	(*dev->dev_ops->bypass_init)(dev);
 	return 0;
 }
@@ -2790,10 +2791,10 @@ rte_eth_dev_bypass_state_show(uint8_t port_id, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_show)(dev, state);
 	return 0;
 }
@@ -2803,10 +2804,10 @@ rte_eth_dev_bypass_state_set(uint8_t port_id, uint32_t *new_state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_set)(dev, new_state);
 	return 0;
 }
@@ -2816,10 +2817,10 @@ rte_eth_dev_bypass_event_show(uint8_t port_id, uint32_t event, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_show)(dev, event, state);
 	return 0;
 }
@@ -2829,11 +2830,11 @@ rte_eth_dev_bypass_event_store(uint8_t port_id, uint32_t event, uint32_t state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_set)(dev, event, state);
 	return 0;
 }
@@ -2843,11 +2844,11 @@ rte_eth_dev_wd_timeout_store(uint8_t port_id, uint32_t timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_set)(dev, timeout);
 	return 0;
 }
@@ -2857,11 +2858,11 @@ rte_eth_dev_bypass_ver_show(uint8_t port_id, uint32_t *ver)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_ver_show)(dev, ver);
 	return 0;
 }
@@ -2871,11 +2872,11 @@ rte_eth_dev_bypass_wd_timeout_show(uint8_t port_id, uint32_t *wd_timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_show)(dev, wd_timeout);
 	return 0;
 }
@@ -2885,11 +2886,11 @@ rte_eth_dev_bypass_wd_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_reset)(dev);
 	return 0;
 }
@@ -2900,10 +2901,10 @@ rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type,
 				RTE_ETH_FILTER_NOP, NULL);
 }
@@ -2914,10 +2915,10 @@ rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type, filter_op, arg);
 }
 
@@ -3087,18 +3088,18 @@ rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
@@ -3111,18 +3112,18 @@ rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
@@ -3136,10 +3137,10 @@ rte_eth_dev_set_mc_addr_list(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
 	return dev->dev_ops->set_mc_addr_list(dev, mc_addr_set, nb_mc_addr);
 }
 
@@ -3148,10 +3149,10 @@ rte_eth_timesync_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_enable)(dev);
 }
 
@@ -3160,10 +3161,10 @@ rte_eth_timesync_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_disable)(dev);
 }
 
@@ -3173,10 +3174,10 @@ rte_eth_timesync_read_rx_timestamp(uint8_t port_id, struct timespec *timestamp,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_rx_timestamp)(dev, timestamp, flags);
 }
 
@@ -3185,10 +3186,10 @@ rte_eth_timesync_read_tx_timestamp(uint8_t port_id, struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_tx_timestamp)(dev, timestamp);
 }
 
@@ -3197,10 +3198,10 @@ rte_eth_dev_get_reg_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
 	return (*dev->dev_ops->get_reg_length)(dev);
 }
 
@@ -3209,10 +3210,10 @@ rte_eth_dev_get_reg_info(uint8_t port_id, struct rte_dev_reg_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
 	return (*dev->dev_ops->get_reg)(dev, info);
 }
 
@@ -3221,10 +3222,10 @@ rte_eth_dev_get_eeprom_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom_length)(dev);
 }
 
@@ -3233,10 +3234,10 @@ rte_eth_dev_get_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom)(dev, info);
 }
 
@@ -3245,10 +3246,10 @@ rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->set_eeprom)(dev, info);
 }
 
@@ -3259,14 +3260,14 @@ rte_eth_dev_get_dcb_info(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	memset(dcb_info, 0, sizeof(struct rte_eth_dcb_info));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
 	return (*dev->dev_ops->get_dcb_info)(dev, dcb_info);
 }
 
@@ -3274,7 +3275,7 @@ void
 rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev)
 {
 	if ((eth_dev == NULL) || (pci_dev == NULL)) {
-		PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
+		RTE_PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
 				eth_dev, pci_dev);
 	}
 
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 02/10] ethdev: make error checking macros public
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-10 17:38             ` Adrien Mazarguil
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
                             ` (8 subsequent siblings)
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

Move the function pointer and port id checking macros to the rte_ethdev and
rte_dev header files, so that they can be used in the static inline
functions there. Also replace the RTE_LOG call within RTE_PMD_DEBUG_TRACE
so that this macro can be built with the -pedantic flag.

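For illustration, a static inline helper in rte_ethdev.h can now use the
macros directly. The helper below is hypothetical and only sketches the
intended usage:

	static inline int
	rte_eth_example_link_update(uint8_t port_id)
	{
		struct rte_eth_dev *dev;

		RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
		dev = &rte_eth_devices[port_id];

		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
		return (*dev->dev_ops->link_update)(dev, 0);
	}
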
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_eal/common/include/rte_dev.h | 53 ++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.c           | 54 ---------------------------------
 lib/librte_ether/rte_ethdev.h           | 26 ++++++++++++++++
 3 files changed, 79 insertions(+), 54 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_dev.h b/lib/librte_eal/common/include/rte_dev.h
index f601d21..f1b5507 100644
--- a/lib/librte_eal/common/include/rte_dev.h
+++ b/lib/librte_eal/common/include/rte_dev.h
@@ -46,8 +46,61 @@
 extern "C" {
 #endif
 
+#include <stdio.h>
 #include <sys/queue.h>
 
+#include <rte_log.h>
+
+__attribute__((format(printf, 2, 0)))
+static inline void
+rte_pmd_debug_trace(const char *func_name, const char *fmt, ...)
+{
+	va_list ap;
+
+	va_start(ap, fmt);
+
+	char buffer[vsnprintf(NULL, 0, fmt, ap) + 1];
+
+	va_end(ap);
+
+	va_start(ap, fmt);
+	vsnprintf(buffer, sizeof(buffer), fmt, ap);
+	va_end(ap);
+
+	rte_log(RTE_LOG_ERR, RTE_LOGTYPE_PMD, "%s: %s", func_name, buffer);
+}
+
+/* Macros for checking for restricting functions to primary instance only */
+#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		return; \
+	} \
+} while (0)
+
+/* Macros to check for invalid function pointers */
+#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
+		return; \
+	} \
+} while (0)
+
+
 /** Double linked list of device drivers. */
 TAILQ_HEAD(rte_driver_list, rte_driver);
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 3bb25e4..d3c8aba 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -69,60 +69,6 @@
 #include "rte_ether.h"
 #include "rte_ethdev.h"
 
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-#define RTE_PMD_DEBUG_TRACE(fmt, args...) do { \
-		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
-	} while (0)
-#else
-#define RTE_PMD_DEBUG_TRACE(fmt, args...)
-#endif
-
-/* Macros for checking for restricting functions to primary instance only */
-#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define RTE_PROC_PRIMARY_OR_RET() do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for invalid function pointers in dev_ops structure */
-#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
-	if ((func) == NULL) { \
-		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define RTE_FUNC_PTR_OR_RET(func) do { \
-	if ((func) == NULL) { \
-		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for valid port */
-#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
-	if (!rte_eth_dev_is_valid_port(port_id)) {  \
-		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return retval; \
-	} \
-} while (0)
-
-#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
-	if (!rte_eth_dev_is_valid_port(port_id)) { \
-		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return; \
-	} \
-} while (0)
-
-
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 static struct rte_eth_dev_data *rte_eth_dev_data;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 48a540d..9b07a0b 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -172,6 +172,8 @@ extern "C" {
 
 #include <stdint.h>
 
+#include <rte_dev.h>
+
 /* Use this macro to check if LRO API is supported */
 #define RTE_ETHDEV_HAS_LRO_SUPPORT
 
@@ -931,6 +933,30 @@ struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
 TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
+#endif
+
+
+/* Macros to check for valid port */
+#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return; \
+	} \
+} while (0)
+
 /*
  * Definitions of all functions exported by an Ethernet driver through the
  * the generic structure of type *eth_dev_ops* supplied in the *rte_eth_dev*
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 03/10] eal: add __rte_packed /__rte_aligned macros
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 02/10] ethdev: make error checking macros public Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-13 15:35             ` Thomas Monjalon
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 04/10] mbuf: add new macros to get the physical address of data Declan Doherty
                             ` (7 subsequent siblings)
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

Adding a new macro for specifying the __aligned__ attribute, and updating
the current __rte_cache_aligned macro to use it.

Also adding a new macro to specify the __packed__ attribute.

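A minimal usage sketch (the structures below are illustrative only):

	struct example_hdr {
		uint8_t  type;
		uint32_t seq;
	} __rte_packed;			/* no padding between members */

	struct example_ctx {
		uint64_t counters[4];
	} __rte_aligned(64);		/* force 64-byte alignment */
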
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_eal/common/include/rte_memory.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..18fd952 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -104,7 +114,7 @@ struct rte_memseg {
 	 /**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 04/10] mbuf: add new macros to get the physical address of data
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
                             ` (2 preceding siblings ...)
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
                             ` (6 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

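These macros mirror rte_pktmbuf_mtod()/rte_pktmbuf_mtod_offset() but return
a phys_addr_t rather than a virtual address, which hardware offload PMDs
need when programming DMA. A minimal sketch (the helper name is
hypothetical):

	static inline phys_addr_t
	example_payload_phys(struct rte_mbuf *m)
	{
		/* physical address of byte 16 of the mbuf data area */
		return rte_pktmbuf_mtophys_offset(m, 16);
	}
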
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4a93189..ef1ee26 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1622,6 +1622,29 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address that points to an offset of the
+ * start of the data in the mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) \
+	(phys_addr_t)((m)->buf_physaddr + (m)->data_off + (o))
+
+/**
+ * A macro that returns the physical address that points to the start of the
+ * data in the mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
+
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
                             ` (3 preceding siblings ...)
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 04/10] mbuf: add new macros to get the physical address of data Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-13 15:44             ` Thomas Monjalon
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
                             ` (5 subsequent siblings)
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

Features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs for allocation of the crypto
   operation structure used to specify the crypto operations to be
   performed on a particular mbuf.
 - Extension of mbuf to contain crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations.

Changes from RFC:
 - Session management API changes to support specification of crypto
   transform (xform) chains using a linked list of xforms (see the
   sketch below).
 - Changes to the crypto operation struct as a result of session
   management changes.
 - Some movement of common macros shared by cryptodevs and ethdevs to
   common headers

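A minimal sketch of the new xform chaining for a cipher-then-hash session
(key sizes, buffer names and the configured device id dev_id are
illustrative only):

	uint8_t aes_key[16], hmac_key[64];	/* filled in by the caller */

	struct rte_crypto_xform auth_xform = {
		.next = NULL,
		.type = RTE_CRYPTO_XFORM_AUTH,
		.auth = {
			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
			.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
			.key = { .data = hmac_key, .length = sizeof(hmac_key) },
			.digest_length = 20,	/* SHA1 digest size */
		}
	};

	struct rte_crypto_xform cipher_xform = {
		.next = &auth_xform,		/* cipher, then hash */
		.type = RTE_CRYPTO_XFORM_CIPHER,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.key = { .data = aes_key, .length = sizeof(aes_key) },
		}
	};

	/* Session mode: create the session once from the head of the chain
	 * and attach it to each op with rte_crypto_op_attach_session().
	 * Session-less mode: point the op's xform at &cipher_xform instead. */
	struct rte_cryptodev_session *sess =
		rte_cryptodev_session_create(dev_id, &cipher_xform);
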
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                                    |    4 +
 config/common_bsdapp                           |   10 +-
 config/common_linuxapp                         |   10 +-
 doc/api/doxy-api-index.md                      |    1 +
 doc/api/doxy-api.conf                          |    1 +
 lib/Makefile                                   |    1 +
 lib/librte_cryptodev/Makefile                  |   60 ++
 lib/librte_cryptodev/rte_crypto.h              |  613 +++++++++++++
 lib/librte_cryptodev/rte_cryptodev.c           | 1092 ++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h           |  649 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h       |  549 ++++++++++++
 lib/librte_cryptodev/rte_cryptodev_version.map |   41 +
 lib/librte_eal/common/include/rte_log.h        |    1 +
 mk/rte.app.mk                                  |    1 +
 14 files changed, 3031 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index c8be5d2..68c6d74 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -196,6 +196,10 @@ M: Thomas Monjalon <thomas.monjalon@6wind.com>
 F: lib/librte_ether/
 F: scripts/test-null.sh
 
+Crypto API
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_cryptodev
+F: doc/guides/cryptodevs
 
 Drivers
 -------
diff --git a/config/common_bsdapp b/config/common_bsdapp
index fba29e5..8803350 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -147,6 +147,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 7248262..815bea3 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -145,6 +145,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 9727b83..4c5c1b4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -40,6 +40,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..81fa3fc
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_cryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_cryptodev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
\ No newline at end of file
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..7cf0439
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,613 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_CCM* element of the
+	 * *rte_crypto_hash_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_hash_setup_data* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation
+	 */
+	RTE_CRYPTO_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_GCM* element of the
+	 * *rte_crypto_hash_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_hash_setup_data* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	phys_addr_t phys_addr;
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains data relating to Cipher (Encryption and Decryption)
+ * used to create a session.
+ */
+struct rte_crypto_cipher_xform {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid.
+	 */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both
+	 * keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to total length of
+	 *    the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
+	 **/
+};
+
+/** Symmetric Authentication / Hash Algorithms */
+enum rte_crypto_auth_algorithm {
+	RTE_CRYPTO_AUTH_NULL = 1,
+	/**< NULL hash algorithm. */
+
+	RTE_CRYPTO_AUTH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_AUTH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related rte_crypto_cipher_xform structure in the
+	 * session context, or the corresponding xform in the crypto
+	 * operation structure MUST be set for a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_AUTH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_xform structure in the session context, or
+	 * the corresponding xform in the crypto operation structure MUST
+	 * be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	* is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	* rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	* rte_crypto_cipher_xform structure in the session context, or
+	* the corresponding xform in the crypto operation structure MUST
+	* be set for a session-less crypto operation.
+	*/
+	RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_AUTH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_AUTH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_AUTH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_AUTH_SHA1,
+	/**< 160 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA1_HMAC,
+	/**< HMAC using 160 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_AUTH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Authentication / Hash Operations */
+enum rte_crypto_auth_operation {
+	RTE_CRYPTO_AUTH_OP_VERIFY,	/**< Verify authentication digest */
+	RTE_CRYPTO_AUTH_OP_GENERATE	/**< Generate authentication digest */
+};
+
+/**
+ * Authentication / Hash transform data.
+ *
+ * This structure contains data relating to an authentication/hash crypto
+ * transform. The fields op, algo and digest_length are common to all
+ * authentication transforms and MUST be set.
+ */
+struct rte_crypto_auth_xform {
+	enum rte_crypto_auth_operation op;
+	/**< Authentication operation type */
+	enum rte_crypto_auth_algorithm algo;
+	/**< Authentication algorithm selection */
+
+	struct rte_crypto_key key;		/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the callers responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash then an error will be generated
+	 * by *rte_cryptodev_session_create* or by the
+	 * *rte_cryptodev_enqueue_burst* if using session-less APIs.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_AUTH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op structure.
+	 */
+};
+
+/** Crypto transformation types */
+enum rte_crypto_xform_type {
+	RTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
+	RTE_CRYPTO_XFORM_AUTH,			/**< Authentication xform */
+	RTE_CRYPTO_XFORM_CIPHER			/**< Cipher xform  */
+};
+
+/**
+ * Crypto transform structure.
+ *
+ * This is used to specify the crypto transforms required. Multiple transforms
+ * can be chained together to specify a chain of transforms such as
+ * authentication then cipher, or cipher then authentication. Each transform
+ * structure can hold a single transform; the type field is used to specify
+ * which transform is contained within the union.
+ */
+struct rte_crypto_xform {
+	struct rte_crypto_xform *next; /**< next xform in chain */
+
+	enum rte_crypto_xform_type type; /**< xform type */
+	union {
+		struct rte_crypto_auth_xform auth;
+		/**< Authentication / hash xform */
+		struct rte_crypto_cipher_xform cipher;
+		/**< Cipher xform */
+	};
+};
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has a session structure attached for immutable parameters or if all
+ * operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+/** Status of crypto operation */
+enum rte_crypto_op_status {
+	RTE_CRYPTO_OP_STATUS_SUCCESS,
+	/**< Operation completed successfully */
+	RTE_CRYPTO_OP_STATUS_NO_SUBMITTED,
+	/**< Operation not yet submitted to a cryptodev */
+	RTE_CRYPTO_OP_STATUS_ENQUEUED,
+	/**< Operation is enqueued on device */
+	RTE_CRYPTO_OP_STATUS_AUTH_FAILED,
+	/**< Authentication verification failed */
+	RTE_CRYPTO_OP_STATUS_INVALID_ARGS,
+	/**< Operation failed due to invalid arguments in request */
+	RTE_CRYPTO_OP_STATUS_ERROR,
+	/**< Error handling operation */
+};
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. It is used with the rte_cryptodev_enqueue_burst() call
+ * for performing cipher, hash, or combined hash and cipher operations.
+ */
+struct rte_crypto_op {
+	enum rte_crypto_op_sess_type type;
+	enum rte_crypto_op_status status;
+
+	struct {
+		struct rte_mbuf *m;	/**< Destination mbuf */
+		uint8_t offset;		/**< Data offset */
+	} dst;
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct rte_crypto_xform *xform;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_AUTH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC)
+			  * mode of operation, this field specifies the start
+			  * of the AAD data in the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source
+			  * buffer that the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member is set, it is a
+		 * pointer to the location where the digest result should be
+		 * inserted (in the case of digest generation) or where the
+		 * purported digest exists (in the case of digest
+		 * verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_auth_xform structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up
+		 * for the session in the @ref rte_crypto_auth_xform structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_AUTH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by
+		 *   the implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written in starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   space to round this up to the nearest multiple of the
+		 *   block size (16 bytes).
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+		uint32_t length;	/**< Length of additional authenticated data */
+	} additional_auth;
+	/**< Additional authentication parameters */
+
+	struct rte_mempool *pool;
+	/**< mempool used to allocate crypto op */
+
+	void *user_data;
+	/**< opaque pointer for user data */
+};
+
+
+/**
+ * Reset the fields of a crypto operation to their default values.
+ *
+ * @param op
+ *   The crypto operation to be reset.
+ */
+static inline void
+__rte_crypto_op_reset(struct rte_crypto_op *op)
+{
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+	op->dst.m = NULL;
+	op->dst.offset = 0;
+}
+
+/** Attach a session to a crypto operation */
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..edd1320
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,1092 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+#include "rte_cryptodev_pmd.h"
+
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+
+struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= { NULL },
+		.nb_devs		= 0,
+		.max_devs		= RTE_CRYPTO_MAX_DEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;		/**< Callback address */
+	void *cb_arg;				/**< Parameter for callback */
+	enum rte_cryptodev_event_type event;	/**< Interrupt event type */
+	uint32_t active;			/**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+int
+rte_cryptodev_get_dev_id(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if ((strcmp(rte_cryptodev_globals->devs[i].data->name, name)
+				== 0) &&
+				(rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED))
+			return i;
+
+	return -1;
+}
+
+uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
+uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached ==
+					RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	return dev->data->socket_id;
+}
+
+static inline int
+rte_cryptodev_data_alloc(uint8_t dev_id, struct rte_cryptodev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_cryptodev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_cryptodev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_cryptodev_data));
+
+	return 0;
+}
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached ==
+				RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_CRYPTO_MAX_DEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+	uint8_t dev_id;
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_CRYPTO_MAX_DEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (cryptodev->data == NULL) {
+		struct rte_cryptodev_data *cryptodev_data =
+				cryptodev_globals.data[dev_id];
+
+		int retval = rte_cryptodev_data_alloc(dev_id, &cryptodev_data,
+				socket_id);
+
+		if (retval < 0 || cryptodev_data == NULL)
+			return NULL;
+
+		cryptodev->data = cryptodev_data;
+
+		snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN,
+				"%s", name);
+
+		cryptodev->data->dev_id = dev_id;
+		cryptodev->data->socket_id = socket_id;
+		cryptodev->data->dev_started = 0;
+
+		cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+		cryptodev->pmd_type = type;
+
+		cryptodev_globals.nb_devs++;
+	}
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
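+	/* Name takes the format "bus:devid.function", e.g. "1:0.0" */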
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	int ret;
+
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	ret = rte_cryptodev_close(cryptodev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memory for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV,
+			rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket(
+						"cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memory for private "
+					"device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/*
+	 * Register PCI driver for physical device initialisation during
+	 * PCI probing
+	 */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+
+	return 0;
+}
+
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs,
+		int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	void **qp;
+	unsigned i;
+
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	CDEV_LOG_DEBUG("Setup %d queue pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > (dev_info.max_nb_queue_pairs)) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u",
+							nb_qpairs);
+			return -(ENOMEM);
+		}
+	} else { /* re-configure */
+		int ret;
+		uint16_t old_nb_queues = dev->data->nb_queue_pairs;
+
+		qp = dev->data->queue_pairs;
+
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release,
+				-ENOTSUP);
+
+		for (i = nb_qpairs; i < old_nb_queues; i++) {
+			ret = (*dev->dev_ops->queue_pair_release)(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	return rte_crypto_session_pool_create(dev, config->session_mp.nb_objs,
+			config->session_mp.cache_size, config->socket_id);
+}
+
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int retval;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -1;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		CDEV_LOG_ERR("Device %u must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	/* We can't close the device if there are outstanding sessions in use */
+	if (dev->data->session_pool != NULL) {
+		if (!rte_mempool_full(dev->data->session_pool)) {
+			CDEV_LOG_ERR("dev_id=%u close failed, session mempool "
+					"has sessions still in use, free "
+					"all sessions before calling close",
+					(unsigned)dev_id);
+			return -EBUSY;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	retval = (*dev->dev_ops->dev_close)(dev);
+
+	if (retval < 0)
+		return retval;
+
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (dev_id >= cryptodev_globals.nb_devs) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+static void
+rte_crypto_session_init(struct rte_mempool *mp,
+		void *opaque_arg,
+		void *_sess,
+		__rte_unused unsigned i)
+{
+	struct rte_cryptodev_session *sess = _sess;
+	struct rte_cryptodev *dev = opaque_arg;
+
+	memset(sess, 0, mp->elt_size);
+
+	sess->dev_id = dev->data->dev_id;
+	sess->type = dev->dev_type;
+	sess->mp = mp;
+
+	if (dev->dev_ops->session_initialize)
+		(*dev->dev_ops->session_initialize)(mp, sess->_private);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id)
+{
+	char mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	unsigned priv_sess_size;
+
+	unsigned n = snprintf(mp_name, sizeof(mp_name), "cdev_%d_sess_mp",
+			dev->data->dev_id);
+	if (n >= sizeof(mp_name)) {
+		CDEV_LOG_ERR("Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);
+	priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
+	if (priv_sess_size == 0) {
+		CDEV_LOG_ERR("%s returned an invalid private session size",
+						dev->data->name);
+		return -ENOMEM;
+	}
+
+	unsigned elt_size = sizeof(struct rte_cryptodev_session) +
+			priv_sess_size;
+
+	dev->data->session_pool = rte_mempool_lookup(mp_name);
+	if (dev->data->session_pool != NULL) {
+		if ((dev->data->session_pool->elt_size != elt_size) ||
+				(dev->data->session_pool->cache_size <
+				obj_cache_size) ||
+				(dev->data->session_pool->size < nb_objs)) {
+
+			CDEV_LOG_ERR("%s mempool already exists with different"
+					" initialization parameters", mp_name);
+			dev->data->session_pool = NULL;
+			return -ENOMEM;
+		}
+	} else {
+		dev->data->session_pool = rte_mempool_create(
+				mp_name, /* mempool name */
+				nb_objs, /* number of elements */
+				elt_size, /* element size */
+				obj_cache_size, /* cache size */
+				0, /* private data size */
+				NULL, /* mempool constructor */
+				NULL, /* mempool constructor arg */
+				rte_crypto_session_init, /* obj constructor */
+				dev, /* obj constructor arg */
+				socket_id, /* socket id */
+				0); /* flags */
+
+		if (dev->data->session_pool == NULL) {
+			CDEV_LOG_ERR("%s mempool allocation failed", mp_name);
+			return -ENOMEM;
+		}
+	}
+
+	CDEV_LOG_DEBUG("%s mempool created!", mp_name);
+	return 0;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id, struct rte_crypto_xform *xform)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_session *sess;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Allocate a session structure from the session pool */
+	if (rte_mempool_get(dev->data->session_pool, (void **)&sess)) {
+		CDEV_LOG_ERR("Couldn't get object from session mempool");
+		return NULL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);
+	if (dev->dev_ops->session_configure(dev, xform, sess->_private) ==
+			NULL) {
+		CDEV_LOG_ERR("dev_id %d failed to configure session details",
+				dev_id);
+
+		/* Return session to mempool */
+		rte_mempool_put(sess->mp, (void *)sess);
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id, struct rte_cryptodev_session *sess)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return sess;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check the session belongs to this device type */
+	if (sess->type != dev->dev_type)
+		return sess;
+
+	/* Let device implementation clear session material */
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
+	dev->dev_ops->session_clear(dev, (void *)sess->_private);
+
+	/* Return session to mempool */
+	rte_mempool_put(sess->mp, (void *)sess);
+
+	return NULL;
+}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..e799447
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,649 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stddef.h>
+
+#include "rte_crypto.h"
+#include <rte_dev.h>
+
+#define CRYPTODEV_NAME_NULL_PMD		("cryptodev_null_pmd")
+/**< Null crypto PMD device name */
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
+	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...)					\
+		RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...)				\
+		RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+				dev, __func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)		\
+
+#define CDEV_PMD_TRACE(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n",		\
+				dev, __func__, ## args)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+/**  Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+
+	/**< Maximum number of queue pairs supported by device. */
+	/**< Maximum number of queues pairs supported by device. */
+	unsigned max_nb_sessions;
+	/**< Maximum number of sessions supported by device. */
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User-specified parameter to be passed to the user's
+ *			callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure is
+ * used for RDTSC counters for measuring crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;	/**< Accumulated time processing operations */
+	uint64_t t_min;		/**< Min time */
+	uint64_t t_max;		/**< Max time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;
+	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;
+	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;
+	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;
+	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf; /**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Options arguments for device.
+ *
+ * @return
+ * - On successful creation of the cryptodev the device index is returned,
+ *   which will be between 0 and rte_cryptodev_count().
+ * - In the case of a failure, returns -1.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
+
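+/*
+ * Illustrative usage sketch: instantiate the AES-NI multi-buffer PMD and
+ * look up its device id. The contents of the *args* string are PMD-specific;
+ * the option shown here is hypothetical.
+ *
+ *	if (rte_cryptodev_create_vdev(CRYPTODEV_NAME_AESNI_MB_PMD,
+ *			"max_nb_queue_pairs=2") < 0)
+ *		rte_exit(EXIT_FAILURE, "vdev creation failed\n");
+ *
+ *	int id = rte_cryptodev_get_dev_id(CRYPTODEV_NAME_AESNI_MB_PMD);
+ */
+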
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+extern int
+rte_cryptodev_get_dev_id(const char *name);
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+extern uint8_t
+rte_cryptodev_count(void);
+
+/**
+ * Get the number of crypto devices of a given type that have been
+ * successfully initialised.
+ *
+ * @param	type	Type of crypto device to count.
+ *
+ * @return
+ *   - The number of usable crypto devices of the given type.
+ */
+extern uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+
+/**
+ * Return the NUMA socket to which a device is connected
+ *
+ * @param dev_id
+ *   The identifier of the device
+ * @return
+ *   The NUMA socket id to which the device is connected, or
+ *   a default of zero if the socket could not be determined.
+ *   -1 if the dev_id value is out of range.
+ */
+extern int
+rte_cryptodev_socket_id(uint8_t dev_id);
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;
+	/**< Number of queue pairs to configure on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< Per-lcore object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure to
+ *				apply to the device.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
+
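+/*
+ * Illustrative configuration sketch, assuming one queue pair and a session
+ * mempool of 2048 objects; the sizes chosen here are arbitrary:
+ *
+ *	struct rte_cryptodev_config conf = {
+ *		.socket_id = rte_socket_id(),
+ *		.nb_queue_pairs = 1,
+ *		.session_mp = { .nb_objs = 2048, .cache_size = 64 }
+ *	};
+ *	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 4096 };
+ *
+ *	if (rte_cryptodev_configure(dev_id, &conf) < 0 ||
+ *			rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
+ *					rte_socket_id()) < 0 ||
+ *			rte_cryptodev_start(dev_id) < 0)
+ *		rte_exit(EXIT_FAILURE, "cryptodev %u setup failed\n", dev_id);
+ */
+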
+/**
+ * Start a device.
+ *
+ * The device start step is the last one and consists of setting the configured
+ * offload features and starting the device's queue pairs.
+ * On success, all basic functions exported by the API (enqueue/dequeue, and
+ * so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start()
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ */
+extern int
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a queue pair for a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pairs to set up. The
+ *				value must be in the range [0, nb_queue_pair
+ *				- 1] previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the DMA memory allocated for the queue
+ *				pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+/**
+ * Start a specified queue pair of a device. It is used
+ * when the deferred_start flag of the specified queue is true.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop specified queue pair of a device
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		The event type of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		The event type of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
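+/*
+ * Illustrative registration sketch; the handler below is hypothetical and
+ * simply logs the error event it was registered for:
+ *
+ *	static void
+ *	error_cb(uint8_t dev_id, enum rte_cryptodev_event_type event,
+ *			void *cb_arg)
+ *	{
+ *		printf("error event on cryptodev %u\n", dev_id);
+ *	}
+ *
+ *	rte_cryptodev_callback_register(dev_id, RTE_CRYPTODEV_EVENT_ERROR,
+ *			error_cb, NULL);
+ */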
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;
+	/**< Pointer to PMD dequeue function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/**< Pointer to PMD enqueue function. */
+
+	const struct rte_cryptodev_driver *driver;
+	/**< Driver for this device */
+	struct rte_cryptodev_data *data;
+	/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;
+	/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;
+	/**< Crypto device type */
+	enum pmd_type pmd_type;
+	/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+} __rte_cache_aligned;
+
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+/**
+ *
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t socket_id;
+	/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;
+	/**< Session memory pool */
+	void **queue_pairs;
+	/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;
+	/**< Number of device queue pairs. */
+
+	void *dev_private;
+	/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+extern struct rte_cryptodev *rte_cryptodevs;
+/**
+ *
+ * Dequeue a burst of processed packets from a queue of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_crypto_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain on the queue. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_crypto_dequeue_burst() function
+ * until a value less than *nb_pkts* is returned.
+ *
+ * The rte_crypto_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
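+
+/*
+ * Illustrative drain-loop sketch for the policy described above: keep
+ * invoking the dequeue while full bursts are returned. BURST_SIZE is an
+ * arbitrary application-defined constant:
+ *
+ *	struct rte_mbuf *pkts[BURST_SIZE];
+ *	uint16_t nb;
+ *
+ *	do {
+ *		nb = rte_cryptodev_dequeue_burst(dev_id, 0, pkts, BURST_SIZE);
+ *		// process the nb mbufs returned here
+ *	} while (nb == BURST_SIZE);
+ */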
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_crypto_enqueue_burst() function is invoked to place packets
+ * on the queue pair *qp_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_crypto_enqueue_burst() function returns the number of packets it
+ * actually sent. A return value equal to *nb_pkts* means that all packets
+ * have been sent.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which packets
+ *				are to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to be processed.
+ * @param	nb_pkts		The number of packets to enqueue.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue is full.
+ * The number of packets is 0 if the device hasn't been started.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return (*dev->enqueue_burst)(
+			dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
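+
+/*
+ * Illustrative retry sketch: re-offer any packets the device could not
+ * accept because its queue was full. *pkts* and *nb_to_send* are assumed to
+ * be set up by the caller (a real application would bound the retries):
+ *
+ *	uint16_t sent = 0;
+ *
+ *	while (sent < nb_to_send)
+ *		sent += rte_cryptodev_enqueue_burst(dev_id, 0,
+ *				&pkts[sent], nb_to_send - sent);
+ */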
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialize the immutable
+ * parameters of a symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst function is
+ * used. Each mbuf should contain a reference to the session
+ * pointer returned from this function within its crypto_op if a
+ * session-based operation is being provisioned. Memory to contain the session
+ * information is allocated from within a mempool managed by the cryptodev.
+ *
+ * The rte_cryptodev_session_free function must be called to free allocated
+ * memory when the session is no longer required.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	xform		Crypto transform chain.
+ *
+ * @return
+ *  Pointer to the created session or NULL
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_xform *xform);
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ *
+ * @return
+ *   NULL on successful freeing of session.
+ *   Session pointer on failure to free session.
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
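+
+/*
+ * Illustrative session lifecycle sketch. Population of the xform is elided,
+ * as its cipher/authentication fields are defined in rte_crypto.h:
+ *
+ *	struct rte_crypto_xform xform;
+ *	struct rte_cryptodev_session *sess;
+ *
+ *	// fill in xform cipher/auth parameters here (see rte_crypto.h)
+ *	sess = rte_cryptodev_session_create(dev_id, &xform);
+ *	if (sess == NULL)
+ *		rte_exit(EXIT_FAILURE, "session creation failed\n");
+ *
+ *	// ... reference sess from each mbuf's crypto op, enqueue/dequeue ...
+ *
+ *	if (rte_cryptodev_session_free(dev_id, sess) != NULL)
+ *		rte_exit(EXIT_FAILURE, "session could not be freed\n");
+ */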
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..d5fbe44
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,549 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These APIs are for crypto PMD use only; user applications should not call
+ * them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
+#endif
+
+struct rte_cryptodev_session {
+	struct {
+		uint8_t dev_id;
+		enum rte_cryptodev_type type;
+		struct rte_mempool *mp;
+	} __rte_aligned(8);
+
+	char _private[];
+};
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ *			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver  *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
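+
+/*
+ * Illustrative registration sketch for a physical device PMD; the driver
+ * name, id table, init handler and private structure are all hypothetical:
+ *
+ *	static struct rte_cryptodev_driver my_pmd_drv = {
+ *		.pci_drv = {
+ *			.name = "my_crypto_pmd",
+ *			.id_table = my_pci_id_map,
+ *			.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ *		},
+ *		.cryptodev_init = my_crypto_dev_init,
+ *		.dev_private_size = sizeof(struct my_crypto_private),
+ *	};
+ *
+ *	rte_cryptodev_pmd_driver_register(&my_pmd_drv, PMD_PDEV);
+ */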
+
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;	/**< Device information array */
+	struct rte_cryptodev_data *data[RTE_CRYPTO_MAX_DEVS];
+	/**< Device private data */
+	uint8_t nb_devs;		/**< Number of devices found */
+	uint8_t max_devs;		/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the named device, or NULL if
+ *     no attached device matches the name.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	struct rte_cryptodev *dev;
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		dev = &rte_cryptodev_globals->devs[i];
+
+		if ((dev->attached == RTE_CRYPTODEV_ATTACHED) &&
+				(strcmp(dev->data->name, name) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that the crypto device index refers to a valid, attached device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - If the device index is valid (1) or not (0).
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures.
+ */
+extern struct rte_cryptodev *rte_cryptodevs;
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * the generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ *	Function used to configure device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ * @return
+ * - 0 on success.
+ * - -EAGAIN if the device is busy and cannot be closed
+ */
+typedef int (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return
+ * - 0 on success.
+ * - -EAGAIN if the queue pair is busy and cannot be released
+ */
+typedef int (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		number of session objects in mempool
+ * @param	obj_cache_size	lcore object cache size, see *rte_ring_create*
+ * @param	socket_id	Socket Id to allocate mempool on.
+ *
+ * @return
+ * - On success returns 0
+ * - On failure returns a negative value
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+
+/**
+ * Get the size of a cryptodev session
+ *
+ * @param	dev		Crypto device pointer
+ *
+ * @return
+ *  - On success returns the size of the session structure for device
+ *  - On failure returns 0
+ */
+typedef unsigned (*cryptodev_get_session_private_size_t)(
+		struct rte_cryptodev *dev);
+
+/**
+ * Initialize the private data of a Crypto session object.
+ *
+ * @param	mempool		Mempool the session object was allocated from
+ * @param	session_private	Pointer to the session's private data
+ */
+typedef void (*cryptodev_initialize_session_t)(struct rte_mempool *mempool,
+		void *session_private);
+
+/**
+ * Configure a Crypto session on a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	xform		Single or chain of crypto xforms
+ * @param	session_private	Pointer to cryptodev's private session structure
+ *
+ * @return
+ *  - Returns private session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef void * (*cryptodev_configure_session_t)(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+/**
+ * Free Crypto session.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	session_private	Pointer to the session's private data to clear
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		void *session_private);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;
+	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;
+	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;
+	/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;
+	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;
+	/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;
+	/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;
+	/**< Get count of the queue pairs. */
+
+	cryptodev_get_session_private_size_t session_get_size;
+	/**< Return size of private session data. */
+	cryptodev_initialize_session_t session_initialize;
+	/**< Initialization function for private session data */
+	cryptodev_configure_session_t session_configure;
+	/**< Configure a Crypto session. */
+	cryptodev_free_session_t session_clear;
+	/**< Clear a Crypto sessions private data. */
+};
+
+
+/**
+ * Function for internal use, primarily by dummy drivers such as a
+ * ring-based driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the
+ * pointer to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Slot in the rte_cryptodevs array for a new device.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use, primarily by dummy drivers such as a
+ * ring-based driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a - register itself as PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b - complete the initialization sequence if the device is a virtual
+ *		device, by calling the rte_cryptodev_init() function directly,
+ *		passing a NULL parameter for the rte_pci_device structure.
+ *
+ * @param	crypto_drv	crypto_driver structure associated with the
+ *				crypto driver.
+ * @param	type		pmd type
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
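[Editor's sketch] As a hedged, minimal illustration of how a driver might
populate the operations table defined in this header: the null_crypto_*
names below are invented for the example, and most callbacks are omitted
purely for brevity (a functional PMD must supply them all).

/* Hypothetical stub driver wiring its callbacks into rte_cryptodev_ops */
#include "rte_cryptodev_pmd.h"

static int
null_dev_configure(struct rte_cryptodev *dev)
{
	(void)dev;	/* nothing to configure in this sketch */
	return 0;
}

static int
null_dev_start(struct rte_cryptodev *dev)
{
	(void)dev;
	return 0;
}

static void
null_dev_stop(struct rte_cryptodev *dev)
{
	(void)dev;
}

static int
null_dev_close(struct rte_cryptodev *dev)
{
	(void)dev;
	return 0;
}

static struct rte_cryptodev_ops null_crypto_ops = {
	.dev_configure	= null_dev_configure,
	.dev_start	= null_dev_start,
	.dev_stop	= null_dev_stop,
	.dev_close	= null_dev_close,
	/* stats, queue-pair and session callbacks left NULL here for
	 * brevity; a real driver must provide these too. */
};

A virtual PMD's init path would then attach this table to the device
returned by rte_cryptodev_pmd_virtual_dev_init(); the exact rte_cryptodev
field it is assigned to is assumed from the *crypto_dev_ops* description
above.
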
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
new file mode 100644
index 0000000..31e04d2
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -0,0 +1,41 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodevs;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_close;
+	rte_cryptodev_count;
+	rte_cryptodev_count_devtype;
+	rte_cryptodev_configure;
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_enqueue_burst;
+	rte_cryptodev_dequeue_burst;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_info_get;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+	rte_cryptodev_socket_id;
+	rte_cryptodev_start;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_stop;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+	rte_cryptodev_queue_pair_count;
+
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_attach;
+	rte_cryptodev_pmd_callback_process;
+	rte_cryptodev_pmd_detach;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_get_dev;
+	rte_cryptodev_pmd_get_named_dev;
+	rte_cryptodev_pmd_is_valid_dev;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_socket_id;
+	rte_cryptodev_pmd_virtual_dev_init;
+
+	local: *;
+};
\ No newline at end of file
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
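[Editor's sketch] For context, the new CRYPTODEV log type plugs straight
into the existing RTE_LOG() macro, which pastes the type name onto the
RTE_LOGTYPE_ prefix; a one-function sketch:

#include <rte_log.h>

static void
cryptodev_log_demo(void)
{
	/* emits at INFO level under the new RTE_LOGTYPE_CRYPTODEV type */
	RTE_LOG(INFO, CRYPTODEV, "crypto device %u initialised\n", 0u);
}
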
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 724efa7..5d382bb 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -118,6 +118,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
                             ` (4 preceding siblings ...)
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-13 15:59             ` Thomas Monjalon
  2015-11-13 16:11             ` Thomas Monjalon
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
                             ` (4 subsequent siblings)
  10 siblings, 2 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

This library adds support for attaching a chain of offload operations to
an mbuf. It contains the definition of the rte_mbuf_offload structure as
well as helper functions for attaching offloads to mbufs and mempool
management functions.

This initial implementation supports attaching multiple offload
operations to a single mbuf, but only a single offload operation of a
specific type can be attached to that mbuf, as the sketch below
illustrates.

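[Editor's sketch] A hedged illustration of that one-offload-per-type rule;
the pool and function names here are invented for the example, and the
real helpers are defined in rte_mbuf_offload.h below.

#include <rte_mbuf_offload.h>

/* attach two crypto offloads to one mbuf: the second attach is refused */
static void
one_offload_per_type_demo(struct rte_mempool *ol_pool, struct rte_mbuf *m)
{
	struct rte_mbuf_offload *a, *b;

	a = rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);
	b = rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);
	if (a == NULL || b == NULL) {
		if (a != NULL)
			rte_pktmbuf_offload_free(a);
		if (b != NULL)
			rte_pktmbuf_offload_free(b);
		return;
	}

	rte_pktmbuf_offload_attach(m, a);	/* ok: first crypto offload */

	/* refused: an RTE_PKTMBUF_OL_CRYPTO offload is already chained */
	if (rte_pktmbuf_offload_attach(m, b) == NULL)
		rte_pktmbuf_offload_free(b);
}
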
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 MAINTAINERS                                        |   4 +
 config/common_bsdapp                               |   6 +
 config/common_linuxapp                             |   6 +
 lib/Makefile                                       |   1 +
 lib/librte_mbuf/rte_mbuf.h                         |   6 +
 lib/librte_mbuf_offload/Makefile                   |  52 ++++
 lib/librte_mbuf_offload/rte_mbuf_offload.c         | 100 +++++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         | 291 +++++++++++++++++++++
 .../rte_mbuf_offload_version.map                   |   7 +
 mk/rte.app.mk                                      |   1 +
 10 files changed, 474 insertions(+)
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 68c6d74..73d9578 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -191,6 +191,10 @@ F: lib/librte_mbuf/
 F: doc/guides/prog_guide/mbuf_lib.rst
 F: app/test/test_mbuf.c
 
+Packet buffer offload
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_mbuf_offload/
+
 Ethernet API
 M: Thomas Monjalon <thomas.monjalon@6wind.com>
 F: lib/librte_ether/
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 8803350..ba2533a 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -332,6 +332,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 815bea3..4c52f78 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -340,6 +340,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/lib/Makefile b/lib/Makefile
index 4c5c1b4..ef172ea 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
+DIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += librte_mbuf_offload
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ef1ee26..0b6741a 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -728,6 +728,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque rte_mbuf_offload structure declaration */
+struct rte_mbuf_offload;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -841,6 +844,9 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+
+	/** Chain of off-load operations to perform on mbuf */
+	struct rte_mbuf_offload *offload_ops;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
diff --git a/lib/librte_mbuf_offload/Makefile b/lib/librte_mbuf_offload/Makefile
new file mode 100644
index 0000000..acdb449
--- /dev/null
+++ b/lib/librte_mbuf_offload/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mbuf_offload.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+EXPORT_MAP := rte_mbuf_offload_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) := rte_mbuf_offload.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)-include := rte_mbuf_offload.h
+
+# this lib depends on mbuf and cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.c b/lib/librte_mbuf_offload/rte_mbuf_offload.c
new file mode 100644
index 0000000..5c0c9dd
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.c
@@ -0,0 +1,100 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+
+#include "rte_mbuf_offload.h"
+
+/** Initialize rte_mbuf_offload structure */
+static void
+rte_pktmbuf_offload_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_mbuf_offload *ol = _op_data;
+
+	memset(_op_data, 0, mp->elt_size);
+
+	ol->type = RTE_PKTMBUF_OL_NOT_SPECIFIED;
+	ol->mp = mp;
+}
+
+
+struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id)
+{
+	struct rte_pktmbuf_offload_pool_private *priv;
+	unsigned elt_size = sizeof(struct rte_mbuf_offload) + priv_size;
+
+
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+
+	if (mp != NULL) {
+		priv = (struct rte_pktmbuf_offload_pool_private *)
+				rte_mempool_get_priv(mp);
+
+		/* existing pool must be compatible with this request */
+		if (priv->offload_priv_size < priv_size ||
+				mp->elt_size != elt_size ||
+				mp->cache_size < cache_size ||
+				mp->size < size)
+			return NULL;
+
+		return mp;
+	}
+
+	mp = rte_mempool_create(
+			name,
+			size,
+			elt_size,
+			cache_size,
+			sizeof(struct rte_pktmbuf_offload_pool_private),
+			NULL,
+			NULL,
+			rte_pktmbuf_offload_init,
+			NULL,
+			socket_id,
+			0);
+
+	if (mp == NULL)
+		return NULL;
+
+	priv = (struct rte_pktmbuf_offload_pool_private *)
+			rte_mempool_get_priv(mp);
+
+	priv->offload_priv_size = priv_size;
+	return mp;
+}
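[Editor's sketch] A hedged usage note on the create-or-lookup behaviour
above: the first caller creates the pool, and later callers requesting a
compatible geometry get the same pool back. Pool name and sizes below are
illustrative only.

#include <rte_lcore.h>
#include "rte_mbuf_offload.h"

static struct rte_mempool *
get_offload_pool(void)
{
	/* returns the existing "crypto_ol_pool" if it already exists and
	 * is large enough, otherwise creates it; NULL on mismatch */
	return rte_pktmbuf_offload_pool_create("crypto_ol_pool",
			8192,			/* objects in the pool */
			128,			/* per-lcore cache size */
			64,			/* private bytes per object */
			rte_socket_id());
}
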
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
new file mode 100644
index 0000000..ea97d16
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -0,0 +1,291 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright 2014 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MBUF_OFFLOAD_H_
+#define _RTE_MBUF_OFFLOAD_H_
+
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** packet mbuf offload operation types */
+enum rte_mbuf_ol_op_type {
+	RTE_PKTMBUF_OL_NOT_SPECIFIED = 0,
+	/**< Off-load not specified */
+	RTE_PKTMBUF_OL_CRYPTO
+	/**< Crypto offload operation */
+};
+
+/**
+ * Generic packet mbuf offload.
+ * This is used to specify an offload operation to be performed on a rte_mbuf.
+ * Multiple offload operations can be chained onto the same mbuf, but only a
+ * single offload operation of a particular type can be in the chain.
+ */
+struct rte_mbuf_offload {
+	struct rte_mbuf_offload *next;	/**< next offload in chain */
+	struct rte_mbuf *m;		/**< mbuf offload is attached to */
+	struct rte_mempool *mp;		/**< mempool offload allocated from */
+
+	enum rte_mbuf_ol_op_type type;	/**< offload type */
+	union {
+		struct rte_crypto_op crypto;	/**< Crypto operation */
+	} op;
+};
+
+/** Private data structure belonging to a packet mbuf offload mempool */
+struct rte_pktmbuf_offload_pool_private {
+	uint16_t offload_priv_size;
+	/**< Size of private area in each mbuf_offload. */
+};
+
+
+/**
+ * Creates a mempool of rte_mbuf_offload objects
+ *
+ * @param	name		mempool name
+ * @param	size		number of objects in mempool
+ * @param	cache_size	cache size of objects for each core
+ * @param	priv_size	size of private data to be allocated with each
+ *				rte_mbuf_offload object
+ * @param	socket_id	Socket on which to allocate mempool objects
+ *
+ * @return
+ * - On success returns a valid mempool of rte_mbuf_offload objects
+ * - On failure return NULL
+ */
+extern struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id);
+
+
+/**
+ * Returns private data size allocated with each rte_mbuf_offload object by
+ * the mempool
+ *
+ * @param	mpool	rte_mbuf_offload mempool
+ *
+ * @return	private data size
+ */
+static inline uint16_t
+__rte_pktmbuf_offload_priv_size(struct rte_mempool *mpool)
+{
+	struct rte_pktmbuf_offload_pool_private *priv =
+			rte_mempool_get_priv(mpool);
+
+	return priv->offload_priv_size;
+}
+
+/**
+ * Get specified off-load operation type from mbuf.
+ *
+ * @param	m		packet mbuf.
+ * @param	type		offload operation type requested.
+ *
+ * @return
+ * - On success returns rte_mbuf_offload pointer
+ * - On failure returns NULL
+ *
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol;
+
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
+		if (ol->type == type)
+			return ol;
+
+	return NULL;
+}
+
+/**
+ * Attach a rte_mbuf_offload to a mbuf. We only support a single offload of any
+ * one type in our chain of offloads.
+ *
+ * @param	m	packet mbuf.
+ * @param	ol	rte_mbuf_offload structure to be attached
+ *
+ * @returns
+ * - On success returns the pointer to the offload we just added
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
+{
+	struct rte_mbuf_offload **ol_last;
+
+	for (ol_last = &m->offload_ops;	ol_last[0] != NULL;
+			ol_last = &ol_last[0]->next)
+		if (ol_last[0]->type == ol->type)
+			return NULL;
+
+	ol_last[0] = ol;
+	ol_last[0]->m = m;
+	ol_last[0]->next = NULL;
+
+	return ol_last[0];
+}
+
+
+/** Rearms rte_mbuf_offload default parameters */
+static inline void
+__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
+		enum rte_mbuf_ol_op_type type)
+{
+	ol->m = NULL;
+	ol->type = type;
+
+	switch (type) {
+	case RTE_PKTMBUF_OL_CRYPTO:
+		__rte_crypto_op_reset(&ol->op.crypto); break;
+	default:
+		break;
+	}
+}
+
+/** Allocate rte_mbuf_offload from mempool */
+static inline struct rte_mbuf_offload *
+__rte_pktmbuf_offload_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_mbuf_offload *)buf;
+}
+
+/**
+ * Allocate a rte_mbuf_offload with a specified operation type from
+ * rte_mbuf_offload mempool
+ *
+ * @param	mpool		rte_mbuf_offload mempool
+ * @param	type		offload operation type
+ *
+ * @returns
+ * - On success returns a valid rte_mbuf_offload structure
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_alloc(struct rte_mempool *mpool,
+		enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol = __rte_pktmbuf_offload_raw_alloc(mpool);
+
+	if (ol != NULL)
+		__rte_pktmbuf_offload_reset(ol, type);
+
+	return ol;
+}
+
+/**
+ * free rte_mbuf_offload structure
+ */
+static inline void
+rte_pktmbuf_offload_free(struct rte_mbuf_offload *ol)
+{
+	if (ol->mp != NULL)
+		rte_mempool_put(ol->mp, ol);
+}
+
+/**
+ * Checks if the private data of a rte_mbuf_offload has enough capacity for
+ * the requested size
+ *
+ * @param	ol	rte_mbuf_offload structure
+ * @param	size	number of bytes of private data required
+ *
+ * @returns
+ * - if sufficient space is available, returns pointer to start of private data
+ * - if insufficient space, returns NULL
+ */
+static inline void *
+__rte_pktmbuf_offload_check_priv_data_size(struct rte_mbuf_offload *ol,
+		uint16_t size)
+{
+	uint16_t priv_size;
+
+	if (likely(ol->mp != NULL)) {
+		priv_size = __rte_pktmbuf_offload_priv_size(ol->mp);
+
+		if (likely(priv_size >= size))
+			return (void *)(ol + 1);
+	}
+	return NULL;
+}
+
+/**
+ * Allocate space for crypto xforms in the private data space of the
+ * rte_mbuf_offload. This sets each xform's type to
+ * RTE_CRYPTO_XFORM_NOT_SPECIFIED and links the xforms into a chain on the
+ * crypto operation.
+ *
+ * @param	ol		rte_mbuf_offload structure
+ * @param	nb_xforms	number of xforms to allocate space for
+ *
+ * @return
+ * - On success returns pointer to first crypto xform in crypto operations chain
+ * - On failure returns NULL
+ */
+static inline struct rte_crypto_xform *
+rte_pktmbuf_offload_alloc_crypto_xforms(struct rte_mbuf_offload *ol,
+		unsigned nb_xforms)
+{
+	struct rte_crypto_xform *xform;
+	void *priv_data;
+	uint16_t size;
+
+	size = sizeof(struct rte_crypto_xform) * nb_xforms;
+	priv_data = __rte_pktmbuf_offload_check_priv_data_size(ol, size);
+
+	if (priv_data == NULL)
+		return NULL;
+
+	ol->op.crypto.xform = xform = (struct rte_crypto_xform *)priv_data;
+
+	do {
+		xform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;
+		xform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;
+	} while (xform);
+
+	return ol->op.crypto.xform;
+}
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MBUF_OFFLOAD_H_ */
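[Editor's sketch] Pulling the helpers above together, a hedged usage
sketch; all names are illustrative, and the xform parameter fields
themselves are filled in elsewhere.

#include <rte_mbuf.h>
#include <rte_mbuf_offload.h>

static int
prepare_crypto_offload(struct rte_mempool *ol_pool, struct rte_mbuf *m)
{
	struct rte_mbuf_offload *ol;
	struct rte_crypto_xform *xform;

	/* take an offload object from the pool and mark it as crypto */
	ol = rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);
	if (ol == NULL)
		return -1;

	/* carve a two-xform chain (e.g. cipher + auth) out of the
	 * offload's private area; fails if the pool's priv_size is
	 * smaller than 2 * sizeof(struct rte_crypto_xform) */
	xform = rte_pktmbuf_offload_alloc_crypto_xforms(ol, 2);
	if (xform == NULL) {
		rte_pktmbuf_offload_free(ol);
		return -1;
	}

	/* ... populate xform[0], xform[1] and ol->op.crypto here ... */

	/* NULL here means an offload of this type is already attached */
	if (rte_pktmbuf_offload_attach(m, ol) == NULL) {
		rte_pktmbuf_offload_free(ol);
		return -1;
	}
	return 0;
}
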
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload_version.map b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
new file mode 100644
index 0000000..3d3b06a
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
@@ -0,0 +1,7 @@
+DPDK_2.2 {
+	global:
+
+	rte_pktmbuf_offload_pool_create;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5d382bb..2b8ddce 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,7 @@ ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),n)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)   += -lrte_mbuf_offload
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
                             ` (5 preceding siblings ...)
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-13 16:00             ` Thomas Monjalon
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
                             ` (3 subsequent siblings)
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.

This patch depends on a QAT PF driver for device initialization. See
the file doc/guides/cryptodevs/qat.rst for configuration details.

This patch supports a limited subset of QAT device functionality,
currently supporting chaining of cipher and hash operations for the
following algorithms:

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC

Some limitations of this patchset, which shall be addressed in a
subsequent release (see the sketch after this list for the supported
session-oriented flow):
 - Chained mbufs are not supported.
 - Hash only is not supported.
 - Cipher only is not supported.
 - Only in-place is currently supported (destination address is
   the same as source address).
 - Only supports session-oriented API implementation (session-less
   APIs are not supported).

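[Editor's sketch] The session-oriented flow referenced above starts from a
cipher+hash xform chain. The type/next chaining below mirrors the
rte_crypto_xform usage elsewhere in this series; the RTE_CRYPTO_XFORM_CIPHER
and RTE_CRYPTO_XFORM_AUTH enum members and the per-algorithm parameter
fields are assumptions for illustration only.

#include <rte_crypto.h>

/* build a two-element xform chain: AES128-CBC cipher, then SHA1-HMAC auth */
static void
build_cipher_then_auth(struct rte_crypto_xform xf[2])
{
	xf[0].type = RTE_CRYPTO_XFORM_CIPHER;	/* assumed enum member */
	/* ... AES128-CBC key/IV parameters set on xf[0] here ... */
	xf[0].next = &xf[1];

	xf[1].type = RTE_CRYPTO_XFORM_AUTH;	/* assumed enum member */
	/* ... SHA1-HMAC key parameters set on xf[1] here ... */
	xf[1].next = NULL;
}
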
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |  14 +
 config/common_linuxapp                             |  14 +
 doc/guides/cryptodevs/index.rst                    |  42 ++
 doc/guides/cryptodevs/qat.rst                      | 194 +++++++
 doc/guides/index.rst                               |   1 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  37 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 174 ++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 +++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 ++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 306 +++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 125 +++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 601 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 561 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    | 124 +++++
 drivers/crypto/qat/qat_logs.h                      |  78 +++
 drivers/crypto/qat/qat_qp.c                        | 429 +++++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 137 +++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |   9 +-
 mk/rte.app.mk                                      |   3 +
 22 files changed, 3628 insertions(+), 8 deletions(-)
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index ba2533a..0068b20 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -155,6 +155,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 4c52f78..b29d3dd 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -153,6 +153,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..1c31697
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,42 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+====================================
+
+|today|
+
+
+**Contents**
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..9e24c07
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,194 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator. The QAT PMD has
+currently been tested on Fedora 21 64-bit with gcc and on the 4.3
+kernel.org Linux kernel.
+
+
+Features
+--------
+The QAT PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+To use the DPDK QAT PMD, an SRIOV-enabled QAT kernel driver is required.
+The VF devices exposed by this driver will be used by the QAT PMD.
+
+If you are running on kernel 4.3 or greater, see instructions for "Installation using
+kernel.org QAT driver".  If you're on a kernel earlier than 4.3, see "Installation using the
+01.org QAT driver".
+
+Installation using 01.org QAT driver
+------------------------------------
+Download the latest QuickAssist Technology Driver from 01.org
+https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches
+Consult the Getting Started Guide at the same URL for further information.
+
+Steps below assume
+  * building on a platform with one DH895xCC device
+  * using package qatmux.l.2.3.0-34.tgz
+  * on Fedora21 kernel 3.17.4-301.fc21.x86_64
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, e.g. by running
+  *  "./installer.sh uninstall" in the directory where originally installed
+     or
+  *  "rmmod qat_dh895xcc; rmmod intel_qat"
+
+Build and install the SRIOV-enabled QAT driver
+
+.. code-block:: console
+
+    "mkdir /QAT; cd /QAT"
+    copy qatmux.l.2.3.0-34.tgz to this location
+    "tar zxof qatmux.l.2.3.0-34.tgz"
+    "export ICP_WITHOUT_IOMMU=1"
+    "./installer.sh install QAT1.6 host"
+
+You can use "cat /proc/icp_dh895xcc_dev0/version" to confirm the driver is correctly installed.
+You can use "lspci -d:443" to confirm the bdf of the 32 VF devices are available per DH895xCC device.
+
+To complete the installation - follow instructions in "Binding VFs to the DPDK UIO"
+
+Compiling the 01.org driver - notes:
+If using a later kernel and the build fails with an error relating to strict_strtoul not being available, patch the following file:
+
+.. code-block:: console
+
+  /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+  + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+  + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  + #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+  #else
+  #define STR_TO_64(str, base, num, endPtr)                                 \
+       do {                                                               \
+             if (str[0] == '-')                                           \
+             {                                                            \
+                  *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+             }else {                                                      \
+                  *(num) = simple_strtoull((str), &(endPtr), (base));      \
+             }                                                            \
+       } while(0)
+  + #endif
+  #endif
+  #endif
+
+
+If the build fails due to missing header files, you may need to do the following:
+  *  sudo yum install zlib-devel
+  *  sudo yum install openssl-devel
+
+If the build or install fails due to mismatching kernel sources, you may need to do the following:
+  *  sudo yum install kernel-headers-`uname -r`
+  *  sudo yum install kernel-src-`uname -r`
+  *  sudo yum install kernel-devel-`uname -r`
+
+Installation using kernel.org driver
+------------------------------------
+
+Assuming you are running on at least a 4.3 kernel, you can use the stock kernel.org QAT
+driver to start the QAT hardware.
+
+Steps below assume
+  * running DPDK on a platform with one DH895xCC device
+  * on a kernel at least version 4.3
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Ensure the QAT driver is loaded on your system, by executing:
+    lsmod | grep qat
+
+You should see the following output:
+    qat_dh895xcc            5626  0
+    intel_qat              82336  1 qat_dh895xcc
+
+Next, you need to expose the VFs using the sysfs file system.
+
+First find the bdf of the DH895xCC device:
+    lspci -d : 435
+
+You should see output similar to:
+    03:00.0 Co-processor: Intel Corporation Coleto Creek PCIe Endpoint
+
+Using the sysfs, enable the VFs:
+    echo 32 > /sys/bus/pci/drivers/dh895xcc/0000\:03\:00.0/sriov_numvfs
+
+If you get an error, it's likely you're using a QAT kernel driver earlier than kernel 4.3.
+
+To verify that the VFs are available for use - use "lspci -d:443" to confirm
+the bdf of the 32 VF devices are available per DH895xCC device.
+
+To complete the installation - follow instructions in "Binding VFs to the DPDK UIO"
+
+
+Binding the available VFs to the DPDK UIO driver
+------------------------------------------------
+The unbind command below assumes BDFs of 03:01.00-03:04.07; if yours are different, adjust the command accordingly.
+
+Make available to DPDK
+
+.. code-block:: console
+
+   cd $(RTE_SDK) (See http://dpdk.org/doc/quick-start to install DPDK)
+   "modprobe uio"
+   "insmod ./build/kmod/igb_uio.ko"
+   "for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n 0000:03:0${device}.${fn} > /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind;done ;done"
+   "echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id"
+
+You can use "lspci -vvd:443" to confirm that all devices are now in use by igb_uio kernel driver
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..f6aecea
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,37 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
\ No newline at end of file
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..47f1c91
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,174 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) \
+			= (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
+#endif
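
Note: the ring geometry macros above are pure integer arithmetic, so
their relationships can be sanity-checked in isolation. A minimal
standalone sketch, not part of the patch; the few macros it needs are
restated verbatim so it builds on its own:

#include <stdint.h>
#include <stdio.h>

/* restated from adf_transport_access_macros.h above */
#define ADF_RING_SIZE_16K 0x08
#define ADF_MSG_SIZE_64 0x02
#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
				SIZE) & ~0x4)
#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)

int main(void)
{
	/* default ring size encoding with 64-byte request messages */
	printf("ring bytes: %d\n",
		ADF_SIZE_TO_RING_SIZE_IN_BYTES(ADF_RING_SIZE_16K)); /* 16384 */
	printf("msg bytes:  %d\n",
		ADF_MSG_SIZE_TO_BYTES(ADF_MSG_SIZE_64));            /* 64 */
	/* 16384 / 64 = 256 slots; one is kept free, hence 255 */
	printf("inflights:  %d\n",
		ADF_MAX_INFLIGHTS(ADF_RING_SIZE_16K, ADF_MSG_SIZE_64));
	return 0;
}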
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..498ee83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
+
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
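
Note: all of the flag helpers above reduce to the QAT_FIELD_SET/GET
pair. A small hedged illustration of building and querying a request
header's flag words, assuming drivers/crypto/qat/qat_adf is on the
include path (icp_qat_fw.h pulls in linux/types.h and, via
icp_qat_hw.h, the __rte_cache_aligned macro from rte_memory.h):

#include <assert.h>
#include <stdint.h>
#include <rte_memory.h>
#include "icp_qat_fw.h"

int main(void)
{
	uint8_t hdr_flags;
	uint16_t comn_flags;

	/* the valid flag lives in bit 7 of hdr_flags */
	hdr_flags = ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(
			ICP_QAT_FW_COMN_REQ_FLAG_SET);
	assert(hdr_flags == 0x80);

	/* flat (non-SGL) buffers, 64-bit content descriptor pointer */
	comn_flags = ICP_QAT_FW_COMN_FLAGS_BUILD(
			QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
			QAT_COMN_PTR_TYPE_FLAT);
	assert(ICP_QAT_FW_COMN_PTR_TYPE_GET(comn_flags) ==
			QAT_COMN_PTR_TYPE_FLAT);

	/* flip the same word to scatter-gather in place */
	ICP_QAT_FW_COMN_PTR_TYPE_SET(comn_flags, QAT_COMN_PTR_TYPE_SGL);
	assert(ICP_QAT_FW_COMN_PTR_TYPE_GET(comn_flags) ==
			QAT_COMN_PTR_TYPE_SGL);
	return 0;
}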
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..fbf2b83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#endif
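
Note: the NEXT/CURR id helpers above pack two slice ids into a single
byte per slice, current id in the low nibble and its successor in the
high nibble. A hedged sketch of one plausible cipher-then-hash chain
description (rte_common.h/rte_memory.h supply the __rte_* attribute
macros the headers rely on):

#include <assert.h>
#include <string.h>
#include <rte_common.h>
#include <rte_memory.h>
#include "icp_qat_fw_la.h"

int main(void)
{
	struct icp_qat_fw_cipher_auth_cd_ctrl_hdr ctrl;

	memset(&ctrl, 0, sizeof(ctrl));

	/* cipher slice runs first and hands off to the auth slice */
	ICP_QAT_FW_CIPHER_CURR_ID_SET(&ctrl, ICP_QAT_FW_SLICE_CIPHER);
	ICP_QAT_FW_CIPHER_NEXT_ID_SET(&ctrl, ICP_QAT_FW_SLICE_AUTH);

	/* auth slice ends the chain with a DRAM write-back */
	ICP_QAT_FW_AUTH_CURR_ID_SET(&ctrl, ICP_QAT_FW_SLICE_AUTH);
	ICP_QAT_FW_AUTH_NEXT_ID_SET(&ctrl, ICP_QAT_FW_SLICE_DRAM_WR);

	assert(ICP_QAT_FW_CIPHER_NEXT_ID_GET(&ctrl) ==
			ICP_QAT_FW_SLICE_AUTH);
	assert(ICP_QAT_FW_AUTH_CURR_ID_GET(&ctrl) ==
			ICP_QAT_FW_SLICE_AUTH);
	return 0;
}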
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..4d4d8e4
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,306 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	(((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	(((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	 (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	(algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & (~((n) - 1)))
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	(((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
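
Note: a hedged illustration of the config words these BUILD macros
emit for the AES-128-CBC plus HMAC-SHA256 case the PMD targets; the
expected constants are worked out by hand from the bit positions
above:

#include <assert.h>
#include <stdint.h>
#include <rte_memory.h>	/* __rte_cache_aligned, used by icp_qat_hw.h */
#include "icp_qat_hw.h"

int main(void)
{
	uint32_t ciph, auth;

	/* AES-128, CBC mode, encrypt direction, key used as supplied */
	ciph = ICP_QAT_HW_CIPHER_CONFIG_BUILD(
			ICP_QAT_HW_CIPHER_CBC_MODE,
			ICP_QAT_HW_CIPHER_ALGO_AES128,
			ICP_QAT_HW_CIPHER_NO_CONVERT,
			ICP_QAT_HW_CIPHER_ENCRYPT);
	assert(ciph == 0x13);	/* mode 1 << 4 | algo 3 */

	/* SHA-256 in mode 1 (HMAC) comparing the full 32-byte digest */
	auth = ICP_QAT_HW_AUTH_CONFIG_BUILD(
			ICP_QAT_HW_AUTH_MODE1,
			ICP_QAT_HW_AUTH_ALGO_SHA256,
			ICP_QAT_HW_SHA256_STATE1_SZ);
	assert(auth == 0x2014);	/* cmp 32 << 8 | mode 1 << 4 | algo 4 */

	return 0;
}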
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..76c08c0
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,125 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf bufers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t add_auth_data_length,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
+
+#endif
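
Note: a hedged sketch of how a caller might describe an AES-128-CBC
encrypt plus HMAC-SHA1 session before building its content descriptor.
example_session_init is purely illustrative and not part of the patch;
the session is assumed to live in DMA-able memory with cd_paddr
already holding the IO address of sess->cd:

#include <rte_common.h>
#include <rte_spinlock.h>	/* qat_algs.h uses rte_spinlock_t */
#include "qat_algs.h"

static int
example_session_init(struct qat_session *sess,
		uint8_t *cipher_key, uint8_t *auth_key)
{
	sess->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_HASH;
	sess->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
	sess->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
	sess->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
	sess->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;

	/* 16-byte AES key, 20-byte HMAC key, no AAD, 20-byte digest */
	return qat_alg_aead_session_create_content_desc(sess,
			cipher_key, ICP_QAT_HW_AES_128_KEY_SZ,
			auth_key, ICP_QAT_HW_SHA1_STATE1_SZ,
			0, ICP_QAT_HW_SHA1_STATE1_SZ);
}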
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..ceaffb7
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,601 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *	* Redistributions of source code must retain the above copyright
+ *	  notice, this list of conditions and the following disclaimer.
+ *	* Redistributions in binary form must reproduce the above copyright
+ *	  notice, this list of conditions and the following disclaimer in
+ *	  the documentation and/or other materials provided with the
+ *	  distribution.
+ *	* Neither the name of Intel Corporation nor the names of its
+ *	  contributors may be used to endorse or promote products derived
+ *	  from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+#include <openssl/aes.h>	/* Needed to calculate pre-compute values */
+
+
+/*
+ * Returns size in bytes per hash algo for state1 size field in cd_ctrl
+ * This is digest size rounded up to nearest quadword
+ */
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_GALOIS_128_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns digest size in bytes per hash algo */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns block size in bytes per hash algo */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+		return 16;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
+
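+/*
+ * Run a single transform of the chosen hash over one input block and
+ * emit the resulting internal state in the big-endian word order the
+ * hardware's state fields expect.
+ */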
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
+	uint8_t digest[qat_hash_get_digest_size(
+			ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+#define HASH_XCBC_PRECOMP_KEY_NUM 3
+
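+/*
+ * Build the hash pre-computes for a session: for HMAC algorithms the
+ * partial digests of (key XOR ipad) and (key XOR opad), for
+ * AES-XCBC-MAC the three keys derived from the auth key, and for
+ * GCM/GMAC the hash subkey H (the all-zero block encrypted under the
+ * auth key).
+ */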
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		static uint8_t qat_aes_xcbc_key_seed[
+					ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ] = {
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+		};
+
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		int x;
+		AES_KEY enc_key;
+
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ, 16);
+		if (in == NULL)
+			return -EFAULT;
+		rte_memcpy(in, qat_aes_xcbc_key_seed,
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+		for (x = 0; x < HASH_XCBC_PRECOMP_KEY_NUM; x++) {
+			if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+				&enc_key) != 0) {
+				rte_free(in -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ));
+				memset(out -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ),
+					0, ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+				return -EFAULT;
+			}
+			AES_encrypt(in, out, &enc_key);
+			in += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+			out += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+		}
+		*p_state_len = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		rte_free(in - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+		return 0;
+	} else if ((hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		AES_KEY enc_key;
+
+		memset(p_state_buf, 0, ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ);
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_GALOIS_H_SZ, 16);
+		if (in == NULL)
+			return -EFAULT;
+		if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+			&enc_key) != 0) {
+			rte_free(in);
+			return -EFAULT;
+		}
+		AES_encrypt(in, out, &enc_key);
+		*p_state_len = ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		rte_free(in);
+		return 0;
+	}
+
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	}
+	rte_memcpy(ipad, auth_key, auth_keylen);
+	rte_memcpy(opad, auth_key, auth_keylen);
+
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/*
+	 * The state1 size is rounded up to a multiple of 8, so it may be
+	 * larger than the digest; place the partial hash of opad
+	 * state_len bytes after the start of state1.
+	 */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/*  don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+			uint8_t *cipherkey, uint32_t cipherkeylen,
+			uint8_t *authkey, uint32_t authkeylen,
+			uint32_t add_auth_data_length,
+			uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t proto = ICP_QAT_FW_LA_NO_PROTO; /* no CCM/GCM/Snow3G */
+	uint16_t state1_size = 0;
+	uint16_t state2_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* CD setup */
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(
+			cdesc->qat_mode, cdesc->qat_cipher_alg, key_convert,
+			cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	/* Do precomputes */
+	if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(XCBC)precompute failed");
+			return -EFAULT;
+		}
+	} else if ((cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			cipherkey, cipherkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_GALOIS_128_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(GCM)precompute failed");
+			return -EFAULT;
+		}
+		/*
+		 * Write the AAD length into bytes 16-19 of state2 in
+		 * big-endian format (the field itself is 8 bytes).
+		 */
+		*(uint32_t *)&(hash->sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ]) =
+			rte_bswap32(add_auth_data_length);
+		proto = ICP_QAT_FW_LA_GCM_PROTO;
+	} else {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1),
+			&state1_size)) {
+			PMD_DRV_LOG(ERR, "(SHA)precompute failed");
+			return -EFAULT;
+		}
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	/* Configure the common header protocol flags */
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, proto);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+	hash_cd_ctrl->inner_state1_sz = state1_size;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		hash_cd_ctrl->inner_state2_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		hash_cd_ctrl->inner_state1_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_GALOIS_H_SZ +
+						ICP_QAT_HW_GALOIS_LEN_A_SZ +
+						ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_GALOIS_128_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_GALOIS_128_STATE1_SZ);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+
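+	/*
+	 * inner_state2 follows the auth setup block and the 8-byte-aligned
+	 * state1; offsets/sizes in the ctrl header are in 8-byte words.
+	 */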
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8))
+					>> 3);
+	auth_param->auth_res_sz = digestsize;
+
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+		PMD_DRV_LOG(ERR, "invalid param, only authenticated "
+				"encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..47b257f
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,561 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+#include <rte_mbuf_offload.h>
+#include <rte_hexdump.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t shift);
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+
+void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
+		void *session)
+{
+	struct qat_session *sess = session;
+
+	PMD_INIT_FUNC_TRACE();
+	if (sess != NULL) {
+		/* preserve cd_paddr; don't dereference before the NULL check */
+		phys_addr_t cd_paddr = sess->cd_paddr;
+
+		memset(sess, 0, qat_crypto_sym_get_session_private_size(dev));
+		sess->cd_paddr = cd_paddr;
+	}
+}
+
+static int
+qat_get_cmd_id(const struct rte_crypto_xform *xform)
+{
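+	/* Only chained cipher + hash operations are currently supported */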
+	if (xform->next == NULL)
+		return -1;
+
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_CIPHER; */
+
+	/* Authentication Only */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_AUTH; */
+
+	/* Cipher then Authenticate */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+			xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+
+	/* Authenticate then Cipher */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+
+	return -1;
+}
+
+static struct rte_crypto_auth_xform *
+qat_get_auth_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_AUTH)
+			return &xform->auth;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+qat_get_cipher_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_CIPHER)
+			return &xform->cipher;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+
+void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	struct qat_session *session = session_private;
+
+	struct rte_crypto_auth_xform *auth_xform = NULL;
+	struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+	int qat_cmd_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get requested QAT command id */
+	qat_cmd_id = qat_get_cmd_id(xform);
+	if (qat_cmd_id < 0 || qat_cmd_id >= ICP_QAT_FW_LA_CMD_DELIMITER) {
+		PMD_DRV_LOG(ERR, "Unsupported xform chain requested");
+		goto error_out;
+	}
+	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+
+	/* Get cipher xform from crypto xform chain */
+	cipher_xform = qat_get_cipher_xform(xform);
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+				cipher_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Cipher specified %u\n",
+				cipher_xform->algo);
+		goto error_out;
+	}
+
+	if (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+
+	/* Get authentication xform from Crypto xform chain */
+	auth_xform = qat_get_auth_xform(xform);
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		break;
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				auth_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				auth_xform->algo);
+		goto error_out;
+	}
+
+	if (qat_alg_aead_session_create_content_desc(session,
+		cipher_xform->key.data,
+		cipher_xform->key.length,
+		auth_xform->key.data,
+		auth_xform->key.length,
+		auth_xform->add_auth_data_length,
+		auth_xform->digest_length))
+		goto error_out;
+
+	return (struct rte_cryptodev_session *)session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+unsigned qat_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev __rte_unused)
+{
+	return RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+}
+
+
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	register struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	register uint32_t nb_pkts_sent = 0;
+	register struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	register int ret;
+	uint16_t nb_pkts_possible = nb_pkts;
+	register uint8_t *base_addr;
+	register uint32_t tail;
+	int overflow;
+
+	/* read params used a lot in main loop into registers */
+	queue = &(tmp_qp->tx_q);
+	base_addr = (uint8_t *)queue->base_addr;
+	tail = queue->tail;
+
+	/* Find how many can actually fit on the ring */
+	overflow = (rte_atomic16_add_return(&tmp_qp->inflights16, nb_pkts)
+				- queue->max_inflights);
+	if (overflow > 0) {
+		rte_atomic16_sub(&tmp_qp->inflights16, overflow);
+		nb_pkts_possible = nb_pkts - overflow;
+		if (nb_pkts_possible == 0)
+			return 0;
+	}
+
+	while (nb_pkts_sent != nb_pkts_possible) {
+
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			base_addr + tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			if (nb_pkts_sent == 0)
+				return 0;
+			goto kick_tail;
+		}
+
+		tail = adf_modulo(tail + queue->msg_size, queue->modulo);
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, tail);
+	queue->tail = tail;
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct rte_mbuf_offload *ol;
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
+	resp_msg = (struct icp_qat_fw_comn_resp *)
+			((uint8_t *)queue->base_addr + queue->head);
+
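+	/* Drain responses until the ring-empty signature or nb_pkts is hit */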
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG &&
+			msg_counter != nb_pkts) {
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		ol = rte_pktmbuf_offload_get(rx_mbuf, RTE_PKTMBUF_OL_CRYPTO);
+
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+					resp_msg->comn_hdr.comn_status)) {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		} else {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+				queue->msg_size,
+				ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)
+					((uint8_t *)queue->base_addr +
+							queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	register struct icp_qat_fw_la_bulk_req *qat_req;
+
+	ol = rte_pktmbuf_offload_get(mbuf, RTE_PKTMBUF_OL_CRYPTO);
+	if (unlikely(ol == NULL)) {
+		PMD_DRV_LOG(ERR, "No valid crypto off-load operation attached "
+				"to mbuf (%p).", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
+				" requests; mbuf (%p) is sessionless.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.session->type != RTE_CRYPTODEV_QAT_PMD)) {
+		PMD_DRV_LOG(ERR, "Session was not created for this device");
+		return -EINVAL;
+	}
+
+	ctx = (struct qat_session *)ol->op.crypto.session->_private;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length =
+			qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr =
+			qat_req->comn_mid.src_data_addr =
+					rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = ol->op.crypto.data.to_cipher.length;
+	cipher_param->cipher_offset = ol->op.crypto.data.to_cipher.offset;
+	if (ol->op.crypto.iv.length &&
+		(ol->op.crypto.iv.length <=
+				sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array,
+				ol->op.crypto.iv.data, ol->op.crypto.iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = ol->op.crypto.iv.phys_addr;
+	}
+	if (ol->op.crypto.digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = ol->op.crypto.digest.phys_addr;
+	}
+	auth_param->auth_off = ol->op.crypto.data.to_hash.offset;
+	auth_param->auth_len = ol->op.crypto.data.to_hash.length;
+	auth_param->u1.aad_adr = ol->op.crypto.additional_auth.phys_addr;
+
+	/* For GCM, the aad length (240 max) is at this location after precompute */
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+		ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
+		auth_param->u2.aad_sz =
+		ALIGN_POW2_ROUNDUP(ctx->cd.hash.sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ + 3], 16);
+	}
+	auth_param->hash_state_sz = (auth_param->u2.aad_sz) >> 3;
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+	rte_hexdump(stdout, "qat_req:", qat_req,
+			sizeof(struct icp_qat_fw_la_bulk_req));
+#endif
+	return 0;
+}
+
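+/* Compute data % (2^shift); ring sizes are powers of two, so the
+ * wrap-around reduces to a shift and subtract.
+ */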
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
+
+void qat_crypto_sym_session_init(struct rte_mempool *mp, void *priv_sess)
+{
+	struct qat_session *s = priv_sess;
+
+	PMD_INIT_FUNC_TRACE();
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return 0;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+int qat_dev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = qat_crypto_sym_qp_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void qat_dev_info_get(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *info)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+				ADF_NUM_SYM_QPS_PER_BUNDLE *
+				ADF_NUM_BUNDLES_PER_DEV;
+
+		info->max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..d680364
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,124 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev_pmd.h>
+#include <rte_memzone.h>
+
+/*
+ * This macro rounds up a number to be a multiple of
+ * the alignment when the alignment is a power of 2
+ */
+#define ALIGN_POW2_ROUNDUP(num, align) \
+	(((num) + (align) - 1) & ~((align) - 1))
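+/* e.g. ALIGN_POW2_ROUNDUP(17, 16) == 32 */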
+
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	modulo;
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;
+	/* HW queue aka ring offset on bundle */
+};
+
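+/** Queue pair: Tx/Rx ring pair plus in-flight accounting and stats */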
+struct qat_qp {
+	void			*mmap_bar_addr;
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+int qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev,
+	uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+
+extern unsigned
+qat_crypto_sym_get_session_private_size(struct rte_cryptodev *dev);
+
+extern void
+qat_crypto_sym_session_init(struct rte_mempool *mempool, void *priv_sess);
+
+extern void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+extern void
+qat_crypto_sym_clear_session(struct rte_cryptodev *dev, void *session);
+
+
+uint16_t
+qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..a909f63
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..ec5852d
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,429 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+/* Offsets from bundle start to the first Sym Tx/Rx queues */
+#define ADF_SYM_TX_QUEUE_STARTOFF		2
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
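+/* Ring-service arbitration enable CSR: one register per bundle,
+ * one enable bit per ring.
+ */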
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static void qat_queue_delete(struct qat_queue *queue);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+			int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
+		}
+
+		PMD_DRV_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
+	switch (ms[0].hugepage_sz) {
+	case RTE_PGSIZE_2M:
+		memzone_flags = RTE_MEMZONE_2MB;
+		break;
+	case RTE_PGSIZE_1G:
+		memzone_flags = RTE_MEMZONE_1GB;
+		break;
+	case RTE_PGSIZE_16M:
+		memzone_flags = RTE_MEMZONE_16MB;
+		break;
+	case RTE_PGSIZE_16G:
+		memzone_flags = RTE_MEMZONE_16GB;
+		break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+	}
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(queue_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(queue_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		ret = qat_crypto_sym_qp_release(dev, queue_pair_id);
+		if (ret < 0)
+			return ret;
+	}
+
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	if (dev->pci_dev->mem_resource[0].addr == NULL) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space "
+				"(UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >=
+			(ADF_NUM_SYM_QPS_PER_BUNDLE *
+					ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device",
+				queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp metadata",
+			sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%hu", queue_pair_id);
+		qat_queue_delete(&(qp->tx_q));
+		goto create_err;
+	}
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	return 0;
+
+create_err:
+	rte_free(qp);
+	return (-EFAULT);
+}
+
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_qp *qp =
+			(struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return 0;
+	}
+
+	/* Don't free memory if there are still responses to be processed */
+	if (rte_atomic16_read(&(qp->inflights16)) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+	rte_free(qp);
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+	return 0;
+}
+
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id / ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id % ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id / ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id % ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
+static void qat_queue_delete(struct qat_queue *queue)
+{
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		PMD_DRV_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			PMD_DRV_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		PMD_DRV_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc * desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+			socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr,
+			queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					"0x%"PRIx64,
+					queue->base_phys_addr);
+		return -EFAULT;
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc, &(queue->queue_size))
+			!= 0) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	queue->modulo = ADF_RING_SIZE_MODULO(queue->queue_size);
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u modulo %u",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights,
+				queue->modulo);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return (-EINVAL);
+}
+
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	/* clear, rather than toggle, this ring's arbitration enable bit */
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..bbaf1c8
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
\ No newline at end of file
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..e500c1e
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,137 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
+static struct rte_cryptodev_ops crypto_qat_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+		/* Crypto related operations */
+		.session_get_size	= qat_crypto_sym_get_session_private_size,
+		.session_configure	= qat_crypto_sym_configure_session,
+		.session_initialize	= qat_crypto_sym_session_init,
+		.session_clear		= qat_crypto_sym_clear_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
+		{
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(__rte_unused struct rte_cryptodev_driver *crypto_drv,
+			struct rte_cryptodev *cryptodev)
+{
+	struct qat_pmd_private *internals;
+
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_QAT_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as the
+	 * primary process has already done this work.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
index ea97d16..e903b98 100644
--- a/lib/librte_mbuf_offload/rte_mbuf_offload.h
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -123,17 +123,10 @@ rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
 {
 	struct rte_mbuf_offload *ol = m->offload_ops;
 
-	if (m->offload_ops != NULL && m->offload_ops->type == type)
-		return ol;
-
-	ol = m->offload_ops;
-	while (ol != NULL) {
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
 		if (ol->type == type)
 			return ol;
 
-		ol = ol->next;
-	}
-
 	return ol;
 }
 
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 2b8ddce..cfcb064 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -150,6 +150,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from OpenSSL) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
                             ` (6 preceding siblings ...)
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
                             ` (2 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.

This PMD depends on Intel's multi-buffer library; see the whitepaper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors" (ref 1) for details of the library's design, and ref 2 to
download the library itself. This initial implementation is limited to
supporting the chained operations "hash then cipher" or "cipher then
hash" for the following cipher and hash algorithms (a configuration
sketch follows the lists below):

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC
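
For illustration, a minimal sketch of configuring a "cipher then hash"
chain with the xform API from this series (struct and field names as
defined in the cryptodev patches; aes_key/hmac_key and the key and
digest lengths are example values, not requirements):

  /* auth xform, terminating the chain */
  struct rte_crypto_xform auth_xform = {
          .type = RTE_CRYPTO_XFORM_AUTH,
          .next = NULL,
          .auth = {
                  .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
                  .key = { .data = hmac_key, .length = 20 },
                  .digest_length = 12, /* truncated, as per RFC 2404 */
          }
  };
  /* cipher xform, chained to the auth xform: cipher then hash */
  struct rte_crypto_xform cipher_xform = {
          .type = RTE_CRYPTO_XFORM_CIPHER,
          .next = &auth_xform,
          .cipher = {
                  .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
                  .algo = RTE_CRYPTO_CIPHER_AES128_CBC,
                  .key = { .data = aes_key, .length = 16 },
          }
  };

The chain is then passed to the device's session configure operation.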

Important Note:
Because the multi-buffer library is designed for accelerating IPsec
crypto operations, the digests generated for the HMAC functions are
truncated to the lengths specified by the IPsec RFCs; e.g. RFC 2404,
which covers using HMAC-SHA-1 with IPsec, specifies that the digest is
truncated from 20 to 12 bytes.
Build instructions:
To build DPDK with the AESNI_MB_PMD, the user must first download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where the multi-buffer library was
extracted and built, and CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y must be set
in config/common_linuxapp.
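
For example (the library path below is a placeholder for wherever the
library was extracted and built):

  export AESNI_MULTI_BUFFER_LIB_PATH=/path/to/multi-buffer-library

and in config/common_linuxapp:

  CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y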

Current status: the PMD does not support crypto operations across
chained mbufs, nor cipher-only or hash-only operations.

ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p

ref 2: https://downloadcenter.intel.com/download/22972

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                                        |   3 +
 config/common_bsdapp                               |   7 +
 config/common_linuxapp                             |   7 +
 doc/guides/cryptodevs/aesni_mb.rst                 |  76 +++
 doc/guides/cryptodevs/index.rst                    |   1 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/aesni_mb/Makefile                   |  63 ++
 drivers/crypto/aesni_mb/aesni_mb_ops.h             | 210 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 669 +++++++++++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     | 298 +++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 229 +++++++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |   3 +
 mk/rte.app.mk                                      |   4 +
 13 files changed, 1571 insertions(+)
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 73d9578..2d5808c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -303,6 +303,9 @@ Null PMD
 M: Tetsuya Mukawa <mukawa@igel.co.jp>
 F: drivers/net/null/
 
+Crypto AES-NI Multi-Buffer PMD
+M: Declan Doherty <declan.doherty@intel.com>
+F: driver/crypto/aesni_mb
 
 Packet processing
 -----------------
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 0068b20..a18e817 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -168,6 +168,13 @@ CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_MAX_QAT_SESSIONS=200
 
+
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+
 #
 # Support NIC bypass logic
 #
diff --git a/config/common_linuxapp b/config/common_linuxapp
index b29d3dd..d9c8c5c 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,6 +166,13 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
 
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n
+CONFIG_RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS=8
+CONFIG_RTE_AESNI_MB_PMD_MAX_NB_SESSIONS=2048
+
 #
 # Support NIC bypass logic
 #
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
new file mode 100644
index 0000000..826b632
--- /dev/null
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -0,0 +1,76 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AES-NI Multi Buffer Crypto Poll Mode Driver
+============================================
+
+
+The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver
+support utilising Intel's multi-buffer library; see the white paper
+`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
+<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-paper.html?wapkw=multi+buffer>`_.
+
+The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+AESNI MB PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To build DPDK with the AESNI_MB_PMD the user is required to download the library
+from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on
+their system before building DPDK. The environment variable
+AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where the multi-buffer
+library was extracted and built, and finally CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y
+must be set in config/common_linuxapp.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 1c31697..8949fd0 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,4 +39,5 @@ Crypto Device Drivers
     :maxdepth: 2
     :numbered:
 
+    aesni_mb
     qat
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index f6aecea..d07ee96 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -31,6 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile
new file mode 100644
index 0000000..3bf83d1
--- /dev/null
+++ b/drivers/crypto/aesni_mb/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_aesni_mb.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_aesni_version.map
+
+# external library include paths
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
new file mode 100644
index 0000000..0c119bf
--- /dev/null
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -0,0 +1,210 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AESNI_MB_OPS_H_
+#define _AESNI_MB_OPS_H_
+
+#ifndef LINUX
+#define LINUX
+#endif
+
+#include <mb_mgr.h>
+#include <aux_funcs.h>
+
+enum aesni_mb_vector_mode {
+	RTE_AESNI_MB_NOT_SUPPORTED = 0,
+	RTE_AESNI_MB_SSE,
+	RTE_AESNI_MB_AVX,
+	RTE_AESNI_MB_AVX2
+};
+
+typedef void (*md5_one_block_t)(void *data, void *digest);
+
+typedef void (*sha1_one_block_t)(void *data, void *digest);
+typedef void (*sha224_one_block_t)(void *data, void *digest);
+typedef void (*sha256_one_block_t)(void *data, void *digest);
+typedef void (*sha384_one_block_t)(void *data, void *digest);
+typedef void (*sha512_one_block_t)(void *data, void *digest);
+
+typedef void (*aes_keyexp_128_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_192_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_256_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+typedef void (*aes_xcbc_expand_key_t)
+		(void *key, void *exp_k1, void *k2, void *k3);
+
+/** Multi-buffer library function pointer table */
+struct aesni_mb_ops {
+	struct {
+		init_mb_mgr_t init_mgr;
+		/**< Initialise scheduler  */
+		get_next_job_t get_next;
+		/**< Get next free job structure */
+		submit_job_t submit;
+		/**< Submit job to scheduler */
+		get_completed_job_t get_completed_job;
+		/**< Get completed job */
+		flush_job_t flush_job;
+		/**< flush jobs from manager */
+	} job;
+	/**< multi buffer manager functions */
+
+	struct {
+		struct {
+			md5_one_block_t md5;
+			/**< MD5 one block hash */
+			sha1_one_block_t sha1;
+			/**< SHA1 one block hash */
+			sha224_one_block_t sha224;
+			/**< SHA224 one block hash */
+			sha256_one_block_t sha256;
+			/**< SHA256 one block hash */
+			sha384_one_block_t sha384;
+			/**< SHA384 one block hash */
+			sha512_one_block_t sha512;
+			/**< SHA512 one block hash */
+		} one_block;
+		/**< one block hash functions */
+
+		struct {
+			aes_keyexp_128_t aes128;
+			/**< AES128 key expansions */
+			aes_keyexp_192_t aes192;
+			/**< AES192 key expansions */
+			aes_keyexp_256_t aes256;
+			/**< AES256 key expansions */
+
+			aes_xcbc_expand_key_t aes_xcbc;
+			/**< AES XCBC key expansions */
+		} keyexp;
+		/**< Key expansion functions */
+	} aux;
+	/**< Auxiliary functions */
+};
+
+
+static const struct aesni_mb_ops job_ops[] = {
+		[RTE_AESNI_MB_NOT_SUPPORTED] = {
+			.job = {
+				NULL
+			},
+			.aux = {
+				.one_block = {
+					NULL
+				},
+				.keyexp = {
+					NULL
+				}
+			}
+		},
+		[RTE_AESNI_MB_SSE] = {
+			.job = {
+				init_mb_mgr_sse,
+				get_next_job_sse,
+				submit_job_sse,
+				get_completed_job_sse,
+				flush_job_sse
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_sse,
+					sha1_one_block_sse,
+					sha224_one_block_sse,
+					sha256_one_block_sse,
+					sha384_one_block_sse,
+					sha512_one_block_sse
+				},
+				.keyexp = {
+					aes_keyexp_128_sse,
+					aes_keyexp_192_sse,
+					aes_keyexp_256_sse,
+					aes_xcbc_expand_key_sse
+				}
+			}
+		},
+		[RTE_AESNI_MB_AVX] = {
+			.job = {
+				init_mb_mgr_avx,
+				get_next_job_avx,
+				submit_job_avx,
+				get_completed_job_avx,
+				flush_job_avx
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx,
+					sha1_one_block_avx,
+					sha224_one_block_avx,
+					sha256_one_block_avx,
+					sha384_one_block_avx,
+					sha512_one_block_avx
+				},
+				.keyexp = {
+					aes_keyexp_128_avx,
+					aes_keyexp_192_avx,
+					aes_keyexp_256_avx,
+					aes_xcbc_expand_key_avx
+				}
+			}
+		},
+		[RTE_AESNI_MB_AVX2] = {
+			.job = {
+				init_mb_mgr_avx2,
+				get_next_job_avx2,
+				submit_job_avx2,
+				get_completed_job_avx2,
+				flush_job_avx2
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx2,
+					sha1_one_block_avx2,
+					sha224_one_block_avx2,
+					sha256_one_block_avx2,
+					sha384_one_block_avx2,
+					sha512_one_block_avx2
+				},
+				.keyexp = {
+					aes_keyexp_128_avx2,
+					aes_keyexp_192_avx2,
+					aes_keyexp_256_avx2,
+					aes_xcbc_expand_key_avx2
+				}
+			}
+		}
+};
+
+
+#endif /* _AESNI_MB_OPS_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
new file mode 100644
index 0000000..d8ccf05
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -0,0 +1,669 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_mbuf_offload.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/**
+ * Global static parameter used to create a unique name for each AES-NI multi
+ * buffer crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
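+
+/*
+ * For example, with CRYPTODEV_NAME_AESNI_MB_PMD defined as
+ * "cryptodev_aesni_mb_pmd" (an assumed value), successive calls would
+ * yield "cryptodev_aesni_mb_pmd_0", "cryptodev_aesni_mb_pmd_1", ...
+ */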
+
+typedef void (*hash_one_block_t)(void *data, void *digest);
+typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+/**
+ * Calculate the authentication pre-computes
+ *
+ * @param one_block_hash	Function pointer to calculate digest on ipad/opad
+ * @param ipad			Inner pad output byte array
+ * @param opad			Outer pad output byte array
+ * @param hkey			Authentication key
+ * @param hkey_len		Authentication key length
+ * @param blocksize		Block size of selected hash algo
+ */
+static void
+calculate_auth_precomputes(hash_one_block_t one_block_hash,
+		uint8_t *ipad, uint8_t *opad,
+		uint8_t *hkey, uint16_t hkey_len,
+		uint16_t blocksize)
+{
+	unsigned i, length;
+
+	uint8_t ipad_buf[blocksize] __rte_aligned(16);
+	uint8_t opad_buf[blocksize] __rte_aligned(16);
+
+	/* Setup inner and outer pads */
+	memset(ipad_buf, HMAC_IPAD_VALUE, blocksize);
+	memset(opad_buf, HMAC_OPAD_VALUE, blocksize);
+
+	/* XOR hash key with inner and outer pads */
+	length = hkey_len > blocksize ? blocksize : hkey_len;
+
+	for (i = 0; i < length; i++) {
+		ipad_buf[i] ^= hkey[i];
+		opad_buf[i] ^= hkey[i];
+	}
+
+	/* Compute partial hashes */
+	(*one_block_hash)(ipad_buf, ipad);
+	(*one_block_hash)(opad_buf, opad);
+
+	/* Clean up stack */
+	memset(ipad_buf, 0, blocksize);
+	memset(opad_buf, 0, blocksize);
+}
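+
+/*
+ * Worked example (illustrative): for HMAC-SHA1 the block size is 64 bytes,
+ * so a 20 byte key is zero padded to 64 bytes, XORed with repeated
+ * 0x36 / 0x5C bytes and run through the one-block hash function; the
+ * resulting inner/outer states are stored in the session so that the
+ * multi-buffer library can resume them per packet instead of re-hashing
+ * the key pads on every operation.
+ */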
+
+/** Get xform chain order */
+static int
+aesni_mb_get_chain_order(const struct rte_crypto_xform *xform)
+{
+	/*
+	 * Multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained
+	 * operations, all other options are invalid, so we must have exactly
+	 * 2 xform structs chained together
+	 */
+	if (xform->next == NULL || xform->next->next != NULL)
+		return -1;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return HASH_CIPHER;
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+				xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return CIPHER_HASH;
+
+	return -1;
+}
+
+/** Set session authentication parameters */
+static int
+aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	hash_one_block_t hash_oneblock_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_AUTH) {
+		MB_LOG_ERR("Crypto xform struct not of type auth");
+		return -1;
+	}
+
+	/* Set Authentication Parameters */
+	if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC) {
+		sess->auth.algo = AES_XCBC;
+		(*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+				sess->auth.xcbc.k1_expanded,
+				sess->auth.xcbc.k2, sess->auth.xcbc.k3);
+		return 0;
+	}
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			xform->auth.key.data,
+			xform->auth.key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+/** Set session cipher parameters */
+static int
+aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	aes_keyexp_t aes_keyexp_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_CIPHER) {
+		MB_LOG_ERR("Crypto xform struct not of type cipher");
+		return -1;
+	}
+
+	/* Select cipher direction */
+	switch (xform->cipher.op) {
+	case RTE_CRYPTO_CIPHER_OP_ENCRYPT:
+		sess->cipher.direction = ENCRYPT;
+		break;
+	case RTE_CRYPTO_CIPHER_OP_DECRYPT:
+		sess->cipher.direction = DECRYPT;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher operation parameter");
+		return -1;
+	}
+
+	/* Select cipher mode */
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		sess->cipher.mode = CBC;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher mode parameter");
+		return -1;
+	}
+
+	/* Check key length and choose key expansion function */
+	switch (xform->cipher.key.length) {
+	case AES_128_BYTES:
+		sess->cipher.key_length_in_bytes = AES_128_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+		break;
+	case AES_192_BYTES:
+		sess->cipher.key_length_in_bytes = AES_192_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+		break;
+	case AES_256_BYTES:
+		sess->cipher.key_length_in_bytes = AES_256_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher key length");
+		return -1;
+	}
+
+	/* Expanded cipher keys */
+	(*aes_keyexp_fn)(xform->cipher.key.data,
+			sess->cipher.expanded_aes_keys.encode,
+			sess->cipher.expanded_aes_keys.decode);
+
+	return 0;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (aesni_mb_get_chain_order(xform)) {
+	case HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	case CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		auth_xform = xform->next;
+		cipher_xform = xform;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported operation chain order parameter");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
+		MB_LOG_ERR("Invalid/unsupported authentication parameters");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_cipher_parameters(mb_ops, sess,
+			cipher_xform)) {
+		MB_LOG_ERR("Invalid/unsupported cipher parameters");
+		return -1;
+	}
+	return 0;
+}
+
+/** Get multi buffer session */
+static struct aesni_mb_session *
+get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *crypto_op)
+{
+	struct aesni_mb_session *sess;
+
+	if (crypto_op->type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(crypto_op->session->type !=
+				RTE_CRYPTODEV_AESNI_MB_PMD))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)crypto_op->session->_private;
+	} else  {
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)c_sess->_private;
+
+		if (unlikely(aesni_mb_set_session_parameters(qp->ops,
+				sess, crypto_op->xform) != 0))
+			return NULL;
+	}
+
+	return sess;
+}
+
+/**
+ * Process a crypto operation and complete a JOB_AES_HMAC job structure for
+ * submission to the multi buffer library for processing.
+ *
+ * @param	qp	queue pair
+ * @param	m	mbuf to process
+ * @param	c_op	crypto operation to process
+ * @param	session	session associated with the crypto operation
+ *
+ * @return
+ * - Completed JOB_AES_HMAC structure pointer on success
+ * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible
+ */
+static JOB_AES_HMAC *
+process_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	JOB_AES_HMAC *job;
+
+	job = (*qp->ops->job.get_next)(&qp->mb_mgr);
+	if (unlikely(job == NULL))
+		return job;
+
+	/* Set crypto operation */
+	job->chain_order = session->chain_order;
+
+	/* Set cipher parameters */
+	job->cipher_direction = session->cipher.direction;
+	job->cipher_mode = session->cipher.mode;
+
+	job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes;
+	job->aes_enc_key_expanded = session->cipher.expanded_aes_keys.encode;
+	job->aes_dec_key_expanded = session->cipher.expanded_aes_keys.decode;
+
+
+	/* Set authentication parameters */
+	job->hash_alg = session->auth.algo;
+	if (job->hash_alg == AES_XCBC) {
+		job->_k1_expanded = session->auth.xcbc.k1_expanded;
+		job->_k2 = session->auth.xcbc.k2;
+		job->_k3 = session->auth.xcbc.k3;
+	} else {
+		job->hashed_auth_key_xor_ipad = session->auth.pads.inner;
+		job->hashed_auth_key_xor_opad = session->auth.pads.outer;
+	}
+
+	/* Mutable crypto operation parameters */
+
+	/* Set digest output location */
+	if (job->cipher_direction == DECRYPT) {
+		job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m,
+				get_digest_byte_length(job->hash_alg));
+
+		if (job->auth_tag_output)
+			memset(job->auth_tag_output, 0,
+				get_digest_byte_length(job->hash_alg));
+		else
+			return NULL;
+	} else {
+		job->auth_tag_output = c_op->digest.data;
+	}
+
+	/*
+	 * The multi-buffer library currently only supports returning a
+	 * truncated digest length, as specified in the relevant IPsec RFCs
+	 */
+	job->auth_tag_output_len_in_bytes =
+			get_truncated_digest_byte_length(job->hash_alg);
+
+	/* Set IV parameters */
+	job->iv = c_op->iv.data;
+	job->iv_len_in_bytes = c_op->iv.length;
+
+	/* Data parameters */
+	job->src = rte_pktmbuf_mtod(m, uint8_t *);
+	job->dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset;
+	job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length;
+
+	job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset;
+	job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length;
+
+	/* Set user data to be crypto operation data struct */
+	job->user_data = m;
+	job->user_data2 = c_op;
+
+	return job;
+}
+
+/**
+ * Process a completed job and return the mbuf which the job processed
+ *
+ * @param qp	queue pair which the job was retrieved from
+ * @param job	JOB_AES_HMAC job to process
+ *
+ * @return
+ * - Returns the processed mbuf, trimmed of the output digest used in
+ * verification of the supplied digest in the case of a HASH_CIPHER operation
+ * - Returns NULL on invalid job
+ */
+static struct rte_mbuf *
+post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m;
+	struct rte_crypto_op *c_op;
+
+	if (job->user_data == NULL)
+		return NULL;
+
+	/* handle the retrieved job */
+	m = (struct rte_mbuf *)job->user_data;
+	c_op = (struct rte_crypto_op *)job->user_data2;
+
+	/* set status as successful by default */
+	c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	/* check if job has been processed  */
+	if (unlikely(job->status != STS_COMPLETED)) {
+		c_op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return m;
+	} else if (job->chain_order == HASH_CIPHER) {
+		/* Verify digest if required */
+		if (memcmp(job->auth_tag_output, c_op->digest.data,
+				job->auth_tag_output_len_in_bytes) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg));
+	}
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	return m;
+}
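+
+/*
+ * Note on the verification flow above (summary, not new behaviour): for a
+ * HASH_CIPHER (decrypt) operation, process_crypto_op() appends scratch
+ * space for the computed digest to the mbuf; post_process_mb_job()
+ * compares that computed value against the digest supplied in the crypto
+ * op and then trims the scratch space off again before the packet is
+ * returned.
+ */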
+
+/**
+ * Process a completed JOB_AES_HMAC job and keep processing jobs until
+ * get_completed_job returns NULL
+ *
+ * @param qp		Queue Pair to process
+ * @param job		JOB_AES_HMAC job
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m = NULL;
+	unsigned processed_jobs = 0;
+
+	while (job) {
+		processed_jobs++;
+		m = post_process_mb_job(qp, job);
+		if (m)
+			rte_ring_enqueue(qp->processed_pkts, (void *)m);
+		else
+			qp->qp_stats.dequeue_err_count++;
+
+		job = (*qp->ops->job.get_completed_job)(&qp->mb_mgr);
+	}
+
+	return processed_jobs;
+}
+
+static uint16_t
+aesni_mb_pmd_enqueue_burst(void *queue_pair, struct rte_mbuf **bufs,
+		uint16_t nb_bufs)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct aesni_mb_session *sess;
+	struct aesni_mb_qp *qp = queue_pair;
+
+	JOB_AES_HMAC *job = NULL;
+
+	int i, processed_jobs = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+		ol = rte_pktmbuf_offload_get(bufs[i], RTE_PKTMBUF_OL_CRYPTO);
+		if (unlikely(ol == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		sess = get_session(qp, &ol->op.crypto);
+		if (unlikely(sess == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		job = process_crypto_op(qp, bufs[i], &ol->op.crypto, sess);
+		if (unlikely(job == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		/* Submit Job */
+		job = (*qp->ops->job.submit)(&qp->mb_mgr);
+
+		/*
+		 * If submit returns a processed job then handle it,
+		 * before submitting subsequent jobs
+		 */
+		if (job)
+			processed_jobs += handle_completed_jobs(qp, job);
+	}
+
+	if (processed_jobs == 0)
+		goto flush_jobs;
+
+	qp->qp_stats.enqueued_count += processed_jobs;
+	return i;
+
+flush_jobs:
+	/*
+	 * If we haven't processed any jobs in submit loop, then flush jobs
+	 * queue to stop the output stalling
+	 */
+	job = (*qp->ops->job.flush_job)(&qp->mb_mgr);
+	if (job)
+		qp->qp_stats.enqueued_count += handle_completed_jobs(qp, job);
+
+	return i;
+}
+
+static uint16_t
+aesni_mb_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
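+/*
+ * Application usage sketch (illustrative, assuming the burst API names
+ * from the cryptodev library in this patch series): the application
+ * attaches a crypto offload to each mbuf and then drives the queue pair
+ * with something like
+ *
+ *	nb_tx = rte_cryptodev_enqueue_burst(dev_id, qp_id, pkts, nb_pkts);
+ *	...
+ *	nb_rx = rte_cryptodev_dequeue_burst(dev_id, qp_id, pkts, nb_pkts);
+ *
+ * which dispatch to the aesni_mb_pmd_enqueue_burst()/_dequeue_burst()
+ * functions registered for this device.
+ */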
+
+static int cryptodev_aesni_mb_uninit(const char *name);
+
+static int
+cryptodev_aesni_mb_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct aesni_mb_private *internals;
+	enum aesni_mb_vector_mode vector_mode;
+
+	/* Check CPU for support for AES instruction set */
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
+		MB_LOG_ERR("AES instructions not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_MB_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_MB_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_MB_SSE;
+	else {
+		MB_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		MB_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct aesni_mb_private), socket_id);
+	if (dev == NULL) {
+		MB_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	dev->dev_ops = rte_aesni_mb_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
+	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+
+	/* Set vector instructions mode supported */
+	internals = dev->data->dev_private;
+
+	internals->vector_mode = vector_mode;
+	internals->max_nb_queue_pairs = RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS;
+	internals->max_nb_sessions = RTE_AESNI_MB_PMD_MAX_NB_SESSIONS;
+
+	return dev->data->dev_id;
+init_error:
+	MB_LOG_ERR("driver %s: cryptodev_aesni_create failed", name);
+
+	cryptodev_aesni_mb_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_aesni_mb_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_aesni_mb_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_aesni_mb_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_aesni_mb_pmd_drv = {
+	.name = CRYPTODEV_NAME_AESNI_MB_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_aesni_mb_init,
+	.uninit = cryptodev_aesni_mb_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
new file mode 100644
index 0000000..96d22f6
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -0,0 +1,298 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/** Configure device */
+static int
+aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+aesni_mb_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		dev_info->max_nb_sessions = internals->max_nb_sessions;
+	}
+}
+
+/** Release queue pair */
+static int
+aesni_mb_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair based on the dev_id and qp_id */
+static int
+aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct aesni_mb_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"aesni_mb_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed packets on */
+static struct rte_ring *
+aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			MB_LOG_INFO("Reusing existing ring %s for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		MB_LOG_ERR("Unable to reuse existing ring %s for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct aesni_mb_qp *qp = NULL;
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		aesni_mb_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (aesni_mb_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->ops = &job_ops[internals->vector_mode];
+
+	qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	/* Initialise multi-buffer manager */
+	(*qp->ops->job.init_mgr)(&qp->mb_mgr);
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+aesni_mb_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the aesni multi-buffer session structure */
+static unsigned
+aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct aesni_mb_session);
+}
+
+/** Configure an AES-NI multi-buffer session from a crypto xform chain */
+static void *
+aesni_mb_pmd_session_configure(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform,	void *sess)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (unlikely(sess == NULL)) {
+		MB_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+			sess, xform) != 0) {
+		MB_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+aesni_mb_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/*
+	 * Currently we just reset the whole data structure; it needs to be
+	 * investigated whether a more selective reset of the key material
+	 * would be more performant
+	 */
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
+}
+
+struct rte_cryptodev_ops aesni_mb_pmd_ops = {
+		.dev_configure		= aesni_mb_pmd_config,
+		.dev_start		= aesni_mb_pmd_start,
+		.dev_stop		= aesni_mb_pmd_stop,
+		.dev_close		= aesni_mb_pmd_close,
+
+		.stats_get		= aesni_mb_pmd_stats_get,
+		.stats_reset		= aesni_mb_pmd_stats_reset,
+
+		.dev_infos_get		= aesni_mb_pmd_info_get,
+
+		.queue_pair_setup	= aesni_mb_pmd_qp_setup,
+		.queue_pair_release	= aesni_mb_pmd_qp_release,
+		.queue_pair_start	= aesni_mb_pmd_qp_start,
+		.queue_pair_stop	= aesni_mb_pmd_qp_stop,
+		.queue_pair_count	= aesni_mb_pmd_qp_count,
+
+		.session_get_size	= aesni_mb_pmd_session_get_size,
+		.session_configure	= aesni_mb_pmd_session_configure,
+		.session_clear		= aesni_mb_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
new file mode 100644
index 0000000..2f98609
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -0,0 +1,229 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_
+#define _RTE_AESNI_MB_PMD_PRIVATE_H_
+
+#include "aesni_mb_ops.h"
+
+#define MB_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_AESNI_MB_DEBUG
+#define MB_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#define MB_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define MB_LOG_INFO(fmt, args...)
+#define MB_LOG_DBG(fmt, args...)
+#endif
+
+#define HMAC_IPAD_VALUE			(0x36)
+#define HMAC_OPAD_VALUE			(0x5C)
+
+static const unsigned auth_blocksize[] = {
+		[MD5]		= 64,
+		[SHA1]		= 64,
+		[SHA_224]	= 64,
+		[SHA_256]	= 64,
+		[SHA_384]	= 128,
+		[SHA_512]	= 128,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the blocksize in bytes for a specified authentication algorithm
+ *
+ * @note this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_auth_algo_blocksize(JOB_HASH_ALG algo)
+{
+	return auth_blocksize[algo];
+}
+
+static const unsigned auth_truncated_digest_byte_lengths[] = {
+		[MD5]		= 12,
+		[SHA1]		= 12,
+		[SHA_224]	= 14,
+		[SHA_256]	= 16,
+		[SHA_384]	= 24,
+		[SHA_512]	= 32,
+		[AES_XCBC]	= 12,
+};
+
+/**
+ * Get the IPsec specified truncated length in bytes of the HMAC digest for a
+ * specified authentication algorithm
+ *
+ * @note this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_truncated_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_truncated_digest_byte_lengths[algo];
+}
+
+static const unsigned auth_digest_byte_lengths[] = {
+		[MD5]		= 16,
+		[SHA1]		= 20,
+		[SHA_224]	= 28,
+		[SHA_256]	= 32,
+		[SHA_384]	= 48,
+		[SHA_512]	= 64,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the output digest size in bytes for a specified authentication algorithm
+ *
+ * @note this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_digest_byte_lengths[algo];
+}
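+
+/*
+ * Example (from the tables above): SHA1 HMAC produces a 20 byte digest,
+ * but get_truncated_digest_byte_length(SHA1) returns 12, the truncated
+ * length mandated by RFC 2404 (HMAC-SHA-1-96) for IPsec.
+ */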
+
+
+/** private data structure for each virtual AESNI device */
+struct aesni_mb_private {
+	enum aesni_mb_vector_mode vector_mode;
+	/**< CPU vector instruction set mode */
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+/** AESNI Multi buffer queue pair */
+struct aesni_mb_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	const struct aesni_mb_ops *ops;
+	/**< Vector mode dependent pointer table of the multi-buffer APIs */
+	MB_MGR mb_mgr;
+	/**< Multi-buffer instance */
+	struct rte_ring *processed_pkts;
+	/**< Ring for placing process packets */
+	struct rte_mempool *sess_mp;
+	/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;
+	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** AES-NI multi-buffer private session structure */
+struct aesni_mb_session {
+	JOB_CHAIN_ORDER chain_order;
+
+	/** Cipher Parameters */
+	struct {
+		/** Cipher direction - encrypt / decrypt */
+		JOB_CIPHER_DIRECTION direction;
+		/** Cipher mode - CBC / Counter */
+		JOB_CIPHER_MODE mode;
+
+		uint64_t key_length_in_bytes;
+
+		struct {
+			uint32_t encode[60] __rte_aligned(16);
+			/**< encode key */
+			uint32_t decode[60] __rte_aligned(16);
+			/**< decode key */
+		} expanded_aes_keys;
+		/**< Expanded AES keys - Allocating space to
+		 * contain the maximum expanded key size which
+		 * is 240 bytes for 256 bit AES, calculated by:
+		 * ((block size (16 bytes)) *
+		 * ((number of rounds) + 1))
+		 */
+	} cipher;
+
+	/** Authentication Parameters */
+	struct {
+		JOB_HASH_ALG algo; /**< Authentication Algorithm */
+		union {
+			struct {
+				uint8_t inner[128] __rte_aligned(16);
+				/**< inner pad */
+				uint8_t outer[128] __rte_aligned(16);
+				/**< outer pad */
+			} pads;
+			/**< HMAC Authentication pads -
+			 * allocating space for the maximum pad
+			 * size supported which is 128 bytes for
+			 * SHA512
+			 */
+
+			struct {
+			    uint32_t k1_expanded[44] __rte_aligned(16);
+			    /**< k1 (expanded key). */
+			    uint8_t k2[16] __rte_aligned(16);
+			    /**< k2. */
+			    uint8_t k3[16] __rte_aligned(16);
+			    /**< k3. */
+			} xcbc;
+			/**< Expanded XCBC authentication keys */
+		};
+	} auth;
+} __rte_cache_aligned;
+
+
+/** Parse crypto xform chain and set private session parameters */
+extern int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
+
+
+
+#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
new file mode 100644
index 0000000..ad607bb
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index cfcb064..4a660e6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -153,6 +153,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 
+# AESNI MULTI BUFFER is dependent on the IPSec_MB library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 09/10] app/test: add cryptodev unit and performance tests
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
                             ` (7 preceding siblings ...)
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 10/10] l2fwd-crypto: crypto Declan Doherty
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

Unit tests are run by using cryptodev_qat_autotest or
cryptodev_aesni_autotest from the test app's interactive console.

Performance tests are run by using the cryptodev_qat_perftest or
cryptodev_aesni_mb_perftest command from the test app's interactive
console.

If you wish to run the tests on a QAT device there must be one
bound to the igb_uio kernel driver.
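
For example, to exercise the AES-NI MB PMD from the test app prompt
(an illustrative session; the command names are those listed above):

  RTE>>cryptodev_aesni_autotest
  RTE>>cryptodev_aesni_mb_perftest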

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                          |    2 +
 app/test/Makefile                    |    4 +
 app/test/test.c                      |   92 +-
 app/test/test.h                      |   34 +-
 app/test/test_cryptodev.c            | 1986 ++++++++++++++++++++++++++++++++
 app/test/test_cryptodev.h            |   68 ++
 app/test/test_cryptodev_perf.c       | 2062 ++++++++++++++++++++++++++++++++++
 app/test/test_link_bonding.c         |    6 +-
 app/test/test_link_bonding_mode4.c   |    7 +-
 app/test/test_link_bonding_rssconf.c |    7 +-
 10 files changed, 4219 insertions(+), 49 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 2d5808c..1f72f8c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -204,6 +204,8 @@ Crypto API
 M: Declan Doherty <declan.doherty@intel.com>
 F: lib/librte_cryptodev
 F: docs/guides/cryptodevs
+F: app/test/test_cryptodev.c
+F: app/test/test_cryptodev_perf.c
 
 Drivers
 -------
diff --git a/app/test/Makefile b/app/test/Makefile
index de63235..ec33e1a 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -149,6 +149,10 @@ endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 CFLAGS += -O3
diff --git a/app/test/test.c b/app/test/test.c
index b94199a..f35b304 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -159,51 +159,81 @@ main(int argc, char **argv)
 int
 unit_test_suite_runner(struct unit_test_suite *suite)
 {
-	int retval, i = 0;
+	int test_success;
+	unsigned total = 0, executed = 0, skipped = 0, succeeded = 0, failed = 0;
 
-	if (suite->suite_name)
-		printf("Test Suite : %s\n", suite->suite_name);
+	if (suite->suite_name) {
+		printf(" + ------------------------------------------------------- +\n");
+		printf(" + Test Suite : %s\n", suite->suite_name);
+	}
 
 	if (suite->setup)
 		if (suite->setup() != 0)
-			return -1;
-
-	while (suite->unit_test_cases[i].testcase) {
-		/* Run test case setup */
-		if (suite->unit_test_cases[i].setup) {
-			retval = suite->unit_test_cases[i].setup();
-			if (retval != 0)
-				return retval;
-		}
+			goto suite_summary;
 
-		/* Run test case */
-		if (suite->unit_test_cases[i].testcase() == 0) {
-			printf("TestCase %2d: %s\n", i,
-					suite->unit_test_cases[i].success_msg ?
-					suite->unit_test_cases[i].success_msg :
-					"passed");
-		}
-		else {
-			printf("TestCase %2d: %s\n", i, suite->unit_test_cases[i].fail_msg ?
-					suite->unit_test_cases[i].fail_msg :
-					"failed");
-			return -1;
+	printf(" + ------------------------------------------------------- +\n");
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (!suite->unit_test_cases[total].enabled) {
+			skipped++;
+			total++;
+			continue;
+		} else {
+			executed++;
 		}
 
-		/* Run test case teardown */
-		if (suite->unit_test_cases[i].teardown) {
-			retval = suite->unit_test_cases[i].teardown();
-			if (retval != 0)
-				return retval;
+		/* run test case setup */
+		if (suite->unit_test_cases[total].setup)
+			test_success = suite->unit_test_cases[total].setup();
+		else
+			test_success = TEST_SUCCESS;
+
+		if (test_success == TEST_SUCCESS) {
+			/* run the test case */
+			test_success = suite->unit_test_cases[total].testcase();
+			if (test_success == TEST_SUCCESS)
+				succeeded++;
+			else
+				failed++;
+		} else {
+			failed++;
 		}
 
-		i++;
+		/* run the test case teardown */
+		if (suite->unit_test_cases[total].teardown)
+			suite->unit_test_cases[total].teardown();
+
+		if (test_success == TEST_SUCCESS)
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].success_msg ?
+					suite->unit_test_cases[total].success_msg :
+					"passed");
+		else
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].fail_msg ?
+					suite->unit_test_cases[total].fail_msg :
+					"failed");
+
+		total++;
 	}
 
 	/* Run test suite teardown */
 	if (suite->teardown)
-		if (suite->teardown() != 0)
-			return -1;
+		suite->teardown();
+
+suite_summary:
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary \n");
+	printf(" + Tests Total :       %2d\n", total);
+	printf(" + Tests Skipped :     %2d\n", skipped);
+	printf(" + Tests Executed :    %2d\n", executed);
+	printf(" + Tests Passed :      %2d\n", succeeded);
+	printf(" + Tests Failed :      %2d\n", failed);
+	printf(" + ------------------------------------------------------- +\n");
+
+	if (failed)
+		return -1;
 
 	return 0;
 }
diff --git a/app/test/test.h b/app/test/test.h
index 62eb51d..a2fba60 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -33,7 +33,7 @@
 
 #ifndef _TEST_H_
 #define _TEST_H_
-
+#include <stddef.h>
 #include <sys/queue.h>
 
 #define TEST_SUCCESS  (0)
@@ -64,6 +64,17 @@
 		}                                                        \
 } while (0)
 
+
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len,  msg, ...) do {	\
+	if (memcmp(a, b, len)) {                                        \
+		printf("TestCase %s() line %d failed: "              \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__);    \
+		TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);    \
+		return TEST_FAILED;                                  \
+	}                                                        \
+} while (0)
+
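+/*
+ * Example usage (illustrative; the variable names are hypothetical):
+ *
+ *	TEST_ASSERT_BUFFERS_ARE_EQUAL(computed_digest, reference_digest,
+ *			digest_len, "Generated digest not as expected");
+ */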
+
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
 			printf("TestCase %s() line %d failed: "              \
@@ -113,27 +124,36 @@
 
 struct unit_test_case {
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	int (*testcase)(void);
 	const char *success_msg;
 	const char *fail_msg;
+	unsigned enabled;
 };
 
-#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed"}
+#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed", 1 }
 
 #define TEST_CASE_NAMED(name, fn) { NULL, NULL, fn, name " succeeded", \
-		name " failed"}
+		name " failed", 1 }
 
 #define TEST_CASE_ST(setup, teardown, testcase)         \
 		{ setup, teardown, testcase, #testcase " succeeded",    \
-		#testcase " failed "}
+		#testcase " failed ", 1 }
+
+
+#define TEST_CASE_DISABLED(fn) { NULL, NULL, fn, #fn " succeeded", \
+	#fn " failed", 0 }
+
+#define TEST_CASE_ST_DISABLED(setup, teardown, testcase)         \
+		{ setup, teardown, testcase, #testcase " succeeded",    \
+		#testcase " failed ", 0 }
 
-#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL }
+#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL, 0 }
 
 struct unit_test_suite {
 	const char *suite_name;
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	struct unit_test_case unit_test_cases[];
 };
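+
+/*
+ * Example suite layout (hypothetical names):
+ *
+ * static struct unit_test_suite my_suite = {
+ *	.suite_name = "My Test Suite",
+ *	.setup = testsuite_setup,
+ *	.teardown = testsuite_teardown,
+ *	.unit_test_cases = {
+ *		TEST_CASE_ST(ut_setup, ut_teardown, test_something),
+ *		TEST_CASES_END()
+ *	}
+ * };
+ */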
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
new file mode 100644
index 0000000..fd5b7ec
--- /dev/null
+++ b/app/test/test_cryptodev.c
@@ -0,0 +1,1986 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mbuf_offload.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+static enum rte_cryptodev_type gbl_cryptodev_type;
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_ol_pool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *op;
+
+	struct rte_mbuf *obuf, *ibuf;
+
+	uint8_t *digest;
+};
+
+/*
+ * Forward declarations.
+ */
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params);
+
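+/*
+ * Allocate an mbuf and fill it with 'len' bytes of 'string', with 'len'
+ * rounded down to a multiple of 'blocksize' (no rounding if blocksize is
+ * zero). Returns NULL if the allocation or the append fails.
+ */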
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst;
+
+		memset(m->buf_addr, 0, m->buf_len);
+
+		dst = rte_pktmbuf_append(m, t_len);
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+
+	return m;
+}
+
+#if HEX_DUMP
+static void
+hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
+{
+	rte_hexdump(f, title, rte_pktmbuf_mtod(m, const void *), m->data_len);
+}
+#endif
+
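+/*
+ * Enqueue a single mbuf carrying a crypto op on queue pair 0 of 'dev_id'
+ * and busy-poll the same queue pair until the processed mbuf is returned.
+ */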
+static struct rte_mbuf *
+process_crypto_request(uint8_t dev_id, struct rte_mbuf *ibuf)
+{
+	struct rte_mbuf *obuf = NULL;
+#if HEX_DUMP
+	hexdump_mbuf_data(stdout, "Enqueued Packet", ibuf);
+#endif
+
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &ibuf, 1) != 1) {
+		printf("Error sending packet for encryption");
+		return NULL;
+	}
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &obuf, 1) == 0)
+		rte_pause();
+
+#if HEX_DUMP
+	if (obuf)
+		hexdump_mbuf_data(stdout, "Dequeued Packet", obuf);
+#endif
+
+	return obuf;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, dev_id = 0;
+	uint16_t qp_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	if (ts_params->mbuf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+				"CRYPTO_MBUFPOOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+				rte_socket_id());
+		if (ts_params->mbuf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"MBUF_OFFLOAD_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS * sizeof(struct rte_crypto_xform),
+			rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(
+				RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int ret = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(ret >= 0,
+					"Failed to create instance %u of"
+					" pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?");
+		return TEST_FAILED;
+	}
+
+	/* Create list of valid crypto devs */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_type)
+			ts_params->valid_devs[ts_params->valid_dev_count++] = i;
+	}
+
+	if (ts_params->valid_dev_count < 1)
+		return TEST_FAILED;
+
+	/* Set up all the qps on the first of the valid devices found */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	/*
+	 * Since we can't free and re-allocate queue memory always set
+	 * the queues on this device up to max size first so enough
+	 * memory is allocated for any later re-configures needed by
+	 * other tests
+	 */
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = info.max_nb_sessions;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				dev_id, qp_id, &ts_params->qp_conf,
+				rte_cryptodev_socket_id(dev_id)),
+				"Failed to setup queue pair %u on "
+				"cryptodev %u",
+				qp_id, dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+	}
+
+	if (ts_params->mbuf_ol_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_ol_pool));
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	uint16_t qp_id;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	/*
+	 * Now reconfigure queues to size we actually want to use in this
+	 * test suite.
+	 */
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs ; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0], qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+	}
+
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	/* free crypto session structure */
+	if (ut_params->sess) {
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	/*
+	 * free mbuf - obuf and ibuf usually point at the same mbuf, so
+	 * clear the alias before freeing to avoid a double free
+	 */
+	if (ut_params->obuf) {
+		if (ut_params->obuf == ut_params->ibuf)
+			ut_params->ibuf = NULL;
+		rte_pktmbuf_free(ut_params->obuf);
+		ut_params->obuf = NULL;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = NULL;
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+
+	rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+static int
+test_device_configure_invalid_dev_id(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint16_t dev_id, num_devs = 0;
+
+	TEST_ASSERT((num_devs = rte_cryptodev_count()) >= 1,
+			"Need at least 1 device for test");
+
+	/* valid dev_id values */
+	dev_id = ts_params->valid_devs[ts_params->valid_dev_count - 1];
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	/* invalid dev_id values */
+	dev_id = num_devs;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	dev_id = 0xff;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure:"
+			"invalid dev_num %u", dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_device_configure_invalid_queue_pair_ids(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	/* valid - one queue pairs */
+	ts_params->conf.nb_queue_pairs = 1;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* valid - max value queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - zero queue pairs */
+	ts_params->conf.nb_queue_pairs = 0;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value supported by field queue pairs */
+	ts_params->conf.nb_queue_pairs = UINT16_MAX;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value + 1 queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_queue_pair_descriptor_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info dev_info;
+	struct rte_cryptodev_qp_conf qp_conf = {
+		.nb_descriptors = MAX_NUM_OPS_INFLIGHT
+	};
+
+	uint16_t qp_id;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+
+	ts_params->conf.session_mp.nb_objs = dev_info.max_nb_sessions;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf), "Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+
+	/*
+	 * Test various ring sizes on this device. memzones can't be
+	 * freed so are re-used if ring is released and re-created.
+	 */
+	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights "
+				"%u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = (uint32_t)(MAX_NUM_OPS_INFLIGHT / 2);
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT; /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max supported + 2 */
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT + 2;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - near the maximum of the field */
+	qp_conf.nb_descriptors = UINT32_MAX - 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - default supported size + 1 */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT + 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* test invalid queue pair id */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;	/* valid */
+
+	qp_id = DEFAULT_NUM_QPS_PER_QAT_DEVICE;		/* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	qp_id = 0xffff; /* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Plaintext data for tests ***** */
+
+const char catch_22_quote_1[] =
+		"There was only one catch and that was Catch-22, which "
+		"specified that a concern for one's safety in the face of "
+		"dangers that were real and immediate was the process of a "
+		"rational mind. Orr was crazy and could be grounded. All he "
+		"had to do was ask; and as soon as he did, he would no longer "
+		"be crazy and would have to fly more missions. Orr would be "
+		"crazy to fly more missions and sane if he didn't, but if he "
+		"was sane he had to fly them. If he flew them he was crazy "
+		"and didn't have to; but if he didn't want to he was sane and "
+		"had to. Yossarian was moved very deeply by the absolute "
+		"simplicity of this clause of Catch-22 and let out a "
+		"respectful whistle. \"That's some catch, that Catch-22\", he "
+		"observed. \"It's the best there is,\" Doc Daneeka agreed.";
+
+const char catch_22_quote[] =
+		"What a lousy earth! He wondered how many people were "
+		"destitute that same night even in his own prosperous country, "
+		"how many homes were shanties, how many husbands were drunk "
+		"and wives socked, and how many children were bullied, abused, "
+		"or abandoned. How many families hungered for food they could "
+		"not afford to buy? How many hearts were broken? How many "
+		"suicides would take place that same night, how many people "
+		"would go insane? How many cockroaches and landlords would "
+		"triumph? How many winners were losers, successes failures, "
+		"and rich men poor men? How many wise guys were stupid? How "
+		"many happy endings were unhappy endings? How many honest men "
+		"were liars, brave men cowards, loyal men traitors, how many "
+		"sainted men were corrupt, how many people in positions of "
+		"trust had sold their souls to bodyguards, how many had never "
+		"had souls? How many straight-and-narrow paths were crooked "
+		"paths? How many best families were worst families and how "
+		"many good people were bad people? When you added them all up "
+		"and then subtracted, you might be left with only the children, "
+		"and perhaps with Albert Einstein and an old violinist or "
+		"sculptor somewhere.";
+
+#define QUOTE_480_BYTES		(480)
+#define QUOTE_512_BYTES		(512)
+#define QUOTE_768_BYTES		(768)
+#define QUOTE_1024_BYTES	(1024)
+
+
+
+/* ***** SHA1 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA1	(DIGEST_BYTE_LENGTH_SHA1)
+
+static uint8_t hmac_sha1_key[] = {
+	0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+	0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+	0xDE, 0xF4, 0xDE, 0xAD };
+
+/* ***** SHA224 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA224	(DIGEST_BYTE_LENGTH_SHA224)
+
+
+/* ***** AES-CBC Cipher Tests ***** */
+
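+/* AES-CBC uses a 16 byte IV, the same size as the AES block and, for
+ * AES-128, the same size as the cipher key.
+ */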
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+	0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+	0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A };
+
+static uint8_t aes_cbc_iv[] = {
+	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
+
+
+/* ***** AES-CBC / HMAC-SHA1 Hash Tests ***** */
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
+	0x8B, 0x4D, 0xDA, 0x1B, 0xCF, 0x04, 0xA0, 0x31,
+	0xB4, 0xBF, 0xBD, 0x68, 0x43, 0x20, 0x7E, 0x76,
+	0xB1, 0x96, 0x8B, 0xA2, 0x7C, 0xA2, 0x83, 0x9E,
+	0x39, 0x5A, 0x2F, 0x7E, 0x92, 0xB4, 0x48, 0x1A,
+	0x3F, 0x6B, 0x5D, 0xDF, 0x52, 0x85, 0x5F, 0x8E,
+	0x42, 0x3C, 0xFB, 0xE9, 0x1A, 0x24, 0xD6, 0x08,
+	0xDD, 0xFD, 0x16, 0xFB, 0xE9, 0x55, 0xEF, 0xF0,
+	0xA0, 0x8D, 0x13, 0xAB, 0x81, 0xC6, 0x90, 0x01,
+	0xB5, 0x18, 0x84, 0xB3, 0xF6, 0xE6, 0x11, 0x57,
+	0xD6, 0x71, 0xC6, 0x3C, 0x3F, 0x2F, 0x33, 0xEE,
+	0x24, 0x42, 0x6E, 0xAC, 0x0B, 0xCA, 0xEC, 0xF9,
+	0x84, 0xF8, 0x22, 0xAA, 0x60, 0xF0, 0x32, 0xA9,
+	0x75, 0x75, 0x3B, 0xCB, 0x70, 0x21, 0x0A, 0x8D,
+	0x0F, 0xE0, 0xC4, 0x78, 0x2B, 0xF8, 0x97, 0xE3,
+	0xE4, 0x26, 0x4B, 0x29, 0xDA, 0x88, 0xCD, 0x46,
+	0xEC, 0xAA, 0xF9, 0x7F, 0xF1, 0x15, 0xEA, 0xC3,
+	0x87, 0xE6, 0x31, 0xF2, 0xCF, 0xDE, 0x4D, 0x80,
+	0x70, 0x91, 0x7E, 0x0C, 0xF7, 0x26, 0x3A, 0x92,
+	0x4F, 0x18, 0x83, 0xC0, 0x8F, 0x59, 0x01, 0xA5,
+	0x88, 0xD1, 0xDB, 0x26, 0x71, 0x27, 0x16, 0xF5,
+	0xEE, 0x10, 0x82, 0xAC, 0x68, 0x26, 0x9B, 0xE2,
+	0x6D, 0xD8, 0x9A, 0x80, 0xDF, 0x04, 0x31, 0xD5,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x73, 0x02, 0x42, 0xC9, 0x23, 0x18, 0x8E, 0xB4,
+	0x6F, 0xB4, 0xA3, 0x54, 0x6E, 0x88, 0x3B, 0x62,
+	0x7C, 0x02, 0x8D, 0x4C, 0x9F, 0xC8, 0x45, 0xF4,
+	0xC9, 0xDE, 0x4F, 0xEB, 0x22, 0x83, 0x1B, 0xE4,
+	0x49, 0x37, 0xE4, 0xAD, 0xE7, 0xCD, 0x21, 0x54,
+	0xBC, 0x1C, 0xC2, 0x04, 0x97, 0xB4, 0x10, 0x61,
+	0xF0, 0xE4, 0xEF, 0x27, 0x63, 0x3A, 0xDA, 0x91,
+	0x41, 0x25, 0x62, 0x1C, 0x5C, 0xB6, 0x38, 0x4A,
+	0x88, 0x71, 0x59, 0x5A, 0x8D, 0xA0, 0x09, 0xAF,
+	0x72, 0x94, 0xD7, 0x79, 0x5C, 0x60, 0x7C, 0x8F,
+	0x4C, 0xF5, 0xD9, 0xA1, 0x39, 0x6D, 0x81, 0x28,
+	0xEF, 0x13, 0x28, 0xDF, 0xF5, 0x3E, 0xF7, 0x8E,
+	0x09, 0x9C, 0x78, 0x18, 0x79, 0xB8, 0x68, 0xD7,
+	0xA8, 0x29, 0x62, 0xAD, 0xDE, 0xE1, 0x61, 0x76,
+	0x1B, 0x05, 0x16, 0xCD, 0xBF, 0x02, 0x8E, 0xA6,
+	0x43, 0x6E, 0x92, 0x55, 0x4F, 0x60, 0x9C, 0x03,
+	0xB8, 0x4F, 0xA3, 0x02, 0xAC, 0xA8, 0xA7, 0x0C,
+	0x1E, 0xB5, 0x6B, 0xF8, 0xC8, 0x4D, 0xDE, 0xD2,
+	0xB0, 0x29, 0x6E, 0x40, 0xE6, 0xD6, 0xC9, 0xE6,
+	0xB9, 0x0F, 0xB6, 0x63, 0xF5, 0xAA, 0x2B, 0x96,
+	0xA7, 0x16, 0xAC, 0x4E, 0x0A, 0x33, 0x1C, 0xA6,
+	0xE6, 0xBD, 0x8A, 0xCF, 0x40, 0xA9, 0xB2, 0xFA,
+	0x63, 0x27, 0xFD, 0x9B, 0xD9, 0xFC, 0xD5, 0x87,
+	0x8D, 0x4C, 0xB6, 0xA4, 0xCB, 0xE7, 0x74, 0x55,
+	0xF4, 0xFB, 0x41, 0x25, 0xB5, 0x4B, 0x0A, 0x1B,
+	0xB1, 0xD6, 0xB7, 0xD9, 0x47, 0x2A, 0xC3, 0x98,
+	0x6A, 0xC4, 0x03, 0x73, 0x1F, 0x93, 0x6E, 0x53,
+	0x19, 0x25, 0x64, 0x15, 0x83, 0xF9, 0x73, 0x2A,
+	0x74, 0xB4, 0x93, 0x69, 0xC4, 0x72, 0xFC, 0x26,
+	0xA2, 0x9F, 0x43, 0x45, 0xDD, 0xB9, 0xEF, 0x36,
+	0xC8, 0x3A, 0xCD, 0x99, 0x9B, 0x54, 0x1A, 0x36,
+	0xC1, 0x59, 0xF8, 0x98, 0xA8, 0xCC, 0x28, 0x0D,
+	0x73, 0x4C, 0xEE, 0x98, 0xCB, 0x7C, 0x58, 0x7E,
+	0x20, 0x75, 0x1E, 0xB7, 0xC9, 0xF8, 0xF2, 0x0E,
+	0x63, 0x9E, 0x05, 0x78, 0x1A, 0xB6, 0xA8, 0x7A,
+	0xF9, 0x98, 0x6A, 0xA6, 0x46, 0x84, 0x2E, 0xF6,
+	0x4B, 0xDC, 0x9B, 0x8F, 0x9B, 0x8F, 0xEE, 0xB4,
+	0xAA, 0x3F, 0xEE, 0xC0, 0x37, 0x27, 0x76, 0xC7,
+	0x95, 0xBB, 0x26, 0x74, 0x69, 0x12, 0x7F, 0xF1,
+	0xBB, 0xFF, 0xAE, 0xB5, 0x99, 0x6E, 0xCB, 0x0C
+};
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60,
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0x18, 0x8c, 0x1d, 0x32 };
+
+
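+/*
+ * Each cipher/hash chain test below follows the same pattern: build a test
+ * mbuf from the plaintext (or ciphertext) above, set up the cipher and auth
+ * transforms, create a session on the first valid device (or attach the
+ * transform chain directly for the session-less case), attach the crypto op
+ * to the mbuf, process it and compare the output and digest against the
+ * precomputed vectors.
+ */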
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
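+	/* the digest sits in the bytes appended after the 512 byte payload */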
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
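+	/* prepend space for the IV at the head of the mbuf and fill it in */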
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	TEST_ASSERT_NOT_NULL(rte_pktmbuf_offload_alloc_crypto_xforms(
+			ut_params->ol, 2),
+			"failed to allocate space for crypto transforms");
+
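+	/*
+	 * rte_pktmbuf_offload_alloc_crypto_xforms() returns the transforms
+	 * already chained: op->xform is used below for the cipher and
+	 * op->xform->next for the auth.
+	 */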
+	/* Set crypto operation data parameters */
+	ut_params->op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
+
+	/* cipher parameters */
+	ut_params->op->xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->op->xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->op->xform->cipher.key.data = aes_cbc_key;
+	ut_params->op->xform->cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* hash parameters */
+	ut_params->op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+
+	ut_params->op->xform->next->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->op->xform->next->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->op->xform->next->auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->op->xform->next->auth.key.data = hmac_sha1_key;
+	ut_params->op->xform->next->auth.digest_length =
+			DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			DIGEST_BYTE_LENGTH_SHA1);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** AES-CBC / HMAC-SHA256 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+static uint8_t hmac_sha256_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest[] = {
+	0xc8, 0x57, 0x57, 0x31, 0x03, 0xe0, 0x03, 0x55,
+	0x07, 0xc8, 0x9e, 0x7f, 0x48, 0x9a, 0x61, 0x9a,
+	0x68, 0xee, 0x03, 0x0e, 0x71, 0x75, 0xc7, 0xf4,
+	0x2e, 0x45, 0x26, 0x32, 0x7c, 0x12, 0x15, 0x15 };
+
+static int
+test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA256 :
+					DIGEST_BYTE_LENGTH_SHA256,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-SHA512 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA512  (DIGEST_BYTE_LENGTH_SHA512)
+
+static uint8_t hmac_sha512_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x65, 0x1C, 0x42, 0x50, 0x76,
+	0x9a, 0xaf, 0x88, 0x1b, 0xb6, 0x8f, 0xf8, 0x60,
+	0xa2, 0x5a, 0x7f, 0x3f, 0xf4, 0x72, 0x70, 0xf1,
+	0xF5, 0x35, 0x4C, 0x3B, 0xDD, 0x90, 0x65, 0xB0,
+	0x47, 0x3a, 0x75, 0x61, 0x5C, 0xa2, 0x10, 0x76,
+	0x9a, 0xaf, 0x77, 0x5b, 0xb6, 0x7f, 0xf7, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest[] = {
+	0x5D, 0x54, 0x66, 0xC1, 0x6E, 0xBC, 0x04, 0xB8,
+	0x46, 0xB8, 0x08, 0x6E, 0xE0, 0xF0, 0x43, 0x48,
+	0x37, 0x96, 0x9C, 0xC6, 0x9C, 0xC2, 0x1E, 0xE8,
+	0xF2, 0x0C, 0x0B, 0xEF, 0x86, 0xA2, 0xE3, 0x70,
+	0x95, 0xC8, 0xB3, 0x06, 0x47, 0xA9, 0x90, 0xE8,
+	0xA0, 0xC6, 0x72, 0x69, 0x05, 0xC0, 0x0D, 0x0E,
+	0x21, 0x96, 0x65, 0x93, 0x74, 0x43, 0x2A, 0x1D,
+	0x2E, 0xBF, 0xC2, 0xC2, 0xEE, 0xCC, 0x2F, 0x0A };
+
+static int
+test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 :
+					DIGEST_BYTE_LENGTH_SHA512,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT(test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+			ut_params) == TEST_SUCCESS,
+			"Failed to create session params");
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
+			ut_params, ts_params);
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params)
+{
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params)
+{
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-AES_XCBC Chain Tests ***** */
+
+static uint8_t aes_cbc_hmac_aes_xcbc_key[] = {
+	0x87, 0x61, 0x54, 0x53, 0xC4, 0x6D, 0xDD, 0x51,
+	0xE1, 0x9F, 0x86, 0x64, 0x39, 0x0A, 0xE6, 0x59
+	};
+
+static const uint8_t  catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest[] = {
+	0xE0, 0xAC, 0x9A, 0xC4, 0x22, 0x64, 0x35, 0x89,
+	0x77, 0x1D, 0x8B, 0x75
+	};
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)
+		rte_pktmbuf_prepend(ut_params->ibuf,
+				CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+		QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Crypto Device Statistics Tests ***** */
+
+static int
+test_stats(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_stats stats;
+	struct rte_cryptodev *dev;
+	cryptodev_stats_get_t temp_pfn;
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0] + 600,
+			&stats) == -ENODEV),
+		"rte_cryptodev_stats_get invalid dev failed");
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0], NULL)
+			!= 0),
+		"rte_cryptodev_stats_get invalid param check failed");
+	dev = &rte_cryptodevs[ts_params->valid_devs[0]];
+	temp_pfn = dev->dev_ops->stats_get;
+	dev->dev_ops->stats_get = (cryptodev_stats_get_t)0;
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats)
+			== -ENOTSUP),
+		"rte_cryptodev_stats_get unsupported ops check failed");
+	dev->dev_ops->stats_get = temp_pfn;
+
+	/* Test expected values */
+	ut_setup();
+	test_AES_CBC_HMAC_SHA1_encrypt_digest();
+	ut_teardown();
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+	TEST_ASSERT((stats.enqueue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueue error stat");
+	TEST_ASSERT((stats.dequeue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeue error stat");
+
+	/* Invalid device id: the reset should be ignored, leaving stats intact */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0] + 300);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	/* check that a valid reset clears stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_multi_session(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	struct rte_cryptodev_info dev_info;
+	struct rte_cryptodev_session **sessions;
+
+	uint16_t i;
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+
+	/* Room for one extra pointer: the create call below expected to fail */
+	sessions = rte_malloc(NULL, sizeof(struct rte_cryptodev_session *) *
+			(dev_info.max_nb_sessions + 1), 0);
+	TEST_ASSERT_NOT_NULL(sessions, "Failed to allocate session array");
+
+	/* Create multiple crypto sessions*/
+	for (i = 0; i < dev_info.max_nb_sessions; i++) {
+		sessions[i] = rte_cryptodev_session_create(
+				ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+		TEST_ASSERT_NOT_NULL(sessions[i],
+				"Session creation failed at session number %u",
+				i);
+
+		/* Attempt to send a request on each session */
+		TEST_ASSERT_SUCCESS(test_AES_CBC_HMAC_SHA512_decrypt_perform(
+				sessions[i], ut_params, ts_params),
+				"Failed to perform decrypt on request "
+				"number %u.", i);
+	}
+
+	/* Next session create should fail */
+	sessions[i] = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NULL(sessions[i],
+			"Session creation succeeded unexpectedly!");
+
+	for (i = 0; i < dev_info.max_nb_sessions; i++)
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				sessions[i]);
+
+	rte_free(sessions);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_not_in_place_crypto(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_mbuf *dst_m = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	TEST_ASSERT_NOT_NULL(dst_m, "Failed to allocate destination mbuf");
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create crypto session */
+	ut_params->sess = rte_cryptodev_session_create(
+			ts_params->valid_devs[0], &ut_params->auth_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
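+	/*
+	 * Request an out-of-place operation: with op->dst.m set the PMD is
+	 * expected to write its output into dst_m, leaving the source mbuf
+	 * unmodified (the plaintext check on op->dst.m below relies on this).
+	 */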
+	ut_params->op->dst.m = dst_m;
+	ut_params->op->dst.offset = 0;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->op->dst.m, char *),
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate op status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite cryptodev_qat_testsuite  = {
+	.suite_name = "Crypto QAT Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_queue_pair_ids),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_queue_pair_descriptor_setup),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_stats),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite cryptodev_aesni_mb_testsuite  = {
+	.suite_name = "Crypto Device AESNI MB Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_not_in_place_crypto),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_QAT_PMD;
+	return unit_test_suite_runner(&cryptodev_qat_testsuite);
+}
+
+static struct test_command cryptodev_qat_cmd = {
+	.command = "cryptodev_qat_autotest",
+	.callback = test_cryptodev_qat,
+};
+
+static int
+test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_aesni_mb_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_cmd = {
+	.command = "cryptodev_aesni_mb_autotest",
+	.callback = test_cryptodev_aesni_mb,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_cmd);
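+
+/*
+ * Example usage (a sketch, assuming the standard dpdk test application
+ * console; the exact prompt may differ between releases):
+ *
+ *   RTE>> cryptodev_aesni_mb_autotest
+ *   RTE>> cryptodev_qat_autotest
+ */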
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
new file mode 100644
index 0000000..034393e
--- /dev/null
+++ b/app/test/test_cryptodev.h
@@ -0,0 +1,68 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef TEST_CRYPTODEV_H_
+#define TEST_CRYPTODEV_H_
+
+#define HEX_DUMP 0
+
+#define FALSE                           0
+#define TRUE                            1
+
+#define MAX_NUM_OPS_INFLIGHT            (4096)
+#define MIN_NUM_OPS_INFLIGHT            (128)
+#define DEFAULT_NUM_OPS_INFLIGHT        (128)
+
+#define MAX_NUM_QPS_PER_QAT_DEVICE      (2)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE  (2)
+#define DEFAULT_BURST_SIZE              (64)
+#define DEFAULT_NUM_XFORMS              (2)
+#define NUM_MBUFS                       (8191)
+#define MBUF_CACHE_SIZE                 (250)
+#define MBUF_SIZE   (2048 + DIGEST_BYTE_LENGTH_SHA512 + \
+				sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+
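+/*
+ * BYTE_LENGTH converts a bit-length to bytes,
+ * e.g. BYTE_LENGTH(160) == 20, the full SHA-1 digest length below.
+ */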
+#define BYTE_LENGTH(x)				((x) / 8)
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_MD5			(BYTE_LENGTH(128))
+#define DIGEST_BYTE_LENGTH_SHA1			(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA224		(BYTE_LENGTH(224))
+#define DIGEST_BYTE_LENGTH_SHA256		(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA384		(BYTE_LENGTH(384))
+#define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
+#define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
+#define AES_XCBC_MAC_KEY_SZ			(16)
+
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512		(32)
+
+#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
new file mode 100644
index 0000000..f0cca8b
--- /dev/null
+++ b/app/test/test_cryptodev_perf.c
@@ -0,0 +1,2062 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_hexdump.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define PERF_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_REQS_TO_SUBMIT	(10000000)
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_mp;
+	struct rte_mempool *mbuf_ol_pool;
+
+	uint16_t nb_queue_pairs;
+
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+	uint8_t dev_id;
+};
+
+#define MAX_NUM_OF_OPS_PER_UT	(128)
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op *op;
+	struct rte_mbuf_offload *ol;
+
+	struct rte_mbuf *obuf[MAX_NUM_OF_OPS_PER_UT];
+	struct rte_mbuf *ibuf[MAX_NUM_OF_OPS_PER_UT];
+
+	uint8_t *digest;
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+	return m;
+}
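+
+/*
+ * Typical usage (a sketch; mirrors how the perf tests below build their
+ * input mbufs from plaintext_quote):
+ *
+ *	struct rte_mbuf *m = setup_test_string(ts_params->mbuf_mp,
+ *			plaintext_quote, QUOTE_LEN_512B, 0);
+ *	if (m == NULL)
+ *		return TEST_FAILED;
+ */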
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static enum rte_cryptodev_type gbl_cryptodev_preftest_devtype;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, valid_dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_mp = rte_mempool_lookup("CRYPTO_PERF_MBUFPOOL");
+	if (ts_params->mbuf_mp == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_mp = rte_mempool_create(
+			"CRYPTO_PERF_MBUFPOOL", NUM_MBUFS,
+			MBUF_SIZE, MBUF_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
+			rte_socket_id(), 0);
+		if (ts_params->mbuf_mp == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+				"CRYPTO_OP_POOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE,
+				DEFAULT_NUM_XFORMS *
+				sizeof(struct rte_crypto_xform),
+				rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found\n");
+		return TEST_FAILED;
+	}
+
+	/* Search for the first valid device of the requested type */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_preftest_devtype) {
+			ts_params->dev_id = i;
+			valid_dev_id = 1;
+			break;
+		}
+	}
+
+	if (!valid_dev_id)
+		return TEST_FAILED;
+
+	/*
+	 * Using the first valid device of the requested type found above.
+	 * Since queue memory cannot be freed and re-allocated, first set the
+	 * queues on this device up to their maximum size, so that enough
+	 * memory is allocated for any later re-configuration needed by
+	 * other tests.
+	 */
+
+	rte_cryptodev_info_get(ts_params->dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = info.max_nb_sessions;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->dev_id);
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	/* Now reconfigure the queues to the size actually used in this suite */
+	ts_params->qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	rte_cryptodev_stats_reset(ts_params->dev_id);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->dev_id),
+			"Failed to start cryptodev %u",
+			ts_params->dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	unsigned i;
+
+	/* free crypto session structure */
+	if (ut_params->sess)
+		rte_cryptodev_session_free(ts_params->dev_id,
+				ut_params->sess);
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	for (i = 0; i < MAX_NUM_OF_OPS_PER_UT; i++) {
+		if (ut_params->obuf[i])
+			rte_pktmbuf_free(ut_params->obuf[i]);
+		else if (ut_params->ibuf[i])
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+	}
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+
+	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->dev_id);
+}
+
+const char plaintext_quote[] =
+		"THE COUNT OF MONTE CRISTO by Alexandre Dumas, Pere Chapter 1. "
+		"Marseilles--The Arrival. On the 24th of February, 1815, the "
+		"look-out at Notre-Dame de la Garde signalled the three-master,"
+		" the Pharaon from Smyrna, Trieste, and Naples. As usual, a "
+		"pilot put off immediately, and rounding the Chateau d'If, got "
+		"on board the vessel between Cape Morgion and Rion island. "
+		"Immediately, and according to custom, the ramparts of Fort "
+		"Saint-Jean were covered with spectators; it is always an event "
+		"at Marseilles for a ship to come into port, especially when "
+		"this ship, like the Pharaon, has been built, rigged, and laden"
+		" at the old Phocee docks, and belongs to an owner of the city."
+		" The ship drew on and had safely passed the strait, which some"
+		" volcanic shock has made between the Calasareigne and Jaros "
+		"islands; had doubled Pomegue, and approached the harbor under"
+		" topsails, jib, and spanker, but so slowly and sedately that"
+		" the idlers, with that instinct which is the forerunner of "
+		"evil, asked one another what misfortune could have happened "
+		"on board. However, those experienced in navigation saw plainly"
+		" that if any accident had occurred, it was not to the vessel "
+		"herself, for she bore down with all the evidence of being "
+		"skilfully handled, the anchor a-cockbill, the jib-boom guys "
+		"already eased off, and standing by the side of the pilot, who"
+		" was steering the Pharaon towards the narrow entrance of the"
+		" inner port, was a young man, who, with activity and vigilant"
+		" eye, watched every motion of the ship, and repeated each "
+		"direction of the pilot. The vague disquietude which prevailed "
+		"among the spectators had so much affected one of the crowd "
+		"that he did not await the arrival of the vessel in harbor, but"
+		" jumping into a small skiff, desired to be pulled alongside "
+		"the Pharaon, which he reached as she rounded into La Reserve "
+		"basin. When the young man on board saw this person approach, "
+		"he left his station by the pilot, and, hat in hand, leaned "
+		"over the ship's bulwarks. He was a fine, tall, slim young "
+		"fellow of eighteen or twenty, with black eyes, and hair as "
+		"dark as a raven's wing; and his whole appearance bespoke that "
+		"calmness and resolution peculiar to men accustomed from their "
+		"cradle to contend with danger. \"Ah, is it you, Dantes?\" "
+		"cried the man in the skiff. \"What's the matter? and why have "
+		"you such an air of sadness aboard?\" \"A great misfortune, M. "
+		"Morrel,\" replied the young man,--\"a great misfortune, for me"
+		" especially! Off Civita Vecchia we lost our brave Captain "
+		"Leclere.\" \"And the cargo?\" inquired the owner, eagerly. "
+		"\"Is all safe, M. Morrel; and I think you will be satisfied on"
+		" that head. But poor Captain Leclere--\" \"What happened to "
+		"him?\" asked the owner, with an air of considerable "
+		"resignation. \"What happened to the worthy captain?\" \"He "
+		"died.\" \"Fell into the sea?\" \"No, sir, he died of "
+		"brain-fever in dreadful agony.\" Then turning to the crew, "
+		"he said, \"Bear a hand there, to take in sail!\" All hands "
+		"obeyed, and at once the eight or ten seamen who composed the "
+		"crew, sprang to their respective stations at the spanker "
+		"brails and outhaul, topsail sheets and halyards, the jib "
+		"downhaul, and the topsail clewlines and buntlines. The young "
+		"sailor gave a look to see that his orders were promptly and "
+		"accurately obeyed, and then turned again to the owner. \"And "
+		"how did this misfortune occur?\" inquired the latter, resuming"
+		" the interrupted conversation. \"Alas, sir, in the most "
+		"unexpected manner. After a long talk with the harbor-master, "
+		"Captain Leclere left Naples greatly disturbed in mind. In "
+		"twenty-four hours he was attacked by a fever, and died three "
+		"days afterwards. We performed the usual burial service, and he"
+		" is at his rest, sewn up in his hammock with a thirty-six "
+		"pound shot at his head and his heels, off El Giglio island. "
+		"We bring to his widow his sword and cross of honor. It was "
+		"worth while, truly,\" added the young man with a melancholy "
+		"smile, \"to make war against the English for ten years, and "
+		"to die in his bed at last, like everybody else.";
+
+#define QUOTE_LEN_64B		(64)
+#define QUOTE_LEN_128B		(128)
+#define QUOTE_LEN_256B		(256)
+#define QUOTE_LEN_512B		(512)
+#define QUOTE_LEN_768B		(768)
+#define QUOTE_LEN_1024B		(1024)
+#define QUOTE_LEN_1280B		(1280)
+#define QUOTE_LEN_1536B		(1536)
+#define QUOTE_LEN_1792B		(1792)
+#define QUOTE_LEN_2048B		(2048)
+
+/* ***** AES-CBC / HMAC-SHA256 Performance Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+		0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+		0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA };
+
+static uint8_t aes_cbc_iv[] = {
+		0xf5, 0xd3, 0x89, 0x0f, 0x47, 0x00, 0xcb, 0x52,
+		0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1 };
+
+static uint8_t hmac_sha256_key[] = {
+		0xff, 0xcb, 0x37, 0x30, 0x1d, 0x4a, 0xc2, 0x41,
+		0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A,
+		0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+		0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+/* Cipher text output */
+
+static const uint8_t AES_CBC_ciphertext_64B[] = {
+		0x05, 0x15, 0x77, 0x32, 0xc9, 0x66, 0x91, 0x50,
+		0x93, 0x9f, 0xbb, 0x4e, 0x2e, 0x5a, 0x02, 0xd0,
+		0x2d, 0x9d, 0x31, 0x5d, 0xc8, 0x9e, 0x86, 0x36,
+		0x54, 0x5c, 0x50, 0xe8, 0x75, 0x54, 0x74, 0x5e,
+		0xd5, 0xa2, 0x84, 0x21, 0x2d, 0xc5, 0xf8, 0x1c,
+		0x55, 0x1a, 0xba, 0x91, 0xce, 0xb5, 0xa3, 0x1e,
+		0x31, 0xbf, 0xe9, 0xa1, 0x97, 0x5c, 0x2b, 0xd6,
+		0x57, 0xa5, 0x9f, 0xab, 0xbd, 0xb0, 0x9b, 0x9c
+};
+
+static const uint8_t AES_CBC_ciphertext_128B[] = {
+		0x79, 0x92, 0x65, 0xc8, 0xfb, 0x0a, 0xc7, 0xc4,
+		0x9b, 0x3b, 0xbe, 0x69, 0x7f, 0x7c, 0xf4, 0x4e,
+		0xa5, 0x0d, 0xf6, 0x33, 0xc4, 0xdf, 0xf3, 0x0d,
+		0xdb, 0xb9, 0x68, 0x34, 0xb0, 0x0d, 0xbd, 0xb9,
+		0xa7, 0xf3, 0x86, 0x50, 0x2a, 0xbe, 0x50, 0x5d,
+		0xb3, 0xbe, 0x72, 0xf9, 0x02, 0xb1, 0x69, 0x0b,
+		0x8c, 0x96, 0x4c, 0x3c, 0x0c, 0x1e, 0x76, 0xe5,
+		0x7e, 0x75, 0xdd, 0xd0, 0xa9, 0x75, 0x00, 0x13,
+		0x6b, 0x1e, 0xc0, 0xad, 0xfc, 0x03, 0xb5, 0x99,
+		0xdc, 0x37, 0x35, 0xfc, 0x16, 0x34, 0xfd, 0xb4,
+		0xea, 0x1e, 0xb6, 0x51, 0xdf, 0xab, 0x87, 0xd6,
+		0x87, 0x41, 0xfa, 0x1c, 0xc6, 0x78, 0xa6, 0x3c,
+		0x1d, 0x76, 0xfe, 0xff, 0x65, 0xfc, 0x63, 0x1e,
+		0x1f, 0xe2, 0x7c, 0x9b, 0xa2, 0x72, 0xc3, 0x34,
+		0x23, 0xdf, 0x01, 0xf0, 0xfd, 0x02, 0x8b, 0x97,
+		0x00, 0x2b, 0x97, 0x4e, 0xab, 0x98, 0x21, 0x3c
+};
+
+static const uint8_t AES_CBC_ciphertext_256B[] = {
+		0xc7, 0x71, 0x2b, 0xed, 0x2c, 0x97, 0x59, 0xfa,
+		0xcf, 0x5a, 0xb9, 0x31, 0x92, 0xe0, 0xc9, 0x92,
+		0xc0, 0x2d, 0xd5, 0x9c, 0x84, 0xbf, 0x70, 0x36,
+		0x13, 0x48, 0xe0, 0xb1, 0xbf, 0x6c, 0xcd, 0x91,
+		0xa0, 0xc3, 0x57, 0x6c, 0x3f, 0x0e, 0x34, 0x41,
+		0xe7, 0x9c, 0xc0, 0xec, 0x18, 0x0c, 0x05, 0x52,
+		0x78, 0xe2, 0x3c, 0x6e, 0xdf, 0xa5, 0x49, 0xc7,
+		0xf2, 0x55, 0x00, 0x8f, 0x65, 0x6d, 0x4b, 0xd0,
+		0xcb, 0xd4, 0xd2, 0x0b, 0xea, 0xf4, 0xb0, 0x85,
+		0x61, 0x9e, 0x36, 0xc0, 0x71, 0xb7, 0x80, 0xad,
+		0x40, 0x78, 0xb4, 0x70, 0x2b, 0xe8, 0x80, 0xc5,
+		0x19, 0x35, 0x96, 0x55, 0x3b, 0x40, 0x03, 0xbb,
+		0x9f, 0xa6, 0xc2, 0x82, 0x92, 0x04, 0xc3, 0xa6,
+		0x96, 0xc4, 0x7f, 0x4c, 0x3e, 0x3c, 0x79, 0x82,
+		0x88, 0x8b, 0x3f, 0x8b, 0xc5, 0x9f, 0x44, 0xbe,
+		0x71, 0xe7, 0x09, 0xa2, 0x40, 0xa2, 0x23, 0x4e,
+		0x9f, 0x31, 0xab, 0x6f, 0xdf, 0x59, 0x40, 0xe1,
+		0x12, 0x15, 0x55, 0x4b, 0xea, 0x3f, 0xa1, 0x41,
+		0x4f, 0xaf, 0xcd, 0x27, 0x2a, 0x61, 0xa1, 0x9e,
+		0x82, 0x30, 0x05, 0x05, 0x55, 0xce, 0x99, 0xd3,
+		0x8f, 0x3f, 0x86, 0x79, 0xdc, 0x9f, 0x33, 0x07,
+		0x75, 0x26, 0xc8, 0x72, 0x81, 0x0f, 0x9b, 0xf7,
+		0xb1, 0xfb, 0xd3, 0x91, 0x36, 0x08, 0xab, 0x26,
+		0x70, 0x53, 0x0c, 0x99, 0xfd, 0xa9, 0x07, 0xb4,
+		0xe9, 0xce, 0xc1, 0xd6, 0xd2, 0x2c, 0x71, 0x80,
+		0xec, 0x59, 0x61, 0x0b, 0x24, 0xf0, 0x6d, 0x33,
+		0x73, 0x45, 0x6e, 0x80, 0x03, 0x45, 0xf2, 0x76,
+		0xa5, 0x8a, 0xc9, 0xcf, 0xaf, 0x4a, 0xed, 0x35,
+		0xc0, 0x97, 0x52, 0xc5, 0x00, 0xdf, 0xef, 0xc7,
+		0x9f, 0xf2, 0xe8, 0x15, 0x3e, 0xb3, 0x30, 0xe7,
+		0x00, 0xd0, 0x4e, 0xeb, 0x79, 0xf6, 0xf6, 0xcf,
+		0xf0, 0xe7, 0x61, 0xd5, 0x3d, 0x6a, 0x73, 0x9d
+};
+
+static const uint8_t AES_CBC_ciphertext_512B[] = {
+		0xb4, 0xc6, 0xc6, 0x5f, 0x7e, 0xca, 0x05, 0x70,
+		0x21, 0x7b, 0x92, 0x9e, 0x23, 0xe7, 0x92, 0xb8,
+		0x27, 0x3d, 0x20, 0x29, 0x57, 0xfa, 0x1f, 0x26,
+		0x0a, 0x04, 0x34, 0xa6, 0xf2, 0xdc, 0x44, 0xb6,
+		0x43, 0x40, 0x62, 0xde, 0x0c, 0xde, 0x1c, 0x30,
+		0x43, 0x85, 0x0b, 0xe8, 0x93, 0x1f, 0xa1, 0x2a,
+		0x8a, 0x27, 0x35, 0x39, 0x14, 0x9f, 0x37, 0x64,
+		0x59, 0xb5, 0x0e, 0x96, 0x82, 0x5d, 0x63, 0x45,
+		0xd6, 0x93, 0x89, 0x46, 0xe4, 0x71, 0x31, 0xeb,
+		0x0e, 0xd1, 0x7b, 0xda, 0x90, 0xb5, 0x81, 0xac,
+		0x76, 0x54, 0x54, 0x85, 0x0b, 0xa9, 0x46, 0x9c,
+		0xf0, 0xfd, 0xde, 0x5d, 0xa8, 0xe3, 0xee, 0xe9,
+		0xf4, 0x9d, 0x34, 0x76, 0x39, 0xe7, 0xc3, 0x4a,
+		0x84, 0x38, 0x92, 0x61, 0xf1, 0x12, 0x9f, 0x05,
+		0xda, 0xdb, 0xc1, 0xd4, 0xb0, 0xa0, 0x27, 0x19,
+		0xa0, 0x56, 0x5d, 0x9b, 0xcc, 0x47, 0x7c, 0x15,
+		0x1d, 0x52, 0x66, 0xd5, 0xff, 0xef, 0x12, 0x23,
+		0x86, 0xe2, 0xee, 0x81, 0x2c, 0x3d, 0x7d, 0x28,
+		0xd5, 0x42, 0xdf, 0xdb, 0x75, 0x1c, 0xeb, 0xdf,
+		0x13, 0x23, 0xd5, 0x17, 0x89, 0xea, 0xd7, 0x01,
+		0xff, 0x57, 0x6a, 0x44, 0x61, 0xf4, 0xea, 0xbe,
+		0x97, 0x9b, 0xc2, 0xb1, 0x9c, 0x5d, 0xff, 0x4f,
+		0x73, 0x2d, 0x3f, 0x57, 0x28, 0x38, 0xbf, 0x3d,
+		0x9f, 0xda, 0x49, 0x55, 0x8f, 0xb2, 0x77, 0xec,
+		0x0f, 0xbc, 0xce, 0xb8, 0xc6, 0xe1, 0x03, 0xed,
+		0x35, 0x9c, 0xf2, 0x4d, 0xa4, 0x29, 0x6c, 0xd6,
+		0x6e, 0x05, 0x53, 0x46, 0xc1, 0x41, 0x09, 0x36,
+		0x0b, 0x7d, 0xf4, 0x9e, 0x0f, 0xba, 0x86, 0x33,
+		0xdd, 0xf1, 0xa7, 0xf7, 0xd5, 0x29, 0xa8, 0xa7,
+		0x4d, 0xce, 0x0c, 0xf5, 0xb4, 0x6c, 0xd8, 0x27,
+		0xb0, 0x87, 0x2a, 0x6f, 0x7f, 0x3f, 0x8f, 0xc3,
+		0xe2, 0x3e, 0x94, 0xcf, 0x61, 0x4a, 0x09, 0x3d,
+		0xf9, 0x55, 0x19, 0x31, 0xf2, 0xd2, 0x4a, 0x3e,
+		0xc1, 0xf5, 0xed, 0x7c, 0x45, 0xb0, 0x0c, 0x7b,
+		0xdd, 0xa6, 0x0a, 0x26, 0x66, 0xec, 0x85, 0x49,
+		0x00, 0x38, 0x05, 0x7c, 0x9c, 0x1c, 0x92, 0xf5,
+		0xf7, 0xdb, 0x5d, 0xbd, 0x61, 0x0c, 0xc9, 0xaf,
+		0xfd, 0x57, 0x3f, 0xee, 0x2b, 0xad, 0x73, 0xef,
+		0xa3, 0xc1, 0x66, 0x26, 0x44, 0x5e, 0xf9, 0x12,
+		0x86, 0x66, 0xa9, 0x61, 0x75, 0xa1, 0xbc, 0x40,
+		0x7f, 0xa8, 0x08, 0x02, 0xc0, 0x76, 0x0e, 0x76,
+		0xb3, 0x26, 0x3d, 0x1c, 0x40, 0x65, 0xe4, 0x18,
+		0x0f, 0x62, 0x17, 0x8f, 0x1e, 0x61, 0xb8, 0x08,
+		0x83, 0x54, 0x42, 0x11, 0x03, 0x30, 0x8e, 0xb7,
+		0xc1, 0x9c, 0xec, 0x69, 0x52, 0x95, 0xfb, 0x7b,
+		0x1a, 0x0c, 0x20, 0x24, 0xf7, 0xb8, 0x38, 0x0c,
+		0xb8, 0x7b, 0xb6, 0x69, 0x70, 0xd0, 0x61, 0xb9,
+		0x70, 0x06, 0xc2, 0x5b, 0x20, 0x47, 0xf7, 0xd9,
+		0x32, 0xc2, 0xf2, 0x90, 0xb6, 0x4d, 0xcd, 0x3c,
+		0x6d, 0x74, 0xea, 0x82, 0x35, 0x1b, 0x08, 0x44,
+		0xba, 0xb7, 0x33, 0x82, 0x33, 0x27, 0x54, 0x77,
+		0x6e, 0x58, 0xfe, 0x46, 0x5a, 0xb4, 0x88, 0x53,
+		0x8d, 0x9b, 0xb1, 0xab, 0xdf, 0x04, 0xe1, 0xfb,
+		0xd7, 0x1e, 0xd7, 0x38, 0x64, 0x54, 0xba, 0xb0,
+		0x6c, 0x84, 0x7a, 0x0f, 0xa7, 0x80, 0x6b, 0x86,
+		0xd9, 0xc9, 0xc6, 0x31, 0x95, 0xfa, 0x8a, 0x2c,
+		0x14, 0xe1, 0x85, 0x66, 0x27, 0xfd, 0x63, 0x3e,
+		0xf0, 0xfa, 0x81, 0xc9, 0x89, 0x4f, 0xe2, 0x6a,
+		0x8c, 0x17, 0xb5, 0xc7, 0x9f, 0x5d, 0x3f, 0x6b,
+		0x3f, 0xcd, 0x13, 0x7a, 0x3c, 0xe6, 0x4e, 0xfa,
+		0x7a, 0x10, 0xb8, 0x7c, 0x40, 0xec, 0x93, 0x11,
+		0x1f, 0xd0, 0x9e, 0xc3, 0x56, 0xb9, 0xf5, 0x21,
+		0x18, 0x41, 0x31, 0xea, 0x01, 0x8d, 0xea, 0x1c,
+		0x95, 0x5e, 0x56, 0x33, 0xbc, 0x7a, 0x3f, 0x6f
+};
+
+static const uint8_t AES_CBC_ciphertext_768B[] = {
+		0x3e, 0x7f, 0x9e, 0x4c, 0x88, 0x15, 0x68, 0x69,
+		0x10, 0x09, 0xe1, 0xa7, 0x0f, 0x27, 0x88, 0x2d,
+		0x90, 0x73, 0x4f, 0x67, 0xd3, 0x8b, 0xaf, 0xa1,
+		0x2c, 0x37, 0xa5, 0x6c, 0x7c, 0xbd, 0x95, 0x4c,
+		0x82, 0xcf, 0x05, 0x49, 0x16, 0x5c, 0xe7, 0x06,
+		0xd4, 0xcb, 0x55, 0x65, 0x9a, 0xd0, 0xe1, 0x46,
+		0x3a, 0x37, 0x71, 0xad, 0xb0, 0xb4, 0x99, 0x1e,
+		0x23, 0x57, 0x48, 0x96, 0x9c, 0xc5, 0xc4, 0xdb,
+		0x64, 0x3e, 0xc9, 0x7f, 0x90, 0x5a, 0xa0, 0x08,
+		0x75, 0x4c, 0x09, 0x06, 0x31, 0x6e, 0x59, 0x29,
+		0xfc, 0x2f, 0x72, 0xde, 0xf2, 0x40, 0x5a, 0xfe,
+		0xd3, 0x66, 0x64, 0xb8, 0x9c, 0xc9, 0xa6, 0x1f,
+		0xc3, 0x52, 0xcd, 0xb5, 0xd1, 0x4f, 0x43, 0x3f,
+		0xf4, 0x59, 0x25, 0xc4, 0xdd, 0x3e, 0x58, 0x7c,
+		0x21, 0xd6, 0x21, 0xce, 0xa4, 0xbe, 0x08, 0x23,
+		0x46, 0x68, 0xc0, 0x00, 0x91, 0x47, 0xca, 0x9b,
+		0xe0, 0xb4, 0xe3, 0xab, 0xbf, 0xcf, 0x68, 0x26,
+		0x97, 0x23, 0x09, 0x93, 0x64, 0x8f, 0x57, 0x59,
+		0xe2, 0x41, 0x7c, 0xa2, 0x48, 0x7e, 0xd5, 0x2c,
+		0x54, 0x09, 0x1b, 0x07, 0x94, 0xca, 0x39, 0x83,
+		0xdd, 0xf4, 0x7a, 0x1d, 0x2d, 0xdd, 0x67, 0xf7,
+		0x3c, 0x30, 0x89, 0x3e, 0xc1, 0xdc, 0x1d, 0x8f,
+		0xfc, 0xb1, 0xe9, 0x13, 0x31, 0xb0, 0x16, 0xdb,
+		0x88, 0xf2, 0x32, 0x7e, 0x73, 0xa3, 0xdf, 0x08,
+		0x6b, 0x53, 0x92, 0x08, 0xc9, 0x9d, 0x98, 0xb2,
+		0xf4, 0x8c, 0xb1, 0x95, 0xdc, 0xb6, 0xfc, 0xec,
+		0xf1, 0xc9, 0x0d, 0x6d, 0x42, 0x2c, 0xf5, 0x38,
+		0x29, 0xf4, 0xd8, 0x98, 0x0f, 0xb0, 0x81, 0xa5,
+		0xaa, 0xe6, 0x1f, 0x6e, 0x87, 0x32, 0x1b, 0x02,
+		0x07, 0x57, 0x38, 0x83, 0xf3, 0xe4, 0x54, 0x7c,
+		0xa8, 0x43, 0xdf, 0x3f, 0x42, 0xfd, 0x67, 0x28,
+		0x06, 0x4d, 0xea, 0xce, 0x1f, 0x84, 0x4a, 0xcd,
+		0x8c, 0x61, 0x5e, 0x8f, 0x61, 0xed, 0x84, 0x03,
+		0x53, 0x6a, 0x9e, 0xbf, 0x68, 0x83, 0xa7, 0x42,
+		0x56, 0x57, 0xcd, 0x45, 0x29, 0xfc, 0x7b, 0x07,
+		0xfc, 0xe9, 0xb9, 0x42, 0xfd, 0x29, 0xd5, 0xfd,
+		0x98, 0x11, 0xd1, 0x8d, 0x67, 0x29, 0x47, 0x61,
+		0xd8, 0x27, 0x37, 0x79, 0x29, 0xd1, 0x94, 0x6f,
+		0x8d, 0xf3, 0x1b, 0x3d, 0x6a, 0xb1, 0x59, 0xef,
+		0x1b, 0xd4, 0x70, 0x0e, 0xac, 0xab, 0xa0, 0x2b,
+		0x1f, 0x5e, 0x04, 0xf0, 0x0e, 0x35, 0x72, 0x90,
+		0xfc, 0xcf, 0x86, 0x43, 0xea, 0x45, 0x6d, 0x22,
+		0x63, 0x06, 0x1a, 0x58, 0xd7, 0x2d, 0xc5, 0xb0,
+		0x60, 0x69, 0xe8, 0x53, 0xc2, 0xa2, 0x57, 0x83,
+		0xc4, 0x31, 0xb4, 0xc6, 0xb3, 0xa1, 0x77, 0xb3,
+		0x1c, 0xca, 0x89, 0x3f, 0xf5, 0x10, 0x3b, 0x36,
+		0x31, 0x7d, 0x00, 0x46, 0x00, 0x92, 0xa0, 0xa0,
+		0x34, 0xd8, 0x5e, 0x62, 0xa9, 0xe0, 0x23, 0x37,
+		0x50, 0x85, 0xc7, 0x3a, 0x20, 0xa3, 0x98, 0xc0,
+		0xac, 0x20, 0x06, 0x0f, 0x17, 0x3c, 0xfc, 0x43,
+		0x8c, 0x9d, 0xec, 0xf5, 0x9a, 0x35, 0x96, 0xf7,
+		0xb7, 0x4c, 0xf9, 0x69, 0xf8, 0xd4, 0x1e, 0x9e,
+		0xf9, 0x7c, 0xc4, 0xd2, 0x11, 0x14, 0x41, 0xb9,
+		0x89, 0xd6, 0x07, 0xd2, 0x37, 0x07, 0x5e, 0x5e,
+		0xae, 0x60, 0xdc, 0xe4, 0xeb, 0x38, 0x48, 0x6d,
+		0x95, 0x8d, 0x71, 0xf2, 0xba, 0xda, 0x5f, 0x08,
+		0x9d, 0x4a, 0x0f, 0x56, 0x90, 0x64, 0xab, 0xb6,
+		0x88, 0x22, 0xa8, 0x90, 0x1f, 0x76, 0x2c, 0x83,
+		0x43, 0xce, 0x32, 0x55, 0x45, 0x84, 0x57, 0x43,
+		0xf9, 0xa8, 0xd1, 0x4f, 0xe3, 0xc1, 0x72, 0x9c,
+		0xeb, 0x64, 0xf7, 0xe4, 0x61, 0x2b, 0x93, 0xd1,
+		0x1f, 0xbb, 0x5c, 0xff, 0xa1, 0x59, 0x69, 0xcf,
+		0xf7, 0xaf, 0x58, 0x45, 0xd5, 0x3e, 0x98, 0x7d,
+		0x26, 0x39, 0x5c, 0x75, 0x3c, 0x4a, 0xbf, 0x5e,
+		0x12, 0x10, 0xb0, 0x93, 0x0f, 0x86, 0x82, 0xcf,
+		0xb2, 0xec, 0x70, 0x5c, 0x0b, 0xad, 0x5d, 0x63,
+		0x65, 0x32, 0xa6, 0x04, 0x58, 0x03, 0x91, 0x2b,
+		0xdb, 0x8f, 0xd3, 0xa3, 0x2b, 0x3a, 0xf5, 0xa1,
+		0x62, 0x6c, 0xb6, 0xf0, 0x13, 0x3b, 0x8c, 0x07,
+		0x10, 0x82, 0xc9, 0x56, 0x24, 0x87, 0xfc, 0x56,
+		0xe8, 0xef, 0x90, 0x8b, 0xd6, 0x48, 0xda, 0x53,
+		0x04, 0x49, 0x41, 0xa4, 0x67, 0xe0, 0x33, 0x24,
+		0x6b, 0x9c, 0x07, 0x55, 0x4c, 0x5d, 0xe9, 0x35,
+		0xfa, 0xbd, 0xea, 0xa8, 0x3f, 0xe9, 0xf5, 0x20,
+		0x5c, 0x60, 0x0f, 0x0d, 0x24, 0xcb, 0x1a, 0xd6,
+		0xe8, 0x5c, 0xa8, 0x42, 0xae, 0xd0, 0xd2, 0xf2,
+		0xa8, 0xbe, 0xea, 0x0f, 0x8d, 0xfb, 0x81, 0xa3,
+		0xa4, 0xef, 0xb7, 0x3e, 0x91, 0xbd, 0x26, 0x0f,
+		0x8e, 0xf1, 0xb2, 0xa5, 0x47, 0x06, 0xfa, 0x40,
+		0x8b, 0x31, 0x7a, 0x5a, 0x74, 0x2a, 0x0a, 0x7c,
+		0x62, 0x5d, 0x39, 0xa4, 0xae, 0x14, 0x85, 0x08,
+		0x5b, 0x20, 0x85, 0xf1, 0x57, 0x6e, 0x71, 0x13,
+		0x4e, 0x2b, 0x49, 0x87, 0x01, 0xdf, 0x37, 0xed,
+		0x28, 0xee, 0x4d, 0xa1, 0xf4, 0xb3, 0x3b, 0xba,
+		0x2d, 0xb3, 0x46, 0x17, 0x84, 0x80, 0x9d, 0xd7,
+		0x93, 0x1f, 0x28, 0x7c, 0xf5, 0xf9, 0xd6, 0x85,
+		0x8c, 0xa5, 0x44, 0xe9, 0x2c, 0x65, 0x51, 0x5f,
+		0x53, 0x7a, 0x09, 0xd9, 0x30, 0x16, 0x95, 0x89,
+		0x9c, 0x0b, 0xef, 0x90, 0x6d, 0x23, 0xd3, 0x48,
+		0x57, 0x3b, 0x55, 0x69, 0x96, 0xfc, 0xf7, 0x52,
+		0x92, 0x38, 0x36, 0xbf, 0xa9, 0x0a, 0xbb, 0x68,
+		0x45, 0x08, 0x25, 0xee, 0x59, 0xfe, 0xee, 0xf2,
+		0x2c, 0xd4, 0x5f, 0x78, 0x59, 0x0d, 0x90, 0xf1,
+		0xd7, 0xe4, 0x39, 0x0e, 0x46, 0x36, 0xf5, 0x75,
+		0x03, 0x3c, 0x28, 0xfb, 0xfa, 0x8f, 0xef, 0xc9,
+		0x61, 0x00, 0x94, 0xc3, 0xd2, 0x0f, 0xd9, 0xda
+};
+
+static const uint8_t AES_CBC_ciphertext_1024B[] = {
+		0x7d, 0x01, 0x7e, 0x2f, 0x92, 0xb3, 0xea, 0x72,
+		0x4a, 0x3f, 0x10, 0xf9, 0x2b, 0xb0, 0xd5, 0xb9,
+		0x19, 0x68, 0x94, 0xe9, 0x93, 0xe9, 0xd5, 0x26,
+		0x20, 0x44, 0xe2, 0x47, 0x15, 0x8d, 0x75, 0x48,
+		0x8e, 0xe4, 0x40, 0x81, 0xb5, 0x06, 0xa8, 0xb8,
+		0x0e, 0x0f, 0x3b, 0xbc, 0x5b, 0xbe, 0x3b, 0xa2,
+		0x2a, 0x0c, 0x48, 0x98, 0x19, 0xdf, 0xe9, 0x25,
+		0x75, 0xab, 0x93, 0x44, 0xb1, 0x72, 0x70, 0xbb,
+		0x20, 0xcf, 0x78, 0xe9, 0x4d, 0xc6, 0xa9, 0xa9,
+		0x84, 0x78, 0xc5, 0xc0, 0xc4, 0xc9, 0x79, 0x1a,
+		0xbc, 0x61, 0x25, 0x5f, 0xac, 0x01, 0x03, 0xb7,
+		0xef, 0x07, 0xf2, 0x62, 0x98, 0xee, 0xe3, 0xad,
+		0x94, 0x75, 0x30, 0x67, 0xb9, 0x15, 0x00, 0xe7,
+		0x11, 0x32, 0x2e, 0x6b, 0x55, 0x9f, 0xac, 0x68,
+		0xde, 0x61, 0x05, 0x80, 0x01, 0xf3, 0xad, 0xab,
+		0xaf, 0x45, 0xe0, 0xf4, 0x68, 0x5c, 0xc0, 0x52,
+		0x92, 0xc8, 0x21, 0xb6, 0xf5, 0x8a, 0x1d, 0xbb,
+		0xfc, 0x4a, 0x11, 0x62, 0xa2, 0xc4, 0xf1, 0x2d,
+		0x0e, 0xb2, 0xc7, 0x17, 0x34, 0xb4, 0x2a, 0x54,
+		0x81, 0xc2, 0x1e, 0xcf, 0x51, 0x0a, 0x76, 0x54,
+		0xf1, 0x48, 0x0d, 0x5c, 0xcd, 0x38, 0x3e, 0x38,
+		0x3e, 0xf8, 0x46, 0x1d, 0x00, 0xf5, 0x62, 0xe1,
+		0x5c, 0xb7, 0x8d, 0xce, 0xd0, 0x3f, 0xbb, 0x22,
+		0xf1, 0xe5, 0xb1, 0xa0, 0x58, 0x5e, 0x3c, 0x0f,
+		0x15, 0xd1, 0xac, 0x3e, 0xc7, 0x72, 0xc4, 0xde,
+		0x8b, 0x95, 0x3e, 0x91, 0xf7, 0x1d, 0x04, 0x9a,
+		0xc8, 0xe4, 0xbf, 0xd3, 0x22, 0xca, 0x4a, 0xdc,
+		0xb6, 0x16, 0x79, 0x81, 0x75, 0x2f, 0x6b, 0xa7,
+		0x04, 0x98, 0xa7, 0x4e, 0xc1, 0x19, 0x90, 0x33,
+		0x33, 0x3c, 0x7f, 0xdd, 0xac, 0x09, 0x0c, 0xc3,
+		0x91, 0x34, 0x74, 0xab, 0xa5, 0x35, 0x0a, 0x13,
+		0xc3, 0x56, 0x67, 0x6d, 0x1a, 0x3e, 0xbf, 0x56,
+		0x06, 0x67, 0x15, 0x5f, 0xfc, 0x8b, 0xa2, 0x3c,
+		0x5e, 0xaf, 0x56, 0x1f, 0xe3, 0x2e, 0x9d, 0x0a,
+		0xf9, 0x9b, 0xc7, 0xb5, 0x03, 0x1c, 0x68, 0x99,
+		0xfa, 0x3c, 0x37, 0x59, 0xc1, 0xf7, 0x6a, 0x83,
+		0x22, 0xee, 0xca, 0x7f, 0x7d, 0x49, 0xe6, 0x48,
+		0x84, 0x54, 0x7a, 0xff, 0xb3, 0x72, 0x21, 0xd8,
+		0x7a, 0x5d, 0xb1, 0x4b, 0xcc, 0x01, 0x6f, 0x90,
+		0xc6, 0x68, 0x1c, 0x2c, 0xa1, 0xe2, 0x74, 0x40,
+		0x26, 0x9b, 0x57, 0x53, 0xa3, 0x7c, 0x0b, 0x0d,
+		0xcf, 0x05, 0x5d, 0x62, 0x4f, 0x75, 0x06, 0x62,
+		0x1f, 0x26, 0x32, 0xaa, 0x25, 0xcc, 0x26, 0x8d,
+		0xae, 0x01, 0x47, 0xa3, 0x00, 0x42, 0xe2, 0x4c,
+		0xee, 0x29, 0xa2, 0x81, 0xa0, 0xfd, 0xeb, 0xff,
+		0x9a, 0x66, 0x6e, 0x47, 0x5b, 0xab, 0x93, 0x5a,
+		0x02, 0x6d, 0x6f, 0xf2, 0x6e, 0x02, 0x9d, 0xb1,
+		0xab, 0x56, 0xdc, 0x8b, 0x9b, 0x17, 0xa8, 0xfb,
+		0x87, 0x42, 0x7c, 0x91, 0x1e, 0x14, 0xc6, 0x6f,
+		0xdc, 0xf0, 0x27, 0x30, 0xfa, 0x3f, 0xc4, 0xad,
+		0x57, 0x85, 0xd2, 0xc9, 0x32, 0x2c, 0x13, 0xa6,
+		0x04, 0x04, 0x50, 0x05, 0x2f, 0x72, 0xd9, 0x44,
+		0x55, 0x6e, 0x93, 0x40, 0xed, 0x7e, 0xd4, 0x40,
+		0x3e, 0x88, 0x3b, 0x8b, 0xb6, 0xeb, 0xc6, 0x5d,
+		0x9c, 0x99, 0xa1, 0xcf, 0x30, 0xb2, 0xdc, 0x48,
+		0x8a, 0x01, 0xa7, 0x61, 0x77, 0x50, 0x14, 0xf3,
+		0x0c, 0x49, 0x53, 0xb3, 0xb4, 0xb4, 0x28, 0x41,
+		0x4a, 0x2d, 0xd2, 0x4d, 0x2a, 0x30, 0x31, 0x83,
+		0x03, 0x5e, 0xaa, 0xd3, 0xa3, 0xd1, 0xa1, 0xca,
+		0x62, 0xf0, 0xe1, 0xf2, 0xff, 0xf0, 0x19, 0xa6,
+		0xde, 0x22, 0x47, 0xb5, 0x28, 0x7d, 0xf7, 0x07,
+		0x16, 0x0d, 0xb1, 0x55, 0x81, 0x95, 0xe5, 0x1d,
+		0x4d, 0x78, 0xa9, 0x3e, 0xce, 0xe3, 0x1c, 0xf9,
+		0x47, 0xc8, 0xec, 0xc5, 0xc5, 0x93, 0x4c, 0x34,
+		0x20, 0x6b, 0xee, 0x9a, 0xe6, 0x86, 0x57, 0x58,
+		0xd5, 0x58, 0xf1, 0x33, 0x10, 0x29, 0x9e, 0x93,
+		0x2f, 0xf5, 0x90, 0x00, 0x17, 0x67, 0x4f, 0x39,
+		0x18, 0xe1, 0xcf, 0x55, 0x78, 0xbb, 0xe6, 0x29,
+		0x3e, 0x77, 0xd5, 0x48, 0xb7, 0x42, 0x72, 0x53,
+		0x27, 0xfa, 0x5b, 0xe0, 0x36, 0x14, 0x97, 0xb8,
+		0x9b, 0x3c, 0x09, 0x77, 0xc1, 0x0a, 0xe4, 0xa2,
+		0x63, 0xfc, 0xbe, 0x5c, 0x17, 0xcf, 0x01, 0xf5,
+		0x03, 0x0f, 0x17, 0xbc, 0x93, 0xdd, 0x5f, 0xe2,
+		0xf3, 0x08, 0xa8, 0xb1, 0x85, 0xb6, 0x34, 0x3f,
+		0x87, 0x42, 0xa5, 0x42, 0x3b, 0x0e, 0xd6, 0x83,
+		0x6a, 0xfd, 0x5d, 0xc9, 0x67, 0xd5, 0x51, 0xc9,
+		0x2a, 0x4e, 0x91, 0xb0, 0x59, 0xb2, 0x0f, 0xa2,
+		0xe6, 0x47, 0x73, 0xc2, 0xa2, 0xae, 0xbb, 0xc8,
+		0x42, 0xa3, 0x2a, 0x27, 0x29, 0x48, 0x8c, 0x54,
+		0x6c, 0xec, 0x00, 0x2a, 0x42, 0xa3, 0x7a, 0x0f,
+		0x12, 0x66, 0x6b, 0x96, 0xf6, 0xd0, 0x56, 0x4f,
+		0x49, 0x5c, 0x47, 0xec, 0x05, 0x62, 0x54, 0xb2,
+		0x64, 0x5a, 0x69, 0x1f, 0x19, 0xb4, 0x84, 0x5c,
+		0xbe, 0x48, 0x8e, 0xfc, 0x58, 0x21, 0xce, 0xfa,
+		0xaa, 0x84, 0xd2, 0xc1, 0x08, 0xb3, 0x87, 0x0f,
+		0x4f, 0xa3, 0x3a, 0xb6, 0x44, 0xbe, 0x2e, 0x9a,
+		0xdd, 0xb5, 0x44, 0x80, 0xca, 0xf4, 0xc3, 0x6e,
+		0xba, 0x93, 0x77, 0xe0, 0x53, 0xfb, 0x37, 0xfb,
+		0x88, 0xc3, 0x1f, 0x25, 0xde, 0x3e, 0x11, 0xf4,
+		0x89, 0xe7, 0xd1, 0x3b, 0xb4, 0x23, 0xcb, 0x70,
+		0xba, 0x35, 0x97, 0x7c, 0xbe, 0x84, 0x13, 0xcf,
+		0xe0, 0x4d, 0x33, 0x91, 0x71, 0x85, 0xbb, 0x4b,
+		0x97, 0x32, 0x5d, 0xa0, 0xb9, 0x8f, 0xdc, 0x27,
+		0x5a, 0xeb, 0x71, 0xf1, 0xd5, 0x0d, 0x65, 0xb4,
+		0x22, 0x81, 0xde, 0xa7, 0x58, 0x20, 0x0b, 0x18,
+		0x11, 0x76, 0x5c, 0xe6, 0x6a, 0x2c, 0x99, 0x69,
+		0xdc, 0xed, 0x67, 0x08, 0x5d, 0x5e, 0xe9, 0x1e,
+		0x55, 0x70, 0xc1, 0x5a, 0x76, 0x1b, 0x8d, 0x2e,
+		0x0d, 0xf9, 0xcc, 0x30, 0x8c, 0x44, 0x0f, 0x63,
+		0x8c, 0x42, 0x8a, 0x9f, 0x4c, 0xd1, 0x48, 0x28,
+		0x8a, 0xf5, 0x56, 0x2e, 0x23, 0x12, 0xfe, 0x67,
+		0x9a, 0x13, 0x65, 0x75, 0x83, 0xf1, 0x3c, 0x98,
+		0x07, 0x6b, 0xb7, 0x27, 0x5b, 0xf0, 0x70, 0xda,
+		0x30, 0xf8, 0x74, 0x4e, 0x7a, 0x32, 0x84, 0xcc,
+		0x0e, 0xcd, 0x80, 0x8b, 0x82, 0x31, 0x9a, 0x48,
+		0xcf, 0x75, 0x00, 0x1f, 0x4f, 0xe0, 0x8e, 0xa3,
+		0x6a, 0x2c, 0xd4, 0x73, 0x4c, 0x63, 0x7c, 0xa6,
+		0x4d, 0x5e, 0xfd, 0x43, 0x3b, 0x27, 0xe1, 0x5e,
+		0xa3, 0xa9, 0x5c, 0x3b, 0x60, 0xdd, 0xc6, 0x8d,
+		0x5a, 0xf1, 0x3e, 0x89, 0x4b, 0x24, 0xcf, 0x01,
+		0x3a, 0x2d, 0x44, 0xe7, 0xda, 0xe7, 0xa1, 0xac,
+		0x11, 0x05, 0x0c, 0xa9, 0x7a, 0x82, 0x8c, 0x5c,
+		0x29, 0x68, 0x9c, 0x73, 0x13, 0xcc, 0x67, 0x32,
+		0x11, 0x5e, 0xe5, 0xcc, 0x8c, 0xf5, 0xa7, 0x52,
+		0x83, 0x9a, 0x70, 0xef, 0xde, 0x55, 0x9c, 0xc7,
+		0x8a, 0xed, 0xad, 0x28, 0x4a, 0xc5, 0x92, 0x6d,
+		0x8e, 0x47, 0xca, 0xe3, 0xf8, 0x77, 0xb5, 0x26,
+		0x64, 0x84, 0xc2, 0xf1, 0xd7, 0xae, 0x0c, 0xb9,
+		0x39, 0x0f, 0x43, 0x6b, 0xe9, 0xe0, 0x09, 0x4b,
+		0xe5, 0xe3, 0x17, 0xa6, 0x68, 0x69, 0x46, 0xf4,
+		0xf0, 0x68, 0x7f, 0x2f, 0x1c, 0x7e, 0x4c, 0xd2,
+		0xb5, 0xc6, 0x16, 0x85, 0xcf, 0x02, 0x4c, 0x89,
+		0x0b, 0x25, 0xb0, 0xeb, 0xf3, 0x77, 0x08, 0x6a,
+		0x46, 0x5c, 0xf6, 0x2f, 0xf1, 0x24, 0xc3, 0x4d,
+		0x80, 0x60, 0x4d, 0x69, 0x98, 0xde, 0xc7, 0xa1,
+		0xf6, 0x4e, 0x18, 0x0c, 0x2a, 0xb0, 0xb2, 0xe0,
+		0x46, 0xe7, 0x49, 0x37, 0xc8, 0x5a, 0x23, 0x24,
+		0xe3, 0x0f, 0xcc, 0x92, 0xb4, 0x8d, 0xdc, 0x9e
+};
+
+static const uint8_t AES_CBC_ciphertext_1280B[] = {
+		0x91, 0x99, 0x5e, 0x9e, 0x84, 0xff, 0x59, 0x45,
+		0xc1, 0xf4, 0xbc, 0x9c, 0xb9, 0x30, 0x6c, 0x51,
+		0x73, 0x52, 0xb4, 0x44, 0x09, 0x79, 0xe2, 0x89,
+		0x75, 0xeb, 0x54, 0x26, 0xce, 0xd8, 0x24, 0x98,
+		0xaa, 0xf8, 0x13, 0x16, 0x68, 0x58, 0xc4, 0x82,
+		0x0e, 0x31, 0xd3, 0x6a, 0x13, 0x58, 0x31, 0xe9,
+		0x3a, 0xc1, 0x8b, 0xc5, 0x3f, 0x50, 0x42, 0xd1,
+		0x93, 0xe4, 0x9b, 0x65, 0x2b, 0xf4, 0x1d, 0x9e,
+		0x2d, 0xdb, 0x48, 0xef, 0x9a, 0x01, 0x68, 0xb6,
+		0xea, 0x7a, 0x2b, 0xad, 0xfe, 0x77, 0x44, 0x7e,
+		0x5a, 0xc5, 0x64, 0xb4, 0xfe, 0x5c, 0x80, 0xf3,
+		0x20, 0x7e, 0xaf, 0x5b, 0xf8, 0xd1, 0x38, 0xa0,
+		0x8d, 0x09, 0x77, 0x06, 0xfe, 0xf5, 0xf4, 0xe4,
+		0xee, 0xb8, 0x95, 0x27, 0xed, 0x07, 0xb8, 0xaa,
+		0x25, 0xb4, 0xe1, 0x4c, 0xeb, 0x3f, 0xdb, 0x39,
+		0x66, 0x28, 0x1b, 0x60, 0x42, 0x8b, 0x99, 0xd9,
+		0x49, 0xd6, 0x8c, 0xa4, 0x9d, 0xd8, 0x93, 0x58,
+		0x8f, 0xfa, 0xd3, 0xf7, 0x37, 0x9c, 0x88, 0xab,
+		0x16, 0x50, 0xfe, 0x01, 0x1f, 0x88, 0x48, 0xbe,
+		0x21, 0xa9, 0x90, 0x9e, 0x73, 0xe9, 0x82, 0xf7,
+		0xbf, 0x4b, 0x43, 0xf4, 0xbf, 0x22, 0x3c, 0x45,
+		0x47, 0x95, 0x5b, 0x49, 0x71, 0x07, 0x1c, 0x8b,
+		0x49, 0xa4, 0xa3, 0x49, 0xc4, 0x5f, 0xb1, 0xf5,
+		0xe3, 0x6b, 0xf1, 0xdc, 0xea, 0x92, 0x7b, 0x29,
+		0x40, 0xc9, 0x39, 0x5f, 0xdb, 0xbd, 0xf3, 0x6a,
+		0x09, 0x9b, 0x2a, 0x5e, 0xc7, 0x0b, 0x25, 0x94,
+		0x55, 0x71, 0x9c, 0x7e, 0x0e, 0xb4, 0x08, 0x12,
+		0x8c, 0x6e, 0x77, 0xb8, 0x29, 0xf1, 0xc6, 0x71,
+		0x04, 0x40, 0x77, 0x18, 0x3f, 0x01, 0x09, 0x9c,
+		0x23, 0x2b, 0x5d, 0x2a, 0x88, 0x20, 0x23, 0x59,
+		0x74, 0x2a, 0x67, 0x8f, 0xb7, 0xba, 0x38, 0x9f,
+		0x0f, 0xcf, 0x94, 0xdf, 0xe1, 0x8f, 0x35, 0x5e,
+		0x34, 0x0c, 0x32, 0x92, 0x2b, 0x23, 0x81, 0xf4,
+		0x73, 0xa0, 0x5a, 0x2a, 0xbd, 0xa6, 0x6b, 0xae,
+		0x43, 0xe2, 0xdc, 0x01, 0xc1, 0xc6, 0xc3, 0x04,
+		0x06, 0xbb, 0xb0, 0x89, 0xb3, 0x4e, 0xbd, 0x81,
+		0x1b, 0x03, 0x63, 0x93, 0xed, 0x4e, 0xf6, 0xe5,
+		0x94, 0x6f, 0xd6, 0xf3, 0x20, 0xf3, 0xbc, 0x30,
+		0xc5, 0xd6, 0xbe, 0x1c, 0x05, 0x34, 0x26, 0x4d,
+		0x46, 0x5e, 0x56, 0x63, 0xfb, 0xdb, 0xcd, 0xed,
+		0xb0, 0x7f, 0x83, 0x94, 0x55, 0x54, 0x2f, 0xab,
+		0xc9, 0xb7, 0x16, 0x4f, 0x9e, 0x93, 0x25, 0xd7,
+		0x9f, 0x39, 0x2b, 0x63, 0xcf, 0x1e, 0xa3, 0x0e,
+		0x28, 0x47, 0x8a, 0x5f, 0x40, 0x02, 0x89, 0x1f,
+		0x83, 0xe7, 0x87, 0xd1, 0x90, 0x17, 0xb8, 0x27,
+		0x64, 0xe1, 0xe1, 0x48, 0x5a, 0x55, 0x74, 0x99,
+		0x27, 0x9d, 0x05, 0x67, 0xda, 0x70, 0x12, 0x8f,
+		0x94, 0x96, 0xfd, 0x36, 0xa4, 0x1d, 0x22, 0xe5,
+		0x0b, 0xe5, 0x2f, 0x38, 0x55, 0xa3, 0x5d, 0x0b,
+		0xcf, 0xd4, 0xa9, 0xb8, 0xd6, 0x9a, 0x16, 0x2e,
+		0x6c, 0x4a, 0x25, 0x51, 0x7a, 0x09, 0x48, 0xdd,
+		0xf0, 0xa3, 0x5b, 0x08, 0x1e, 0x2f, 0x03, 0x91,
+		0x80, 0xe8, 0x0f, 0xe9, 0x5a, 0x2f, 0x90, 0xd3,
+		0x64, 0xed, 0xd7, 0x51, 0x17, 0x66, 0x53, 0x40,
+		0x43, 0x74, 0xef, 0x0a, 0x0d, 0x49, 0x41, 0xf2,
+		0x67, 0x6e, 0xea, 0x14, 0xc8, 0x74, 0xd6, 0xa9,
+		0xb9, 0x6a, 0xe3, 0xec, 0x7d, 0xe8, 0x6a, 0x21,
+		0x3a, 0x52, 0x42, 0xfe, 0x9a, 0x15, 0x6d, 0x60,
+		0x64, 0x88, 0xc5, 0xb2, 0x8b, 0x15, 0x2c, 0xff,
+		0xe2, 0x35, 0xc3, 0xee, 0x9f, 0xcd, 0x82, 0xd9,
+		0x14, 0x35, 0x2a, 0xb7, 0xf5, 0x2f, 0x7b, 0xbc,
+		0x01, 0xfd, 0xa8, 0xe0, 0x21, 0x4e, 0x73, 0xf9,
+		0xf2, 0xb0, 0x79, 0xc9, 0x10, 0x52, 0x8f, 0xa8,
+		0x3e, 0x3b, 0xbe, 0xc5, 0xde, 0xf6, 0x53, 0xe3,
+		0x1c, 0x25, 0x3a, 0x1f, 0x13, 0xbf, 0x13, 0xbb,
+		0x94, 0xc2, 0x97, 0x43, 0x64, 0x47, 0x8f, 0x76,
+		0xd7, 0xaa, 0xeb, 0xa4, 0x03, 0x50, 0x0c, 0x10,
+		0x50, 0xd8, 0xf7, 0x75, 0x52, 0x42, 0xe2, 0x94,
+		0x67, 0xf4, 0x60, 0xfb, 0x21, 0x9b, 0x7a, 0x05,
+		0x50, 0x7c, 0x1b, 0x4a, 0x8b, 0x29, 0xe1, 0xac,
+		0xd7, 0x99, 0xfd, 0x0d, 0x65, 0x92, 0xcd, 0x23,
+		0xa7, 0x35, 0x8e, 0x13, 0xf2, 0xe4, 0x10, 0x74,
+		0xc6, 0x4f, 0x19, 0xf7, 0x01, 0x0b, 0x46, 0xab,
+		0xef, 0x8d, 0x4a, 0x4a, 0xfa, 0xda, 0xf3, 0xfb,
+		0x40, 0x28, 0x88, 0xa2, 0x65, 0x98, 0x4d, 0x88,
+		0xc7, 0xbf, 0x00, 0xc8, 0xd0, 0x91, 0xcb, 0x89,
+		0x2f, 0xb0, 0x85, 0xfc, 0xa1, 0xc1, 0x9e, 0x83,
+		0x88, 0xad, 0x95, 0xc0, 0x31, 0xa0, 0xad, 0xa2,
+		0x42, 0xb5, 0xe7, 0x55, 0xd4, 0x93, 0x5a, 0x74,
+		0x4e, 0x41, 0xc3, 0xcf, 0x96, 0x83, 0x46, 0xa1,
+		0xb7, 0x5b, 0xb1, 0x34, 0x67, 0x4e, 0xb1, 0xd7,
+		0x40, 0x20, 0x72, 0xe9, 0xc8, 0x74, 0xb7, 0xde,
+		0x72, 0x29, 0x77, 0x4c, 0x74, 0x7e, 0xcc, 0x18,
+		0xa5, 0x8d, 0x79, 0x8c, 0xd6, 0x6e, 0xcb, 0xd9,
+		0xe1, 0x61, 0xe7, 0x36, 0xbc, 0x37, 0xea, 0xee,
+		0xd8, 0x3c, 0x5e, 0x7c, 0x47, 0x50, 0xd5, 0xec,
+		0x37, 0xc5, 0x63, 0xc3, 0xc9, 0x99, 0x23, 0x9f,
+		0x64, 0x39, 0xdf, 0x13, 0x96, 0x6d, 0xea, 0x08,
+		0x0c, 0x27, 0x2d, 0xfe, 0x0f, 0xc2, 0xa3, 0x97,
+		0x04, 0x12, 0x66, 0x0d, 0x94, 0xbf, 0xbe, 0x3e,
+		0xb9, 0xcf, 0x8e, 0xc1, 0x9d, 0xb1, 0x64, 0x17,
+		0x54, 0x92, 0x3f, 0x0a, 0x51, 0xc8, 0xf5, 0x82,
+		0x98, 0x73, 0x03, 0xc0, 0x5a, 0x51, 0x01, 0x67,
+		0xb4, 0x01, 0x04, 0x06, 0xbc, 0x37, 0xde, 0x96,
+		0x23, 0x3c, 0xce, 0x98, 0x3f, 0xd6, 0x51, 0x1b,
+		0x01, 0x83, 0x0a, 0x1c, 0xf9, 0xeb, 0x7e, 0x72,
+		0xa9, 0x51, 0x23, 0xc8, 0xd7, 0x2f, 0x12, 0xbc,
+		0x08, 0xac, 0x07, 0xe7, 0xa7, 0xe6, 0x46, 0xae,
+		0x54, 0xa3, 0xc2, 0xf2, 0x05, 0x2d, 0x06, 0x5e,
+		0xfc, 0xe2, 0xa2, 0x23, 0xac, 0x86, 0xf2, 0x54,
+		0x83, 0x4a, 0xb6, 0x48, 0x93, 0xa1, 0x78, 0xc2,
+		0x07, 0xec, 0x82, 0xf0, 0x74, 0xa9, 0x18, 0xe9,
+		0x53, 0x44, 0x49, 0xc2, 0x94, 0xf8, 0x94, 0x92,
+		0x08, 0x3f, 0xbf, 0xa6, 0xe5, 0xc6, 0x03, 0x8a,
+		0xc6, 0x90, 0x48, 0x6c, 0xee, 0xbd, 0x44, 0x92,
+		0x1f, 0x2a, 0xce, 0x1d, 0xb8, 0x31, 0xa2, 0x9d,
+		0x24, 0x93, 0xa8, 0x9f, 0x36, 0x00, 0x04, 0x7b,
+		0xcb, 0x93, 0x59, 0xa1, 0x53, 0xdb, 0x13, 0x7a,
+		0x54, 0xb1, 0x04, 0xdb, 0xce, 0x48, 0x4f, 0xe5,
+		0x2f, 0xcb, 0xdf, 0x8f, 0x50, 0x7c, 0xfc, 0x76,
+		0x80, 0xb4, 0xdc, 0x3b, 0xc8, 0x98, 0x95, 0xf5,
+		0x50, 0xba, 0x70, 0x5a, 0x97, 0xd5, 0xfc, 0x98,
+		0x4d, 0xf3, 0x61, 0x0f, 0xcf, 0xac, 0x49, 0x0a,
+		0xdb, 0xc1, 0x42, 0x8f, 0xb6, 0x29, 0xd5, 0x65,
+		0xef, 0x83, 0xf1, 0x30, 0x4b, 0x84, 0xd0, 0x69,
+		0xde, 0xd2, 0x99, 0xe5, 0xec, 0xd3, 0x90, 0x86,
+		0x39, 0x2a, 0x6e, 0xd5, 0x32, 0xe3, 0x0d, 0x2d,
+		0x01, 0x8b, 0x17, 0x55, 0x1d, 0x65, 0x57, 0xbf,
+		0xd8, 0x75, 0xa4, 0x85, 0xb6, 0x4e, 0x35, 0x14,
+		0x58, 0xe4, 0x89, 0xb8, 0x7a, 0x58, 0x86, 0x0c,
+		0xbd, 0x8b, 0x05, 0x7b, 0x63, 0xc0, 0x86, 0x80,
+		0x33, 0x46, 0xd4, 0x9b, 0xb6, 0x0a, 0xeb, 0x6c,
+		0xae, 0xd6, 0x57, 0x7a, 0xc7, 0x59, 0x33, 0xa0,
+		0xda, 0xa4, 0x12, 0xbf, 0x52, 0x22, 0x05, 0x8d,
+		0xeb, 0xee, 0xd5, 0xec, 0xea, 0x29, 0x9b, 0x76,
+		0x95, 0x50, 0x6d, 0x99, 0xe1, 0x45, 0x63, 0x09,
+		0x16, 0x5f, 0xb0, 0xf2, 0x5b, 0x08, 0x33, 0xdd,
+		0x8f, 0xb7, 0x60, 0x7a, 0x8e, 0xc6, 0xfc, 0xac,
+		0xa9, 0x56, 0x2c, 0xa9, 0x8b, 0x74, 0x33, 0xad,
+		0x2a, 0x7e, 0x96, 0xb6, 0xba, 0x22, 0x28, 0xcf,
+		0x4d, 0x96, 0xb7, 0xd1, 0xfa, 0x99, 0x4a, 0x61,
+		0xe6, 0x84, 0xd1, 0x94, 0xca, 0xf5, 0x86, 0xb0,
+		0xba, 0x34, 0x7a, 0x04, 0xcc, 0xd4, 0x81, 0xcd,
+		0xd9, 0x86, 0xb6, 0xe0, 0x5a, 0x6f, 0x9b, 0x99,
+		0xf0, 0xdf, 0x49, 0xae, 0x6d, 0xc2, 0x54, 0x67,
+		0xe0, 0xb4, 0x34, 0x2d, 0x1c, 0x46, 0xdf, 0x73,
+		0x3b, 0x45, 0x43, 0xe7, 0x1f, 0xa3, 0x36, 0x35,
+		0x25, 0x33, 0xd9, 0xc0, 0x54, 0x38, 0x6e, 0x6b,
+		0x80, 0xcf, 0x50, 0xa4, 0xb6, 0x21, 0x17, 0xfd,
+		0x9b, 0x5c, 0x36, 0xca, 0xcc, 0x73, 0x73, 0xad,
+		0xe0, 0x57, 0x77, 0x90, 0x0e, 0x7f, 0x0f, 0x87,
+		0x7f, 0xdb, 0x73, 0xbf, 0xda, 0xc2, 0xb3, 0x05,
+		0x22, 0x06, 0xf5, 0xa3, 0xfc, 0x1e, 0x8f, 0xda,
+		0xcf, 0x49, 0xd6, 0xb3, 0x66, 0x2c, 0xb5, 0x00,
+		0xaf, 0x85, 0x6e, 0xb8, 0x5b, 0x8c, 0xa1, 0xa4,
+		0x21, 0xce, 0x40, 0xf3, 0x98, 0xac, 0xec, 0x88,
+		0x62, 0x43, 0x2a, 0xac, 0xca, 0xcf, 0xb9, 0x30,
+		0xeb, 0xfc, 0xef, 0xf0, 0x6e, 0x64, 0x6d, 0xe7,
+		0x54, 0x88, 0x6b, 0x22, 0x29, 0xbe, 0xa5, 0x8c,
+		0x31, 0x23, 0x3b, 0x4a, 0x80, 0x37, 0xe6, 0xd0,
+		0x05, 0xfc, 0x10, 0x0e, 0xdd, 0xbb, 0x00, 0xc5,
+		0x07, 0x20, 0x59, 0xd3, 0x41, 0x17, 0x86, 0x46,
+		0xab, 0x68, 0xf6, 0x48, 0x3c, 0xea, 0x5a, 0x06,
+		0x30, 0x21, 0x19, 0xed, 0x74, 0xbe, 0x0b, 0x97,
+		0xee, 0x91, 0x35, 0x94, 0x1f, 0xcb, 0x68, 0x7f,
+		0xe4, 0x48, 0xb0, 0x16, 0xfb, 0xf0, 0x74, 0xdb,
+		0x06, 0x59, 0x2e, 0x5a, 0x9c, 0xce, 0x8f, 0x7d,
+		0xba, 0x48, 0xd5, 0x3f, 0x5c, 0xb0, 0xc2, 0x33,
+		0x48, 0x60, 0x17, 0x08, 0x85, 0xba, 0xff, 0xb9,
+		0x34, 0x0a, 0x3d, 0x8f, 0x21, 0x13, 0x12, 0x1b
+};
+
+static const uint8_t AES_CBC_ciphertext_1536B[] = {
+		0x89, 0x93, 0x05, 0x99, 0xa9, 0xed, 0xea, 0x62,
+		0xc9, 0xda, 0x51, 0x15, 0xce, 0x42, 0x91, 0xc3,
+		0x80, 0xc8, 0x03, 0x88, 0xc2, 0x63, 0xda, 0x53,
+		0x1a, 0xf3, 0xeb, 0xd5, 0xba, 0x6f, 0x23, 0xb2,
+		0xed, 0x8f, 0x89, 0xb1, 0xb3, 0xca, 0x90, 0x7a,
+		0xdd, 0x3f, 0xf6, 0xca, 0x86, 0x58, 0x54, 0xbc,
+		0xab, 0x0f, 0xf4, 0xab, 0x6d, 0x5d, 0x42, 0xd0,
+		0x17, 0x49, 0x17, 0xd1, 0x93, 0xea, 0xe8, 0x22,
+		0xc1, 0x34, 0x9f, 0x3a, 0x3b, 0xaa, 0xe9, 0x1b,
+		0x93, 0xff, 0x6b, 0x68, 0xba, 0xe6, 0xd2, 0x39,
+		0x3d, 0x55, 0x34, 0x8f, 0x98, 0x86, 0xb4, 0xd8,
+		0x7c, 0x0d, 0x3e, 0x01, 0x63, 0x04, 0x01, 0xff,
+		0x16, 0x0f, 0x51, 0x5f, 0x73, 0x53, 0xf0, 0x3a,
+		0x38, 0xb4, 0x4d, 0x8d, 0xaf, 0xa3, 0xca, 0x2f,
+		0x6f, 0xdf, 0xc0, 0x41, 0x6c, 0x48, 0x60, 0x1a,
+		0xe4, 0xe7, 0x8a, 0x65, 0x6f, 0x8d, 0xd7, 0xe1,
+		0x10, 0xab, 0x78, 0x5b, 0xb9, 0x69, 0x1f, 0xe0,
+		0x5c, 0xf1, 0x19, 0x12, 0x21, 0xc7, 0x51, 0xbc,
+		0x61, 0x5f, 0xc0, 0x36, 0x17, 0xc0, 0x28, 0xd9,
+		0x51, 0xcb, 0x43, 0xd9, 0xfa, 0xd1, 0xad, 0x79,
+		0x69, 0x86, 0x49, 0xc5, 0xe5, 0x69, 0x27, 0xce,
+		0x22, 0xd0, 0xe1, 0x6a, 0xf9, 0x02, 0xca, 0x6c,
+		0x34, 0xc7, 0xb8, 0x02, 0xc1, 0x38, 0x7f, 0xd5,
+		0x15, 0xf5, 0xd6, 0xeb, 0xf9, 0x30, 0x40, 0x43,
+		0xea, 0x87, 0xde, 0x35, 0xf6, 0x83, 0x59, 0x09,
+		0x68, 0x62, 0x00, 0x87, 0xb8, 0xe7, 0xca, 0x05,
+		0x0f, 0xac, 0x42, 0x58, 0x45, 0xaa, 0xc9, 0x9b,
+		0xfd, 0x2a, 0xda, 0x65, 0x33, 0x93, 0x9d, 0xc6,
+		0x93, 0x8d, 0xe2, 0xc5, 0x71, 0xc1, 0x5c, 0x13,
+		0xde, 0x7b, 0xd4, 0xb9, 0x4c, 0x35, 0x61, 0x85,
+		0x90, 0x78, 0xf7, 0x81, 0x98, 0x45, 0x99, 0x24,
+		0x58, 0x73, 0x28, 0xf8, 0x31, 0xab, 0x54, 0x2e,
+		0xc0, 0x38, 0x77, 0x25, 0x5c, 0x06, 0x9c, 0xc3,
+		0x69, 0x21, 0x92, 0x76, 0xe1, 0x16, 0xdc, 0xa9,
+		0xee, 0xb6, 0x80, 0x66, 0x43, 0x11, 0x24, 0xb3,
+		0x07, 0x17, 0x89, 0x0f, 0xcb, 0xe0, 0x60, 0xa8,
+		0x9d, 0x06, 0x4b, 0x6e, 0x72, 0xb7, 0xbc, 0x4f,
+		0xb8, 0xc0, 0x80, 0xa2, 0xfb, 0x46, 0x5b, 0x8f,
+		0x11, 0x01, 0x92, 0x9d, 0x37, 0x09, 0x98, 0xc8,
+		0x0a, 0x46, 0xae, 0x12, 0xac, 0x61, 0x3f, 0xe7,
+		0x41, 0x1a, 0xaa, 0x2e, 0xdc, 0xd7, 0x2a, 0x47,
+		0xee, 0xdf, 0x08, 0xd1, 0xff, 0xea, 0x13, 0xc6,
+		0x05, 0xdb, 0x29, 0xcc, 0x03, 0xba, 0x7b, 0x6d,
+		0x40, 0xc1, 0xc9, 0x76, 0x75, 0x03, 0x7a, 0x71,
+		0xc9, 0x5f, 0xd9, 0xe0, 0x61, 0x69, 0x36, 0x8f,
+		0xb2, 0xbc, 0x28, 0xf3, 0x90, 0x71, 0xda, 0x5f,
+		0x08, 0xd5, 0x0d, 0xc1, 0xe6, 0xbd, 0x2b, 0xc6,
+		0x6c, 0x42, 0xfd, 0xbf, 0x10, 0xe8, 0x5f, 0x87,
+		0x3d, 0x21, 0x42, 0x85, 0x01, 0x0a, 0xbf, 0x8e,
+		0x49, 0xd3, 0x9c, 0x89, 0x3b, 0xea, 0xe1, 0xbf,
+		0xe9, 0x9b, 0x5e, 0x0e, 0xb8, 0xeb, 0xcd, 0x3a,
+		0xf6, 0x29, 0x41, 0x35, 0xdd, 0x9b, 0x13, 0x24,
+		0xe0, 0x1d, 0x8a, 0xcb, 0x20, 0xf8, 0x41, 0x51,
+		0x3e, 0x23, 0x8c, 0x67, 0x98, 0x39, 0x53, 0x77,
+		0x2a, 0x68, 0xf4, 0x3c, 0x7e, 0xd6, 0xc4, 0x6e,
+		0xf1, 0x53, 0xe9, 0xd8, 0x5c, 0xc1, 0xa9, 0x38,
+		0x6f, 0x5e, 0xe4, 0xd4, 0x29, 0x1c, 0x6c, 0xee,
+		0x2f, 0xea, 0xde, 0x61, 0x71, 0x5a, 0xea, 0xce,
+		0x23, 0x6e, 0x1b, 0x16, 0x43, 0xb7, 0xc0, 0xe3,
+		0x87, 0xa1, 0x95, 0x1e, 0x97, 0x4d, 0xea, 0xa6,
+		0xf7, 0x25, 0xac, 0x82, 0x2a, 0xd3, 0xa6, 0x99,
+		0x75, 0xdd, 0xc1, 0x55, 0x32, 0x6b, 0xea, 0x33,
+		0x88, 0xce, 0x06, 0xac, 0x15, 0x39, 0x19, 0xa3,
+		0x59, 0xaf, 0x7a, 0x1f, 0xd9, 0x72, 0x5e, 0xf7,
+		0x4c, 0xf3, 0x5d, 0x6b, 0xf2, 0x16, 0x92, 0xa8,
+		0x9e, 0x3d, 0xd4, 0x4c, 0x72, 0x55, 0x4e, 0x4a,
+		0xf7, 0x8b, 0x2f, 0x67, 0x5a, 0x90, 0xb7, 0xcf,
+		0x16, 0xd3, 0x7b, 0x5a, 0x9a, 0xc8, 0x9f, 0xbf,
+		0x01, 0x76, 0x3b, 0x86, 0x2c, 0x2a, 0x78, 0x10,
+		0x70, 0x05, 0x38, 0xf9, 0xdd, 0x2a, 0x1d, 0x00,
+		0x25, 0xb7, 0x10, 0xac, 0x3b, 0x3c, 0x4d, 0x3c,
+		0x01, 0x68, 0x3c, 0x5a, 0x29, 0xc2, 0xa0, 0x1b,
+		0x95, 0x67, 0xf9, 0x0a, 0x60, 0xb7, 0x11, 0x9c,
+		0x40, 0x45, 0xd7, 0xb0, 0xda, 0x49, 0x87, 0xcd,
+		0xb0, 0x9b, 0x61, 0x8c, 0xf4, 0x0d, 0x94, 0x1d,
+		0x79, 0x66, 0x13, 0x0b, 0xc6, 0x6b, 0x19, 0xee,
+		0xa0, 0x6b, 0x64, 0x7d, 0xc4, 0xff, 0x98, 0x72,
+		0x60, 0xab, 0x7f, 0x0f, 0x4d, 0x5d, 0x6b, 0xc3,
+		0xba, 0x5e, 0x0d, 0x04, 0xd9, 0x59, 0x17, 0xd0,
+		0x64, 0xbe, 0xfb, 0x58, 0xfc, 0xed, 0x18, 0xf6,
+		0xac, 0x19, 0xa4, 0xfd, 0x16, 0x59, 0x80, 0x58,
+		0xb8, 0x0f, 0x79, 0x24, 0x60, 0x18, 0x62, 0xa9,
+		0xa3, 0xa0, 0xe8, 0x81, 0xd6, 0xec, 0x5b, 0xfe,
+		0x5b, 0xb8, 0xa4, 0x00, 0xa9, 0xd0, 0x90, 0x17,
+		0xe5, 0x50, 0x3d, 0x2b, 0x12, 0x6e, 0x2a, 0x13,
+		0x65, 0x7c, 0xdf, 0xdf, 0xa7, 0xdd, 0x9f, 0x78,
+		0x5f, 0x8f, 0x4e, 0x90, 0xa6, 0x10, 0xe4, 0x7b,
+		0x68, 0x6b, 0xfd, 0xa9, 0x6d, 0x47, 0xfa, 0xec,
+		0x42, 0x35, 0x07, 0x12, 0x3e, 0x78, 0x23, 0x15,
+		0xff, 0xe2, 0x65, 0xc7, 0x47, 0x89, 0x2f, 0x97,
+		0x7c, 0xd7, 0x6b, 0x69, 0x35, 0x79, 0x6f, 0x85,
+		0xb4, 0xa9, 0x75, 0x04, 0x32, 0x9a, 0xfe, 0xf0,
+		0xce, 0xe3, 0xf1, 0xab, 0x15, 0x47, 0xe4, 0x9c,
+		0xc1, 0x48, 0x32, 0x3c, 0xbe, 0x44, 0x72, 0xc9,
+		0xaa, 0x50, 0x37, 0xa6, 0xbe, 0x41, 0xcf, 0xe8,
+		0x17, 0x4e, 0x37, 0xbe, 0xf1, 0x34, 0x2c, 0xd9,
+		0x60, 0x48, 0x09, 0xa5, 0x26, 0x00, 0x31, 0x77,
+		0x4e, 0xac, 0x7c, 0x89, 0x75, 0xe3, 0xde, 0x26,
+		0x4c, 0x32, 0x54, 0x27, 0x8e, 0x92, 0x26, 0x42,
+		0x85, 0x76, 0x01, 0x76, 0x62, 0x4c, 0x29, 0xe9,
+		0x38, 0x05, 0x51, 0x54, 0x97, 0xa3, 0x03, 0x59,
+		0x5e, 0xec, 0x0c, 0xe4, 0x96, 0xb7, 0x15, 0xa8,
+		0x41, 0x06, 0x2b, 0x78, 0x95, 0x24, 0xf6, 0x32,
+		0xc5, 0xec, 0xd7, 0x89, 0x28, 0x1e, 0xec, 0xb1,
+		0xc7, 0x21, 0x0c, 0xd3, 0x80, 0x7c, 0x5a, 0xe6,
+		0xb1, 0x3a, 0x52, 0x33, 0x84, 0x4e, 0x32, 0x6e,
+		0x7a, 0xf6, 0x43, 0x15, 0x5b, 0xa6, 0xba, 0xeb,
+		0xa8, 0xe4, 0xff, 0x4f, 0xbd, 0xbd, 0xa8, 0x5e,
+		0xbe, 0x27, 0xaf, 0xc5, 0xf7, 0x9e, 0xdf, 0x48,
+		0x22, 0xca, 0x6a, 0x0b, 0x3c, 0xd7, 0xe0, 0xdc,
+		0xf3, 0x71, 0x08, 0xdc, 0x28, 0x13, 0x08, 0xf2,
+		0x08, 0x1d, 0x9d, 0x7b, 0xd9, 0xde, 0x6f, 0xe6,
+		0xe8, 0x88, 0x18, 0xc2, 0xcd, 0x93, 0xc5, 0x38,
+		0x21, 0x68, 0x4c, 0x9a, 0xfb, 0xb6, 0x18, 0x16,
+		0x73, 0x2c, 0x1d, 0x6f, 0x95, 0xfb, 0x65, 0x4f,
+		0x7c, 0xec, 0x8d, 0x6c, 0xa8, 0xc0, 0x55, 0x28,
+		0xc6, 0xc3, 0xea, 0xeb, 0x05, 0xf5, 0x65, 0xeb,
+		0x53, 0xe1, 0x54, 0xef, 0xb8, 0x64, 0x98, 0x2d,
+		0x98, 0x9e, 0xc8, 0xfe, 0xa2, 0x07, 0x30, 0xf7,
+		0xf7, 0xae, 0xdb, 0x32, 0xf8, 0x71, 0x9d, 0x06,
+		0xdf, 0x9b, 0xda, 0x61, 0x7d, 0xdb, 0xae, 0x06,
+		0x24, 0x63, 0x74, 0xb6, 0xf3, 0x1b, 0x66, 0x09,
+		0x60, 0xff, 0x2b, 0x29, 0xf5, 0xa9, 0x9d, 0x61,
+		0x5d, 0x55, 0x10, 0x82, 0x21, 0xbb, 0x64, 0x0d,
+		0xef, 0x5c, 0xe3, 0x30, 0x1b, 0x60, 0x1e, 0x5b,
+		0xfe, 0x6c, 0xf5, 0x15, 0xa3, 0x86, 0x27, 0x58,
+		0x46, 0x00, 0x20, 0xcb, 0x86, 0x9a, 0x52, 0x29,
+		0x20, 0x68, 0x4d, 0x67, 0x88, 0x70, 0xc2, 0x31,
+		0xd8, 0xbb, 0xa5, 0xa7, 0x88, 0x7f, 0x66, 0xbc,
+		0xaa, 0x0f, 0xe1, 0x78, 0x7b, 0x97, 0x3c, 0xb7,
+		0xd7, 0xd8, 0x04, 0xe0, 0x09, 0x60, 0xc8, 0xd0,
+		0x9e, 0xe5, 0x6b, 0x31, 0x7f, 0x88, 0xfe, 0xc3,
+		0xfd, 0x89, 0xec, 0x76, 0x4b, 0xb3, 0xa7, 0x37,
+		0x03, 0xb7, 0xc6, 0x10, 0x7c, 0x9d, 0x0c, 0x75,
+		0xd3, 0x08, 0x14, 0x94, 0x03, 0x42, 0x25, 0x26,
+		0x85, 0xf7, 0xf0, 0x90, 0x06, 0x3e, 0x6f, 0x60,
+		0x52, 0x55, 0xd5, 0x0f, 0x79, 0x64, 0x69, 0x69,
+		0x46, 0xf9, 0x7f, 0x7f, 0x03, 0xf1, 0x1f, 0xdb,
+		0x39, 0x05, 0xba, 0x4a, 0x8f, 0x17, 0xe7, 0xba,
+		0xe2, 0x07, 0x7c, 0x1d, 0x9e, 0xbc, 0x94, 0xc0,
+		0x61, 0x59, 0x8e, 0x72, 0xaf, 0xfc, 0x99, 0xe4,
+		0xd5, 0xa8, 0xee, 0x0a, 0x48, 0x2d, 0x82, 0x8b,
+		0x34, 0x54, 0x8a, 0xce, 0xc7, 0xfa, 0xdd, 0xba,
+		0x54, 0xdf, 0xb3, 0x30, 0x33, 0x73, 0x2e, 0xd5,
+		0x52, 0xab, 0x49, 0x91, 0x4e, 0x0a, 0xd6, 0x2f,
+		0x67, 0xe4, 0xdd, 0x64, 0x48, 0x16, 0xd9, 0x85,
+		0xaa, 0x52, 0xa5, 0x0b, 0xd3, 0xb4, 0x2d, 0x77,
+		0x5e, 0x52, 0x77, 0x17, 0xcf, 0xbe, 0x88, 0x04,
+		0x01, 0x52, 0xe2, 0xf1, 0x46, 0xe2, 0x91, 0x30,
+		0x65, 0xcf, 0xc0, 0x65, 0x45, 0xc3, 0x7e, 0xf4,
+		0x2e, 0xb5, 0xaf, 0x6f, 0xab, 0x1a, 0xfa, 0x70,
+		0x35, 0xb8, 0x4f, 0x2d, 0x78, 0x90, 0x33, 0xb5,
+		0x9a, 0x67, 0xdb, 0x2f, 0x28, 0x32, 0xb6, 0x54,
+		0xab, 0x4c, 0x6b, 0x85, 0xed, 0x6c, 0x3e, 0x05,
+		0x2a, 0xc7, 0x32, 0xe8, 0xf5, 0xa3, 0x7b, 0x4e,
+		0x7b, 0x58, 0x24, 0x73, 0xf7, 0xfd, 0xc7, 0xc8,
+		0x6c, 0x71, 0x68, 0xb1, 0xf6, 0xc5, 0x9e, 0x1e,
+		0xe3, 0x5c, 0x25, 0xc0, 0x5b, 0x3e, 0x59, 0xa1,
+		0x18, 0x5a, 0xe8, 0xb5, 0xd1, 0x44, 0x13, 0xa3,
+		0xe6, 0x05, 0x76, 0xd2, 0x8d, 0x6e, 0x54, 0x68,
+		0x0c, 0xa4, 0x7b, 0x8b, 0xd3, 0x8c, 0x42, 0x13,
+		0x87, 0xda, 0xdf, 0x8f, 0xa5, 0x83, 0x7a, 0x42,
+		0x99, 0xb7, 0xeb, 0xe2, 0x79, 0xe0, 0xdb, 0xda,
+		0x33, 0xa8, 0x50, 0x3a, 0xd7, 0xe7, 0xd3, 0x61,
+		0x18, 0xb8, 0xaa, 0x2d, 0xc8, 0xd8, 0x2c, 0x28,
+		0xe5, 0x97, 0x0a, 0x7c, 0x6c, 0x7f, 0x09, 0xd7,
+		0x88, 0x80, 0xac, 0x12, 0xed, 0xf8, 0xc6, 0xb5,
+		0x2d, 0xd6, 0x63, 0x9b, 0x98, 0x35, 0x26, 0xde,
+		0xf6, 0x31, 0xee, 0x7e, 0xa0, 0xfb, 0x16, 0x98,
+		0xb1, 0x96, 0x1d, 0xee, 0xe3, 0x2f, 0xfb, 0x41,
+		0xdd, 0xea, 0x10, 0x1e, 0x03, 0x89, 0x18, 0xd2,
+		0x47, 0x0c, 0xa0, 0x57, 0xda, 0x76, 0x3a, 0x37,
+		0x2c, 0xe4, 0xf9, 0x77, 0xc8, 0x43, 0x5f, 0xcb,
+		0xd6, 0x85, 0xf7, 0x22, 0xe4, 0x32, 0x25, 0xa8,
+		0xdc, 0x21, 0xc0, 0xf5, 0x95, 0xb2, 0xf8, 0x83,
+		0xf0, 0x65, 0x61, 0x15, 0x48, 0x94, 0xb7, 0x03,
+		0x7f, 0x66, 0xa1, 0x39, 0x1f, 0xdd, 0xce, 0x96,
+		0xfe, 0x58, 0x81, 0x3d, 0x41, 0x11, 0x87, 0x13,
+		0x26, 0x1b, 0x6d, 0xf3, 0xca, 0x2e, 0x2c, 0x76,
+		0xd3, 0x2f, 0x6d, 0x49, 0x70, 0x53, 0x05, 0x96,
+		0xcc, 0x30, 0x2b, 0x83, 0xf2, 0xc6, 0xb2, 0x4b,
+		0x22, 0x13, 0x95, 0x42, 0xeb, 0x56, 0x4d, 0x22,
+		0xe6, 0x43, 0x6f, 0xba, 0xe7, 0x3b, 0xe5, 0x59,
+		0xce, 0x57, 0x88, 0x85, 0xb6, 0xbf, 0x15, 0x37,
+		0xb3, 0x7a, 0x7e, 0xc4, 0xbc, 0x99, 0xfc, 0xe4,
+		0x89, 0x00, 0x68, 0x39, 0xbc, 0x5a, 0xba, 0xab,
+		0x52, 0xab, 0xe6, 0x81, 0xfd, 0x93, 0x62, 0xe9,
+		0xb7, 0x12, 0xd1, 0x18, 0x1a, 0xb9, 0x55, 0x4a,
+		0x0f, 0xae, 0x35, 0x11, 0x04, 0x27, 0xf3, 0x42,
+		0x4e, 0xca, 0xdf, 0x9f, 0x12, 0x62, 0xea, 0x03,
+		0xc0, 0xa9, 0x22, 0x7b, 0x6c, 0x6c, 0xe3, 0xdf,
+		0x16, 0xad, 0x03, 0xc9, 0xfe, 0xa4, 0xdd, 0x4f
+};
+
+static const uint8_t AES_CBC_ciphertext_1792B[] = {
+		0x59, 0xcc, 0xfe, 0x8f, 0xb4, 0x9d, 0x0e, 0xd1,
+		0x85, 0xfc, 0x9b, 0x43, 0xc1, 0xb7, 0x54, 0x67,
+		0x01, 0xef, 0xb8, 0x71, 0x36, 0xdb, 0x50, 0x48,
+		0x7a, 0xea, 0xcf, 0xce, 0xba, 0x30, 0x10, 0x2e,
+		0x96, 0x2b, 0xfd, 0xcf, 0x00, 0xe3, 0x1f, 0xac,
+		0x66, 0x14, 0x30, 0x86, 0x49, 0xdb, 0x01, 0x8b,
+		0x07, 0xdd, 0x00, 0x9d, 0x0d, 0x5c, 0x19, 0x11,
+		0xe8, 0x44, 0x2b, 0x25, 0x70, 0xed, 0x7c, 0x33,
+		0x0d, 0xe3, 0x34, 0x93, 0x63, 0xad, 0x26, 0xb1,
+		0x11, 0x91, 0x34, 0x2e, 0x1d, 0x50, 0xaa, 0xd4,
+		0xef, 0x3a, 0x6d, 0xd7, 0x33, 0x20, 0x0d, 0x3f,
+		0x9b, 0xdd, 0xc3, 0xa5, 0xc5, 0xf1, 0x99, 0xdc,
+		0xea, 0x52, 0xda, 0x55, 0xea, 0xa2, 0x7a, 0xc5,
+		0x78, 0x44, 0x4a, 0x02, 0x33, 0x19, 0x62, 0x37,
+		0xf8, 0x8b, 0xd1, 0x0c, 0x21, 0xdf, 0x40, 0x19,
+		0x81, 0xea, 0xfb, 0x1c, 0xa7, 0xcc, 0x60, 0xfe,
+		0x63, 0x25, 0x8f, 0xf3, 0x73, 0x0f, 0x45, 0xe6,
+		0x6a, 0x18, 0xbf, 0xbe, 0xad, 0x92, 0x2a, 0x1e,
+		0x15, 0x65, 0x6f, 0xef, 0x92, 0xcd, 0x0e, 0x19,
+		0x3d, 0x42, 0xa8, 0xfc, 0x0d, 0x32, 0x58, 0xe0,
+		0x56, 0x9f, 0xd6, 0x9b, 0x8b, 0xec, 0xe0, 0x45,
+		0x4d, 0x7e, 0x73, 0x87, 0xff, 0x74, 0x92, 0x59,
+		0x60, 0x13, 0x93, 0xda, 0xec, 0xbf, 0xfa, 0x20,
+		0xb6, 0xe7, 0xdf, 0xc7, 0x10, 0xf5, 0x79, 0xb4,
+		0xd7, 0xac, 0xaf, 0x2b, 0x37, 0x52, 0x30, 0x1d,
+		0xbe, 0x0f, 0x60, 0x77, 0x3d, 0x03, 0x63, 0xa9,
+		0xae, 0xb1, 0xf3, 0xca, 0xca, 0xb4, 0x21, 0xd7,
+		0x6f, 0x2e, 0x5e, 0x9b, 0x68, 0x53, 0x80, 0xab,
+		0x30, 0x23, 0x0a, 0x72, 0x6b, 0xb1, 0xd8, 0x25,
+		0x5d, 0x3a, 0x62, 0x9b, 0x4f, 0x59, 0x3b, 0x79,
+		0xa8, 0x9e, 0x08, 0x6d, 0x37, 0xb0, 0xfc, 0x42,
+		0x51, 0x25, 0x86, 0xbd, 0x54, 0x5a, 0x95, 0x20,
+		0x6c, 0xac, 0xb9, 0x30, 0x1c, 0x03, 0xc9, 0x49,
+		0x38, 0x55, 0x31, 0x49, 0xed, 0xa9, 0x0e, 0xc3,
+		0x65, 0xb4, 0x68, 0x6b, 0x07, 0x4c, 0x0a, 0xf9,
+		0x21, 0x69, 0x7c, 0x9f, 0x28, 0x80, 0xe9, 0x49,
+		0x22, 0x7c, 0xec, 0x97, 0xf7, 0x70, 0xb4, 0xb8,
+		0x25, 0xe7, 0x80, 0x2c, 0x43, 0x24, 0x8a, 0x2e,
+		0xac, 0xa2, 0x84, 0x20, 0xe7, 0xf4, 0x6b, 0x86,
+		0x37, 0x05, 0xc7, 0x59, 0x04, 0x49, 0x2a, 0x99,
+		0x80, 0x46, 0x32, 0x19, 0xe6, 0x30, 0xce, 0xc0,
+		0xef, 0x6e, 0xec, 0xe5, 0x2f, 0x24, 0xc1, 0x78,
+		0x45, 0x02, 0xd3, 0x64, 0x99, 0xf5, 0xc7, 0xbc,
+		0x8f, 0x8c, 0x75, 0xb1, 0x0a, 0xc8, 0xc3, 0xbd,
+		0x5e, 0x7e, 0xbd, 0x0e, 0xdf, 0x4b, 0x96, 0x6a,
+		0xfd, 0x03, 0xdb, 0xd1, 0x31, 0x1e, 0x27, 0xf9,
+		0xe5, 0x83, 0x9a, 0xfc, 0x13, 0x4c, 0xd3, 0x04,
+		0xdb, 0xdb, 0x3f, 0x35, 0x93, 0x4e, 0x14, 0x6b,
+		0x00, 0x5c, 0xb6, 0x11, 0x50, 0xee, 0x61, 0x5c,
+		0x10, 0x5c, 0xd0, 0x90, 0x02, 0x2e, 0x12, 0xe0,
+		0x50, 0x44, 0xad, 0x75, 0xcd, 0x94, 0xcf, 0x92,
+		0xcb, 0xe3, 0xe8, 0x77, 0x4b, 0xd7, 0x1a, 0x7c,
+		0xdd, 0x6b, 0x49, 0x21, 0x7c, 0xe8, 0x2c, 0x25,
+		0x49, 0x86, 0x1e, 0x54, 0xae, 0xfc, 0x0e, 0x80,
+		0xb1, 0xd5, 0xa5, 0x23, 0xcf, 0xcc, 0x0e, 0x11,
+		0xe2, 0x7c, 0x3c, 0x25, 0x78, 0x64, 0x03, 0xa1,
+		0xdd, 0x9f, 0x74, 0x12, 0x7b, 0x21, 0xb5, 0x73,
+		0x15, 0x3c, 0xed, 0xad, 0x07, 0x62, 0x21, 0x79,
+		0xd4, 0x2f, 0x0d, 0x72, 0xe9, 0x7c, 0x6b, 0x96,
+		0x6e, 0xe5, 0x36, 0x4a, 0xd2, 0x38, 0xe1, 0xff,
+		0x6e, 0x26, 0xa4, 0xac, 0x83, 0x07, 0xe6, 0x67,
+		0x74, 0x6c, 0xec, 0x8b, 0x4b, 0x79, 0x33, 0x50,
+		0x2f, 0x8f, 0xa0, 0x8f, 0xfa, 0x38, 0x6a, 0xa2,
+		0x3a, 0x42, 0x85, 0x15, 0x90, 0xd0, 0xb3, 0x0d,
+		0x8a, 0xe4, 0x60, 0x03, 0xef, 0xf9, 0x65, 0x8a,
+		0x4e, 0x50, 0x8c, 0x65, 0xba, 0x61, 0x16, 0xc3,
+		0x93, 0xb7, 0x75, 0x21, 0x98, 0x25, 0x60, 0x6e,
+		0x3d, 0x68, 0xba, 0x7c, 0xe4, 0xf3, 0xd9, 0x9b,
+		0xfb, 0x7a, 0xed, 0x1f, 0xb3, 0x4b, 0x88, 0x74,
+		0x2c, 0xb8, 0x8c, 0x22, 0x95, 0xce, 0x90, 0xf1,
+		0xdb, 0x80, 0xa6, 0x39, 0xae, 0x82, 0xa1, 0xef,
+		0x75, 0xec, 0xfe, 0xf1, 0xe8, 0x04, 0xfd, 0x99,
+		0x1b, 0x5f, 0x45, 0x87, 0x4f, 0xfa, 0xa2, 0x3e,
+		0x3e, 0xb5, 0x01, 0x4b, 0x46, 0xeb, 0x13, 0x9a,
+		0xe4, 0x7d, 0x03, 0x87, 0xb1, 0x59, 0x91, 0x8e,
+		0x37, 0xd3, 0x16, 0xce, 0xef, 0x4b, 0xe9, 0x46,
+		0x8d, 0x2a, 0x50, 0x2f, 0x41, 0xd3, 0x7b, 0xcf,
+		0xf0, 0xb7, 0x8b, 0x65, 0x0f, 0xa3, 0x27, 0x10,
+		0xe9, 0xa9, 0xe9, 0x2c, 0xbe, 0xbb, 0x82, 0xe3,
+		0x7b, 0x0b, 0x81, 0x3e, 0xa4, 0x6a, 0x4f, 0x3b,
+		0xd5, 0x61, 0xf8, 0x47, 0x04, 0x99, 0x5b, 0xff,
+		0xf3, 0x14, 0x6e, 0x57, 0x5b, 0xbf, 0x1b, 0xb4,
+		0x3f, 0xf9, 0x31, 0xf6, 0x95, 0xd5, 0x10, 0xa9,
+		0x72, 0x28, 0x23, 0xa9, 0x6a, 0xa2, 0xcf, 0x7d,
+		0xe3, 0x18, 0x95, 0xda, 0xbc, 0x6f, 0xe9, 0xd8,
+		0xef, 0x49, 0x3f, 0xd3, 0xef, 0x1f, 0xe1, 0x50,
+		0xe8, 0x8a, 0xc0, 0xce, 0xcc, 0xb7, 0x5e, 0x0e,
+		0x8b, 0x95, 0x80, 0xfd, 0x58, 0x2a, 0x9b, 0xc8,
+		0xb4, 0x17, 0x04, 0x46, 0x74, 0xd4, 0x68, 0x91,
+		0x33, 0xc8, 0x31, 0x15, 0x84, 0x16, 0x35, 0x03,
+		0x64, 0x6d, 0xa9, 0x4e, 0x20, 0xeb, 0xa9, 0x3f,
+		0x21, 0x5e, 0x9b, 0x09, 0xc3, 0x45, 0xf8, 0x7c,
+		0x59, 0x62, 0x29, 0x9a, 0x5c, 0xcf, 0xb4, 0x27,
+		0x5e, 0x13, 0xea, 0xb3, 0xef, 0xd9, 0x01, 0x2a,
+		0x65, 0x5f, 0x14, 0xf4, 0xbf, 0x28, 0x89, 0x3d,
+		0xdd, 0x9d, 0x52, 0xbd, 0x9e, 0x5b, 0x3b, 0xd2,
+		0xc2, 0x81, 0x35, 0xb6, 0xac, 0xdd, 0x27, 0xc3,
+		0x7b, 0x01, 0x5a, 0x6d, 0x4c, 0x5e, 0x2c, 0x30,
+		0xcb, 0x3a, 0xfa, 0xc1, 0xd7, 0x31, 0x67, 0x3e,
+		0x08, 0x6a, 0xe8, 0x8c, 0x75, 0xac, 0x1a, 0x6a,
+		0x52, 0xf7, 0x51, 0xcd, 0x85, 0x3f, 0x3c, 0xa7,
+		0xea, 0xbc, 0xd7, 0x18, 0x9e, 0x27, 0x73, 0xe6,
+		0x2b, 0x58, 0xb6, 0xd2, 0x29, 0x68, 0xd5, 0x8f,
+		0x00, 0x4d, 0x55, 0xf6, 0x61, 0x5a, 0xcc, 0x51,
+		0xa6, 0x5e, 0x85, 0xcb, 0x0b, 0xfd, 0x06, 0xca,
+		0xf5, 0xbf, 0x0d, 0x13, 0x74, 0x78, 0x6d, 0x9e,
+		0x20, 0x11, 0x84, 0x3e, 0x78, 0x17, 0x04, 0x4f,
+		0x64, 0x2c, 0x3b, 0x3e, 0x93, 0x7b, 0x58, 0x33,
+		0x07, 0x52, 0xf7, 0x60, 0x6a, 0xa8, 0x3b, 0x19,
+		0x27, 0x7a, 0x93, 0xc5, 0x53, 0xad, 0xec, 0xf6,
+		0xc8, 0x94, 0xee, 0x92, 0xea, 0xee, 0x7e, 0xea,
+		0xb9, 0x5f, 0xac, 0x59, 0x5d, 0x2e, 0x78, 0x53,
+		0x72, 0x81, 0x92, 0xdd, 0x1c, 0x63, 0xbe, 0x02,
+		0xeb, 0xa8, 0x1b, 0x2a, 0x6e, 0x72, 0xe3, 0x2d,
+		0x84, 0x0d, 0x8a, 0x22, 0xf6, 0xba, 0xab, 0x04,
+		0x8e, 0x04, 0x24, 0xdb, 0xcc, 0xe2, 0x69, 0xeb,
+		0x4e, 0xfa, 0x6b, 0x5b, 0xc8, 0xc0, 0xd9, 0x25,
+		0xcb, 0x40, 0x8d, 0x4b, 0x8e, 0xa0, 0xd4, 0x72,
+		0x98, 0x36, 0x46, 0x3b, 0x4f, 0x5f, 0x96, 0x84,
+		0x03, 0x28, 0x86, 0x4d, 0xa1, 0x8a, 0xd7, 0xb2,
+		0x5b, 0x27, 0x01, 0x80, 0x62, 0x49, 0x56, 0xb9,
+		0xa0, 0xa1, 0xe3, 0x6e, 0x22, 0x2a, 0x5d, 0x03,
+		0x86, 0x40, 0x36, 0x22, 0x5e, 0xd2, 0xe5, 0xc0,
+		0x6b, 0xfa, 0xac, 0x80, 0x4e, 0x09, 0x99, 0xbc,
+		0x2f, 0x9b, 0xcc, 0xf3, 0x4e, 0xf7, 0x99, 0x98,
+		0x11, 0x6e, 0x6f, 0x62, 0x22, 0x6b, 0x92, 0x95,
+		0x3b, 0xc3, 0xd2, 0x8e, 0x0f, 0x07, 0xc2, 0x51,
+		0x5c, 0x4d, 0xb2, 0x6e, 0xc0, 0x27, 0x73, 0xcd,
+		0x57, 0xb7, 0xf0, 0xe9, 0x2e, 0xc8, 0xe2, 0x0c,
+		0xd1, 0xb5, 0x0f, 0xff, 0xf9, 0xec, 0x38, 0xba,
+		0x97, 0xd6, 0x94, 0x9b, 0xd1, 0x79, 0xb6, 0x6a,
+		0x01, 0x17, 0xe4, 0x7e, 0xa6, 0xd5, 0x86, 0x19,
+		0xae, 0xf3, 0xf0, 0x62, 0x73, 0xc0, 0xf0, 0x0a,
+		0x7a, 0x96, 0x93, 0x72, 0x89, 0x7e, 0x25, 0x57,
+		0xf8, 0xf7, 0xd5, 0x1e, 0xe5, 0xac, 0xd6, 0x38,
+		0x4f, 0xe8, 0x81, 0xd1, 0x53, 0x41, 0x07, 0x2d,
+		0x58, 0x34, 0x1c, 0xef, 0x74, 0x2e, 0x61, 0xca,
+		0xd3, 0xeb, 0xd6, 0x93, 0x0a, 0xf2, 0xf2, 0x86,
+		0x9c, 0xe3, 0x7a, 0x52, 0xf5, 0x42, 0xf1, 0x8b,
+		0x10, 0xf2, 0x25, 0x68, 0x7e, 0x61, 0xb1, 0x19,
+		0xcf, 0x8f, 0x5a, 0x53, 0xb7, 0x68, 0x4f, 0x1a,
+		0x71, 0xe9, 0x83, 0x91, 0x3a, 0x78, 0x0f, 0xf7,
+		0xd4, 0x74, 0xf5, 0x06, 0xd2, 0x88, 0xb0, 0x06,
+		0xe5, 0xc0, 0xfb, 0xb3, 0x91, 0xad, 0xc0, 0x84,
+		0x31, 0xf2, 0x3a, 0xcf, 0x63, 0xe6, 0x4a, 0xd3,
+		0x78, 0xbe, 0xde, 0x73, 0x3e, 0x02, 0x8e, 0xb8,
+		0x3a, 0xf6, 0x55, 0xa7, 0xf8, 0x5a, 0xb5, 0x0e,
+		0x0c, 0xc5, 0xe5, 0x66, 0xd5, 0xd2, 0x18, 0xf3,
+		0xef, 0xa5, 0xc9, 0x68, 0x69, 0xe0, 0xcd, 0x00,
+		0x33, 0x99, 0x6e, 0xea, 0xcb, 0x06, 0x7a, 0xe1,
+		0xe1, 0x19, 0x0b, 0xe7, 0x08, 0xcd, 0x09, 0x1b,
+		0x85, 0xec, 0xc4, 0xd4, 0x75, 0xf0, 0xd6, 0xfb,
+		0x84, 0x95, 0x07, 0x44, 0xca, 0xa5, 0x2a, 0x6c,
+		0xc2, 0x00, 0x58, 0x08, 0x87, 0x9e, 0x0a, 0xd4,
+		0x06, 0xe2, 0x91, 0x5f, 0xb7, 0x1b, 0x11, 0xfa,
+		0x85, 0xfc, 0x7c, 0xf2, 0x0f, 0x6e, 0x3c, 0x8a,
+		0xe1, 0x0f, 0xa0, 0x33, 0x84, 0xce, 0x81, 0x4d,
+		0x32, 0x4d, 0xeb, 0x41, 0xcf, 0x5a, 0x05, 0x60,
+		0x47, 0x6c, 0x2a, 0xc4, 0x17, 0xd5, 0x16, 0x3a,
+		0xe4, 0xe7, 0xab, 0x84, 0x94, 0x22, 0xff, 0x56,
+		0xb0, 0x0c, 0x92, 0x6c, 0x19, 0x11, 0x4c, 0xb3,
+		0xed, 0x58, 0x48, 0x84, 0x2a, 0xe2, 0x19, 0x2a,
+		0xe1, 0xc0, 0x56, 0x82, 0x3c, 0x83, 0xb4, 0x58,
+		0x2d, 0xf0, 0xb5, 0x1e, 0x76, 0x85, 0x51, 0xc2,
+		0xe4, 0x95, 0x27, 0x96, 0xd1, 0x90, 0xc3, 0x17,
+		0x75, 0xa1, 0xbb, 0x46, 0x5f, 0xa6, 0xf2, 0xef,
+		0x71, 0x56, 0x92, 0xc5, 0x8a, 0x85, 0x52, 0xe4,
+		0x63, 0x21, 0x6f, 0x55, 0x85, 0x2b, 0x6b, 0x0d,
+		0xc9, 0x92, 0x77, 0x67, 0xe3, 0xff, 0x2a, 0x2b,
+		0x90, 0x01, 0x3d, 0x74, 0x63, 0x04, 0x61, 0x3c,
+		0x8e, 0xf8, 0xfc, 0x04, 0xdd, 0x21, 0x85, 0x92,
+		0x1e, 0x4d, 0x51, 0x8d, 0xb5, 0x6b, 0xf1, 0xda,
+		0x96, 0xf5, 0x8e, 0x3c, 0x38, 0x5a, 0xac, 0x9b,
+		0xba, 0x0c, 0x84, 0x5d, 0x50, 0x12, 0xc7, 0xc5,
+		0x7a, 0xcb, 0xb1, 0xfa, 0x16, 0x93, 0xdf, 0x98,
+		0xda, 0x3f, 0x49, 0xa3, 0x94, 0x78, 0x70, 0xc7,
+		0x0b, 0xb6, 0x91, 0xa6, 0x16, 0x2e, 0xcf, 0xfd,
+		0x51, 0x6a, 0x5b, 0xad, 0x7a, 0xdd, 0xa9, 0x48,
+		0x48, 0xac, 0xd6, 0x45, 0xbc, 0x23, 0x31, 0x1d,
+		0x86, 0x54, 0x8a, 0x7f, 0x04, 0x97, 0x71, 0x9e,
+		0xbc, 0x2e, 0x6b, 0xd9, 0x33, 0xc8, 0x20, 0xc9,
+		0xe0, 0x25, 0x86, 0x59, 0x15, 0xcf, 0x63, 0xe5,
+		0x99, 0xf1, 0x24, 0xf1, 0xba, 0xc4, 0x15, 0x02,
+		0xe2, 0xdb, 0xfe, 0x4a, 0xf8, 0x3b, 0x91, 0x13,
+		0x8d, 0x03, 0x81, 0x9f, 0xb3, 0x3f, 0x04, 0x03,
+		0x58, 0xc0, 0xef, 0x27, 0x82, 0x14, 0xd2, 0x7f,
+		0x93, 0x70, 0xb7, 0xb2, 0x02, 0x21, 0xb3, 0x07,
+		0x7f, 0x1c, 0xef, 0x88, 0xee, 0x29, 0x7a, 0x0b,
+		0x3d, 0x75, 0x5a, 0x93, 0xfe, 0x7f, 0x14, 0xf7,
+		0x4e, 0x4b, 0x7f, 0x21, 0x02, 0xad, 0xf9, 0x43,
+		0x29, 0x1a, 0xe8, 0x1b, 0xf5, 0x32, 0xb2, 0x96,
+		0xe6, 0xe8, 0x96, 0x20, 0x9b, 0x96, 0x8e, 0x7b,
+		0xfe, 0xd8, 0xc9, 0x9c, 0x65, 0x16, 0xd6, 0x68,
+		0x95, 0xf8, 0x22, 0xe2, 0xae, 0x84, 0x03, 0xfd,
+		0x87, 0xa2, 0x72, 0x79, 0x74, 0x95, 0xfa, 0xe1,
+		0xfe, 0xd0, 0x4e, 0x3d, 0x39, 0x2e, 0x67, 0x55,
+		0x71, 0x6c, 0x89, 0x33, 0x49, 0x0c, 0x1b, 0x46,
+		0x92, 0x31, 0x6f, 0xa6, 0xf0, 0x09, 0xbd, 0x2d,
+		0xe2, 0xca, 0xda, 0x18, 0x33, 0xce, 0x67, 0x37,
+		0xfd, 0x6f, 0xcb, 0x9d, 0xbd, 0x42, 0xbc, 0xb2,
+		0x9c, 0x28, 0xcd, 0x65, 0x3c, 0x61, 0xbc, 0xde,
+		0x9d, 0xe1, 0x2a, 0x3e, 0xbf, 0xee, 0x3c, 0xcb,
+		0xb1, 0x50, 0xa9, 0x2c, 0xbe, 0xb5, 0x43, 0xd0,
+		0xec, 0x29, 0xf9, 0x16, 0x6f, 0x31, 0xd9, 0x9b,
+		0x92, 0xb1, 0x32, 0xae, 0x0f, 0xb6, 0x9d, 0x0e,
+		0x25, 0x7f, 0x89, 0x1f, 0x1d, 0x01, 0x68, 0xab,
+		0x3d, 0xd1, 0x74, 0x5b, 0x4c, 0x38, 0x7f, 0x3d,
+		0x33, 0xa5, 0xa2, 0x9f, 0xda, 0x84, 0xa5, 0x82,
+		0x2d, 0x16, 0x66, 0x46, 0x08, 0x30, 0x14, 0x48,
+		0x5e, 0xca, 0xe3, 0xf4, 0x8c, 0xcb, 0x32, 0xc6,
+		0xf1, 0x43, 0x62, 0xc6, 0xef, 0x16, 0xfa, 0x43,
+		0xae, 0x9c, 0x53, 0xe3, 0x49, 0x45, 0x80, 0xfd,
+		0x1d, 0x8c, 0xa9, 0x6d, 0x77, 0x76, 0xaa, 0x40,
+		0xc4, 0x4e, 0x7b, 0x78, 0x6b, 0xe0, 0x1d, 0xce,
+		0x56, 0x3d, 0xf0, 0x11, 0xfe, 0x4f, 0x6a, 0x6d,
+		0x0f, 0x4f, 0x90, 0x38, 0x92, 0x17, 0xfa, 0x56,
+		0x12, 0xa6, 0xa1, 0x0a, 0xea, 0x2f, 0x50, 0xf9,
+		0x60, 0x66, 0x6c, 0x7d, 0x5a, 0x08, 0x8e, 0x3c,
+		0xf3, 0xf0, 0x33, 0x02, 0x11, 0x02, 0xfe, 0x4c,
+		0x56, 0x2b, 0x9f, 0x0c, 0xbd, 0x65, 0x8a, 0x83,
+		0xde, 0x7c, 0x05, 0x26, 0x93, 0x19, 0xcc, 0xf3,
+		0x71, 0x0e, 0xad, 0x2f, 0xb3, 0xc9, 0x38, 0x50,
+		0x64, 0xd5, 0x4c, 0x60, 0x5f, 0x02, 0x13, 0x34,
+		0xc9, 0x75, 0xc4, 0x60, 0xab, 0x2e, 0x17, 0x7d
+};
+
+static const uint8_t AES_CBC_ciphertext_2048B[] = {
+		0x8b, 0x55, 0xbd, 0xfd, 0x2b, 0x35, 0x76, 0x5c,
+		0xd1, 0x90, 0xd7, 0x6a, 0x63, 0x1e, 0x39, 0x71,
+		0x0d, 0x5c, 0xd8, 0x03, 0x00, 0x75, 0xf1, 0x07,
+		0x03, 0x8d, 0x76, 0xeb, 0x3b, 0x00, 0x1e, 0x33,
+		0x88, 0xfc, 0x8f, 0x08, 0x4d, 0x33, 0xf1, 0x3c,
+		0xee, 0xd0, 0x5d, 0x19, 0x8b, 0x3c, 0x50, 0x86,
+		0xfd, 0x8d, 0x58, 0x21, 0xb4, 0xae, 0x0f, 0x81,
+		0xe9, 0x9f, 0xc9, 0xc0, 0x90, 0xf7, 0x04, 0x6f,
+		0x39, 0x1d, 0x8a, 0x3f, 0x8d, 0x32, 0x23, 0xb5,
+		0x1f, 0xcc, 0x8a, 0x12, 0x2d, 0x46, 0x82, 0x5e,
+		0x6a, 0x34, 0x8c, 0xb1, 0x93, 0x70, 0x3b, 0xde,
+		0x55, 0xaf, 0x16, 0x35, 0x99, 0x84, 0xd5, 0x88,
+		0xc9, 0x54, 0xb1, 0xb2, 0xd3, 0xeb, 0x9e, 0x55,
+		0x9a, 0xa9, 0xa7, 0xf5, 0xda, 0x29, 0xcf, 0xe1,
+		0x98, 0x64, 0x45, 0x77, 0xf2, 0x12, 0x69, 0x8f,
+		0x78, 0xd8, 0x82, 0x41, 0xb2, 0x9f, 0xe2, 0x1c,
+		0x63, 0x9b, 0x24, 0x81, 0x67, 0x95, 0xa2, 0xff,
+		0x26, 0x9d, 0x65, 0x48, 0x61, 0x30, 0x66, 0x41,
+		0x68, 0x84, 0xbb, 0x59, 0x14, 0x8e, 0x9a, 0x62,
+		0xb6, 0xca, 0xda, 0xbe, 0x7c, 0x41, 0x52, 0x6e,
+		0x1b, 0x86, 0xbf, 0x08, 0xeb, 0x37, 0x84, 0x60,
+		0xe4, 0xc4, 0x1e, 0xa8, 0x4c, 0x84, 0x60, 0x2f,
+		0x70, 0x90, 0xf2, 0x26, 0xe7, 0x65, 0x0c, 0xc4,
+		0x58, 0x36, 0x8e, 0x4d, 0xdf, 0xff, 0x9a, 0x39,
+		0x93, 0x01, 0xcf, 0x6f, 0x6d, 0xde, 0xef, 0x79,
+		0xb0, 0xce, 0xe2, 0x98, 0xdb, 0x85, 0x8d, 0x62,
+		0x9d, 0xb9, 0x63, 0xfd, 0xf0, 0x35, 0xb5, 0xa9,
+		0x1b, 0xf9, 0xe5, 0xd4, 0x2e, 0x22, 0x2d, 0xcc,
+		0x42, 0xbf, 0x0e, 0x51, 0xf7, 0x15, 0x07, 0x32,
+		0x75, 0x5b, 0x74, 0xbb, 0x00, 0xef, 0xd4, 0x66,
+		0x8b, 0xad, 0x71, 0x53, 0x94, 0xd7, 0x7d, 0x2c,
+		0x40, 0x3e, 0x69, 0xa0, 0x4c, 0x86, 0x5e, 0x06,
+		0xed, 0xdf, 0x22, 0xe2, 0x24, 0x25, 0x4e, 0x9b,
+		0x5f, 0x49, 0x74, 0xba, 0xed, 0xb1, 0xa6, 0xeb,
+		0xae, 0x3f, 0xc6, 0x9e, 0x0b, 0x29, 0x28, 0x9a,
+		0xb6, 0xb2, 0x74, 0x58, 0xec, 0xa6, 0x4a, 0xed,
+		0xe5, 0x10, 0x00, 0x85, 0xe1, 0x63, 0x41, 0x61,
+		0x30, 0x7c, 0x97, 0xcf, 0x75, 0xcf, 0xb6, 0xf3,
+		0xf7, 0xda, 0x35, 0x3f, 0x85, 0x8c, 0x64, 0xca,
+		0xb7, 0xea, 0x7f, 0xe4, 0xa3, 0x4d, 0x30, 0x84,
+		0x8c, 0x9c, 0x80, 0x5a, 0x50, 0xa5, 0x64, 0xae,
+		0x26, 0xd3, 0xb5, 0x01, 0x73, 0x36, 0x8a, 0x92,
+		0x49, 0xc4, 0x1a, 0x94, 0x81, 0x9d, 0xf5, 0x6c,
+		0x50, 0xe1, 0x58, 0x0b, 0x75, 0xdd, 0x6b, 0x6a,
+		0xca, 0x69, 0xea, 0xc3, 0x33, 0x90, 0x9f, 0x3b,
+		0x65, 0x5d, 0x5e, 0xee, 0x31, 0xb7, 0x32, 0xfd,
+		0x56, 0x83, 0xb6, 0xfb, 0xa8, 0x04, 0xfc, 0x1e,
+		0x11, 0xfb, 0x02, 0x23, 0x53, 0x49, 0x45, 0xb1,
+		0x07, 0xfc, 0xba, 0xe7, 0x5f, 0x5d, 0x2d, 0x7f,
+		0x9e, 0x46, 0xba, 0xe9, 0xb0, 0xdb, 0x32, 0x04,
+		0xa4, 0xa7, 0x98, 0xab, 0x91, 0xcd, 0x02, 0x05,
+		0xf5, 0x74, 0x31, 0x98, 0x83, 0x3d, 0x33, 0x11,
+		0x0e, 0xe3, 0x8d, 0xa8, 0xc9, 0x0e, 0xf3, 0xb9,
+		0x47, 0x67, 0xe9, 0x79, 0x2b, 0x34, 0xcd, 0x9b,
+		0x45, 0x75, 0x29, 0xf0, 0xbf, 0xcc, 0xda, 0x3a,
+		0x91, 0xb2, 0x15, 0x27, 0x7a, 0xe5, 0xf5, 0x6a,
+		0x5e, 0xbe, 0x2c, 0x98, 0xe8, 0x40, 0x96, 0x4f,
+		0x8a, 0x09, 0xfd, 0xf6, 0xb2, 0xe7, 0x45, 0xb6,
+		0x08, 0xc1, 0x69, 0xe1, 0xb3, 0xc4, 0x24, 0x34,
+		0x07, 0x85, 0xd5, 0xa9, 0x78, 0xca, 0xfa, 0x4b,
+		0x01, 0x19, 0x4d, 0x95, 0xdc, 0xa5, 0xc1, 0x9c,
+		0xec, 0x27, 0x5b, 0xa6, 0x54, 0x25, 0xbd, 0xc8,
+		0x0a, 0xb7, 0x11, 0xfb, 0x4e, 0xeb, 0x65, 0x2e,
+		0xe1, 0x08, 0x9c, 0x3a, 0x45, 0x44, 0x33, 0xef,
+		0x0d, 0xb9, 0xff, 0x3e, 0x68, 0x9c, 0x61, 0x2b,
+		0x11, 0xb8, 0x5c, 0x47, 0x0f, 0x94, 0xf2, 0xf8,
+		0x0b, 0xbb, 0x99, 0x18, 0x85, 0xa3, 0xba, 0x44,
+		0xf3, 0x79, 0xb3, 0x63, 0x2c, 0x1f, 0x2a, 0x35,
+		0x3b, 0x23, 0x98, 0xab, 0xf4, 0x16, 0x36, 0xf8,
+		0xde, 0x86, 0xa4, 0xd4, 0x75, 0xff, 0x51, 0xf9,
+		0xeb, 0x42, 0x5f, 0x55, 0xe2, 0xbe, 0xd1, 0x5b,
+		0xb5, 0x38, 0xeb, 0xb4, 0x4d, 0xec, 0xec, 0x99,
+		0xe1, 0x39, 0x43, 0xaa, 0x64, 0xf7, 0xc9, 0xd8,
+		0xf2, 0x9a, 0x71, 0x43, 0x39, 0x17, 0xe8, 0xa8,
+		0xa2, 0xe2, 0xa4, 0x2c, 0x18, 0x11, 0x49, 0xdf,
+		0x18, 0xdd, 0x85, 0x6e, 0x65, 0x96, 0xe2, 0xba,
+		0xa1, 0x0a, 0x2c, 0xca, 0xdc, 0x5f, 0xe4, 0xf4,
+		0x35, 0x03, 0xb2, 0xa9, 0xda, 0xcf, 0xb7, 0x6d,
+		0x65, 0x82, 0x82, 0x67, 0x9d, 0x0e, 0xf3, 0xe8,
+		0x85, 0x6c, 0x69, 0xb8, 0x4c, 0xa6, 0xc6, 0x2e,
+		0x40, 0xb5, 0x54, 0x28, 0x95, 0xe4, 0x57, 0xe0,
+		0x5b, 0xf8, 0xde, 0x59, 0xe0, 0xfd, 0x89, 0x48,
+		0xac, 0x56, 0x13, 0x54, 0xb9, 0x1b, 0xf5, 0x59,
+		0x97, 0xb6, 0xb3, 0xe8, 0xac, 0x2d, 0xfc, 0xd2,
+		0xea, 0x57, 0x96, 0x57, 0xa8, 0x26, 0x97, 0x2c,
+		0x01, 0x89, 0x56, 0xea, 0xec, 0x8c, 0x53, 0xd5,
+		0xd7, 0x9e, 0xc9, 0x98, 0x0b, 0xad, 0x03, 0x75,
+		0xa0, 0x6e, 0x98, 0x8b, 0x97, 0x8d, 0x8d, 0x85,
+		0x7d, 0x74, 0xa7, 0x2d, 0xde, 0x67, 0x0c, 0xcd,
+		0x54, 0xb8, 0x15, 0x7b, 0xeb, 0xf5, 0x84, 0xb9,
+		0x78, 0xab, 0xd8, 0x68, 0x91, 0x1f, 0x6a, 0xa6,
+		0x28, 0x22, 0xf7, 0x00, 0x49, 0x00, 0xbe, 0x41,
+		0x71, 0x0a, 0xf5, 0xe7, 0x9f, 0xb4, 0x11, 0x41,
+		0x3f, 0xcd, 0xa9, 0xa9, 0x01, 0x8b, 0x6a, 0xeb,
+		0x54, 0x4c, 0x58, 0x92, 0x68, 0x02, 0x0e, 0xe9,
+		0xed, 0x65, 0x4c, 0xfb, 0x95, 0x48, 0x58, 0xa2,
+		0xaa, 0x57, 0x69, 0x13, 0x82, 0x0c, 0x2c, 0x4b,
+		0x5d, 0x4e, 0x18, 0x30, 0xef, 0x1c, 0xb1, 0x9d,
+		0x05, 0x05, 0x02, 0x1c, 0x97, 0xc9, 0x48, 0xfe,
+		0x5e, 0x7b, 0x77, 0xa3, 0x1f, 0x2a, 0x81, 0x42,
+		0xf0, 0x4b, 0x85, 0x12, 0x9c, 0x1f, 0x44, 0xb1,
+		0x14, 0x91, 0x92, 0x65, 0x77, 0xb1, 0x87, 0xa2,
+		0xfc, 0xa4, 0xe7, 0xd2, 0x9b, 0xf2, 0x17, 0xf0,
+		0x30, 0x1c, 0x8d, 0x33, 0xbc, 0x25, 0x28, 0x48,
+		0xfd, 0x30, 0x79, 0x0a, 0x99, 0x3e, 0xb4, 0x0f,
+		0x1e, 0xa6, 0x68, 0x76, 0x19, 0x76, 0x29, 0xac,
+		0x5d, 0xb8, 0x1e, 0x42, 0xd6, 0x85, 0x04, 0xbf,
+		0x64, 0x1c, 0x2d, 0x53, 0xe9, 0x92, 0x78, 0xf8,
+		0xc3, 0xda, 0x96, 0x92, 0x10, 0x6f, 0x45, 0x85,
+		0xaf, 0x5e, 0xcc, 0xa8, 0xc0, 0xc6, 0x2e, 0x73,
+		0x51, 0x3f, 0x5e, 0xd7, 0x52, 0x33, 0x71, 0x12,
+		0x6d, 0x85, 0xee, 0xea, 0x85, 0xa8, 0x48, 0x2b,
+		0x40, 0x64, 0x6d, 0x28, 0x73, 0x16, 0xd7, 0x82,
+		0xd9, 0x90, 0xed, 0x1f, 0xa7, 0x5c, 0xb1, 0x5c,
+		0x27, 0xb9, 0x67, 0x8b, 0xb4, 0x17, 0x13, 0x83,
+		0x5f, 0x09, 0x72, 0x0a, 0xd7, 0xa0, 0xec, 0x81,
+		0x59, 0x19, 0xb9, 0xa6, 0x5a, 0x37, 0x34, 0x14,
+		0x47, 0xf6, 0xe7, 0x6c, 0xd2, 0x09, 0x10, 0xe7,
+		0xdd, 0xbb, 0x02, 0xd1, 0x28, 0xfa, 0x01, 0x2c,
+		0x93, 0x64, 0x2e, 0x1b, 0x4c, 0x02, 0x52, 0xcb,
+		0x07, 0xa1, 0xb6, 0x46, 0x02, 0x80, 0xd9, 0x8f,
+		0x5c, 0x62, 0xbe, 0x78, 0x9e, 0x75, 0xc4, 0x97,
+		0x91, 0x39, 0x12, 0x65, 0xb9, 0x3b, 0xc2, 0xd1,
+		0xaf, 0xf2, 0x1f, 0x4e, 0x4d, 0xd1, 0xf0, 0x9f,
+		0xb7, 0x12, 0xfd, 0xe8, 0x75, 0x18, 0xc0, 0x9d,
+		0x8c, 0x70, 0xff, 0x77, 0x05, 0xb6, 0x1a, 0x1f,
+		0x96, 0x48, 0xf6, 0xfe, 0xd5, 0x5d, 0x98, 0xa5,
+		0x72, 0x1c, 0x84, 0x76, 0x3e, 0xb8, 0x87, 0x37,
+		0xdd, 0xd4, 0x3a, 0x45, 0xdd, 0x09, 0xd8, 0xe7,
+		0x09, 0x2f, 0x3e, 0x33, 0x9e, 0x7b, 0x8c, 0xe4,
+		0x85, 0x12, 0x4e, 0xf8, 0x06, 0xb7, 0xb1, 0x85,
+		0x24, 0x96, 0xd8, 0xfe, 0x87, 0x92, 0x81, 0xb1,
+		0xa3, 0x38, 0xb9, 0x56, 0xe1, 0xf6, 0x36, 0x41,
+		0xbb, 0xd6, 0x56, 0x69, 0x94, 0x57, 0xb3, 0xa4,
+		0xca, 0xa4, 0xe1, 0x02, 0x3b, 0x96, 0x71, 0xe0,
+		0xb2, 0x2f, 0x85, 0x48, 0x1b, 0x4a, 0x41, 0x80,
+		0x4b, 0x9c, 0xe0, 0xc9, 0x39, 0xb8, 0xb1, 0xca,
+		0x64, 0x77, 0x46, 0x58, 0xe6, 0x84, 0xd5, 0x2b,
+		0x65, 0xce, 0xe9, 0x09, 0xa3, 0xaa, 0xfb, 0x83,
+		0xa9, 0x28, 0x68, 0xfd, 0xcd, 0xfd, 0x76, 0x83,
+		0xe1, 0x20, 0x22, 0x77, 0x3a, 0xa3, 0xb2, 0x93,
+		0x14, 0x91, 0xfc, 0xe2, 0x17, 0x63, 0x2b, 0xa6,
+		0x29, 0x38, 0x7b, 0x9b, 0x8b, 0x15, 0x77, 0xd6,
+		0xaa, 0x92, 0x51, 0x53, 0x50, 0xff, 0xa0, 0x35,
+		0xa0, 0x59, 0x7d, 0xf0, 0x11, 0x23, 0x49, 0xdf,
+		0x5a, 0x21, 0xc2, 0xfe, 0x35, 0xa0, 0x1d, 0xe2,
+		0xae, 0xa2, 0x8a, 0x61, 0x5b, 0xf7, 0xf1, 0x1c,
+		0x1c, 0xec, 0xc4, 0xf6, 0xdc, 0xaa, 0xc8, 0xc2,
+		0xe5, 0xa1, 0x2e, 0x14, 0xe5, 0xc6, 0xc9, 0x73,
+		0x03, 0x78, 0xeb, 0xed, 0xe0, 0x3e, 0xc5, 0xf4,
+		0xf1, 0x50, 0xb2, 0x01, 0x91, 0x96, 0xf5, 0xbb,
+		0xe1, 0x32, 0xcd, 0xa8, 0x66, 0xbf, 0x73, 0x85,
+		0x94, 0xd6, 0x7e, 0x68, 0xc5, 0xe4, 0xed, 0xd5,
+		0xe3, 0x67, 0x4c, 0xa5, 0xb3, 0x1f, 0xdf, 0xf8,
+		0xb3, 0x73, 0x5a, 0xac, 0xeb, 0x46, 0x16, 0x24,
+		0xab, 0xca, 0xa4, 0xdd, 0x87, 0x0e, 0x24, 0x83,
+		0x32, 0x04, 0x4c, 0xd8, 0xda, 0x7d, 0xdc, 0xe3,
+		0x01, 0x93, 0xf3, 0xc1, 0x5b, 0xbd, 0xc3, 0x1d,
+		0x40, 0x62, 0xde, 0x94, 0x03, 0x85, 0x91, 0x2a,
+		0xa0, 0x25, 0x10, 0xd3, 0x32, 0x9f, 0x93, 0x00,
+		0xa7, 0x8a, 0xfa, 0x77, 0x7c, 0xaf, 0x4d, 0xc8,
+		0x7a, 0xf3, 0x16, 0x2b, 0xba, 0xeb, 0x74, 0x51,
+		0xb8, 0xdd, 0x32, 0xad, 0x68, 0x7d, 0xdd, 0xca,
+		0x60, 0x98, 0xc9, 0x9b, 0xb6, 0x5d, 0x4d, 0x3a,
+		0x66, 0x8a, 0xbe, 0x05, 0xf9, 0x0c, 0xc5, 0xba,
+		0x52, 0x82, 0x09, 0x1f, 0x5a, 0x66, 0x89, 0x69,
+		0xa3, 0x5d, 0x93, 0x50, 0x7d, 0x44, 0xc3, 0x2a,
+		0xb8, 0xab, 0xec, 0xa6, 0x5a, 0xae, 0x4a, 0x6a,
+		0xcd, 0xfd, 0xb6, 0xff, 0x3d, 0x98, 0x05, 0xd9,
+		0x5b, 0x29, 0xc4, 0x6f, 0xe0, 0x76, 0xe2, 0x3f,
+		0xec, 0xd7, 0xa4, 0x91, 0x63, 0xf5, 0x4e, 0x4b,
+		0xab, 0x20, 0x8c, 0x3a, 0x41, 0xed, 0x8b, 0x4b,
+		0xb9, 0x01, 0x21, 0xc0, 0x6d, 0xfd, 0x70, 0x5b,
+		0x20, 0x92, 0x41, 0x89, 0x74, 0xb7, 0xe9, 0x8b,
+		0xfc, 0x6d, 0x17, 0x3f, 0x7f, 0x89, 0x3d, 0x6b,
+		0x8f, 0xbc, 0xd2, 0x57, 0xe9, 0xc9, 0x6e, 0xa7,
+		0x19, 0x26, 0x18, 0xad, 0xef, 0xb5, 0x87, 0xbf,
+		0xb8, 0xa8, 0xd6, 0x7d, 0xdd, 0x5f, 0x94, 0x54,
+		0x09, 0x92, 0x2b, 0xf5, 0x04, 0xf7, 0x36, 0x69,
+		0x8e, 0xf4, 0xdc, 0x1d, 0x6e, 0x55, 0xbb, 0xe9,
+		0x13, 0x05, 0x83, 0x35, 0x9c, 0xed, 0xcf, 0x8c,
+		0x26, 0x8c, 0x7b, 0xc7, 0x0b, 0xba, 0xfd, 0xe2,
+		0x84, 0x5c, 0x2a, 0x79, 0x43, 0x99, 0xb2, 0xc3,
+		0x82, 0x87, 0xc8, 0xcd, 0x37, 0x6d, 0xa1, 0x2b,
+		0x39, 0xb2, 0x38, 0x99, 0xd9, 0xfc, 0x02, 0x15,
+		0x55, 0x21, 0x62, 0x59, 0xeb, 0x00, 0x86, 0x08,
+		0x20, 0xbe, 0x1a, 0x62, 0x4d, 0x7e, 0xdf, 0x68,
+		0x73, 0x5b, 0x5f, 0xaf, 0x84, 0x96, 0x2e, 0x1f,
+		0x6b, 0x03, 0xc9, 0xa6, 0x75, 0x18, 0xe9, 0xd4,
+		0xbd, 0xc8, 0xec, 0x9a, 0x5a, 0xb3, 0x99, 0xab,
+		0x5f, 0x7c, 0x08, 0x7f, 0x69, 0x4d, 0x52, 0xa2,
+		0x30, 0x17, 0x3b, 0x16, 0x15, 0x1b, 0x11, 0x62,
+		0x3e, 0x80, 0x4b, 0x85, 0x7c, 0x9c, 0xd1, 0x3a,
+		0x13, 0x01, 0x5e, 0x45, 0xf1, 0xc8, 0x5f, 0xcd,
+		0x0e, 0x21, 0xf5, 0x82, 0xd4, 0x7b, 0x5c, 0x45,
+		0x27, 0x6b, 0xef, 0xfe, 0xb8, 0xc0, 0x6f, 0xdc,
+		0x60, 0x7b, 0xe4, 0xd5, 0x75, 0x71, 0xe6, 0xe8,
+		0x7d, 0x6b, 0x6d, 0x80, 0xaf, 0x76, 0x41, 0x58,
+		0xb7, 0xac, 0xb7, 0x13, 0x2f, 0x81, 0xcc, 0xf9,
+		0x19, 0x97, 0xe8, 0xee, 0x40, 0x91, 0xfc, 0x89,
+		0x13, 0x1e, 0x67, 0x9a, 0xdb, 0x8f, 0x8f, 0xc7,
+		0x4a, 0xc9, 0xaf, 0x2f, 0x67, 0x01, 0x3c, 0xb8,
+		0xa8, 0x3e, 0x78, 0x93, 0x1b, 0xdf, 0xbb, 0x34,
+		0x0b, 0x1a, 0xfa, 0xc2, 0x2d, 0xc5, 0x1c, 0xec,
+		0x97, 0x4f, 0x48, 0x41, 0x15, 0x0e, 0x75, 0xed,
+		0x66, 0x8c, 0x17, 0x7f, 0xb1, 0x48, 0x13, 0xc1,
+		0xfb, 0x60, 0x06, 0xf9, 0x72, 0x41, 0x3e, 0xcf,
+		0x6e, 0xb6, 0xc8, 0xeb, 0x4b, 0x5a, 0xd2, 0x0c,
+		0x28, 0xda, 0x02, 0x7a, 0x46, 0x21, 0x42, 0xb5,
+		0x34, 0xda, 0xcb, 0x5e, 0xbd, 0x66, 0x5c, 0xca,
+		0xff, 0x52, 0x43, 0x89, 0xf9, 0x10, 0x9a, 0x9e,
+		0x9b, 0xe3, 0xb0, 0x51, 0xe9, 0xf3, 0x0a, 0x35,
+		0x77, 0x54, 0xcc, 0xac, 0xa6, 0xf1, 0x2e, 0x36,
+		0x89, 0xac, 0xc5, 0xc6, 0x62, 0x5a, 0xc0, 0x6d,
+		0xc4, 0xe1, 0xf7, 0x64, 0x30, 0xff, 0x11, 0x40,
+		0x13, 0x89, 0xd8, 0xd7, 0x73, 0x3f, 0x93, 0x08,
+		0x68, 0xab, 0x66, 0x09, 0x1a, 0xea, 0x78, 0xc9,
+		0x52, 0xf2, 0xfd, 0x93, 0x1b, 0x94, 0xbe, 0x5c,
+		0xe5, 0x00, 0x6e, 0x00, 0xb9, 0xea, 0x27, 0xaa,
+		0xb3, 0xee, 0xe3, 0xc8, 0x6a, 0xb0, 0xc1, 0x8e,
+		0x9b, 0x54, 0x40, 0x10, 0x96, 0x06, 0xe8, 0xb3,
+		0xf5, 0x55, 0x77, 0xd7, 0x5c, 0x94, 0xc1, 0x74,
+		0xf3, 0x07, 0x64, 0xac, 0x1c, 0xde, 0xc7, 0x22,
+		0xb0, 0xbf, 0x2a, 0x5a, 0xc0, 0x8f, 0x8a, 0x83,
+		0x50, 0xc2, 0x5e, 0x97, 0xa0, 0xbe, 0x49, 0x7e,
+		0x47, 0xaf, 0xa7, 0x20, 0x02, 0x35, 0xa4, 0x57,
+		0xd9, 0x26, 0x63, 0xdb, 0xf1, 0x34, 0x42, 0x89,
+		0x36, 0xd1, 0x77, 0x6f, 0xb1, 0xea, 0x79, 0x7e,
+		0x95, 0x10, 0x5a, 0xee, 0xa3, 0xae, 0x6f, 0xba,
+		0xa9, 0xef, 0x5a, 0x7e, 0x34, 0x03, 0x04, 0x07,
+		0x92, 0xd6, 0x07, 0x79, 0xaa, 0x14, 0x90, 0x97,
+		0x05, 0x4d, 0xa6, 0x27, 0x10, 0x5c, 0x25, 0x24,
+		0xcb, 0xcc, 0xf6, 0x77, 0x9e, 0x43, 0x23, 0xd4,
+		0x98, 0xef, 0x22, 0xa8, 0xad, 0xf2, 0x26, 0x08,
+		0x59, 0x69, 0xa4, 0xc3, 0x97, 0xe0, 0x5c, 0x6f,
+		0xeb, 0x3d, 0xd4, 0x62, 0x6e, 0x80, 0x61, 0x02,
+		0xf4, 0xfc, 0x94, 0x79, 0xbb, 0x4e, 0x6d, 0xd7,
+		0x30, 0x5b, 0x10, 0x11, 0x5a, 0x3d, 0xa7, 0x50,
+		0x1d, 0x9a, 0x13, 0x5f, 0x4f, 0xa8, 0xa7, 0xb6,
+		0x39, 0xc7, 0xea, 0xe6, 0x19, 0x61, 0x69, 0xc7,
+		0x9a, 0x3a, 0xeb, 0x9d, 0xdc, 0xf7, 0x06, 0x37,
+		0xbd, 0xac, 0xe3, 0x18, 0xff, 0xfe, 0x11, 0xdb,
+		0x67, 0x42, 0xb4, 0xea, 0xa8, 0xbd, 0xb0, 0x76,
+		0xd2, 0x74, 0x32, 0xc2, 0xa4, 0x9c, 0xe7, 0x60,
+		0xc5, 0x30, 0x9a, 0x57, 0x66, 0xcd, 0x0f, 0x02,
+		0x4c, 0xea, 0xe9, 0xd3, 0x2a, 0x5c, 0x09, 0xc2,
+		0xff, 0x6a, 0xde, 0x5d, 0xb7, 0xe9, 0x75, 0x6b,
+		0x29, 0x94, 0xd6, 0xf7, 0xc3, 0xdf, 0xfb, 0x70,
+		0xec, 0xb5, 0x8c, 0xb0, 0x78, 0x7a, 0xee, 0x52,
+		0x5f, 0x8c, 0xae, 0x85, 0xe5, 0x98, 0xa2, 0xb7,
+		0x7c, 0x02, 0x2a, 0xcc, 0x9e, 0xde, 0x99, 0x5f,
+		0x84, 0x20, 0xbb, 0xdc, 0xf2, 0xd2, 0x13, 0x46,
+		0x3c, 0xd6, 0x4d, 0xe7, 0x50, 0xef, 0x55, 0xc3,
+		0x96, 0x9f, 0xec, 0x6c, 0xd8, 0xe2, 0xea, 0xed,
+		0xc7, 0x33, 0xc9, 0xb3, 0x1c, 0x4f, 0x1d, 0x83,
+		0x1d, 0xe4, 0xdd, 0xb2, 0x24, 0x8f, 0xf9, 0xf5
+};
+
+
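+/* HMAC-SHA256 digests computed over the corresponding AES-CBC ciphertexts */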
+static const uint8_t HMAC_SHA256_ciphertext_64B_digest[] = {
+		0xc5, 0x6d, 0x4f, 0x29, 0xf4, 0xd2, 0xcc, 0x87,
+		0x3c, 0x81, 0x02, 0x6d, 0x38, 0x7a, 0x67, 0x3e,
+		0x95, 0x9c, 0x5c, 0x8f, 0xda, 0x5c, 0x06, 0xe0,
+		0x65, 0xf1, 0x6c, 0x51, 0x52, 0x49, 0x3e, 0x5f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_128B_digest[] = {
+		0x76, 0x64, 0x2d, 0x69, 0x71, 0x5d, 0x6a, 0xd8,
+		0x9f, 0x74, 0x11, 0x2f, 0x58, 0xe0, 0x4a, 0x2f,
+		0x6c, 0x88, 0x5e, 0x4d, 0x9c, 0x79, 0x83, 0x1c,
+		0x8a, 0x14, 0xd0, 0x07, 0xfb, 0xbf, 0x6c, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_256B_digest[] = {
+		0x05, 0xa7, 0x44, 0xcd, 0x91, 0x8c, 0x95, 0xcf,
+		0x7b, 0x8f, 0xd3, 0x90, 0x86, 0x7e, 0x7b, 0xb9,
+		0x05, 0xd6, 0x6e, 0x7a, 0xc1, 0x7b, 0x26, 0xff,
+		0xd3, 0x4b, 0xe0, 0x22, 0x8b, 0xa8, 0x47, 0x52
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_512B_digest[] = {
+		0x08, 0xb7, 0x29, 0x54, 0x18, 0x7e, 0x97, 0x49,
+		0xc6, 0x7c, 0x9f, 0x94, 0xa5, 0x4f, 0xa2, 0x25,
+		0xd0, 0xe2, 0x30, 0x7b, 0xad, 0x93, 0xc9, 0x12,
+		0x0f, 0xf0, 0xf0, 0x71, 0xc2, 0xf6, 0x53, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_768B_digest[] = {
+		0xe4, 0x3e, 0x73, 0x93, 0x03, 0xaf, 0x6f, 0x9c,
+		0xca, 0x57, 0x3b, 0x4a, 0x6e, 0x83, 0x58, 0xf5,
+		0x66, 0xc2, 0xb4, 0xa7, 0xe0, 0xee, 0x63, 0x6b,
+		0x48, 0xb7, 0x50, 0x45, 0x69, 0xdf, 0x5c, 0x5b
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1024B_digest[] = {
+		0x03, 0xb9, 0x96, 0x26, 0xdc, 0x1c, 0xab, 0xe2,
+		0xf5, 0x70, 0x55, 0x15, 0x67, 0x6e, 0x48, 0x11,
+		0xe7, 0x67, 0xea, 0xfa, 0x5c, 0x6b, 0x28, 0x22,
+		0xc9, 0x0e, 0x67, 0x04, 0xb3, 0x71, 0x7f, 0x88
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1280B_digest[] = {
+		0x01, 0x91, 0xb8, 0x78, 0xd3, 0x21, 0x74, 0xa5,
+		0x1c, 0x8b, 0xd4, 0xd2, 0xc0, 0x49, 0xd7, 0xd2,
+		0x16, 0x46, 0x66, 0x85, 0x50, 0x6d, 0x08, 0xcc,
+		0xc7, 0x0a, 0xa3, 0x71, 0xcc, 0xde, 0xee, 0xdc
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1536B_digest[] = {
+		0xf2, 0xe5, 0xe9, 0x57, 0x53, 0xd7, 0x69, 0x28,
+		0x7b, 0x69, 0xb5, 0x49, 0xa3, 0x31, 0x56, 0x5f,
+		0xa4, 0xe9, 0x87, 0x26, 0x2f, 0xe0, 0x2d, 0xd6,
+		0x08, 0x44, 0x01, 0x71, 0x0c, 0x93, 0x85, 0x84
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1792B_digest[] = {
+		0xf6, 0x57, 0x62, 0x01, 0xbf, 0x2d, 0xea, 0x4a,
+		0xef, 0x43, 0x85, 0x60, 0x18, 0xdf, 0x8b, 0xb4,
+		0x60, 0xc0, 0xfd, 0x2f, 0x90, 0x15, 0xe6, 0x91,
+		0x56, 0x61, 0x68, 0x7f, 0x5e, 0x92, 0xa8, 0xdd
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_2048B_digest[] = {
+		0x81, 0x1a, 0x29, 0xbc, 0x6b, 0x9f, 0xbb, 0xb8,
+		0xef, 0x71, 0x7b, 0x1f, 0x6f, 0xd4, 0x7e, 0x68,
+		0x3a, 0x9c, 0xb9, 0x98, 0x22, 0x81, 0xfa, 0x95,
+		0xee, 0xbc, 0x7f, 0x23, 0x29, 0x88, 0x76, 0xb8
+};
+
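+/*
+ * A single test vector: payload size label and length, the plaintext
+ * slice to process and the expected ciphertext/digest for that slice.
+ */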
+struct crypto_data_params {
+	const char *name;
+	uint16_t length;
+	const char *plaintext;
+	struct crypto_expected_output {
+		const uint8_t *ciphertext;
+		const uint8_t *digest;
+	} expected;
+};
+
+#define MAX_PACKET_SIZE_INDEX	10
+
+struct crypto_data_params aes_cbc_hmac_sha256_output[MAX_PACKET_SIZE_INDEX] = {
+	{ "64B", 64, &plaintext_quote[sizeof(plaintext_quote) - 1 - 64],
+		{ AES_CBC_ciphertext_64B, HMAC_SHA256_ciphertext_64B_digest } },
+	{ "128B", 128, &plaintext_quote[sizeof(plaintext_quote) - 1 - 128],
+		{ AES_CBC_ciphertext_128B, HMAC_SHA256_ciphertext_128B_digest } },
+	{ "256B", 256, &plaintext_quote[sizeof(plaintext_quote) - 1 - 256],
+		{ AES_CBC_ciphertext_256B, HMAC_SHA256_ciphertext_256B_digest } },
+	{ "512B", 512, &plaintext_quote[sizeof(plaintext_quote) - 1 - 512],
+		{ AES_CBC_ciphertext_512B, HMAC_SHA256_ciphertext_512B_digest } },
+	{ "768B", 768, &plaintext_quote[sizeof(plaintext_quote) - 1 - 768],
+		{ AES_CBC_ciphertext_768B, HMAC_SHA256_ciphertext_768B_digest } },
+	{ "1024B", 1024, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1024],
+		{ AES_CBC_ciphertext_1024B, HMAC_SHA256_ciphertext_1024B_digest } },
+	{ "1280B", 1280, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1280],
+		{ AES_CBC_ciphertext_1280B, HMAC_SHA256_ciphertext_1280B_digest } },
+	{ "1536B", 1536, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1536],
+		{ AES_CBC_ciphertext_1536B, HMAC_SHA256_ciphertext_1536B_digest } },
+	{ "1792B", 1792, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1792],
+		{ AES_CBC_ciphertext_1792B, HMAC_SHA256_ciphertext_1792B_digest } },
+	{ "2048B", 2048, &plaintext_quote[sizeof(plaintext_quote) - 1 - 2048],
+		{ AES_CBC_ciphertext_2048B, HMAC_SHA256_ciphertext_2048B_digest } }
+};
+
+
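+/*
+ * Measure the average IA cycle cost of the enqueue/dequeue path at a
+ * fixed request size, sweeping the burst size from 2 up to 128.
+ */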
+static int
+test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
+{
+	uint32_t num_to_submit = 2048;
+	struct rte_mbuf *rx_mbufs[num_to_submit], *tx_mbufs[num_to_submit];
+	uint64_t failed_polls, retries, start_cycles, end_cycles, total_cycles = 0;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, burst_size, num_sent, num_received;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure(s) */
+	for (b = 0; b < num_to_submit ; b++) {
+		tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+				(const char *)data_params[0].expected.ciphertext,
+				data_params[0].length, 0);
+		TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+		ut_params->digest = (uint8_t *)rte_pktmbuf_append(tx_mbufs[b],
+				DIGEST_BYTE_LENGTH_SHA256);
+		TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+		rte_memcpy(ut_params->digest, data_params[0].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+		struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+				ts_params->mbuf_ol_pool, RTE_PKTMBUF_OL_CRYPTO);
+		TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+		struct rte_crypto_op *cop = &ol->op.crypto;
+
+		rte_crypto_op_attach_session(cop, ut_params->sess);
+
+		cop->digest.data = ut_params->digest;
+		cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(tx_mbufs[b],
+				data_params[0].length);
+		cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+		cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+				CIPHER_IV_LENGTH_AES_CBC);
+		cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+		cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+		rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+		cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_cipher.length = data_params[0].length;
+
+		cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_hash.length = data_params[0].length;
+
+		rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+	}
+
+	printf("\nTest to measure the IA cycle cost using AES128_CBC_SHA256_HMAC "
+			"algorithm with a constant request size of %u.",
+			data_params[0].length);
+	printf("\nThis test will keep retries at 0 and only measure IA cycle "
+			"cost for each request.");
+	printf("\nDev No\tQP No\tNum Sent\tNum Received\tTx/Rx burst");
+	printf("\tRetries (Device Busy)\tAverage IA cycle cost "
+			"(assuming 0 retries)");
+	for (b = 2; b <= 128 ; b *= 2) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+		burst_size = b;
+		total_cycles = 0;
+		while (num_sent < num_to_submit) {
+			start_cycles = rte_rdtsc_precise();
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0,
+					&tx_mbufs[num_sent],
+					((num_to_submit-num_sent) < burst_size) ?
+					num_to_submit-num_sent : burst_size);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += (end_cycles - start_cycles);
+			/*
+			 * Wait until requests have been sent.
+			 */
+			rte_delay_ms(1);
+
+			start_cycles = rte_rdtsc_precise();
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += end_cycles - start_cycles;
+		}
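+		/*
+		 * Drain any requests still in flight before moving to the
+		 * next burst size; the empty enqueue prompts the AESNI MB
+		 * PMD to flush its queued jobs.
+		 */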
+		while (num_received != num_to_submit) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+
+		printf("\n%u\t%u\t\%u\t\t%u\t\t%u", dev_num, 0,
+					num_sent, num_received, burst_size);
+		printf("\t\t%"PRIu64, retries);
+		printf("\t\t\t%"PRIu64, total_cycles/num_received);
+	}
+	printf("\n");
+
+	for (b = 0; b < num_to_submit; b++) {
+		struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+		while (ol != NULL) {
+			struct rte_mbuf_offload *next = ol->next;
+
+			rte_pktmbuf_offload_free(ol);
+			ol = next;
+		}
+		rte_pktmbuf_free(tx_mbufs[b]);
+	}
+	return TEST_SUCCESS;
+}
+
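+/*
+ * Measure throughput (Mrps/Mbps) at a constant burst size while varying
+ * the request payload size across aes_cbc_hmac_sha256_output.
+ */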
+static int
+test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(uint16_t dev_num)
+{
+	uint16_t index;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, num_sent, num_received, throughput;
+	uint64_t failed_polls, retries, start_cycles, end_cycles;
+	const uint64_t mhz = rte_get_tsc_hz()/1000000;
+	double mmps;
+	struct rte_mbuf *rx_mbufs[DEFAULT_BURST_SIZE], *tx_mbufs[DEFAULT_BURST_SIZE];
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	printf("\nThroughput test which will continually attempt to send "
+			"AES128_CBC_SHA256_HMAC requests with a constant burst "
+			"size of %u while varying payload sizes", DEFAULT_BURST_SIZE);
+	printf("\nDev No\tQP No\tReq Size(B)\tNum Sent\tNum Received\t"
+			"Mrps\tThoughput(Mbps)");
+	printf("\tRetries (Attempted a burst, but the device was busy)");
+	for (index = 0; index < MAX_PACKET_SIZE_INDEX; index++) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+
+		/* Generate Crypto op data structure(s) */
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+					data_params[index].plaintext,
+					data_params[index].length, 0);
+			TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+			ut_params->digest = (uint8_t *)rte_pktmbuf_append(
+				tx_mbufs[b], DIGEST_BYTE_LENGTH_SHA256);
+			TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+			rte_memcpy(ut_params->digest, data_params[index].expected.digest,
+					DIGEST_BYTE_LENGTH_SHA256);
+
+			struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+						ts_params->mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+			TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+			struct rte_crypto_op *cop = &ol->op.crypto;
+
+			rte_crypto_op_attach_session(cop, ut_params->sess);
+
+			cop->digest.data = ut_params->digest;
+			cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+				tx_mbufs[b], data_params[index].length);
+			cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+			cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+					CIPHER_IV_LENGTH_AES_CBC);
+			cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+			cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+			rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+			cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_cipher.length = data_params[index].length;
+
+			cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_hash.length = data_params[index].length;
+
+			rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+		}
+		start_cycles = rte_rdtsc_precise();
+		while (num_sent < DEFAULT_NUM_REQS_TO_SUBMIT) {
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+				((DEFAULT_NUM_REQS_TO_SUBMIT-num_sent) < DEFAULT_BURST_SIZE) ?
+				DEFAULT_NUM_REQS_TO_SUBMIT-num_sent : DEFAULT_BURST_SIZE);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+					0, rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
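+		/* drain the remaining in-flight requests; these dequeues
+		 * are included in the measured cycle count */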
+		while (num_received != DEFAULT_NUM_REQS_TO_SUBMIT) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0,
+						rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		end_cycles = rte_rdtsc_precise();
+		mmps = (double)num_received*mhz/(end_cycles - start_cycles);
+		throughput = mmps*data_params[index].length*8;
+		printf("\n%u\t%u\t%u\t\t%u\t%u", dev_num, 0,
+				data_params[index].length, num_sent, num_received);
+		printf("\t%.2f\t%u", mmps, throughput);
+		printf("\t\t%"PRIu64, retries);
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+			while (ol != NULL) {
+				struct rte_mbuf_offload *next = ol->next;
+
+				rte_pktmbuf_offload_free(ol);
+				ol = next;
+			}
+			rte_pktmbuf_free(tx_mbufs[b]);
+		}
+	}
+	printf("\n");
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_encrypt_digest_vary_req_size(void)
+{
+	return test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(
+			testsuite_params.dev_id);
+}
+
+static int
+test_perf_vary_burst_size(void)
+{
+	return test_perf_crypto_qp_vary_burst_size(testsuite_params.dev_id);
+}
+
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_encrypt_digest_vary_req_size),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_vary_burst_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static int
+perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_QAT_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_perf_cmd = {
+	.command = "cryptodev_aesni_mb_perftest",
+	.callback = perftest_aesni_mb_cryptodev,
+};
+
+static struct test_command cryptodev_qat_perf_cmd = {
+	.command = "cryptodev_qat_perftest",
+	.callback = perftest_qat_cryptodev,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perf_cmd);
+REGISTER_TEST_COMMAND(cryptodev_qat_perf_cmd);
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 388cf11..2d98958 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -4020,7 +4020,7 @@ test_close_bonded_device(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	if (test_params->pkt_eth_hdr != NULL) {
@@ -4029,7 +4029,7 @@ testsuite_teardown(void)
 	}
 
 	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	remove_slaves_and_stop_bonded_device();
 }
 
 static void
@@ -4993,7 +4993,7 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 460539d..713368d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -453,7 +453,7 @@ test_setup(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -467,8 +467,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 /*
@@ -1390,7 +1388,8 @@ static struct unit_test_suite link_bonding_mode4_test_suite  = {
 		TEST_CASE_NAMED("test_mode4_tx_burst", test_mode4_tx_burst_wrapper),
 		TEST_CASE_NAMED("test_mode4_marker", test_mode4_marker_wrapper),
 		TEST_CASE_NAMED("test_mode4_expired", test_mode4_expired_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e6714b4..0a3162e 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -586,7 +586,7 @@ test_setup(void)
 	return TEST_SUCCESS;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -600,8 +600,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 static int
@@ -661,7 +659,8 @@ static struct unit_test_suite link_bonding_rssconf_test_suite  = {
 		TEST_CASE_NAMED("test_setup", test_setup_wrapper),
 		TEST_CASE_NAMED("test_rss", test_rss_wrapper),
 		TEST_CASE_NAMED("test_rss_lazy", test_rss_lazy_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END()
 	}
 };
 
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v6 10/10] l2fwd-crypto: crypto
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
                             ` (8 preceding siblings ...)
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
@ 2015-11-10 17:32           ` Declan Doherty
  2015-11-13 16:03             ` Thomas Monjalon
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-10 17:32 UTC (permalink / raw)
  To: dev

This patch creates a new sample application, based on the l2fwd
application, which performs the specified crypto operations on the IP
payload of the packets being forwarded.
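
For orientation, a minimal sketch of the per-packet flow the application
implements, using the rte_mbuf_offload API exercised by the tests earlier
in this series. The function name is illustrative, IV/digest setup and
all error handling are omitted, and ol_pool, session and cdev_id are
assumed to have been initialised during application setup:

	static void
	l2fwd_crypto_process(struct rte_mbuf *m, uint8_t cdev_id,
			struct rte_mempool *ol_pool,
			struct rte_cryptodev_session *session)
	{
		/* allocate a crypto offload and bind it to the session */
		struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
				ol_pool, RTE_PKTMBUF_OL_CRYPTO);
		struct rte_crypto_op *op = &ol->op.crypto;

		rte_crypto_op_attach_session(op, session);

		/* cipher/hash the IP payload (assumes IPv4, no options) */
		uint32_t offset = sizeof(struct ether_hdr) +
				sizeof(struct ipv4_hdr);

		op->data.to_cipher.offset = offset;
		op->data.to_cipher.length = rte_pktmbuf_data_len(m) - offset;
		op->data.to_hash.offset = offset;
		op->data.to_hash.length = op->data.to_cipher.length;

		/* attach the op and hand the mbuf to the crypto device;
		 * completed mbufs are collected later with
		 * rte_cryptodev_dequeue_burst() and forwarded on the TX
		 * port by the main loop */
		rte_pktmbuf_offload_attach(m, ol);
		rte_cryptodev_enqueue_burst(cdev_id, 0, &m, 1);
	}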

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                    |    1 +
 examples/l2fwd-crypto/Makefile |   50 ++
 examples/l2fwd-crypto/main.c   | 1473 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 1524 insertions(+)
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 1f72f8c..fa85e55 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -206,6 +206,7 @@ F: lib/librte_cryptodev
 F: docs/guides/cryptodevs
 F: app/test/test_cryptodev.c
 F: app/test/test_cryptodev_perf.c
+F: examples/l2fwd-crypto
 
 Drivers
 -------
diff --git a/examples/l2fwd-crypto/Makefile b/examples/l2fwd-crypto/Makefile
new file mode 100644
index 0000000..e8224ca
--- /dev/null
+++ b/examples/l2fwd-crypto/Makefile
@@ -0,0 +1,50 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = l2fwd-crypto
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
new file mode 100644
index 0000000..10ec513
--- /dev/null
+++ b/examples/l2fwd-crypto/main.c
@@ -0,0 +1,1473 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_ip.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_per_lcore.h>
+#include <rte_prefetch.h>
+#include <rte_random.h>
+#include <rte_ring.h>
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define NB_MBUF   8192
+
+#define MAX_PKT_BURST 32
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 128
+#define RTE_TEST_TX_DESC_DEFAULT 512
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint64_t l2fwd_enabled_port_mask;
+static uint64_t l2fwd_enabled_crypto_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+
+struct pkt_buffer {
+	unsigned len;
+	struct rte_mbuf *buffer[MAX_PKT_BURST];
+};
+
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
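+/*
+ * Order of transforms in the crypto session chain: CIPHER_HASH places the
+ * cipher transform first in the chain (cipher, then hash), HASH_CIPHER the
+ * reverse; see initialize_crypto_session() for how the chain is built.
+ */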
+enum l2fwd_crypto_xform_chain {
+	L2FWD_CRYPTO_CIPHER_HASH,
+	L2FWD_CRYPTO_HASH_CIPHER
+};
+
+/** l2fwd crypto application command line options */
+struct l2fwd_crypto_options {
+	unsigned portmask;
+	unsigned nb_ports_per_lcore;
+	unsigned refresh_period;
+	unsigned single_lcore:1;
+	unsigned no_stats_printing:1;
+
+	enum rte_cryptodev_type cdev_type;
+	unsigned sessionless:1;
+
+	enum l2fwd_crypto_xform_chain xform_chain;
+
+	struct rte_crypto_xform cipher_xform;
+	uint8_t ckey_data[32];
+
+	struct rte_crypto_key iv_key;
+	uint8_t ivkey_data[16];
+
+	struct rte_crypto_xform auth_xform;
+	uint8_t akey_data[128];
+};
+
+/** l2fwd crypto lcore params */
+struct l2fwd_crypto_params {
+	uint8_t dev_id;
+	uint8_t qp_id;
+
+	unsigned digest_length;
+	unsigned block_size;
+
+	struct rte_crypto_key iv_key;
+	struct rte_cryptodev_session *session;
+};
+
+/** lcore configuration */
+struct lcore_queue_conf {
+	unsigned nb_rx_ports;
+	unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+
+	unsigned nb_crypto_devs;
+	unsigned cryptodev_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct pkt_buffer crypto_pkt_buf[RTE_MAX_ETHPORTS];
+	struct pkt_buffer tx_pkt_buf[RTE_MAX_ETHPORTS];
+} __rte_cache_aligned;
+
+struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static const struct rte_eth_conf port_conf = {
+	.rxmode = {
+		.split_hdr_size = 0,
+		.header_split   = 0, /**< Header Split disabled */
+		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
+		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
+		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
+		.hw_strip_crc   = 0, /**< CRC stripping by hardware disabled */
+	},
+	.txmode = {
+		.mq_mode = ETH_MQ_TX_NONE,
+	},
+};
+
+struct rte_mempool *l2fwd_pktmbuf_pool;
+struct rte_mempool *l2fwd_mbuf_ol_pool;
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+	uint64_t tx;
+	uint64_t rx;
+
+	uint64_t crypto_enqueued;
+	uint64_t crypto_dequeued;
+
+	uint64_t dropped;
+} __rte_cache_aligned;
+
+struct l2fwd_crypto_statistics {
+	uint64_t enqueued;
+	uint64_t dequeued;
+
+	uint64_t errors;
+} __rte_cache_aligned;
+
+struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+struct l2fwd_crypto_statistics crypto_statistics[RTE_MAX_ETHPORTS];
+
+/* A tsc-based timer responsible for triggering statistics printout */
+#define TIMER_MILLISECOND 2000000ULL /* around 1ms at 2 GHz */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+
+/* default period is 10 seconds */
+static int64_t timer_period = 10 * TIMER_MILLISECOND * 1000;
+
+/* Print out statistics on packets dropped */
+static void
+print_stats(void)
+{
+	unsigned portid;
+	uint64_t cdevid;
+	uint64_t total_packets_dropped = 0, total_packets_tx = 0,
+		total_packets_rx = 0, total_packets_enqueued = 0,
+		total_packets_dequeued = 0, total_packets_errors = 0;
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+	/* Clear screen and move to top left */
+	printf("%s%s", clr, topLeft);
+
+	printf("\nPort statistics ====================================");
+
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+			continue;
+		printf("\nStatistics for port %u ------------------------------"
+			   "\nPackets sent: %32"PRIu64
+			   "\nPackets received: %28"PRIu64
+			   "\nPackets dropped: %29"PRIu64,
+			   portid,
+			   port_statistics[portid].tx,
+			   port_statistics[portid].rx,
+			   port_statistics[portid].dropped);
+
+		total_packets_dropped += port_statistics[portid].dropped;
+		total_packets_tx += port_statistics[portid].tx;
+		total_packets_rx += port_statistics[portid].rx;
+	}
+	printf("\nCrypto statistics ==================================");
+
+	for (cdevid = 0; cdevid < RTE_CRYPTO_MAX_DEVS; cdevid++) {
+		/* skip disabled crypto devices */
+		if ((l2fwd_enabled_crypto_mask & ((uint64_t)1 << cdevid)) == 0)
+			continue;
+		printf("\nStatistics for cryptodev %"PRIu64
+			" -------------------------"
+			   "\nPackets enqueued: %28"PRIu64
+			   "\nPackets dequeued: %28"PRIu64
+			   "\nPackets errors: %30"PRIu64,
+			   cdevid,
+			   crypto_statistics[cdevid].enqueued,
+			   crypto_statistics[cdevid].dequeued,
+			   crypto_statistics[cdevid].errors);
+
+		total_packets_enqueued += crypto_statistics[cdevid].enqueued;
+		total_packets_dequeued += crypto_statistics[cdevid].dequeued;
+		total_packets_errors += crypto_statistics[cdevid].errors;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %22"PRIu64
+		   "\nTotal packets enqueued: %22"PRIu64
+		   "\nTotal packets dequeued: %22"PRIu64
+		   "\nTotal packets sent: %26"PRIu64
+		   "\nTotal packets dropped: %23"PRIu64
+		   "\nTotal packets crypto errors: %17"PRIu64,
+		   total_packets_rx,
+		   total_packets_enqueued,
+		   total_packets_dequeued,
+		   total_packets_tx,
+		   total_packets_dropped,
+		   total_packets_errors);
+	printf("\n====================================================\n");
+}
+
+
+
+static int
+l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+
+	pkt_buffer = (struct rte_mbuf **)
+			qconf->crypto_pkt_buf[cparams->dev_id].buffer;
+
+	ret = rte_cryptodev_enqueue_burst(cparams->dev_id, cparams->qp_id,
+			pkt_buffer, (uint16_t) n);
+	crypto_statistics[cparams->dev_id].enqueued += ret;
+	if (unlikely(ret < n)) {
+		crypto_statistics[cparams->dev_id].errors += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+static int
+l2fwd_crypto_enqueue(struct rte_mbuf *m, struct l2fwd_crypto_params *cparams)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->crypto_pkt_buf[cparams->dev_id].len;
+	qconf->crypto_pkt_buf[cparams->dev_id].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (len == MAX_PKT_BURST) {
+		l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
+		len = 0;
+	}
+
+	qconf->crypto_pkt_buf[cparams->dev_id].len = len;
+	return 0;
+}
+
+static int
+l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
+		struct rte_mbuf_offload *ol,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct ether_hdr *eth_hdr;
+	struct ipv4_hdr *ip_hdr;
+
+	unsigned ipdata_offset, pad_len, data_len;
+	char *padding;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (eth_hdr->ether_type != rte_cpu_to_be_16(ETHER_TYPE_IPv4))
+		return -1;
+
+	ipdata_offset = sizeof(struct ether_hdr);
+
+	ip_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
+			ipdata_offset);
+
+	ipdata_offset += (ip_hdr->version_ihl & IPV4_HDR_IHL_MASK)
+			* IPV4_IHL_MULTIPLIER;
+
+
+	/* Zero-pad the data to be ciphered/hashed so it is block aligned */
+	data_len  = rte_pktmbuf_data_len(m) - ipdata_offset;
+	pad_len = data_len % cparams->block_size ? cparams->block_size -
+			(data_len % cparams->block_size) : 0;
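+	/*
+	 * e.g. with a 64 byte block size and 100 bytes of IP payload,
+	 * pad_len = 64 - (100 % 64) = 28, so 128 bytes get ciphered/hashed.
+	 */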
+
+	if (pad_len) {
+		padding = rte_pktmbuf_append(m, pad_len);
+		if (unlikely(!padding))
+			return -1;
+
+		data_len += pad_len;
+		memset(padding, 0, pad_len);
+	}
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(&ol->op.crypto, cparams->session);
+
+	/* Append space for digest to end of packet */
+	ol->op.crypto.digest.data = (uint8_t *)rte_pktmbuf_append(m,
+			cparams->digest_length);
+	ol->op.crypto.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
+			rte_pktmbuf_pkt_len(m) - cparams->digest_length);
+	ol->op.crypto.digest.length = cparams->digest_length;
+
+	ol->op.crypto.iv.data = cparams->iv_key.data;
+	ol->op.crypto.iv.phys_addr = cparams->iv_key.phys_addr;
+	ol->op.crypto.iv.length = cparams->iv_key.length;
+
+	ol->op.crypto.data.to_cipher.offset = ipdata_offset;
+	ol->op.crypto.data.to_cipher.length = data_len;
+
+	ol->op.crypto.data.to_hash.offset = ipdata_offset;
+	ol->op.crypto.data.to_hash.length = data_len;
+
+	rte_pktmbuf_offload_attach(m, ol);
+
+	return l2fwd_crypto_enqueue(m, cparams);
+}
+
+
+/* Send the burst of packets on an output interface */
+static int
+l2fwd_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint8_t port)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+	unsigned queueid = 0;
+
+	pkt_buffer = (struct rte_mbuf **)qconf->tx_pkt_buf[port].buffer;
+
+	ret = rte_eth_tx_burst(port, (uint16_t) queueid, pkt_buffer,
+			(uint16_t)n);
+	port_statistics[port].tx += ret;
+	if (unlikely(ret < n)) {
+		port_statistics[port].dropped += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+/* Enqueue packets for TX and prepare them to be sent */
+static int
+l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->tx_pkt_buf[port].len;
+	qconf->tx_pkt_buf[port].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (unlikely(len == MAX_PKT_BURST)) {
+		l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
+		len = 0;
+	}
+
+	qconf->tx_pkt_buf[port].len = len;
+	return 0;
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
+{
+	struct ether_hdr *eth;
+	void *tmp;
+	unsigned dst_port;
+
+	dst_port = l2fwd_dst_ports[portid];
+	eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	/* 02:00:00:00:00:xx */
+	tmp = &eth->d_addr.addr_bytes[0];
+	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+	/* src addr */
+	ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+
+	l2fwd_send_packet(m, (uint8_t) dst_port);
+}
+
+/** Generate random key */
+static void
+generate_random_key(uint8_t *key, unsigned length)
+{
+	unsigned i;
+
+	for (i = 0; i < length; i++)
+		key[i] = rand() & 0xff; /* demo-quality key material only */
+}
+
+static struct rte_cryptodev_session *
+initialize_crypto_session(struct l2fwd_crypto_options *options,
+		uint8_t cdev_id)
+{
+	struct rte_crypto_xform *first_xform;
+
+	if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
+		first_xform = &options->cipher_xform;
+		first_xform->next = &options->auth_xform;
+	} else {
+		first_xform = &options->auth_xform;
+		first_xform->next = &options->cipher_xform;
+	}
+
+	/* Setup Cipher Parameters */
+	return rte_cryptodev_session_create(cdev_id, first_xform);
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options);
+
+/* main processing loop */
+static void
+l2fwd_main_loop(struct l2fwd_crypto_options *options)
+{
+	struct rte_mbuf *m, *pkts_burst[MAX_PKT_BURST];
+	unsigned lcore_id = rte_lcore_id();
+	uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+	unsigned i, j, portid, nb_rx;
+	struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+	struct l2fwd_crypto_params *cparams;
+	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
+
+	if (qconf->nb_rx_ports == 0) {
+		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+	l2fwd_crypto_options_print(options);
+
+	for (i = 0; i < qconf->nb_rx_ports; i++) {
+
+		portid = qconf->rx_port_list[i];
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+			portid);
+	}
+
+	for (i = 0; i < qconf->nb_crypto_devs; i++) {
+		port_cparams[i].dev_id = qconf->cryptodev_list[i];
+		port_cparams[i].qp_id = 0;
+
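+		/*
+		 * These match the default session set up below: 64 is a
+		 * multiple of the AES block size, and 20 is the SHA1-HMAC
+		 * digest length.
+		 */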
+		port_cparams[i].block_size = 64;
+		port_cparams[i].digest_length = 20;
+
+		port_cparams[i].iv_key.data =
+				(uint8_t *)rte_malloc(NULL, 16, 8);
+		port_cparams[i].iv_key.length = 16;
+		port_cparams[i].iv_key.phys_addr = rte_malloc_virt2phy(
+				(void *)port_cparams[i].iv_key.data);
+		generate_random_key(port_cparams[i].iv_key.data,
+				port_cparams[i].iv_key.length);
+
+		port_cparams[i].session = initialize_crypto_session(options,
+				port_cparams[i].dev_id);
+
+		if (port_cparams[i].session == NULL)
+			return;
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u cryptoid=%u\n", lcore_id,
+				port_cparams[i].dev_id);
+	}
+
+	while (1) {
+
+		cur_tsc = rte_rdtsc();
+
+		/*
+		 * TX burst queue drain
+		 */
+		diff_tsc = cur_tsc - prev_tsc;
+		if (unlikely(diff_tsc > drain_tsc)) {
+
+			for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+				if (qconf->tx_pkt_buf[portid].len == 0)
+					continue;
+				l2fwd_send_burst(&lcore_queue_conf[lcore_id],
+						 qconf->tx_pkt_buf[portid].len,
+						 (uint8_t) portid);
+				qconf->tx_pkt_buf[portid].len = 0;
+			}
+
+			/* if timer is enabled */
+			if (timer_period > 0) {
+
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >=
+						(uint64_t)timer_period)) {
+
+					/* do this only on master core */
+					if (lcore_id == rte_get_master_lcore() &&
+							!options->no_stats_printing) {
+						print_stats();
+						/* reset the timer */
+						timer_tsc = 0;
+					}
+				}
+			}
+
+			prev_tsc = cur_tsc;
+		}
+
+		/*
+		 * Read packet from RX queues
+		 */
+		for (i = 0; i < qconf->nb_rx_ports; i++) {
+			struct rte_mbuf_offload *ol;
+
+			portid = qconf->rx_port_list[i];
+
+			cparams = &port_cparams[i];
+
+			nb_rx = rte_eth_rx_burst((uint8_t) portid, 0,
+						 pkts_burst, MAX_PKT_BURST);
+
+			port_statistics[portid].rx += nb_rx;
+
+			/* Enqueue packets to the Crypto device */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				ol = rte_pktmbuf_offload_alloc(
+						l2fwd_mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				rte_prefetch0((void *)ol);
+				l2fwd_simple_crypto_enqueue(m, ol, cparams);
+			}
+
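+			/*
+			 * Note: packets dequeued below are not necessarily
+			 * those enqueued above; an asynchronous device may
+			 * return them on a later iteration.
+			 */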
+			/* Dequeue packets from Crypto device */
+			nb_rx = rte_cryptodev_dequeue_burst(
+					cparams->dev_id, cparams->qp_id,
+					pkts_burst, MAX_PKT_BURST);
+			crypto_statistics[cparams->dev_id].dequeued += nb_rx;
+
+			/* Forward crypto'd packets */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				rte_pktmbuf_offload_free(m->offload_ops);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				l2fwd_simple_forward(m, portid);
+			}
+		}
+	}
+}
+
+static int
+l2fwd_launch_one_lcore(void *arg)
+{
+	l2fwd_main_loop((struct l2fwd_crypto_options *)arg);
+	return 0;
+}
+
+/* Display command line arguments usage */
+static void
+l2fwd_crypto_usage(const char *prgname)
+{
+	printf("%s [EAL options] -- --cdev_type TYPE [optional parameters]\n"
+		"  -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+		"  -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+		"  -s manage all ports from single lcore\n"
+		"  -t PERIOD: statistics will be refreshed each PERIOD seconds"
+		" (0 to disable, 10 default, 86400 maximum)\n"
+
+		"  --cdev_type AESNI_MB / QAT\n"
+		"  --chain HASH_CIPHER / CIPHER_HASH\n"
+
+		"  --cipher_algo ALGO\n"
+		"  --cipher_op ENCRYPT / DECRYPT\n"
+		"  --cipher_key KEY\n"
+
+		"  --auth_algo ALGO\n"
+		"  --auth_op GENERATE / VERIFY\n"
+		"  --auth_key KEY\n"
+
+		"  --sessionless\n",
+	       prgname);
+}
+
+/** Parse crypto device type command line argument */
+static int
+parse_cryptodev_type(enum rte_cryptodev_type *type, char *optarg)
+{
+	if (strcmp("AESNI_MB", optarg) == 0) {
+		*type = RTE_CRYPTODEV_AESNI_MB_PMD;
+		return 0;
+	} else if (strcmp("QAT", optarg) == 0) {
+		*type = RTE_CRYPTODEV_QAT_PMD;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto chain xform command line argument */
+static int
+parse_crypto_opt_chain(struct l2fwd_crypto_options *options, char *optarg)
+{
+	if (strcmp("CIPHER_HASH", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+		return 0;
+	} else if (strcmp("HASH_CIPHER", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_HASH_CIPHER;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto cipher algo option command line argument */
+static int
+parse_cipher_algo(enum rte_crypto_cipher_algorithm *algo, char *optarg)
+{
+	if (strcmp("AES_CBC", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_CBC;
+		return 0;
+	} else if (strcmp("AES_GCM", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_GCM;
+		return 0;
+	}
+
+	printf("Cipher algorithm not supported!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_cipher_op(enum rte_crypto_cipher_operation *op, char *optarg)
+{
+	if (strcmp("ENCRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+		return 0;
+	} else if (strcmp("DECRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+		return 0;
+	}
+
+	printf("Cipher operation not supported!\n");
+	return -1;
+}
+
+/** Parse crypto key command line argument */
+static int
+parse_key(struct rte_crypto_key *key __rte_unused,
+		unsigned length __rte_unused, char *arg __rte_unused)
+{
+	printf("Currently an unsupported argument!\n");
+	return -1;
+}
+
+/** Parse crypto authentication algorithm command line argument */
+static int
+parse_auth_algo(enum rte_crypto_auth_algorithm *algo, char *optarg)
+{
+	if (strcmp("SHA1", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1;
+		return 0;
+	} else if (strcmp("SHA1_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		return 0;
+	} else if (strcmp("SHA224", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224;
+		return 0;
+	} else if (strcmp("SHA224_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		return 0;
+	} else if (strcmp("SHA256", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256;
+		return 0;
+	} else if (strcmp("SHA256_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		return 0;
+	} else if (strcmp("SHA512", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512;
+		return 0;
+	} else if (strcmp("SHA512_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+		return 0;
+	}
+
+	printf("Specified authentication algorithm not supported!\n");
+	return -1;
+}
+
+static int
+parse_auth_op(enum rte_crypto_auth_operation *op, char *optarg)
+{
+	if (strcmp("VERIFY", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_VERIFY;
+		return 0;
+	} else if (strcmp("GENERATE", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_GENERATE;
+		return 0;
+	}
+
+	printf("Specified authentication operation not supported!\n");
+	return -1;
+}
+
+/** Parse long options */
+static int
+l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
+		struct option *lgopts, int option_index)
+{
+	if (strcmp(lgopts[option_index].name, "no_stats") == 0) {
+		options->no_stats_printing = 1;
+		return 0;
+	}
+
+	if (strcmp(lgopts[option_index].name, "cdev_type") == 0)
+		return parse_cryptodev_type(&options->cdev_type, optarg);
+
+	else if (strcmp(lgopts[option_index].name, "chain") == 0)
+		return parse_crypto_opt_chain(options, optarg);
+
+	/* Cipher options */
+	else if (strcmp(lgopts[option_index].name, "cipher_algo") == 0)
+		return parse_cipher_algo(&options->cipher_xform.cipher.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_op") == 0)
+		return parse_cipher_op(&options->cipher_xform.cipher.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_key") == 0)
+		return parse_key(&options->cipher_xform.cipher.key,
+				sizeof(options->ckey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "iv") == 0)
+		return parse_key(&options->iv_key, sizeof(options->ivkey_data),
+				optarg);
+
+	/* Authentication options */
+	else if (strcmp(lgopts[option_index].name, "auth_algo") == 0)
+		return parse_auth_algo(&options->auth_xform.auth.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_op") == 0)
+		return parse_auth_op(&options->auth_xform.auth.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_key") == 0)
+		return parse_key(&options->auth_xform.auth.key,
+				sizeof(options->akey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "sessionless") == 0) {
+		options->sessionless = 1;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse port mask */
+static int
+l2fwd_crypto_parse_portmask(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long pm;
+
+	/* parse hexadecimal string */
+	pm = strtoul(q_arg, &end, 16);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		pm = 0;
+
+	options->portmask = pm;
+	if (options->portmask == 0) {
+		printf("invalid portmask specified\n");
+		return -1;
+	}
+
+	return pm;
+}
+
+/** Parse number of queues */
+static int
+l2fwd_crypto_parse_nqueue(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long n;
+
+	/* parse decimal string */
+	n = strtoul(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+	else if (n >= MAX_RX_QUEUE_PER_LCORE)
+		n = 0;
+
+	options->nb_ports_per_lcore = n;
+	if (options->nb_ports_per_lcore == 0) {
+		printf("invalid number of ports selected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse timer period */
+static int
+l2fwd_crypto_parse_timer_period(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	int n;
+
+	/* parse number string */
+	n = strtol(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+
+	if (n >= MAX_TIMER_PERIOD)
+		n = 0;
+
+	options->refresh_period = n * 1000 * TIMER_MILLISECOND;
+	if (options->refresh_period == 0) {
+		printf("invalid refresh period specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Generate default options for application */
+static void
+l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
+{
+	srand(time(NULL));
+
+	options->portmask = 0xffffffff;
+	options->nb_ports_per_lcore = 1;
+	options->refresh_period = 10000;
+	options->single_lcore = 0;
+	options->no_stats_printing = 0;
+
+	options->cdev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	options->sessionless = 0;
+	options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+
+	/* Cipher Data */
+	options->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	options->cipher_xform.next = NULL;
+
+	options->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	options->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+
+	generate_random_key(options->ckey_data, sizeof(options->ckey_data));
+
+	options->cipher_xform.cipher.key.data = options->ckey_data;
+	options->cipher_xform.cipher.key.phys_addr = 0;
+	options->cipher_xform.cipher.key.length = 16;
+
+
+	/* Authentication Data */
+	options->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	options->auth_xform.next = NULL;
+
+	options->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	options->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+
+	options->auth_xform.auth.add_auth_data_length = 0;
+	options->auth_xform.auth.digest_length = 20;
+
+	generate_random_key(options->akey_data, sizeof(options->akey_data));
+
+	options->auth_xform.auth.key.data = options->akey_data;
+	options->auth_xform.auth.key.phys_addr = 0;
+	options->auth_xform.auth.key.length = 20;
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
+{
+	printf("Options:-\n");
+	printf("portmask: %x\n", options->portmask);
+	printf("ports per lcore: %u\n", options->nb_ports_per_lcore);
+	printf("refresh period : %u\n", options->refresh_period);
+	printf("single lcore mode: %s\n",
+			options->single_lcore ? "enabled" : "disabled");
+	printf("stats_printing: %s\n",
+			options->no_stats_printing ? "disabled" : "enabled");
+
+	switch (options->cdev_type) {
+	case RTE_CRYPTODEV_AESNI_MB_PMD:
+		printf("cryptodev type: AES-NI MB PMD\n"); break;
+	case RTE_CRYPTODEV_QAT_PMD:
+		printf("cryptodev type: QAT PMD\n"); break;
+	default:
+		break;
+	}
+
+	printf("sessionless crypto: %s\n",
+			options->sessionless ? "enabled" : "disabled");
+}
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
+		int argc, char **argv)
+{
+	int opt, retval, option_index;
+	char **argvopt = argv, *prgname = argv[0];
+
+	static struct option lgopts[] = {
+			{ "no_stats", no_argument, 0, 0 },
+			{ "sessionless", no_argument, 0, 0 },
+
+			{ "cdev_type", required_argument, 0, 0 },
+			{ "chain", required_argument, 0, 0 },
+
+			{ "cipher_algo", required_argument, 0, 0 },
+			{ "cipher_op", required_argument, 0, 0 },
+			{ "cipher_key", required_argument, 0, 0 },
+
+			{ "auth_algo", required_argument, 0, 0 },
+			{ "auth_op", required_argument, 0, 0 },
+			{ "auth_key", required_argument, 0, 0 },
+
+			{ "iv", required_argument, 0, 0 },
+
+			{ NULL, 0, 0, 0 }
+	};
+
+	l2fwd_crypto_default_options(options);
+
+	while ((opt = getopt_long(argc, argvopt, "p:q:st:", lgopts,
+			&option_index)) != EOF) {
+		switch (opt) {
+		/* long options */
+		case 0:
+			retval = l2fwd_crypto_parse_args_long_options(options,
+					lgopts, option_index);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* portmask */
+		case 'p':
+			retval = l2fwd_crypto_parse_portmask(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* nqueue */
+		case 'q':
+			retval = l2fwd_crypto_parse_nqueue(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* single lcore */
+		case 's':
+			options->single_lcore = 1;
+
+			break;
+
+		/* timer period */
+		case 't':
+			retval = l2fwd_crypto_parse_timer_period(options,
+					optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		default:
+			l2fwd_crypto_usage(prgname);
+			return -1;
+		}
+	}
+
+
+	if (optind >= 0)
+		argv[optind-1] = prgname;
+
+	retval = optind-1;
+	optind = 0; /* reset getopt lib */
+
+	return retval;
+}
+
+/* Check the link status of all ports in up to 9s, and print the status once done */
+static void
+check_all_ports_link_status(uint8_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+	uint8_t portid, count, all_ports_up, print_flag = 0;
+	struct rte_eth_link link;
+
+	printf("\nChecking link status");
+	fflush(stdout);
+	for (count = 0; count <= MAX_CHECK_TIME; count++) {
+		all_ports_up = 1;
+		for (portid = 0; portid < port_num; portid++) {
+			if ((port_mask & (1 << portid)) == 0)
+				continue;
+			memset(&link, 0, sizeof(link));
+			rte_eth_link_get_nowait(portid, &link);
+			/* print link status if flag set */
+			if (print_flag == 1) {
+				if (link.link_status)
+					printf("Port %d Link Up - speed %u "
+						"Mbps - %s\n", (uint8_t)portid,
+						(unsigned)link.link_speed,
+				(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+					("full-duplex") : ("half-duplex"));
+				else
+					printf("Port %d Link Down\n",
+						(uint8_t)portid);
+				continue;
+			}
+			/* clear all_ports_up flag if any link down */
+			if (link.link_status == 0) {
+				all_ports_up = 0;
+				break;
+			}
+		}
+		/* after finally printing all link status, get out */
+		if (print_flag == 1)
+			break;
+
+		if (all_ports_up == 0) {
+			printf(".");
+			fflush(stdout);
+			rte_delay_ms(CHECK_INTERVAL);
+		}
+
+		/* set the print_flag if all ports up or timeout */
+		if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+			print_flag = 1;
+			printf("done\n");
+		}
+	}
+}
+
+static int
+initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports)
+{
+	unsigned i, cdev_id, cdev_count, enabled_cdev_count = 0;
+	int retval;
+
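+	/*
+	 * QAT devices are physical and already enumerated by the EAL, so it
+	 * is enough to check that one exists per port; the software AES-NI
+	 * MB PMD is a virtual device which must be created explicitly here.
+	 */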
+	if (options->cdev_type == RTE_CRYPTODEV_QAT_PMD) {
+		if (rte_cryptodev_count() < nb_ports)
+			return -1;
+	} else if (options->cdev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		for (i = 0; i < nb_ports; i++) {
+			int id = rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD,
+					NULL);
+			if (id < 0)
+				return -1;
+		}
+	}
+
+	cdev_count = rte_cryptodev_count();
+	for (cdev_id = 0;
+			cdev_id < cdev_count && enabled_cdev_count < nb_ports;
+			cdev_id++) {
+		struct rte_cryptodev_qp_conf qp_conf;
+		struct rte_cryptodev_info dev_info;
+
+		struct rte_cryptodev_config conf = {
+			.nb_queue_pairs = 1,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 64
+			}
+		};
+
+		rte_cryptodev_info_get(cdev_id, &dev_info);
+
+		if (dev_info.dev_type != options->cdev_type)
+			continue;
+
+
+		retval = rte_cryptodev_configure(cdev_id, &conf);
+		if (retval < 0) {
+			printf("Failed to configure cryptodev %u", cdev_id);
+			return -1;
+		}
+
+		qp_conf.nb_descriptors = 2048;
+
+		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
+				SOCKET_ID_ANY);
+		if (retval < 0) {
+			printf("Failed to setup queue pair %u on cryptodev %u",
+					0, cdev_id);
+			return -1;
+		}
+
+		l2fwd_enabled_crypto_mask |= ((uint64_t)1 << cdev_id);
+
+		enabled_cdev_count++;
+	}
+
+	return enabled_cdev_count;
+}
+
+static int
+initialize_ports(struct l2fwd_crypto_options *options)
+{
+	uint8_t last_portid, portid;
+	unsigned enabled_portcount = 0;
+	unsigned nb_ports = rte_eth_dev_count();
+
+	if (nb_ports == 0) {
+		printf("No Ethernet ports - bye\n");
+		return -1;
+	}
+
+	if (nb_ports > RTE_MAX_ETHPORTS)
+		nb_ports = RTE_MAX_ETHPORTS;
+
+	/* Reset l2fwd_dst_ports */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+		l2fwd_dst_ports[portid] = 0;
+
+	for (last_portid = 0, portid = 0; portid < nb_ports; portid++) {
+		int retval;
+
+		/* Skip ports that are not enabled */
+		if ((options->portmask & (1 << portid)) == 0)
+			continue;
+
+		/* init port */
+		printf("Initializing port %u... ", (unsigned) portid);
+		fflush(stdout);
+		retval = rte_eth_dev_configure(portid, 1, 1, &port_conf);
+		if (retval < 0) {
+			printf("Cannot configure device: err=%d, port=%u\n",
+				  retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one RX queue */
+		fflush(stdout);
+		retval = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+					     rte_eth_dev_socket_id(portid),
+					     NULL, l2fwd_pktmbuf_pool);
+		if (retval < 0) {
+			printf("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one TX queue on each port */
+		fflush(stdout);
+		retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+				rte_eth_dev_socket_id(portid),
+				NULL);
+		if (retval < 0) {
+			printf("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+				retval, (unsigned) portid);
+
+			return -1;
+		}
+
+		/* Start device */
+		retval = rte_eth_dev_start(portid);
+		if (retval < 0) {
+			printf("rte_eth_dev_start:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		rte_eth_promiscuous_enable(portid);
+
+		rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+		printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+				(unsigned) portid,
+				l2fwd_ports_eth_addr[portid].addr_bytes[0],
+				l2fwd_ports_eth_addr[portid].addr_bytes[1],
+				l2fwd_ports_eth_addr[portid].addr_bytes[2],
+				l2fwd_ports_eth_addr[portid].addr_bytes[3],
+				l2fwd_ports_eth_addr[portid].addr_bytes[4],
+				l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+		/* initialize port stats */
+		memset(&port_statistics, 0, sizeof(port_statistics));
+
+		/* Setup port forwarding table */
+		if (enabled_portcount % 2) {
+			l2fwd_dst_ports[portid] = last_portid;
+			l2fwd_dst_ports[last_portid] = portid;
+		} else {
+			last_portid = portid;
+		}
+
+		l2fwd_enabled_port_mask |= (1 << portid);
+		enabled_portcount++;
+	}
+
+	if (enabled_portcount == 1) {
+		l2fwd_dst_ports[last_portid] = last_portid;
+	} else if (enabled_portcount % 2) {
+		printf("odd number of ports in portmask- bye\n");
+		return -1;
+	}
+
+	check_all_ports_link_status(nb_ports, l2fwd_enabled_port_mask);
+
+	return enabled_portcount;
+}
+
+int
+main(int argc, char **argv)
+{
+	struct lcore_queue_conf *qconf;
+	struct l2fwd_crypto_options options;
+
+	uint8_t nb_ports, nb_cryptodevs, portid, cdev_id;
+	unsigned lcore_id, rx_lcore_id;
+	int ret, enabled_cdevcount, enabled_portcount;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+	argc -= ret;
+	argv += ret;
+
+	/* parse application arguments (after the EAL ones) */
+	ret = l2fwd_crypto_parse_args(&options, argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L2FWD-CRYPTO arguments\n");
+
+	/* create the mbuf pool */
+	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 128,
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+	if (l2fwd_pktmbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+	/* create crypto op pool */
+	l2fwd_mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"mbuf_offload_pool", NB_MBUF, 128, 0, rte_socket_id());
+	if (l2fwd_mbuf_ol_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");
+
+	/* Enable Ethernet ports */
+	enabled_portcount = initialize_ports(&options);
+	if (enabled_portcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initialise Ethernet ports\n");
+
+	nb_ports = rte_eth_dev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, portid = 0;
+			portid < nb_ports; portid++) {
+
+		/* skip ports that are not enabled */
+		if ((options.portmask & (1 << portid)) == 0)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_rx_ports ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* A new logical core may have been assigned in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->rx_port_list[qconf->nb_rx_ports] = portid;
+		qconf->nb_rx_ports++;
+
+		printf("Lcore %u: RX port %u\n", rx_lcore_id, (unsigned)portid);
+	}
+
+
+	/* Enable Crypto devices */
+	enabled_cdevcount = initialize_cryptodevs(&options, enabled_portcount);
+	if (enabled_cdevcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initialise crypto devices\n");
+
+	nb_cryptodevs = rte_cryptodev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, cdev_id = 0;
+			cdev_id < nb_cryptodevs && enabled_cdevcount;
+			cdev_id++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(cdev_id, &info);
+
+		/* skip devices of the wrong type */
+		if (options.cdev_type != info.dev_type)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_crypto_devs ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* A new logical core may have been assigned in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->cryptodev_list[qconf->nb_crypto_devs] = cdev_id;
+		qconf->nb_crypto_devs++;
+
+		enabled_cdevcount--;
+
+		printf("Lcore %u: cryptodev %u\n", rx_lcore_id,
+				(unsigned)cdev_id);
+	}
+
+
+
+	/* launch per-lcore init on every lcore */
+	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)&options,
+			CALL_MASTER);
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (rte_eal_wait_lcore(lcore_id) < 0)
+			return -1;
+	}
+
+	return 0;
+}
-- 
2.4.3

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v6 02/10] ethdev: make error checking macros public
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 02/10] ethdev: make error checking macros public Declan Doherty
@ 2015-11-10 17:38             ` Adrien Mazarguil
  0 siblings, 0 replies; 115+ messages in thread
From: Adrien Mazarguil @ 2015-11-10 17:38 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

On Tue, Nov 10, 2015 at 05:32:35PM +0000, Declan Doherty wrote:
> Move the function pointer and port id checking macros to rte_ethdev and
> rte_dev header files, so that they can be used in the static inline
> functions there. Also replace the RTE_LOG call within
> RTE_PMD_DEBUG_TRACE so this macro can be built with the -pedantic flag
> 
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>

Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v6 03/10] eal: add __rte_packed /__rte_aligned macros
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
@ 2015-11-13 15:35             ` Thomas Monjalon
  2015-11-13 15:41               ` Declan Doherty
  0 siblings, 1 reply; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-13 15:35 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-10 17:32, Declan Doherty:
> Adding a new marco for specifing __aligned__ attribute, and updating the

2 typos spotted on this line ;)
I wonder why the "marco" typo is so common.

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v6 03/10] eal: add __rte_packed /__rte_aligned macros
  2015-11-13 15:35             ` Thomas Monjalon
@ 2015-11-13 15:41               ` Declan Doherty
  0 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 15:41 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

On 13/11/15 15:35, Thomas Monjalon wrote:
> 2015-11-10 17:32, Declan Doherty:
>> Adding a new marco for specifing __aligned__ attribute, and updating the
>
> 2 typos spotted on this line ;)
> I wonder why the "marco" typo is so common.
>

oops, I didn't have my spell-check plugin enabled in vim :(

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v6 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-11-13 15:44             ` Thomas Monjalon
  0 siblings, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-13 15:44 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-10 17:32, Declan Doherty:
> +DPDK_2.2 {
> +	global:
> +
> +	rte_cryptodevs;
> +	rte_cryptodev_callback_register;
> +	rte_cryptodev_callback_unregister;
> +	rte_cryptodev_close;
> +	rte_cryptodev_count;
> +	rte_cryptodev_count_devtype;
> +	rte_cryptodev_configure;
> +	rte_cryptodev_create_vdev;
> +	rte_cryptodev_enqueue_burst;
> +	rte_cryptodev_dequeue_burst;
> +	rte_cryptodev_get_dev_id;
> +	rte_cryptodev_info_get;
> +	rte_cryptodev_session_create;
> +	rte_cryptodev_session_free;
> +	rte_cryptodev_socket_id;
> +	rte_cryptodev_start;
> +	rte_cryptodev_stats_get;
> +	rte_cryptodev_stats_reset;
> +	rte_cryptodev_stop;
> +	rte_cryptodev_queue_pair_setup;
> +	rte_cryptodev_queue_pair_start;
> +	rte_cryptodev_queue_pair_stop;
> +	rte_cryptodev_queue_pair_count;
> +
> +	rte_cryptodev_pmd_allocate;
> +	rte_cryptodev_pmd_attach;
> +	rte_cryptodev_pmd_callback_process;
> +	rte_cryptodev_pmd_detach;
> +	rte_cryptodev_pmd_driver_register;
> +	rte_cryptodev_pmd_get_dev;
> +	rte_cryptodev_pmd_get_named_dev;
> +	rte_cryptodev_pmd_is_valid_dev;
> +	rte_cryptodev_pmd_release_device;
> +	rte_cryptodev_pmd_socket_id;
> +	rte_cryptodev_pmd_virtual_dev_init;

Why do you split the symbols in 2 parts?
Some of them are not implemented, e.g. attach(), so they should be removed.
Please keep this list in alphabetical order.

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v6 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-11-13 15:59             ` Thomas Monjalon
  2015-11-13 16:11             ` Thomas Monjalon
  1 sibling, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-13 15:59 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-10 17:32, Declan Doherty:
> @@ -841,6 +844,9 @@ struct rte_mbuf {
> +
> +       /* Chain of off-load operations to perform on mbuf */
> +       struct rte_mbuf_offload *offload_ops;
>  } __rte_cache_aligned;

Why is there a pointer in the mbuf structure?
Can it be a metadata for the crypto layer instead?

More generally, I have the feeling that the idea behind this new API
is not explained enough. How is it related to the Ethernet offloads?
Could you add a doxygen explanation starting with
/**
 * @file
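 * RTE mbuf offload
 *
 * (e.g. briefly explaining that the library lets an application attach
 * offload operations, such as crypto operations, to an mbuf, and how
 * that relates to the existing ethdev offloads)
 */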

About doxygen, please remember to integrate your new header into the doxygen build.

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v6 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-11-13 16:00             ` Thomas Monjalon
  2015-11-13 16:25               ` Declan Doherty
  0 siblings, 1 reply; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-13 16:00 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-10 17:32, Declan Doherty:
> --- a/lib/librte_mbuf_offload/rte_mbuf_offload.h
> +++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
> @@ -123,17 +123,10 @@ rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
>  {
>         struct rte_mbuf_offload *ol = m->offload_ops;
>  
> -       if (m->offload_ops != NULL && m->offload_ops->type == type)
> -               return ol;
> -
> -       ol = m->offload_ops;
> -       while (ol != NULL) {
> +       for (ol = m->offload_ops; ol != NULL; ol = ol->next)
>                 if (ol->type == type)
>                         return ol;
>  
> -               ol = ol->next;
> -       }
> -
>         return ol;
>  }

Strange: why changing the code of the previous patch?

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v6 10/10] l2fwd-crypto: crypto
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 10/10] l2fwd-crypto: crypto Declan Doherty
@ 2015-11-13 16:03             ` Thomas Monjalon
  0 siblings, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-13 16:03 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-10 17:32, Declan Doherty:
>  MAINTAINERS                    |    1 +
>  examples/l2fwd-crypto/Makefile |   50 ++
>  examples/l2fwd-crypto/main.c   | 1473 ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 1524 insertions(+)

I think you missed examples/Makefile

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v6 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
  2015-11-13 15:59             ` Thomas Monjalon
@ 2015-11-13 16:11             ` Thomas Monjalon
  1 sibling, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-13 16:11 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-10 17:32, Declan Doherty:
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_cryptodev

Why does this lib depend on cryptodev?
Shouldn't it be the reverse?

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v6 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-13 16:00             ` Thomas Monjalon
@ 2015-11-13 16:25               ` Declan Doherty
  0 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 16:25 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

On 13/11/15 16:00, Thomas Monjalon wrote:
> 2015-11-10 17:32, Declan Doherty:
>> --- a/lib/librte_mbuf_offload/rte_mbuf_offload.h
>> +++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
>> @@ -123,17 +123,10 @@ rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
>>   {
>>          struct rte_mbuf_offload *ol = m->offload_ops;
>>
>> -       if (m->offload_ops != NULL && m->offload_ops->type == type)
>> -               return ol;
>> -
>> -       ol = m->offload_ops;
>> -       while (ol != NULL) {
>> +       for (ol = m->offload_ops; ol != NULL; ol = ol->next)
>>                  if (ol->type == type)
>>                          return ol;
>>
>> -               ol = ol->next;
>> -       }
>> -
>>          return ol;
>>   }
>
> Strange: why changing the code of the previous patch?
>

I squashed this tidy-up into the wrong patch. I'll fix it in v7.

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework
  2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
                             ` (9 preceding siblings ...)
  2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 10/10] l2fwd-crypto: crypto Declan Doherty
@ 2015-11-13 18:58           ` Declan Doherty
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
                               ` (10 more replies)
  10 siblings, 11 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

This series of patches defines a set of application burst oriented APIs for
asynchronous symmetric cryptographic functions within DPDK. It also contains a
poll mode driver cryptographic device framework for the implementation of
crypto devices within DPDK.

In the patch set we also have included 2 reference implementations of crypto
PMDs. Currently both implementations  support AES128-CBC with
HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a purely
 software PMD based on Intel's multi-buffer library, which utilises both
AES-NI instructions and vector operations to accelerate crypto operations and
the second PMD utilises Intel's Quick Assist Technology (on DH895xxC) to provide
hardware accelerated crypto operations.

The API set supports two functional modes of operation:

1, A session oriented mode. In this mode the user creates a crypto session
which defines all the immutable data required to perform a particular crypto
operation in advance, including cipher/hash algorithms and operations to be
performed as well as the keys to used etc. The session is then referenced by
the crypto operation data structure which is a data structure specific to each
mbuf. It is contains all mutable data about the cryto operation to be
performed, such as data offsets and lengths into the mbuf's data payload for
cipher and hash operations to be performed.

2, A session-less mode. In this mode the user is able to provision crypto
operations on an mbuf without the need to have a cached session created in
advance, but at the cost of entailing the overhead of calculating
authentication pre-computes and performing key expansions in-line with the
crypto operation. The crypto xform chain is directly attached to the op struct
in this mode, so the op struct now contains all of the immutable crypto operation
parameters that would be normally set within a session. Once all mutable and
immutable parameters are set the crypto operation data structure can be attached
to the specified mbuf and enqueued on a specified crypto device for processing.
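
As a rough illustration of the session oriented mode, using the APIs
introduced in this set (error handling omitted, names as defined in
the patches below):

	/* immutable parameters, set up once per flow */
	struct rte_cryptodev_session *session =
		rte_cryptodev_session_create(cdev_id, &cipher_xform);

	/* mutable, per-packet parameters */
	struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(ol_pool,
			RTE_PKTMBUF_OL_CRYPTO);
	rte_crypto_op_attach_session(&ol->op.crypto, session);
	/* ... set cipher/hash data offsets and lengths, iv, digest ... */
	rte_pktmbuf_offload_attach(m, ol);
	rte_cryptodev_enqueue_burst(cdev_id, qp_id, &m, 1);

In the session-less mode the xform chain is instead attached directly
to ol->op.crypto and no session is created.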

The patch set contains the following features:
- Crypto device APIs and device framework
- Implementation of a software crypto PMD based on multi-buffer library
- Implementation of a hardware crypto PMD based on Intel QAT (DH895xxC)
- Unit and performance tests which give an example of utilising the crypto APIs.
- Sample application which performs crypto operations on the IP payload of the
  packets being forwarded

Current Status:
There is no support for chained mbufs and, as mentioned above, the PMDs
currently implement support for AES128-CBC/AES256-CBC/AES512-CBC
and HMAC_SHA1/SHA256/SHA512.

v7:
 - Fix typos in the commit message of the "eal: add __rte_packed /__rte_aligned macros" patch
 - Include rte_mbuf_offload in the doxygen build and update its file comments to clarify
   library usage. Also move a clean-up which was in the wrong patch into the
   rte_mbuf_offload patch.
 - Tidy up the map file for the cryptodev library.
 - Add l2fwd-crypto to the main examples makefile.
v6:
 - Fix 32-bit build issue caused by casting in new rte_pktmbuf_mtophys_offset macro
 - Fix truncation of log message by new rte_pmd_debug_trace inline function

v5:
 - Made the ethdev macros for function pointer and port id checking public and
   available for use by the cryptodev. The initial two patches combine changes
   from the original cryptodev patch and the discussion in
   http://dpdk.org/ml/archives/dev/2015-November/027871.html
 - Split out the changes to create the new __rte_packed and __rte_aligned macros
   into separate patches from the main cryptodev patch set for clarity
 - Further code cleaning, removal of the currently unsupported GCM code from the
   aesni_mb PMD
v4:
 - Some more EOF whitespace and checkpatch fixes

v3:
 - Fixes a document build error, which I missed in the V2
 - Fixes for remaining checkpatch errors
 - Disables QAT and AESNI_MB PMDs being built by default as they have external
   library dependencies

v2: 
 - Introduces a new library to support attaching offload operations to a mbuf
 - Remove unused APIs from cryptodev
 - PMD code refactor due to new rte_mbuf_offload structure
 - General bug fixes and code tidy up

Declan Doherty (10):
  ethdev: rename macros to have RTE_ prefix
  ethdev: make error checking macros public
  eal: add __rte_packed /__rte_aligned macros
  mbuf: add new marcos to get the physical address of data
  cryptodev: Initial DPDK Crypto APIs and device framework release
  mbuf_offload: library to support attaching offloads to a mbuf
  qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  aesni_mb_pmd: Initial implementation of multi buffer based crypto
    device
  app/test: add cryptodev unit and performance tests
  l2fwd-crypto: crypto

 MAINTAINERS                                        |   14 +
 app/test/Makefile                                  |    4 +
 app/test/test.c                                    |   92 +-
 app/test/test.h                                    |   34 +-
 app/test/test_cryptodev.c                          | 1986 +++++++++++++++++++
 app/test/test_cryptodev.h                          |   68 +
 app/test/test_cryptodev_perf.c                     | 2062 ++++++++++++++++++++
 app/test/test_link_bonding.c                       |    6 +-
 app/test/test_link_bonding_mode4.c                 |    7 +-
 app/test/test_link_bonding_rssconf.c               |    7 +-
 config/common_bsdapp                               |   37 +-
 config/common_linuxapp                             |   37 +-
 doc/api/doxy-api-index.md                          |    2 +
 doc/api/doxy-api.conf                              |    2 +
 doc/guides/cryptodevs/aesni_mb.rst                 |   76 +
 doc/guides/cryptodevs/index.rst                    |   43 +
 doc/guides/cryptodevs/qat.rst                      |  194 ++
 doc/guides/index.rst                               |    1 +
 drivers/Makefile                                   |    1 +
 drivers/crypto/Makefile                            |   38 +
 drivers/crypto/aesni_mb/Makefile                   |   63 +
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |  210 ++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |  669 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  298 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  229 +++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |    3 +
 drivers/crypto/qat/Makefile                        |   63 +
 .../qat/qat_adf/adf_transport_access_macros.h      |  174 ++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            |  316 +++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         |  404 ++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            |  306 +++
 drivers/crypto/qat/qat_adf/qat_algs.h              |  125 ++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   |  601 ++++++
 drivers/crypto/qat/qat_crypto.c                    |  561 ++++++
 drivers/crypto/qat/qat_crypto.h                    |  124 ++
 drivers/crypto/qat/qat_logs.h                      |   78 +
 drivers/crypto/qat/qat_qp.c                        |  429 ++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |    3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             |  137 ++
 examples/Makefile                                  |    1 +
 examples/l2fwd-crypto/Makefile                     |   50 +
 examples/l2fwd-crypto/main.c                       | 1473 ++++++++++++++
 lib/Makefile                                       |    2 +
 lib/librte_cryptodev/Makefile                      |   60 +
 lib/librte_cryptodev/rte_crypto.h                  |  613 ++++++
 lib/librte_cryptodev/rte_cryptodev.c               | 1092 +++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |  649 ++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  549 ++++++
 lib/librte_cryptodev/rte_cryptodev_version.map     |   32 +
 lib/librte_eal/common/include/rte_dev.h            |   53 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eal/common/include/rte_memory.h         |   14 +-
 lib/librte_ether/rte_ethdev.c                      |  607 +++---
 lib/librte_ether/rte_ethdev.h                      |   26 +
 lib/librte_mbuf/rte_mbuf.h                         |   29 +
 lib/librte_mbuf_offload/Makefile                   |   52 +
 lib/librte_mbuf_offload/rte_mbuf_offload.c         |  100 +
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |  302 +++
 .../rte_mbuf_offload_version.map                   |    7 +
 mk/rte.app.mk                                      |    9 +
 60 files changed, 14842 insertions(+), 383 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 01/10] ethdev: rename macros to have RTE_ prefix
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-17 14:44               ` Declan Doherty
  2015-11-17 16:04               ` [dpdk-dev] [PATCH v7.1 " Declan Doherty
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 02/10] ethdev: make error checking macros public Declan Doherty
                               ` (9 subsequent siblings)
  10 siblings, 2 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

The macros to check that the function pointers and port ids are valid
for an ethdev are potentially useful to have in a common header for
use with all PMDs. However, since they would then become externally
visible, we apply the RTE_ & RTE_ETH_ prefixes to them as appropriate.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>

---
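For reference, the usage pattern after the rename, mirroring call sites such
as rte_eth_dev_set_link_up() in the diff below (purely illustrative; it is
patch 02 of this series that then makes the macros public):

/* Inside rte_ethdev.c, after this patch: */
static int
example_dev_op(uint8_t port_id)
{
	struct rte_eth_dev *dev;

	/* returns -EINVAL if port_id does not map to a valid device */
	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
	dev = &rte_eth_devices[port_id];

	/* returns -ENOTSUP if the PMD leaves the callback unimplemented */
	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
	return (*dev->dev_ops->dev_set_link_up)(dev);
}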
 lib/librte_ether/rte_ethdev.c | 595 +++++++++++++++++++++---------------------
 1 file changed, 298 insertions(+), 297 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index e0e1dca..3bb25e4 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -70,58 +70,59 @@
 #include "rte_ethdev.h"
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
-#define PMD_DEBUG_TRACE(fmt, args...) do {                        \
+#define RTE_PMD_DEBUG_TRACE(fmt, args...) do { \
 		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
 	} while (0)
 #else
-#define PMD_DEBUG_TRACE(fmt, args...)
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
 #endif
 
 /* Macros for checking for restricting functions to primary instance only */
-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define PROC_PRIMARY_OR_RET() do { \
+#define RTE_PROC_PRIMARY_OR_RET() do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for invalid function pointers in dev_ops structure */
-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define FUNC_PTR_OR_RET(func) do { \
+#define RTE_FUNC_PTR_OR_RET(func) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for valid port */
-#define VALID_PORTID_OR_ERR_RET(port_id, retval) do {		\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return retval;					\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) {  \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return retval; \
+	} \
 } while (0)
 
-#define VALID_PORTID_OR_RET(port_id) do {			\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return;						\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return; \
+	} \
 } while (0)
 
+
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 static struct rte_eth_dev_data *rte_eth_dev_data;
@@ -244,7 +245,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 
 	port_id = rte_eth_dev_find_free_port();
 	if (port_id == RTE_MAX_ETHPORTS) {
-		PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
+		RTE_PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
 		return NULL;
 	}
 
@@ -252,7 +253,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 		rte_eth_dev_data_alloc();
 
 	if (rte_eth_dev_allocated(name) != NULL) {
-		PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
+		RTE_PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
 				name);
 		return NULL;
 	}
@@ -339,7 +340,7 @@ rte_eth_dev_init(struct rte_pci_driver *pci_drv,
 	if (diag == 0)
 		return 0;
 
-	PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
+	RTE_PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
 			pci_drv->name,
 			(unsigned) pci_dev->id.vendor_id,
 			(unsigned) pci_dev->id.device_id);
@@ -447,10 +448,10 @@ rte_eth_dev_get_device_type(uint8_t port_id)
 static int
 rte_eth_dev_get_addr_by_port(uint8_t port_id, struct rte_pci_addr *addr)
 {
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -463,10 +464,10 @@ rte_eth_dev_get_name_by_port(uint8_t port_id, char *name)
 {
 	char *tmp;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -483,7 +484,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id)
 	int i;
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -509,7 +510,7 @@ rte_eth_dev_get_port_by_addr(const struct rte_pci_addr *addr, uint8_t *port_id)
 	struct rte_pci_device *pci_dev = NULL;
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -536,7 +537,7 @@ rte_eth_dev_is_detachable(uint8_t port_id)
 	uint32_t dev_flags;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
@@ -735,7 +736,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
 		rxq = dev->data->rx_queues;
 
@@ -766,20 +767,20 @@ rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -796,20 +797,20 @@ rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -826,20 +827,20 @@ rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -856,20 +857,20 @@ rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -895,7 +896,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
 		txq = dev->data->tx_queues;
 
@@ -929,19 +930,19 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of RX queues requested (%u) is greater than max supported(%d)\n",
 			nb_rx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of TX queues requested (%u) is greater than max supported(%d)\n",
 			nb_tx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
@@ -949,11 +950,11 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
@@ -965,22 +966,22 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
 	if (nb_rx_q > dev_info.max_rx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
 				port_id, nb_rx_q, dev_info.max_rx_queues);
 		return -EINVAL;
 	}
 	if (nb_rx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
 				port_id, nb_tx_q, dev_info.max_tx_queues);
 		return -EINVAL;
 	}
 	if (nb_tx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
@@ -993,7 +994,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	if ((dev_conf->intr_conf.lsc == 1) &&
 		(!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))) {
-			PMD_DEBUG_TRACE("driver %s does not support lsc\n",
+			RTE_PMD_DEBUG_TRACE("driver %s does not support lsc\n",
 					dev->data->drv_name);
 			return -EINVAL;
 	}
@@ -1005,14 +1006,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	if (dev_conf->rxmode.jumbo_frame == 1) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" > max valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
 				(unsigned)dev_info.max_rx_pktlen);
 			return -EINVAL;
 		} else if (dev_conf->rxmode.max_rx_pkt_len < ETHER_MIN_LEN) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" < min valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
@@ -1032,14 +1033,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	diag = rte_eth_dev_rx_queue_config(dev, nb_rx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
 				port_id, diag);
 		return diag;
 	}
 
 	diag = rte_eth_dev_tx_queue_config(dev, nb_tx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		return diag;
@@ -1047,7 +1048,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		rte_eth_dev_tx_queue_config(dev, 0);
@@ -1086,7 +1087,7 @@ rte_eth_dev_config_restore(uint8_t port_id)
 			(dev->data->mac_pool_sel[i] & (1ULL << pool)))
 			(*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
 		else {
-			PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
 					port_id);
 			/* exit the loop but not return an error */
 			break;
@@ -1114,16 +1115,16 @@ rte_eth_dev_start(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
 
 	if (dev->data->dev_started != 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already started\n",
 			port_id);
 		return 0;
@@ -1138,7 +1139,7 @@ rte_eth_dev_start(uint8_t port_id)
 	rte_eth_dev_config_restore(port_id);
 
 	if (dev->data->dev_conf.intr_conf.lsc == 0) {
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 	return 0;
@@ -1151,15 +1152,15 @@ rte_eth_dev_stop(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
 
 	if (dev->data->dev_started == 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already stopped\n",
 			port_id);
 		return;
@@ -1176,13 +1177,13 @@ rte_eth_dev_set_link_up(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_up)(dev);
 }
 
@@ -1193,13 +1194,13 @@ rte_eth_dev_set_link_down(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_down)(dev);
 }
 
@@ -1210,12 +1211,12 @@ rte_eth_dev_close(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_close)(dev);
 
@@ -1238,24 +1239,24 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
 
 	/*
 	 * Check the size of the mbuf data buffer.
@@ -1264,7 +1265,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	 */
 	rte_eth_dev_info_get(port_id, &dev_info);
 	if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) {
-		PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
+		RTE_PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
 				mp->name, (int) mp->private_data_size,
 				(int) sizeof(struct rte_pktmbuf_pool_private));
 		return -ENOSPC;
@@ -1272,7 +1273,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	mbp_buf_size = rte_pktmbuf_data_room_size(mp);
 
 	if ((mbp_buf_size - RTE_PKTMBUF_HEADROOM) < dev_info.min_rx_bufsize) {
-		PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
+		RTE_PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
 				"(RTE_PKTMBUF_HEADROOM=%d + min_rx_bufsize(dev)"
 				"=%d)\n",
 				mp->name,
@@ -1288,7 +1289,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 			nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
 			nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
 
-		PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+		RTE_PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
 			"should be: <= %hu, = %hu, and a product of %hu\n",
 			nb_rx_desc,
 			dev_info.rx_desc_lim.nb_max,
@@ -1321,24 +1322,24 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
 
 	rte_eth_dev_info_get(port_id, &dev_info);
 
@@ -1354,10 +1355,10 @@ rte_eth_promiscuous_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
 	(*dev->dev_ops->promiscuous_enable)(dev);
 	dev->data->promiscuous = 1;
 }
@@ -1367,10 +1368,10 @@ rte_eth_promiscuous_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
 	dev->data->promiscuous = 0;
 	(*dev->dev_ops->promiscuous_disable)(dev);
 }
@@ -1380,7 +1381,7 @@ rte_eth_promiscuous_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->promiscuous;
@@ -1391,10 +1392,10 @@ rte_eth_allmulticast_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
 	(*dev->dev_ops->allmulticast_enable)(dev);
 	dev->data->all_multicast = 1;
 }
@@ -1404,10 +1405,10 @@ rte_eth_allmulticast_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
 	dev->data->all_multicast = 0;
 	(*dev->dev_ops->allmulticast_disable)(dev);
 }
@@ -1417,7 +1418,7 @@ rte_eth_allmulticast_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->all_multicast;
@@ -1442,13 +1443,13 @@ rte_eth_link_get(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 1);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1459,13 +1460,13 @@ rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 0);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1476,12 +1477,12 @@ rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	memset(stats, 0, sizeof(*stats));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
 	(*dev->dev_ops->stats_get)(dev, stats);
 	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
 	return 0;
@@ -1492,10 +1493,10 @@ rte_eth_stats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
 	(*dev->dev_ops->stats_reset)(dev);
 }
 
@@ -1510,7 +1511,7 @@ rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstats *xstats,
 	signed xcount = 0;
 	uint64_t val, *stats_ptr;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
@@ -1590,7 +1591,7 @@ rte_eth_xstats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	/* implemented by the driver */
@@ -1609,11 +1610,11 @@ set_queue_stats_mapping(uint8_t port_id, uint16_t queue_id, uint8_t stat_idx,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
 	return (*dev->dev_ops->queue_stats_mapping_set)
 			(dev, queue_id, stat_idx, is_rx);
 }
@@ -1647,14 +1648,14 @@ rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
 		.nb_align = 1,
 	};
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
 	dev_info->rx_desc_lim = lim;
 	dev_info->tx_desc_lim = lim;
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
 	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
 	dev_info->pci_dev = dev->pci_dev;
 	dev_info->driver_name = dev->data->drv_name;
@@ -1665,7 +1666,7 @@ rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 	ether_addr_copy(&dev->data->mac_addrs[0], mac_addr);
 }
@@ -1676,7 +1677,7 @@ rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	*mtu = dev->data->mtu;
@@ -1689,9 +1690,9 @@ rte_eth_dev_set_mtu(uint8_t port_id, uint16_t mtu)
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
 
 	ret = (*dev->dev_ops->mtu_set)(dev, mtu);
 	if (!ret)
@@ -1705,19 +1706,19 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
-		PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
 
 	if (vlan_id > 4095) {
-		PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
+		RTE_PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
 				port_id, (unsigned) vlan_id);
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
 
 	return (*dev->dev_ops->vlan_filter_set)(dev, vlan_id, on);
 }
@@ -1727,14 +1728,14 @@ rte_eth_dev_set_vlan_strip_on_queue(uint8_t port_id, uint16_t rx_queue_id, int o
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_strip_queue_set)(dev, rx_queue_id, on);
 
 	return 0;
@@ -1745,9 +1746,9 @@ rte_eth_dev_set_vlan_ether_type(uint8_t port_id, uint16_t tpid)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_tpid_set)(dev, tpid);
 
 	return 0;
@@ -1761,7 +1762,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	int mask = 0;
 	int cur, org = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	/*check which option changed by application*/
@@ -1790,7 +1791,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	if (mask == 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -1802,7 +1803,7 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	struct rte_eth_dev *dev;
 	int ret = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
@@ -1822,9 +1823,9 @@ rte_eth_dev_set_vlan_pvid(uint8_t port_id, uint16_t pvid, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_pvid_set)(dev, pvid, on);
 
 	return 0;
@@ -1835,9 +1836,9 @@ rte_eth_dev_flow_ctrl_get(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
 	memset(fc_conf, 0, sizeof(*fc_conf));
 	return (*dev->dev_ops->flow_ctrl_get)(dev, fc_conf);
 }
@@ -1847,14 +1848,14 @@ rte_eth_dev_flow_ctrl_set(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if ((fc_conf->send_xon != 0) && (fc_conf->send_xon != 1)) {
-		PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
 	return (*dev->dev_ops->flow_ctrl_set)(dev, fc_conf);
 }
 
@@ -1863,9 +1864,9 @@ rte_eth_dev_priority_flow_ctrl_set(uint8_t port_id, struct rte_eth_pfc_conf *pfc
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
-		PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
 
@@ -1886,7 +1887,7 @@ rte_eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (reta_size != RTE_ALIGN(reta_size, RTE_RETA_GROUP_SIZE)) {
-		PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
+		RTE_PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
 							RTE_RETA_GROUP_SIZE);
 		return -EINVAL;
 	}
@@ -1911,7 +1912,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (max_rxq == 0) {
-		PMD_DEBUG_TRACE("No receive queue is available\n");
+		RTE_PMD_DEBUG_TRACE("No receive queue is available\n");
 		return -EINVAL;
 	}
 
@@ -1920,7 +1921,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
-			PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
+			RTE_PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
 				"the maximum rxq index: %u\n", idx, shift,
 				reta_conf[idx].reta[shift], max_rxq);
 			return -EINVAL;
@@ -1938,7 +1939,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	/* Check mask bits */
 	ret = rte_eth_check_reta_mask(reta_conf, reta_size);
 	if (ret < 0)
@@ -1952,7 +1953,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	if (ret < 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
 	return (*dev->dev_ops->reta_update)(dev, reta_conf, reta_size);
 }
 
@@ -1965,7 +1966,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 	int ret;
 
 	if (port_id >= nb_ports) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
@@ -1975,7 +1976,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 		return ret;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
 	return (*dev->dev_ops->reta_query)(dev, reta_conf, reta_size);
 }
 
@@ -1985,16 +1986,16 @@ rte_eth_dev_rss_hash_update(uint8_t port_id, struct rte_eth_rss_conf *rss_conf)
 	struct rte_eth_dev *dev;
 	uint16_t rss_hash_protos;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	rss_hash_protos = rss_conf->rss_hf;
 	if ((rss_hash_protos != 0) &&
 	    ((rss_hash_protos & ETH_RSS_PROTO_MASK) == 0)) {
-		PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
+		RTE_PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
 				rss_hash_protos);
 		return -EINVAL;
 	}
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_update)(dev, rss_conf);
 }
 
@@ -2004,9 +2005,9 @@ rte_eth_dev_rss_hash_conf_get(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_conf_get)(dev, rss_conf);
 }
 
@@ -2016,19 +2017,19 @@ rte_eth_dev_udp_tunnel_add(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_add)(dev, udp_tunnel);
 }
 
@@ -2038,20 +2039,20 @@ rte_eth_dev_udp_tunnel_delete(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_del)(dev, udp_tunnel);
 }
 
@@ -2060,9 +2061,9 @@ rte_eth_led_on(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_on)(dev);
 }
 
@@ -2071,9 +2072,9 @@ rte_eth_led_off(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_off)(dev);
 }
 
@@ -2107,17 +2108,17 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	int index;
 	uint64_t pool_mask;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
 
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
 	if (pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
+		RTE_PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -2125,7 +2126,7 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	if (index < 0) {
 		index = get_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 				port_id);
 			return -ENOSPC;
 		}
@@ -2155,13 +2156,13 @@ rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr)
 	struct rte_eth_dev *dev;
 	int index;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
 
 	index = get_mac_addr_index(port_id, addr);
 	if (index == 0) {
-		PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
 		return -EADDRINUSE;
 	} else if (index < 0)
 		return 0;  /* Do nothing if address wasn't found */
@@ -2183,13 +2184,13 @@ rte_eth_dev_default_mac_addr_set(uint8_t port_id, struct ether_addr *addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (!is_valid_assigned_ether_addr(addr))
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
 
 	/* Update default address in NIC data structure */
 	ether_addr_copy(addr, &dev->data->mac_addrs[0]);
@@ -2207,22 +2208,22 @@ rte_eth_dev_set_vf_rxmode(uint8_t port_id,  uint16_t vf,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
 		return -EINVAL;
 	}
 
 	if (rx_mode == 0) {
-		PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx_mode)(dev, vf, rx_mode, on);
 }
 
@@ -2257,11 +2258,11 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
@@ -2273,20 +2274,20 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 
 	if (index < 0) {
 		if (!on) {
-			PMD_DEBUG_TRACE("port %d: the MAC address was not "
+			RTE_PMD_DEBUG_TRACE("port %d: the MAC address was not "
 				"set in UTA\n", port_id);
 			return -EINVAL;
 		}
 
 		index = get_hash_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 					port_id);
 			return -ENOSPC;
 		}
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
 	ret = (*dev->dev_ops->uc_hash_table_set)(dev, addr, on);
 	if (ret == 0) {
 		/* Update address in NIC data structure */
@@ -2306,11 +2307,11 @@ rte_eth_dev_uc_all_hash_table_set(uint8_t port_id, uint8_t on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
 	return (*dev->dev_ops->uc_all_hash_table_set)(dev, on);
 }
 
@@ -2321,18 +2322,18 @@ rte_eth_dev_set_vf_rx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx)(dev, vf, on);
 }
 
@@ -2343,18 +2344,18 @@ rte_eth_dev_set_vf_tx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_tx)(dev, vf, on);
 }
 
@@ -2364,22 +2365,22 @@ rte_eth_dev_set_vf_vlan_filter(uint8_t port_id, uint16_t vlan_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
 	if (vlan_id > ETHER_MAX_VLAN_ID) {
-		PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
 			vlan_id);
 		return -EINVAL;
 	}
 
 	if (vf_mask == 0) {
-		PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_vlan_filter)(dev, vlan_id,
 						   vf_mask, vlan_on);
 }
@@ -2391,26 +2392,26 @@ int rte_eth_set_queue_rate_limit(uint8_t port_id, uint16_t queue_idx,
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_link link;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (queue_idx > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("set queue rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:port %d: "
 				"invalid queue id=%d\n", port_id, queue_idx);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 			tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_queue_rate_limit)(dev, queue_idx, tx_rate);
 }
 
@@ -2424,26 +2425,26 @@ int rte_eth_set_vf_rate_limit(uint8_t port_id, uint16_t vf, uint16_t tx_rate,
 	if (q_msk == 0)
 		return 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (vf > dev_info.max_vfs) {
-		PMD_DEBUG_TRACE("set VF rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:port %d: "
 				"invalid vf id=%d\n", port_id, vf);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 				tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rate_limit)(dev, vf, tx_rate, q_msk);
 }
 
@@ -2454,14 +2455,14 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (mirror_conf->rule_type == 0) {
-		PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if (mirror_conf->dst_pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
+		RTE_PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
 				ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
@@ -2469,18 +2470,18 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 	if ((mirror_conf->rule_type & (ETH_MIRROR_VIRTUAL_POOL_UP |
 	     ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
 	    (mirror_conf->pool_mask == 0)) {
-		PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if ((mirror_conf->rule_type & ETH_MIRROR_VLAN) &&
 	    mirror_conf->vlan.vlan_mask == 0) {
-		PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_set)(dev, mirror_conf, rule_id, on);
 }
@@ -2490,10 +2491,10 @@ rte_eth_mirror_rule_reset(uint8_t port_id, uint8_t rule_id)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_reset)(dev, rule_id);
 }
@@ -2505,12 +2506,12 @@ rte_eth_rx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
@@ -2523,13 +2524,13 @@ rte_eth_tx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id],
@@ -2541,10 +2542,10 @@ rte_eth_rx_queue_count(uint8_t port_id, uint16_t queue_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
 	return (*dev->dev_ops->rx_queue_count)(dev, queue_id);
 }
 
@@ -2553,10 +2554,10 @@ rte_eth_rx_descriptor_done(uint8_t port_id, uint16_t queue_id, uint16_t offset)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
 	return (*dev->dev_ops->rx_descriptor_done)(dev->data->rx_queues[queue_id],
 						   offset);
 }
@@ -2573,7 +2574,7 @@ rte_eth_dev_callback_register(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2613,7 +2614,7 @@ rte_eth_dev_callback_unregister(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2676,14 +2677,14 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
@@ -2691,7 +2692,7 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 		vec = intr_handle->intr_vec[qid];
 		rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 		if (rc && rc != -EEXIST) {
-			PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+			RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 					" op %d epfd %d vec %u\n",
 					port_id, qid, op, epfd, vec);
 		}
@@ -2710,26 +2711,26 @@ rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id,
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
 		return -EINVAL;
 	}
 
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
 	vec = intr_handle->intr_vec[queue_id];
 	rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 	if (rc && rc != -EEXIST) {
-		PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+		RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 				" op %d epfd %d vec %u\n",
 				port_id, queue_id, op, epfd, vec);
 		return rc;
@@ -2745,13 +2746,13 @@ rte_eth_dev_rx_intr_enable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_enable)(dev, queue_id);
 }
 
@@ -2762,13 +2763,13 @@ rte_eth_dev_rx_intr_disable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_disable)(dev, queue_id);
 }
 
@@ -2777,10 +2778,10 @@ int rte_eth_dev_bypass_init(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
 	(*dev->dev_ops->bypass_init)(dev);
 	return 0;
 }
@@ -2790,10 +2791,10 @@ rte_eth_dev_bypass_state_show(uint8_t port_id, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_show)(dev, state);
 	return 0;
 }
@@ -2803,10 +2804,10 @@ rte_eth_dev_bypass_state_set(uint8_t port_id, uint32_t *new_state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_set)(dev, new_state);
 	return 0;
 }
@@ -2816,10 +2817,10 @@ rte_eth_dev_bypass_event_show(uint8_t port_id, uint32_t event, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_show)(dev, event, state);
 	return 0;
 }
@@ -2829,11 +2830,11 @@ rte_eth_dev_bypass_event_store(uint8_t port_id, uint32_t event, uint32_t state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_set)(dev, event, state);
 	return 0;
 }
@@ -2843,11 +2844,11 @@ rte_eth_dev_wd_timeout_store(uint8_t port_id, uint32_t timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_set)(dev, timeout);
 	return 0;
 }
@@ -2857,11 +2858,11 @@ rte_eth_dev_bypass_ver_show(uint8_t port_id, uint32_t *ver)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_ver_show)(dev, ver);
 	return 0;
 }
@@ -2871,11 +2872,11 @@ rte_eth_dev_bypass_wd_timeout_show(uint8_t port_id, uint32_t *wd_timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_show)(dev, wd_timeout);
 	return 0;
 }
@@ -2885,11 +2886,11 @@ rte_eth_dev_bypass_wd_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_reset)(dev);
 	return 0;
 }
@@ -2900,10 +2901,10 @@ rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type,
 				RTE_ETH_FILTER_NOP, NULL);
 }
@@ -2914,10 +2915,10 @@ rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type, filter_op, arg);
 }
 
@@ -3087,18 +3088,18 @@ rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
@@ -3111,18 +3112,18 @@ rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
@@ -3136,10 +3137,10 @@ rte_eth_dev_set_mc_addr_list(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
 	return dev->dev_ops->set_mc_addr_list(dev, mc_addr_set, nb_mc_addr);
 }
 
@@ -3148,10 +3149,10 @@ rte_eth_timesync_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_enable)(dev);
 }
 
@@ -3160,10 +3161,10 @@ rte_eth_timesync_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_disable)(dev);
 }
 
@@ -3173,10 +3174,10 @@ rte_eth_timesync_read_rx_timestamp(uint8_t port_id, struct timespec *timestamp,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_rx_timestamp)(dev, timestamp, flags);
 }
 
@@ -3185,10 +3186,10 @@ rte_eth_timesync_read_tx_timestamp(uint8_t port_id, struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_tx_timestamp)(dev, timestamp);
 }
 
@@ -3197,10 +3198,10 @@ rte_eth_dev_get_reg_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
 	return (*dev->dev_ops->get_reg_length)(dev);
 }
 
@@ -3209,10 +3210,10 @@ rte_eth_dev_get_reg_info(uint8_t port_id, struct rte_dev_reg_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
 	return (*dev->dev_ops->get_reg)(dev, info);
 }
 
@@ -3221,10 +3222,10 @@ rte_eth_dev_get_eeprom_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom_length)(dev);
 }
 
@@ -3233,10 +3234,10 @@ rte_eth_dev_get_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom)(dev, info);
 }
 
@@ -3245,10 +3246,10 @@ rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->set_eeprom)(dev, info);
 }
 
@@ -3259,14 +3260,14 @@ rte_eth_dev_get_dcb_info(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	memset(dcb_info, 0, sizeof(struct rte_eth_dcb_info));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
 	return (*dev->dev_ops->get_dcb_info)(dev, dcb_info);
 }
 
@@ -3274,7 +3275,7 @@ void
 rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev)
 {
 	if ((eth_dev == NULL) || (pci_dev == NULL)) {
-		PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
+		RTE_PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
 				eth_dev, pci_dev);
 	}
 
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 02/10] ethdev: make error checking macros public
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
                               ` (8 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

Move the function pointer and port id checking macros to rte_ethdev and
rte_dev header files, so that they can be used in the static inline
functions there. Also replace the RTE_LOG call within
RTE_PMD_DEBUG_TRACE so this macro can be built with the -pedantic flag.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>

---
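For reference, a minimal sketch of how the now-public macros are used from a
static inline function in a header, assuming rte_ethdev.h is included. The
wrapper below is illustrative only and not part of this patch; link_update is
an existing eth_dev_ops callback:

	static inline int
	example_link_update(uint8_t port_id)
	{
		struct rte_eth_dev *dev;

		RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);

		dev = &rte_eth_devices[port_id];
		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);

		/* wait_to_complete = 0: do not block on link negotiation */
		return (*dev->dev_ops->link_update)(dev, 0);
	}
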
 lib/librte_eal/common/include/rte_dev.h | 54 ++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.c           | 54 ---------------------------------
 lib/librte_ether/rte_ethdev.h           | 26 ++++++++++++++++
 3 files changed, 80 insertions(+), 54 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_dev.h b/lib/librte_eal/common/include/rte_dev.h
index f601d21..f1b5507 100644
--- a/lib/librte_eal/common/include/rte_dev.h
+++ b/lib/librte_eal/common/include/rte_dev.h
@@ -46,8 +46,62 @@
 extern "C" {
 #endif
 
+#include <stdarg.h>
+#include <stdio.h>
 #include <sys/queue.h>
 
+#include <rte_log.h>
+
+__attribute__((format(printf, 2, 0)))
+static inline void
+rte_pmd_debug_trace(const char *func_name, const char *fmt, ...)
+{
+	va_list ap;
+
+	va_start(ap, fmt);
+
+	char buffer[vsnprintf(NULL, 0, fmt, ap) + 1];
+
+	va_end(ap);
+
+	va_start(ap, fmt);
+	vsnprintf(buffer, sizeof(buffer), fmt, ap);
+	va_end(ap);
+
+	rte_log(RTE_LOG_ERR, RTE_LOGTYPE_PMD, "%s: %s", func_name, buffer);
+}
+
+/* Macros for restricting functions to the primary process instance only */
+#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		return; \
+	} \
+} while (0)
+
+/* Macros to check for invalid function pointers */
+#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
+		return; \
+	} \
+} while (0)
+
+
 /** Double linked list of device drivers. */
 TAILQ_HEAD(rte_driver_list, rte_driver);
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 3bb25e4..d3c8aba 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -69,60 +69,6 @@
 #include "rte_ether.h"
 #include "rte_ethdev.h"
 
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-#define RTE_PMD_DEBUG_TRACE(fmt, args...) do { \
-		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
-	} while (0)
-#else
-#define RTE_PMD_DEBUG_TRACE(fmt, args...)
-#endif
-
-/* Macros for checking for restricting functions to primary instance only */
-#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define RTE_PROC_PRIMARY_OR_RET() do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for invalid function pointers in dev_ops structure */
-#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
-	if ((func) == NULL) { \
-		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define RTE_FUNC_PTR_OR_RET(func) do { \
-	if ((func) == NULL) { \
-		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for valid port */
-#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
-	if (!rte_eth_dev_is_valid_port(port_id)) {  \
-		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return retval; \
-	} \
-} while (0)
-
-#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
-	if (!rte_eth_dev_is_valid_port(port_id)) { \
-		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return; \
-	} \
-} while (0)
-
-
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 static struct rte_eth_dev_data *rte_eth_dev_data;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 48a540d..9b07a0b 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -172,6 +172,8 @@ extern "C" {
 
 #include <stdint.h>
 
+#include <rte_dev.h>
+
 /* Use this macro to check if LRO API is supported */
 #define RTE_ETHDEV_HAS_LRO_SUPPORT
 
@@ -931,6 +933,30 @@ struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
 TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
+#endif
+
+
+/* Macros to check for valid port */
+#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return; \
+	} \
+} while (0)
+
 /*
  * Definitions of all functions exported by an Ethernet driver through the
  * the generic structure of type *eth_dev_ops* supplied in the *rte_eth_dev*
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 03/10] eal: add __rte_packed /__rte_aligned macros
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 02/10] ethdev: make error checking macros public Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 04/10] mbuf: add new macros to get the physical address of data Declan Doherty
                               ` (7 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

Adding a new macro for specifying the __aligned__ attribute, and updating
the current __rte_cache_aligned macro to use it.

Also adding a new macro to specify the __packed__ attribute.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
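For reference, a minimal sketch of the intended usage (the structures below
are illustrative only and not part of this patch):

	#include <rte_memory.h>

	/* Packed wire-format header: no compiler padding between fields. */
	struct example_wire_hdr {
		uint8_t type;
		uint32_t length;
	} __rte_packed;

	/* Force 64-byte alignment, e.g. for a DMA descriptor entry. */
	struct example_desc {
		uint64_t addr;
		uint64_t flags;
	} __rte_aligned(64);

	/* Per-lcore stats padded out to a cache line to avoid false
	 * sharing between cores. */
	struct example_stats {
		uint64_t packets;
		uint64_t bytes;
	} __rte_cache_aligned;
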
 lib/librte_eal/common/include/rte_memory.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..18fd952 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -104,7 +114,7 @@ struct rte_memseg {
 	 /**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 04/10] mbuf: add new macros to get the physical address of data
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
                               ` (2 preceding siblings ...)
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-25  0:25               ` Thomas Monjalon
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
                               ` (6 subsequent siblings)
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
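For reference, a minimal sketch of the intended usage, assuming m is a valid
struct rte_mbuf * and off is an offset within its data length (illustrative
only, not part of this patch):

	#include <rte_mbuf.h>

	/* Physical address of the first byte of the mbuf's data. */
	phys_addr_t data_pa = rte_pktmbuf_mtophys(m);

	/* Physical address off bytes into the data, e.g. the start of the
	 * region a crypto device should cipher. */
	phys_addr_t region_pa = rte_pktmbuf_mtophys_offset(m, off);
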
 lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4a93189..ef1ee26 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1622,6 +1622,29 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address that points to an offset of the
+ * start of the data in the mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) \
+	(phys_addr_t)((m)->buf_physaddr + (m)->data_off + (o))
+
+/**
+ * A macro that returns the physical address that points to the start of the
+ * data in the mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ *
+ * Equivalent to rte_pktmbuf_mtophys_offset(m, 0).
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
+
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
                               ` (3 preceding siblings ...)
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 04/10] mbuf: add new macros to get the physical address of data Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-25  0:32               ` Thomas Monjalon
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
                               ` (5 subsequent siblings)
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

Features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs for allocation of the crypto
   operation structure used to specify the crypto operations to
   be performed on a particular mbuf.
 - Extension of mbuf to contain crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations.

Changes from RFC:
 - Session management API changes to support specification of crypto
   transform(xform) chains using linked list of xforms.
 - Changes to the crypto operation struct as a result of session
   management changes.
 - Some movement of common macros shared by cryptodevs and ethdevs into
   common headers.

Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
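For reference, a minimal sketch of building a cipher-then-hash xform chain
with the structures introduced below and creating a session from it. The
function name, key material and dev_id are placeholders and error handling
is omitted; this is illustrative only, not part of this patch:

	#include <rte_crypto.h>
	#include <rte_cryptodev.h>

	static struct rte_cryptodev_session *
	create_cipher_hash_session(uint8_t dev_id)
	{
		static uint8_t auth_key[64];	/* HMAC-SHA1 key, elided */
		static uint8_t cipher_key[16];	/* AES-128 key, elided */

		struct rte_crypto_xform auth_xform = {
			.next = NULL,
			.type = RTE_CRYPTO_XFORM_AUTH,
			.auth = {
				.op = RTE_CRYPTO_AUTH_OP_GENERATE,
				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
				.key = { .data = auth_key, .length = 64 },
				.digest_length = 20,
			},
		};

		struct rte_crypto_xform cipher_xform = {
			.next = &auth_xform,	/* cipher first, then hash */
			.type = RTE_CRYPTO_XFORM_CIPHER,
			.cipher = {
				.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
				.key = { .data = cipher_key, .length = 16 },
			},
		};

		/* The session caches the immutable parameters; the head of
		 * the xform chain is passed in. */
		return rte_cryptodev_session_create(dev_id, &cipher_xform);
	}
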
 MAINTAINERS                                    |    4 +
 config/common_bsdapp                           |   10 +-
 config/common_linuxapp                         |   10 +-
 doc/api/doxy-api-index.md                      |    1 +
 doc/api/doxy-api.conf                          |    1 +
 lib/Makefile                                   |    1 +
 lib/librte_cryptodev/Makefile                  |   60 ++
 lib/librte_cryptodev/rte_crypto.h              |  613 +++++++++++++
 lib/librte_cryptodev/rte_cryptodev.c           | 1092 ++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h           |  649 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h       |  549 ++++++++++++
 lib/librte_cryptodev/rte_cryptodev_version.map |   32 +
 lib/librte_eal/common/include/rte_log.h        |    1 +
 mk/rte.app.mk                                  |    1 +
 14 files changed, 3022 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index c8be5d2..68c6d74 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -196,6 +196,10 @@ M: Thomas Monjalon <thomas.monjalon@6wind.com>
 F: lib/librte_ether/
 F: scripts/test-null.sh
 
+Crypto API
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_cryptodev
+F: doc/guides/cryptodevs
 
 Drivers
 -------
diff --git a/config/common_bsdapp b/config/common_bsdapp
index fba29e5..8803350 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -147,6 +147,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 7248262..815bea3 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -145,6 +145,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 9727b83..4c5c1b4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -40,6 +40,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..81fa3fc
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_cryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_cryptodev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
\ No newline at end of file
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..7cf0439
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,613 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_CCM* element of the
+	 * *rte_crypto_hash_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_hash_setup_data* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation
+	 */
+	RTE_CRYPTO_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_GCM* element of the
+	 * *rte_crypto_hash_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_hash_setup_data* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	phys_addr_t phys_addr;
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains data relating to Cipher (Encryption and Decryption)
+ * used to create a session.
+ */
+struct rte_crypto_cipher_xform {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid.
+	 */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both
+	 * keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to the total
+	 *    length of the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
+	 **/
+};
+
+/** Symmetric Authentication / Hash Algorithms */
+enum rte_crypto_auth_algorithm {
+	RTE_CRYPTO_AUTH_NULL = 1,
+	/**< NULL hash algorithm. */
+
+	RTE_CRYPTO_AUTH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_AUTH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related rte_crypto_cipher_setup_data structure in the
+	 * session context or the corresponding parameter in the crypto
+	 * operation data structures op_params parameter MUST be set for a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_AUTH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_setup_data structure in the session context, or
+	 * the corresponding parameter in the crypto operation data structures
+	 * op_params parameter MUST be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	* is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	* rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	* rte_crypto_cipher_setup_data structure in the session context, or
+	* the corresponding parameter in the crypto operation data structures
+	* op_params parameter MUST be set for a session-less crypto operation.
+	*/
+	RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_AUTH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_AUTH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_AUTH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_AUTH_SHA1,
+	/**< 128 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA1_HMAC,
+	/**< HMAC using 128 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_AUTH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Authentication / Hash Operations */
+enum rte_crypto_auth_operation {
+	RTE_CRYPTO_AUTH_OP_VERIFY,	/**< Verify authentication digest */
+	RTE_CRYPTO_AUTH_OP_GENERATE	/**< Generate authentication digest */
+};
+
+/**
+ * Authentication / Hash transform data.
+ *
+ * This structure contains data relating to an authentication/hash crypto
+ * transform. The fields op, algo and digest_length are common to all
+ * authentication transforms and MUST be set.
+ */
+struct rte_crypto_auth_xform {
+	enum rte_crypto_auth_operation op;
+	/**< Authentication operation type */
+	enum rte_crypto_auth_algorithm algo;
+	/**< Authentication algorithm selection */
+
+	struct rte_crypto_key key;		/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the caller's responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash then an error will be generated
+	 * by *rte_cryptodev_session_create* or by the
+	 * *rte_cryptodev_enqueue_burst* if using session-less APIs.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_AUTH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op_data structure.
+	 */
+};
+
+/** Crypto transformation types */
+enum rte_crypto_xform_type {
+	RTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
+	RTE_CRYPTO_XFORM_AUTH,			/**< Authentication xform */
+	RTE_CRYPTO_XFORM_CIPHER			/**< Cipher xform  */
+};
+
+/**
+ * Crypto transform structure.
+ *
+ * This is used to specify the crypto transforms required. Multiple transforms
+ * can be chained together to specify a chain of transforms such as
+ * authentication then cipher, or cipher then authentication. Each transform
+ * structure can hold a single transform; the type field is used to specify
+ * which transform is contained within the union.
+ */
+struct rte_crypto_xform {
+	struct rte_crypto_xform *next; /**< next xform in chain */
+
+	enum rte_crypto_xform_type type; /**< xform type */
+	union {
+		struct rte_crypto_auth_xform auth;
+		/**< Authentication / hash xform */
+		struct rte_crypto_cipher_xform cipher;
+		/**< Cipher xform */
+	};
+};
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has a session structure attached for immutable parameters or if all
+ * operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+/** Status of crypto operation */
+enum rte_crypto_op_status {
+	RTE_CRYPTO_OP_STATUS_SUCCESS,
+	/**< Operation completed successfully */
+	RTE_CRYPTO_OP_STATUS_NO_SUBMITTED,
+	/**< Operation not yet submitted to a cryptodev */
+	RTE_CRYPTO_OP_STATUS_ENQUEUED,
+	/**< Operation is enqueued on device */
+	RTE_CRYPTO_OP_STATUS_AUTH_FAILED,
+	/**< Authentication verification failed */
+	RTE_CRYPTO_OP_STATUS_INVALID_ARGS,
+	/**< Operation failed due to invalid arguments in request */
+	RTE_CRYPTO_OP_STATUS_ERROR,
+	/**< Error while handling the operation */
+};
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. This request is used with the rte_crypto_enqueue_burst()
+ * call for performing cipher, hash, or combined hash and cipher operations.
+ */
+struct rte_crypto_op {
+	enum rte_crypto_op_sess_type type;
+	enum rte_crypto_op_status status;
+
+	struct {
+		struct rte_mbuf *m;	/**< Destination mbuf */
+		uint8_t offset;		/**< Data offset */
+	} dst;
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct rte_crypto_xform *xform;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_AUTH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC)
+			  * mode of operation, this field specifies the start
+			  * of the AAD data in the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source
+			  * buffer that the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member is set, it is a pointer to the location
+		 * where the digest result should be inserted (in the case of
+		 * digest generation) or where the purported digest exists (in
+		 * the case of digest verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_hash_setup_data structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up
+		 * for the session in the @ref rte_crypto_hash_params structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_AUTH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by
+		 *   the implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written in starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   space to round this up to the nearest multiple of the
+		 *   block size (16 bytes).
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+		uint32_t length;	/**< Length of additional authenticated data */
+	} additional_auth;
+	/**< Additional authentication parameters */
+
+	struct rte_mempool *pool;
+	/**< mempool used to allocate crypto op */
+
+	void *user_data;
+	/**< opaque pointer for user data */
+};
+
+
+/**
+ * Reset the fields of a crypto operation to their default values.
+ *
+ * @param op
+ *   The crypto operation to be reset.
+ */
+static inline void
+__rte_crypto_op_reset(struct rte_crypto_op *op)
+{
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+	op->dst.m = NULL;
+	op->dst.offset = 0;
+}
+
+/** Attach a session to a crypto operation */
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
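
To make the offset and pointer fields above concrete, here is a minimal,
illustrative sketch of populating a session-based operation for AES-CBC with
HMAC-SHA1. It assumes the op, session and a single-segment mbuf were allocated
elsewhere, that the IV occupies the first 16 bytes of the mbuf data, and that
rte_pktmbuf_mtophys() is available for the physical address; the layout and
sizes are examples only, not mandated by the API.

#include <rte_mbuf.h>
#include "rte_crypto.h"

static void
setup_op_example(struct rte_crypto_op *op, struct rte_cryptodev_session *sess,
		struct rte_mbuf *m, uint32_t data_len)
{
	uint8_t *data = rte_pktmbuf_mtod(m, uint8_t *);

	/* IV carried in the first 16 bytes of the mbuf data in this layout */
	op->iv.data = data;
	op->iv.phys_addr = rte_pktmbuf_mtophys(m);
	op->iv.length = 16;	/* AES-CBC block size */

	/* Cipher and authenticate the payload following the IV; data_len
	 * must be a multiple of the cipher block size in CBC mode */
	op->data.to_cipher.offset = 16;
	op->data.to_cipher.length = data_len;
	op->data.to_hash.offset = 16;
	op->data.to_hash.length = data_len;

	/* Digest (HMAC-SHA1, 20 bytes) written directly after the payload;
	 * valid for a physically contiguous single-segment mbuf */
	op->digest.data = data + 16 + data_len;
	op->digest.phys_addr = rte_pktmbuf_mtophys(m) + 16 + data_len;
	op->digest.length = 20;

	rte_crypto_op_attach_session(op, sess);
}
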
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..edd1320
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,1092 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+#include "rte_cryptodev_pmd.h"
+
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+
+struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= { NULL },
+		.nb_devs		= 0,
+		.max_devs		= RTE_CRYPTO_MAX_DEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;		/**< Callback address */
+	void *cb_arg;				/**< Parameter for callback */
+	enum rte_cryptodev_event_type event;	/**< Interrupt event type */
+	uint32_t active;			/**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+int
+rte_cryptodev_get_dev_id(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if ((strcmp(rte_cryptodev_globals->devs[i].data->name, name)
+				== 0) &&
+				(rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED))
+			return i;
+
+	return -1;
+}
+
+uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
+uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached ==
+					RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	return dev->data->socket_id;
+}
+
+static inline int
+rte_cryptodev_data_alloc(uint8_t dev_id, struct rte_cryptodev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_cryptodev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_cryptodev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_cryptodev_data));
+
+	return 0;
+}
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached ==
+				RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_CRYPTO_MAX_DEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+	uint8_t dev_id;
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_CRYPTO_MAX_DEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (cryptodev->data == NULL) {
+		struct rte_cryptodev_data *cryptodev_data =
+				cryptodev_globals.data[dev_id];
+
+		int retval = rte_cryptodev_data_alloc(dev_id, &cryptodev_data,
+				socket_id);
+
+		if (retval < 0 || cryptodev_data == NULL)
+			return NULL;
+
+		cryptodev->data = cryptodev_data;
+
+		snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN,
+				"%s", name);
+
+		cryptodev->data->dev_id = dev_id;
+		cryptodev->data->socket_id = socket_id;
+		cryptodev->data->dev_started = 0;
+
+		cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+		cryptodev->pmd_type = type;
+
+		cryptodev_globals.nb_devs++;
+	}
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	int ret;
+
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	ret = rte_cryptodev_close(cryptodev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV,
+			rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket(
+						"cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/*
+	 * Register PCI driver for physical device initialisation during
+	 * PCI probing
+	 */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+
+	return 0;
+}
+
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs,
+		int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	void **qp;
+	unsigned i;
+
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	CDEV_LOG_DEBUG("Setup %d queues pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > (dev_info.max_nb_queue_pairs)) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u",
+							nb_qpairs);
+			return -(ENOMEM);
+		}
+	} else { /* re-configure */
+		int ret;
+		uint16_t old_nb_queues = dev->data->nb_queue_pairs;
+
+		qp = dev->data->queue_pairs;
+
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release,
+				-ENOTSUP);
+
+		for (i = nb_qpairs; i < old_nb_queues; i++) {
+			ret = (*dev->dev_ops->queue_pair_release)(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
+
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	return rte_crypto_session_pool_create(dev, config->session_mp.nb_objs,
+			config->session_mp.cache_size, config->socket_id);
+}
+
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int retval;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -1;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		CDEV_LOG_ERR("Device %u must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	/* We can't close the device if there are outstanding sessions in use */
+	if (dev->data->session_pool != NULL) {
+		if (!rte_mempool_full(dev->data->session_pool)) {
+			CDEV_LOG_ERR("dev_id=%u close failed, session mempool "
+					"has sessions still in use, free "
+					"all sessions before calling close",
+					(unsigned)dev_id);
+			return -EBUSY;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	retval = (*dev->dev_ops->dev_close)(dev);
+
+	if (retval < 0)
+		return retval;
+
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+static void
+rte_crypto_session_init(struct rte_mempool *mp,
+		void *opaque_arg,
+		void *_sess,
+		__rte_unused unsigned i)
+{
+	struct rte_cryptodev_session *sess = _sess;
+	struct rte_cryptodev *dev = opaque_arg;
+
+	memset(sess, 0, mp->elt_size);
+
+	sess->dev_id = dev->data->dev_id;
+	sess->type = dev->dev_type;
+	sess->mp = mp;
+
+	if (dev->dev_ops->session_initialize)
+		(*dev->dev_ops->session_initialize)(mp, sess->_private);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id)
+{
+	char mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	unsigned priv_sess_size;
+
+	unsigned n = snprintf(mp_name, sizeof(mp_name), "cdev_%d_sess_mp",
+			dev->data->dev_id);
+	if (n >= sizeof(mp_name)) {
+		CDEV_LOG_ERR("Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);
+	priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
+	if (priv_sess_size == 0) {
+		CDEV_LOG_ERR("%s returned and invalid private session size ",
+						dev->data->name);
+		return -ENOMEM;
+	}
+
+	unsigned elt_size = sizeof(struct rte_cryptodev_session) +
+			priv_sess_size;
+
+	dev->data->session_pool = rte_mempool_lookup(mp_name);
+	if (dev->data->session_pool != NULL) {
+		if ((dev->data->session_pool->elt_size != elt_size) ||
+				(dev->data->session_pool->cache_size <
+				obj_cache_size) ||
+				(dev->data->session_pool->size < nb_objs)) {
+
+			CDEV_LOG_ERR("%s mempool already exists with different"
+					" initialization parameters", mp_name);
+			dev->data->session_pool = NULL;
+			return -ENOMEM;
+		}
+	} else {
+		dev->data->session_pool = rte_mempool_create(
+				mp_name, /* mempool name */
+				nb_objs, /* number of elements*/
+				elt_size, /* element size*/
+				obj_cache_size, /* Cache size*/
+				0, /* private data size */
+				NULL, /* obj initialization constructor */
+				NULL, /* obj initialization constructor arg */
+				rte_crypto_session_init, /* obj constructor */
+				dev, /* obj constructor arg */
+				socket_id, /* socket id */
+				0); /* flags */
+
+		if (dev->data->session_pool == NULL) {
+			CDEV_LOG_ERR("%s mempool allocation failed", mp_name);
+			return -ENOMEM;
+		}
+	}
+
+	CDEV_LOG_DEBUG("%s mempool created!", mp_name);
+	return 0;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id, struct rte_crypto_xform *xform)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_session *sess;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check for session_configure support first so a session is not
+	 * leaked from the mempool on an early error return */
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);
+
+	/* Allocate a session structure from the session pool */
+	if (rte_mempool_get(dev->data->session_pool, (void **)&sess)) {
+		CDEV_LOG_ERR("Couldn't get object from session mempool");
+		return NULL;
+	}
+
+	if (dev->dev_ops->session_configure(dev, xform, sess->_private) ==
+			NULL) {
+		CDEV_LOG_ERR("dev_id %d failed to configure session details",
+				dev_id);
+
+		/* Return session to mempool */
+		rte_mempool_put(sess->mp, (void *)sess);
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id, struct rte_cryptodev_session *sess)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return sess;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check the session belongs to this device type */
+	if (sess->type != dev->dev_type)
+		return sess;
+
+	/* Let device implementation clear session material */
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
+	dev->dev_ops->session_clear(dev, (void *)sess->_private);
+
+	/* Return session to mempool */
+	rte_mempool_put(sess->mp, (void *)sess);
+
+	return NULL;
+}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..e799447
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,649 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "stddef.h"
+
+#include "rte_crypto.h"
+#include "rte_dev.h"
+
+#define CRYPTODEV_NAME_NULL_PMD		("cryptodev_null_pmd")
+/**< Null crypto PMD device name */
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
+	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...)					\
+		RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...)				\
+		RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+				dev, __func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_TRACE(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n",		\
+				dev, __func__, ## args)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+/** Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+
+	unsigned max_nb_queue_pairs;
+	/**< Maximum number of queue pairs supported by device. */
+	unsigned max_nb_sessions;
+	/**< Maximum number of sessions supported by device. */
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User specified parameter to be passed to the user's
+ *			callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure is
+ * used for RDTSC counters for measuring crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;	/**< Accumulated time processing operation */
+	uint64_t t_min;		/**< Min time */
+	uint64_t t_max;		/**< Max time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;
+	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;
+	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;
+	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;
+	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf; /**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Option arguments for the device.
+ *
+ * @return
+ * - On successful creation of the cryptodev the device index is returned,
+ *   which will be between 0 and rte_cryptodev_count() - 1.
+ * - In the case of a failure, returns -1.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
+
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+extern int
+rte_cryptodev_get_dev_id(const char *name);
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+extern uint8_t
+rte_cryptodev_count(void);
+
+/**
+ * Get the total number of crypto devices of a given type that have been
+ * successfully initialised.
+ *
+ * @param	type	Type of crypto device to count.
+ *
+ * @return
+ *   - The total number of usable crypto devices of the specified type.
+ */
+extern uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+
+/**
+ * Return the NUMA socket to which a device is connected
+ *
+ * @param dev_id
+ *   The identifier of the device
+ * @return
+ *   The NUMA socket id to which the device is connected or
+ *   a default of zero if the socket could not be determined.
+ *   -1 if the dev_id value is out of range.
+ */
+extern int
+rte_cryptodev_socket_id(uint8_t dev_id);
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;
+	/**< Number of queue pairs to configure on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< Per-lcore object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure to
+ *				apply to the device.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
+
+/**
+ * Start a device.
+ *
+ * The device start step is the last one and consists of setting the
+ * configured offload features and starting the processing of crypto
+ * operations on the device's queue pairs.
+ * On success, all basic functions exported by the API (enqueue/dequeue
+ * bursts, statistics, and so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start()
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ */
+extern int
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a receive queue pair for a device.
+ *
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pairs to set up. The
+ *				value must be in the range [0, nb_queue_pair
+ *				- 1] previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the DMA memory allocated for the
+ *				queue pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
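
Taken together, rte_cryptodev_configure(), rte_cryptodev_queue_pair_setup()
and rte_cryptodev_start() form the device bring-up sequence. A minimal sketch
follows; the mempool sizing and descriptor count are illustrative values only.

#include "rte_cryptodev.h"

static int
crypto_dev_init(uint8_t dev_id, int socket_id)
{
	struct rte_cryptodev_config conf = {
		.socket_id = socket_id,
		.nb_queue_pairs = 1,
		.session_mp = {
			.nb_objs = 2048,	/* sessions in the pool */
			.cache_size = 64,	/* per-lcore object cache */
		},
	};
	struct rte_cryptodev_qp_conf qp_conf = {
		.nb_descriptors = 4096,
	};

	if (rte_cryptodev_configure(dev_id, &conf) < 0)
		return -1;

	if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
			socket_id) < 0)
		return -1;

	return rte_cryptodev_start(dev_id);
}
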
+
+/**
+ * Start a specified queue pair of a device.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop specified queue pair of a device
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
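
As an illustration, the count, socket and info queries above can be combined
to enumerate every initialised device. This sketch assumes device ids are
contiguous, as they are immediately after initialisation; the printf
formatting is an example only.

#include <stdio.h>
#include "rte_cryptodev.h"

static void
crypto_devs_dump(void)
{
	struct rte_cryptodev_info info;
	uint8_t dev_id, nb_devs = rte_cryptodev_count();

	for (dev_id = 0; dev_id < nb_devs; dev_id++) {
		rte_cryptodev_info_get(dev_id, &info);

		printf("dev %u: driver %s, socket %d, max queue pairs %u, "
				"max sessions %u\n", dev_id, info.driver_name,
				rte_cryptodev_socket_id(dev_id),
				info.max_nb_queue_pairs,
				info.max_nb_sessions);
	}
}
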
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		The event type of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		The event type of interest.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
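
For example, an application might count error events with a callback such as
the following sketch; the counter handling is illustrative and deliberately
unsynchronised.

#include "rte_cryptodev.h"

static unsigned crypto_err_count;

static void
crypto_err_cb(uint8_t dev_id, enum rte_cryptodev_event_type event,
		void *cb_arg)
{
	unsigned *count = cb_arg;

	(void)dev_id;
	if (event == RTE_CRYPTODEV_EVENT_ERROR)
		(*count)++;
}

static int
crypto_register_err_cb(uint8_t dev_id)
{
	return rte_cryptodev_callback_register(dev_id,
			RTE_CRYPTODEV_EVENT_ERROR, crypto_err_cb,
			&crypto_err_count);
}
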
+
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;
+	/**< Pointer to PMD dequeue burst function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/**< Pointer to PMD enqueue burst function. */
+
+	const struct rte_cryptodev_driver *driver;
+	/**< Driver for this device */
+	struct rte_cryptodev_data *data;
+	/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;
+	/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;
+	/**< Crypto device type */
+	enum pmd_type pmd_type;
+	/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+} __rte_cache_aligned;
+
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+/**
+ *
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t socket_id;
+	/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;
+	/**< Session memory pool */
+	void **queue_pairs;
+	/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;
+	/**< Number of device queue pairs. */
+
+	void *dev_private;
+	/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+extern struct rte_cryptodev *rte_cryptodevs;
+/**
+ *
+ * Dequeue a burst of processed packets from a queue of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain in the queue. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_cryptodev_dequeue_burst() function
+ * until a value less than *nb_pkts* is returned.
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place packets
+ * on the queue *qp_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of packets it
+ * actually enqueued. A return value equal to *nb_pkts* means that all packets
+ * have been enqueued.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which packets
+ *				are to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to be processed.
+ * @param	nb_pkts		The number of packets to enqueue.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue is full or has been filled up.
+ * The number of packets is 0 if the device hasn't been started.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return (*dev->enqueue_burst)(
+			dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
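
A typical polling loop pairs the two inline functions above. This sketch
shows the shape only; how unsubmitted and completed packets are handled is
left to the caller, and the burst size is illustrative.

#include <rte_mbuf.h>
#include "rte_cryptodev.h"

#define CRYPTO_BURST_SZ 32	/* illustrative burst size */

static void
crypto_poll_once(uint8_t dev_id, uint16_t qp_id, struct rte_mbuf **pkts,
		uint16_t nb_pkts)
{
	struct rte_mbuf *completed[CRYPTO_BURST_SZ];
	uint16_t nb_enq, nb_deq;

	/* The device may accept fewer packets than offered when its queue
	 * is full; the remainder must be retried or dropped by the caller */
	nb_enq = rte_cryptodev_enqueue_burst(dev_id, qp_id, pkts, nb_pkts);

	/* Retrieve whatever operations have completed so far */
	nb_deq = rte_cryptodev_dequeue_burst(dev_id, qp_id, completed,
			CRYPTO_BURST_SZ);

	/* ... caller handles nb_enq < nb_pkts and the nb_deq packets ... */
	(void)nb_enq;
	(void)nb_deq;
}
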
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialize the immutable
+ * parameters of a symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst function is
+ * used. Each mbuf should contain a reference to the session
+ * pointer returned from this function within its crypto_op if a
+ * session-based operation is being provisioned. Memory to contain the session
+ * information is allocated from within a mempool managed by the cryptodev.
+ *
+ * rte_cryptodev_session_free must be called to free the allocated
+ * memory when the session is no longer required.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	xform		Crypto transform chain.
+ *
+ * @return
+ *  Pointer to the created session or NULL
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_xform *xform);
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ *
+ * @return
+ *   NULL on successful freeing of session.
+ *   Session pointer on failure to free session.
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
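
The session lifetime implied by the two declarations above is create, use on
any number of operations, then free. A minimal sketch, assuming the xform
chain has been populated as described earlier in this patch:

#include "rte_cryptodev.h"

static int
crypto_session_roundtrip(uint8_t dev_id, struct rte_crypto_xform *xform)
{
	struct rte_cryptodev_session *sess;

	sess = rte_cryptodev_session_create(dev_id, xform);
	if (sess == NULL)
		return -1;

	/* ... attach to crypto ops with rte_crypto_op_attach_session()
	 * and enqueue them for processing ... */

	/* free returns NULL on success and the session pointer on failure */
	if (rte_cryptodev_session_free(dev_id, sess) != NULL)
		return -1;

	return 0;
}
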
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..d5fbe44
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,549 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These APIs are for use by crypto PMDs only and user applications should
+ * not call them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
+#endif
+
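+/**
+ * Generic cryptodev session structure. The fixed header records the owning
+ * device, device type and parent mempool; device-specific session data
+ * follows in the *_private* flexible array member.
+ */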
+struct rte_cryptodev_session {
+	struct {
+		uint8_t dev_id;
+		enum rte_cryptodev_type type;
+		struct rte_mempool *mp;
+	} __rte_aligned(8);
+
+	char _private[];
+};
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ *			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
+
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;	/**< Device information array */
+	struct rte_cryptodev_data *data[RTE_CRYPTO_MAX_DEVS];
+	/**< Device private data */
+	uint8_t nb_devs;		/**< Number of devices found */
+	uint8_t max_devs;		/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	struct rte_cryptodev *dev;
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		dev = &rte_cryptodev_globals->devs[i];
+		if ((dev->attached == RTE_CRYPTODEV_ATTACHED) &&
+				(strcmp(dev->data->name, name) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that a device index refers to a valid, attached crypto device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - 1 if the device index is valid and the device attached, 0 otherwise.
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures.
+ */
+extern struct rte_cryptodev *rte_cryptodevs;
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ * Function used to configure a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ * @return
+ * - 0 on success.
+ * - EAGAIN if can't close as device is busy
+ */
+typedef int (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return
+ * - 0 on success.
+ * - EAGAIN if can't close as device is busy
+ */
+typedef int (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		Number of session objects in the mempool
+ * @param	obj_cache_size	Per-lcore object cache size, see *rte_ring_create*
+ * @param	socket_id	Socket Id to allocate the mempool on
+ *
+ * @return
+ * - 0 on success
+ * - Negative value on failure
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+
+/**
+ * Get the size of a cryptodev session
+ *
+ * @param	dev		Crypto device pointer
+ *
+ * @return
+ *  - On success returns the size of the device's private session structure
+ *  - On failure returns 0
+ */
+typedef unsigned (*cryptodev_get_session_private_size_t)(
+		struct rte_cryptodev *dev);
+
+/**
+ * Initialize the private data of a Crypto session object allocated from
+ * the given mempool.
+ *
+ * @param	mempool		Mempool the session object was allocated from
+ * @param	session_private	Pointer to the session's private data
+ */
+typedef void (*cryptodev_initialize_session_t)(struct rte_mempool *mempool,
+		void *session_private);
+
+/**
+ * Configure a Crypto session on a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	xform		Single or chain of crypto xforms
+ * @param	priv_sess	Pointer to cryptodev's private session structure
+ *
+ * @return
+ *  - Returns private session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef void * (*cryptodev_configure_session_t)(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+/**
+ * Clear a Crypto session's private data.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	session_private	Session private data to clear
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		void *session_private);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;
+	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;
+	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;
+	/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;
+	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;
+	/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;
+	/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;
+	/**< Get count of the queue pairs. */
+
+	cryptodev_get_session_private_size_t session_get_size;
+	/**< Return size of private session data. */
+	cryptodev_initialize_session_t session_initialize;
+	/**< Initialization function for private session data */
+	cryptodev_configure_session_t session_configure;
+	/**< Configure a Crypto session. */
+	cryptodev_free_session_t session_clear;
+	/**< Clear a Crypto sessions private data. */
+};
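+
+/*
+ * For illustration only: a hypothetical PMD would typically define a static
+ * ops table wired to its own callbacks (my_pmd_* are illustrative names)
+ * and assign it to the *dev_ops* pointer of the *rte_cryptodev* structure
+ * from its cryptodev_init_t function, e.g.
+ *
+ *	static struct rte_cryptodev_ops my_pmd_ops = {
+ *		.dev_configure	= my_pmd_configure,
+ *		.dev_start	= my_pmd_start,
+ *		.dev_stop	= my_pmd_stop,
+ *		.dev_close	= my_pmd_close,
+ *		.dev_infos_get	= my_pmd_info_get,
+ *	};
+ *
+ *	dev->dev_ops = &my_pmd_ops;
+ */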
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Pointer to the slot in the crypto devices array allocated for the
+ *     new device.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a - register itself as PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b - complete the initialization sequence if the device is a virtual
+ *		device by calling the rte_cryptodev_init() directly passing a
+ *		NULL parameter for the rte_pci_device structure.
+ *
+ *   @param crypto_drv	crypto_driver structure associated with the crypto
+ *					driver.
+ *   @param type		pmd type
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
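+
+/*
+ * Sketch of a hypothetical physical device PMD registering itself from its
+ * driver init entry point (my_crypto_drv, my_dev_init and my_private are
+ * illustrative names only):
+ *
+ *	static struct rte_cryptodev_driver my_crypto_drv = {
+ *		.pci_drv = { ... PCI id table etc ... },
+ *		.dev_private_size = sizeof(struct my_private),
+ *		.cryptodev_init = my_dev_init,
+ *	};
+ *
+ *	rte_cryptodev_pmd_driver_register(&my_crypto_drv, PMD_PDEV);
+ */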
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
new file mode 100644
index 0000000..ff8e93d
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -0,0 +1,32 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodevs;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_close;
+	rte_cryptodev_count;
+	rte_cryptodev_count_devtype;
+	rte_cryptodev_configure;
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_info_get;
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_callback_process;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_virtual_dev_init;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+	rte_cryptodev_socket_id;
+	rte_cryptodev_start;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_stop;
+	rte_cryptodev_queue_pair_count;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+
+	local: *;
+};
\ No newline at end of file
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 724efa7..5d382bb 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -118,6 +118,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
                               ` (4 preceding siblings ...)
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-20 15:27               ` Olivier MATZ
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
                               ` (4 subsequent siblings)
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

This library adds support for attaching a chain of offload operations to
an mbuf. It contains the definition of the rte_mbuf_offload structure as
well as helper functions for attaching offloads to mbufs and mempool
management functions.

This initial implementation supports attaching multiple offload
operations to a single mbuf, but only a single offload operation of a
specific type can be attached to that mbuf.
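
For illustration, a typical usage pattern under this design would be as
follows (a sketch only; assumes a pre-created offload mempool "ol_pool"
and a packet mbuf "m"):

	struct rte_mbuf_offload *ol;

	ol = rte_pktmbuf_offload_alloc(ol_pool, RTE_PKTMBUF_OL_CRYPTO);
	if (ol != NULL && rte_pktmbuf_offload_attach(m, ol) == NULL) {
		/* an offload of this type was already chained to m */
		rte_pktmbuf_offload_free(ol);
	}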

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 MAINTAINERS                                        |   4 +
 config/common_bsdapp                               |   6 +
 config/common_linuxapp                             |   6 +
 doc/api/doxy-api-index.md                          |   1 +
 doc/api/doxy-api.conf                              |   1 +
 lib/Makefile                                       |   1 +
 lib/librte_mbuf/rte_mbuf.h                         |   6 +
 lib/librte_mbuf_offload/Makefile                   |  52 ++++
 lib/librte_mbuf_offload/rte_mbuf_offload.c         | 100 +++++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         | 302 +++++++++++++++++++++
 .../rte_mbuf_offload_version.map                   |   7 +
 mk/rte.app.mk                                      |   1 +
 12 files changed, 487 insertions(+)
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 68c6d74..73d9578 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -191,6 +191,10 @@ F: lib/librte_mbuf/
 F: doc/guides/prog_guide/mbuf_lib.rst
 F: app/test/test_mbuf.c
 
+Packet buffer offload
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_mbuf_offload/
+
 Ethernet API
 M: Thomas Monjalon <thomas.monjalon@6wind.com>
 F: lib/librte_ether/
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 8803350..ba2533a 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -332,6 +332,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 815bea3..4c52f78 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -340,6 +340,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index bdb6130..199cc2c 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -104,6 +104,7 @@ There are many libraries, so their headers may be grouped by topics:
 
 - **containers**:
   [mbuf]               (@ref rte_mbuf.h),
+  [mbuf_offload]       (@ref rte_mbuf_offload.h),
   [ring]               (@ref rte_ring.h),
   [distributor]        (@ref rte_distributor.h),
   [reorder]            (@ref rte_reorder.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index 7244b8f..15bba16 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -48,6 +48,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_kvargs \
                           lib/librte_lpm \
                           lib/librte_mbuf \
+                          lib/librte_mbuf_offload \
                           lib/librte_mempool \
                           lib/librte_meter \
                           lib/librte_net \
diff --git a/lib/Makefile b/lib/Makefile
index 4c5c1b4..ef172ea 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
+DIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += librte_mbuf_offload
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ef1ee26..0b6741a 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -728,6 +728,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque rte_mbuf_offload  structure declarations */
+struct rte_mbuf_offload;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -841,6 +844,9 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+
+	/* Chain of off-load operations to perform on mbuf */
+	struct rte_mbuf_offload *offload_ops;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
diff --git a/lib/librte_mbuf_offload/Makefile b/lib/librte_mbuf_offload/Makefile
new file mode 100644
index 0000000..acdb449
--- /dev/null
+++ b/lib/librte_mbuf_offload/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mbuf_offload.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+EXPORT_MAP := rte_mbuf_offload_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) := rte_mbuf_offload.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)-include := rte_mbuf_offload.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.c b/lib/librte_mbuf_offload/rte_mbuf_offload.c
new file mode 100644
index 0000000..5c0c9dd
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.c
@@ -0,0 +1,100 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+
+#include "rte_mbuf_offload.h"
+
+/** Initialize rte_mbuf_offload structure */
+static void
+rte_pktmbuf_offload_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_mbuf_offload *ol = _op_data;
+
+	memset(_op_data, 0, mp->elt_size);
+
+	ol->type = RTE_PKTMBUF_OL_NOT_SPECIFIED;
+	ol->mp = mp;
+}
+
+
+struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id)
+{
+	struct rte_pktmbuf_offload_pool_private *priv;
+	unsigned elt_size = sizeof(struct rte_mbuf_offload) + priv_size;
+
+
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+
+	if (mp != NULL) {
+		priv = (struct rte_pktmbuf_offload_pool_private *)
+				rte_mempool_get_priv(mp);
+
+		if (priv->offload_priv_size < priv_size ||
+				mp->elt_size != elt_size ||
+				mp->cache_size < cache_size ||
+				mp->size < size) {
+			/* existing pool is incompatible with the request */
+			return NULL;
+		}
+		return mp;
+	}
+
+	mp = rte_mempool_create(
+			name,
+			size,
+			elt_size,
+			cache_size,
+			sizeof(struct rte_pktmbuf_offload_pool_private),
+			NULL,
+			NULL,
+			rte_pktmbuf_offload_init,
+			NULL,
+			socket_id,
+			0);
+
+	if (mp == NULL)
+		return NULL;
+
+	priv = (struct rte_pktmbuf_offload_pool_private *)
+			rte_mempool_get_priv(mp);
+
+	priv->offload_priv_size = priv_size;
+	return mp;
+}
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
new file mode 100644
index 0000000..1d9bb2b
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -0,0 +1,302 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MBUF_OFFLOAD_H_
+#define _RTE_MBUF_OFFLOAD_H_
+
+/**
+ * @file
+ * RTE mbuf offload
+ *
+ * The rte_mbuf_offload library provides the ability to specify a device
+ * generic off-load operation independent of the current Rx/Tx Ethernet
+ * offloads supported within the rte_mbuf structure, and adds support for
+ * multiple off-load operations and offload device types.
+ *
+ * The rte_mbuf_offload specifies the particular off-load operation type,
+ * such as a crypto operation, and provides a container for the operation's
+ * parameters inside the op union. These parameters are then used by the
+ * device which supports that operation to perform the specified offload.
+ *
+ * This library provides an API to create pre-allocated mempool of offload
+ * operations, with supporting allocate and free functions. It also provides
+ * APIs for attaching an offload to a mbuf, as well as an API to retrieve a
+ * specified offload type from an mbuf offload chain.
+ */
+
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** packet mbuf offload operation types */
+enum rte_mbuf_ol_op_type {
+	RTE_PKTMBUF_OL_NOT_SPECIFIED = 0,
+	/**< Off-load not specified */
+	RTE_PKTMBUF_OL_CRYPTO
+	/**< Crypto offload operation */
+};
+
+/**
+ * Generic packet mbuf offload
+ * This is used to specify an offload operation to be performed on an
+ * rte_mbuf. Multiple offload operations can be chained to the same mbuf,
+ * but only a single offload operation of a particular type can be in the
+ * chain.
+ */
+struct rte_mbuf_offload {
+	struct rte_mbuf_offload *next;	/**< next offload in chain */
+	struct rte_mbuf *m;		/**< mbuf offload is attached to */
+	struct rte_mempool *mp;		/**< mempool offload allocated from */
+
+	enum rte_mbuf_ol_op_type type;	/**< offload type */
+	union {
+		struct rte_crypto_op crypto;	/**< Crypto operation */
+	} op;
+};
+
+/** Private data structure belonging to a packet mbuf offload mempool */
+struct rte_pktmbuf_offload_pool_private {
+	uint16_t offload_priv_size;
+	/**< Size of private area in each mbuf_offload. */
+};
+
+
+/**
+ * Creates a mempool of rte_mbuf_offload objects
+ *
+ * @param	name		mempool name
+ * @param	size		number of objects in mempool
+ * @param	cache_size	cache size of objects for each core
+ * @param	priv_size	size of private data to be allocated with each
+ *				rte_mbuf_offload object
+ * @param	socket_id	Socket on which to allocate mempool objects
+ *
+ * @return
+ * - On success returns a valid mempool of rte_mbuf_offload objects
+ * - On failure return NULL
+ */
+extern struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id);
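+
+/*
+ * For example (a sketch only; pool name and sizes are illustrative):
+ *
+ *	struct rte_mempool *ol_pool = rte_pktmbuf_offload_pool_create(
+ *		"crypto_ol_pool", 8192, 128,
+ *		sizeof(struct rte_crypto_xform) * 2, rte_socket_id());
+ */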
+
+
+/**
+ * Returns private data size allocated with each rte_mbuf_offload object by
+ * the mempool
+ *
+ * @param	mpool	rte_mbuf_offload mempool
+ *
+ * @return	private data size
+ */
+static inline uint16_t
+__rte_pktmbuf_offload_priv_size(struct rte_mempool *mpool)
+{
+	struct rte_pktmbuf_offload_pool_private *priv =
+			rte_mempool_get_priv(mpool);
+
+	return priv->offload_priv_size;
+}
+
+/**
+ * Get specified off-load operation type from mbuf.
+ *
+ * @param	m		packet mbuf.
+ * @param	type		offload operation type requested.
+ *
+ * @return
+ * - On success returns the rte_mbuf_offload pointer
+ * - On failure returns NULL
+ *
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol;
+
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
+		if (ol->type == type)
+			return ol;
+
+	return ol;
+}
+
+/**
+ * Attach a rte_mbuf_offload to a mbuf. We only support a single offload of any
+ * one type in our chain of offloads.
+ *
+ * @param	m	packet mbuf.
+ * @param	ol	rte_mbuf_offload structure to be attached
+ *
+ * @returns
+ * - On success returns the pointer to the offload we just added
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
+{
+	struct rte_mbuf_offload **ol_last;
+
+	for (ol_last = &m->offload_ops;	ol_last[0] != NULL;
+			ol_last = &ol_last[0]->next)
+		if (ol_last[0]->type == ol->type)
+			return NULL;
+
+	ol_last[0] = ol;
+	ol_last[0]->m = m;
+	ol_last[0]->next = NULL;
+
+	return ol_last[0];
+}
+
+
+/** Rearms rte_mbuf_offload default parameters */
+static inline void
+__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
+		enum rte_mbuf_ol_op_type type)
+{
+	ol->m = NULL;
+	ol->type = type;
+
+	switch (type) {
+	case RTE_PKTMBUF_OL_CRYPTO:
+		__rte_crypto_op_reset(&ol->op.crypto);
+		break;
+	default:
+		break;
+	}
+}
+
+/** Allocate rte_mbuf_offload from mempool */
+static inline struct rte_mbuf_offload *
+__rte_pktmbuf_offload_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_mbuf_offload *)buf;
+}
+
+/**
+ * Allocate a rte_mbuf_offload with a specified operation type from
+ * rte_mbuf_offload mempool
+ *
+ * @param	mpool		rte_mbuf_offload mempool
+ * @param	type		offload operation type
+ *
+ * @returns
+ * - On success returns a valid rte_mbuf_offload structure
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_alloc(struct rte_mempool *mpool,
+		enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol = __rte_pktmbuf_offload_raw_alloc(mpool);
+
+	if (ol != NULL)
+		__rte_pktmbuf_offload_reset(ol, type);
+
+	return ol;
+}
+
+/**
+ * Free an rte_mbuf_offload structure back to its mempool.
+ */
+static inline void
+rte_pktmbuf_offload_free(struct rte_mbuf_offload *ol)
+{
+	if (ol->mp != NULL)
+		rte_mempool_put(ol->mp, ol);
+}
+
+/**
+ * Checks if the private data of a rte_mbuf_offload has enough capacity for
+ * requested size
+ *
+ * @returns
+ * - if sufficient space available returns pointer to start of private data
+ * - if insufficient space returns NULL
+ */
+static inline void *
+__rte_pktmbuf_offload_check_priv_data_size(struct rte_mbuf_offload *ol,
+		uint16_t size)
+{
+	uint16_t priv_size;
+
+	if (likely(ol->mp != NULL)) {
+		priv_size = __rte_pktmbuf_offload_priv_size(ol->mp);
+
+		if (likely(priv_size >= size))
+			return (void *)(ol + 1);
+	}
+	return NULL;
+}
+
+/**
+ * Allocate space for crypto xforms in the private data space of the
+ * rte_mbuf_offload. This also defaults the crypto xform type and configures
+ * the chaining of the xform in the crypto operation
+ *
+ * @return
+ * - On success returns pointer to first crypto xform in crypto operations chain
+ * - On failure returns NULL
+ */
+static inline struct rte_crypto_xform *
+rte_pktmbuf_offload_alloc_crypto_xforms(struct rte_mbuf_offload *ol,
+		unsigned nb_xforms)
+{
+	struct rte_crypto_xform *xform;
+	void *priv_data;
+	uint16_t size;
+
+	size = sizeof(struct rte_crypto_xform) * nb_xforms;
+	priv_data = __rte_pktmbuf_offload_check_priv_data_size(ol, size);
+
+	if (priv_data == NULL)
+		return NULL;
+
+	ol->op.crypto.xform = xform = (struct rte_crypto_xform *)priv_data;
+
+	do {
+		xform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;
+		xform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;
+	} while (xform);
+
+	return ol->op.crypto.xform;
+}
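+
+/*
+ * Illustrative use (a sketch): reserve space for a cipher + auth chain,
+ * then fill in each xform, e.g.
+ *
+ *	struct rte_crypto_xform *xform =
+ *		rte_pktmbuf_offload_alloc_crypto_xforms(ol, 2);
+ *	if (xform != NULL) {
+ *		xform->type = RTE_CRYPTO_XFORM_CIPHER;
+ *		xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+ *	}
+ */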
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MBUF_OFFLOAD_H_ */
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload_version.map b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
new file mode 100644
index 0000000..3d3b06a
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
@@ -0,0 +1,7 @@
+DPDK_2.2 {
+	global:
+
+	rte_pktmbuf_offload_pool_create;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5d382bb..2b8ddce 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,7 @@ ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),n)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)   += -lrte_mbuf_offload
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
                               ` (5 preceding siblings ...)
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-25  1:00               ` Thomas Monjalon
                                 ` (2 more replies)
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
                               ` (3 subsequent siblings)
  10 siblings, 3 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.

This patch depends on a QAT PF driver for device initialization. See
the file docs/guides/cryptodevs/qat.rst for configuration details

This patch supports a limited subset of QAT device functionality,
currently supporting chaining of cipher and hash operations for the
following algorithms:

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC
  - RTE_CRYPTO_CIPHER_AES512_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC

Some limitations apply to this patchset; these shall be addressed in a
subsequent release:
 - Chained mbufs are not supported.
 - Hash only is not supported.
 - Cipher only is not supported.
 - Only in-place is currently supported (destination address is
   the same as source address).
 - Only supports session-oriented API implementation (session-less
   APIs are not supported).

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 config/common_bsdapp                               |  14 +
 config/common_linuxapp                             |  14 +
 doc/guides/cryptodevs/index.rst                    |  42 ++
 doc/guides/cryptodevs/qat.rst                      | 194 +++++++
 doc/guides/index.rst                               |   1 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  37 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 174 ++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 +++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 ++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 306 +++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 125 +++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 601 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 561 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    | 124 +++++
 drivers/crypto/qat/qat_logs.h                      |  78 +++
 drivers/crypto/qat/qat_qp.c                        | 429 +++++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 137 +++++
 mk/rte.app.mk                                      |   3 +
 21 files changed, 3627 insertions(+)
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index ba2533a..0068b20 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -155,6 +155,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 4c52f78..b29d3dd 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -153,6 +153,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..1c31697
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,42 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+====================================
+
+|today|
+
+
+**Contents**
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..9e24c07
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,194 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator. The QAT PMD has
+currently been tested on Fedora 21 64-bit with gcc and on the 4.3
+kernel.org Linux kernel.
+
+
+Features
+--------
+The QAT PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+To use the DPDK QAT PMD an SRIOV-enabled QAT kernel driver is required.
+The VF devices exposed by this driver will be used by QAT PMD.
+
+If you are running on kernel 4.3 or greater, see instructions for "Installation using
+kernel.org QAT driver".  If you're on a kernel earlier than 4.3, see "Installation using the
+01.org QAT driver".
+
+Installation using 01.org QAT driver
+------------------------------------
+Download the latest QuickAssist Technology Driver from 01.org
+https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches
+Consult the Getting Started Guide at the same URL for further information.
+
+The steps below assume:
+
+  * building on a platform with one DH895xCC device
+  * using package qatmux.l.2.3.0-34.tgz
+  * on Fedora21 kernel 3.17.4-301.fc21.x86_64
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, e.g. by running either:
+
+  *  "./installer.sh uninstall" in the directory where originally installed
+     or
+  *  "rmmod qat_dh895xcc; rmmod intel_qat"
+
+Build and install the SRIOV-enabled QAT driver
+
+.. code-block:: console
+
+    "mkdir /QAT; cd /QAT"
+    copy qatmux.l.2.3.0-34.tgz to this location
+    "tar zxof qatmux.l.2.3.0-34.tgz"
+    "export ICP_WITHOUT_IOMMU=1"
+    "./installer.sh install QAT1.6 host"
+
+You can use "cat /proc/icp_dh895xcc_dev0/version" to confirm the driver is correctly installed.
+You can use "lspci -d:443" to confirm the bdf of the 32 VF devices are available per DH895xCC device.
+
+To complete the installation, follow the instructions in "Binding the available VFs to the DPDK UIO driver".
+
+Compiling the 01.org driver - notes:
+If using a later kernel and the build fails with an error relating to strict_strtoul not being available, patch the following file:
+
+.. code-block:: console
+
+  /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+  + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+  + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  + #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+  #else
+  #define STR_TO_64(str, base, num, endPtr)                                 \
+       do {                                                               \
+             if (str[0] == '-')                                           \
+             {                                                            \
+                  *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+             }else {                                                      \
+                  *(num) = simple_strtoull((str), &(endPtr), (base));      \
+             }                                                            \
+       } while(0)
+  + #endif
+  #endif
+  #endif
+
+
+If the build fails due to missing header files you may need to do the following:
+
+  *  sudo yum install zlib-devel
+  *  sudo yum install openssl-devel
+
+If the build or install fails due to mismatching kernel sources you may need to do the following:
+
+  *  sudo yum install kernel-headers-`uname -r`
+  *  sudo yum install kernel-src-`uname -r`
+  *  sudo yum install kernel-devel-`uname -r`
+
+Installation using kernel.org driver
+------------------------------------
+
+Assuming you are running on at least a 4.3 kernel, you can use the stock kernel.org QAT
+driver to start the QAT hardware.
+
+The steps below assume:
+
+  * running DPDK on a platform with one DH895xCC device
+  * on a kernel at least version 4.3
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Ensure the QAT driver is loaded on your system, by executing:
+
+.. code-block:: console
+
+    lsmod | grep qat
+
+You should see the following output:
+
+.. code-block:: console
+
+    qat_dh895xcc            5626  0
+    intel_qat              82336  1 qat_dh895xcc
+
+Next, you need to expose the VFs using the sysfs file system.
+
+First find the bdf of the DH895xCC device:
+
+.. code-block:: console
+
+    lspci -d:435
+
+You should see output similar to:
+
+.. code-block:: console
+
+    03:00.0 Co-processor: Intel Corporation Coleto Creek PCIe Endpoint
+
+Using the sysfs, enable the VFs:
+
+.. code-block:: console
+
+    echo 32 > /sys/bus/pci/drivers/dh895xcc/0000\:03\:00.0/sriov_numvfs
+
+If you get an error, it's likely you're using a QAT kernel driver earlier than kernel 4.3.
+
+To verify that the VFs are available for use, use "lspci -d:443" to confirm
+that the bdfs of the 32 VF devices are present per DH895xCC device.
+
+To complete the installation, follow the instructions in "Binding the available VFs to the DPDK UIO driver".
+
+
+Binding the available VFs to the DPDK UIO driver
+------------------------------------------------
+The unbind command below assumes bdfs of 03:01.00-03:04.07; if yours differ, adjust the command accordingly.
+
+Make available to DPDK
+
+.. code-block:: console
+
+   cd $(RTE_SDK)   # see http://dpdk.org/doc/quick-start to install DPDK
+   modprobe uio
+   insmod ./build/kmod/igb_uio.ko
+   for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n 0000:03:0${device}.${fn} > /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind;done ;done
+   echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id
+
+You can use "lspci -vvd:443" to confirm that all devices are now in use by igb_uio kernel driver
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..f6aecea
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,37 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
\ No newline at end of file
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..47f1c91
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,174 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) \
+			= (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
+#endif
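
[Editor's note] The CSR helpers above address each ring register as (ADF_RING_BUNDLE_SIZE * bank) plus a per-ring offset. A minimal sketch of how a queue-pair implementation might drive them, assuming csr_addr points at the mapped PCI BAR of the ring bank and assuming E_STAT exposes one empty-status bit per ring (the helper names are hypothetical):

    #include <stdint.h>

    /* Sketch only: publish a new tail pointer so the device sees the
     * freshly written request messages. */
    static inline void
    example_tx_kick(void *csr_addr, uint32_t bank, uint32_t ring,
                    uint32_t tail)
    {
            WRITE_CSR_RING_TAIL(csr_addr, bank, ring, tail);
    }

    /* Sketch only: check whether a response ring has anything to poll,
     * assuming a set E_STAT bit means "ring empty". */
    static inline int
    example_rx_ring_empty(void *csr_addr, uint32_t bank, uint32_t ring)
    {
            return (READ_CSR_E_STAT(csr_addr, bank) >> ring) & 0x1;
    }
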
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..498ee83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
+
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
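
[Editor's note] The common-header macros above pack the valid flag and the pointer/content-descriptor field types into the hdr_flags and comn_req_flags fields. A minimal sketch of filling a request header for a lookaside (LA) request that uses a flat (non-SGL) buffer and a 64-bit content-descriptor address; the helper name is hypothetical:

    /* Sketch only: mark the request valid, select the LA service and
     * describe the buffer/content-descriptor layout. */
    static void
    example_fill_comn_hdr(struct icp_qat_fw_comn_req_hdr *hdr)
    {
            hdr->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
            hdr->hdr_flags =
                ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
            hdr->comn_req_flags =
                ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
                                            QAT_COMN_PTR_TYPE_FLAT);
    }
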
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..fbf2b83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#endif
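
[Editor's note] The LA serv_specif_flags bitfield can be assembled in one shot with ICP_QAT_FW_LA_FLAGS_BUILD or incrementally with the _SET macros above. A minimal sketch of the incremental style for a plain cipher+hash request that returns (rather than compares) the auth result; every value chosen here is illustrative, not prescribed by the patch:

    /* Sketch only: LA flags for a non-protocol cipher+hash operation
     * with a 64-bit IV pointer and no partial-packet handling. */
    static uint16_t
    example_la_flags(void)
    {
            uint16_t flags = 0;

            ICP_QAT_FW_LA_PROTO_SET(flags, ICP_QAT_FW_LA_NO_PROTO);
            ICP_QAT_FW_LA_RET_AUTH_SET(flags, ICP_QAT_FW_LA_RET_AUTH_RES);
            ICP_QAT_FW_LA_CMP_AUTH_SET(flags, ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
            ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags,
                                               ICP_QAT_FW_CIPH_IV_64BIT_PTR);
            ICP_QAT_FW_LA_PARTIAL_SET(flags, ICP_QAT_FW_LA_PARTIAL_NONE);

            return flags;
    }
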
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..4d4d8e4
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,306 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	(((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	(((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	 (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	(algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & (~(n - 1)))
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	(((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
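
[Editor's note] The cipher config word packs mode, algorithm, key-conversion and direction into the low bits of a single 32-bit value. A minimal sketch, mirroring what a session-setup path would place at the head of the content descriptor, for AES-128-CBC encryption (the helper name is hypothetical):

    /* Sketch only: config word for AES-128 in CBC mode, encrypt
     * direction, no key conversion. */
    static struct icp_qat_hw_cipher_config
    example_aes128_cbc_enc(void)
    {
            struct icp_qat_hw_cipher_config cfg = {
                    .val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(
                                    ICP_QAT_HW_CIPHER_CBC_MODE,
                                    ICP_QAT_HW_CIPHER_ALGO_AES128,
                                    ICP_QAT_HW_CIPHER_NO_CONVERT,
                                    ICP_QAT_HW_CIPHER_ENCRYPT),
                    .reserved = 0,
            };

            return cfg;
    }
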
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..76c08c0
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,125 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf bufers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t add_auth_data_length,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
+
+#endif
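
[Editor's note] qat_alg_validate_aes_key() maps a raw key length onto the hardware algorithm enum before a session's content descriptor is built. Its definition lives elsewhere in the patch and may differ; a plausible sketch of the mapping using the key-size constants from icp_qat_hw.h:

    #include <errno.h>

    /* Sketch only: the real qat_alg_validate_aes_key() may differ. */
    int
    example_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
    {
            switch (key_len) {
            case ICP_QAT_HW_AES_128_KEY_SZ:
                    *alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
                    break;
            case ICP_QAT_HW_AES_192_KEY_SZ:
                    *alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
                    break;
            case ICP_QAT_HW_AES_256_KEY_SZ:
                    *alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
                    break;
            default:
                    return -EFAULT;
            }
            return 0;
    }
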
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..ceaffb7
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,601 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *	* Redistributions of source code must retain the above copyright
+ *	  notice, this list of conditions and the following disclaimer.
+ *	* Redistributions in binary form must reproduce the above copyright
+ *	  notice, this list of conditions and the following disclaimer in
+ *	  the documentation and/or other materials provided with the
+ *	  distribution.
+ *	* Neither the name of Intel Corporation nor the names of its
+ *	  contributors may be used to endorse or promote products derived
+ *	  from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+#include <openssl/aes.h>	/* Needed to calculate pre-compute values */
+
+
+/*
+ * Returns the size in bytes per hash algo for the state1 size field in cd_ctrl.
+ * This is the digest size rounded up to the nearest quadword.
+ */
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_GALOIS_128_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns digest size in bytes per hash algo */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns block size in bytes per hash algo */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+		return 16;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
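+/*
+ * Partial-hash helpers: SHAx_Transform() runs a single compression round
+ * over exactly one input block with no length padding (unlike SHAx_Final()),
+ * and the state words sit at the start of the OpenSSL context structs, so
+ * copying DIGEST_LENGTH bytes out captures the resumable intermediate state
+ * needed by the hardware.
+ */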
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
+
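+/*
+ * Copy the computed partial-hash state into the descriptor buffer,
+ * byte-swapping each state word; on the little-endian x86 targets QAT
+ * runs on this yields the big-endian layout the firmware expects.
+ */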
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
+	uint8_t digest[qat_hash_get_digest_size(
+			ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+#define HASH_XCBC_PRECOMP_KEY_NUM 3
+
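+/*
+ * Precompute the hardware auth state from the raw key. For HMAC this is
+ * the standard optimisation of hashing (key XOR ipad) and (key XOR opad)
+ * once at session setup so that per-packet processing only covers the
+ * message itself. For AES-XCBC-MAC the K1..K3 subkeys are derived by
+ * encrypting the fixed seed blocks, and for GCM the GHASH key
+ * H = E_K(0^128) is produced by encrypting an all-zero block.
+ */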
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		static uint8_t qat_aes_xcbc_key_seed[
+					ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ] = {
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+		};
+
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		int x;
+		AES_KEY enc_key;
+
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ, 16);
+		if (in == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to alloc memory for XCBC key");
+			return -ENOMEM;
+		}
+		rte_memcpy(in, qat_aes_xcbc_key_seed,
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+		for (x = 0; x < HASH_XCBC_PRECOMP_KEY_NUM; x++) {
+			if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+				&enc_key) != 0) {
+				rte_free(in -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ));
+				memset(out -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ),
+					0, ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+				return -EFAULT;
+			}
+			AES_encrypt(in, out, &enc_key);
+			in += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+			out += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+		}
+		*p_state_len = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		rte_free(in - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+		return 0;
+	} else if ((hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		AES_KEY enc_key;
+
+		memset(p_state_buf, 0, ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ);
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_GALOIS_H_SZ, 16);
+		if (in == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to alloc memory for GCM key");
+			return -ENOMEM;
+		}
+		/* rte_zmalloc() already returns zeroed memory */
+		if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+			&enc_key) != 0) {
+			rte_free(in);
+			return -EFAULT;
+		}
+		AES_encrypt(in, out, &enc_key);
+		*p_state_len = ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		rte_free(in);
+		return 0;
+	}
+
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	}
+	rte_memcpy(ipad, auth_key, auth_keylen);
+	rte_memcpy(opad, auth_key, auth_keylen);
+
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/*
+	 * State len is a multiple of 8, so may be larger than the digest.
+	 * Put the partial hash of opad state_len bytes after state1
+	 */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/*  don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+			uint8_t *cipherkey, uint32_t cipherkeylen,
+			uint8_t *authkey, uint32_t authkeylen,
+			uint32_t add_auth_data_length,
+			uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t proto = ICP_QAT_FW_LA_NO_PROTO; /* no CCM/GCM/Snow3G */
+	uint16_t state1_size = 0;
+	uint16_t state2_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
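+	/*
+	 * The content descriptor is laid out as the cipher config + key
+	 * followed by the hash setup + precomputed states. The firmware
+	 * request template built here is copied and patched per operation
+	 * at enqueue time. All sizes and offsets written to the cd_ctrl
+	 * headers are in 8-byte quadwords, hence the ">> 3" conversions.
+	 */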
+	/* CD setup */
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(
+			cdesc->qat_mode, cdesc->qat_cipher_alg, key_convert,
+			cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	/* Do precomputes */
+	if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(XCBC)precompute failed");
+			return -EFAULT;
+		}
+	} else if ((cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			cipherkey, cipherkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_GALOIS_128_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(GCM)precompute failed");
+			return -EFAULT;
+		}
+		/*
+		 * Write (the length of AAD) into bytes 16-19 of state2
+		 * in big-endian format. This field is 8 bytes
+		 */
+		*(uint32_t *)&(hash->sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ]) =
+			rte_bswap32(add_auth_data_length);
+		proto = ICP_QAT_FW_LA_GCM_PROTO;
+	} else {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1),
+			&state1_size)) {
+			PMD_DRV_LOG(ERR, "(SHA)precompute failed");
+			return -EFAULT;
+		}
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	/* Configure the common header protocol flags */
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, proto);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+	hash_cd_ctrl->inner_state1_sz = state1_size;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		hash_cd_ctrl->inner_state2_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		hash_cd_ctrl->inner_state1_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_GALOIS_H_SZ +
+						ICP_QAT_HW_GALOIS_LEN_A_SZ +
+						ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_GALOIS_128_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_GALOIS_128_STATE1_SZ);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8))
+					>> 3);
+	auth_param->auth_res_sz = digestsize;
+
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+		PMD_DRV_LOG(ERR, "invalid param, only authenticated "
+				"encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..47b257f
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,561 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+#include <rte_mbuf_offload.h>
+#include <rte_hexdump.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t shift);
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+
+void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
+		void *session)
+{
+	struct qat_session *sess = session;
+	phys_addr_t cd_paddr;
+
+	PMD_INIT_FUNC_TRACE();
+	if (sess != NULL) {
+		/*
+		 * Read cd_paddr only after the NULL check and preserve it
+		 * across the wipe of the session memory.
+		 */
+		cd_paddr = sess->cd_paddr;
+		memset(sess, 0, qat_crypto_sym_get_session_private_size(dev));
+		sess->cd_paddr = cd_paddr;
+	}
+}
+
+static int
+qat_get_cmd_id(const struct rte_crypto_xform *xform)
+{
+	/* Only chained cipher + auth operations are currently supported */
+	if (xform->next == NULL)
+		return -1;
+
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_CIPHER; */
+
+	/* Authentication Only */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_AUTH; */
+
+	/* Cipher then Authenticate */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+			xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+
+	/* Authenticate then Cipher */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+
+	return -1;
+}
+
+static struct rte_crypto_auth_xform *
+qat_get_auth_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_AUTH)
+			return &xform->auth;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+qat_get_cipher_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_CIPHER)
+			return &xform->cipher;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+
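+/*
+ * Build a QAT session from the xform chain: validate the requested
+ * cipher/auth combination, then bake the keys and precomputes into the
+ * session's content descriptor and build its firmware request template.
+ */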
+void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	struct qat_session *session = session_private;
+
+	struct rte_crypto_auth_xform *auth_xform = NULL;
+	struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+	int qat_cmd_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get requested QAT command id */
+	qat_cmd_id = qat_get_cmd_id(xform);
+	if (qat_cmd_id < 0 || qat_cmd_id >= ICP_QAT_FW_LA_CMD_DELIMITER) {
+		PMD_DRV_LOG(ERR, "Unsupported xform chain requested");
+		goto error_out;
+	}
+	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+
+	/* Get cipher xform from crypto xform chain */
+	cipher_xform = qat_get_cipher_xform(xform);
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+				cipher_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Cipher specified %u",
+				cipher_xform->algo);
+		goto error_out;
+	}
+
+	if (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+
+	/* Get authentication xform from Crypto xform chain */
+	auth_xform = qat_get_auth_xform(xform);
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		break;
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				auth_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				auth_xform->algo);
+		goto error_out;
+	}
+
+	if (qat_alg_aead_session_create_content_desc(session,
+		cipher_xform->key.data,
+		cipher_xform->key.length,
+		auth_xform->key.data,
+		auth_xform->key.length,
+		auth_xform->add_auth_data_length,
+		auth_xform->digest_length))
+		goto error_out;
+
+	return session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+unsigned qat_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev __rte_unused)
+{
+	return RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+}
+
+
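+/*
+ * Enqueue burst: ring space is reserved up front by atomically adding to
+ * the in-flight counter and backing off any overflow, one firmware request
+ * is written per mbuf at the shadow tail, and the tail CSR is kicked once
+ * for the whole burst.
+ */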
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	register struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	register uint32_t nb_pkts_sent = 0;
+	register struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	register int ret;
+	uint16_t nb_pkts_possible = nb_pkts;
+	register uint8_t *base_addr;
+	register uint32_t tail;
+	int overflow;
+
+	/* read params used a lot in main loop into registers */
+	queue = &(tmp_qp->tx_q);
+	base_addr = (uint8_t *)queue->base_addr;
+	tail = queue->tail;
+
+	/* Find how many can actually fit on the ring */
+	overflow = (rte_atomic16_add_return(&tmp_qp->inflights16, nb_pkts)
+				- queue->max_inflights);
+	if (overflow > 0) {
+		rte_atomic16_sub(&tmp_qp->inflights16, overflow);
+		nb_pkts_possible = nb_pkts - overflow;
+		if (nb_pkts_possible == 0)
+			return 0;
+	}
+
+	while (nb_pkts_sent != nb_pkts_possible) {
+
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			base_addr + tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			if (nb_pkts_sent == 0)
+				return 0;
+			goto kick_tail;
+		}
+
+		tail = adf_modulo(tail + queue->msg_size, queue->modulo);
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, tail);
+	queue->tail = tail;
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
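+/*
+ * Dequeue burst: responses are polled in order; a slot whose first word
+ * differs from ADF_RING_EMPTY_SIG holds a completed response. Consumed
+ * slots are re-stamped with the empty signature, and the head CSR and
+ * in-flight counter are updated once per burst.
+ */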
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct rte_mbuf_offload *ol;
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
+	resp_msg = (struct icp_qat_fw_comn_resp *)
+			((uint8_t *)queue->base_addr + queue->head);
+
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG &&
+			msg_counter != nb_pkts) {
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		ol = rte_pktmbuf_offload_get(rx_mbuf, RTE_PKTMBUF_OL_CRYPTO);
+
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+					resp_msg->comn_hdr.comn_status)) {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		} else {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+				queue->msg_size,
+				ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)
+					((uint8_t *)queue->base_addr +
+							queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
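+/*
+ * Build one firmware request in place on the TX ring: the prebuilt
+ * template is copied from the session, then the per-operation fields
+ * (buffer address/length, cipher/auth offsets, IV and digest locations)
+ * are patched in.
+ */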
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	register struct icp_qat_fw_la_bulk_req *qat_req;
+
+	ol = rte_pktmbuf_offload_get(mbuf, RTE_PKTMBUF_OL_CRYPTO);
+	if (unlikely(ol == NULL)) {
+		PMD_DRV_LOG(ERR, "No valid crypto off-load operation attached "
+				"to mbuf (%p)", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
+				" requests; mbuf (%p) is sessionless", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.session->type != RTE_CRYPTODEV_QAT_PMD)) {
+		PMD_DRV_LOG(ERR, "Session was not created for this device");
+		return -EINVAL;
+	}
+
+	ctx = (struct qat_session *)ol->op.crypto.session->_private;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length =
+			qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr =
+			qat_req->comn_mid.src_data_addr =
+					rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = ol->op.crypto.data.to_cipher.length;
+	cipher_param->cipher_offset = ol->op.crypto.data.to_cipher.offset;
+	if (ol->op.crypto.iv.length &&
+		(ol->op.crypto.iv.length <=
+				sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array,
+				ol->op.crypto.iv.data, ol->op.crypto.iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = ol->op.crypto.iv.phys_addr;
+	}
+	if (ol->op.crypto.digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = ol->op.crypto.digest.phys_addr;
+	}
+	auth_param->auth_off = ol->op.crypto.data.to_hash.offset;
+	auth_param->auth_len = ol->op.crypto.data.to_hash.length;
+	auth_param->u1.aad_adr = ol->op.crypto.additional_auth.phys_addr;
+
+	/* (GCM) AAD length (240 max) is at this location after the precompute */
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+		ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
+		auth_param->u2.aad_sz =
+		ALIGN_POW2_ROUNDUP(ctx->cd.hash.sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ + 3], 16);
+	}
+	auth_param->hash_state_sz = (auth_param->u2.aad_sz) >> 3;
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+	rte_hexdump(stdout, "qat_req:", qat_req,
+			sizeof(struct icp_qat_fw_la_bulk_req));
+#endif
+	return 0;
+}
+
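+/*
+ * Fast modulo for power-of-2 ring sizes: returns data % 2^shift. The
+ * queues' "modulo" field holds the shift amount, not the divisor.
+ */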
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
+
+void qat_crypto_sym_session_init(struct rte_mempool *mp, void *priv_sess)
+{
+	struct qat_session *s = priv_sess;
+
+	PMD_INIT_FUNC_TRACE();
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return 0;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+int qat_dev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = qat_crypto_sym_qp_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void qat_dev_info_get(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *info)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+				ADF_NUM_SYM_QPS_PER_BUNDLE *
+				ADF_NUM_BUNDLES_PER_DEV;
+
+		info->max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..d680364
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,124 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev_pmd.h>
+#include <rte_memzone.h>
+
+/*
+ * This macro rounds up a number to be a multiple of
+ * the alignment when the alignment is a power of 2
+ */
+#define ALIGN_POW2_ROUNDUP(num, align) \
+	(((num) + (align) - 1) & ~((align) - 1))
+
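+/* e.g. ALIGN_POW2_ROUNDUP(20, 16) == 32 */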
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	modulo;
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;
+	/**< HW queue aka ring offset on bundle */
+};
+
+struct qat_qp {
+	void			*mmap_bar_addr;
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+int qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev,
+	uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+
+extern unsigned
+qat_crypto_sym_get_session_private_size(struct rte_cryptodev *dev);
+
+extern void
+qat_crypto_sym_session_init(struct rte_mempool *mempool, void *priv_sess);
+
+extern void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+extern void
+qat_crypto_sym_clear_session(struct rte_cryptodev *dev, void *session);
+
+
+uint16_t
+qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..a909f63
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..ec5852d
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,429 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+/* Offset from bundle start to 1st Sym Tx queue */
+#define ADF_SYM_TX_QUEUE_STARTOFF		2
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static void qat_queue_delete(struct qat_queue *queue);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
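+/*
+ * Reserve (or re-use) a physically contiguous memzone for a ring. The
+ * memzone flags steer the allocation towards the hugepage size that is
+ * actually present, falling back to a size hint for unknown page sizes.
+ */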
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+			int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
+		}
+
+		PMD_DRV_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %d",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %d",
+					queue_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
+	switch (ms[0].hugepage_sz) {
+	case RTE_PGSIZE_2M:
+		memzone_flags = RTE_MEMZONE_2MB;
+		break;
+	case RTE_PGSIZE_1G:
+		memzone_flags = RTE_MEMZONE_1GB;
+		break;
+	case RTE_PGSIZE_16M:
+		memzone_flags = RTE_MEMZONE_16MB;
+		break;
+	case RTE_PGSIZE_16G:
+		memzone_flags = RTE_MEMZONE_16GB;
+		break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+	}
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(queue_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(queue_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
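+/*
+ * Each queue pair maps onto one TX ring (requests) and one RX ring
+ * (responses) at fixed offsets within the hardware bundle selected by
+ * the queue pair id.
+ */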
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		ret = qat_crypto_sym_qp_release(dev, queue_pair_id);
+		if (ret < 0)
+			return ret;
+	}
+
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	if (dev->pci_dev->mem_resource[0].addr == NULL) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space "
+				"(UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >=
+			(ADF_NUM_SYM_QPS_PER_BUNDLE *
+					ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device",
+				queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp metadata",
+			sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_INIT_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%hu", queue_pair_id);
+		qat_queue_delete(&(qp->tx_q));
+		goto create_err;
+	}
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	return 0;
+
+create_err:
+	rte_free(qp);
+	return (-EFAULT);
+}
+
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_qp *qp =
+			(struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return 0;
+	}
+
+	/* Don't free memory if there are still responses to be processed */
+	if (rte_atomic16_read(&(qp->inflights16)) == 0) {
+		/* Disable arbitration on the TX ring before freeing it */
+		adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	rte_free(qp);
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+	return 0;
+}
+
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id/ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id%ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
+static void qat_queue_delete(struct qat_queue *queue)
+{
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		PMD_DRV_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			PMD_DRV_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		PMD_DRV_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
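+/*
+ * The hardware requires the ring base to be naturally aligned to the ring
+ * size in bytes, and the size itself must match one of the discrete ring
+ * sizes encodable in the CSR (see adf_verify_queue_size()).
+ */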
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc*desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+			socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr,
+			queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					" 0x%"PRIx64"\n",
+					queue->base_phys_addr);
+		return -EFAULT;
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc, &(queue->queue_size))
+			!= 0) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	queue->modulo = ADF_RING_SIZE_MODULO(queue->queue_size);
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u modulo %u",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights,
+				queue->modulo);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return (-EINVAL);
+}
+
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	/* Clear (rather than toggle) the ring's arbitration enable bit */
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..bbaf1c8
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..e500c1e
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,137 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
+static struct rte_cryptodev_ops crypto_qat_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+		/* Crypto related operations */
+		.session_get_size	= qat_crypto_sym_get_session_private_size,
+		.session_configure	= qat_crypto_sym_configure_session,
+		.session_initialize	= qat_crypto_sym_session_init,
+		.session_clear		= qat_crypto_sym_clear_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
+		{
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_drv,
+			struct rte_cryptodev *cryptodev)
+{
+	struct qat_pmd_private *internals;
+
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_QAT_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further, as the
+	 * primary process has already done this work; there is currently
+	 * nothing further to check for this device.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 2b8ddce..cfcb064 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -150,6 +150,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
                               ` (6 preceding siblings ...)
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-25 10:32               ` Thomas Monjalon
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
                               ` (2 subsequent siblings)
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.

This PMD is dependent on Intel's multi-buffer library; see the whitepaper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors" (ref 1) for details on the library's design and ref 2 to
download the library itself. This initial implementation is limited to
supporting the chained operations of "hash then cipher" or "cipher then
hash" for the following cipher and hash algorithms:

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES128_CBC
  - RTE_CRYPTO_CIPHER_AES192_CBC
  - RTE_CRYPTO_CIPHER_AES256_CBC

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC

Important Note:
Because the multi-buffer library is designed for accelerating IPsec
crypto operations, the digests generated for the HMAC functions are
truncated to the lengths specified by the IPsec RFCs, e.g. RFC 2404
specifies that the HMAC-SHA-1 digest used with IPsec is truncated from
20 to 12 bytes.
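
For example, an application verifying a digest produced by this PMD
must compare only the truncated length. A minimal sketch of the lengths
involved (the enum and array names below are illustrative placeholders,
not part of the cryptodev API):

    /*
     * Truncated HMAC digest lengths, in bytes, per the IPsec RFCs
     * (RFC 2404 for HMAC-SHA-1, RFC 4868 for the SHA-2 variants).
     */
    enum hmac_algo { HMAC_SHA1, HMAC_SHA256, HMAC_SHA512 };

    static const unsigned truncated_digest_bytes[] = {
        [HMAC_SHA1]   = 12, /* full digest is 20 bytes */
        [HMAC_SHA256] = 16, /* full digest is 32 bytes */
        [HMAC_SHA512] = 32, /* full digest is 64 bytes */
    };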

Build instructions:
To build DPDK with the AESNI_MB_PMD the user is required to download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where the multi-buffer library was
extracted and built, and finally CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y must
be set in config/common_linuxapp.

Current status: the PMD doesn't support crypto operations
across chained mbufs, or cipher-only or hash-only operations.
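
As an illustration of the chained-operation support described above, a
cipher-then-hash session might be set up roughly as follows. This is a
minimal sketch only: rte_cryptodev_session_create() is assumed from the
cryptodev patch in this series, error handling is omitted, and dev_id,
cipher_key and hmac_key are placeholders:

    struct rte_crypto_xform auth_xform = {
        .type = RTE_CRYPTO_XFORM_AUTH,
        .next = NULL,
        .auth = {
            .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
            .key = { .data = hmac_key, .length = 20 },
        },
    };

    struct rte_crypto_xform cipher_xform = {
        .type = RTE_CRYPTO_XFORM_CIPHER,
        .next = &auth_xform,    /* cipher then hash */
        .cipher = {
            .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
            .algo = RTE_CRYPTO_CIPHER_AES_CBC,
            .key = { .data = cipher_key, .length = 16 }, /* AES-128 */
        },
    };

    struct rte_cryptodev_session *sess =
        rte_cryptodev_session_create(dev_id, &cipher_xform);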

ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p

ref 2: https://downloadcenter.intel.com/download/22972

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 MAINTAINERS                                        |   3 +
 config/common_bsdapp                               |   7 +
 config/common_linuxapp                             |   7 +
 doc/guides/cryptodevs/aesni_mb.rst                 |  76 +++
 doc/guides/cryptodevs/index.rst                    |   1 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/aesni_mb/Makefile                   |  63 ++
 drivers/crypto/aesni_mb/aesni_mb_ops.h             | 210 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 669 +++++++++++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     | 298 +++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 229 +++++++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |   3 +
 mk/rte.app.mk                                      |   4 +
 13 files changed, 1571 insertions(+)
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 73d9578..2d5808c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -303,6 +303,9 @@ Null PMD
 M: Tetsuya Mukawa <mukawa@igel.co.jp>
 F: drivers/net/null/
 
+Crypto AES-NI Multi-Buffer PMD
+M: Declan Doherty <declan.doherty@intel.com>
+F: drivers/crypto/aesni_mb/
 
 Packet processing
 -----------------
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 0068b20..a18e817 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -168,6 +168,13 @@ CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_MAX_QAT_SESSIONS=200
 
+
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+
 #
 # Support NIC bypass logic
 #
diff --git a/config/common_linuxapp b/config/common_linuxapp
index b29d3dd..d9c8c5c 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,6 +166,14 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
 
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+CONFIG_RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS=8
+CONFIG_RTE_AESNI_MB_PMD_MAX_NB_SESSIONS=2048
+
 #
 # Support NIC bypass logic
 #
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
new file mode 100644
index 0000000..826b632
--- /dev/null
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -0,0 +1,76 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AES-NI Multi Buffer Crypto Poll Mode Driver
+============================================
+
+
+The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver
+support utilising Intel's multi-buffer library; see the white paper
+`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
+<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-paper.html?wapkw=multi+buffer>`_.
+
+The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+AESNI MB PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES192_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To build DPDK with the AESNI_MB_PMD the user is required to download the library
+from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on
+their system before building DPDK. The environment variable
+AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where the
+multi-buffer library was extracted and built, and finally
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y must be set in config/common_linuxapp.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 1c31697..8949fd0 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,4 +39,5 @@ Crypto Device Drivers
     :maxdepth: 2
     :numbered:
 
+    aesni_mb
     qat
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index f6aecea..d07ee96 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -31,6 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile
new file mode 100644
index 0000000..3bf83d1
--- /dev/null
+++ b/drivers/crypto/aesni_mb/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_aesni_mb.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_aesni_version.map
+
+# external library include paths
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
new file mode 100644
index 0000000..0c119bf
--- /dev/null
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -0,0 +1,210 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AESNI_MB_OPS_H_
+#define _AESNI_MB_OPS_H_
+
+#ifndef LINUX
+#define LINUX
+#endif
+
+#include <mb_mgr.h>
+#include <aux_funcs.h>
+
+enum aesni_mb_vector_mode {
+	RTE_AESNI_MB_NOT_SUPPORTED = 0,
+	RTE_AESNI_MB_SSE,
+	RTE_AESNI_MB_AVX,
+	RTE_AESNI_MB_AVX2
+};
+
+typedef void (*md5_one_block_t)(void *data, void *digest);
+
+typedef void (*sha1_one_block_t)(void *data, void *digest);
+typedef void (*sha224_one_block_t)(void *data, void *digest);
+typedef void (*sha256_one_block_t)(void *data, void *digest);
+typedef void (*sha384_one_block_t)(void *data, void *digest);
+typedef void (*sha512_one_block_t)(void *data, void *digest);
+
+typedef void (*aes_keyexp_128_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_192_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_256_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+typedef void (*aes_xcbc_expand_key_t)
+		(void *key, void *exp_k1, void *k2, void *k3);
+
+/** Multi-buffer library function pointer table */
+struct aesni_mb_ops {
+	struct {
+		init_mb_mgr_t init_mgr;
+		/**< Initialise scheduler  */
+		get_next_job_t get_next;
+		/**< Get next free job structure */
+		submit_job_t submit;
+		/**< Submit job to scheduler */
+		get_completed_job_t get_completed_job;
+		/**< Get completed job */
+		flush_job_t flush_job;
+		/**< flush jobs from manager */
+	} job;
+	/**< multi buffer manager functions */
+
+	struct {
+		struct {
+			md5_one_block_t md5;
+			/**< MD5 one block hash */
+			sha1_one_block_t sha1;
+			/**< SHA1 one block hash */
+			sha224_one_block_t sha224;
+			/**< SHA224 one block hash */
+			sha256_one_block_t sha256;
+			/**< SHA256 one block hash */
+			sha384_one_block_t sha384;
+			/**< SHA384 one block hash */
+			sha512_one_block_t sha512;
+			/**< SHA512 one block hash */
+		} one_block;
+		/**< one block hash functions */
+
+		struct {
+			aes_keyexp_128_t aes128;
+			/**< AES128 key expansions */
+			aes_keyexp_192_t aes192;
+			/**< AES192 key expansions */
+			aes_keyexp_256_t aes256;
+			/**< AES256 key expansions */
+
+			aes_xcbc_expand_key_t aes_xcbc;
+			/**< AES XCBC key expansions */
+		} keyexp;
+		/**< Key expansion functions */
+	} aux;
+	/**< Auxiliary functions */
+};
+
+
+static const struct aesni_mb_ops job_ops[] = {
+		[RTE_AESNI_MB_NOT_SUPPORTED] = {
+			.job = {
+				NULL
+			},
+			.aux = {
+				.one_block = {
+					NULL
+				},
+				.keyexp = {
+					NULL
+				}
+			}
+		},
+		[RTE_AESNI_MB_SSE] = {
+			.job = {
+				init_mb_mgr_sse,
+				get_next_job_sse,
+				submit_job_sse,
+				get_completed_job_sse,
+				flush_job_sse
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_sse,
+					sha1_one_block_sse,
+					sha224_one_block_sse,
+					sha256_one_block_sse,
+					sha384_one_block_sse,
+					sha512_one_block_sse
+				},
+				.keyexp = {
+					aes_keyexp_128_sse,
+					aes_keyexp_192_sse,
+					aes_keyexp_256_sse,
+					aes_xcbc_expand_key_sse
+				}
+			}
+		},
+		[RTE_AESNI_MB_AVX] = {
+			.job = {
+				init_mb_mgr_avx,
+				get_next_job_avx,
+				submit_job_avx,
+				get_completed_job_avx,
+				flush_job_avx
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx,
+					sha1_one_block_avx,
+					sha224_one_block_avx,
+					sha256_one_block_avx,
+					sha384_one_block_avx,
+					sha512_one_block_avx
+				},
+				.keyexp = {
+					aes_keyexp_128_avx,
+					aes_keyexp_192_avx,
+					aes_keyexp_256_avx,
+					aes_xcbc_expand_key_avx
+				}
+			}
+		},
+		[RTE_AESNI_MB_AVX2] = {
+			.job = {
+				init_mb_mgr_avx2,
+				get_next_job_avx2,
+				submit_job_avx2,
+				get_completed_job_avx2,
+				flush_job_avx2
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx2,
+					sha1_one_block_avx2,
+					sha224_one_block_avx2,
+					sha256_one_block_avx2,
+					sha384_one_block_avx2,
+					sha512_one_block_avx2
+				},
+				.keyexp = {
+					aes_keyexp_128_avx2,
+					aes_keyexp_192_avx2,
+					aes_keyexp_256_avx2,
+					aes_xcbc_expand_key_avx2
+				}
+			}
+		}
+};
+
+
+#endif /* _AESNI_MB_OPS_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
new file mode 100644
index 0000000..d8ccf05
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -0,0 +1,669 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_mbuf_offload.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/**
+ * Global static parameter used to create a unique name for each AES-NI multi
+ * buffer crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+typedef void (*hash_one_block_t)(void *data, void *digest);
+typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+/**
+ * Calculate the authentication pre-computes
+ *
+ * @param one_block_hash	Function pointer to calculate digest on ipad/opad
+ * @param ipad			Inner pad output byte array
+ * @param opad			Outer pad output byte array
+ * @param hkey			Authentication key
+ * @param hkey_len		Authentication key length
+ * @param blocksize		Block size of selected hash algo
+ */
+static void
+calculate_auth_precomputes(hash_one_block_t one_block_hash,
+		uint8_t *ipad, uint8_t *opad,
+		uint8_t *hkey, uint16_t hkey_len,
+		uint16_t blocksize)
+{
+	unsigned i, length;
+
+	uint8_t ipad_buf[blocksize] __rte_aligned(16);
+	uint8_t opad_buf[blocksize] __rte_aligned(16);
+
+	/* Setup inner and outer pads */
+	memset(ipad_buf, HMAC_IPAD_VALUE, blocksize);
+	memset(opad_buf, HMAC_OPAD_VALUE, blocksize);
+
+	/* XOR hash key with inner and outer pads */
+	length = hkey_len > blocksize ? blocksize : hkey_len;
+
+	for (i = 0; i < length; i++) {
+		ipad_buf[i] ^= hkey[i];
+		opad_buf[i] ^= hkey[i];
+	}
+
+	/* Compute partial hashes */
+	(*one_block_hash)(ipad_buf, ipad);
+	(*one_block_hash)(opad_buf, opad);
+
+	/* Clean up stack */
+	memset(ipad_buf, 0, blocksize);
+	memset(opad_buf, 0, blocksize);
+}
+
+/** Get xform chain order */
+static int
+aesni_mb_get_chain_order(const struct rte_crypto_xform *xform)
+{
+	/*
+	 * Multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained
+	 * operations, all other options are invalid, so we must have exactly
+	 * 2 xform structs chained together
+	 */
+	if (xform->next == NULL || xform->next->next != NULL)
+		return -1;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return HASH_CIPHER;
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+				xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return CIPHER_HASH;
+
+	return -1;
+}
+
+/** Set session authentication parameters */
+static int
+aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	hash_one_block_t hash_oneblock_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_AUTH) {
+		MB_LOG_ERR("Crypto xform struct not of type auth");
+		return -1;
+	}
+
+	/* Set Authentication Parameters */
+	if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC) {
+		sess->auth.algo = AES_XCBC;
+		(*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+				sess->auth.xcbc.k1_expanded,
+				sess->auth.xcbc.k2, sess->auth.xcbc.k3);
+		return 0;
+	}
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			xform->auth.key.data,
+			xform->auth.key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+/** Set session cipher parameters */
+static int
+aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	aes_keyexp_t aes_keyexp_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_CIPHER) {
+		MB_LOG_ERR("Crypto xform struct not of type cipher");
+		return -1;
+	}
+
+	/* Select cipher direction */
+	switch (xform->cipher.op) {
+	case RTE_CRYPTO_CIPHER_OP_ENCRYPT:
+		sess->cipher.direction = ENCRYPT;
+		break;
+	case RTE_CRYPTO_CIPHER_OP_DECRYPT:
+		sess->cipher.direction = DECRYPT;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher operation parameter");
+		return -1;
+	}
+
+	/* Select cipher mode */
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		sess->cipher.mode = CBC;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher mode parameter");
+		return -1;
+	}
+
+	/* Check key length and choose key expansion function */
+	switch (xform->cipher.key.length) {
+	case AES_128_BYTES:
+		sess->cipher.key_length_in_bytes = AES_128_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+		break;
+	case AES_192_BYTES:
+		sess->cipher.key_length_in_bytes = AES_192_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+		break;
+	case AES_256_BYTES:
+		sess->cipher.key_length_in_bytes = AES_256_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher key length");
+		return -1;
+	}
+
+	/* Expanded cipher keys */
+	(*aes_keyexp_fn)(xform->cipher.key.data,
+			sess->cipher.expanded_aes_keys.encode,
+			sess->cipher.expanded_aes_keys.decode);
+
+	return 0;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (aesni_mb_get_chain_order(xform)) {
+	case HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	case CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		auth_xform = xform->next;
+		cipher_xform = xform;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported operation chain order parameter");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
+		MB_LOG_ERR("Invalid/unsupported authentication parameters");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_cipher_parameters(mb_ops, sess,
+			cipher_xform)) {
+		MB_LOG_ERR("Invalid/unsupported cipher parameters");
+		return -1;
+	}
+	return 0;
+}
+
+/** Get multi buffer session */
+static struct aesni_mb_session *
+get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *crypto_op)
+{
+	struct aesni_mb_session *sess;
+
+	if (crypto_op->type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(crypto_op->session->type !=
+				RTE_CRYPTODEV_AESNI_MB_PMD))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)crypto_op->session->_private;
+	} else  {
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)c_sess->_private;
+		crypto_op->session = c_sess;
+		if (unlikely(aesni_mb_set_session_parameters(qp->ops,
+				sess, crypto_op->xform) != 0))
+			return NULL;
+	}
+
+	return sess;
+}
+
+/**
+ * Process a crypto operation and complete a JOB_AES_HMAC job structure for
+ * submission to the multi buffer library for processing.
+ *
+ * @param	qp	queue pair
+ * @param	job	JOB_AES_HMAC structure to fill
+ * @param	m	mbuf to process
+ *
+ * @return
+ * - Completed JOB_AES_HMAC structure pointer on success
+ * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible
+ */
+static JOB_AES_HMAC *
+process_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	JOB_AES_HMAC *job;
+
+	job = (*qp->ops->job.get_next)(&qp->mb_mgr);
+	if (unlikely(job == NULL))
+		return job;
+
+	/* Set crypto operation */
+	job->chain_order = session->chain_order;
+
+	/* Set cipher parameters */
+	job->cipher_direction = session->cipher.direction;
+	job->cipher_mode = session->cipher.mode;
+
+	job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes;
+	job->aes_enc_key_expanded = session->cipher.expanded_aes_keys.encode;
+	job->aes_dec_key_expanded = session->cipher.expanded_aes_keys.decode;
+
+
+	/* Set authentication parameters */
+	job->hash_alg = session->auth.algo;
+	if (job->hash_alg == AES_XCBC) {
+		job->_k1_expanded = session->auth.xcbc.k1_expanded;
+		job->_k2 = session->auth.xcbc.k2;
+		job->_k3 = session->auth.xcbc.k3;
+	} else {
+		job->hashed_auth_key_xor_ipad = session->auth.pads.inner;
+		job->hashed_auth_key_xor_opad = session->auth.pads.outer;
+	}
+
+	/* Mutable crypto operation parameters */
+
+	/* Set digest output location */
+	if (job->cipher_direction == DECRYPT) {
+		job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m,
+				get_digest_byte_length(job->hash_alg));
+
+		if (job->auth_tag_output)
+			memset(job->auth_tag_output, 0,
+				get_digest_byte_length(job->hash_alg));
+		else
+			return NULL;
+	} else {
+		job->auth_tag_output = c_op->digest.data;
+	}
+
+	/*
+	 * The multi-buffer library currently only supports returning a
+	 * truncated digest length, as specified in the relevant IPsec RFCs
+	 */
+	job->auth_tag_output_len_in_bytes =
+			get_truncated_digest_byte_length(job->hash_alg);
+
+	/* Set IV parameters */
+	job->iv = c_op->iv.data;
+	job->iv_len_in_bytes = c_op->iv.length;
+
+	/* Data parameters */
+	job->src = rte_pktmbuf_mtod(m, uint8_t *);
+	job->dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset;
+	job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length;
+
+	job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset;
+	job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length;
+
+	/* Set user data to be crypto operation data struct */
+	job->user_data = m;
+	job->user_data2 = c_op;
+
+	return job;
+}
+
+/**
+ * Process a completed job and return rte_mbuf which job processed
+ *
+ * @param job	JOB_AES_HMAC job to process
+ *
+ * @return
+ * - Returns processed mbuf which is trimmed of output digest used in
+ * verification of supplied digest in the case of a HASH_CIPHER operation
+ * - Returns NULL on invalid job
+ */
+static struct rte_mbuf *
+post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m;
+	struct rte_crypto_op *c_op;
+
+	if (job->user_data == NULL)
+		return NULL;
+
+	/* handle retrieved job */
+	m = (struct rte_mbuf *)job->user_data;
+	c_op = (struct rte_crypto_op *)job->user_data2;
+
+	/* set status as successful by default */
+	c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	/* check if job has been processed  */
+	if (unlikely(job->status != STS_COMPLETED)) {
+		c_op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return m;
+	} else if (job->chain_order == HASH_CIPHER) {
+		/* Verify digest if required */
+		if (memcmp(job->auth_tag_output, c_op->digest.data,
+				job->auth_tag_output_len_in_bytes) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg));
+	}
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	return m;
+}
+
+/**
+ * Process a completed JOB_AES_HMAC job and keep processing jobs until
+ * get_completed_job return NULL
+ *
+ * @param qp		Queue Pair to process
+ * @param job		JOB_AES_HMAC job
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m = NULL;
+	unsigned processed_jobs = 0;
+
+	while (job) {
+		processed_jobs++;
+		m = post_process_mb_job(qp, job);
+		if (m)
+			rte_ring_enqueue(qp->processed_pkts, (void *)m);
+		else
+			qp->qp_stats.dequeue_err_count++;
+
+		job = (*qp->ops->job.get_completed_job)(&qp->mb_mgr);
+	}
+
+	return processed_jobs;
+}
+
+static uint16_t
+aesni_mb_pmd_enqueue_burst(void *queue_pair, struct rte_mbuf **bufs,
+		uint16_t nb_bufs)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct aesni_mb_session *sess;
+	struct aesni_mb_qp *qp = queue_pair;
+
+	JOB_AES_HMAC *job = NULL;
+
+	int i, processed_jobs = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+		ol = rte_pktmbuf_offload_get(bufs[i], RTE_PKTMBUF_OL_CRYPTO);
+		if (unlikely(ol == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		sess = get_session(qp, &ol->op.crypto);
+		if (unlikely(sess == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		job = process_crypto_op(qp, bufs[i], &ol->op.crypto, sess);
+		if (unlikely(job == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		/* Submit Job */
+		job = (*qp->ops->job.submit)(&qp->mb_mgr);
+
+		/*
+		 * If submit returns a processed job then handle it,
+		 * before submitting subsequent jobs
+		 */
+		if (job)
+			processed_jobs += handle_completed_jobs(qp, job);
+	}
+
+	if (processed_jobs == 0)
+		goto flush_jobs;
+
+	qp->qp_stats.enqueued_count += processed_jobs;
+	return i;
+
+flush_jobs:
+	/*
+	 * If we haven't processed any jobs in submit loop, then flush jobs
+	 * queue to stop the output stalling
+	 */
+	job = (*qp->ops->job.flush_job)(&qp->mb_mgr);
+	if (job)
+		qp->qp_stats.enqueued_count += handle_completed_jobs(qp, job);
+
+	return i;
+}
+
+static uint16_t
+aesni_mb_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+
+static int cryptodev_aesni_mb_uninit(const char *name);
+
+static int
+cryptodev_aesni_mb_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct aesni_mb_private *internals;
+	enum aesni_mb_vector_mode vector_mode;
+
+	/* Check CPU for support for AES instruction set */
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
+		MB_LOG_ERR("AES instructions not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_MB_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_MB_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_MB_SSE;
+	else {
+		MB_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		MB_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct aesni_mb_private), socket_id);
+	if (dev == NULL) {
+		MB_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	dev->dev_ops = rte_aesni_mb_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
+	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+
+	/* Set vector instructions mode supported */
+	internals = dev->data->dev_private;
+
+	internals->vector_mode = vector_mode;
+	internals->max_nb_queue_pairs = RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS;
+	internals->max_nb_sessions = RTE_AESNI_MB_PMD_MAX_NB_SESSIONS;
+
+	return dev->data->dev_id;
+init_error:
+	MB_LOG_ERR("driver %s: cryptodev_aesni_mb_create failed", name);
+
+	cryptodev_aesni_mb_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_aesni_mb_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_aesni_mb_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_aesni_mb_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_aesni_mb_pmd_drv = {
+	.name = CRYPTODEV_NAME_AESNI_MB_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_aesni_mb_init,
+	.uninit = cryptodev_aesni_mb_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
new file mode 100644
index 0000000..96d22f6
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -0,0 +1,298 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/** Configure device */
+static int
+aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+aesni_mb_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		dev_info->max_nb_sessions = internals->max_nb_sessions;
+	}
+}
+
+/** Release queue pair */
+static int
+aesni_mb_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair based on the dev_id and qp_id */
+static int
+aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct aesni_mb_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"aesni_mb_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed packets on */
+static struct rte_ring *
+aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			MB_LOG_INFO("Reusing existing ring %s for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		MB_LOG_ERR("Unable to reuse existing ring %s for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct aesni_mb_qp *qp = NULL;
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		aesni_mb_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (aesni_mb_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->ops = &job_ops[internals->vector_mode];
+
+	qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	/* Initialise multi-buffer manager */
+	(*qp->ops->job.init_mgr)(&qp->mb_mgr);
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+aesni_mb_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the aesni multi-buffer session structure */
+static unsigned
+aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct aesni_mb_session);
+}
+
+/** Configure an AES-NI multi-buffer session from a crypto xform chain */
+static void *
+aesni_mb_pmd_session_configure(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform,	void *sess)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (unlikely(sess == NULL)) {
+		MB_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+			sess, xform) != 0) {
+		MB_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+aesni_mb_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/*
+	 * Currently just resetting the whole data structure; investigate
+	 * whether a more selective reset of the key material is more performant
+	 */
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
+}
+
+struct rte_cryptodev_ops aesni_mb_pmd_ops = {
+		.dev_configure		= aesni_mb_pmd_config,
+		.dev_start		= aesni_mb_pmd_start,
+		.dev_stop		= aesni_mb_pmd_stop,
+		.dev_close		= aesni_mb_pmd_close,
+
+		.stats_get		= aesni_mb_pmd_stats_get,
+		.stats_reset		= aesni_mb_pmd_stats_reset,
+
+		.dev_infos_get		= aesni_mb_pmd_info_get,
+
+		.queue_pair_setup	= aesni_mb_pmd_qp_setup,
+		.queue_pair_release	= aesni_mb_pmd_qp_release,
+		.queue_pair_start	= aesni_mb_pmd_qp_start,
+		.queue_pair_stop	= aesni_mb_pmd_qp_stop,
+		.queue_pair_count	= aesni_mb_pmd_qp_count,
+
+		.session_get_size	= aesni_mb_pmd_session_get_size,
+		.session_configure	= aesni_mb_pmd_session_configure,
+		.session_clear		= aesni_mb_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
new file mode 100644
index 0000000..2f98609
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -0,0 +1,229 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_
+#define _RTE_AESNI_MB_PMD_PRIVATE_H_
+
+#include "aesni_mb_ops.h"
+
+#define MB_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_AESNI_MB_DEBUG
+#define MB_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#define MB_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define MB_LOG_INFO(fmt, args...)
+#define MB_LOG_DBG(fmt, args...)
+#endif
+
+#define HMAC_IPAD_VALUE			(0x36)
+#define HMAC_OPAD_VALUE			(0x5C)
+
+static const unsigned auth_blocksize[] = {
+		[MD5]		= 64,
+		[SHA1]		= 64,
+		[SHA_224]	= 64,
+		[SHA_256]	= 64,
+		[SHA_384]	= 128,
+		[SHA_512]	= 128,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the blocksize in bytes for a specified authentication algorithm
+ *
+ * @note: this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_auth_algo_blocksize(JOB_HASH_ALG algo)
+{
+	return auth_blocksize[algo];
+}
+
+static const unsigned auth_truncated_digest_byte_lengths[] = {
+		[MD5]		= 12,
+		[SHA1]		= 12,
+		[SHA_224]	= 14,
+		[SHA_256]	= 16,
+		[SHA_384]	= 24,
+		[SHA_512]	= 32,
+		[AES_XCBC]	= 12,
+};
+
+/**
+ * Get the IPsec specified truncated length in bytes of the HMAC digest for a
+ * specified authentication algorithm
+ *
+ * @note: this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_truncated_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_truncated_digest_byte_lengths[algo];
+}
+
+static const unsigned auth_digest_byte_lengths[] = {
+		[MD5]		= 16,
+		[SHA1]		= 20,
+		[SHA_224]	= 28,
+		[SHA_256]	= 32,
+		[SHA_384]	= 48,
+		[SHA_512]	= 64,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the output digest size in bytes for a specified authentication algorithm
+ *
+ * @note: this function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_digest_byte_lengths[algo];
+}
+
+
+/** private data structure for each virtual AESNI device */
+struct aesni_mb_private {
+	enum aesni_mb_vector_mode vector_mode;
+	/**< CPU vector instruction set mode */
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+/** AESNI Multi buffer queue pair */
+struct aesni_mb_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	const struct aesni_mb_ops *ops;
+	/**< Vector mode dependent pointer table of the multi-buffer APIs */
+	MB_MGR mb_mgr;
+	/**< Multi-buffer instance */
+	struct rte_ring *processed_pkts;
+	/**< Ring for placing process packets */
+	struct rte_mempool *sess_mp;
+	/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;
+	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** AES-NI multi-buffer private session structure */
+struct aesni_mb_session {
+	JOB_CHAIN_ORDER chain_order;
+
+	/** Cipher Parameters */
+	struct {
+		/** Cipher direction - encrypt / decrypt */
+		JOB_CIPHER_DIRECTION direction;
+		/** Cipher mode - CBC / Counter */
+		JOB_CIPHER_MODE mode;
+
+		uint64_t key_length_in_bytes;
+
+		struct {
+			uint32_t encode[60] __rte_aligned(16);
+			/**< encode key */
+			uint32_t decode[60] __rte_aligned(16);
+			/**< decode key */
+		} expanded_aes_keys;
+		/**< Expanded AES keys - Allocating space to
+		 * contain the maximum expanded key size which
+		 * is 240 bytes for 256 bit AES, calculated as:
+		 * ((block size (16 bytes)) *
+		 * ((number of rounds) + 1))
+		 */
+	} cipher;
+
+	/** Authentication Parameters */
+	struct {
+		JOB_HASH_ALG algo; /**< Authentication Algorithm */
+		union {
+			struct {
+				uint8_t inner[128] __rte_aligned(16);
+				/**< inner pad */
+				uint8_t outer[128] __rte_aligned(16);
+				/**< outer pad */
+			} pads;
+			/**< HMAC Authentication pads -
+			 * allocating space for the maximum pad
+			 * size supported which is 128 bytes for
+			 * SHA512
+			 */
+
+			struct {
+			    uint32_t k1_expanded[44] __rte_aligned(16);
+			    /**< k1 (expanded key). */
+			    uint8_t k2[16] __rte_aligned(16);
+			    /**< k2. */
+			    uint8_t k3[16] __rte_aligned(16);
+			    /**< k3. */
+			} xcbc;
+			/**< Expanded XCBC authentication keys */
+		};
+	} auth;
+} __rte_cache_aligned;
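
As a rough sketch of how a cipher transform maps onto this session layout
(the real logic belongs to aesni_mb_set_session_parameters(), declared
below; the ENCRYPT/DECRYPT/CBC enum values and the aes_128 key-expansion
callback follow multi-buffer library conventions but are assumptions
here, not part of this patch):

/* Sketch only: fill the cipher half of a session from an AES-CBC xform */
static void
example_set_cipher_params(const struct aesni_mb_ops *mb_ops,
		struct aesni_mb_session *sess,
		const struct rte_crypto_xform *xform)
{
	sess->cipher.direction = (xform->cipher.op ==
			RTE_CRYPTO_CIPHER_OP_ENCRYPT) ? ENCRYPT : DECRYPT;
	sess->cipher.mode = CBC;
	sess->cipher.key_length_in_bytes = xform->cipher.key.length;

	/* expand the AES key into encode (encrypt) and decode (decrypt)
	 * key schedules, stored in the session */
	(*mb_ops->aux.keyexp.aes_128)(xform->cipher.key.data,
			sess->cipher.expanded_aes_keys.encode,
			sess->cipher.expanded_aes_keys.decode);
}
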
+
+
+/**
+ * Set the parameters of a session (keys, precomputes and chain order)
+ * from a chain of crypto transforms.
+ */
+extern int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
+
+
+
+#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
new file mode 100644
index 0000000..ad607bb
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index cfcb064..4a660e6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -153,6 +153,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 
+# AESNI MULTI BUFFER is dependent on the IPSec_MB library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 09/10] app/test: add cryptodev unit and performance tests
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
                               ` (7 preceding siblings ...)
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 10/10] l2fwd-crypto: crypto Declan Doherty
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

Unit tests are run by using cryptodev_qat_autotest or
cryptodev_aesni_autotest from the test app's interactive console.

Performance tests are run by using the cryptodev_qat_perftest or
cryptodev_aesni_mb_perftest command from the test app's interactive
console.

If you wish to run the tests on a QAT device, there must be one
bound to the igb_uio kernel driver.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 MAINTAINERS                          |    2 +
 app/test/Makefile                    |    4 +
 app/test/test.c                      |   92 +-
 app/test/test.h                      |   34 +-
 app/test/test_cryptodev.c            | 1986 ++++++++++++++++++++++++++++++++
 app/test/test_cryptodev.h            |   68 ++
 app/test/test_cryptodev_perf.c       | 2062 ++++++++++++++++++++++++++++++++++
 app/test/test_link_bonding.c         |    6 +-
 app/test/test_link_bonding_mode4.c   |    7 +-
 app/test/test_link_bonding_rssconf.c |    7 +-
 10 files changed, 4219 insertions(+), 49 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 2d5808c..1f72f8c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -204,6 +204,8 @@ Crypto API
 M: Declan Doherty <declan.doherty@intel.com>
 F: lib/librte_cryptodev
 F: docs/guides/cryptodevs
+F: app/test/test_cryptodev.c
+F: app/test/test_cryptodev_perf.c
 
 Drivers
 -------
diff --git a/app/test/Makefile b/app/test/Makefile
index de63235..ec33e1a 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -149,6 +149,10 @@ endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 CFLAGS += -O3
diff --git a/app/test/test.c b/app/test/test.c
index b94199a..f35b304 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -159,51 +159,81 @@ main(int argc, char **argv)
 int
 unit_test_suite_runner(struct unit_test_suite *suite)
 {
-	int retval, i = 0;
+	int test_success;
+	unsigned total = 0, executed = 0, skipped = 0, succeeded = 0, failed = 0;
 
-	if (suite->suite_name)
-		printf("Test Suite : %s\n", suite->suite_name);
+	if (suite->suite_name) {
+		printf(" + ------------------------------------------------------- +\n");
+		printf(" + Test Suite : %s\n", suite->suite_name);
+	}
 
 	if (suite->setup)
-		if (suite->setup() != 0)
-			return -1;
-
-	while (suite->unit_test_cases[i].testcase) {
-		/* Run test case setup */
-		if (suite->unit_test_cases[i].setup) {
-			retval = suite->unit_test_cases[i].setup();
-			if (retval != 0)
-				return retval;
-		}
+		if (suite->setup() != 0) {
+			/* record the setup failure rather than report success */
+			failed++;
+			goto suite_summary;
+		}
 
-		/* Run test case */
-		if (suite->unit_test_cases[i].testcase() == 0) {
-			printf("TestCase %2d: %s\n", i,
-					suite->unit_test_cases[i].success_msg ?
-					suite->unit_test_cases[i].success_msg :
-					"passed");
-		}
-		else {
-			printf("TestCase %2d: %s\n", i, suite->unit_test_cases[i].fail_msg ?
-					suite->unit_test_cases[i].fail_msg :
-					"failed");
-			return -1;
+	printf(" + ------------------------------------------------------- +\n");
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (!suite->unit_test_cases[total].enabled) {
+			skipped++;
+			total++;
+			continue;
+		}
+		executed++;
 
-		/* Run test case teardown */
-		if (suite->unit_test_cases[i].teardown) {
-			retval = suite->unit_test_cases[i].teardown();
-			if (retval != 0)
-				return retval;
+		/* run test case setup */
+		if (suite->unit_test_cases[total].setup)
+			test_success = suite->unit_test_cases[total].setup();
+		else
+			test_success = TEST_SUCCESS;
+
+		if (test_success == TEST_SUCCESS) {
+			/* run the test case */
+			test_success = suite->unit_test_cases[total].testcase();
+			if (test_success == TEST_SUCCESS)
+				succeeded++;
+			else
+				failed++;
+		} else {
+			failed++;
 		}
 
-		i++;
+		/* run the test case teardown */
+		if (suite->unit_test_cases[total].teardown)
+			suite->unit_test_cases[total].teardown();
+
+		if (test_success == TEST_SUCCESS)
+			printf(" + TestCase [%2u] : %s\n", total,
+					suite->unit_test_cases[total].success_msg ?
+					suite->unit_test_cases[total].success_msg :
+					"passed");
+		else
+			printf(" + TestCase [%2u] : %s\n", total,
+					suite->unit_test_cases[total].fail_msg ?
+					suite->unit_test_cases[total].fail_msg :
+					"failed");
+
+		total++;
 	}
 
 	/* Run test suite teardown */
 	if (suite->teardown)
-		if (suite->teardown() != 0)
-			return -1;
+		suite->teardown();
+
+
+suite_summary:
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary \n");
+	printf(" + Tests Total :       %2d\n", total);
+	printf(" + Tests Skipped :     %2d\n", skipped);
+	printf(" + Tests Executed :    %2d\n", executed);
+	printf(" + Tests Passed :      %2d\n", succeeded);
+	printf(" + Tests Failed :      %2d\n", failed);
+	printf(" + ------------------------------------------------------- +\n");
+
+	if (failed)
+		return -1;
 
 	return 0;
 }
diff --git a/app/test/test.h b/app/test/test.h
index 62eb51d..a2fba60 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -33,7 +33,7 @@
 
 #ifndef _TEST_H_
 #define _TEST_H_
-
+#include <stddef.h>
+#include <string.h> /* memcmp(), used by TEST_ASSERT_BUFFERS_ARE_EQUAL */
 #include <sys/queue.h>
 
 #define TEST_SUCCESS  (0)
@@ -64,6 +64,17 @@
 		}                                                        \
 } while (0)
 
+
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len, msg, ...) do {         \
+	if (memcmp(a, b, len)) {                                         \
+		printf("TestCase %s() line %d failed: "                  \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__);    \
+		TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);        \
+		return TEST_FAILED;                                      \
+	}                                                                \
+} while (0)
+
+
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
 			printf("TestCase %s() line %d failed: "              \
@@ -113,27 +124,36 @@
 
 struct unit_test_case {
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	int (*testcase)(void);
 	const char *success_msg;
 	const char *fail_msg;
+	unsigned enabled;
 };
 
-#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed"}
+#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed", 1 }
 
 #define TEST_CASE_NAMED(name, fn) { NULL, NULL, fn, name " succeeded", \
-		name " failed"}
+		name " failed", 1 }
 
 #define TEST_CASE_ST(setup, teardown, testcase)         \
 		{ setup, teardown, testcase, #testcase " succeeded",    \
-		#testcase " failed "}
+		#testcase " failed ", 1 }
+
+
+#define TEST_CASE_DISABLED(fn) { NULL, NULL, fn, #fn " succeeded", \
+	#fn " failed", 0 }
+
+#define TEST_CASE_ST_DISABLED(setup, teardown, testcase)         \
+		{ setup, teardown, testcase, #testcase " succeeded",    \
+		#testcase " failed ", 0 }
 
-#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL }
+#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL, 0 }
 
 struct unit_test_suite {
 	const char *suite_name;
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	struct unit_test_case unit_test_cases[];
 };
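
To show the updated structures in use, a hedged example of a suite
definition under the new enabled field; the test function is
hypothetical, and initialising the flexible unit_test_cases[] array in a
static object follows the pattern already used by the DPDK test suites:

static int
test_example_pass(void)
{
	return TEST_SUCCESS;
}

static struct unit_test_suite example_testsuite = {
	.suite_name = "EXAMPLE",
	.setup = NULL,
	.teardown = NULL,
	.unit_test_cases = {
		TEST_CASE(test_example_pass),
		/* compiled in, but counted as skipped by the runner */
		TEST_CASE_DISABLED(test_example_pass),
		TEST_CASES_END()
	}
};

/* unit_test_suite_runner(&example_testsuite) would report one test
 * passed and one skipped, and return 0 since nothing failed. */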
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
new file mode 100644
index 0000000..fd5b7ec
--- /dev/null
+++ b/app/test/test_cryptodev.c
@@ -0,0 +1,1986 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mbuf_offload.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+static enum rte_cryptodev_type gbl_cryptodev_type;
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_ol_pool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *op;
+
+	struct rte_mbuf *obuf, *ibuf;
+
+	uint8_t *digest;
+};
+
+/*
+ * Forward declarations.
+ */
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_param);
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		/* only touch the mbuf once allocation has succeeded */
+		memset(m->buf_addr, 0, m->buf_len);
+
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+
+	return m;
+}
+
+#if HEX_DUMP
+static void
+hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
+{
+	rte_hexdump(f, title, rte_pktmbuf_mtod(m, const void *), m->data_len);
+}
+#endif
+
+static struct rte_mbuf *
+process_crypto_request(uint8_t dev_id, struct rte_mbuf *ibuf)
+{
+	struct rte_mbuf *obuf = NULL;
+#if HEX_DUMP
+	hexdump_mbuf_data(stdout, "Enqueued Packet", ibuf);
+#endif
+
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &ibuf, 1) != 1) {
+		printf("Error sending packet for encryption");
+		return NULL;
+	}
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &obuf, 1) == 0)
+		rte_pause();
+
+#if HEX_DUMP
+	if (obuf)
+		hexdump_mbuf_data(stdout, "Dequeued Packet", obuf);
+#endif
+
+	return obuf;
+}
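
The helper above is deliberately synchronous, enqueueing a single mbuf on
queue pair 0 and spinning until it is returned; a burst-oriented variant
along the same lines (a sketch, not part of this patch) would amortise
the per-call overhead:

/* Sketch: enqueue a burst of mbufs on queue pair 0, then busy-wait until
 * the same number of processed mbufs has been dequeued. */
static uint16_t
example_process_crypto_burst(uint8_t dev_id, struct rte_mbuf **bufs,
		uint16_t nb_bufs)
{
	uint16_t sent = 0, recvd = 0;

	while (sent < nb_bufs)
		sent += rte_cryptodev_enqueue_burst(dev_id, 0,
				&bufs[sent], nb_bufs - sent);

	while (recvd < nb_bufs)
		recvd += rte_cryptodev_dequeue_burst(dev_id, 0,
				&bufs[recvd], nb_bufs - recvd);

	return recvd;
}
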
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, dev_id = 0;
+	uint16_t qp_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	if (ts_params->mbuf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+				"CRYPTO_MBUFPOOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+				rte_socket_id());
+		if (ts_params->mbuf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"MBUF_OFFLOAD_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS * sizeof(struct rte_crypto_xform),
+			rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(
+				RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int ret = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(ret >= 0,
+					"Failed to create instance %u of"
+					" pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?");
+		return TEST_FAILED;
+	}
+
+	/* Create list of valid crypto devs */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_type)
+			ts_params->valid_devs[ts_params->valid_dev_count++] = i;
+	}
+
+	if (ts_params->valid_dev_count < 1)
+		return TEST_FAILED;
+
+	/* Set up all the qps on the first of the valid devices found */
+	for (i = 0; i < 1; i++) {
+		dev_id = ts_params->valid_devs[i];
+
+		rte_cryptodev_info_get(dev_id, &info);
+
+		/*
+		 * Since we can't free and re-allocate queue memory always set
+		 * the queues on this device up to max size first so enough
+		 * memory is allocated for any later re-configures needed by
+		 * other tests
+		 */
+
+		ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+		ts_params->conf.socket_id = SOCKET_ID_ANY;
+		ts_params->conf.session_mp.nb_objs = info.max_nb_sessions;
+
+		TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+				&ts_params->conf),
+				"Failed to configure cryptodev %u with %u qps",
+				dev_id, ts_params->conf.nb_queue_pairs);
+
+		ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+		for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+					dev_id, qp_id, &ts_params->qp_conf,
+					rte_cryptodev_socket_id(dev_id)),
+					"Failed to setup queue pair %u on "
+					"cryptodev %u",
+					qp_id, dev_id);
+		}
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_pool));
+	}
+
+
+	if (ts_params->mbuf_ol_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_ol_pool));
+	}
+
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	uint16_t qp_id;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	/*
+	 * Now reconfigure queues to size we actually want to use in this
+	 * test suite.
+	 */
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs ; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0], qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+	}
+
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	/* free crypto session structure */
+	if (ut_params->sess) {
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	/*
+	 * free mbufs - obuf and ibuf usually point at the same mbuf, so
+	 * make sure the underlying mbuf is only freed once
+	 */
+	if (ut_params->obuf) {
+		rte_pktmbuf_free(ut_params->obuf);
+		if (ut_params->obuf == ut_params->ibuf)
+			ut_params->ibuf = NULL;
+		ut_params->obuf = NULL;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = NULL;
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+
+	rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+static int
+test_device_configure_invalid_dev_id(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint16_t dev_id, num_devs = 0;
+
+	TEST_ASSERT((num_devs = rte_cryptodev_count()) >= 1,
+			"Need at least %d devices for test", 1);
+
+	/* valid dev_id values */
+	dev_id = ts_params->valid_devs[ts_params->valid_dev_count - 1];
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	/* invalid dev_id values */
+	dev_id = num_devs;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	dev_id = 0xff;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure:"
+			"invalid dev_num %u", dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_device_configure_invalid_queue_pair_ids(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	/* valid - one queue pairs */
+	ts_params->conf.nb_queue_pairs = 1;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* valid - max value queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - zero queue pairs */
+	ts_params->conf.nb_queue_pairs = 0;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value supported by field queue pairs */
+	ts_params->conf.nb_queue_pairs = UINT16_MAX;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value + 1 queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_queue_pair_descriptor_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info dev_info;
+	struct rte_cryptodev_qp_conf qp_conf = {
+		.nb_descriptors = MAX_NUM_OPS_INFLIGHT
+	};
+
+	uint16_t qp_id;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+
+	ts_params->conf.session_mp.nb_objs = dev_info.max_nb_sessions;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf), "Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+
+	/*
+	 * Test various ring sizes on this device. memzones can't be
+	 * freed so are re-used if ring is released and re-created.
+	 */
+	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights "
+				"%u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = (uint32_t)(MAX_NUM_OPS_INFLIGHT / 2);
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT; /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max supported + 2 */
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT + 2;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - near the max value of the field */
+	qp_conf.nb_descriptors = UINT32_MAX - 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - default supported + 1 */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT + 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* test invalid queue pair id */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;	/*valid */
+
+	qp_id = DEFAULT_NUM_QPS_PER_QAT_DEVICE;		/*invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	qp_id = 0xffff; /*invalid*/
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Plaintext data for tests ***** */
+
+const char catch_22_quote_1[] =
+		"There was only one catch and that was Catch-22, which "
+		"specified that a concern for one's safety in the face of "
+		"dangers that were real and immediate was the process of a "
+		"rational mind. Orr was crazy and could be grounded. All he "
+		"had to do was ask; and as soon as he did, he would no longer "
+		"be crazy and would have to fly more missions. Orr would be "
+		"crazy to fly more missions and sane if he didn't, but if he "
+		"was sane he had to fly them. If he flew them he was crazy "
+		"and didn't have to; but if he didn't want to he was sane and "
+		"had to. Yossarian was moved very deeply by the absolute "
+		"simplicity of this clause of Catch-22 and let out a "
+		"respectful whistle. \"That's some catch, that Catch-22\", he "
+		"observed. \"It's the best there is,\" Doc Daneeka agreed.";
+
+const char catch_22_quote[] =
+		"What a lousy earth! He wondered how many people were "
+		"destitute that same night even in his own prosperous country, "
+		"how many homes were shanties, how many husbands were drunk "
+		"and wives socked, and how many children were bullied, abused, "
+		"or abandoned. How many families hungered for food they could "
+		"not afford to buy? How many hearts were broken? How many "
+		"suicides would take place that same night, how many people "
+		"would go insane? How many cockroaches and landlords would "
+		"triumph? How many winners were losers, successes failures, "
+		"and rich men poor men? How many wise guys were stupid? How "
+		"many happy endings were unhappy endings? How many honest men "
+		"were liars, brave men cowards, loyal men traitors, how many "
+		"sainted men were corrupt, how many people in positions of "
+		"trust had sold their souls to bodyguards, how many had never "
+		"had souls? How many straight-and-narrow paths were crooked "
+		"paths? How many best families were worst families and how "
+		"many good people were bad people? When you added them all up "
+		"and then subtracted, you might be left with only the children, "
+		"and perhaps with Albert Einstein and an old violinist or "
+		"sculptor somewhere.";
+
+#define QUOTE_480_BYTES		(480)
+#define QUOTE_512_BYTES		(512)
+#define QUOTE_768_BYTES		(768)
+#define QUOTE_1024_BYTES	(1024)
+
+
+
+/* ***** SHA1 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA1	(DIGEST_BYTE_LENGTH_SHA1)
+
+static uint8_t hmac_sha1_key[] = {
+	0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+	0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+	0xDE, 0xF4, 0xDE, 0xAD };
+
+/* ***** SHA224 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA224	(DIGEST_BYTE_LENGTH_SHA224)
+
+
+/* ***** AES-CBC Cipher Tests ***** */
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+	0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+	0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A };
+
+static uint8_t aes_cbc_iv[] = {
+	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
+
+
+/* ***** AES-CBC / HMAC-SHA1 Hash Tests ***** */
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
+	0x8B, 0X4D, 0XDA, 0X1B, 0XCF, 0X04, 0XA0, 0X31,
+	0XB4, 0XBF, 0XBD, 0X68, 0X43, 0X20, 0X7E, 0X76,
+	0XB1, 0X96, 0X8B, 0XA2, 0X7C, 0XA2, 0X83, 0X9E,
+	0X39, 0X5A, 0X2F, 0X7E, 0X92, 0XB4, 0X48, 0X1A,
+	0X3F, 0X6B, 0X5D, 0XDF, 0X52, 0X85, 0X5F, 0X8E,
+	0X42, 0X3C, 0XFB, 0XE9, 0X1A, 0X24, 0XD6, 0X08,
+	0XDD, 0XFD, 0X16, 0XFB, 0XE9, 0X55, 0XEF, 0XF0,
+	0XA0, 0X8D, 0X13, 0XAB, 0X81, 0XC6, 0X90, 0X01,
+	0XB5, 0X18, 0X84, 0XB3, 0XF6, 0XE6, 0X11, 0X57,
+	0XD6, 0X71, 0XC6, 0X3C, 0X3F, 0X2F, 0X33, 0XEE,
+	0X24, 0X42, 0X6E, 0XAC, 0X0B, 0XCA, 0XEC, 0XF9,
+	0X84, 0XF8, 0X22, 0XAA, 0X60, 0XF0, 0X32, 0XA9,
+	0X75, 0X75, 0X3B, 0XCB, 0X70, 0X21, 0X0A, 0X8D,
+	0X0F, 0XE0, 0XC4, 0X78, 0X2B, 0XF8, 0X97, 0XE3,
+	0XE4, 0X26, 0X4B, 0X29, 0XDA, 0X88, 0XCD, 0X46,
+	0XEC, 0XAA, 0XF9, 0X7F, 0XF1, 0X15, 0XEA, 0XC3,
+	0X87, 0XE6, 0X31, 0XF2, 0XCF, 0XDE, 0X4D, 0X80,
+	0X70, 0X91, 0X7E, 0X0C, 0XF7, 0X26, 0X3A, 0X92,
+	0X4F, 0X18, 0X83, 0XC0, 0X8F, 0X59, 0X01, 0XA5,
+	0X88, 0XD1, 0XDB, 0X26, 0X71, 0X27, 0X16, 0XF5,
+	0XEE, 0X10, 0X82, 0XAC, 0X68, 0X26, 0X9B, 0XE2,
+	0X6D, 0XD8, 0X9A, 0X80, 0XDF, 0X04, 0X31, 0XD5,
+	0XF1, 0X35, 0X5C, 0X3B, 0XDD, 0X9A, 0X65, 0XBA,
+	0X58, 0X34, 0X85, 0X61, 0X1C, 0X42, 0X10, 0X76,
+	0X73, 0X02, 0X42, 0XC9, 0X23, 0X18, 0X8E, 0XB4,
+	0X6F, 0XB4, 0XA3, 0X54, 0X6E, 0X88, 0X3B, 0X62,
+	0X7C, 0X02, 0X8D, 0X4C, 0X9F, 0XC8, 0X45, 0XF4,
+	0XC9, 0XDE, 0X4F, 0XEB, 0X22, 0X83, 0X1B, 0XE4,
+	0X49, 0X37, 0XE4, 0XAD, 0XE7, 0XCD, 0X21, 0X54,
+	0XBC, 0X1C, 0XC2, 0X04, 0X97, 0XB4, 0X10, 0X61,
+	0XF0, 0XE4, 0XEF, 0X27, 0X63, 0X3A, 0XDA, 0X91,
+	0X41, 0X25, 0X62, 0X1C, 0X5C, 0XB6, 0X38, 0X4A,
+	0X88, 0X71, 0X59, 0X5A, 0X8D, 0XA0, 0X09, 0XAF,
+	0X72, 0X94, 0XD7, 0X79, 0X5C, 0X60, 0X7C, 0X8F,
+	0X4C, 0XF5, 0XD9, 0XA1, 0X39, 0X6D, 0X81, 0X28,
+	0XEF, 0X13, 0X28, 0XDF, 0XF5, 0X3E, 0XF7, 0X8E,
+	0X09, 0X9C, 0X78, 0X18, 0X79, 0XB8, 0X68, 0XD7,
+	0XA8, 0X29, 0X62, 0XAD, 0XDE, 0XE1, 0X61, 0X76,
+	0X1B, 0X05, 0X16, 0XCD, 0XBF, 0X02, 0X8E, 0XA6,
+	0X43, 0X6E, 0X92, 0X55, 0X4F, 0X60, 0X9C, 0X03,
+	0XB8, 0X4F, 0XA3, 0X02, 0XAC, 0XA8, 0XA7, 0X0C,
+	0X1E, 0XB5, 0X6B, 0XF8, 0XC8, 0X4D, 0XDE, 0XD2,
+	0XB0, 0X29, 0X6E, 0X40, 0XE6, 0XD6, 0XC9, 0XE6,
+	0XB9, 0X0F, 0XB6, 0X63, 0XF5, 0XAA, 0X2B, 0X96,
+	0XA7, 0X16, 0XAC, 0X4E, 0X0A, 0X33, 0X1C, 0XA6,
+	0XE6, 0XBD, 0X8A, 0XCF, 0X40, 0XA9, 0XB2, 0XFA,
+	0X63, 0X27, 0XFD, 0X9B, 0XD9, 0XFC, 0XD5, 0X87,
+	0X8D, 0X4C, 0XB6, 0XA4, 0XCB, 0XE7, 0X74, 0X55,
+	0XF4, 0XFB, 0X41, 0X25, 0XB5, 0X4B, 0X0A, 0X1B,
+	0XB1, 0XD6, 0XB7, 0XD9, 0X47, 0X2A, 0XC3, 0X98,
+	0X6A, 0XC4, 0X03, 0X73, 0X1F, 0X93, 0X6E, 0X53,
+	0X19, 0X25, 0X64, 0X15, 0X83, 0XF9, 0X73, 0X2A,
+	0X74, 0XB4, 0X93, 0X69, 0XC4, 0X72, 0XFC, 0X26,
+	0XA2, 0X9F, 0X43, 0X45, 0XDD, 0XB9, 0XEF, 0X36,
+	0XC8, 0X3A, 0XCD, 0X99, 0X9B, 0X54, 0X1A, 0X36,
+	0XC1, 0X59, 0XF8, 0X98, 0XA8, 0XCC, 0X28, 0X0D,
+	0X73, 0X4C, 0XEE, 0X98, 0XCB, 0X7C, 0X58, 0X7E,
+	0X20, 0X75, 0X1E, 0XB7, 0XC9, 0XF8, 0XF2, 0X0E,
+	0X63, 0X9E, 0X05, 0X78, 0X1A, 0XB6, 0XA8, 0X7A,
+	0XF9, 0X98, 0X6A, 0XA6, 0X46, 0X84, 0X2E, 0XF6,
+	0X4B, 0XDC, 0X9B, 0X8F, 0X9B, 0X8F, 0XEE, 0XB4,
+	0XAA, 0X3F, 0XEE, 0XC0, 0X37, 0X27, 0X76, 0XC7,
+	0X95, 0XBB, 0X26, 0X74, 0X69, 0X12, 0X7F, 0XF1,
+	0XBB, 0XFF, 0XAE, 0XB5, 0X99, 0X6E, 0XCB, 0X0C
+};
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0X4f, 0X88, 0X1b, 0Xb6, 0X8f, 0Xd8, 0X60,
+	0X42, 0X1a, 0X7d, 0X3d, 0Xf5, 0X82, 0X80, 0Xf1,
+	0X18, 0X8c, 0X1d, 0X32 };
+
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	TEST_ASSERT_NOT_NULL(rte_pktmbuf_offload_alloc_crypto_xforms(
+			ut_params->ol, 2),
+			"failed to allocate space for crypto transforms");
+
+	/* Set crypto operation data parameters */
+	ut_params->op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
+
+	/* cipher parameters */
+	ut_params->op->xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->op->xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->op->xform->cipher.key.data = aes_cbc_key;
+	ut_params->op->xform->cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* hash parameters */
+	ut_params->op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+
+	ut_params->op->xform->next->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->op->xform->next->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->op->xform->next->auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->op->xform->next->auth.key.data = hmac_sha1_key;
+	ut_params->op->xform->next->auth.digest_length =
+			DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
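
Both tests above build the same logical transform chain; for the
session-based path the chain is simply linked structs, which can equally
be built by hand on the application side. A hedged sketch of a
cipher-then-auth chain (field values omitted; the order of the chain, not
of the declarations, is what matters):

/* Sketch: a cipher-then-auth chain built manually */
struct rte_crypto_xform auth_xform = {
	.type = RTE_CRYPTO_XFORM_AUTH,
	.next = NULL,
};
struct rte_crypto_xform cipher_xform = {
	.type = RTE_CRYPTO_XFORM_CIPHER,
	.next = &auth_xform,	/* encrypt, then generate the digest */
};
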
+
+static int
+test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			DIGEST_BYTE_LENGTH_SHA1);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** AES-CBC / HMAC-SHA256 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+static uint8_t hmac_sha256_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest[] = {
+	0xc8, 0x57, 0x57, 0x31, 0x03, 0xe0, 0x03, 0x55,
+	0x07, 0xc8, 0x9e, 0x7f, 0x48, 0x9a, 0x61, 0x9a,
+	0x68, 0xee, 0x03, 0x0e, 0x71, 0x75, 0xc7, 0xf4,
+	0x2e, 0x45, 0x26, 0x32, 0x7c, 0x12, 0x15, 0x15 };
+
+static int
+test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA256 :
+					DIGEST_BYTE_LENGTH_SHA256,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-SHA512 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA512  (DIGEST_BYTE_LENGTH_SHA512)
+
+static uint8_t hmac_sha512_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x65, 0x1C, 0x42, 0x50, 0x76,
+	0x9a, 0xaf, 0x88, 0x1b, 0xb6, 0x8f, 0xf8, 0x60,
+	0xa2, 0x5a, 0x7f, 0x3f, 0xf4, 0x72, 0x70, 0xf1,
+	0xF5, 0x35, 0x4C, 0x3B, 0xDD, 0x90, 0x65, 0xB0,
+	0x47, 0x3a, 0x75, 0x61, 0x5C, 0xa2, 0x10, 0x76,
+	0x9a, 0xaf, 0x77, 0x5b, 0xb6, 0x7f, 0xf7, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest[] = {
+	0x5D, 0x54, 0x66, 0xC1, 0x6E, 0xBC, 0x04, 0xB8,
+	0x46, 0xB8, 0x08, 0x6E, 0xE0, 0xF0, 0x43, 0x48,
+	0x37, 0x96, 0x9C, 0xC6, 0x9C, 0xC2, 0x1E, 0xE8,
+	0xF2, 0x0C, 0x0B, 0xEF, 0x86, 0xA2, 0xE3, 0x70,
+	0x95, 0xC8, 0xB3, 0x06, 0x47, 0xA9, 0x90, 0xE8,
+	0xA0, 0xC6, 0x72, 0x69, 0x05, 0xC0, 0x0D, 0x0E,
+	0x21, 0x96, 0x65, 0x93, 0x74, 0x43, 0x2A, 0x1D,
+	0x2E, 0xBF, 0xC2, 0xC2, 0xEE, 0xCC, 0x2F, 0x0A };
+
+static int
+test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 :
+					DIGEST_BYTE_LENGTH_SHA512,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT(test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+			ut_params) == TEST_SUCCESS,
+			"Failed to create session params");
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
+			ut_params, ts_params);
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params)
+{
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
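+	/*
+	 * For the decrypt direction the chain is rooted at the auth xform
+	 * (auth.next points back at the cipher xform), so the HMAC is
+	 * verified over the ciphertext before it is decrypted.
+	 */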
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params)
+{
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / AES-XCBC-MAC Chain Tests ***** */
+
+static uint8_t aes_cbc_hmac_aes_xcbc_key[] = {
+	0x87, 0x61, 0x54, 0x53, 0xC4, 0x6D, 0xDD, 0x51,
+	0xE1, 0x9F, 0x86, 0x64, 0x39, 0x0A, 0xE6, 0x59
+	};
+
+static const uint8_t catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest[] = {
+	0xE0, 0xAC, 0x9A, 0xC4, 0x22, 0x64, 0x35, 0x89,
+	0x77, 0x1D, 0x8B, 0x75
+	};
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup AES-XCBC-MAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)
+		rte_pktmbuf_prepend(ut_params->ibuf,
+				CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+		QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup AES-XCBC-MAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Crypto Device Statistics Tests ***** */
+
+static int
+test_stats(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_stats stats;
+	struct rte_cryptodev *dev;
+	cryptodev_stats_get_t temp_pfn;
+
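+	/*
+	 * Error-path checks: an out-of-range device id must return -ENODEV,
+	 * a NULL stats pointer must be rejected, and a device whose stats_get
+	 * op has been temporarily cleared must return -ENOTSUP.
+	 */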
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0] + 600,
+			&stats) == -ENODEV),
+		"rte_cryptodev_stats_get for invalid device id failed");
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			NULL) != 0),
+		"rte_cryptodev_stats_get for invalid stats pointer failed");
+	dev = &rte_cryptodevs[ts_params->valid_devs[0]];
+	temp_pfn = dev->dev_ops->stats_get;
+	dev->dev_ops->stats_get = (cryptodev_stats_get_t)0;
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats)
+			== -ENOTSUP),
+		"rte_cryptodev_stats_get for unsupported stats_get op failed");
+	dev->dev_ops->stats_get = temp_pfn;
+
+	/* Test expected values */
+	ut_setup();
+	test_AES_CBC_HMAC_SHA1_encrypt_digest();
+	ut_teardown();
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+	TEST_ASSERT((stats.enqueue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueue error stat");
+	TEST_ASSERT((stats.dequeue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeue error stat");
+
+	/* Invalid device id: reset should be ignored and stats left intact */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0] + 300);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	/* check that a valid reset clears stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_multi_session(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	struct rte_cryptodev_info dev_info;
+	struct rte_cryptodev_session **sessions;
+
+	uint16_t i;
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+
+	sessions = rte_malloc(NULL, sizeof(struct rte_cryptodev_session *) *
+			(dev_info.max_nb_sessions + 1), 0);
+	TEST_ASSERT_NOT_NULL(sessions, "Failed to allocate session array");
+
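+	/* The array is sized with one extra pointer slot to hold the session
+	 * create attempt that is expected to fail once max_nb_sessions has
+	 * been reached. */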
+	/* Create multiple crypto sessions */
+	for (i = 0; i < dev_info.max_nb_sessions; i++) {
+		sessions[i] = rte_cryptodev_session_create(
+				ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+		TEST_ASSERT_NOT_NULL(sessions[i],
+				"Session creation failed at session number %u",
+				i);
+
+		/* Attempt to send a request on each session */
+		TEST_ASSERT_SUCCESS(test_AES_CBC_HMAC_SHA512_decrypt_perform(
+				sessions[i], ut_params, ts_params),
+				"Failed to perform decrypt on request "
+				"number %u.", i);
+	}
+
+	/* Next session create should fail */
+	sessions[i] = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NULL(sessions[i],
+			"Session creation succeeded unexpectedly!");
+
+	for (i = 0; i < dev_info.max_nb_sessions; i++)
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				sessions[i]);
+
+	rte_free(sessions);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_not_in_place_crypto(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_mbuf *dst_m = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create crypto session */
+
+	ut_params->sess = rte_cryptodev_session_create(
+			ts_params->valid_devs[0], &ut_params->auth_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
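+	/*
+	 * Out-of-place operation: the processed data is written to a
+	 * separate destination mbuf (dst.m) instead of back into the source.
+	 */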
+	ut_params->op->dst.m = dst_m;
+	ut_params->op->dst.offset = 0;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->op->dst.m, char *),
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite cryptodev_qat_testsuite  = {
+	.suite_name = "Crypto QAT Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_queue_pair_ids),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_queue_pair_descriptor_setup),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_stats),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite cryptodev_aesni_mb_testsuite  = {
+	.suite_name = "Crypto Device AESNI MB Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_not_in_place_crypto),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_QAT_PMD;
+	return unit_test_suite_runner(&cryptodev_qat_testsuite);
+}
+
+static struct test_command cryptodev_qat_cmd = {
+	.command = "cryptodev_qat_autotest",
+	.callback = test_cryptodev_qat,
+};
+
+static int
+test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_aesni_mb_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_cmd = {
+	.command = "cryptodev_aesni_mb_autotest",
+	.callback = test_cryptodev_aesni_mb,
+};
+
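+/*
+ * The autotest commands registered below are run from the test
+ * application's interactive prompt, e.g. "RTE>> cryptodev_qat_autotest".
+ */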
+REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
new file mode 100644
index 0000000..034393e
--- /dev/null
+++ b/app/test/test_cryptodev.h
@@ -0,0 +1,68 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef TEST_CRYPTODEV_H_
+#define TEST_CRYPTODEV_H_
+
+#define HEX_DUMP 0
+
+#define FALSE                           0
+#define TRUE                            1
+
+#define MAX_NUM_OPS_INFLIGHT            (4096)
+#define MIN_NUM_OPS_INFLIGHT            (128)
+#define DEFAULT_NUM_OPS_INFLIGHT        (128)
+
+#define MAX_NUM_QPS_PER_QAT_DEVICE      (2)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE  (2)
+#define DEFAULT_BURST_SIZE              (64)
+#define DEFAULT_NUM_XFORMS              (2)
+#define NUM_MBUFS                       (8191)
+#define MBUF_CACHE_SIZE                 (250)
+#define MBUF_SIZE   (2048 + DIGEST_BYTE_LENGTH_SHA512 + \
+				sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+
+#define BYTE_LENGTH(x)				((x) / 8)
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_MD5			(BYTE_LENGTH(128))
+#define DIGEST_BYTE_LENGTH_SHA1			(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA224		(BYTE_LENGTH(224))
+#define DIGEST_BYTE_LENGTH_SHA256		(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA384		(BYTE_LENGTH(384))
+#define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
+#define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
+#define AES_XCBC_MAC_KEY_SZ			(16)
+
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512		(32)
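+/* Truncated digest sizes, matching the common IPsec truncated-HMAC output
+ * lengths (e.g. HMAC-SHA-1-96); the AES-NI MB PMD produces digests of these
+ * truncated lengths (see the device-type checks in test_cryptodev.c). */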
+
+#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
new file mode 100644
index 0000000..f0cca8b
--- /dev/null
+++ b/app/test/test_cryptodev_perf.c
@@ -0,0 +1,2062 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_hexdump.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define PERF_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_REQS_TO_SUBMIT	(10000000)
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_mp;
+	struct rte_mempool *mbuf_ol_pool;
+
+	uint16_t nb_queue_pairs;
+
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+	uint8_t dev_id;
+};
+
+#define MAX_NUM_OF_OPS_PER_UT	(128)
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op *op;
+	struct rte_mbuf_offload *ol;
+
+	struct rte_mbuf *obuf[MAX_NUM_OF_OPS_PER_UT];
+	struct rte_mbuf *ibuf[MAX_NUM_OF_OPS_PER_UT];
+
+	uint8_t *digest;
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
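+	/* Round the copy length down to a multiple of blocksize; a blocksize
+	 * of 0 means the full length is copied. */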
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+	return m;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static enum rte_cryptodev_type gbl_cryptodev_preftest_devtype;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, valid_dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_mp = rte_mempool_lookup("CRYPTO_PERF_MBUFPOOL");
+	if (ts_params->mbuf_mp == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_mp = rte_mempool_create("CRYPTO_PERF_MBUFPOOL",
+			NUM_MBUFS, MBUF_SIZE, MBUF_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
+			rte_socket_id(), 0);
+		if (ts_params->mbuf_mp == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create("CRYPTO_OP_POOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE,
+				DEFAULT_NUM_XFORMS *
+				sizeof(struct rte_crypto_xform),
+				rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?");
+		return TEST_FAILED;
+	}
+
+	/* Search for the first valid device of the requested type */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_preftest_devtype) {
+			ts_params->dev_id = i;
+			valid_dev_id = 1;
+			break;
+		}
+	}
+
+	if (!valid_dev_id)
+		return TEST_FAILED;
+
+	/*
+	 * Since we can't free and re-allocate queue memory, always set the
+	 * queues on the selected device up to the maximum size first so that
+	 * enough memory is allocated for any later re-configuration needed by
+	 * other tests.
+	 */
+
+	rte_cryptodev_info_get(ts_params->dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = info.max_nb_sessions;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->dev_id);
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	/* Now reconfigure the queues to the size we actually want to use in
+	 * this test suite. */
+	ts_params->qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	rte_cryptodev_stats_reset(ts_params->dev_id);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->dev_id),
+			"Failed to start cryptodev %u",
+			ts_params->dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	unsigned i;
+
+	/* free crypto session structure */
+	if (ut_params->sess)
+		rte_cryptodev_session_free(ts_params->dev_id,
+				ut_params->sess);
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	for (i = 0; i < MAX_NUM_OF_OPS_PER_UT; i++) {
+		if (ut_params->obuf[i])
+			rte_pktmbuf_free(ut_params->obuf[i]);
+		else if (ut_params->ibuf[i])
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+	}
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+
+	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->dev_id);
+}
+
+const char plaintext_quote[] =
+		"THE COUNT OF MONTE CRISTO by Alexandre Dumas, Pere Chapter 1. "
+		"Marseilles--The Arrival. On the 24th of February, 1815, the "
+		"look-out at Notre-Dame de la Garde signalled the three-master,"
+		" the Pharaon from Smyrna, Trieste, and Naples. As usual, a "
+		"pilot put off immediately, and rounding the Chateau d'If, got "
+		"on board the vessel between Cape Morgion and Rion island. "
+		"Immediately, and according to custom, the ramparts of Fort "
+		"Saint-Jean were covered with spectators; it is always an event "
+		"at Marseilles for a ship to come into port, especially when "
+		"this ship, like the Pharaon, has been built, rigged, and laden"
+		" at the old Phocee docks, and belongs to an owner of the city."
+		" The ship drew on and had safely passed the strait, which some"
+		" volcanic shock has made between the Calasareigne and Jaros "
+		"islands; had doubled Pomegue, and approached the harbor under"
+		" topsails, jib, and spanker, but so slowly and sedately that"
+		" the idlers, with that instinct which is the forerunner of "
+		"evil, asked one another what misfortune could have happened "
+		"on board. However, those experienced in navigation saw plainly"
+		" that if any accident had occurred, it was not to the vessel "
+		"herself, for she bore down with all the evidence of being "
+		"skilfully handled, the anchor a-cockbill, the jib-boom guys "
+		"already eased off, and standing by the side of the pilot, who"
+		" was steering the Pharaon towards the narrow entrance of the"
+		" inner port, was a young man, who, with activity and vigilant"
+		" eye, watched every motion of the ship, and repeated each "
+		"direction of the pilot. The vague disquietude which prevailed "
+		"among the spectators had so much affected one of the crowd "
+		"that he did not await the arrival of the vessel in harbor, but"
+		" jumping into a small skiff, desired to be pulled alongside "
+		"the Pharaon, which he reached as she rounded into La Reserve "
+		"basin. When the young man on board saw this person approach, "
+		"he left his station by the pilot, and, hat in hand, leaned "
+		"over the ship's bulwarks. He was a fine, tall, slim young "
+		"fellow of eighteen or twenty, with black eyes, and hair as "
+		"dark as a raven's wing; and his whole appearance bespoke that "
+		"calmness and resolution peculiar to men accustomed from their "
+		"cradle to contend with danger. \"Ah, is it you, Dantes?\" "
+		"cried the man in the skiff. \"What's the matter? and why have "
+		"you such an air of sadness aboard?\" \"A great misfortune, M. "
+		"Morrel,\" replied the young man,--\"a great misfortune, for me"
+		" especially! Off Civita Vecchia we lost our brave Captain "
+		"Leclere.\" \"And the cargo?\" inquired the owner, eagerly. "
+		"\"Is all safe, M. Morrel; and I think you will be satisfied on"
+		" that head. But poor Captain Leclere--\" \"What happened to "
+		"him?\" asked the owner, with an air of considerable "
+		"resignation. \"What happened to the worthy captain?\" \"He "
+		"died.\" \"Fell into the sea?\" \"No, sir, he died of "
+		"brain-fever in dreadful agony.\" Then turning to the crew, "
+		"he said, \"Bear a hand there, to take in sail!\" All hands "
+		"obeyed, and at once the eight or ten seamen who composed the "
+		"crew, sprang to their respective stations at the spanker "
+		"brails and outhaul, topsail sheets and halyards, the jib "
+		"downhaul, and the topsail clewlines and buntlines. The young "
+		"sailor gave a look to see that his orders were promptly and "
+		"accurately obeyed, and then turned again to the owner. \"And "
+		"how did this misfortune occur?\" inquired the latter, resuming"
+		" the interrupted conversation. \"Alas, sir, in the most "
+		"unexpected manner. After a long talk with the harbor-master, "
+		"Captain Leclere left Naples greatly disturbed in mind. In "
+		"twenty-four hours he was attacked by a fever, and died three "
+		"days afterwards. We performed the usual burial service, and he"
+		" is at his rest, sewn up in his hammock with a thirty-six "
+		"pound shot at his head and his heels, off El Giglio island. "
+		"We bring to his widow his sword and cross of honor. It was "
+		"worth while, truly,\" added the young man with a melancholy "
+		"smile, \"to make war against the English for ten years, and "
+		"to die in his bed at last, like everybody else.";
+
+#define QUOTE_LEN_64B		(64)
+#define QUOTE_LEN_128B		(128)
+#define QUOTE_LEN_256B		(256)
+#define QUOTE_LEN_512B		(512)
+#define QUOTE_LEN_768B		(768)
+#define QUOTE_LEN_1024B		(1024)
+#define QUOTE_LEN_1280B		(1280)
+#define QUOTE_LEN_1536B		(1536)
+#define QUOTE_LEN_1792B		(1792)
+#define QUOTE_LEN_2048B		(2048)
+
+/* ***** AES-CBC / HMAC-SHA256 Performance Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
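+/* The AES-CBC IV is one AES block (16 bytes); for AES-128 this happens to
+ * equal the key length, hence the alias above. */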
+
+static uint8_t aes_cbc_key[] = {
+		0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+		0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA };
+
+static uint8_t aes_cbc_iv[] = {
+		0xf5, 0xd3, 0x89, 0x0f, 0x47, 0x00, 0xcb, 0x52,
+		0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1 };
+
+static uint8_t hmac_sha256_key[] = {
+		0xff, 0xcb, 0x37, 0x30, 0x1d, 0x4a, 0xc2, 0x41,
+		0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A,
+		0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+		0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+/* Cipher text output */
+
+static const uint8_t AES_CBC_ciphertext_64B[] = {
+		0x05, 0x15, 0x77, 0x32, 0xc9, 0x66, 0x91, 0x50,
+		0x93, 0x9f, 0xbb, 0x4e, 0x2e, 0x5a, 0x02, 0xd0,
+		0x2d, 0x9d, 0x31, 0x5d, 0xc8, 0x9e, 0x86, 0x36,
+		0x54, 0x5c, 0x50, 0xe8, 0x75, 0x54, 0x74, 0x5e,
+		0xd5, 0xa2, 0x84, 0x21, 0x2d, 0xc5, 0xf8, 0x1c,
+		0x55, 0x1a, 0xba, 0x91, 0xce, 0xb5, 0xa3, 0x1e,
+		0x31, 0xbf, 0xe9, 0xa1, 0x97, 0x5c, 0x2b, 0xd6,
+		0x57, 0xa5, 0x9f, 0xab, 0xbd, 0xb0, 0x9b, 0x9c
+};
+
+static const uint8_t AES_CBC_ciphertext_128B[] = {
+		0x79, 0x92, 0x65, 0xc8, 0xfb, 0x0a, 0xc7, 0xc4,
+		0x9b, 0x3b, 0xbe, 0x69, 0x7f, 0x7c, 0xf4, 0x4e,
+		0xa5, 0x0d, 0xf6, 0x33, 0xc4, 0xdf, 0xf3, 0x0d,
+		0xdb, 0xb9, 0x68, 0x34, 0xb0, 0x0d, 0xbd, 0xb9,
+		0xa7, 0xf3, 0x86, 0x50, 0x2a, 0xbe, 0x50, 0x5d,
+		0xb3, 0xbe, 0x72, 0xf9, 0x02, 0xb1, 0x69, 0x0b,
+		0x8c, 0x96, 0x4c, 0x3c, 0x0c, 0x1e, 0x76, 0xe5,
+		0x7e, 0x75, 0xdd, 0xd0, 0xa9, 0x75, 0x00, 0x13,
+		0x6b, 0x1e, 0xc0, 0xad, 0xfc, 0x03, 0xb5, 0x99,
+		0xdc, 0x37, 0x35, 0xfc, 0x16, 0x34, 0xfd, 0xb4,
+		0xea, 0x1e, 0xb6, 0x51, 0xdf, 0xab, 0x87, 0xd6,
+		0x87, 0x41, 0xfa, 0x1c, 0xc6, 0x78, 0xa6, 0x3c,
+		0x1d, 0x76, 0xfe, 0xff, 0x65, 0xfc, 0x63, 0x1e,
+		0x1f, 0xe2, 0x7c, 0x9b, 0xa2, 0x72, 0xc3, 0x34,
+		0x23, 0xdf, 0x01, 0xf0, 0xfd, 0x02, 0x8b, 0x97,
+		0x00, 0x2b, 0x97, 0x4e, 0xab, 0x98, 0x21, 0x3c
+};
+
+static const uint8_t AES_CBC_ciphertext_256B[] = {
+		0xc7, 0x71, 0x2b, 0xed, 0x2c, 0x97, 0x59, 0xfa,
+		0xcf, 0x5a, 0xb9, 0x31, 0x92, 0xe0, 0xc9, 0x92,
+		0xc0, 0x2d, 0xd5, 0x9c, 0x84, 0xbf, 0x70, 0x36,
+		0x13, 0x48, 0xe0, 0xb1, 0xbf, 0x6c, 0xcd, 0x91,
+		0xa0, 0xc3, 0x57, 0x6c, 0x3f, 0x0e, 0x34, 0x41,
+		0xe7, 0x9c, 0xc0, 0xec, 0x18, 0x0c, 0x05, 0x52,
+		0x78, 0xe2, 0x3c, 0x6e, 0xdf, 0xa5, 0x49, 0xc7,
+		0xf2, 0x55, 0x00, 0x8f, 0x65, 0x6d, 0x4b, 0xd0,
+		0xcb, 0xd4, 0xd2, 0x0b, 0xea, 0xf4, 0xb0, 0x85,
+		0x61, 0x9e, 0x36, 0xc0, 0x71, 0xb7, 0x80, 0xad,
+		0x40, 0x78, 0xb4, 0x70, 0x2b, 0xe8, 0x80, 0xc5,
+		0x19, 0x35, 0x96, 0x55, 0x3b, 0x40, 0x03, 0xbb,
+		0x9f, 0xa6, 0xc2, 0x82, 0x92, 0x04, 0xc3, 0xa6,
+		0x96, 0xc4, 0x7f, 0x4c, 0x3e, 0x3c, 0x79, 0x82,
+		0x88, 0x8b, 0x3f, 0x8b, 0xc5, 0x9f, 0x44, 0xbe,
+		0x71, 0xe7, 0x09, 0xa2, 0x40, 0xa2, 0x23, 0x4e,
+		0x9f, 0x31, 0xab, 0x6f, 0xdf, 0x59, 0x40, 0xe1,
+		0x12, 0x15, 0x55, 0x4b, 0xea, 0x3f, 0xa1, 0x41,
+		0x4f, 0xaf, 0xcd, 0x27, 0x2a, 0x61, 0xa1, 0x9e,
+		0x82, 0x30, 0x05, 0x05, 0x55, 0xce, 0x99, 0xd3,
+		0x8f, 0x3f, 0x86, 0x79, 0xdc, 0x9f, 0x33, 0x07,
+		0x75, 0x26, 0xc8, 0x72, 0x81, 0x0f, 0x9b, 0xf7,
+		0xb1, 0xfb, 0xd3, 0x91, 0x36, 0x08, 0xab, 0x26,
+		0x70, 0x53, 0x0c, 0x99, 0xfd, 0xa9, 0x07, 0xb4,
+		0xe9, 0xce, 0xc1, 0xd6, 0xd2, 0x2c, 0x71, 0x80,
+		0xec, 0x59, 0x61, 0x0b, 0x24, 0xf0, 0x6d, 0x33,
+		0x73, 0x45, 0x6e, 0x80, 0x03, 0x45, 0xf2, 0x76,
+		0xa5, 0x8a, 0xc9, 0xcf, 0xaf, 0x4a, 0xed, 0x35,
+		0xc0, 0x97, 0x52, 0xc5, 0x00, 0xdf, 0xef, 0xc7,
+		0x9f, 0xf2, 0xe8, 0x15, 0x3e, 0xb3, 0x30, 0xe7,
+		0x00, 0xd0, 0x4e, 0xeb, 0x79, 0xf6, 0xf6, 0xcf,
+		0xf0, 0xe7, 0x61, 0xd5, 0x3d, 0x6a, 0x73, 0x9d
+};
+
+static const uint8_t AES_CBC_ciphertext_512B[] = {
+		0xb4, 0xc6, 0xc6, 0x5f, 0x7e, 0xca, 0x05, 0x70,
+		0x21, 0x7b, 0x92, 0x9e, 0x23, 0xe7, 0x92, 0xb8,
+		0x27, 0x3d, 0x20, 0x29, 0x57, 0xfa, 0x1f, 0x26,
+		0x0a, 0x04, 0x34, 0xa6, 0xf2, 0xdc, 0x44, 0xb6,
+		0x43, 0x40, 0x62, 0xde, 0x0c, 0xde, 0x1c, 0x30,
+		0x43, 0x85, 0x0b, 0xe8, 0x93, 0x1f, 0xa1, 0x2a,
+		0x8a, 0x27, 0x35, 0x39, 0x14, 0x9f, 0x37, 0x64,
+		0x59, 0xb5, 0x0e, 0x96, 0x82, 0x5d, 0x63, 0x45,
+		0xd6, 0x93, 0x89, 0x46, 0xe4, 0x71, 0x31, 0xeb,
+		0x0e, 0xd1, 0x7b, 0xda, 0x90, 0xb5, 0x81, 0xac,
+		0x76, 0x54, 0x54, 0x85, 0x0b, 0xa9, 0x46, 0x9c,
+		0xf0, 0xfd, 0xde, 0x5d, 0xa8, 0xe3, 0xee, 0xe9,
+		0xf4, 0x9d, 0x34, 0x76, 0x39, 0xe7, 0xc3, 0x4a,
+		0x84, 0x38, 0x92, 0x61, 0xf1, 0x12, 0x9f, 0x05,
+		0xda, 0xdb, 0xc1, 0xd4, 0xb0, 0xa0, 0x27, 0x19,
+		0xa0, 0x56, 0x5d, 0x9b, 0xcc, 0x47, 0x7c, 0x15,
+		0x1d, 0x52, 0x66, 0xd5, 0xff, 0xef, 0x12, 0x23,
+		0x86, 0xe2, 0xee, 0x81, 0x2c, 0x3d, 0x7d, 0x28,
+		0xd5, 0x42, 0xdf, 0xdb, 0x75, 0x1c, 0xeb, 0xdf,
+		0x13, 0x23, 0xd5, 0x17, 0x89, 0xea, 0xd7, 0x01,
+		0xff, 0x57, 0x6a, 0x44, 0x61, 0xf4, 0xea, 0xbe,
+		0x97, 0x9b, 0xc2, 0xb1, 0x9c, 0x5d, 0xff, 0x4f,
+		0x73, 0x2d, 0x3f, 0x57, 0x28, 0x38, 0xbf, 0x3d,
+		0x9f, 0xda, 0x49, 0x55, 0x8f, 0xb2, 0x77, 0xec,
+		0x0f, 0xbc, 0xce, 0xb8, 0xc6, 0xe1, 0x03, 0xed,
+		0x35, 0x9c, 0xf2, 0x4d, 0xa4, 0x29, 0x6c, 0xd6,
+		0x6e, 0x05, 0x53, 0x46, 0xc1, 0x41, 0x09, 0x36,
+		0x0b, 0x7d, 0xf4, 0x9e, 0x0f, 0xba, 0x86, 0x33,
+		0xdd, 0xf1, 0xa7, 0xf7, 0xd5, 0x29, 0xa8, 0xa7,
+		0x4d, 0xce, 0x0c, 0xf5, 0xb4, 0x6c, 0xd8, 0x27,
+		0xb0, 0x87, 0x2a, 0x6f, 0x7f, 0x3f, 0x8f, 0xc3,
+		0xe2, 0x3e, 0x94, 0xcf, 0x61, 0x4a, 0x09, 0x3d,
+		0xf9, 0x55, 0x19, 0x31, 0xf2, 0xd2, 0x4a, 0x3e,
+		0xc1, 0xf5, 0xed, 0x7c, 0x45, 0xb0, 0x0c, 0x7b,
+		0xdd, 0xa6, 0x0a, 0x26, 0x66, 0xec, 0x85, 0x49,
+		0x00, 0x38, 0x05, 0x7c, 0x9c, 0x1c, 0x92, 0xf5,
+		0xf7, 0xdb, 0x5d, 0xbd, 0x61, 0x0c, 0xc9, 0xaf,
+		0xfd, 0x57, 0x3f, 0xee, 0x2b, 0xad, 0x73, 0xef,
+		0xa3, 0xc1, 0x66, 0x26, 0x44, 0x5e, 0xf9, 0x12,
+		0x86, 0x66, 0xa9, 0x61, 0x75, 0xa1, 0xbc, 0x40,
+		0x7f, 0xa8, 0x08, 0x02, 0xc0, 0x76, 0x0e, 0x76,
+		0xb3, 0x26, 0x3d, 0x1c, 0x40, 0x65, 0xe4, 0x18,
+		0x0f, 0x62, 0x17, 0x8f, 0x1e, 0x61, 0xb8, 0x08,
+		0x83, 0x54, 0x42, 0x11, 0x03, 0x30, 0x8e, 0xb7,
+		0xc1, 0x9c, 0xec, 0x69, 0x52, 0x95, 0xfb, 0x7b,
+		0x1a, 0x0c, 0x20, 0x24, 0xf7, 0xb8, 0x38, 0x0c,
+		0xb8, 0x7b, 0xb6, 0x69, 0x70, 0xd0, 0x61, 0xb9,
+		0x70, 0x06, 0xc2, 0x5b, 0x20, 0x47, 0xf7, 0xd9,
+		0x32, 0xc2, 0xf2, 0x90, 0xb6, 0x4d, 0xcd, 0x3c,
+		0x6d, 0x74, 0xea, 0x82, 0x35, 0x1b, 0x08, 0x44,
+		0xba, 0xb7, 0x33, 0x82, 0x33, 0x27, 0x54, 0x77,
+		0x6e, 0x58, 0xfe, 0x46, 0x5a, 0xb4, 0x88, 0x53,
+		0x8d, 0x9b, 0xb1, 0xab, 0xdf, 0x04, 0xe1, 0xfb,
+		0xd7, 0x1e, 0xd7, 0x38, 0x64, 0x54, 0xba, 0xb0,
+		0x6c, 0x84, 0x7a, 0x0f, 0xa7, 0x80, 0x6b, 0x86,
+		0xd9, 0xc9, 0xc6, 0x31, 0x95, 0xfa, 0x8a, 0x2c,
+		0x14, 0xe1, 0x85, 0x66, 0x27, 0xfd, 0x63, 0x3e,
+		0xf0, 0xfa, 0x81, 0xc9, 0x89, 0x4f, 0xe2, 0x6a,
+		0x8c, 0x17, 0xb5, 0xc7, 0x9f, 0x5d, 0x3f, 0x6b,
+		0x3f, 0xcd, 0x13, 0x7a, 0x3c, 0xe6, 0x4e, 0xfa,
+		0x7a, 0x10, 0xb8, 0x7c, 0x40, 0xec, 0x93, 0x11,
+		0x1f, 0xd0, 0x9e, 0xc3, 0x56, 0xb9, 0xf5, 0x21,
+		0x18, 0x41, 0x31, 0xea, 0x01, 0x8d, 0xea, 0x1c,
+		0x95, 0x5e, 0x56, 0x33, 0xbc, 0x7a, 0x3f, 0x6f
+};
+
+static const uint8_t AES_CBC_ciphertext_768B[] = {
+		0x3e, 0x7f, 0x9e, 0x4c, 0x88, 0x15, 0x68, 0x69,
+		0x10, 0x09, 0xe1, 0xa7, 0x0f, 0x27, 0x88, 0x2d,
+		0x90, 0x73, 0x4f, 0x67, 0xd3, 0x8b, 0xaf, 0xa1,
+		0x2c, 0x37, 0xa5, 0x6c, 0x7c, 0xbd, 0x95, 0x4c,
+		0x82, 0xcf, 0x05, 0x49, 0x16, 0x5c, 0xe7, 0x06,
+		0xd4, 0xcb, 0x55, 0x65, 0x9a, 0xd0, 0xe1, 0x46,
+		0x3a, 0x37, 0x71, 0xad, 0xb0, 0xb4, 0x99, 0x1e,
+		0x23, 0x57, 0x48, 0x96, 0x9c, 0xc5, 0xc4, 0xdb,
+		0x64, 0x3e, 0xc9, 0x7f, 0x90, 0x5a, 0xa0, 0x08,
+		0x75, 0x4c, 0x09, 0x06, 0x31, 0x6e, 0x59, 0x29,
+		0xfc, 0x2f, 0x72, 0xde, 0xf2, 0x40, 0x5a, 0xfe,
+		0xd3, 0x66, 0x64, 0xb8, 0x9c, 0xc9, 0xa6, 0x1f,
+		0xc3, 0x52, 0xcd, 0xb5, 0xd1, 0x4f, 0x43, 0x3f,
+		0xf4, 0x59, 0x25, 0xc4, 0xdd, 0x3e, 0x58, 0x7c,
+		0x21, 0xd6, 0x21, 0xce, 0xa4, 0xbe, 0x08, 0x23,
+		0x46, 0x68, 0xc0, 0x00, 0x91, 0x47, 0xca, 0x9b,
+		0xe0, 0xb4, 0xe3, 0xab, 0xbf, 0xcf, 0x68, 0x26,
+		0x97, 0x23, 0x09, 0x93, 0x64, 0x8f, 0x57, 0x59,
+		0xe2, 0x41, 0x7c, 0xa2, 0x48, 0x7e, 0xd5, 0x2c,
+		0x54, 0x09, 0x1b, 0x07, 0x94, 0xca, 0x39, 0x83,
+		0xdd, 0xf4, 0x7a, 0x1d, 0x2d, 0xdd, 0x67, 0xf7,
+		0x3c, 0x30, 0x89, 0x3e, 0xc1, 0xdc, 0x1d, 0x8f,
+		0xfc, 0xb1, 0xe9, 0x13, 0x31, 0xb0, 0x16, 0xdb,
+		0x88, 0xf2, 0x32, 0x7e, 0x73, 0xa3, 0xdf, 0x08,
+		0x6b, 0x53, 0x92, 0x08, 0xc9, 0x9d, 0x98, 0xb2,
+		0xf4, 0x8c, 0xb1, 0x95, 0xdc, 0xb6, 0xfc, 0xec,
+		0xf1, 0xc9, 0x0d, 0x6d, 0x42, 0x2c, 0xf5, 0x38,
+		0x29, 0xf4, 0xd8, 0x98, 0x0f, 0xb0, 0x81, 0xa5,
+		0xaa, 0xe6, 0x1f, 0x6e, 0x87, 0x32, 0x1b, 0x02,
+		0x07, 0x57, 0x38, 0x83, 0xf3, 0xe4, 0x54, 0x7c,
+		0xa8, 0x43, 0xdf, 0x3f, 0x42, 0xfd, 0x67, 0x28,
+		0x06, 0x4d, 0xea, 0xce, 0x1f, 0x84, 0x4a, 0xcd,
+		0x8c, 0x61, 0x5e, 0x8f, 0x61, 0xed, 0x84, 0x03,
+		0x53, 0x6a, 0x9e, 0xbf, 0x68, 0x83, 0xa7, 0x42,
+		0x56, 0x57, 0xcd, 0x45, 0x29, 0xfc, 0x7b, 0x07,
+		0xfc, 0xe9, 0xb9, 0x42, 0xfd, 0x29, 0xd5, 0xfd,
+		0x98, 0x11, 0xd1, 0x8d, 0x67, 0x29, 0x47, 0x61,
+		0xd8, 0x27, 0x37, 0x79, 0x29, 0xd1, 0x94, 0x6f,
+		0x8d, 0xf3, 0x1b, 0x3d, 0x6a, 0xb1, 0x59, 0xef,
+		0x1b, 0xd4, 0x70, 0x0e, 0xac, 0xab, 0xa0, 0x2b,
+		0x1f, 0x5e, 0x04, 0xf0, 0x0e, 0x35, 0x72, 0x90,
+		0xfc, 0xcf, 0x86, 0x43, 0xea, 0x45, 0x6d, 0x22,
+		0x63, 0x06, 0x1a, 0x58, 0xd7, 0x2d, 0xc5, 0xb0,
+		0x60, 0x69, 0xe8, 0x53, 0xc2, 0xa2, 0x57, 0x83,
+		0xc4, 0x31, 0xb4, 0xc6, 0xb3, 0xa1, 0x77, 0xb3,
+		0x1c, 0xca, 0x89, 0x3f, 0xf5, 0x10, 0x3b, 0x36,
+		0x31, 0x7d, 0x00, 0x46, 0x00, 0x92, 0xa0, 0xa0,
+		0x34, 0xd8, 0x5e, 0x62, 0xa9, 0xe0, 0x23, 0x37,
+		0x50, 0x85, 0xc7, 0x3a, 0x20, 0xa3, 0x98, 0xc0,
+		0xac, 0x20, 0x06, 0x0f, 0x17, 0x3c, 0xfc, 0x43,
+		0x8c, 0x9d, 0xec, 0xf5, 0x9a, 0x35, 0x96, 0xf7,
+		0xb7, 0x4c, 0xf9, 0x69, 0xf8, 0xd4, 0x1e, 0x9e,
+		0xf9, 0x7c, 0xc4, 0xd2, 0x11, 0x14, 0x41, 0xb9,
+		0x89, 0xd6, 0x07, 0xd2, 0x37, 0x07, 0x5e, 0x5e,
+		0xae, 0x60, 0xdc, 0xe4, 0xeb, 0x38, 0x48, 0x6d,
+		0x95, 0x8d, 0x71, 0xf2, 0xba, 0xda, 0x5f, 0x08,
+		0x9d, 0x4a, 0x0f, 0x56, 0x90, 0x64, 0xab, 0xb6,
+		0x88, 0x22, 0xa8, 0x90, 0x1f, 0x76, 0x2c, 0x83,
+		0x43, 0xce, 0x32, 0x55, 0x45, 0x84, 0x57, 0x43,
+		0xf9, 0xa8, 0xd1, 0x4f, 0xe3, 0xc1, 0x72, 0x9c,
+		0xeb, 0x64, 0xf7, 0xe4, 0x61, 0x2b, 0x93, 0xd1,
+		0x1f, 0xbb, 0x5c, 0xff, 0xa1, 0x59, 0x69, 0xcf,
+		0xf7, 0xaf, 0x58, 0x45, 0xd5, 0x3e, 0x98, 0x7d,
+		0x26, 0x39, 0x5c, 0x75, 0x3c, 0x4a, 0xbf, 0x5e,
+		0x12, 0x10, 0xb0, 0x93, 0x0f, 0x86, 0x82, 0xcf,
+		0xb2, 0xec, 0x70, 0x5c, 0x0b, 0xad, 0x5d, 0x63,
+		0x65, 0x32, 0xa6, 0x04, 0x58, 0x03, 0x91, 0x2b,
+		0xdb, 0x8f, 0xd3, 0xa3, 0x2b, 0x3a, 0xf5, 0xa1,
+		0x62, 0x6c, 0xb6, 0xf0, 0x13, 0x3b, 0x8c, 0x07,
+		0x10, 0x82, 0xc9, 0x56, 0x24, 0x87, 0xfc, 0x56,
+		0xe8, 0xef, 0x90, 0x8b, 0xd6, 0x48, 0xda, 0x53,
+		0x04, 0x49, 0x41, 0xa4, 0x67, 0xe0, 0x33, 0x24,
+		0x6b, 0x9c, 0x07, 0x55, 0x4c, 0x5d, 0xe9, 0x35,
+		0xfa, 0xbd, 0xea, 0xa8, 0x3f, 0xe9, 0xf5, 0x20,
+		0x5c, 0x60, 0x0f, 0x0d, 0x24, 0xcb, 0x1a, 0xd6,
+		0xe8, 0x5c, 0xa8, 0x42, 0xae, 0xd0, 0xd2, 0xf2,
+		0xa8, 0xbe, 0xea, 0x0f, 0x8d, 0xfb, 0x81, 0xa3,
+		0xa4, 0xef, 0xb7, 0x3e, 0x91, 0xbd, 0x26, 0x0f,
+		0x8e, 0xf1, 0xb2, 0xa5, 0x47, 0x06, 0xfa, 0x40,
+		0x8b, 0x31, 0x7a, 0x5a, 0x74, 0x2a, 0x0a, 0x7c,
+		0x62, 0x5d, 0x39, 0xa4, 0xae, 0x14, 0x85, 0x08,
+		0x5b, 0x20, 0x85, 0xf1, 0x57, 0x6e, 0x71, 0x13,
+		0x4e, 0x2b, 0x49, 0x87, 0x01, 0xdf, 0x37, 0xed,
+		0x28, 0xee, 0x4d, 0xa1, 0xf4, 0xb3, 0x3b, 0xba,
+		0x2d, 0xb3, 0x46, 0x17, 0x84, 0x80, 0x9d, 0xd7,
+		0x93, 0x1f, 0x28, 0x7c, 0xf5, 0xf9, 0xd6, 0x85,
+		0x8c, 0xa5, 0x44, 0xe9, 0x2c, 0x65, 0x51, 0x5f,
+		0x53, 0x7a, 0x09, 0xd9, 0x30, 0x16, 0x95, 0x89,
+		0x9c, 0x0b, 0xef, 0x90, 0x6d, 0x23, 0xd3, 0x48,
+		0x57, 0x3b, 0x55, 0x69, 0x96, 0xfc, 0xf7, 0x52,
+		0x92, 0x38, 0x36, 0xbf, 0xa9, 0x0a, 0xbb, 0x68,
+		0x45, 0x08, 0x25, 0xee, 0x59, 0xfe, 0xee, 0xf2,
+		0x2c, 0xd4, 0x5f, 0x78, 0x59, 0x0d, 0x90, 0xf1,
+		0xd7, 0xe4, 0x39, 0x0e, 0x46, 0x36, 0xf5, 0x75,
+		0x03, 0x3c, 0x28, 0xfb, 0xfa, 0x8f, 0xef, 0xc9,
+		0x61, 0x00, 0x94, 0xc3, 0xd2, 0x0f, 0xd9, 0xda
+};
+
+static const uint8_t AES_CBC_ciphertext_1024B[] = {
+		0x7d, 0x01, 0x7e, 0x2f, 0x92, 0xb3, 0xea, 0x72,
+		0x4a, 0x3f, 0x10, 0xf9, 0x2b, 0xb0, 0xd5, 0xb9,
+		0x19, 0x68, 0x94, 0xe9, 0x93, 0xe9, 0xd5, 0x26,
+		0x20, 0x44, 0xe2, 0x47, 0x15, 0x8d, 0x75, 0x48,
+		0x8e, 0xe4, 0x40, 0x81, 0xb5, 0x06, 0xa8, 0xb8,
+		0x0e, 0x0f, 0x3b, 0xbc, 0x5b, 0xbe, 0x3b, 0xa2,
+		0x2a, 0x0c, 0x48, 0x98, 0x19, 0xdf, 0xe9, 0x25,
+		0x75, 0xab, 0x93, 0x44, 0xb1, 0x72, 0x70, 0xbb,
+		0x20, 0xcf, 0x78, 0xe9, 0x4d, 0xc6, 0xa9, 0xa9,
+		0x84, 0x78, 0xc5, 0xc0, 0xc4, 0xc9, 0x79, 0x1a,
+		0xbc, 0x61, 0x25, 0x5f, 0xac, 0x01, 0x03, 0xb7,
+		0xef, 0x07, 0xf2, 0x62, 0x98, 0xee, 0xe3, 0xad,
+		0x94, 0x75, 0x30, 0x67, 0xb9, 0x15, 0x00, 0xe7,
+		0x11, 0x32, 0x2e, 0x6b, 0x55, 0x9f, 0xac, 0x68,
+		0xde, 0x61, 0x05, 0x80, 0x01, 0xf3, 0xad, 0xab,
+		0xaf, 0x45, 0xe0, 0xf4, 0x68, 0x5c, 0xc0, 0x52,
+		0x92, 0xc8, 0x21, 0xb6, 0xf5, 0x8a, 0x1d, 0xbb,
+		0xfc, 0x4a, 0x11, 0x62, 0xa2, 0xc4, 0xf1, 0x2d,
+		0x0e, 0xb2, 0xc7, 0x17, 0x34, 0xb4, 0x2a, 0x54,
+		0x81, 0xc2, 0x1e, 0xcf, 0x51, 0x0a, 0x76, 0x54,
+		0xf1, 0x48, 0x0d, 0x5c, 0xcd, 0x38, 0x3e, 0x38,
+		0x3e, 0xf8, 0x46, 0x1d, 0x00, 0xf5, 0x62, 0xe1,
+		0x5c, 0xb7, 0x8d, 0xce, 0xd0, 0x3f, 0xbb, 0x22,
+		0xf1, 0xe5, 0xb1, 0xa0, 0x58, 0x5e, 0x3c, 0x0f,
+		0x15, 0xd1, 0xac, 0x3e, 0xc7, 0x72, 0xc4, 0xde,
+		0x8b, 0x95, 0x3e, 0x91, 0xf7, 0x1d, 0x04, 0x9a,
+		0xc8, 0xe4, 0xbf, 0xd3, 0x22, 0xca, 0x4a, 0xdc,
+		0xb6, 0x16, 0x79, 0x81, 0x75, 0x2f, 0x6b, 0xa7,
+		0x04, 0x98, 0xa7, 0x4e, 0xc1, 0x19, 0x90, 0x33,
+		0x33, 0x3c, 0x7f, 0xdd, 0xac, 0x09, 0x0c, 0xc3,
+		0x91, 0x34, 0x74, 0xab, 0xa5, 0x35, 0x0a, 0x13,
+		0xc3, 0x56, 0x67, 0x6d, 0x1a, 0x3e, 0xbf, 0x56,
+		0x06, 0x67, 0x15, 0x5f, 0xfc, 0x8b, 0xa2, 0x3c,
+		0x5e, 0xaf, 0x56, 0x1f, 0xe3, 0x2e, 0x9d, 0x0a,
+		0xf9, 0x9b, 0xc7, 0xb5, 0x03, 0x1c, 0x68, 0x99,
+		0xfa, 0x3c, 0x37, 0x59, 0xc1, 0xf7, 0x6a, 0x83,
+		0x22, 0xee, 0xca, 0x7f, 0x7d, 0x49, 0xe6, 0x48,
+		0x84, 0x54, 0x7a, 0xff, 0xb3, 0x72, 0x21, 0xd8,
+		0x7a, 0x5d, 0xb1, 0x4b, 0xcc, 0x01, 0x6f, 0x90,
+		0xc6, 0x68, 0x1c, 0x2c, 0xa1, 0xe2, 0x74, 0x40,
+		0x26, 0x9b, 0x57, 0x53, 0xa3, 0x7c, 0x0b, 0x0d,
+		0xcf, 0x05, 0x5d, 0x62, 0x4f, 0x75, 0x06, 0x62,
+		0x1f, 0x26, 0x32, 0xaa, 0x25, 0xcc, 0x26, 0x8d,
+		0xae, 0x01, 0x47, 0xa3, 0x00, 0x42, 0xe2, 0x4c,
+		0xee, 0x29, 0xa2, 0x81, 0xa0, 0xfd, 0xeb, 0xff,
+		0x9a, 0x66, 0x6e, 0x47, 0x5b, 0xab, 0x93, 0x5a,
+		0x02, 0x6d, 0x6f, 0xf2, 0x6e, 0x02, 0x9d, 0xb1,
+		0xab, 0x56, 0xdc, 0x8b, 0x9b, 0x17, 0xa8, 0xfb,
+		0x87, 0x42, 0x7c, 0x91, 0x1e, 0x14, 0xc6, 0x6f,
+		0xdc, 0xf0, 0x27, 0x30, 0xfa, 0x3f, 0xc4, 0xad,
+		0x57, 0x85, 0xd2, 0xc9, 0x32, 0x2c, 0x13, 0xa6,
+		0x04, 0x04, 0x50, 0x05, 0x2f, 0x72, 0xd9, 0x44,
+		0x55, 0x6e, 0x93, 0x40, 0xed, 0x7e, 0xd4, 0x40,
+		0x3e, 0x88, 0x3b, 0x8b, 0xb6, 0xeb, 0xc6, 0x5d,
+		0x9c, 0x99, 0xa1, 0xcf, 0x30, 0xb2, 0xdc, 0x48,
+		0x8a, 0x01, 0xa7, 0x61, 0x77, 0x50, 0x14, 0xf3,
+		0x0c, 0x49, 0x53, 0xb3, 0xb4, 0xb4, 0x28, 0x41,
+		0x4a, 0x2d, 0xd2, 0x4d, 0x2a, 0x30, 0x31, 0x83,
+		0x03, 0x5e, 0xaa, 0xd3, 0xa3, 0xd1, 0xa1, 0xca,
+		0x62, 0xf0, 0xe1, 0xf2, 0xff, 0xf0, 0x19, 0xa6,
+		0xde, 0x22, 0x47, 0xb5, 0x28, 0x7d, 0xf7, 0x07,
+		0x16, 0x0d, 0xb1, 0x55, 0x81, 0x95, 0xe5, 0x1d,
+		0x4d, 0x78, 0xa9, 0x3e, 0xce, 0xe3, 0x1c, 0xf9,
+		0x47, 0xc8, 0xec, 0xc5, 0xc5, 0x93, 0x4c, 0x34,
+		0x20, 0x6b, 0xee, 0x9a, 0xe6, 0x86, 0x57, 0x58,
+		0xd5, 0x58, 0xf1, 0x33, 0x10, 0x29, 0x9e, 0x93,
+		0x2f, 0xf5, 0x90, 0x00, 0x17, 0x67, 0x4f, 0x39,
+		0x18, 0xe1, 0xcf, 0x55, 0x78, 0xbb, 0xe6, 0x29,
+		0x3e, 0x77, 0xd5, 0x48, 0xb7, 0x42, 0x72, 0x53,
+		0x27, 0xfa, 0x5b, 0xe0, 0x36, 0x14, 0x97, 0xb8,
+		0x9b, 0x3c, 0x09, 0x77, 0xc1, 0x0a, 0xe4, 0xa2,
+		0x63, 0xfc, 0xbe, 0x5c, 0x17, 0xcf, 0x01, 0xf5,
+		0x03, 0x0f, 0x17, 0xbc, 0x93, 0xdd, 0x5f, 0xe2,
+		0xf3, 0x08, 0xa8, 0xb1, 0x85, 0xb6, 0x34, 0x3f,
+		0x87, 0x42, 0xa5, 0x42, 0x3b, 0x0e, 0xd6, 0x83,
+		0x6a, 0xfd, 0x5d, 0xc9, 0x67, 0xd5, 0x51, 0xc9,
+		0x2a, 0x4e, 0x91, 0xb0, 0x59, 0xb2, 0x0f, 0xa2,
+		0xe6, 0x47, 0x73, 0xc2, 0xa2, 0xae, 0xbb, 0xc8,
+		0x42, 0xa3, 0x2a, 0x27, 0x29, 0x48, 0x8c, 0x54,
+		0x6c, 0xec, 0x00, 0x2a, 0x42, 0xa3, 0x7a, 0x0f,
+		0x12, 0x66, 0x6b, 0x96, 0xf6, 0xd0, 0x56, 0x4f,
+		0x49, 0x5c, 0x47, 0xec, 0x05, 0x62, 0x54, 0xb2,
+		0x64, 0x5a, 0x69, 0x1f, 0x19, 0xb4, 0x84, 0x5c,
+		0xbe, 0x48, 0x8e, 0xfc, 0x58, 0x21, 0xce, 0xfa,
+		0xaa, 0x84, 0xd2, 0xc1, 0x08, 0xb3, 0x87, 0x0f,
+		0x4f, 0xa3, 0x3a, 0xb6, 0x44, 0xbe, 0x2e, 0x9a,
+		0xdd, 0xb5, 0x44, 0x80, 0xca, 0xf4, 0xc3, 0x6e,
+		0xba, 0x93, 0x77, 0xe0, 0x53, 0xfb, 0x37, 0xfb,
+		0x88, 0xc3, 0x1f, 0x25, 0xde, 0x3e, 0x11, 0xf4,
+		0x89, 0xe7, 0xd1, 0x3b, 0xb4, 0x23, 0xcb, 0x70,
+		0xba, 0x35, 0x97, 0x7c, 0xbe, 0x84, 0x13, 0xcf,
+		0xe0, 0x4d, 0x33, 0x91, 0x71, 0x85, 0xbb, 0x4b,
+		0x97, 0x32, 0x5d, 0xa0, 0xb9, 0x8f, 0xdc, 0x27,
+		0x5a, 0xeb, 0x71, 0xf1, 0xd5, 0x0d, 0x65, 0xb4,
+		0x22, 0x81, 0xde, 0xa7, 0x58, 0x20, 0x0b, 0x18,
+		0x11, 0x76, 0x5c, 0xe6, 0x6a, 0x2c, 0x99, 0x69,
+		0xdc, 0xed, 0x67, 0x08, 0x5d, 0x5e, 0xe9, 0x1e,
+		0x55, 0x70, 0xc1, 0x5a, 0x76, 0x1b, 0x8d, 0x2e,
+		0x0d, 0xf9, 0xcc, 0x30, 0x8c, 0x44, 0x0f, 0x63,
+		0x8c, 0x42, 0x8a, 0x9f, 0x4c, 0xd1, 0x48, 0x28,
+		0x8a, 0xf5, 0x56, 0x2e, 0x23, 0x12, 0xfe, 0x67,
+		0x9a, 0x13, 0x65, 0x75, 0x83, 0xf1, 0x3c, 0x98,
+		0x07, 0x6b, 0xb7, 0x27, 0x5b, 0xf0, 0x70, 0xda,
+		0x30, 0xf8, 0x74, 0x4e, 0x7a, 0x32, 0x84, 0xcc,
+		0x0e, 0xcd, 0x80, 0x8b, 0x82, 0x31, 0x9a, 0x48,
+		0xcf, 0x75, 0x00, 0x1f, 0x4f, 0xe0, 0x8e, 0xa3,
+		0x6a, 0x2c, 0xd4, 0x73, 0x4c, 0x63, 0x7c, 0xa6,
+		0x4d, 0x5e, 0xfd, 0x43, 0x3b, 0x27, 0xe1, 0x5e,
+		0xa3, 0xa9, 0x5c, 0x3b, 0x60, 0xdd, 0xc6, 0x8d,
+		0x5a, 0xf1, 0x3e, 0x89, 0x4b, 0x24, 0xcf, 0x01,
+		0x3a, 0x2d, 0x44, 0xe7, 0xda, 0xe7, 0xa1, 0xac,
+		0x11, 0x05, 0x0c, 0xa9, 0x7a, 0x82, 0x8c, 0x5c,
+		0x29, 0x68, 0x9c, 0x73, 0x13, 0xcc, 0x67, 0x32,
+		0x11, 0x5e, 0xe5, 0xcc, 0x8c, 0xf5, 0xa7, 0x52,
+		0x83, 0x9a, 0x70, 0xef, 0xde, 0x55, 0x9c, 0xc7,
+		0x8a, 0xed, 0xad, 0x28, 0x4a, 0xc5, 0x92, 0x6d,
+		0x8e, 0x47, 0xca, 0xe3, 0xf8, 0x77, 0xb5, 0x26,
+		0x64, 0x84, 0xc2, 0xf1, 0xd7, 0xae, 0x0c, 0xb9,
+		0x39, 0x0f, 0x43, 0x6b, 0xe9, 0xe0, 0x09, 0x4b,
+		0xe5, 0xe3, 0x17, 0xa6, 0x68, 0x69, 0x46, 0xf4,
+		0xf0, 0x68, 0x7f, 0x2f, 0x1c, 0x7e, 0x4c, 0xd2,
+		0xb5, 0xc6, 0x16, 0x85, 0xcf, 0x02, 0x4c, 0x89,
+		0x0b, 0x25, 0xb0, 0xeb, 0xf3, 0x77, 0x08, 0x6a,
+		0x46, 0x5c, 0xf6, 0x2f, 0xf1, 0x24, 0xc3, 0x4d,
+		0x80, 0x60, 0x4d, 0x69, 0x98, 0xde, 0xc7, 0xa1,
+		0xf6, 0x4e, 0x18, 0x0c, 0x2a, 0xb0, 0xb2, 0xe0,
+		0x46, 0xe7, 0x49, 0x37, 0xc8, 0x5a, 0x23, 0x24,
+		0xe3, 0x0f, 0xcc, 0x92, 0xb4, 0x8d, 0xdc, 0x9e
+};
+
+static const uint8_t AES_CBC_ciphertext_1280B[] = {
+		0x91, 0x99, 0x5e, 0x9e, 0x84, 0xff, 0x59, 0x45,
+		0xc1, 0xf4, 0xbc, 0x9c, 0xb9, 0x30, 0x6c, 0x51,
+		0x73, 0x52, 0xb4, 0x44, 0x09, 0x79, 0xe2, 0x89,
+		0x75, 0xeb, 0x54, 0x26, 0xce, 0xd8, 0x24, 0x98,
+		0xaa, 0xf8, 0x13, 0x16, 0x68, 0x58, 0xc4, 0x82,
+		0x0e, 0x31, 0xd3, 0x6a, 0x13, 0x58, 0x31, 0xe9,
+		0x3a, 0xc1, 0x8b, 0xc5, 0x3f, 0x50, 0x42, 0xd1,
+		0x93, 0xe4, 0x9b, 0x65, 0x2b, 0xf4, 0x1d, 0x9e,
+		0x2d, 0xdb, 0x48, 0xef, 0x9a, 0x01, 0x68, 0xb6,
+		0xea, 0x7a, 0x2b, 0xad, 0xfe, 0x77, 0x44, 0x7e,
+		0x5a, 0xc5, 0x64, 0xb4, 0xfe, 0x5c, 0x80, 0xf3,
+		0x20, 0x7e, 0xaf, 0x5b, 0xf8, 0xd1, 0x38, 0xa0,
+		0x8d, 0x09, 0x77, 0x06, 0xfe, 0xf5, 0xf4, 0xe4,
+		0xee, 0xb8, 0x95, 0x27, 0xed, 0x07, 0xb8, 0xaa,
+		0x25, 0xb4, 0xe1, 0x4c, 0xeb, 0x3f, 0xdb, 0x39,
+		0x66, 0x28, 0x1b, 0x60, 0x42, 0x8b, 0x99, 0xd9,
+		0x49, 0xd6, 0x8c, 0xa4, 0x9d, 0xd8, 0x93, 0x58,
+		0x8f, 0xfa, 0xd3, 0xf7, 0x37, 0x9c, 0x88, 0xab,
+		0x16, 0x50, 0xfe, 0x01, 0x1f, 0x88, 0x48, 0xbe,
+		0x21, 0xa9, 0x90, 0x9e, 0x73, 0xe9, 0x82, 0xf7,
+		0xbf, 0x4b, 0x43, 0xf4, 0xbf, 0x22, 0x3c, 0x45,
+		0x47, 0x95, 0x5b, 0x49, 0x71, 0x07, 0x1c, 0x8b,
+		0x49, 0xa4, 0xa3, 0x49, 0xc4, 0x5f, 0xb1, 0xf5,
+		0xe3, 0x6b, 0xf1, 0xdc, 0xea, 0x92, 0x7b, 0x29,
+		0x40, 0xc9, 0x39, 0x5f, 0xdb, 0xbd, 0xf3, 0x6a,
+		0x09, 0x9b, 0x2a, 0x5e, 0xc7, 0x0b, 0x25, 0x94,
+		0x55, 0x71, 0x9c, 0x7e, 0x0e, 0xb4, 0x08, 0x12,
+		0x8c, 0x6e, 0x77, 0xb8, 0x29, 0xf1, 0xc6, 0x71,
+		0x04, 0x40, 0x77, 0x18, 0x3f, 0x01, 0x09, 0x9c,
+		0x23, 0x2b, 0x5d, 0x2a, 0x88, 0x20, 0x23, 0x59,
+		0x74, 0x2a, 0x67, 0x8f, 0xb7, 0xba, 0x38, 0x9f,
+		0x0f, 0xcf, 0x94, 0xdf, 0xe1, 0x8f, 0x35, 0x5e,
+		0x34, 0x0c, 0x32, 0x92, 0x2b, 0x23, 0x81, 0xf4,
+		0x73, 0xa0, 0x5a, 0x2a, 0xbd, 0xa6, 0x6b, 0xae,
+		0x43, 0xe2, 0xdc, 0x01, 0xc1, 0xc6, 0xc3, 0x04,
+		0x06, 0xbb, 0xb0, 0x89, 0xb3, 0x4e, 0xbd, 0x81,
+		0x1b, 0x03, 0x63, 0x93, 0xed, 0x4e, 0xf6, 0xe5,
+		0x94, 0x6f, 0xd6, 0xf3, 0x20, 0xf3, 0xbc, 0x30,
+		0xc5, 0xd6, 0xbe, 0x1c, 0x05, 0x34, 0x26, 0x4d,
+		0x46, 0x5e, 0x56, 0x63, 0xfb, 0xdb, 0xcd, 0xed,
+		0xb0, 0x7f, 0x83, 0x94, 0x55, 0x54, 0x2f, 0xab,
+		0xc9, 0xb7, 0x16, 0x4f, 0x9e, 0x93, 0x25, 0xd7,
+		0x9f, 0x39, 0x2b, 0x63, 0xcf, 0x1e, 0xa3, 0x0e,
+		0x28, 0x47, 0x8a, 0x5f, 0x40, 0x02, 0x89, 0x1f,
+		0x83, 0xe7, 0x87, 0xd1, 0x90, 0x17, 0xb8, 0x27,
+		0x64, 0xe1, 0xe1, 0x48, 0x5a, 0x55, 0x74, 0x99,
+		0x27, 0x9d, 0x05, 0x67, 0xda, 0x70, 0x12, 0x8f,
+		0x94, 0x96, 0xfd, 0x36, 0xa4, 0x1d, 0x22, 0xe5,
+		0x0b, 0xe5, 0x2f, 0x38, 0x55, 0xa3, 0x5d, 0x0b,
+		0xcf, 0xd4, 0xa9, 0xb8, 0xd6, 0x9a, 0x16, 0x2e,
+		0x6c, 0x4a, 0x25, 0x51, 0x7a, 0x09, 0x48, 0xdd,
+		0xf0, 0xa3, 0x5b, 0x08, 0x1e, 0x2f, 0x03, 0x91,
+		0x80, 0xe8, 0x0f, 0xe9, 0x5a, 0x2f, 0x90, 0xd3,
+		0x64, 0xed, 0xd7, 0x51, 0x17, 0x66, 0x53, 0x40,
+		0x43, 0x74, 0xef, 0x0a, 0x0d, 0x49, 0x41, 0xf2,
+		0x67, 0x6e, 0xea, 0x14, 0xc8, 0x74, 0xd6, 0xa9,
+		0xb9, 0x6a, 0xe3, 0xec, 0x7d, 0xe8, 0x6a, 0x21,
+		0x3a, 0x52, 0x42, 0xfe, 0x9a, 0x15, 0x6d, 0x60,
+		0x64, 0x88, 0xc5, 0xb2, 0x8b, 0x15, 0x2c, 0xff,
+		0xe2, 0x35, 0xc3, 0xee, 0x9f, 0xcd, 0x82, 0xd9,
+		0x14, 0x35, 0x2a, 0xb7, 0xf5, 0x2f, 0x7b, 0xbc,
+		0x01, 0xfd, 0xa8, 0xe0, 0x21, 0x4e, 0x73, 0xf9,
+		0xf2, 0xb0, 0x79, 0xc9, 0x10, 0x52, 0x8f, 0xa8,
+		0x3e, 0x3b, 0xbe, 0xc5, 0xde, 0xf6, 0x53, 0xe3,
+		0x1c, 0x25, 0x3a, 0x1f, 0x13, 0xbf, 0x13, 0xbb,
+		0x94, 0xc2, 0x97, 0x43, 0x64, 0x47, 0x8f, 0x76,
+		0xd7, 0xaa, 0xeb, 0xa4, 0x03, 0x50, 0x0c, 0x10,
+		0x50, 0xd8, 0xf7, 0x75, 0x52, 0x42, 0xe2, 0x94,
+		0x67, 0xf4, 0x60, 0xfb, 0x21, 0x9b, 0x7a, 0x05,
+		0x50, 0x7c, 0x1b, 0x4a, 0x8b, 0x29, 0xe1, 0xac,
+		0xd7, 0x99, 0xfd, 0x0d, 0x65, 0x92, 0xcd, 0x23,
+		0xa7, 0x35, 0x8e, 0x13, 0xf2, 0xe4, 0x10, 0x74,
+		0xc6, 0x4f, 0x19, 0xf7, 0x01, 0x0b, 0x46, 0xab,
+		0xef, 0x8d, 0x4a, 0x4a, 0xfa, 0xda, 0xf3, 0xfb,
+		0x40, 0x28, 0x88, 0xa2, 0x65, 0x98, 0x4d, 0x88,
+		0xc7, 0xbf, 0x00, 0xc8, 0xd0, 0x91, 0xcb, 0x89,
+		0x2f, 0xb0, 0x85, 0xfc, 0xa1, 0xc1, 0x9e, 0x83,
+		0x88, 0xad, 0x95, 0xc0, 0x31, 0xa0, 0xad, 0xa2,
+		0x42, 0xb5, 0xe7, 0x55, 0xd4, 0x93, 0x5a, 0x74,
+		0x4e, 0x41, 0xc3, 0xcf, 0x96, 0x83, 0x46, 0xa1,
+		0xb7, 0x5b, 0xb1, 0x34, 0x67, 0x4e, 0xb1, 0xd7,
+		0x40, 0x20, 0x72, 0xe9, 0xc8, 0x74, 0xb7, 0xde,
+		0x72, 0x29, 0x77, 0x4c, 0x74, 0x7e, 0xcc, 0x18,
+		0xa5, 0x8d, 0x79, 0x8c, 0xd6, 0x6e, 0xcb, 0xd9,
+		0xe1, 0x61, 0xe7, 0x36, 0xbc, 0x37, 0xea, 0xee,
+		0xd8, 0x3c, 0x5e, 0x7c, 0x47, 0x50, 0xd5, 0xec,
+		0x37, 0xc5, 0x63, 0xc3, 0xc9, 0x99, 0x23, 0x9f,
+		0x64, 0x39, 0xdf, 0x13, 0x96, 0x6d, 0xea, 0x08,
+		0x0c, 0x27, 0x2d, 0xfe, 0x0f, 0xc2, 0xa3, 0x97,
+		0x04, 0x12, 0x66, 0x0d, 0x94, 0xbf, 0xbe, 0x3e,
+		0xb9, 0xcf, 0x8e, 0xc1, 0x9d, 0xb1, 0x64, 0x17,
+		0x54, 0x92, 0x3f, 0x0a, 0x51, 0xc8, 0xf5, 0x82,
+		0x98, 0x73, 0x03, 0xc0, 0x5a, 0x51, 0x01, 0x67,
+		0xb4, 0x01, 0x04, 0x06, 0xbc, 0x37, 0xde, 0x96,
+		0x23, 0x3c, 0xce, 0x98, 0x3f, 0xd6, 0x51, 0x1b,
+		0x01, 0x83, 0x0a, 0x1c, 0xf9, 0xeb, 0x7e, 0x72,
+		0xa9, 0x51, 0x23, 0xc8, 0xd7, 0x2f, 0x12, 0xbc,
+		0x08, 0xac, 0x07, 0xe7, 0xa7, 0xe6, 0x46, 0xae,
+		0x54, 0xa3, 0xc2, 0xf2, 0x05, 0x2d, 0x06, 0x5e,
+		0xfc, 0xe2, 0xa2, 0x23, 0xac, 0x86, 0xf2, 0x54,
+		0x83, 0x4a, 0xb6, 0x48, 0x93, 0xa1, 0x78, 0xc2,
+		0x07, 0xec, 0x82, 0xf0, 0x74, 0xa9, 0x18, 0xe9,
+		0x53, 0x44, 0x49, 0xc2, 0x94, 0xf8, 0x94, 0x92,
+		0x08, 0x3f, 0xbf, 0xa6, 0xe5, 0xc6, 0x03, 0x8a,
+		0xc6, 0x90, 0x48, 0x6c, 0xee, 0xbd, 0x44, 0x92,
+		0x1f, 0x2a, 0xce, 0x1d, 0xb8, 0x31, 0xa2, 0x9d,
+		0x24, 0x93, 0xa8, 0x9f, 0x36, 0x00, 0x04, 0x7b,
+		0xcb, 0x93, 0x59, 0xa1, 0x53, 0xdb, 0x13, 0x7a,
+		0x54, 0xb1, 0x04, 0xdb, 0xce, 0x48, 0x4f, 0xe5,
+		0x2f, 0xcb, 0xdf, 0x8f, 0x50, 0x7c, 0xfc, 0x76,
+		0x80, 0xb4, 0xdc, 0x3b, 0xc8, 0x98, 0x95, 0xf5,
+		0x50, 0xba, 0x70, 0x5a, 0x97, 0xd5, 0xfc, 0x98,
+		0x4d, 0xf3, 0x61, 0x0f, 0xcf, 0xac, 0x49, 0x0a,
+		0xdb, 0xc1, 0x42, 0x8f, 0xb6, 0x29, 0xd5, 0x65,
+		0xef, 0x83, 0xf1, 0x30, 0x4b, 0x84, 0xd0, 0x69,
+		0xde, 0xd2, 0x99, 0xe5, 0xec, 0xd3, 0x90, 0x86,
+		0x39, 0x2a, 0x6e, 0xd5, 0x32, 0xe3, 0x0d, 0x2d,
+		0x01, 0x8b, 0x17, 0x55, 0x1d, 0x65, 0x57, 0xbf,
+		0xd8, 0x75, 0xa4, 0x85, 0xb6, 0x4e, 0x35, 0x14,
+		0x58, 0xe4, 0x89, 0xb8, 0x7a, 0x58, 0x86, 0x0c,
+		0xbd, 0x8b, 0x05, 0x7b, 0x63, 0xc0, 0x86, 0x80,
+		0x33, 0x46, 0xd4, 0x9b, 0xb6, 0x0a, 0xeb, 0x6c,
+		0xae, 0xd6, 0x57, 0x7a, 0xc7, 0x59, 0x33, 0xa0,
+		0xda, 0xa4, 0x12, 0xbf, 0x52, 0x22, 0x05, 0x8d,
+		0xeb, 0xee, 0xd5, 0xec, 0xea, 0x29, 0x9b, 0x76,
+		0x95, 0x50, 0x6d, 0x99, 0xe1, 0x45, 0x63, 0x09,
+		0x16, 0x5f, 0xb0, 0xf2, 0x5b, 0x08, 0x33, 0xdd,
+		0x8f, 0xb7, 0x60, 0x7a, 0x8e, 0xc6, 0xfc, 0xac,
+		0xa9, 0x56, 0x2c, 0xa9, 0x8b, 0x74, 0x33, 0xad,
+		0x2a, 0x7e, 0x96, 0xb6, 0xba, 0x22, 0x28, 0xcf,
+		0x4d, 0x96, 0xb7, 0xd1, 0xfa, 0x99, 0x4a, 0x61,
+		0xe6, 0x84, 0xd1, 0x94, 0xca, 0xf5, 0x86, 0xb0,
+		0xba, 0x34, 0x7a, 0x04, 0xcc, 0xd4, 0x81, 0xcd,
+		0xd9, 0x86, 0xb6, 0xe0, 0x5a, 0x6f, 0x9b, 0x99,
+		0xf0, 0xdf, 0x49, 0xae, 0x6d, 0xc2, 0x54, 0x67,
+		0xe0, 0xb4, 0x34, 0x2d, 0x1c, 0x46, 0xdf, 0x73,
+		0x3b, 0x45, 0x43, 0xe7, 0x1f, 0xa3, 0x36, 0x35,
+		0x25, 0x33, 0xd9, 0xc0, 0x54, 0x38, 0x6e, 0x6b,
+		0x80, 0xcf, 0x50, 0xa4, 0xb6, 0x21, 0x17, 0xfd,
+		0x9b, 0x5c, 0x36, 0xca, 0xcc, 0x73, 0x73, 0xad,
+		0xe0, 0x57, 0x77, 0x90, 0x0e, 0x7f, 0x0f, 0x87,
+		0x7f, 0xdb, 0x73, 0xbf, 0xda, 0xc2, 0xb3, 0x05,
+		0x22, 0x06, 0xf5, 0xa3, 0xfc, 0x1e, 0x8f, 0xda,
+		0xcf, 0x49, 0xd6, 0xb3, 0x66, 0x2c, 0xb5, 0x00,
+		0xaf, 0x85, 0x6e, 0xb8, 0x5b, 0x8c, 0xa1, 0xa4,
+		0x21, 0xce, 0x40, 0xf3, 0x98, 0xac, 0xec, 0x88,
+		0x62, 0x43, 0x2a, 0xac, 0xca, 0xcf, 0xb9, 0x30,
+		0xeb, 0xfc, 0xef, 0xf0, 0x6e, 0x64, 0x6d, 0xe7,
+		0x54, 0x88, 0x6b, 0x22, 0x29, 0xbe, 0xa5, 0x8c,
+		0x31, 0x23, 0x3b, 0x4a, 0x80, 0x37, 0xe6, 0xd0,
+		0x05, 0xfc, 0x10, 0x0e, 0xdd, 0xbb, 0x00, 0xc5,
+		0x07, 0x20, 0x59, 0xd3, 0x41, 0x17, 0x86, 0x46,
+		0xab, 0x68, 0xf6, 0x48, 0x3c, 0xea, 0x5a, 0x06,
+		0x30, 0x21, 0x19, 0xed, 0x74, 0xbe, 0x0b, 0x97,
+		0xee, 0x91, 0x35, 0x94, 0x1f, 0xcb, 0x68, 0x7f,
+		0xe4, 0x48, 0xb0, 0x16, 0xfb, 0xf0, 0x74, 0xdb,
+		0x06, 0x59, 0x2e, 0x5a, 0x9c, 0xce, 0x8f, 0x7d,
+		0xba, 0x48, 0xd5, 0x3f, 0x5c, 0xb0, 0xc2, 0x33,
+		0x48, 0x60, 0x17, 0x08, 0x85, 0xba, 0xff, 0xb9,
+		0x34, 0x0a, 0x3d, 0x8f, 0x21, 0x13, 0x12, 0x1b
+};
+
+static const uint8_t AES_CBC_ciphertext_1536B[] = {
+		0x89, 0x93, 0x05, 0x99, 0xa9, 0xed, 0xea, 0x62,
+		0xc9, 0xda, 0x51, 0x15, 0xce, 0x42, 0x91, 0xc3,
+		0x80, 0xc8, 0x03, 0x88, 0xc2, 0x63, 0xda, 0x53,
+		0x1a, 0xf3, 0xeb, 0xd5, 0xba, 0x6f, 0x23, 0xb2,
+		0xed, 0x8f, 0x89, 0xb1, 0xb3, 0xca, 0x90, 0x7a,
+		0xdd, 0x3f, 0xf6, 0xca, 0x86, 0x58, 0x54, 0xbc,
+		0xab, 0x0f, 0xf4, 0xab, 0x6d, 0x5d, 0x42, 0xd0,
+		0x17, 0x49, 0x17, 0xd1, 0x93, 0xea, 0xe8, 0x22,
+		0xc1, 0x34, 0x9f, 0x3a, 0x3b, 0xaa, 0xe9, 0x1b,
+		0x93, 0xff, 0x6b, 0x68, 0xba, 0xe6, 0xd2, 0x39,
+		0x3d, 0x55, 0x34, 0x8f, 0x98, 0x86, 0xb4, 0xd8,
+		0x7c, 0x0d, 0x3e, 0x01, 0x63, 0x04, 0x01, 0xff,
+		0x16, 0x0f, 0x51, 0x5f, 0x73, 0x53, 0xf0, 0x3a,
+		0x38, 0xb4, 0x4d, 0x8d, 0xaf, 0xa3, 0xca, 0x2f,
+		0x6f, 0xdf, 0xc0, 0x41, 0x6c, 0x48, 0x60, 0x1a,
+		0xe4, 0xe7, 0x8a, 0x65, 0x6f, 0x8d, 0xd7, 0xe1,
+		0x10, 0xab, 0x78, 0x5b, 0xb9, 0x69, 0x1f, 0xe0,
+		0x5c, 0xf1, 0x19, 0x12, 0x21, 0xc7, 0x51, 0xbc,
+		0x61, 0x5f, 0xc0, 0x36, 0x17, 0xc0, 0x28, 0xd9,
+		0x51, 0xcb, 0x43, 0xd9, 0xfa, 0xd1, 0xad, 0x79,
+		0x69, 0x86, 0x49, 0xc5, 0xe5, 0x69, 0x27, 0xce,
+		0x22, 0xd0, 0xe1, 0x6a, 0xf9, 0x02, 0xca, 0x6c,
+		0x34, 0xc7, 0xb8, 0x02, 0xc1, 0x38, 0x7f, 0xd5,
+		0x15, 0xf5, 0xd6, 0xeb, 0xf9, 0x30, 0x40, 0x43,
+		0xea, 0x87, 0xde, 0x35, 0xf6, 0x83, 0x59, 0x09,
+		0x68, 0x62, 0x00, 0x87, 0xb8, 0xe7, 0xca, 0x05,
+		0x0f, 0xac, 0x42, 0x58, 0x45, 0xaa, 0xc9, 0x9b,
+		0xfd, 0x2a, 0xda, 0x65, 0x33, 0x93, 0x9d, 0xc6,
+		0x93, 0x8d, 0xe2, 0xc5, 0x71, 0xc1, 0x5c, 0x13,
+		0xde, 0x7b, 0xd4, 0xb9, 0x4c, 0x35, 0x61, 0x85,
+		0x90, 0x78, 0xf7, 0x81, 0x98, 0x45, 0x99, 0x24,
+		0x58, 0x73, 0x28, 0xf8, 0x31, 0xab, 0x54, 0x2e,
+		0xc0, 0x38, 0x77, 0x25, 0x5c, 0x06, 0x9c, 0xc3,
+		0x69, 0x21, 0x92, 0x76, 0xe1, 0x16, 0xdc, 0xa9,
+		0xee, 0xb6, 0x80, 0x66, 0x43, 0x11, 0x24, 0xb3,
+		0x07, 0x17, 0x89, 0x0f, 0xcb, 0xe0, 0x60, 0xa8,
+		0x9d, 0x06, 0x4b, 0x6e, 0x72, 0xb7, 0xbc, 0x4f,
+		0xb8, 0xc0, 0x80, 0xa2, 0xfb, 0x46, 0x5b, 0x8f,
+		0x11, 0x01, 0x92, 0x9d, 0x37, 0x09, 0x98, 0xc8,
+		0x0a, 0x46, 0xae, 0x12, 0xac, 0x61, 0x3f, 0xe7,
+		0x41, 0x1a, 0xaa, 0x2e, 0xdc, 0xd7, 0x2a, 0x47,
+		0xee, 0xdf, 0x08, 0xd1, 0xff, 0xea, 0x13, 0xc6,
+		0x05, 0xdb, 0x29, 0xcc, 0x03, 0xba, 0x7b, 0x6d,
+		0x40, 0xc1, 0xc9, 0x76, 0x75, 0x03, 0x7a, 0x71,
+		0xc9, 0x5f, 0xd9, 0xe0, 0x61, 0x69, 0x36, 0x8f,
+		0xb2, 0xbc, 0x28, 0xf3, 0x90, 0x71, 0xda, 0x5f,
+		0x08, 0xd5, 0x0d, 0xc1, 0xe6, 0xbd, 0x2b, 0xc6,
+		0x6c, 0x42, 0xfd, 0xbf, 0x10, 0xe8, 0x5f, 0x87,
+		0x3d, 0x21, 0x42, 0x85, 0x01, 0x0a, 0xbf, 0x8e,
+		0x49, 0xd3, 0x9c, 0x89, 0x3b, 0xea, 0xe1, 0xbf,
+		0xe9, 0x9b, 0x5e, 0x0e, 0xb8, 0xeb, 0xcd, 0x3a,
+		0xf6, 0x29, 0x41, 0x35, 0xdd, 0x9b, 0x13, 0x24,
+		0xe0, 0x1d, 0x8a, 0xcb, 0x20, 0xf8, 0x41, 0x51,
+		0x3e, 0x23, 0x8c, 0x67, 0x98, 0x39, 0x53, 0x77,
+		0x2a, 0x68, 0xf4, 0x3c, 0x7e, 0xd6, 0xc4, 0x6e,
+		0xf1, 0x53, 0xe9, 0xd8, 0x5c, 0xc1, 0xa9, 0x38,
+		0x6f, 0x5e, 0xe4, 0xd4, 0x29, 0x1c, 0x6c, 0xee,
+		0x2f, 0xea, 0xde, 0x61, 0x71, 0x5a, 0xea, 0xce,
+		0x23, 0x6e, 0x1b, 0x16, 0x43, 0xb7, 0xc0, 0xe3,
+		0x87, 0xa1, 0x95, 0x1e, 0x97, 0x4d, 0xea, 0xa6,
+		0xf7, 0x25, 0xac, 0x82, 0x2a, 0xd3, 0xa6, 0x99,
+		0x75, 0xdd, 0xc1, 0x55, 0x32, 0x6b, 0xea, 0x33,
+		0x88, 0xce, 0x06, 0xac, 0x15, 0x39, 0x19, 0xa3,
+		0x59, 0xaf, 0x7a, 0x1f, 0xd9, 0x72, 0x5e, 0xf7,
+		0x4c, 0xf3, 0x5d, 0x6b, 0xf2, 0x16, 0x92, 0xa8,
+		0x9e, 0x3d, 0xd4, 0x4c, 0x72, 0x55, 0x4e, 0x4a,
+		0xf7, 0x8b, 0x2f, 0x67, 0x5a, 0x90, 0xb7, 0xcf,
+		0x16, 0xd3, 0x7b, 0x5a, 0x9a, 0xc8, 0x9f, 0xbf,
+		0x01, 0x76, 0x3b, 0x86, 0x2c, 0x2a, 0x78, 0x10,
+		0x70, 0x05, 0x38, 0xf9, 0xdd, 0x2a, 0x1d, 0x00,
+		0x25, 0xb7, 0x10, 0xac, 0x3b, 0x3c, 0x4d, 0x3c,
+		0x01, 0x68, 0x3c, 0x5a, 0x29, 0xc2, 0xa0, 0x1b,
+		0x95, 0x67, 0xf9, 0x0a, 0x60, 0xb7, 0x11, 0x9c,
+		0x40, 0x45, 0xd7, 0xb0, 0xda, 0x49, 0x87, 0xcd,
+		0xb0, 0x9b, 0x61, 0x8c, 0xf4, 0x0d, 0x94, 0x1d,
+		0x79, 0x66, 0x13, 0x0b, 0xc6, 0x6b, 0x19, 0xee,
+		0xa0, 0x6b, 0x64, 0x7d, 0xc4, 0xff, 0x98, 0x72,
+		0x60, 0xab, 0x7f, 0x0f, 0x4d, 0x5d, 0x6b, 0xc3,
+		0xba, 0x5e, 0x0d, 0x04, 0xd9, 0x59, 0x17, 0xd0,
+		0x64, 0xbe, 0xfb, 0x58, 0xfc, 0xed, 0x18, 0xf6,
+		0xac, 0x19, 0xa4, 0xfd, 0x16, 0x59, 0x80, 0x58,
+		0xb8, 0x0f, 0x79, 0x24, 0x60, 0x18, 0x62, 0xa9,
+		0xa3, 0xa0, 0xe8, 0x81, 0xd6, 0xec, 0x5b, 0xfe,
+		0x5b, 0xb8, 0xa4, 0x00, 0xa9, 0xd0, 0x90, 0x17,
+		0xe5, 0x50, 0x3d, 0x2b, 0x12, 0x6e, 0x2a, 0x13,
+		0x65, 0x7c, 0xdf, 0xdf, 0xa7, 0xdd, 0x9f, 0x78,
+		0x5f, 0x8f, 0x4e, 0x90, 0xa6, 0x10, 0xe4, 0x7b,
+		0x68, 0x6b, 0xfd, 0xa9, 0x6d, 0x47, 0xfa, 0xec,
+		0x42, 0x35, 0x07, 0x12, 0x3e, 0x78, 0x23, 0x15,
+		0xff, 0xe2, 0x65, 0xc7, 0x47, 0x89, 0x2f, 0x97,
+		0x7c, 0xd7, 0x6b, 0x69, 0x35, 0x79, 0x6f, 0x85,
+		0xb4, 0xa9, 0x75, 0x04, 0x32, 0x9a, 0xfe, 0xf0,
+		0xce, 0xe3, 0xf1, 0xab, 0x15, 0x47, 0xe4, 0x9c,
+		0xc1, 0x48, 0x32, 0x3c, 0xbe, 0x44, 0x72, 0xc9,
+		0xaa, 0x50, 0x37, 0xa6, 0xbe, 0x41, 0xcf, 0xe8,
+		0x17, 0x4e, 0x37, 0xbe, 0xf1, 0x34, 0x2c, 0xd9,
+		0x60, 0x48, 0x09, 0xa5, 0x26, 0x00, 0x31, 0x77,
+		0x4e, 0xac, 0x7c, 0x89, 0x75, 0xe3, 0xde, 0x26,
+		0x4c, 0x32, 0x54, 0x27, 0x8e, 0x92, 0x26, 0x42,
+		0x85, 0x76, 0x01, 0x76, 0x62, 0x4c, 0x29, 0xe9,
+		0x38, 0x05, 0x51, 0x54, 0x97, 0xa3, 0x03, 0x59,
+		0x5e, 0xec, 0x0c, 0xe4, 0x96, 0xb7, 0x15, 0xa8,
+		0x41, 0x06, 0x2b, 0x78, 0x95, 0x24, 0xf6, 0x32,
+		0xc5, 0xec, 0xd7, 0x89, 0x28, 0x1e, 0xec, 0xb1,
+		0xc7, 0x21, 0x0c, 0xd3, 0x80, 0x7c, 0x5a, 0xe6,
+		0xb1, 0x3a, 0x52, 0x33, 0x84, 0x4e, 0x32, 0x6e,
+		0x7a, 0xf6, 0x43, 0x15, 0x5b, 0xa6, 0xba, 0xeb,
+		0xa8, 0xe4, 0xff, 0x4f, 0xbd, 0xbd, 0xa8, 0x5e,
+		0xbe, 0x27, 0xaf, 0xc5, 0xf7, 0x9e, 0xdf, 0x48,
+		0x22, 0xca, 0x6a, 0x0b, 0x3c, 0xd7, 0xe0, 0xdc,
+		0xf3, 0x71, 0x08, 0xdc, 0x28, 0x13, 0x08, 0xf2,
+		0x08, 0x1d, 0x9d, 0x7b, 0xd9, 0xde, 0x6f, 0xe6,
+		0xe8, 0x88, 0x18, 0xc2, 0xcd, 0x93, 0xc5, 0x38,
+		0x21, 0x68, 0x4c, 0x9a, 0xfb, 0xb6, 0x18, 0x16,
+		0x73, 0x2c, 0x1d, 0x6f, 0x95, 0xfb, 0x65, 0x4f,
+		0x7c, 0xec, 0x8d, 0x6c, 0xa8, 0xc0, 0x55, 0x28,
+		0xc6, 0xc3, 0xea, 0xeb, 0x05, 0xf5, 0x65, 0xeb,
+		0x53, 0xe1, 0x54, 0xef, 0xb8, 0x64, 0x98, 0x2d,
+		0x98, 0x9e, 0xc8, 0xfe, 0xa2, 0x07, 0x30, 0xf7,
+		0xf7, 0xae, 0xdb, 0x32, 0xf8, 0x71, 0x9d, 0x06,
+		0xdf, 0x9b, 0xda, 0x61, 0x7d, 0xdb, 0xae, 0x06,
+		0x24, 0x63, 0x74, 0xb6, 0xf3, 0x1b, 0x66, 0x09,
+		0x60, 0xff, 0x2b, 0x29, 0xf5, 0xa9, 0x9d, 0x61,
+		0x5d, 0x55, 0x10, 0x82, 0x21, 0xbb, 0x64, 0x0d,
+		0xef, 0x5c, 0xe3, 0x30, 0x1b, 0x60, 0x1e, 0x5b,
+		0xfe, 0x6c, 0xf5, 0x15, 0xa3, 0x86, 0x27, 0x58,
+		0x46, 0x00, 0x20, 0xcb, 0x86, 0x9a, 0x52, 0x29,
+		0x20, 0x68, 0x4d, 0x67, 0x88, 0x70, 0xc2, 0x31,
+		0xd8, 0xbb, 0xa5, 0xa7, 0x88, 0x7f, 0x66, 0xbc,
+		0xaa, 0x0f, 0xe1, 0x78, 0x7b, 0x97, 0x3c, 0xb7,
+		0xd7, 0xd8, 0x04, 0xe0, 0x09, 0x60, 0xc8, 0xd0,
+		0x9e, 0xe5, 0x6b, 0x31, 0x7f, 0x88, 0xfe, 0xc3,
+		0xfd, 0x89, 0xec, 0x76, 0x4b, 0xb3, 0xa7, 0x37,
+		0x03, 0xb7, 0xc6, 0x10, 0x7c, 0x9d, 0x0c, 0x75,
+		0xd3, 0x08, 0x14, 0x94, 0x03, 0x42, 0x25, 0x26,
+		0x85, 0xf7, 0xf0, 0x90, 0x06, 0x3e, 0x6f, 0x60,
+		0x52, 0x55, 0xd5, 0x0f, 0x79, 0x64, 0x69, 0x69,
+		0x46, 0xf9, 0x7f, 0x7f, 0x03, 0xf1, 0x1f, 0xdb,
+		0x39, 0x05, 0xba, 0x4a, 0x8f, 0x17, 0xe7, 0xba,
+		0xe2, 0x07, 0x7c, 0x1d, 0x9e, 0xbc, 0x94, 0xc0,
+		0x61, 0x59, 0x8e, 0x72, 0xaf, 0xfc, 0x99, 0xe4,
+		0xd5, 0xa8, 0xee, 0x0a, 0x48, 0x2d, 0x82, 0x8b,
+		0x34, 0x54, 0x8a, 0xce, 0xc7, 0xfa, 0xdd, 0xba,
+		0x54, 0xdf, 0xb3, 0x30, 0x33, 0x73, 0x2e, 0xd5,
+		0x52, 0xab, 0x49, 0x91, 0x4e, 0x0a, 0xd6, 0x2f,
+		0x67, 0xe4, 0xdd, 0x64, 0x48, 0x16, 0xd9, 0x85,
+		0xaa, 0x52, 0xa5, 0x0b, 0xd3, 0xb4, 0x2d, 0x77,
+		0x5e, 0x52, 0x77, 0x17, 0xcf, 0xbe, 0x88, 0x04,
+		0x01, 0x52, 0xe2, 0xf1, 0x46, 0xe2, 0x91, 0x30,
+		0x65, 0xcf, 0xc0, 0x65, 0x45, 0xc3, 0x7e, 0xf4,
+		0x2e, 0xb5, 0xaf, 0x6f, 0xab, 0x1a, 0xfa, 0x70,
+		0x35, 0xb8, 0x4f, 0x2d, 0x78, 0x90, 0x33, 0xb5,
+		0x9a, 0x67, 0xdb, 0x2f, 0x28, 0x32, 0xb6, 0x54,
+		0xab, 0x4c, 0x6b, 0x85, 0xed, 0x6c, 0x3e, 0x05,
+		0x2a, 0xc7, 0x32, 0xe8, 0xf5, 0xa3, 0x7b, 0x4e,
+		0x7b, 0x58, 0x24, 0x73, 0xf7, 0xfd, 0xc7, 0xc8,
+		0x6c, 0x71, 0x68, 0xb1, 0xf6, 0xc5, 0x9e, 0x1e,
+		0xe3, 0x5c, 0x25, 0xc0, 0x5b, 0x3e, 0x59, 0xa1,
+		0x18, 0x5a, 0xe8, 0xb5, 0xd1, 0x44, 0x13, 0xa3,
+		0xe6, 0x05, 0x76, 0xd2, 0x8d, 0x6e, 0x54, 0x68,
+		0x0c, 0xa4, 0x7b, 0x8b, 0xd3, 0x8c, 0x42, 0x13,
+		0x87, 0xda, 0xdf, 0x8f, 0xa5, 0x83, 0x7a, 0x42,
+		0x99, 0xb7, 0xeb, 0xe2, 0x79, 0xe0, 0xdb, 0xda,
+		0x33, 0xa8, 0x50, 0x3a, 0xd7, 0xe7, 0xd3, 0x61,
+		0x18, 0xb8, 0xaa, 0x2d, 0xc8, 0xd8, 0x2c, 0x28,
+		0xe5, 0x97, 0x0a, 0x7c, 0x6c, 0x7f, 0x09, 0xd7,
+		0x88, 0x80, 0xac, 0x12, 0xed, 0xf8, 0xc6, 0xb5,
+		0x2d, 0xd6, 0x63, 0x9b, 0x98, 0x35, 0x26, 0xde,
+		0xf6, 0x31, 0xee, 0x7e, 0xa0, 0xfb, 0x16, 0x98,
+		0xb1, 0x96, 0x1d, 0xee, 0xe3, 0x2f, 0xfb, 0x41,
+		0xdd, 0xea, 0x10, 0x1e, 0x03, 0x89, 0x18, 0xd2,
+		0x47, 0x0c, 0xa0, 0x57, 0xda, 0x76, 0x3a, 0x37,
+		0x2c, 0xe4, 0xf9, 0x77, 0xc8, 0x43, 0x5f, 0xcb,
+		0xd6, 0x85, 0xf7, 0x22, 0xe4, 0x32, 0x25, 0xa8,
+		0xdc, 0x21, 0xc0, 0xf5, 0x95, 0xb2, 0xf8, 0x83,
+		0xf0, 0x65, 0x61, 0x15, 0x48, 0x94, 0xb7, 0x03,
+		0x7f, 0x66, 0xa1, 0x39, 0x1f, 0xdd, 0xce, 0x96,
+		0xfe, 0x58, 0x81, 0x3d, 0x41, 0x11, 0x87, 0x13,
+		0x26, 0x1b, 0x6d, 0xf3, 0xca, 0x2e, 0x2c, 0x76,
+		0xd3, 0x2f, 0x6d, 0x49, 0x70, 0x53, 0x05, 0x96,
+		0xcc, 0x30, 0x2b, 0x83, 0xf2, 0xc6, 0xb2, 0x4b,
+		0x22, 0x13, 0x95, 0x42, 0xeb, 0x56, 0x4d, 0x22,
+		0xe6, 0x43, 0x6f, 0xba, 0xe7, 0x3b, 0xe5, 0x59,
+		0xce, 0x57, 0x88, 0x85, 0xb6, 0xbf, 0x15, 0x37,
+		0xb3, 0x7a, 0x7e, 0xc4, 0xbc, 0x99, 0xfc, 0xe4,
+		0x89, 0x00, 0x68, 0x39, 0xbc, 0x5a, 0xba, 0xab,
+		0x52, 0xab, 0xe6, 0x81, 0xfd, 0x93, 0x62, 0xe9,
+		0xb7, 0x12, 0xd1, 0x18, 0x1a, 0xb9, 0x55, 0x4a,
+		0x0f, 0xae, 0x35, 0x11, 0x04, 0x27, 0xf3, 0x42,
+		0x4e, 0xca, 0xdf, 0x9f, 0x12, 0x62, 0xea, 0x03,
+		0xc0, 0xa9, 0x22, 0x7b, 0x6c, 0x6c, 0xe3, 0xdf,
+		0x16, 0xad, 0x03, 0xc9, 0xfe, 0xa4, 0xdd, 0x4f
+};
+
+static const uint8_t AES_CBC_ciphertext_1792B[] = {
+		0x59, 0xcc, 0xfe, 0x8f, 0xb4, 0x9d, 0x0e, 0xd1,
+		0x85, 0xfc, 0x9b, 0x43, 0xc1, 0xb7, 0x54, 0x67,
+		0x01, 0xef, 0xb8, 0x71, 0x36, 0xdb, 0x50, 0x48,
+		0x7a, 0xea, 0xcf, 0xce, 0xba, 0x30, 0x10, 0x2e,
+		0x96, 0x2b, 0xfd, 0xcf, 0x00, 0xe3, 0x1f, 0xac,
+		0x66, 0x14, 0x30, 0x86, 0x49, 0xdb, 0x01, 0x8b,
+		0x07, 0xdd, 0x00, 0x9d, 0x0d, 0x5c, 0x19, 0x11,
+		0xe8, 0x44, 0x2b, 0x25, 0x70, 0xed, 0x7c, 0x33,
+		0x0d, 0xe3, 0x34, 0x93, 0x63, 0xad, 0x26, 0xb1,
+		0x11, 0x91, 0x34, 0x2e, 0x1d, 0x50, 0xaa, 0xd4,
+		0xef, 0x3a, 0x6d, 0xd7, 0x33, 0x20, 0x0d, 0x3f,
+		0x9b, 0xdd, 0xc3, 0xa5, 0xc5, 0xf1, 0x99, 0xdc,
+		0xea, 0x52, 0xda, 0x55, 0xea, 0xa2, 0x7a, 0xc5,
+		0x78, 0x44, 0x4a, 0x02, 0x33, 0x19, 0x62, 0x37,
+		0xf8, 0x8b, 0xd1, 0x0c, 0x21, 0xdf, 0x40, 0x19,
+		0x81, 0xea, 0xfb, 0x1c, 0xa7, 0xcc, 0x60, 0xfe,
+		0x63, 0x25, 0x8f, 0xf3, 0x73, 0x0f, 0x45, 0xe6,
+		0x6a, 0x18, 0xbf, 0xbe, 0xad, 0x92, 0x2a, 0x1e,
+		0x15, 0x65, 0x6f, 0xef, 0x92, 0xcd, 0x0e, 0x19,
+		0x3d, 0x42, 0xa8, 0xfc, 0x0d, 0x32, 0x58, 0xe0,
+		0x56, 0x9f, 0xd6, 0x9b, 0x8b, 0xec, 0xe0, 0x45,
+		0x4d, 0x7e, 0x73, 0x87, 0xff, 0x74, 0x92, 0x59,
+		0x60, 0x13, 0x93, 0xda, 0xec, 0xbf, 0xfa, 0x20,
+		0xb6, 0xe7, 0xdf, 0xc7, 0x10, 0xf5, 0x79, 0xb4,
+		0xd7, 0xac, 0xaf, 0x2b, 0x37, 0x52, 0x30, 0x1d,
+		0xbe, 0x0f, 0x60, 0x77, 0x3d, 0x03, 0x63, 0xa9,
+		0xae, 0xb1, 0xf3, 0xca, 0xca, 0xb4, 0x21, 0xd7,
+		0x6f, 0x2e, 0x5e, 0x9b, 0x68, 0x53, 0x80, 0xab,
+		0x30, 0x23, 0x0a, 0x72, 0x6b, 0xb1, 0xd8, 0x25,
+		0x5d, 0x3a, 0x62, 0x9b, 0x4f, 0x59, 0x3b, 0x79,
+		0xa8, 0x9e, 0x08, 0x6d, 0x37, 0xb0, 0xfc, 0x42,
+		0x51, 0x25, 0x86, 0xbd, 0x54, 0x5a, 0x95, 0x20,
+		0x6c, 0xac, 0xb9, 0x30, 0x1c, 0x03, 0xc9, 0x49,
+		0x38, 0x55, 0x31, 0x49, 0xed, 0xa9, 0x0e, 0xc3,
+		0x65, 0xb4, 0x68, 0x6b, 0x07, 0x4c, 0x0a, 0xf9,
+		0x21, 0x69, 0x7c, 0x9f, 0x28, 0x80, 0xe9, 0x49,
+		0x22, 0x7c, 0xec, 0x97, 0xf7, 0x70, 0xb4, 0xb8,
+		0x25, 0xe7, 0x80, 0x2c, 0x43, 0x24, 0x8a, 0x2e,
+		0xac, 0xa2, 0x84, 0x20, 0xe7, 0xf4, 0x6b, 0x86,
+		0x37, 0x05, 0xc7, 0x59, 0x04, 0x49, 0x2a, 0x99,
+		0x80, 0x46, 0x32, 0x19, 0xe6, 0x30, 0xce, 0xc0,
+		0xef, 0x6e, 0xec, 0xe5, 0x2f, 0x24, 0xc1, 0x78,
+		0x45, 0x02, 0xd3, 0x64, 0x99, 0xf5, 0xc7, 0xbc,
+		0x8f, 0x8c, 0x75, 0xb1, 0x0a, 0xc8, 0xc3, 0xbd,
+		0x5e, 0x7e, 0xbd, 0x0e, 0xdf, 0x4b, 0x96, 0x6a,
+		0xfd, 0x03, 0xdb, 0xd1, 0x31, 0x1e, 0x27, 0xf9,
+		0xe5, 0x83, 0x9a, 0xfc, 0x13, 0x4c, 0xd3, 0x04,
+		0xdb, 0xdb, 0x3f, 0x35, 0x93, 0x4e, 0x14, 0x6b,
+		0x00, 0x5c, 0xb6, 0x11, 0x50, 0xee, 0x61, 0x5c,
+		0x10, 0x5c, 0xd0, 0x90, 0x02, 0x2e, 0x12, 0xe0,
+		0x50, 0x44, 0xad, 0x75, 0xcd, 0x94, 0xcf, 0x92,
+		0xcb, 0xe3, 0xe8, 0x77, 0x4b, 0xd7, 0x1a, 0x7c,
+		0xdd, 0x6b, 0x49, 0x21, 0x7c, 0xe8, 0x2c, 0x25,
+		0x49, 0x86, 0x1e, 0x54, 0xae, 0xfc, 0x0e, 0x80,
+		0xb1, 0xd5, 0xa5, 0x23, 0xcf, 0xcc, 0x0e, 0x11,
+		0xe2, 0x7c, 0x3c, 0x25, 0x78, 0x64, 0x03, 0xa1,
+		0xdd, 0x9f, 0x74, 0x12, 0x7b, 0x21, 0xb5, 0x73,
+		0x15, 0x3c, 0xed, 0xad, 0x07, 0x62, 0x21, 0x79,
+		0xd4, 0x2f, 0x0d, 0x72, 0xe9, 0x7c, 0x6b, 0x96,
+		0x6e, 0xe5, 0x36, 0x4a, 0xd2, 0x38, 0xe1, 0xff,
+		0x6e, 0x26, 0xa4, 0xac, 0x83, 0x07, 0xe6, 0x67,
+		0x74, 0x6c, 0xec, 0x8b, 0x4b, 0x79, 0x33, 0x50,
+		0x2f, 0x8f, 0xa0, 0x8f, 0xfa, 0x38, 0x6a, 0xa2,
+		0x3a, 0x42, 0x85, 0x15, 0x90, 0xd0, 0xb3, 0x0d,
+		0x8a, 0xe4, 0x60, 0x03, 0xef, 0xf9, 0x65, 0x8a,
+		0x4e, 0x50, 0x8c, 0x65, 0xba, 0x61, 0x16, 0xc3,
+		0x93, 0xb7, 0x75, 0x21, 0x98, 0x25, 0x60, 0x6e,
+		0x3d, 0x68, 0xba, 0x7c, 0xe4, 0xf3, 0xd9, 0x9b,
+		0xfb, 0x7a, 0xed, 0x1f, 0xb3, 0x4b, 0x88, 0x74,
+		0x2c, 0xb8, 0x8c, 0x22, 0x95, 0xce, 0x90, 0xf1,
+		0xdb, 0x80, 0xa6, 0x39, 0xae, 0x82, 0xa1, 0xef,
+		0x75, 0xec, 0xfe, 0xf1, 0xe8, 0x04, 0xfd, 0x99,
+		0x1b, 0x5f, 0x45, 0x87, 0x4f, 0xfa, 0xa2, 0x3e,
+		0x3e, 0xb5, 0x01, 0x4b, 0x46, 0xeb, 0x13, 0x9a,
+		0xe4, 0x7d, 0x03, 0x87, 0xb1, 0x59, 0x91, 0x8e,
+		0x37, 0xd3, 0x16, 0xce, 0xef, 0x4b, 0xe9, 0x46,
+		0x8d, 0x2a, 0x50, 0x2f, 0x41, 0xd3, 0x7b, 0xcf,
+		0xf0, 0xb7, 0x8b, 0x65, 0x0f, 0xa3, 0x27, 0x10,
+		0xe9, 0xa9, 0xe9, 0x2c, 0xbe, 0xbb, 0x82, 0xe3,
+		0x7b, 0x0b, 0x81, 0x3e, 0xa4, 0x6a, 0x4f, 0x3b,
+		0xd5, 0x61, 0xf8, 0x47, 0x04, 0x99, 0x5b, 0xff,
+		0xf3, 0x14, 0x6e, 0x57, 0x5b, 0xbf, 0x1b, 0xb4,
+		0x3f, 0xf9, 0x31, 0xf6, 0x95, 0xd5, 0x10, 0xa9,
+		0x72, 0x28, 0x23, 0xa9, 0x6a, 0xa2, 0xcf, 0x7d,
+		0xe3, 0x18, 0x95, 0xda, 0xbc, 0x6f, 0xe9, 0xd8,
+		0xef, 0x49, 0x3f, 0xd3, 0xef, 0x1f, 0xe1, 0x50,
+		0xe8, 0x8a, 0xc0, 0xce, 0xcc, 0xb7, 0x5e, 0x0e,
+		0x8b, 0x95, 0x80, 0xfd, 0x58, 0x2a, 0x9b, 0xc8,
+		0xb4, 0x17, 0x04, 0x46, 0x74, 0xd4, 0x68, 0x91,
+		0x33, 0xc8, 0x31, 0x15, 0x84, 0x16, 0x35, 0x03,
+		0x64, 0x6d, 0xa9, 0x4e, 0x20, 0xeb, 0xa9, 0x3f,
+		0x21, 0x5e, 0x9b, 0x09, 0xc3, 0x45, 0xf8, 0x7c,
+		0x59, 0x62, 0x29, 0x9a, 0x5c, 0xcf, 0xb4, 0x27,
+		0x5e, 0x13, 0xea, 0xb3, 0xef, 0xd9, 0x01, 0x2a,
+		0x65, 0x5f, 0x14, 0xf4, 0xbf, 0x28, 0x89, 0x3d,
+		0xdd, 0x9d, 0x52, 0xbd, 0x9e, 0x5b, 0x3b, 0xd2,
+		0xc2, 0x81, 0x35, 0xb6, 0xac, 0xdd, 0x27, 0xc3,
+		0x7b, 0x01, 0x5a, 0x6d, 0x4c, 0x5e, 0x2c, 0x30,
+		0xcb, 0x3a, 0xfa, 0xc1, 0xd7, 0x31, 0x67, 0x3e,
+		0x08, 0x6a, 0xe8, 0x8c, 0x75, 0xac, 0x1a, 0x6a,
+		0x52, 0xf7, 0x51, 0xcd, 0x85, 0x3f, 0x3c, 0xa7,
+		0xea, 0xbc, 0xd7, 0x18, 0x9e, 0x27, 0x73, 0xe6,
+		0x2b, 0x58, 0xb6, 0xd2, 0x29, 0x68, 0xd5, 0x8f,
+		0x00, 0x4d, 0x55, 0xf6, 0x61, 0x5a, 0xcc, 0x51,
+		0xa6, 0x5e, 0x85, 0xcb, 0x0b, 0xfd, 0x06, 0xca,
+		0xf5, 0xbf, 0x0d, 0x13, 0x74, 0x78, 0x6d, 0x9e,
+		0x20, 0x11, 0x84, 0x3e, 0x78, 0x17, 0x04, 0x4f,
+		0x64, 0x2c, 0x3b, 0x3e, 0x93, 0x7b, 0x58, 0x33,
+		0x07, 0x52, 0xf7, 0x60, 0x6a, 0xa8, 0x3b, 0x19,
+		0x27, 0x7a, 0x93, 0xc5, 0x53, 0xad, 0xec, 0xf6,
+		0xc8, 0x94, 0xee, 0x92, 0xea, 0xee, 0x7e, 0xea,
+		0xb9, 0x5f, 0xac, 0x59, 0x5d, 0x2e, 0x78, 0x53,
+		0x72, 0x81, 0x92, 0xdd, 0x1c, 0x63, 0xbe, 0x02,
+		0xeb, 0xa8, 0x1b, 0x2a, 0x6e, 0x72, 0xe3, 0x2d,
+		0x84, 0x0d, 0x8a, 0x22, 0xf6, 0xba, 0xab, 0x04,
+		0x8e, 0x04, 0x24, 0xdb, 0xcc, 0xe2, 0x69, 0xeb,
+		0x4e, 0xfa, 0x6b, 0x5b, 0xc8, 0xc0, 0xd9, 0x25,
+		0xcb, 0x40, 0x8d, 0x4b, 0x8e, 0xa0, 0xd4, 0x72,
+		0x98, 0x36, 0x46, 0x3b, 0x4f, 0x5f, 0x96, 0x84,
+		0x03, 0x28, 0x86, 0x4d, 0xa1, 0x8a, 0xd7, 0xb2,
+		0x5b, 0x27, 0x01, 0x80, 0x62, 0x49, 0x56, 0xb9,
+		0xa0, 0xa1, 0xe3, 0x6e, 0x22, 0x2a, 0x5d, 0x03,
+		0x86, 0x40, 0x36, 0x22, 0x5e, 0xd2, 0xe5, 0xc0,
+		0x6b, 0xfa, 0xac, 0x80, 0x4e, 0x09, 0x99, 0xbc,
+		0x2f, 0x9b, 0xcc, 0xf3, 0x4e, 0xf7, 0x99, 0x98,
+		0x11, 0x6e, 0x6f, 0x62, 0x22, 0x6b, 0x92, 0x95,
+		0x3b, 0xc3, 0xd2, 0x8e, 0x0f, 0x07, 0xc2, 0x51,
+		0x5c, 0x4d, 0xb2, 0x6e, 0xc0, 0x27, 0x73, 0xcd,
+		0x57, 0xb7, 0xf0, 0xe9, 0x2e, 0xc8, 0xe2, 0x0c,
+		0xd1, 0xb5, 0x0f, 0xff, 0xf9, 0xec, 0x38, 0xba,
+		0x97, 0xd6, 0x94, 0x9b, 0xd1, 0x79, 0xb6, 0x6a,
+		0x01, 0x17, 0xe4, 0x7e, 0xa6, 0xd5, 0x86, 0x19,
+		0xae, 0xf3, 0xf0, 0x62, 0x73, 0xc0, 0xf0, 0x0a,
+		0x7a, 0x96, 0x93, 0x72, 0x89, 0x7e, 0x25, 0x57,
+		0xf8, 0xf7, 0xd5, 0x1e, 0xe5, 0xac, 0xd6, 0x38,
+		0x4f, 0xe8, 0x81, 0xd1, 0x53, 0x41, 0x07, 0x2d,
+		0x58, 0x34, 0x1c, 0xef, 0x74, 0x2e, 0x61, 0xca,
+		0xd3, 0xeb, 0xd6, 0x93, 0x0a, 0xf2, 0xf2, 0x86,
+		0x9c, 0xe3, 0x7a, 0x52, 0xf5, 0x42, 0xf1, 0x8b,
+		0x10, 0xf2, 0x25, 0x68, 0x7e, 0x61, 0xb1, 0x19,
+		0xcf, 0x8f, 0x5a, 0x53, 0xb7, 0x68, 0x4f, 0x1a,
+		0x71, 0xe9, 0x83, 0x91, 0x3a, 0x78, 0x0f, 0xf7,
+		0xd4, 0x74, 0xf5, 0x06, 0xd2, 0x88, 0xb0, 0x06,
+		0xe5, 0xc0, 0xfb, 0xb3, 0x91, 0xad, 0xc0, 0x84,
+		0x31, 0xf2, 0x3a, 0xcf, 0x63, 0xe6, 0x4a, 0xd3,
+		0x78, 0xbe, 0xde, 0x73, 0x3e, 0x02, 0x8e, 0xb8,
+		0x3a, 0xf6, 0x55, 0xa7, 0xf8, 0x5a, 0xb5, 0x0e,
+		0x0c, 0xc5, 0xe5, 0x66, 0xd5, 0xd2, 0x18, 0xf3,
+		0xef, 0xa5, 0xc9, 0x68, 0x69, 0xe0, 0xcd, 0x00,
+		0x33, 0x99, 0x6e, 0xea, 0xcb, 0x06, 0x7a, 0xe1,
+		0xe1, 0x19, 0x0b, 0xe7, 0x08, 0xcd, 0x09, 0x1b,
+		0x85, 0xec, 0xc4, 0xd4, 0x75, 0xf0, 0xd6, 0xfb,
+		0x84, 0x95, 0x07, 0x44, 0xca, 0xa5, 0x2a, 0x6c,
+		0xc2, 0x00, 0x58, 0x08, 0x87, 0x9e, 0x0a, 0xd4,
+		0x06, 0xe2, 0x91, 0x5f, 0xb7, 0x1b, 0x11, 0xfa,
+		0x85, 0xfc, 0x7c, 0xf2, 0x0f, 0x6e, 0x3c, 0x8a,
+		0xe1, 0x0f, 0xa0, 0x33, 0x84, 0xce, 0x81, 0x4d,
+		0x32, 0x4d, 0xeb, 0x41, 0xcf, 0x5a, 0x05, 0x60,
+		0x47, 0x6c, 0x2a, 0xc4, 0x17, 0xd5, 0x16, 0x3a,
+		0xe4, 0xe7, 0xab, 0x84, 0x94, 0x22, 0xff, 0x56,
+		0xb0, 0x0c, 0x92, 0x6c, 0x19, 0x11, 0x4c, 0xb3,
+		0xed, 0x58, 0x48, 0x84, 0x2a, 0xe2, 0x19, 0x2a,
+		0xe1, 0xc0, 0x56, 0x82, 0x3c, 0x83, 0xb4, 0x58,
+		0x2d, 0xf0, 0xb5, 0x1e, 0x76, 0x85, 0x51, 0xc2,
+		0xe4, 0x95, 0x27, 0x96, 0xd1, 0x90, 0xc3, 0x17,
+		0x75, 0xa1, 0xbb, 0x46, 0x5f, 0xa6, 0xf2, 0xef,
+		0x71, 0x56, 0x92, 0xc5, 0x8a, 0x85, 0x52, 0xe4,
+		0x63, 0x21, 0x6f, 0x55, 0x85, 0x2b, 0x6b, 0x0d,
+		0xc9, 0x92, 0x77, 0x67, 0xe3, 0xff, 0x2a, 0x2b,
+		0x90, 0x01, 0x3d, 0x74, 0x63, 0x04, 0x61, 0x3c,
+		0x8e, 0xf8, 0xfc, 0x04, 0xdd, 0x21, 0x85, 0x92,
+		0x1e, 0x4d, 0x51, 0x8d, 0xb5, 0x6b, 0xf1, 0xda,
+		0x96, 0xf5, 0x8e, 0x3c, 0x38, 0x5a, 0xac, 0x9b,
+		0xba, 0x0c, 0x84, 0x5d, 0x50, 0x12, 0xc7, 0xc5,
+		0x7a, 0xcb, 0xb1, 0xfa, 0x16, 0x93, 0xdf, 0x98,
+		0xda, 0x3f, 0x49, 0xa3, 0x94, 0x78, 0x70, 0xc7,
+		0x0b, 0xb6, 0x91, 0xa6, 0x16, 0x2e, 0xcf, 0xfd,
+		0x51, 0x6a, 0x5b, 0xad, 0x7a, 0xdd, 0xa9, 0x48,
+		0x48, 0xac, 0xd6, 0x45, 0xbc, 0x23, 0x31, 0x1d,
+		0x86, 0x54, 0x8a, 0x7f, 0x04, 0x97, 0x71, 0x9e,
+		0xbc, 0x2e, 0x6b, 0xd9, 0x33, 0xc8, 0x20, 0xc9,
+		0xe0, 0x25, 0x86, 0x59, 0x15, 0xcf, 0x63, 0xe5,
+		0x99, 0xf1, 0x24, 0xf1, 0xba, 0xc4, 0x15, 0x02,
+		0xe2, 0xdb, 0xfe, 0x4a, 0xf8, 0x3b, 0x91, 0x13,
+		0x8d, 0x03, 0x81, 0x9f, 0xb3, 0x3f, 0x04, 0x03,
+		0x58, 0xc0, 0xef, 0x27, 0x82, 0x14, 0xd2, 0x7f,
+		0x93, 0x70, 0xb7, 0xb2, 0x02, 0x21, 0xb3, 0x07,
+		0x7f, 0x1c, 0xef, 0x88, 0xee, 0x29, 0x7a, 0x0b,
+		0x3d, 0x75, 0x5a, 0x93, 0xfe, 0x7f, 0x14, 0xf7,
+		0x4e, 0x4b, 0x7f, 0x21, 0x02, 0xad, 0xf9, 0x43,
+		0x29, 0x1a, 0xe8, 0x1b, 0xf5, 0x32, 0xb2, 0x96,
+		0xe6, 0xe8, 0x96, 0x20, 0x9b, 0x96, 0x8e, 0x7b,
+		0xfe, 0xd8, 0xc9, 0x9c, 0x65, 0x16, 0xd6, 0x68,
+		0x95, 0xf8, 0x22, 0xe2, 0xae, 0x84, 0x03, 0xfd,
+		0x87, 0xa2, 0x72, 0x79, 0x74, 0x95, 0xfa, 0xe1,
+		0xfe, 0xd0, 0x4e, 0x3d, 0x39, 0x2e, 0x67, 0x55,
+		0x71, 0x6c, 0x89, 0x33, 0x49, 0x0c, 0x1b, 0x46,
+		0x92, 0x31, 0x6f, 0xa6, 0xf0, 0x09, 0xbd, 0x2d,
+		0xe2, 0xca, 0xda, 0x18, 0x33, 0xce, 0x67, 0x37,
+		0xfd, 0x6f, 0xcb, 0x9d, 0xbd, 0x42, 0xbc, 0xb2,
+		0x9c, 0x28, 0xcd, 0x65, 0x3c, 0x61, 0xbc, 0xde,
+		0x9d, 0xe1, 0x2a, 0x3e, 0xbf, 0xee, 0x3c, 0xcb,
+		0xb1, 0x50, 0xa9, 0x2c, 0xbe, 0xb5, 0x43, 0xd0,
+		0xec, 0x29, 0xf9, 0x16, 0x6f, 0x31, 0xd9, 0x9b,
+		0x92, 0xb1, 0x32, 0xae, 0x0f, 0xb6, 0x9d, 0x0e,
+		0x25, 0x7f, 0x89, 0x1f, 0x1d, 0x01, 0x68, 0xab,
+		0x3d, 0xd1, 0x74, 0x5b, 0x4c, 0x38, 0x7f, 0x3d,
+		0x33, 0xa5, 0xa2, 0x9f, 0xda, 0x84, 0xa5, 0x82,
+		0x2d, 0x16, 0x66, 0x46, 0x08, 0x30, 0x14, 0x48,
+		0x5e, 0xca, 0xe3, 0xf4, 0x8c, 0xcb, 0x32, 0xc6,
+		0xf1, 0x43, 0x62, 0xc6, 0xef, 0x16, 0xfa, 0x43,
+		0xae, 0x9c, 0x53, 0xe3, 0x49, 0x45, 0x80, 0xfd,
+		0x1d, 0x8c, 0xa9, 0x6d, 0x77, 0x76, 0xaa, 0x40,
+		0xc4, 0x4e, 0x7b, 0x78, 0x6b, 0xe0, 0x1d, 0xce,
+		0x56, 0x3d, 0xf0, 0x11, 0xfe, 0x4f, 0x6a, 0x6d,
+		0x0f, 0x4f, 0x90, 0x38, 0x92, 0x17, 0xfa, 0x56,
+		0x12, 0xa6, 0xa1, 0x0a, 0xea, 0x2f, 0x50, 0xf9,
+		0x60, 0x66, 0x6c, 0x7d, 0x5a, 0x08, 0x8e, 0x3c,
+		0xf3, 0xf0, 0x33, 0x02, 0x11, 0x02, 0xfe, 0x4c,
+		0x56, 0x2b, 0x9f, 0x0c, 0xbd, 0x65, 0x8a, 0x83,
+		0xde, 0x7c, 0x05, 0x26, 0x93, 0x19, 0xcc, 0xf3,
+		0x71, 0x0e, 0xad, 0x2f, 0xb3, 0xc9, 0x38, 0x50,
+		0x64, 0xd5, 0x4c, 0x60, 0x5f, 0x02, 0x13, 0x34,
+		0xc9, 0x75, 0xc4, 0x60, 0xab, 0x2e, 0x17, 0x7d
+};
+
+static const uint8_t AES_CBC_ciphertext_2048B[] = {
+		0x8b, 0x55, 0xbd, 0xfd, 0x2b, 0x35, 0x76, 0x5c,
+		0xd1, 0x90, 0xd7, 0x6a, 0x63, 0x1e, 0x39, 0x71,
+		0x0d, 0x5c, 0xd8, 0x03, 0x00, 0x75, 0xf1, 0x07,
+		0x03, 0x8d, 0x76, 0xeb, 0x3b, 0x00, 0x1e, 0x33,
+		0x88, 0xfc, 0x8f, 0x08, 0x4d, 0x33, 0xf1, 0x3c,
+		0xee, 0xd0, 0x5d, 0x19, 0x8b, 0x3c, 0x50, 0x86,
+		0xfd, 0x8d, 0x58, 0x21, 0xb4, 0xae, 0x0f, 0x81,
+		0xe9, 0x9f, 0xc9, 0xc0, 0x90, 0xf7, 0x04, 0x6f,
+		0x39, 0x1d, 0x8a, 0x3f, 0x8d, 0x32, 0x23, 0xb5,
+		0x1f, 0xcc, 0x8a, 0x12, 0x2d, 0x46, 0x82, 0x5e,
+		0x6a, 0x34, 0x8c, 0xb1, 0x93, 0x70, 0x3b, 0xde,
+		0x55, 0xaf, 0x16, 0x35, 0x99, 0x84, 0xd5, 0x88,
+		0xc9, 0x54, 0xb1, 0xb2, 0xd3, 0xeb, 0x9e, 0x55,
+		0x9a, 0xa9, 0xa7, 0xf5, 0xda, 0x29, 0xcf, 0xe1,
+		0x98, 0x64, 0x45, 0x77, 0xf2, 0x12, 0x69, 0x8f,
+		0x78, 0xd8, 0x82, 0x41, 0xb2, 0x9f, 0xe2, 0x1c,
+		0x63, 0x9b, 0x24, 0x81, 0x67, 0x95, 0xa2, 0xff,
+		0x26, 0x9d, 0x65, 0x48, 0x61, 0x30, 0x66, 0x41,
+		0x68, 0x84, 0xbb, 0x59, 0x14, 0x8e, 0x9a, 0x62,
+		0xb6, 0xca, 0xda, 0xbe, 0x7c, 0x41, 0x52, 0x6e,
+		0x1b, 0x86, 0xbf, 0x08, 0xeb, 0x37, 0x84, 0x60,
+		0xe4, 0xc4, 0x1e, 0xa8, 0x4c, 0x84, 0x60, 0x2f,
+		0x70, 0x90, 0xf2, 0x26, 0xe7, 0x65, 0x0c, 0xc4,
+		0x58, 0x36, 0x8e, 0x4d, 0xdf, 0xff, 0x9a, 0x39,
+		0x93, 0x01, 0xcf, 0x6f, 0x6d, 0xde, 0xef, 0x79,
+		0xb0, 0xce, 0xe2, 0x98, 0xdb, 0x85, 0x8d, 0x62,
+		0x9d, 0xb9, 0x63, 0xfd, 0xf0, 0x35, 0xb5, 0xa9,
+		0x1b, 0xf9, 0xe5, 0xd4, 0x2e, 0x22, 0x2d, 0xcc,
+		0x42, 0xbf, 0x0e, 0x51, 0xf7, 0x15, 0x07, 0x32,
+		0x75, 0x5b, 0x74, 0xbb, 0x00, 0xef, 0xd4, 0x66,
+		0x8b, 0xad, 0x71, 0x53, 0x94, 0xd7, 0x7d, 0x2c,
+		0x40, 0x3e, 0x69, 0xa0, 0x4c, 0x86, 0x5e, 0x06,
+		0xed, 0xdf, 0x22, 0xe2, 0x24, 0x25, 0x4e, 0x9b,
+		0x5f, 0x49, 0x74, 0xba, 0xed, 0xb1, 0xa6, 0xeb,
+		0xae, 0x3f, 0xc6, 0x9e, 0x0b, 0x29, 0x28, 0x9a,
+		0xb6, 0xb2, 0x74, 0x58, 0xec, 0xa6, 0x4a, 0xed,
+		0xe5, 0x10, 0x00, 0x85, 0xe1, 0x63, 0x41, 0x61,
+		0x30, 0x7c, 0x97, 0xcf, 0x75, 0xcf, 0xb6, 0xf3,
+		0xf7, 0xda, 0x35, 0x3f, 0x85, 0x8c, 0x64, 0xca,
+		0xb7, 0xea, 0x7f, 0xe4, 0xa3, 0x4d, 0x30, 0x84,
+		0x8c, 0x9c, 0x80, 0x5a, 0x50, 0xa5, 0x64, 0xae,
+		0x26, 0xd3, 0xb5, 0x01, 0x73, 0x36, 0x8a, 0x92,
+		0x49, 0xc4, 0x1a, 0x94, 0x81, 0x9d, 0xf5, 0x6c,
+		0x50, 0xe1, 0x58, 0x0b, 0x75, 0xdd, 0x6b, 0x6a,
+		0xca, 0x69, 0xea, 0xc3, 0x33, 0x90, 0x9f, 0x3b,
+		0x65, 0x5d, 0x5e, 0xee, 0x31, 0xb7, 0x32, 0xfd,
+		0x56, 0x83, 0xb6, 0xfb, 0xa8, 0x04, 0xfc, 0x1e,
+		0x11, 0xfb, 0x02, 0x23, 0x53, 0x49, 0x45, 0xb1,
+		0x07, 0xfc, 0xba, 0xe7, 0x5f, 0x5d, 0x2d, 0x7f,
+		0x9e, 0x46, 0xba, 0xe9, 0xb0, 0xdb, 0x32, 0x04,
+		0xa4, 0xa7, 0x98, 0xab, 0x91, 0xcd, 0x02, 0x05,
+		0xf5, 0x74, 0x31, 0x98, 0x83, 0x3d, 0x33, 0x11,
+		0x0e, 0xe3, 0x8d, 0xa8, 0xc9, 0x0e, 0xf3, 0xb9,
+		0x47, 0x67, 0xe9, 0x79, 0x2b, 0x34, 0xcd, 0x9b,
+		0x45, 0x75, 0x29, 0xf0, 0xbf, 0xcc, 0xda, 0x3a,
+		0x91, 0xb2, 0x15, 0x27, 0x7a, 0xe5, 0xf5, 0x6a,
+		0x5e, 0xbe, 0x2c, 0x98, 0xe8, 0x40, 0x96, 0x4f,
+		0x8a, 0x09, 0xfd, 0xf6, 0xb2, 0xe7, 0x45, 0xb6,
+		0x08, 0xc1, 0x69, 0xe1, 0xb3, 0xc4, 0x24, 0x34,
+		0x07, 0x85, 0xd5, 0xa9, 0x78, 0xca, 0xfa, 0x4b,
+		0x01, 0x19, 0x4d, 0x95, 0xdc, 0xa5, 0xc1, 0x9c,
+		0xec, 0x27, 0x5b, 0xa6, 0x54, 0x25, 0xbd, 0xc8,
+		0x0a, 0xb7, 0x11, 0xfb, 0x4e, 0xeb, 0x65, 0x2e,
+		0xe1, 0x08, 0x9c, 0x3a, 0x45, 0x44, 0x33, 0xef,
+		0x0d, 0xb9, 0xff, 0x3e, 0x68, 0x9c, 0x61, 0x2b,
+		0x11, 0xb8, 0x5c, 0x47, 0x0f, 0x94, 0xf2, 0xf8,
+		0x0b, 0xbb, 0x99, 0x18, 0x85, 0xa3, 0xba, 0x44,
+		0xf3, 0x79, 0xb3, 0x63, 0x2c, 0x1f, 0x2a, 0x35,
+		0x3b, 0x23, 0x98, 0xab, 0xf4, 0x16, 0x36, 0xf8,
+		0xde, 0x86, 0xa4, 0xd4, 0x75, 0xff, 0x51, 0xf9,
+		0xeb, 0x42, 0x5f, 0x55, 0xe2, 0xbe, 0xd1, 0x5b,
+		0xb5, 0x38, 0xeb, 0xb4, 0x4d, 0xec, 0xec, 0x99,
+		0xe1, 0x39, 0x43, 0xaa, 0x64, 0xf7, 0xc9, 0xd8,
+		0xf2, 0x9a, 0x71, 0x43, 0x39, 0x17, 0xe8, 0xa8,
+		0xa2, 0xe2, 0xa4, 0x2c, 0x18, 0x11, 0x49, 0xdf,
+		0x18, 0xdd, 0x85, 0x6e, 0x65, 0x96, 0xe2, 0xba,
+		0xa1, 0x0a, 0x2c, 0xca, 0xdc, 0x5f, 0xe4, 0xf4,
+		0x35, 0x03, 0xb2, 0xa9, 0xda, 0xcf, 0xb7, 0x6d,
+		0x65, 0x82, 0x82, 0x67, 0x9d, 0x0e, 0xf3, 0xe8,
+		0x85, 0x6c, 0x69, 0xb8, 0x4c, 0xa6, 0xc6, 0x2e,
+		0x40, 0xb5, 0x54, 0x28, 0x95, 0xe4, 0x57, 0xe0,
+		0x5b, 0xf8, 0xde, 0x59, 0xe0, 0xfd, 0x89, 0x48,
+		0xac, 0x56, 0x13, 0x54, 0xb9, 0x1b, 0xf5, 0x59,
+		0x97, 0xb6, 0xb3, 0xe8, 0xac, 0x2d, 0xfc, 0xd2,
+		0xea, 0x57, 0x96, 0x57, 0xa8, 0x26, 0x97, 0x2c,
+		0x01, 0x89, 0x56, 0xea, 0xec, 0x8c, 0x53, 0xd5,
+		0xd7, 0x9e, 0xc9, 0x98, 0x0b, 0xad, 0x03, 0x75,
+		0xa0, 0x6e, 0x98, 0x8b, 0x97, 0x8d, 0x8d, 0x85,
+		0x7d, 0x74, 0xa7, 0x2d, 0xde, 0x67, 0x0c, 0xcd,
+		0x54, 0xb8, 0x15, 0x7b, 0xeb, 0xf5, 0x84, 0xb9,
+		0x78, 0xab, 0xd8, 0x68, 0x91, 0x1f, 0x6a, 0xa6,
+		0x28, 0x22, 0xf7, 0x00, 0x49, 0x00, 0xbe, 0x41,
+		0x71, 0x0a, 0xf5, 0xe7, 0x9f, 0xb4, 0x11, 0x41,
+		0x3f, 0xcd, 0xa9, 0xa9, 0x01, 0x8b, 0x6a, 0xeb,
+		0x54, 0x4c, 0x58, 0x92, 0x68, 0x02, 0x0e, 0xe9,
+		0xed, 0x65, 0x4c, 0xfb, 0x95, 0x48, 0x58, 0xa2,
+		0xaa, 0x57, 0x69, 0x13, 0x82, 0x0c, 0x2c, 0x4b,
+		0x5d, 0x4e, 0x18, 0x30, 0xef, 0x1c, 0xb1, 0x9d,
+		0x05, 0x05, 0x02, 0x1c, 0x97, 0xc9, 0x48, 0xfe,
+		0x5e, 0x7b, 0x77, 0xa3, 0x1f, 0x2a, 0x81, 0x42,
+		0xf0, 0x4b, 0x85, 0x12, 0x9c, 0x1f, 0x44, 0xb1,
+		0x14, 0x91, 0x92, 0x65, 0x77, 0xb1, 0x87, 0xa2,
+		0xfc, 0xa4, 0xe7, 0xd2, 0x9b, 0xf2, 0x17, 0xf0,
+		0x30, 0x1c, 0x8d, 0x33, 0xbc, 0x25, 0x28, 0x48,
+		0xfd, 0x30, 0x79, 0x0a, 0x99, 0x3e, 0xb4, 0x0f,
+		0x1e, 0xa6, 0x68, 0x76, 0x19, 0x76, 0x29, 0xac,
+		0x5d, 0xb8, 0x1e, 0x42, 0xd6, 0x85, 0x04, 0xbf,
+		0x64, 0x1c, 0x2d, 0x53, 0xe9, 0x92, 0x78, 0xf8,
+		0xc3, 0xda, 0x96, 0x92, 0x10, 0x6f, 0x45, 0x85,
+		0xaf, 0x5e, 0xcc, 0xa8, 0xc0, 0xc6, 0x2e, 0x73,
+		0x51, 0x3f, 0x5e, 0xd7, 0x52, 0x33, 0x71, 0x12,
+		0x6d, 0x85, 0xee, 0xea, 0x85, 0xa8, 0x48, 0x2b,
+		0x40, 0x64, 0x6d, 0x28, 0x73, 0x16, 0xd7, 0x82,
+		0xd9, 0x90, 0xed, 0x1f, 0xa7, 0x5c, 0xb1, 0x5c,
+		0x27, 0xb9, 0x67, 0x8b, 0xb4, 0x17, 0x13, 0x83,
+		0x5f, 0x09, 0x72, 0x0a, 0xd7, 0xa0, 0xec, 0x81,
+		0x59, 0x19, 0xb9, 0xa6, 0x5a, 0x37, 0x34, 0x14,
+		0x47, 0xf6, 0xe7, 0x6c, 0xd2, 0x09, 0x10, 0xe7,
+		0xdd, 0xbb, 0x02, 0xd1, 0x28, 0xfa, 0x01, 0x2c,
+		0x93, 0x64, 0x2e, 0x1b, 0x4c, 0x02, 0x52, 0xcb,
+		0x07, 0xa1, 0xb6, 0x46, 0x02, 0x80, 0xd9, 0x8f,
+		0x5c, 0x62, 0xbe, 0x78, 0x9e, 0x75, 0xc4, 0x97,
+		0x91, 0x39, 0x12, 0x65, 0xb9, 0x3b, 0xc2, 0xd1,
+		0xaf, 0xf2, 0x1f, 0x4e, 0x4d, 0xd1, 0xf0, 0x9f,
+		0xb7, 0x12, 0xfd, 0xe8, 0x75, 0x18, 0xc0, 0x9d,
+		0x8c, 0x70, 0xff, 0x77, 0x05, 0xb6, 0x1a, 0x1f,
+		0x96, 0x48, 0xf6, 0xfe, 0xd5, 0x5d, 0x98, 0xa5,
+		0x72, 0x1c, 0x84, 0x76, 0x3e, 0xb8, 0x87, 0x37,
+		0xdd, 0xd4, 0x3a, 0x45, 0xdd, 0x09, 0xd8, 0xe7,
+		0x09, 0x2f, 0x3e, 0x33, 0x9e, 0x7b, 0x8c, 0xe4,
+		0x85, 0x12, 0x4e, 0xf8, 0x06, 0xb7, 0xb1, 0x85,
+		0x24, 0x96, 0xd8, 0xfe, 0x87, 0x92, 0x81, 0xb1,
+		0xa3, 0x38, 0xb9, 0x56, 0xe1, 0xf6, 0x36, 0x41,
+		0xbb, 0xd6, 0x56, 0x69, 0x94, 0x57, 0xb3, 0xa4,
+		0xca, 0xa4, 0xe1, 0x02, 0x3b, 0x96, 0x71, 0xe0,
+		0xb2, 0x2f, 0x85, 0x48, 0x1b, 0x4a, 0x41, 0x80,
+		0x4b, 0x9c, 0xe0, 0xc9, 0x39, 0xb8, 0xb1, 0xca,
+		0x64, 0x77, 0x46, 0x58, 0xe6, 0x84, 0xd5, 0x2b,
+		0x65, 0xce, 0xe9, 0x09, 0xa3, 0xaa, 0xfb, 0x83,
+		0xa9, 0x28, 0x68, 0xfd, 0xcd, 0xfd, 0x76, 0x83,
+		0xe1, 0x20, 0x22, 0x77, 0x3a, 0xa3, 0xb2, 0x93,
+		0x14, 0x91, 0xfc, 0xe2, 0x17, 0x63, 0x2b, 0xa6,
+		0x29, 0x38, 0x7b, 0x9b, 0x8b, 0x15, 0x77, 0xd6,
+		0xaa, 0x92, 0x51, 0x53, 0x50, 0xff, 0xa0, 0x35,
+		0xa0, 0x59, 0x7d, 0xf0, 0x11, 0x23, 0x49, 0xdf,
+		0x5a, 0x21, 0xc2, 0xfe, 0x35, 0xa0, 0x1d, 0xe2,
+		0xae, 0xa2, 0x8a, 0x61, 0x5b, 0xf7, 0xf1, 0x1c,
+		0x1c, 0xec, 0xc4, 0xf6, 0xdc, 0xaa, 0xc8, 0xc2,
+		0xe5, 0xa1, 0x2e, 0x14, 0xe5, 0xc6, 0xc9, 0x73,
+		0x03, 0x78, 0xeb, 0xed, 0xe0, 0x3e, 0xc5, 0xf4,
+		0xf1, 0x50, 0xb2, 0x01, 0x91, 0x96, 0xf5, 0xbb,
+		0xe1, 0x32, 0xcd, 0xa8, 0x66, 0xbf, 0x73, 0x85,
+		0x94, 0xd6, 0x7e, 0x68, 0xc5, 0xe4, 0xed, 0xd5,
+		0xe3, 0x67, 0x4c, 0xa5, 0xb3, 0x1f, 0xdf, 0xf8,
+		0xb3, 0x73, 0x5a, 0xac, 0xeb, 0x46, 0x16, 0x24,
+		0xab, 0xca, 0xa4, 0xdd, 0x87, 0x0e, 0x24, 0x83,
+		0x32, 0x04, 0x4c, 0xd8, 0xda, 0x7d, 0xdc, 0xe3,
+		0x01, 0x93, 0xf3, 0xc1, 0x5b, 0xbd, 0xc3, 0x1d,
+		0x40, 0x62, 0xde, 0x94, 0x03, 0x85, 0x91, 0x2a,
+		0xa0, 0x25, 0x10, 0xd3, 0x32, 0x9f, 0x93, 0x00,
+		0xa7, 0x8a, 0xfa, 0x77, 0x7c, 0xaf, 0x4d, 0xc8,
+		0x7a, 0xf3, 0x16, 0x2b, 0xba, 0xeb, 0x74, 0x51,
+		0xb8, 0xdd, 0x32, 0xad, 0x68, 0x7d, 0xdd, 0xca,
+		0x60, 0x98, 0xc9, 0x9b, 0xb6, 0x5d, 0x4d, 0x3a,
+		0x66, 0x8a, 0xbe, 0x05, 0xf9, 0x0c, 0xc5, 0xba,
+		0x52, 0x82, 0x09, 0x1f, 0x5a, 0x66, 0x89, 0x69,
+		0xa3, 0x5d, 0x93, 0x50, 0x7d, 0x44, 0xc3, 0x2a,
+		0xb8, 0xab, 0xec, 0xa6, 0x5a, 0xae, 0x4a, 0x6a,
+		0xcd, 0xfd, 0xb6, 0xff, 0x3d, 0x98, 0x05, 0xd9,
+		0x5b, 0x29, 0xc4, 0x6f, 0xe0, 0x76, 0xe2, 0x3f,
+		0xec, 0xd7, 0xa4, 0x91, 0x63, 0xf5, 0x4e, 0x4b,
+		0xab, 0x20, 0x8c, 0x3a, 0x41, 0xed, 0x8b, 0x4b,
+		0xb9, 0x01, 0x21, 0xc0, 0x6d, 0xfd, 0x70, 0x5b,
+		0x20, 0x92, 0x41, 0x89, 0x74, 0xb7, 0xe9, 0x8b,
+		0xfc, 0x6d, 0x17, 0x3f, 0x7f, 0x89, 0x3d, 0x6b,
+		0x8f, 0xbc, 0xd2, 0x57, 0xe9, 0xc9, 0x6e, 0xa7,
+		0x19, 0x26, 0x18, 0xad, 0xef, 0xb5, 0x87, 0xbf,
+		0xb8, 0xa8, 0xd6, 0x7d, 0xdd, 0x5f, 0x94, 0x54,
+		0x09, 0x92, 0x2b, 0xf5, 0x04, 0xf7, 0x36, 0x69,
+		0x8e, 0xf4, 0xdc, 0x1d, 0x6e, 0x55, 0xbb, 0xe9,
+		0x13, 0x05, 0x83, 0x35, 0x9c, 0xed, 0xcf, 0x8c,
+		0x26, 0x8c, 0x7b, 0xc7, 0x0b, 0xba, 0xfd, 0xe2,
+		0x84, 0x5c, 0x2a, 0x79, 0x43, 0x99, 0xb2, 0xc3,
+		0x82, 0x87, 0xc8, 0xcd, 0x37, 0x6d, 0xa1, 0x2b,
+		0x39, 0xb2, 0x38, 0x99, 0xd9, 0xfc, 0x02, 0x15,
+		0x55, 0x21, 0x62, 0x59, 0xeb, 0x00, 0x86, 0x08,
+		0x20, 0xbe, 0x1a, 0x62, 0x4d, 0x7e, 0xdf, 0x68,
+		0x73, 0x5b, 0x5f, 0xaf, 0x84, 0x96, 0x2e, 0x1f,
+		0x6b, 0x03, 0xc9, 0xa6, 0x75, 0x18, 0xe9, 0xd4,
+		0xbd, 0xc8, 0xec, 0x9a, 0x5a, 0xb3, 0x99, 0xab,
+		0x5f, 0x7c, 0x08, 0x7f, 0x69, 0x4d, 0x52, 0xa2,
+		0x30, 0x17, 0x3b, 0x16, 0x15, 0x1b, 0x11, 0x62,
+		0x3e, 0x80, 0x4b, 0x85, 0x7c, 0x9c, 0xd1, 0x3a,
+		0x13, 0x01, 0x5e, 0x45, 0xf1, 0xc8, 0x5f, 0xcd,
+		0x0e, 0x21, 0xf5, 0x82, 0xd4, 0x7b, 0x5c, 0x45,
+		0x27, 0x6b, 0xef, 0xfe, 0xb8, 0xc0, 0x6f, 0xdc,
+		0x60, 0x7b, 0xe4, 0xd5, 0x75, 0x71, 0xe6, 0xe8,
+		0x7d, 0x6b, 0x6d, 0x80, 0xaf, 0x76, 0x41, 0x58,
+		0xb7, 0xac, 0xb7, 0x13, 0x2f, 0x81, 0xcc, 0xf9,
+		0x19, 0x97, 0xe8, 0xee, 0x40, 0x91, 0xfc, 0x89,
+		0x13, 0x1e, 0x67, 0x9a, 0xdb, 0x8f, 0x8f, 0xc7,
+		0x4a, 0xc9, 0xaf, 0x2f, 0x67, 0x01, 0x3c, 0xb8,
+		0xa8, 0x3e, 0x78, 0x93, 0x1b, 0xdf, 0xbb, 0x34,
+		0x0b, 0x1a, 0xfa, 0xc2, 0x2d, 0xc5, 0x1c, 0xec,
+		0x97, 0x4f, 0x48, 0x41, 0x15, 0x0e, 0x75, 0xed,
+		0x66, 0x8c, 0x17, 0x7f, 0xb1, 0x48, 0x13, 0xc1,
+		0xfb, 0x60, 0x06, 0xf9, 0x72, 0x41, 0x3e, 0xcf,
+		0x6e, 0xb6, 0xc8, 0xeb, 0x4b, 0x5a, 0xd2, 0x0c,
+		0x28, 0xda, 0x02, 0x7a, 0x46, 0x21, 0x42, 0xb5,
+		0x34, 0xda, 0xcb, 0x5e, 0xbd, 0x66, 0x5c, 0xca,
+		0xff, 0x52, 0x43, 0x89, 0xf9, 0x10, 0x9a, 0x9e,
+		0x9b, 0xe3, 0xb0, 0x51, 0xe9, 0xf3, 0x0a, 0x35,
+		0x77, 0x54, 0xcc, 0xac, 0xa6, 0xf1, 0x2e, 0x36,
+		0x89, 0xac, 0xc5, 0xc6, 0x62, 0x5a, 0xc0, 0x6d,
+		0xc4, 0xe1, 0xf7, 0x64, 0x30, 0xff, 0x11, 0x40,
+		0x13, 0x89, 0xd8, 0xd7, 0x73, 0x3f, 0x93, 0x08,
+		0x68, 0xab, 0x66, 0x09, 0x1a, 0xea, 0x78, 0xc9,
+		0x52, 0xf2, 0xfd, 0x93, 0x1b, 0x94, 0xbe, 0x5c,
+		0xe5, 0x00, 0x6e, 0x00, 0xb9, 0xea, 0x27, 0xaa,
+		0xb3, 0xee, 0xe3, 0xc8, 0x6a, 0xb0, 0xc1, 0x8e,
+		0x9b, 0x54, 0x40, 0x10, 0x96, 0x06, 0xe8, 0xb3,
+		0xf5, 0x55, 0x77, 0xd7, 0x5c, 0x94, 0xc1, 0x74,
+		0xf3, 0x07, 0x64, 0xac, 0x1c, 0xde, 0xc7, 0x22,
+		0xb0, 0xbf, 0x2a, 0x5a, 0xc0, 0x8f, 0x8a, 0x83,
+		0x50, 0xc2, 0x5e, 0x97, 0xa0, 0xbe, 0x49, 0x7e,
+		0x47, 0xaf, 0xa7, 0x20, 0x02, 0x35, 0xa4, 0x57,
+		0xd9, 0x26, 0x63, 0xdb, 0xf1, 0x34, 0x42, 0x89,
+		0x36, 0xd1, 0x77, 0x6f, 0xb1, 0xea, 0x79, 0x7e,
+		0x95, 0x10, 0x5a, 0xee, 0xa3, 0xae, 0x6f, 0xba,
+		0xa9, 0xef, 0x5a, 0x7e, 0x34, 0x03, 0x04, 0x07,
+		0x92, 0xd6, 0x07, 0x79, 0xaa, 0x14, 0x90, 0x97,
+		0x05, 0x4d, 0xa6, 0x27, 0x10, 0x5c, 0x25, 0x24,
+		0xcb, 0xcc, 0xf6, 0x77, 0x9e, 0x43, 0x23, 0xd4,
+		0x98, 0xef, 0x22, 0xa8, 0xad, 0xf2, 0x26, 0x08,
+		0x59, 0x69, 0xa4, 0xc3, 0x97, 0xe0, 0x5c, 0x6f,
+		0xeb, 0x3d, 0xd4, 0x62, 0x6e, 0x80, 0x61, 0x02,
+		0xf4, 0xfc, 0x94, 0x79, 0xbb, 0x4e, 0x6d, 0xd7,
+		0x30, 0x5b, 0x10, 0x11, 0x5a, 0x3d, 0xa7, 0x50,
+		0x1d, 0x9a, 0x13, 0x5f, 0x4f, 0xa8, 0xa7, 0xb6,
+		0x39, 0xc7, 0xea, 0xe6, 0x19, 0x61, 0x69, 0xc7,
+		0x9a, 0x3a, 0xeb, 0x9d, 0xdc, 0xf7, 0x06, 0x37,
+		0xbd, 0xac, 0xe3, 0x18, 0xff, 0xfe, 0x11, 0xdb,
+		0x67, 0x42, 0xb4, 0xea, 0xa8, 0xbd, 0xb0, 0x76,
+		0xd2, 0x74, 0x32, 0xc2, 0xa4, 0x9c, 0xe7, 0x60,
+		0xc5, 0x30, 0x9a, 0x57, 0x66, 0xcd, 0x0f, 0x02,
+		0x4c, 0xea, 0xe9, 0xd3, 0x2a, 0x5c, 0x09, 0xc2,
+		0xff, 0x6a, 0xde, 0x5d, 0xb7, 0xe9, 0x75, 0x6b,
+		0x29, 0x94, 0xd6, 0xf7, 0xc3, 0xdf, 0xfb, 0x70,
+		0xec, 0xb5, 0x8c, 0xb0, 0x78, 0x7a, 0xee, 0x52,
+		0x5f, 0x8c, 0xae, 0x85, 0xe5, 0x98, 0xa2, 0xb7,
+		0x7c, 0x02, 0x2a, 0xcc, 0x9e, 0xde, 0x99, 0x5f,
+		0x84, 0x20, 0xbb, 0xdc, 0xf2, 0xd2, 0x13, 0x46,
+		0x3c, 0xd6, 0x4d, 0xe7, 0x50, 0xef, 0x55, 0xc3,
+		0x96, 0x9f, 0xec, 0x6c, 0xd8, 0xe2, 0xea, 0xed,
+		0xc7, 0x33, 0xc9, 0xb3, 0x1c, 0x4f, 0x1d, 0x83,
+		0x1d, 0xe4, 0xdd, 0xb2, 0x24, 0x8f, 0xf9, 0xf5
+};
+
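+/*
+ * Expected HMAC-SHA256 digests for the AES-CBC ciphertexts above, one per
+ * request size (keyed with hmac_sha256_key, as used by the tests below).
+ */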
+static const uint8_t HMAC_SHA256_ciphertext_64B_digest[] = {
+		0xc5, 0x6d, 0x4f, 0x29, 0xf4, 0xd2, 0xcc, 0x87,
+		0x3c, 0x81, 0x02, 0x6d, 0x38, 0x7a, 0x67, 0x3e,
+		0x95, 0x9c, 0x5c, 0x8f, 0xda, 0x5c, 0x06, 0xe0,
+		0x65, 0xf1, 0x6c, 0x51, 0x52, 0x49, 0x3e, 0x5f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_128B_digest[] = {
+		0x76, 0x64, 0x2d, 0x69, 0x71, 0x5d, 0x6a, 0xd8,
+		0x9f, 0x74, 0x11, 0x2f, 0x58, 0xe0, 0x4a, 0x2f,
+		0x6c, 0x88, 0x5e, 0x4d, 0x9c, 0x79, 0x83, 0x1c,
+		0x8a, 0x14, 0xd0, 0x07, 0xfb, 0xbf, 0x6c, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_256B_digest[] = {
+		0x05, 0xa7, 0x44, 0xcd, 0x91, 0x8c, 0x95, 0xcf,
+		0x7b, 0x8f, 0xd3, 0x90, 0x86, 0x7e, 0x7b, 0xb9,
+		0x05, 0xd6, 0x6e, 0x7a, 0xc1, 0x7b, 0x26, 0xff,
+		0xd3, 0x4b, 0xe0, 0x22, 0x8b, 0xa8, 0x47, 0x52
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_512B_digest[] = {
+		0x08, 0xb7, 0x29, 0x54, 0x18, 0x7e, 0x97, 0x49,
+		0xc6, 0x7c, 0x9f, 0x94, 0xa5, 0x4f, 0xa2, 0x25,
+		0xd0, 0xe2, 0x30, 0x7b, 0xad, 0x93, 0xc9, 0x12,
+		0x0f, 0xf0, 0xf0, 0x71, 0xc2, 0xf6, 0x53, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_768B_digest[] = {
+		0xe4, 0x3e, 0x73, 0x93, 0x03, 0xaf, 0x6f, 0x9c,
+		0xca, 0x57, 0x3b, 0x4a, 0x6e, 0x83, 0x58, 0xf5,
+		0x66, 0xc2, 0xb4, 0xa7, 0xe0, 0xee, 0x63, 0x6b,
+		0x48, 0xb7, 0x50, 0x45, 0x69, 0xdf, 0x5c, 0x5b
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1024B_digest[] = {
+		0x03, 0xb9, 0x96, 0x26, 0xdc, 0x1c, 0xab, 0xe2,
+		0xf5, 0x70, 0x55, 0x15, 0x67, 0x6e, 0x48, 0x11,
+		0xe7, 0x67, 0xea, 0xfa, 0x5c, 0x6b, 0x28, 0x22,
+		0xc9, 0x0e, 0x67, 0x04, 0xb3, 0x71, 0x7f, 0x88
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1280B_digest[] = {
+		0x01, 0x91, 0xb8, 0x78, 0xd3, 0x21, 0x74, 0xa5,
+		0x1c, 0x8b, 0xd4, 0xd2, 0xc0, 0x49, 0xd7, 0xd2,
+		0x16, 0x46, 0x66, 0x85, 0x50, 0x6d, 0x08, 0xcc,
+		0xc7, 0x0a, 0xa3, 0x71, 0xcc, 0xde, 0xee, 0xdc
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1536B_digest[] = {
+		0xf2, 0xe5, 0xe9, 0x57, 0x53, 0xd7, 0x69, 0x28,
+		0x7b, 0x69, 0xb5, 0x49, 0xa3, 0x31, 0x56, 0x5f,
+		0xa4, 0xe9, 0x87, 0x26, 0x2f, 0xe0, 0x2d, 0xd6,
+		0x08, 0x44, 0x01, 0x71, 0x0c, 0x93, 0x85, 0x84
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1792B_digest[] = {
+		0xf6, 0x57, 0x62, 0x01, 0xbf, 0x2d, 0xea, 0x4a,
+		0xef, 0x43, 0x85, 0x60, 0x18, 0xdf, 0x8b, 0xb4,
+		0x60, 0xc0, 0xfd, 0x2f, 0x90, 0x15, 0xe6, 0x91,
+		0x56, 0x61, 0x68, 0x7f, 0x5e, 0x92, 0xa8, 0xdd
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_2048B_digest[] = {
+		0x81, 0x1a, 0x29, 0xbc, 0x6b, 0x9f, 0xbb, 0xb8,
+		0xef, 0x71, 0x7b, 0x1f, 0x6f, 0xd4, 0x7e, 0x68,
+		0x3a, 0x9c, 0xb9, 0x98, 0x22, 0x81, 0xfa, 0x95,
+		0xee, 0xbc, 0x7f, 0x23, 0x29, 0x88, 0x76, 0xb8
+};
+
+struct crypto_data_params {
+	const char *name;
+	uint16_t length;
+	const char *plaintext;
+	struct crypto_expected_output {
+		const uint8_t *ciphertext;
+		const uint8_t *digest;
+	} expected;
+};
+
+#define MAX_PACKET_SIZE_INDEX	10
+
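+/*
+ * Test vector table: one entry per request size. Each plaintext pointer
+ * addresses the last N bytes of plaintext_quote (excluding its NUL
+ * terminator), so all sizes share a single source buffer; the matching
+ * ciphertext and digest arrays are the expected outputs for that size.
+ */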
+struct crypto_data_params aes_cbc_hmac_sha256_output[MAX_PACKET_SIZE_INDEX] = {
+	{ "64B", 64, &plaintext_quote[sizeof(plaintext_quote) - 1 - 64],
+		{ AES_CBC_ciphertext_64B, HMAC_SHA256_ciphertext_64B_digest } },
+	{ "128B", 128, &plaintext_quote[sizeof(plaintext_quote) - 1 - 128],
+		{ AES_CBC_ciphertext_128B, HMAC_SHA256_ciphertext_128B_digest } },
+	{ "256B", 256, &plaintext_quote[sizeof(plaintext_quote) - 1 - 256],
+		{ AES_CBC_ciphertext_256B, HMAC_SHA256_ciphertext_256B_digest } },
+	{ "512B", 512, &plaintext_quote[sizeof(plaintext_quote) - 1 - 512],
+		{ AES_CBC_ciphertext_512B, HMAC_SHA256_ciphertext_512B_digest } },
+	{ "768B", 768, &plaintext_quote[sizeof(plaintext_quote) - 1 - 768],
+		{ AES_CBC_ciphertext_768B, HMAC_SHA256_ciphertext_768B_digest } },
+	{ "1024B", 1024, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1024],
+		{ AES_CBC_ciphertext_1024B, HMAC_SHA256_ciphertext_1024B_digest } },
+	{ "1280B", 1280, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1280],
+		{ AES_CBC_ciphertext_1280B, HMAC_SHA256_ciphertext_1280B_digest } },
+	{ "1536B", 1536, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1536],
+		{ AES_CBC_ciphertext_1536B, HMAC_SHA256_ciphertext_1536B_digest } },
+	{ "1792B", 1792, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1792],
+		{ AES_CBC_ciphertext_1792B, HMAC_SHA256_ciphertext_1792B_digest } },
+	{ "2048B", 2048, &plaintext_quote[sizeof(plaintext_quote) - 1 - 2048],
+		{ AES_CBC_ciphertext_2048B, HMAC_SHA256_ciphertext_2048B_digest } }
+};
+
+static int
+test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
+{
+	uint32_t num_to_submit = 2048;
+	struct rte_mbuf *rx_mbufs[num_to_submit], *tx_mbufs[num_to_submit];
+	uint64_t failed_polls, retries, start_cycles, end_cycles, total_cycles = 0;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, burst_size, num_sent, num_received;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is the kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
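+	/*
+	 * Note: CIPHER_IV_LENGTH_AES_CBC is reused as the key length below;
+	 * for AES-128 CBC the key and the IV are both 16 bytes, so the two
+	 * values coincide.
+	 */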
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure(s) */
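+	/*
+	 * Each mbuf is laid out as [IV | payload | digest]: the payload is
+	 * the expected 64B ciphertext, the reference digest is appended for
+	 * the VERIFY op, and the IV is prepended last so that the cipher and
+	 * hash offsets below start just after it.
+	 */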
+	for (b = 0; b < num_to_submit ; b++) {
+		tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+				(const char *)data_params[0].expected.ciphertext,
+				data_params[0].length, 0);
+		TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+		ut_params->digest = (uint8_t *)rte_pktmbuf_append(tx_mbufs[b],
+				DIGEST_BYTE_LENGTH_SHA256);
+		TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+		rte_memcpy(ut_params->digest, data_params[0].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+		struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+				ts_params->mbuf_ol_pool, RTE_PKTMBUF_OL_CRYPTO);
+		TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+		struct rte_crypto_op *cop = &ol->op.crypto;
+
+		rte_crypto_op_attach_session(cop, ut_params->sess);
+
+		cop->digest.data = ut_params->digest;
+		cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(tx_mbufs[b],
+				data_params[0].length);
+		cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+		cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+				CIPHER_IV_LENGTH_AES_CBC);
+		cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+		cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+		rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+		cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_cipher.length = data_params[0].length;
+
+		cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_hash.length = data_params[0].length;
+
+		rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+	}
+
+	printf("\nTest to measure the IA cycle cost of the "
+			"AES128_CBC_SHA256_HMAC algorithm with a constant "
+			"request size of %u.",
+			data_params[0].length);
+	printf("\nThis test will keep retries at 0 and only measure IA cycle "
+			"cost for each request.");
+	printf("\nDev No\tQP No\tNum Sent\tNum Received\tTx/Rx burst");
+	printf("\tRetries (Device Busy)\tAverage IA cycle cost "
+			"(assuming 0 retries)");
+	for (b = 2; b <= 128 ; b *= 2) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+		burst_size = b;
+		total_cycles = 0;
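+		/*
+		 * Time enqueue and dequeue separately; the 1 ms settling
+		 * delay between the two calls is excluded from the cycle
+		 * count.
+		 */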
+		while (num_sent < num_to_submit) {
+			start_cycles = rte_rdtsc_precise();
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0,
+					&tx_mbufs[num_sent],
+					((num_to_submit-num_sent) < burst_size) ?
+					num_to_submit-num_sent : burst_size);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += (end_cycles - start_cycles);
+			/*
+			 * Wait until requests have been sent.
+			 */
+			rte_delay_ms(1);
+
+			start_cycles = rte_rdtsc_precise();
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += end_cycles - start_cycles;
+		}
+		while (num_received != num_to_submit) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+
+		printf("\n%u\t%u\t%u\t\t%u\t\t%u", dev_num, 0,
+					num_sent, num_received, burst_size);
+		printf("\t\t%"PRIu64, retries);
+		printf("\t\t\t%"PRIu64, total_cycles/num_received);
+	}
+	printf("\n");
+
+	/* Free every submitted mbuf and its attached offload chain. */
+	for (b = 0; b < num_to_submit; b++) {
+		struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+		while (ol != NULL) {
+			struct rte_mbuf_offload *next = ol->next;
+
+			rte_pktmbuf_offload_free(ol);
+			ol = next;
+		}
+		rte_pktmbuf_free(tx_mbufs[b]);
+	}
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(uint16_t dev_num)
+{
+	uint16_t index;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, num_sent, num_received, throughput;
+	uint64_t failed_polls, retries, start_cycles, end_cycles;
+	const uint64_t mhz = rte_get_tsc_hz()/1000000;
+	double mmps;
+	struct rte_mbuf *rx_mbufs[DEFAULT_BURST_SIZE], *tx_mbufs[DEFAULT_BURST_SIZE];
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is the kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	printf("\nThroughput test which will continually attempt to send "
+			"AES128_CBC_SHA256_HMAC requests with a constant burst "
+			"size of %u while varying payload sizes", DEFAULT_BURST_SIZE);
+	printf("\nDev No\tQP No\tReq Size(B)\tNum Sent\tNum Received\t"
+			"Mrps\tThroughput(Mbps)");
+	printf("\tRetries (Attempted a burst, but the device was busy)");
+	for (index = 0; index < MAX_PACKET_SIZE_INDEX; index++) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+
+		/* Generate Crypto op data structure(s) */
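+		/*
+		 * Same [IV | payload | digest] layout as in the burst-size
+		 * test above, but here the payload is plaintext: the device
+		 * encrypts it and writes the digest into the appended region.
+		 */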
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+					data_params[index].plaintext,
+					data_params[index].length,
+					0);
+			TEST_ASSERT_NOT_NULL(tx_mbufs[b],
+					"Failed to allocate tx_buf");
+
+			ut_params->digest = (uint8_t *)rte_pktmbuf_append(
+				tx_mbufs[b], DIGEST_BYTE_LENGTH_SHA256);
+			TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+			rte_memcpy(ut_params->digest,
+					data_params[index].expected.digest,
+					DIGEST_BYTE_LENGTH_SHA256);
+
+			struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+						ts_params->mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+			TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+			struct rte_crypto_op *cop = &ol->op.crypto;
+
+			rte_crypto_op_attach_session(cop, ut_params->sess);
+
+			cop->digest.data = ut_params->digest;
+			cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+				tx_mbufs[b], data_params[index].length);
+			cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+			cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+					CIPHER_IV_LENGTH_AES_CBC);
+			cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+			cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+			rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+			cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_cipher.length = data_params[index].length;
+
+			cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_hash.length = data_params[index].length;
+
+			rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+		}
+		start_cycles = rte_rdtsc_precise();
+		while (num_sent < DEFAULT_NUM_REQS_TO_SUBMIT) {
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+				((DEFAULT_NUM_REQS_TO_SUBMIT-num_sent) < DEFAULT_BURST_SIZE) ?
+				DEFAULT_NUM_REQS_TO_SUBMIT-num_sent : DEFAULT_BURST_SIZE);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+					0, rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		while (num_received != DEFAULT_NUM_REQS_TO_SUBMIT) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0,
+						rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		end_cycles = rte_rdtsc_precise();
+		mmps = (double)num_received*mhz/(end_cycles - start_cycles);
+		throughput = mmps*data_params[index].length*8;
+		printf("\n%u\t%u\t%u\t\t%u\t%u", dev_num, 0,
+				data_params[index].length, num_sent, num_received);
+		printf("\t%.2f\t%u", mmps, throughput);
+		printf("\t\t%"PRIu64, retries);
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+			if (ol) {
+				do {
+					/* read next before freeing to avoid
+					 * use-after-free */
+					struct rte_mbuf_offload *next = ol->next;
+
+					rte_pktmbuf_offload_free(ol);
+					ol = next;
+				} while (ol != NULL);
+			}
+			rte_pktmbuf_free(tx_mbufs[b]);
+		}
+	}
+	printf("\n");
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_encrypt_digest_vary_req_size(void)
+{
+	return test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(
+			testsuite_params.dev_id);
+}
+
+static int
+test_perf_vary_burst_size(void)
+{
+	return test_perf_crypto_qp_vary_burst_size(testsuite_params.dev_id);
+}
+
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_encrypt_digest_vary_req_size),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_vary_burst_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static int
+perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_QAT_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_perf_cmd = {
+	.command = "cryptodev_aesni_mb_perftest",
+	.callback = perftest_aesni_mb_cryptodev,
+};
+
+static struct test_command cryptodev_qat_perf_cmd = {
+	.command = "cryptodev_qat_perftest",
+	.callback = perftest_qat_cryptodev,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perf_cmd);
+REGISTER_TEST_COMMAND(cryptodev_qat_perf_cmd);
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 388cf11..2d98958 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -4020,7 +4020,7 @@ test_close_bonded_device(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	if (test_params->pkt_eth_hdr != NULL) {
@@ -4029,7 +4029,7 @@ testsuite_teardown(void)
 	}
 
 	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	remove_slaves_and_stop_bonded_device();
 }
 
 static void
@@ -4993,7 +4993,7 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 460539d..713368d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -453,7 +453,7 @@ test_setup(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -467,8 +467,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 /*
@@ -1390,7 +1388,8 @@ static struct unit_test_suite link_bonding_mode4_test_suite  = {
 		TEST_CASE_NAMED("test_mode4_tx_burst", test_mode4_tx_burst_wrapper),
 		TEST_CASE_NAMED("test_mode4_marker", test_mode4_marker_wrapper),
 		TEST_CASE_NAMED("test_mode4_expired", test_mode4_expired_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e6714b4..0a3162e 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -586,7 +586,7 @@ test_setup(void)
 	return TEST_SUCCESS;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -600,8 +600,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 static int
@@ -661,7 +659,8 @@ static struct unit_test_suite link_bonding_rssconf_test_suite  = {
 		TEST_CASE_NAMED("test_setup", test_setup_wrapper),
 		TEST_CASE_NAMED("test_rss", test_rss_wrapper),
 		TEST_CASE_NAMED("test_rss_lazy", test_rss_lazy_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END()
 	}
 };
 
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7 10/10] l2fwd-crypto: crypto
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
                               ` (8 preceding siblings ...)
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
@ 2015-11-13 18:58             ` Declan Doherty
  2015-11-25  1:03               ` Thomas Monjalon
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
  10 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-13 18:58 UTC (permalink / raw)
  To: dev

This patch creates a new sample application, based off the l2fwd
application, which performs specified crypto operations on the IP
payload of packets which are being forwarded.
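
In outline, each received packet gets a crypto operation attached and is
round-tripped through a cryptodev before being forwarded. A minimal sketch
of that per-packet flow, using the rte_mbuf_offload API from earlier in this
series (error handling omitted; ol_pool, session, cdev_id and qp_id stand
for objects set up as in main.c below):

	struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(ol_pool,
			RTE_PKTMBUF_OL_CRYPTO);

	rte_crypto_op_attach_session(&ol->op.crypto, session);
	/* fill in cipher/hash offsets, IV and digest on ol->op.crypto */
	rte_pktmbuf_offload_attach(m, ol);

	/* hand the packet to the crypto device, then poll for completion */
	rte_cryptodev_enqueue_burst(cdev_id, qp_id, &m, 1);
	rte_cryptodev_dequeue_burst(cdev_id, qp_id, &m, 1);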

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 MAINTAINERS                    |    1 +
 examples/Makefile              |    1 +
 examples/l2fwd-crypto/Makefile |   50 ++
 examples/l2fwd-crypto/main.c   | 1473 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 1525 insertions(+)
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 1f72f8c..fa85e55 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -206,6 +206,7 @@ F: lib/librte_cryptodev
 F: docs/guides/cryptodevs
 F: app/test/test_cryptodev.c
 F: app/test/test_cryptodev_perf.c
+F: examples/l2fwd-crypto
 
 Drivers
 -------
diff --git a/examples/Makefile b/examples/Makefile
index b4eddbd..4bb6f57 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -74,5 +74,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_XEN_DOM0) += vhost_xen
 DIRS-y += vmdq
 DIRS-y += vmdq_dcb
 DIRS-$(CONFIG_RTE_LIBRTE_POWER) += vm_power_manager
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += l2fwd-crypto
 
 include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/l2fwd-crypto/Makefile b/examples/l2fwd-crypto/Makefile
new file mode 100644
index 0000000..e8224ca
--- /dev/null
+++ b/examples/l2fwd-crypto/Makefile
@@ -0,0 +1,50 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = l2fwd-crypto
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
new file mode 100644
index 0000000..10ec513
--- /dev/null
+++ b/examples/l2fwd-crypto/main.c
@@ -0,0 +1,1473 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_ip.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_per_lcore.h>
+#include <rte_prefetch.h>
+#include <rte_random.h>
+#include <rte_ring.h>
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define NB_MBUF   8192
+
+#define MAX_PKT_BURST 32
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 128
+#define RTE_TEST_TX_DESC_DEFAULT 512
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint64_t l2fwd_enabled_port_mask;
+static uint64_t l2fwd_enabled_crypto_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+
+struct pkt_buffer {
+	unsigned len;
+	struct rte_mbuf *buffer[MAX_PKT_BURST];
+};
+
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+enum l2fwd_crypto_xform_chain {
+	L2FWD_CRYPTO_CIPHER_HASH,
+	L2FWD_CRYPTO_HASH_CIPHER
+};
+
+/** l2fwd crypto application command line options */
+struct l2fwd_crypto_options {
+	unsigned portmask;
+	unsigned nb_ports_per_lcore;
+	unsigned refresh_period;
+	unsigned single_lcore:1;
+	unsigned no_stats_printing:1;
+
+	enum rte_cryptodev_type cdev_type;
+	unsigned sessionless:1;
+
+	enum l2fwd_crypto_xform_chain xform_chain;
+
+	struct rte_crypto_xform cipher_xform;
+	uint8_t ckey_data[32];
+
+	struct rte_crypto_key iv_key;
+	uint8_t ivkey_data[16];
+
+	struct rte_crypto_xform auth_xform;
+	uint8_t akey_data[128];
+};
+
+/** l2fwd crypto lcore params */
+struct l2fwd_crypto_params {
+	uint8_t dev_id;
+	uint8_t qp_id;
+
+	unsigned digest_length;
+	unsigned block_size;
+
+	struct rte_crypto_key iv_key;
+	struct rte_cryptodev_session *session;
+};
+
+/** lcore configuration */
+struct lcore_queue_conf {
+	unsigned nb_rx_ports;
+	unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+
+	unsigned nb_crypto_devs;
+	unsigned cryptodev_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct pkt_buffer crypto_pkt_buf[RTE_MAX_ETHPORTS];
+	struct pkt_buffer tx_pkt_buf[RTE_MAX_ETHPORTS];
+} __rte_cache_aligned;
+
+struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static const struct rte_eth_conf port_conf = {
+	.rxmode = {
+		.split_hdr_size = 0,
+		.header_split   = 0, /**< Header Split disabled */
+		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
+		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
+		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
+		.hw_strip_crc   = 0, /**< CRC stripping by hardware disabled */
+	},
+	.txmode = {
+		.mq_mode = ETH_MQ_TX_NONE,
+	},
+};
+
+struct rte_mempool *l2fwd_pktmbuf_pool;
+struct rte_mempool *l2fwd_mbuf_ol_pool;
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+	uint64_t tx;
+	uint64_t rx;
+
+	uint64_t crypto_enqueued;
+	uint64_t crypto_dequeued;
+
+	uint64_t dropped;
+} __rte_cache_aligned;
+
+struct l2fwd_crypto_statistics {
+	uint64_t enqueued;
+	uint64_t dequeued;
+
+	uint64_t errors;
+} __rte_cache_aligned;
+
+struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+struct l2fwd_crypto_statistics crypto_statistics[RTE_MAX_ETHPORTS];
+
+/* A tsc-based timer responsible for triggering statistics printout */
+#define TIMER_MILLISECOND 2000000ULL /* around 1ms at 2 GHz */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+
+/* default period is 10 seconds */
+static int64_t timer_period = 10 * TIMER_MILLISECOND * 1000;
+
+uint64_t total_packets_dropped = 0, total_packets_tx = 0, total_packets_rx = 0,
+	total_packets_enqueued = 0, total_packets_dequeued = 0,
+	total_packets_errors = 0;
+
+/* Print out statistics on packets dropped */
+static void
+print_stats(void)
+{
+	unsigned portid;
+	uint64_t cdevid;
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+	/* reset aggregate totals so periodic prints don't double count */
+	total_packets_dropped = 0;
+	total_packets_tx = 0;
+	total_packets_rx = 0;
+	total_packets_enqueued = 0;
+	total_packets_dequeued = 0;
+	total_packets_errors = 0;
+
+	/* Clear screen and move to top left */
+	printf("%s%s", clr, topLeft);
+
+	printf("\nPort statistics ====================================");
+
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+			continue;
+		printf("\nStatistics for port %u ------------------------------"
+			   "\nPackets sent: %32"PRIu64
+			   "\nPackets received: %28"PRIu64
+			   "\nPackets dropped: %29"PRIu64,
+			   portid,
+			   port_statistics[portid].tx,
+			   port_statistics[portid].rx,
+			   port_statistics[portid].dropped);
+
+		total_packets_dropped += port_statistics[portid].dropped;
+		total_packets_tx += port_statistics[portid].tx;
+		total_packets_rx += port_statistics[portid].rx;
+	}
+	printf("\nCrypto statistics ==================================");
+
+	for (cdevid = 0; cdevid < RTE_CRYPTO_MAX_DEVS; cdevid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_crypto_mask & (1lu << cdevid)) == 0)
+			continue;
+		printf("\nStatistics for cryptodev %lu -------------------------"
+			   "\nPackets enqueued: %28"PRIu64
+			   "\nPackets dequeued: %28"PRIu64
+			   "\nPackets errors: %30"PRIu64,
+			   cdevid,
+			   crypto_statistics[cdevid].enqueued,
+			   crypto_statistics[cdevid].dequeued,
+			   crypto_statistics[cdevid].errors);
+
+		total_packets_enqueued += crypto_statistics[cdevid].enqueued;
+		total_packets_dequeued += crypto_statistics[cdevid].dequeued;
+		total_packets_errors += crypto_statistics[cdevid].errors;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %22"PRIu64
+		   "\nTotal packets enqueued: %22"PRIu64
+		   "\nTotal packets dequeued: %22"PRIu64
+		   "\nTotal packets sent: %26"PRIu64
+		   "\nTotal packets dropped: %23"PRIu64
+		   "\nTotal packets crypto errors: %17"PRIu64,
+		   total_packets_rx,
+		   total_packets_enqueued,
+		   total_packets_dequeued,
+		   total_packets_tx,
+		   total_packets_dropped,
+		   total_packets_errors);
+	printf("\n====================================================\n");
+}
+
+
+
+static int
+l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+
+	pkt_buffer = (struct rte_mbuf **)
+			qconf->crypto_pkt_buf[cparams->dev_id].buffer;
+
+	ret = rte_cryptodev_enqueue_burst(cparams->dev_id, cparams->qp_id,
+			pkt_buffer, (uint16_t) n);
+	crypto_statistics[cparams->dev_id].enqueued += ret;
+	if (unlikely(ret < n)) {
+		crypto_statistics[cparams->dev_id].errors += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+static int
+l2fwd_crypto_enqueue(struct rte_mbuf *m, struct l2fwd_crypto_params *cparams)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->crypto_pkt_buf[cparams->dev_id].len;
+	qconf->crypto_pkt_buf[cparams->dev_id].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (len == MAX_PKT_BURST) {
+		l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
+		len = 0;
+	}
+
+	qconf->crypto_pkt_buf[cparams->dev_id].len = len;
+	return 0;
+}
+
+static int
+l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
+		struct rte_mbuf_offload *ol,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct ether_hdr *eth_hdr;
+	struct ipv4_hdr *ip_hdr;
+
+	unsigned ipdata_offset, pad_len, data_len;
+	char *padding;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (eth_hdr->ether_type != rte_cpu_to_be_16(ETHER_TYPE_IPv4))
+		return -1;
+
+	ipdata_offset = sizeof(struct ether_hdr);
+
+	ip_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
+			ipdata_offset);
+
+	ipdata_offset += (ip_hdr->version_ihl & IPV4_HDR_IHL_MASK)
+			* IPV4_IHL_MULTIPLIER;
+
+
+	/* Zero pad data to be crypto'd so it is block aligned */
+	data_len  = rte_pktmbuf_data_len(m) - ipdata_offset;
+	pad_len = data_len % cparams->block_size ? cparams->block_size -
+			(data_len % cparams->block_size) : 0;
+
+	if (pad_len) {
+		padding = rte_pktmbuf_append(m, pad_len);
+		if (unlikely(!padding))
+			return -1;
+
+		data_len += pad_len;
+		memset(padding, 0, pad_len);
+	}
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(&ol->op.crypto, cparams->session);
+
+	/* Append space for digest to end of packet */
+	ol->op.crypto.digest.data = (uint8_t *)rte_pktmbuf_append(m,
+			cparams->digest_length);
+	ol->op.crypto.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
+			rte_pktmbuf_pkt_len(m) - cparams->digest_length);
+	ol->op.crypto.digest.length = cparams->digest_length;
+
+	ol->op.crypto.iv.data = cparams->iv_key.data;
+	ol->op.crypto.iv.phys_addr = cparams->iv_key.phys_addr;
+	ol->op.crypto.iv.length = cparams->iv_key.length;
+
+	ol->op.crypto.data.to_cipher.offset = ipdata_offset;
+	ol->op.crypto.data.to_cipher.length = data_len;
+
+	ol->op.crypto.data.to_hash.offset = ipdata_offset;
+	ol->op.crypto.data.to_hash.length = data_len;
+
+	rte_pktmbuf_offload_attach(m, ol);
+
+	return l2fwd_crypto_enqueue(m, cparams);
+}
+
+
+/* Send the burst of packets on an output interface */
+static int
+l2fwd_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint8_t port)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+	unsigned queueid = 0;
+
+	pkt_buffer = (struct rte_mbuf **)qconf->tx_pkt_buf[port].buffer;
+
+	ret = rte_eth_tx_burst(port, (uint16_t) queueid, pkt_buffer,
+			(uint16_t)n);
+	port_statistics[port].tx += ret;
+	if (unlikely(ret < n)) {
+		port_statistics[port].dropped += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+/* Enqueue packets for TX and prepare them to be sent */
+static int
+l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->tx_pkt_buf[port].len;
+	qconf->tx_pkt_buf[port].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (unlikely(len == MAX_PKT_BURST)) {
+		l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
+		len = 0;
+	}
+
+	qconf->tx_pkt_buf[port].len = len;
+	return 0;
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
+{
+	struct ether_hdr *eth;
+	void *tmp;
+	unsigned dst_port;
+
+	dst_port = l2fwd_dst_ports[portid];
+	eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	/* 02:00:00:00:00:xx */
+	tmp = &eth->d_addr.addr_bytes[0];
+	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+	/* src addr */
+	ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+
+	l2fwd_send_packet(m, (uint8_t) dst_port);
+}
+
+/** Generate random key */
+static void
+generate_random_key(uint8_t *key, unsigned length)
+{
+	unsigned i;
+
+	for (i = 0; i < length; i++)
+		key[i] = rand() & 0xff;
+}
+
+static struct rte_cryptodev_session *
+initialize_crypto_session(struct l2fwd_crypto_options *options,
+		uint8_t cdev_id)
+{
+	struct rte_crypto_xform *first_xform;
+
+	if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
+		first_xform = &options->cipher_xform;
+		first_xform->next = &options->auth_xform;
+	} else {
+		first_xform = &options->auth_xform;
+		first_xform->next = &options->cipher_xform;
+	}
+
+	/* Setup Cipher Parameters */
+	return rte_cryptodev_session_create(cdev_id, first_xform);
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options);
+
+/* main processing loop */
+static void
+l2fwd_main_loop(struct l2fwd_crypto_options *options)
+{
+	struct rte_mbuf *m, *pkts_burst[MAX_PKT_BURST];
+	unsigned lcore_id = rte_lcore_id();
+	uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+	unsigned i, j, portid, nb_rx;
+	struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+	struct l2fwd_crypto_params *cparams;
+	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
+
+	if (qconf->nb_rx_ports == 0) {
+		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+	l2fwd_crypto_options_print(options);
+
+	for (i = 0; i < qconf->nb_rx_ports; i++) {
+
+		portid = qconf->rx_port_list[i];
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+			portid);
+	}
+
+	for (i = 0; i < qconf->nb_crypto_devs; i++) {
+		port_cparams[i].dev_id = qconf->cryptodev_list[i];
+		port_cparams[i].qp_id = 0;
+
+		port_cparams[i].block_size = 64;
+		port_cparams[i].digest_length = 20;
+
+		port_cparams[i].iv_key.data =
+				(uint8_t *)rte_malloc(NULL, 16, 8);
+		port_cparams[i].iv_key.length = 16;
+		port_cparams[i].iv_key.phys_addr = rte_malloc_virt2phy(
+				(void *)port_cparams[i].iv_key.data);
+		generate_random_key(port_cparams[i].iv_key.data,
+				port_cparams[i].iv_key.length);
+
+		port_cparams[i].session = initialize_crypto_session(options,
+				port_cparams[i].dev_id);
+
+		if (port_cparams[i].session == NULL)
+			return;
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u cryptoid=%u\n", lcore_id,
+				port_cparams[i].dev_id);
+	}
+
+	while (1) {
+
+		cur_tsc = rte_rdtsc();
+
+		/*
+		 * TX burst queue drain
+		 */
+		diff_tsc = cur_tsc - prev_tsc;
+		if (unlikely(diff_tsc > drain_tsc)) {
+
+			for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+				if (qconf->tx_pkt_buf[portid].len == 0)
+					continue;
+				l2fwd_send_burst(&lcore_queue_conf[lcore_id],
+						 qconf->tx_pkt_buf[portid].len,
+						 (uint8_t) portid);
+				qconf->tx_pkt_buf[portid].len = 0;
+			}
+
+			/* if timer is enabled */
+			if (timer_period > 0) {
+
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >=
+						(uint64_t)timer_period)) {
+
+					/* do this only on master core */
+					if (lcore_id == rte_get_master_lcore() &&
+							!options->no_stats_printing) {
+						print_stats();
+						/* reset the timer */
+						timer_tsc = 0;
+					}
+				}
+			}
+
+			prev_tsc = cur_tsc;
+		}
+
+		/*
+		 * Read packet from RX queues
+		 */
+		for (i = 0; i < qconf->nb_rx_ports; i++) {
+			struct rte_mbuf_offload *ol;
+
+			portid = qconf->rx_port_list[i];
+
+			cparams = &port_cparams[i];
+
+			nb_rx = rte_eth_rx_burst((uint8_t) portid, 0,
+						 pkts_burst, MAX_PKT_BURST);
+
+			port_statistics[portid].rx += nb_rx;
+
+			/* Enqueue packets from Crypto device*/
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				ol = rte_pktmbuf_offload_alloc(
+						l2fwd_mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				rte_prefetch0((void *)ol);
+				l2fwd_simple_crypto_enqueue(m, ol, cparams);
+			}
+
+			/* Dequeue packets from Crypto device */
+			nb_rx = rte_cryptodev_dequeue_burst(
+					cparams->dev_id, cparams->qp_id,
+					pkts_burst, MAX_PKT_BURST);
+			crypto_statistics[cparams->dev_id].dequeued += nb_rx;
+
+			/* Forward crypto'd packets */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				rte_pktmbuf_offload_free(m->offload_ops);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				l2fwd_simple_forward(m, portid);
+			}
+		}
+	}
+}
+
+static int
+l2fwd_launch_one_lcore(void *arg)
+{
+	l2fwd_main_loop((struct l2fwd_crypto_options *)arg);
+	return 0;
+}
+
+/* Display command line arguments usage */
+static void
+l2fwd_crypto_usage(const char *prgname)
+{
+	printf("%s [EAL options] -- --cdev TYPE [optional parameters]\n"
+		"  -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+		"  -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+		"  -s manage all ports from single lcore"
+		"  -t PERIOD: statistics will be refreshed each PERIOD seconds"
+		" (0 to disable, 10 default, 86400 maximum)\n"
+
+		"  --cdev AESNI_MB / QAT\n"
+		"  --chain HASH_CIPHER / CIPHER_HASH\n"
+
+		"  --cipher_algo ALGO\n"
+		"  --cipher_op ENCRYPT / DECRYPT\n"
+		"  --cipher_key KEY\n"
+
+		"  --auth ALGO\n"
+		"  --auth_op GENERATE / VERIFY\n"
+		"  --auth_key KEY\n"
+
+		"  --sessionless\n",
+	       prgname);
+}
+
+/** Parse crypto device type command line argument */
+static int
+parse_cryptodev_type(enum rte_cryptodev_type *type, char *optarg)
+{
+	if (strcmp("AESNI_MB", optarg) == 0) {
+		*type = RTE_CRYPTODEV_AESNI_MB_PMD;
+		return 0;
+	} else if (strcmp("QAT", optarg) == 0) {
+		*type = RTE_CRYPTODEV_QAT_PMD;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto chain xform command line argument */
+static int
+parse_crypto_opt_chain(struct l2fwd_crypto_options *options, char *optarg)
+{
+	if (strcmp("CIPHER_HASH", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+		return 0;
+	} else if (strcmp("HASH_CIPHER", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_HASH_CIPHER;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto cipher algo option command line argument */
+static int
+parse_cipher_algo(enum rte_crypto_cipher_algorithm *algo, char *optarg)
+{
+	if (strcmp("AES_CBC", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_CBC;
+		return 0;
+	} else if (strcmp("AES_GCM", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_GCM;
+		return 0;
+	}
+
+	printf("Cipher algorithm  not supported!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_cipher_op(enum rte_crypto_cipher_operation *op, char *optarg)
+{
+	if (strcmp("ENCRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+		return 0;
+	} else if (strcmp("DECRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+		return 0;
+	}
+
+	printf("Cipher operation not supported!\n");
+	return -1;
+}
+
+/** Parse crypto key command line argument */
+static int
+parse_key(struct rte_crypto_key *key __rte_unused,
+		unsigned length __rte_unused, char *arg __rte_unused)
+{
+	printf("Currently an unsupported argument!\n");
+	return -1;
+}
+
+/** Parse crypto authentication algorithm command line argument */
+static int
+parse_auth_algo(enum rte_crypto_auth_algorithm *algo, char *optarg)
+{
+	if (strcmp("SHA1", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1;
+		return 0;
+	} else if (strcmp("SHA1_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		return 0;
+	} else if (strcmp("SHA224", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224;
+		return 0;
+	} else if (strcmp("SHA224_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		return 0;
+	} else if (strcmp("SHA256", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256;
+		return 0;
+	} else if (strcmp("SHA256_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		return 0;
+	} else if (strcmp("SHA512", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256;
+		return 0;
+	} else if (strcmp("SHA512_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		return 0;
+	}
+
+	printf("Authentication algorithm specified not supported!\n");
+	return -1;
+}
+
+static int
+parse_auth_op(enum rte_crypto_auth_operation *op, char *optarg)
+{
+	if (strcmp("VERIFY", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_VERIFY;
+		return 0;
+	} else if (strcmp("GENERATE", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_VERIFY;
+		return 0;
+	}
+
+	printf("Authentication operation specified not supported!\n");
+	return -1;
+}
+
+/** Parse long options */
+static int
+l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
+		struct option *lgopts, int option_index)
+{
+	if (strcmp(lgopts[option_index].name, "no_stats") == 0) {
+		options->no_stats_printing = 1;
+		return 0;
+	}
+
+	if (strcmp(lgopts[option_index].name, "cdev_type") == 0)
+		return parse_cryptodev_type(&options->cdev_type, optarg);
+
+	else if (strcmp(lgopts[option_index].name, "chain") == 0)
+		return parse_crypto_opt_chain(options, optarg);
+
+	/* Cipher options */
+	else if (strcmp(lgopts[option_index].name, "cipher_algo") == 0)
+		return parse_cipher_algo(&options->cipher_xform.cipher.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_op") == 0)
+		return parse_cipher_op(&options->cipher_xform.cipher.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_key") == 0)
+		return parse_key(&options->cipher_xform.cipher.key,
+				sizeof(options->ckey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "iv") == 0)
+		return parse_key(&options->iv_key, sizeof(options->ivkey_data),
+				optarg);
+
+	/* Authentication options */
+	else if (strcmp(lgopts[option_index].name, "auth_algo") == 0)
+		return parse_auth_algo(&options->cipher_xform.auth.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_op") == 0)
+		return parse_auth_op(&options->cipher_xform.auth.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_key") == 0)
+		return parse_key(&options->auth_xform.auth.key,
+				sizeof(options->akey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "sessionless") == 0) {
+		options->sessionless = 1;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse port mask */
+static int
+l2fwd_crypto_parse_portmask(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long pm;
+
+	/* parse hexadecimal string */
+	pm = strtoul(q_arg, &end, 16);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		pm = 0;
+
+	options->portmask = pm;
+	if (options->portmask == 0) {
+		printf("invalid portmask specified\n");
+		return -1;
+	}
+
+	return pm;
+}
+
+/** Parse number of queues */
+static int
+l2fwd_crypto_parse_nqueue(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long n;
+
+	/* parse hexadecimal string */
+	n = strtoul(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+	else if (n >= MAX_RX_QUEUE_PER_LCORE)
+		n = 0;
+
+	options->nb_ports_per_lcore = n;
+	if (options->nb_ports_per_lcore == 0) {
+		printf("invalid number of ports selected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse timer period */
+static int
+l2fwd_crypto_parse_timer_period(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	int n;
+
+	/* parse number string */
+	n = strtol(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+
+	if (n >= MAX_TIMER_PERIOD)
+		n = 0;
+
+	options->refresh_period = n * 1000 * TIMER_MILLISECOND;
+	if (options->refresh_period == 0) {
+		printf("invalid refresh period specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Generate default options for application */
+static void
+l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
+{
+	srand(time(NULL));
+
+	options->portmask = 0xffffffff;
+	options->nb_ports_per_lcore = 1;
+	options->refresh_period = 10000;
+	options->single_lcore = 0;
+	options->no_stats_printing = 0;
+
+	options->cdev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	options->sessionless = 0;
+	options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+
+	/* Cipher Data */
+	options->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	options->cipher_xform.next = NULL;
+
+	options->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	options->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+
+	generate_random_key(options->ckey_data, sizeof(options->ckey_data));
+
+	options->cipher_xform.cipher.key.data = options->ckey_data;
+	options->cipher_xform.cipher.key.phys_addr = 0;
+	options->cipher_xform.cipher.key.length = 16;
+
+
+	/* Authentication Data */
+	options->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	options->auth_xform.next = NULL;
+
+	options->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	options->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+
+	options->auth_xform.auth.add_auth_data_length = 0;
+	options->auth_xform.auth.digest_length = 20;
+
+	generate_random_key(options->akey_data, sizeof(options->akey_data));
+
+	options->auth_xform.auth.key.data = options->akey_data;
+	options->auth_xform.auth.key.phys_addr = 0;
+	options->auth_xform.auth.key.length = 20;
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
+{
+	printf("Options:-\nn");
+	printf("portmask: %x\n", options->portmask);
+	printf("ports per lcore: %u\n", options->nb_ports_per_lcore);
+	printf("refresh period : %u\n", options->refresh_period);
+	printf("single lcore mode: %s\n",
+			options->single_lcore ? "enabled" : "disabled");
+	printf("stats_printing: %s\n",
+			options->no_stats_printing ? "disabled" : "enabled");
+
+	switch (options->cdev_type) {
+	case RTE_CRYPTODEV_AESNI_MB_PMD:
+		printf("crytpodev type: AES-NI MB PMD\n"); break;
+	case RTE_CRYPTODEV_QAT_PMD:
+		printf("crytpodev type: QAT PMD\n"); break;
+	default:
+		break;
+	}
+
+	printf("sessionless crypto: %s\n",
+			options->sessionless ? "enabled" : "disabled");
+}
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
+		int argc, char **argv)
+{
+	int opt, retval, option_index;
+	char **argvopt = argv, *prgname = argv[0];
+
+	static struct option lgopts[] = {
+			{ "no_stats", no_argument, 0, 0 },
+			{ "sessionless", no_argument, 0, 0 },
+
+			{ "cdev_type", required_argument, 0, 0 },
+			{ "chain", required_argument, 0, 0 },
+
+			{ "cipher_algo", required_argument, 0, 0 },
+			{ "cipher_op", required_argument, 0, 0 },
+			{ "cipher_key", required_argument, 0, 0 },
+
+			{ "auth_algo", required_argument, 0, 0 },
+			{ "auth_op", required_argument, 0, 0 },
+			{ "auth_key", required_argument, 0, 0 },
+
+			{ "iv", required_argument, 0, 0 },
+
+			{ "sessionless", no_argument, 0, 0 },
+			{ NULL, 0, 0, 0 }
+	};
+
+	l2fwd_crypto_default_options(options);
+
+	while ((opt = getopt_long(argc, argvopt, "p:q:st:", lgopts,
+			&option_index)) != EOF) {
+		switch (opt) {
+		/* long options */
+		case 0:
+			retval = l2fwd_crypto_parse_args_long_options(options,
+					lgopts, option_index);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* portmask */
+		case 'p':
+			retval = l2fwd_crypto_parse_portmask(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* nqueue */
+		case 'q':
+			retval = l2fwd_crypto_parse_nqueue(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* single  */
+		case 's':
+			options->single_lcore = 1;
+
+			break;
+
+		/* timer period */
+		case 't':
+			retval = l2fwd_crypto_parse_timer_period(options,
+					optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		default:
+			l2fwd_crypto_usage(prgname);
+			return -1;
+		}
+	}
+
+
+	if (optind >= 0)
+		argv[optind-1] = prgname;
+
+	retval = optind-1;
+	optind = 0; /* reset getopt lib */
+
+	return retval;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint8_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+	uint8_t portid, count, all_ports_up, print_flag = 0;
+	struct rte_eth_link link;
+
+	printf("\nChecking link status");
+	fflush(stdout);
+	for (count = 0; count <= MAX_CHECK_TIME; count++) {
+		all_ports_up = 1;
+		for (portid = 0; portid < port_num; portid++) {
+			if ((port_mask & (1 << portid)) == 0)
+				continue;
+			memset(&link, 0, sizeof(link));
+			rte_eth_link_get_nowait(portid, &link);
+			/* print link status if flag set */
+			if (print_flag == 1) {
+				if (link.link_status)
+					printf("Port %d Link Up - speed %u "
+						"Mbps - %s\n", (uint8_t)portid,
+						(unsigned)link.link_speed,
+				(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+					("full-duplex") : ("half-duplex\n"));
+				else
+					printf("Port %d Link Down\n",
+						(uint8_t)portid);
+				continue;
+			}
+			/* clear all_ports_up flag if any link down */
+			if (link.link_status == 0) {
+				all_ports_up = 0;
+				break;
+			}
+		}
+		/* after finally printing all link status, get out */
+		if (print_flag == 1)
+			break;
+
+		if (all_ports_up == 0) {
+			printf(".");
+			fflush(stdout);
+			rte_delay_ms(CHECK_INTERVAL);
+		}
+
+		/* set the print_flag if all ports up or timeout */
+		if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+			print_flag = 1;
+			printf("done\n");
+		}
+	}
+}
+
+static int
+initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports)
+{
+	unsigned i, cdev_id, cdev_count, enabled_cdev_count = 0;
+	int retval;
+
+	if (options->cdev_type == RTE_CRYPTODEV_QAT_PMD) {
+		if (rte_cryptodev_count() < nb_ports)
+			return -1;
+	} else if (options->cdev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		for (i = 0; i < nb_ports; i++) {
+			int id = rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD,
+					NULL);
+			if (id < 0)
+				return -1;
+		}
+	}
+
+	cdev_count = rte_cryptodev_count();
+	for (cdev_id = 0;
+			cdev_id < cdev_count && enabled_cdev_count < nb_ports;
+			cdev_id++) {
+		struct rte_cryptodev_qp_conf qp_conf;
+		struct rte_cryptodev_info dev_info;
+
+		struct rte_cryptodev_config conf = {
+			.nb_queue_pairs = 1,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 64
+			}
+		};
+
+		rte_cryptodev_info_get(cdev_id, &dev_info);
+
+		if (dev_info.dev_type != options->cdev_type)
+			continue;
+
+
+		retval = rte_cryptodev_configure(cdev_id, &conf);
+		if (retval < 0) {
+			printf("Failed to configure cryptodev %u", cdev_id);
+			return -1;
+		}
+
+		qp_conf.nb_descriptors = 2048;
+
+		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
+				SOCKET_ID_ANY);
+		if (retval < 0) {
+			printf("Failed to setup queue pair %u on cryptodev %u",
+					0, cdev_id);
+			return -1;
+		}
+
+		l2fwd_enabled_crypto_mask |= (1 << cdev_id);
+
+		enabled_cdev_count++;
+	}
+
+	return enabled_cdev_count;
+}
+
+static int
+initialize_ports(struct l2fwd_crypto_options *options)
+{
+	uint8_t last_portid, portid;
+	unsigned enabled_portcount = 0;
+	unsigned nb_ports = rte_eth_dev_count();
+
+	if (nb_ports == 0) {
+		printf("No Ethernet ports - bye\n");
+		return -1;
+	}
+
+	if (nb_ports > RTE_MAX_ETHPORTS)
+		nb_ports = RTE_MAX_ETHPORTS;
+
+	/* Reset l2fwd_dst_ports */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+		l2fwd_dst_ports[portid] = 0;
+
+	for (last_portid = 0, portid = 0; portid < nb_ports; portid++) {
+		int retval;
+
+		/* Skip ports that are not enabled */
+		if ((options->portmask & (1 << portid)) == 0)
+			continue;
+
+		/* init port */
+		printf("Initializing port %u... ", (unsigned) portid);
+		fflush(stdout);
+		retval = rte_eth_dev_configure(portid, 1, 1, &port_conf);
+		if (retval < 0) {
+			printf("Cannot configure device: err=%d, port=%u\n",
+				  retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one RX queue */
+		fflush(stdout);
+		retval = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+					     rte_eth_dev_socket_id(portid),
+					     NULL, l2fwd_pktmbuf_pool);
+		if (retval < 0) {
+			printf("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one TX queue on each port */
+		fflush(stdout);
+		retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+				rte_eth_dev_socket_id(portid),
+				NULL);
+		if (retval < 0) {
+			printf("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+				retval, (unsigned) portid);
+
+			return -1;
+		}
+
+		/* Start device */
+		retval = rte_eth_dev_start(portid);
+		if (retval < 0) {
+			printf("rte_eth_dev_start:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		rte_eth_promiscuous_enable(portid);
+
+		rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+		printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+				(unsigned) portid,
+				l2fwd_ports_eth_addr[portid].addr_bytes[0],
+				l2fwd_ports_eth_addr[portid].addr_bytes[1],
+				l2fwd_ports_eth_addr[portid].addr_bytes[2],
+				l2fwd_ports_eth_addr[portid].addr_bytes[3],
+				l2fwd_ports_eth_addr[portid].addr_bytes[4],
+				l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+		/* initialize port stats */
+		memset(&port_statistics, 0, sizeof(port_statistics));
+
+		/* Setup port forwarding table */
+		if (enabled_portcount % 2) {
+			l2fwd_dst_ports[portid] = last_portid;
+			l2fwd_dst_ports[last_portid] = portid;
+		} else {
+			last_portid = portid;
+		}
+
+		l2fwd_enabled_port_mask |= (1 << portid);
+		enabled_portcount++;
+	}
+
+	if (enabled_portcount == 1) {
+		l2fwd_dst_ports[last_portid] = last_portid;
+	} else if (enabled_portcount % 2) {
+		printf("odd number of ports in portmask- bye\n");
+		return -1;
+	}
+
+	check_all_ports_link_status(nb_ports, l2fwd_enabled_port_mask);
+
+	return enabled_portcount;
+}
+
+int
+main(int argc, char **argv)
+{
+	struct lcore_queue_conf *qconf;
+	struct l2fwd_crypto_options options;
+
+	uint8_t nb_ports, nb_cryptodevs, portid, cdev_id;
+	unsigned lcore_id, rx_lcore_id;
+	int ret, enabled_cdevcount, enabled_portcount;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+	argc -= ret;
+	argv += ret;
+
+	/* parse application arguments (after the EAL ones) */
+	ret = l2fwd_crypto_parse_args(&options, argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L2FWD-CRYPTO arguments\n");
+
+	/* create the mbuf pool */
+	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 128,
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+	if (l2fwd_pktmbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+	/* create crypto op pool */
+	l2fwd_mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"mbuf_offload_pool", NB_MBUF, 128, 0, rte_socket_id());
+	if (l2fwd_mbuf_ol_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");
+
+	/* Enable Ethernet ports */
+	enabled_portcount = initialize_ports(&options);
+	if (enabled_portcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial Ethernet ports\n");
+
+	nb_ports = rte_eth_dev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, portid = 0;
+			portid < nb_ports; portid++) {
+
+		/* skip ports that are not enabled */
+		if ((options.portmask & (1 << portid)) == 0)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_rx_ports ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->rx_port_list[qconf->nb_rx_ports] = portid;
+		qconf->nb_rx_ports++;
+
+		printf("Lcore %u: RX port %u\n", rx_lcore_id, (unsigned)portid);
+	}
+
+
+	/* Enable Crypto devices */
+	enabled_cdevcount = initialize_cryptodevs(&options, enabled_portcount);
+	if (enabled_cdevcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial crypto devices\n");
+
+	nb_cryptodevs = rte_cryptodev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, cdev_id = 0;
+			cdev_id < nb_cryptodevs && enabled_cdevcount;
+			cdev_id++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(cdev_id, &info);
+
+		/* skip devices of the wrong type */
+		if (options.cdev_type != info.dev_type)
+			continue;
+
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_crypto_devs ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->cryptodev_list[qconf->nb_crypto_devs] = cdev_id;
+		qconf->nb_crypto_devs++;
+
+		enabled_cdevcount--;
+
+		printf("Lcore %u: cryptodev %u\n", rx_lcore_id,
+				(unsigned)cdev_id);
+	}
+
+
+
+	/* launch per-lcore init on every lcore */
+	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)&options,
+			CALL_MASTER);
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (rte_eal_wait_lcore(lcore_id) < 0)
+			return -1;
+	}
+
+	return 0;
+}
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 01/10] ethdev: rename macros to have RTE_ prefix
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
@ 2015-11-17 14:44               ` Declan Doherty
  2015-11-17 15:39                 ` Thomas Monjalon
  2015-11-17 16:04               ` [dpdk-dev] [PATCH v7.1 " Declan Doherty
  1 sibling, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-17 14:44 UTC (permalink / raw)
  To: dev, Thomas Monjalon

On 13/11/15 18:58, Declan Doherty wrote:
> The macros to check that the function pointers and port ids are valid
> for an ethdev are potentially useful to have in a common header for
> use with all PMDs. However, since they would then become externally
> visible, we apply the RTE_ & RTE_ETH_ prefix to them as appropriate.
>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>
> ---
>   lib/librte_ether/rte_ethdev.c | 595 +++++++++++++++++++++---------------------
>   1 file changed, 298 insertions(+), 297 deletions(-)
> <snip>
>

Hey Thomas,

this patch needs to be rebased due to the committal of Daniel's patch
"ethdev: add ieee1588 functions for device clock time". Is it OK to just
send an updated version of this single patch, as it doesn't affect the
other 9 patches in the series?

Thanks
Declan

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 01/10] ethdev: rename macros to have RTE_ prefix
  2015-11-17 14:44               ` Declan Doherty
@ 2015-11-17 15:39                 ` Thomas Monjalon
  0 siblings, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-17 15:39 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-17 14:44, Declan Doherty:
> Hey Thomas,
> 
> this patch needs to be re-based due to the committal of Daniel's patch 
> "ethdev: add ieee1588 functions for device clock time" is it ok to just 
> send an updated patch for this single patch as it doesn't effect the 
> other 9 patches in the series?

Yes, please use --in-reply-to to thread below v7 01/10 and change its state
in patchwork.
Thanks

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v7.1 01/10] ethdev: rename macros to have RTE_ prefix
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
  2015-11-17 14:44               ` Declan Doherty
@ 2015-11-17 16:04               ` Declan Doherty
  1 sibling, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-17 16:04 UTC (permalink / raw)
  To: dev

The macros to check that the function pointers and port ids are valid
for an ethdev are potentially useful to have in a common header for
use with all PMDs. However, since they would then become externally
visible, we apply the RTE_ & RTE_ETH_ prefix to them as appropriate.
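As an illustration (not part of this patch), an ethdev API function would
guard its entry points with the renamed macros like this; the "foo" dev_ops
callback is hypothetical:

	int
	rte_eth_dev_foo(uint8_t port_id)
	{
		struct rte_eth_dev *dev;

		RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);

		dev = &rte_eth_devices[port_id];
		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->foo, -ENOTSUP);

		return (*dev->dev_ops->foo)(dev);
	}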

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>

---
 lib/librte_ether/rte_ethdev.c | 607 +++++++++++++++++++++---------------------
 1 file changed, 304 insertions(+), 303 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b19ac9a..71775dc 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -70,58 +70,59 @@
 #include "rte_ethdev.h"
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
-#define PMD_DEBUG_TRACE(fmt, args...) do {                        \
+#define RTE_PMD_DEBUG_TRACE(fmt, args...) do { \
 		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
 	} while (0)
 #else
-#define PMD_DEBUG_TRACE(fmt, args...)
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
 #endif
 
 /* Macros for checking for restricting functions to primary instance only */
-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define PROC_PRIMARY_OR_RET() do { \
+#define RTE_PROC_PRIMARY_OR_RET() do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for invalid function pointers in dev_ops structure */
-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define FUNC_PTR_OR_RET(func) do { \
+#define RTE_FUNC_PTR_OR_RET(func) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for valid port */
-#define VALID_PORTID_OR_ERR_RET(port_id, retval) do {		\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return retval;					\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) {  \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return retval; \
+	} \
 } while (0)
 
-#define VALID_PORTID_OR_RET(port_id) do {			\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return;						\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return; \
+	} \
 } while (0)
 
+
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 static struct rte_eth_dev_data *rte_eth_dev_data;
@@ -244,7 +245,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 
 	port_id = rte_eth_dev_find_free_port();
 	if (port_id == RTE_MAX_ETHPORTS) {
-		PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
+		RTE_PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
 		return NULL;
 	}
 
@@ -252,7 +253,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 		rte_eth_dev_data_alloc();
 
 	if (rte_eth_dev_allocated(name) != NULL) {
-		PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
+		RTE_PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
 				name);
 		return NULL;
 	}
@@ -339,7 +340,7 @@ rte_eth_dev_init(struct rte_pci_driver *pci_drv,
 	if (diag == 0)
 		return 0;
 
-	PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
+	RTE_PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
 			pci_drv->name,
 			(unsigned) pci_dev->id.vendor_id,
 			(unsigned) pci_dev->id.device_id);
@@ -447,10 +448,10 @@ rte_eth_dev_get_device_type(uint8_t port_id)
 static int
 rte_eth_dev_get_addr_by_port(uint8_t port_id, struct rte_pci_addr *addr)
 {
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -463,10 +464,10 @@ rte_eth_dev_get_name_by_port(uint8_t port_id, char *name)
 {
 	char *tmp;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -483,7 +484,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id)
 	int i;
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -509,7 +510,7 @@ rte_eth_dev_get_port_by_addr(const struct rte_pci_addr *addr, uint8_t *port_id)
 	struct rte_pci_device *pci_dev = NULL;
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -536,7 +537,7 @@ rte_eth_dev_is_detachable(uint8_t port_id)
 	uint32_t dev_flags;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
@@ -735,7 +736,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
 		rxq = dev->data->rx_queues;
 
@@ -766,20 +767,20 @@ rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -796,20 +797,20 @@ rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -826,20 +827,20 @@ rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -856,20 +857,20 @@ rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -895,7 +896,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
 		txq = dev->data->tx_queues;
 
@@ -929,19 +930,19 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of RX queues requested (%u) is greater than max supported(%d)\n",
 			nb_rx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of TX queues requested (%u) is greater than max supported(%d)\n",
 			nb_tx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
@@ -949,11 +950,11 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
@@ -965,22 +966,22 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
 	if (nb_rx_q > dev_info.max_rx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
 				port_id, nb_rx_q, dev_info.max_rx_queues);
 		return -EINVAL;
 	}
 	if (nb_rx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
 				port_id, nb_tx_q, dev_info.max_tx_queues);
 		return -EINVAL;
 	}
 	if (nb_tx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
@@ -993,7 +994,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	if ((dev_conf->intr_conf.lsc == 1) &&
 		(!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))) {
-			PMD_DEBUG_TRACE("driver %s does not support lsc\n",
+			RTE_PMD_DEBUG_TRACE("driver %s does not support lsc\n",
 					dev->data->drv_name);
 			return -EINVAL;
 	}
@@ -1005,14 +1006,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	if (dev_conf->rxmode.jumbo_frame == 1) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" > max valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
 				(unsigned)dev_info.max_rx_pktlen);
 			return -EINVAL;
 		} else if (dev_conf->rxmode.max_rx_pkt_len < ETHER_MIN_LEN) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" < min valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
@@ -1032,14 +1033,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	diag = rte_eth_dev_rx_queue_config(dev, nb_rx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
 				port_id, diag);
 		return diag;
 	}
 
 	diag = rte_eth_dev_tx_queue_config(dev, nb_tx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		return diag;
@@ -1047,7 +1048,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		rte_eth_dev_tx_queue_config(dev, 0);
@@ -1086,7 +1087,7 @@ rte_eth_dev_config_restore(uint8_t port_id)
 			(dev->data->mac_pool_sel[i] & (1ULL << pool)))
 			(*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
 		else {
-			PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
 					port_id);
 			/* exit the loop but not return an error */
 			break;
@@ -1114,16 +1115,16 @@ rte_eth_dev_start(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
 
 	if (dev->data->dev_started != 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already started\n",
 			port_id);
 		return 0;
@@ -1138,7 +1139,7 @@ rte_eth_dev_start(uint8_t port_id)
 	rte_eth_dev_config_restore(port_id);
 
 	if (dev->data->dev_conf.intr_conf.lsc == 0) {
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 	return 0;
@@ -1151,15 +1152,15 @@ rte_eth_dev_stop(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
 
 	if (dev->data->dev_started == 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already stopped\n",
 			port_id);
 		return;
@@ -1176,13 +1177,13 @@ rte_eth_dev_set_link_up(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_up)(dev);
 }
 
@@ -1193,13 +1194,13 @@ rte_eth_dev_set_link_down(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_down)(dev);
 }
 
@@ -1210,12 +1211,12 @@ rte_eth_dev_close(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_close)(dev);
 
@@ -1238,24 +1239,24 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
 
 	/*
 	 * Check the size of the mbuf data buffer.
@@ -1264,7 +1265,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	 */
 	rte_eth_dev_info_get(port_id, &dev_info);
 	if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) {
-		PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
+		RTE_PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
 				mp->name, (int) mp->private_data_size,
 				(int) sizeof(struct rte_pktmbuf_pool_private));
 		return -ENOSPC;
@@ -1272,7 +1273,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	mbp_buf_size = rte_pktmbuf_data_room_size(mp);
 
 	if ((mbp_buf_size - RTE_PKTMBUF_HEADROOM) < dev_info.min_rx_bufsize) {
-		PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
+		RTE_PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
 				"(RTE_PKTMBUF_HEADROOM=%d + min_rx_bufsize(dev)"
 				"=%d)\n",
 				mp->name,
@@ -1288,7 +1289,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 			nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
 			nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
 
-		PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+		RTE_PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
 			"should be: <= %hu, = %hu, and a product of %hu\n",
 			nb_rx_desc,
 			dev_info.rx_desc_lim.nb_max,
@@ -1321,24 +1322,24 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
 
 	rte_eth_dev_info_get(port_id, &dev_info);
 
@@ -1354,10 +1355,10 @@ rte_eth_promiscuous_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
 	(*dev->dev_ops->promiscuous_enable)(dev);
 	dev->data->promiscuous = 1;
 }
@@ -1367,10 +1368,10 @@ rte_eth_promiscuous_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
 	dev->data->promiscuous = 0;
 	(*dev->dev_ops->promiscuous_disable)(dev);
 }
@@ -1380,7 +1381,7 @@ rte_eth_promiscuous_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->promiscuous;
@@ -1391,10 +1392,10 @@ rte_eth_allmulticast_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
 	(*dev->dev_ops->allmulticast_enable)(dev);
 	dev->data->all_multicast = 1;
 }
@@ -1404,10 +1405,10 @@ rte_eth_allmulticast_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
 	dev->data->all_multicast = 0;
 	(*dev->dev_ops->allmulticast_disable)(dev);
 }
@@ -1417,7 +1418,7 @@ rte_eth_allmulticast_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->all_multicast;
@@ -1442,13 +1443,13 @@ rte_eth_link_get(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 1);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1459,13 +1460,13 @@ rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 0);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1476,12 +1477,12 @@ rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	memset(stats, 0, sizeof(*stats));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
 	(*dev->dev_ops->stats_get)(dev, stats);
 	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
 	return 0;
@@ -1492,10 +1493,10 @@ rte_eth_stats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
 	(*dev->dev_ops->stats_reset)(dev);
 }
 
@@ -1510,7 +1511,7 @@ rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstats *xstats,
 	signed xcount = 0;
 	uint64_t val, *stats_ptr;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
@@ -1584,7 +1585,7 @@ rte_eth_xstats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	/* implemented by the driver */
@@ -1603,11 +1604,11 @@ set_queue_stats_mapping(uint8_t port_id, uint16_t queue_id, uint8_t stat_idx,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
 	return (*dev->dev_ops->queue_stats_mapping_set)
 			(dev, queue_id, stat_idx, is_rx);
 }
@@ -1641,14 +1642,14 @@ rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
 		.nb_align = 1,
 	};
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
 	dev_info->rx_desc_lim = lim;
 	dev_info->tx_desc_lim = lim;
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
 	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
 	dev_info->pci_dev = dev->pci_dev;
 	dev_info->driver_name = dev->data->drv_name;
@@ -1659,7 +1660,7 @@ rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 	ether_addr_copy(&dev->data->mac_addrs[0], mac_addr);
 }
@@ -1670,7 +1671,7 @@ rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	*mtu = dev->data->mtu;
@@ -1683,9 +1684,9 @@ rte_eth_dev_set_mtu(uint8_t port_id, uint16_t mtu)
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
 
 	ret = (*dev->dev_ops->mtu_set)(dev, mtu);
 	if (!ret)
@@ -1699,19 +1700,19 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
-		PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
 
 	if (vlan_id > 4095) {
-		PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
+		RTE_PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
 				port_id, (unsigned) vlan_id);
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
 
 	return (*dev->dev_ops->vlan_filter_set)(dev, vlan_id, on);
 }
@@ -1721,14 +1722,14 @@ rte_eth_dev_set_vlan_strip_on_queue(uint8_t port_id, uint16_t rx_queue_id, int o
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_strip_queue_set)(dev, rx_queue_id, on);
 
 	return 0;
@@ -1739,9 +1740,9 @@ rte_eth_dev_set_vlan_ether_type(uint8_t port_id, uint16_t tpid)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_tpid_set)(dev, tpid);
 
 	return 0;
@@ -1755,7 +1756,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	int mask = 0;
 	int cur, org = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	/*check which option changed by application*/
@@ -1784,7 +1785,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	if (mask == 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -1796,7 +1797,7 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	struct rte_eth_dev *dev;
 	int ret = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
@@ -1816,9 +1817,9 @@ rte_eth_dev_set_vlan_pvid(uint8_t port_id, uint16_t pvid, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_pvid_set)(dev, pvid, on);
 
 	return 0;
@@ -1829,9 +1830,9 @@ rte_eth_dev_flow_ctrl_get(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
 	memset(fc_conf, 0, sizeof(*fc_conf));
 	return (*dev->dev_ops->flow_ctrl_get)(dev, fc_conf);
 }
@@ -1841,14 +1842,14 @@ rte_eth_dev_flow_ctrl_set(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if ((fc_conf->send_xon != 0) && (fc_conf->send_xon != 1)) {
-		PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
 	return (*dev->dev_ops->flow_ctrl_set)(dev, fc_conf);
 }
 
@@ -1857,9 +1858,9 @@ rte_eth_dev_priority_flow_ctrl_set(uint8_t port_id, struct rte_eth_pfc_conf *pfc
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
-		PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
 
@@ -1880,7 +1881,7 @@ rte_eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (reta_size != RTE_ALIGN(reta_size, RTE_RETA_GROUP_SIZE)) {
-		PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
+		RTE_PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
 							RTE_RETA_GROUP_SIZE);
 		return -EINVAL;
 	}
@@ -1905,7 +1906,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (max_rxq == 0) {
-		PMD_DEBUG_TRACE("No receive queue is available\n");
+		RTE_PMD_DEBUG_TRACE("No receive queue is available\n");
 		return -EINVAL;
 	}
 
@@ -1914,7 +1915,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
-			PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
+			RTE_PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
 				"the maximum rxq index: %u\n", idx, shift,
 				reta_conf[idx].reta[shift], max_rxq);
 			return -EINVAL;
@@ -1932,7 +1933,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	/* Check mask bits */
 	ret = rte_eth_check_reta_mask(reta_conf, reta_size);
 	if (ret < 0)
@@ -1946,7 +1947,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	if (ret < 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
 	return (*dev->dev_ops->reta_update)(dev, reta_conf, reta_size);
 }
 
@@ -1959,7 +1960,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 	int ret;
 
 	if (port_id >= nb_ports) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
@@ -1969,7 +1970,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 		return ret;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
 	return (*dev->dev_ops->reta_query)(dev, reta_conf, reta_size);
 }
 
@@ -1979,16 +1980,16 @@ rte_eth_dev_rss_hash_update(uint8_t port_id, struct rte_eth_rss_conf *rss_conf)
 	struct rte_eth_dev *dev;
 	uint16_t rss_hash_protos;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	rss_hash_protos = rss_conf->rss_hf;
 	if ((rss_hash_protos != 0) &&
 	    ((rss_hash_protos & ETH_RSS_PROTO_MASK) == 0)) {
-		PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
+		RTE_PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
 				rss_hash_protos);
 		return -EINVAL;
 	}
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_update)(dev, rss_conf);
 }
 
@@ -1998,9 +1999,9 @@ rte_eth_dev_rss_hash_conf_get(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_conf_get)(dev, rss_conf);
 }
 
@@ -2010,19 +2011,19 @@ rte_eth_dev_udp_tunnel_add(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_add)(dev, udp_tunnel);
 }
 
@@ -2032,20 +2033,20 @@ rte_eth_dev_udp_tunnel_delete(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_del)(dev, udp_tunnel);
 }
 
@@ -2054,9 +2055,9 @@ rte_eth_led_on(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_on)(dev);
 }
 
@@ -2065,9 +2066,9 @@ rte_eth_led_off(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_off)(dev);
 }
 
@@ -2101,17 +2102,17 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	int index;
 	uint64_t pool_mask;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
 
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
 	if (pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
+		RTE_PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -2119,7 +2120,7 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	if (index < 0) {
 		index = get_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 				port_id);
 			return -ENOSPC;
 		}
@@ -2149,13 +2150,13 @@ rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr)
 	struct rte_eth_dev *dev;
 	int index;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
 
 	index = get_mac_addr_index(port_id, addr);
 	if (index == 0) {
-		PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
 		return -EADDRINUSE;
 	} else if (index < 0)
 		return 0;  /* Do nothing if address wasn't found */
@@ -2177,13 +2178,13 @@ rte_eth_dev_default_mac_addr_set(uint8_t port_id, struct ether_addr *addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (!is_valid_assigned_ether_addr(addr))
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
 
 	/* Update default address in NIC data structure */
 	ether_addr_copy(addr, &dev->data->mac_addrs[0]);
@@ -2201,22 +2202,22 @@ rte_eth_dev_set_vf_rxmode(uint8_t port_id,  uint16_t vf,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
 		return -EINVAL;
 	}
 
 	if (rx_mode == 0) {
-		PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx_mode)(dev, vf, rx_mode, on);
 }
 
@@ -2251,11 +2252,11 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
@@ -2267,20 +2268,20 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 
 	if (index < 0) {
 		if (!on) {
-			PMD_DEBUG_TRACE("port %d: the MAC address was not "
+			RTE_PMD_DEBUG_TRACE("port %d: the MAC address was not "
 				"set in UTA\n", port_id);
 			return -EINVAL;
 		}
 
 		index = get_hash_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 					port_id);
 			return -ENOSPC;
 		}
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
 	ret = (*dev->dev_ops->uc_hash_table_set)(dev, addr, on);
 	if (ret == 0) {
 		/* Update address in NIC data structure */
@@ -2300,11 +2301,11 @@ rte_eth_dev_uc_all_hash_table_set(uint8_t port_id, uint8_t on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
 	return (*dev->dev_ops->uc_all_hash_table_set)(dev, on);
 }
 
@@ -2315,18 +2316,18 @@ rte_eth_dev_set_vf_rx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx)(dev, vf, on);
 }
 
@@ -2337,18 +2338,18 @@ rte_eth_dev_set_vf_tx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_tx)(dev, vf, on);
 }
 
@@ -2358,22 +2359,22 @@ rte_eth_dev_set_vf_vlan_filter(uint8_t port_id, uint16_t vlan_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
 	if (vlan_id > ETHER_MAX_VLAN_ID) {
-		PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
 			vlan_id);
 		return -EINVAL;
 	}
 
 	if (vf_mask == 0) {
-		PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_vlan_filter)(dev, vlan_id,
 						   vf_mask, vlan_on);
 }
@@ -2385,26 +2386,26 @@ int rte_eth_set_queue_rate_limit(uint8_t port_id, uint16_t queue_idx,
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_link link;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (queue_idx > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("set queue rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:port %d: "
 				"invalid queue id=%d\n", port_id, queue_idx);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 			tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_queue_rate_limit)(dev, queue_idx, tx_rate);
 }
 
@@ -2418,26 +2419,26 @@ int rte_eth_set_vf_rate_limit(uint8_t port_id, uint16_t vf, uint16_t tx_rate,
 	if (q_msk == 0)
 		return 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (vf > dev_info.max_vfs) {
-		PMD_DEBUG_TRACE("set VF rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:port %d: "
 				"invalid vf id=%d\n", port_id, vf);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 				tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rate_limit)(dev, vf, tx_rate, q_msk);
 }
 
@@ -2448,14 +2449,14 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (mirror_conf->rule_type == 0) {
-		PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if (mirror_conf->dst_pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
+		RTE_PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
 				ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
@@ -2463,18 +2464,18 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 	if ((mirror_conf->rule_type & (ETH_MIRROR_VIRTUAL_POOL_UP |
 	     ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
 	    (mirror_conf->pool_mask == 0)) {
-		PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if ((mirror_conf->rule_type & ETH_MIRROR_VLAN) &&
 	    mirror_conf->vlan.vlan_mask == 0) {
-		PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_set)(dev, mirror_conf, rule_id, on);
 }
@@ -2484,10 +2485,10 @@ rte_eth_mirror_rule_reset(uint8_t port_id, uint8_t rule_id)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_reset)(dev, rule_id);
 }
@@ -2499,12 +2500,12 @@ rte_eth_rx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
@@ -2517,13 +2518,13 @@ rte_eth_tx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id],
@@ -2535,10 +2536,10 @@ rte_eth_rx_queue_count(uint8_t port_id, uint16_t queue_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
 	return (*dev->dev_ops->rx_queue_count)(dev, queue_id);
 }
 
@@ -2547,10 +2548,10 @@ rte_eth_rx_descriptor_done(uint8_t port_id, uint16_t queue_id, uint16_t offset)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
 	return (*dev->dev_ops->rx_descriptor_done)(dev->data->rx_queues[queue_id],
 						   offset);
 }
@@ -2567,7 +2568,7 @@ rte_eth_dev_callback_register(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2607,7 +2608,7 @@ rte_eth_dev_callback_unregister(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2670,14 +2671,14 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
@@ -2685,7 +2686,7 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 		vec = intr_handle->intr_vec[qid];
 		rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 		if (rc && rc != -EEXIST) {
-			PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+			RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 					" op %d epfd %d vec %u\n",
 					port_id, qid, op, epfd, vec);
 		}
@@ -2728,26 +2729,26 @@ rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id,
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
 		return -EINVAL;
 	}
 
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
 	vec = intr_handle->intr_vec[queue_id];
 	rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 	if (rc && rc != -EEXIST) {
-		PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+		RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 				" op %d epfd %d vec %u\n",
 				port_id, queue_id, op, epfd, vec);
 		return rc;
@@ -2763,13 +2764,13 @@ rte_eth_dev_rx_intr_enable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_enable)(dev, queue_id);
 }
 
@@ -2780,13 +2781,13 @@ rte_eth_dev_rx_intr_disable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_disable)(dev, queue_id);
 }
 
@@ -2795,10 +2796,10 @@ int rte_eth_dev_bypass_init(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
 	(*dev->dev_ops->bypass_init)(dev);
 	return 0;
 }
@@ -2808,10 +2809,10 @@ rte_eth_dev_bypass_state_show(uint8_t port_id, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_show)(dev, state);
 	return 0;
 }
@@ -2821,10 +2822,10 @@ rte_eth_dev_bypass_state_set(uint8_t port_id, uint32_t *new_state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_set)(dev, new_state);
 	return 0;
 }
@@ -2834,10 +2835,10 @@ rte_eth_dev_bypass_event_show(uint8_t port_id, uint32_t event, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_show)(dev, event, state);
 	return 0;
 }
@@ -2847,11 +2848,11 @@ rte_eth_dev_bypass_event_store(uint8_t port_id, uint32_t event, uint32_t state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_set)(dev, event, state);
 	return 0;
 }
@@ -2861,11 +2862,11 @@ rte_eth_dev_wd_timeout_store(uint8_t port_id, uint32_t timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_set)(dev, timeout);
 	return 0;
 }
@@ -2875,11 +2876,11 @@ rte_eth_dev_bypass_ver_show(uint8_t port_id, uint32_t *ver)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_ver_show)(dev, ver);
 	return 0;
 }
@@ -2889,11 +2890,11 @@ rte_eth_dev_bypass_wd_timeout_show(uint8_t port_id, uint32_t *wd_timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_show)(dev, wd_timeout);
 	return 0;
 }
@@ -2903,11 +2904,11 @@ rte_eth_dev_bypass_wd_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_reset)(dev);
 	return 0;
 }
@@ -2918,10 +2919,10 @@ rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type,
 				RTE_ETH_FILTER_NOP, NULL);
 }
@@ -2932,10 +2933,10 @@ rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type, filter_op, arg);
 }
 
@@ -3105,18 +3106,18 @@ rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
@@ -3129,18 +3130,18 @@ rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
@@ -3154,10 +3155,10 @@ rte_eth_dev_set_mc_addr_list(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
 	return dev->dev_ops->set_mc_addr_list(dev, mc_addr_set, nb_mc_addr);
 }
 
@@ -3166,10 +3167,10 @@ rte_eth_timesync_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_enable)(dev);
 }
 
@@ -3178,10 +3179,10 @@ rte_eth_timesync_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_disable)(dev);
 }
 
@@ -3191,10 +3192,10 @@ rte_eth_timesync_read_rx_timestamp(uint8_t port_id, struct timespec *timestamp,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_rx_timestamp)(dev, timestamp, flags);
 }
 
@@ -3203,10 +3204,10 @@ rte_eth_timesync_read_tx_timestamp(uint8_t port_id, struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_tx_timestamp)(dev, timestamp);
 }
 
@@ -3215,10 +3216,10 @@ rte_eth_timesync_adjust_time(uint8_t port_id, int64_t delta)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_adjust_time, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_adjust_time, -ENOTSUP);
 	return (*dev->dev_ops->timesync_adjust_time)(dev, delta);
 }
 
@@ -3227,10 +3228,10 @@ rte_eth_timesync_read_time(uint8_t port_id, struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_time, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_time, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_time)(dev, timestamp);
 }
 
@@ -3239,10 +3240,10 @@ rte_eth_timesync_write_time(uint8_t port_id, const struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_write_time, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_write_time, -ENOTSUP);
 	return (*dev->dev_ops->timesync_write_time)(dev, timestamp);
 }
 
@@ -3251,10 +3252,10 @@ rte_eth_dev_get_reg_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
 	return (*dev->dev_ops->get_reg_length)(dev);
 }
 
@@ -3263,10 +3264,10 @@ rte_eth_dev_get_reg_info(uint8_t port_id, struct rte_dev_reg_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
 	return (*dev->dev_ops->get_reg)(dev, info);
 }
 
@@ -3275,10 +3276,10 @@ rte_eth_dev_get_eeprom_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom_length)(dev);
 }
 
@@ -3287,10 +3288,10 @@ rte_eth_dev_get_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom)(dev, info);
 }
 
@@ -3299,10 +3300,10 @@ rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->set_eeprom)(dev, info);
 }
 
@@ -3313,14 +3314,14 @@ rte_eth_dev_get_dcb_info(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	memset(dcb_info, 0, sizeof(struct rte_eth_dcb_info));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
 	return (*dev->dev_ops->get_dcb_info)(dev, dcb_info);
 }
 
@@ -3328,7 +3329,7 @@ void
 rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev)
 {
 	if ((eth_dev == NULL) || (pci_dev == NULL)) {
-		PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
+		RTE_PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
 				eth_dev, pci_dev);
 		return;
 	}
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-11-20 15:27               ` Olivier MATZ
  2015-11-20 17:26                 ` Declan Doherty
  0 siblings, 1 reply; 115+ messages in thread
From: Olivier MATZ @ 2015-11-20 15:27 UTC (permalink / raw)
  To: Declan Doherty, dev

Hi Declan,

Please find some comments below.

On 11/13/2015 07:58 PM, Declan Doherty wrote:
> This library adds support for adding a chain of offload operations to a
> mbuf. It contains the definition of the rte_mbuf_offload structure as
> well as helper functions for attaching offloads to mbufs and mempool
> management functions.
> 
> This initial implementation supports attaching multiple offload
> operations to a single mbuf, but only a single offload operation of a
> specific type can be attached to that mbuf.
> 
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  MAINTAINERS                                        |   4 +
>  config/common_bsdapp                               |   6 +
>  config/common_linuxapp                             |   6 +
>  doc/api/doxy-api-index.md                          |   1 +
>  doc/api/doxy-api.conf                              |   1 +
>  lib/Makefile                                       |   1 +
>  lib/librte_mbuf/rte_mbuf.h                         |   6 +
>  lib/librte_mbuf_offload/Makefile                   |  52 ++++
>  lib/librte_mbuf_offload/rte_mbuf_offload.c         | 100 +++++++
>  lib/librte_mbuf_offload/rte_mbuf_offload.h         | 302 +++++++++++++++++++++
>  .../rte_mbuf_offload_version.map                   |   7 +
>  mk/rte.app.mk                                      |   1 +
>  12 files changed, 487 insertions(+)
>  create mode 100644 lib/librte_mbuf_offload/Makefile
>  create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
>  create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
>  create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

The new files are called rte_mbuf_offload, but from what I understand,
it is more like a mbuf metadata api. What you call "offload operation"
is not called because an offload is attached, but because you call
rte_cryptodev_enqueue_burst() on it.

From what I see, it is not said in the help of
rte_cryptodev_enqueue_burst() that the offload structure has to be
set in the given mbufs.

Is my understanding correct?


> +/**
> + * @file
> + * RTE mbuf offload
> + *
> + * The rte_mbuf_offload library provides the ability to specify a device generic
> + * off-load operation independent of the current Rx/Tx Ethernet offloads
> + * supported within the rte_mbuf structure, and adds support for multiple
> + * off-load operations and offload device types.
> + *
> + * The rte_mbuf_offload specifies the particular off-load operation type, such
> + * as a crypto operation, and provides a container for the operation's
> + * parameters inside the op union. These parameters are then used by the
> + * device which supports that operation to perform the specified offload.
> + *
> + * This library provides an API to create a pre-allocated mempool of offload
> + * operations, with supporting allocate and free functions. It also provides
> + * APIs for attaching an offload to a mbuf, as well as an API to retrieve a
> + * specified offload type from an mbuf offload chain.
> + */
> +
> +#include <rte_mbuf.h>
> +#include <rte_crypto.h>
> +
> +
> +/** packet mbuf offload operation types */
> +enum rte_mbuf_ol_op_type {
> +	RTE_PKTMBUF_OL_NOT_SPECIFIED = 0,
> +	/**< Off-load not specified */
> +	RTE_PKTMBUF_OL_CRYPTO
> +	/**< Crypto offload operation */
> +};

Here, the mbuf offload API is presented as a generic API for offload.
But it includes rte_crypto. Actually, it means that at the end this
API needs to be aware of any offload type.

Wouldn't it be possible to hide the knowledge of the metadata structure
to this API?


> +/**
> + * Attach a rte_mbuf_offload to a mbuf. We only support a single offload of any
> + * one type in our chain of offloads.
> + *
> + * @param	m	packet mbuf.
> + * @param	ol	rte_mbuf_offload structure to be attached
> + *
> + * @returns
> + * - On success returns the pointer to the offload we just added
> + * - On failure returns NULL
> + */
> +static inline struct rte_mbuf_offload *
> +rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
> +{
> +	struct rte_mbuf_offload **ol_last;
> +
> +	for (ol_last = &m->offload_ops;	ol_last[0] != NULL;
> +			ol_last = &ol_last[0]->next)
> +		if (ol_last[0]->type == ol->type)
> +			return NULL;
> +
> +	ol_last[0] = ol;
> +	ol_last[0]->m = m;
> +	ol_last[0]->next = NULL;
> +
> +	return ol_last[0];
> +}

*ol_last would be clearer than ol_last[0] as it's not an array.
Here we expect that m->offload_ops == NULL at mbuf initialization.
I cannot find where it is initialized.
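
For illustration, the same attach logic written with plain dereferences
could look like this (just a sketch, no behavioural change intended):

	static inline struct rte_mbuf_offload *
	rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
	{
		struct rte_mbuf_offload **ol_last;

		/* walk to the end of the chain, refusing a duplicate type */
		for (ol_last = &m->offload_ops; *ol_last != NULL;
				ol_last = &(*ol_last)->next)
			if ((*ol_last)->type == ol->type)
				return NULL;

		*ol_last = ol;
		ol->m = m;
		ol->next = NULL;

		return ol;
	}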

> +
> +
> +/** Rearms rte_mbuf_offload default parameters */
> +static inline void
> +__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
> +		enum rte_mbuf_ol_op_type type)
> +{
> +	ol->m = NULL;
> +	ol->type = type;
> +
> +	switch (type) {
> +	case RTE_PKTMBUF_OL_CRYPTO:
> +		__rte_crypto_op_reset(&ol->op.crypto); break;
> +	default:
> +		break;
> +	}
> +}

Would it work if several OL are registered?


Also, what is not clear to me is how the offload structure is freed.
For instance, I think that calling rte_pktmbuf_free(m) on a mbuf
that has a offload attached would result in a leak.

It would mean that it is not allowed to call any function that free or
reassemble a mbuf when an offload is attached.

I'm wondering if there is such a leak in l2fwd_crypto_send_burst():

	/* <<<<<<< here packets have the offload attached */
	ret = rte_cryptodev_enqueue_burst(cparams->dev_id,
		cparams->qp_id, pkt_buffer, (uint16_t) n);
	crypto_statistics[cparams->dev_id].enqueued += ret;
	if (unlikely(ret < n)) {
		crypto_statistics[cparams->dev_id].errors += (n - ret);
		do {
			/* <<<<<<<<< offload struct is lost? */
			rte_pktmbuf_free(pkt_buffer[ret]);
		} while (++ret < n);
	}


It seems that these offload structures are only used to pass crypto
info to the cryptodev. Would it be a problem to have an API like this?

  rx_burst(port, q, mbuf_tab, crypto_tab, n);

Or even:

  rx_burst(port, q, crypto_tab, n);

  with each *crypto_tab pointing to a mbuf


If we really want to use mbufs to store the crypto info (is it
something we want? for pipelining?), another idea would be to use
the usual metadata after the mbuf. It may require enhancing this
metadata framework to better register room for metadata,
because today anyone using metadata expects that its metadata is
the only one.

Pros:
 - no additional allocation for crypto offload struct
 - no risk of leak
Cons:
 - room is reserved for crypto in all mbufs even if no crypto
   is done on them


Regards,
Olivier

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-20 15:27               ` Olivier MATZ
@ 2015-11-20 17:26                 ` Declan Doherty
  2015-11-23  9:10                   ` Olivier MATZ
  0 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-20 17:26 UTC (permalink / raw)
  To: Olivier MATZ, dev

Hey Olivier, see my reply inline.


On 20/11/15 15:27, Olivier MATZ wrote:
> Hi Declan,
>
> Please find some comments below.
>
> On 11/13/2015 07:58 PM, Declan Doherty wrote:
>> This library adds support for adding a chain of offload operations to a
>> mbuf. It contains the definition of the rte_mbuf_offload structure as
>> well as helper functions for attaching offloads to mbufs and mempool
>> management functions.
>>
>> This initial implementation supports attaching multiple offload
>> operations to a single mbuf, but only a single offload operation of a
>> specific type can be attached to that mbuf.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>   MAINTAINERS                                        |   4 +
>>   config/common_bsdapp                               |   6 +
>>   config/common_linuxapp                             |   6 +
>>   doc/api/doxy-api-index.md                          |   1 +
>>   doc/api/doxy-api.conf                              |   1 +
>>   lib/Makefile                                       |   1 +
>>   lib/librte_mbuf/rte_mbuf.h                         |   6 +
>>   lib/librte_mbuf_offload/Makefile                   |  52 ++++
>>   lib/librte_mbuf_offload/rte_mbuf_offload.c         | 100 +++++++
>>   lib/librte_mbuf_offload/rte_mbuf_offload.h         | 302 +++++++++++++++++++++
>>   .../rte_mbuf_offload_version.map                   |   7 +
>>   mk/rte.app.mk                                      |   1 +
>>   12 files changed, 487 insertions(+)
>>   create mode 100644 lib/librte_mbuf_offload/Makefile
>>   create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
>>   create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
>>   create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map
>
> The new files are called rte_mbuf_offload, but from what I understand,
> it is more like a mbuf metadata api. What you call "offload operation"
> is not called because an offload is attached, but because you call
> rte_cryptodev_enqueue_burst() on it.

Maybe rte_mbuf_offload_metadata would be a better name, if a bit 
more long-winded :) The idea of this API set is to give a generic 
framework for attaching the offload operation metadata to a mbuf 
which will be retrieved at a later point, when the particular offload 
burst function is called. I guess as we only have a single offload 
device at the moment the API may look a little over the top!


>
>  From what I see, it is not said in the help of
> rte_cryptodev_enqueue_burst() that the offload structure has to be
> set in the given mbufs.
>
I need to update the API documentation to make that explicit.

> Is my understanding correct?
>
>
>> +/**
>> + * @file
>> + * RTE mbuf offload
>> + *
>> + * The rte_mbuf_offload library provides the ability to specify a device generic
>> + * off-load operation independent of the current Rx/Tx Ethernet offloads
>> + * supported within the rte_mbuf structure, and adds support for multiple
>> + * off-load operations and offload device types.
>> + *
>> + * The rte_mbuf_offload specifies the particular off-load operation type, such
>> + * as a crypto operation, and provides a container for the operation's
>> + * parameters inside the op union. These parameters are then used by the
>> + * device which supports that operation to perform the specified offload.
>> + *
>> + * This library provides an API to create a pre-allocated mempool of offload
>> + * operations, with supporting allocate and free functions. It also provides
>> + * APIs for attaching an offload to a mbuf, as well as an API to retrieve a
>> + * specified offload type from an mbuf offload chain.
>> + */
>> +
>> +#include <rte_mbuf.h>
>> +#include <rte_crypto.h>
>> +
>> +
>> +/** packet mbuf offload operation types */
>> +enum rte_mbuf_ol_op_type {
>> +	RTE_PKTMBUF_OL_NOT_SPECIFIED = 0,
>> +	/**< Off-load not specified */
>> +	RTE_PKTMBUF_OL_CRYPTO
>> +	/**< Crypto offload operation */
>> +};
>
> Here, the mbuf offload API is presented as a generic API for offload.
> But it includes rte_crypto. Actually, it means that at the end this
> API needs to be aware of any offload type.
>
> Wouldn't it be possible to hide the knowledge of the metadata structure
> to this API?

The design makes the offload API aware of the underlying offload 
operations, but I don't see this as a problem. The main idea was to have 
no dependencies other than the structure pointer added to the 
rte_mbuf, while also keeping the offload extensible in the future. This 
approach also makes the management of the offload elements very 
straightforward, with very simple pool management functions, and there 
is no need for another level of indirection to reach the actual offload 
operation.

>
>
>> +/**
>> + * Attach a rte_mbuf_offload to a mbuf. We only support a single offload of any
>> + * one type in our chain of offloads.
>> + *
>> + * @param	m	packet mbuf.
>> + * @param	ol	rte_mbuf_offload structure to be attached
>> + *
>> + * @returns
>> + * - On success returns the pointer to the offload we just added
>> + * - On failure returns NULL
>> + */
>> +static inline struct rte_mbuf_offload *
>> +rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
>> +{
>> +	struct rte_mbuf_offload **ol_last;
>> +
>> +	for (ol_last = &m->offload_ops;	ol_last[0] != NULL;
>> +			ol_last = &ol_last[0]->next)
>> +		if (ol_last[0]->type == ol->type)
>> +			return NULL;
>> +
>> +	ol_last[0] = ol;
>> +	ol_last[0]->m = m;
>> +	ol_last[0]->next = NULL;
>> +
>> +	return ol_last[0];
>> +}
>
> *ol_last would be clearer than ol_last[0] as it's not a table.

Just a personal preference, but I can submit a change later.

> Here we expect that m->offload_ops == NULL at mbuf initialization.
> I cannot find where it is initialized.
>

I'll need to add the initialization to rte_pktmbuf_reset()
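
i.e. something along these lines (a sketch only):

	static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
	{
		/* ... existing field resets ... */

		/* start each mbuf with an empty offload chain */
		m->offload_ops = NULL;
	}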

>> +
>> +
>> +/** Rearms rte_mbuf_offload default parameters */
>> +static inline void
>> +__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
>> +		enum rte_mbuf_ol_op_type type)
>> +{
>> +	ol->m = NULL;
>> +	ol->type = type;
>> +
>> +	switch (type) {
>> +	case RTE_PKTMBUF_OL_CRYPTO:
>> +		__rte_crypto_op_reset(&ol->op.crypto); break;
>> +	default:
>> +		break;
>> +	}
>> +}
>
> Would it work if several OL are registered?
>

I can't see any reason why it wouldn't

>
> Also, what is not clear to me is how the offload structure is freed.
> For instance, I think that calling rte_pktmbuf_free(m) on a mbuf
> that has a offload attached would result in a leak.
>
> It would mean that it is not allowed to call any function that free or
> reassemble a mbuf when an offload is attached.

We just need to walk the chain of offloads calling 
rte_pktmbuf_offload_free() before freeing the mbuf, which will be an 
issue with any externally attached metadata. In the case of 
reassembly I don't see why we couldn't just move the chain to the head mbuf.
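
Something like the helper below is what I have in mind (just a sketch, 
the helper name is made up):

	static inline void
	rte_pktmbuf_offload_free_chain(struct rte_mbuf *m)
	{
		struct rte_mbuf_offload *ol = m->offload_ops;

		/* release every element of the chain before the mbuf itself */
		while (ol != NULL) {
			struct rte_mbuf_offload *next = ol->next;

			rte_pktmbuf_offload_free(ol);
			ol = next;
		}
		m->offload_ops = NULL;
	}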

>
> I'm wondering if there is such a leak in l2fwd_crypto_send_burst():
>
> 	/* <<<<<<< here packets have the offload attached */
> 	ret = rte_cryptodev_enqueue_burst(cparams->dev_id,
> 		cparams->qp_id, pkt_buffer, (uint16_t) n);
> 	crypto_statistics[cparams->dev_id].enqueued += ret;
> 	if (unlikely(ret < n)) {
> 		crypto_statistics[cparams->dev_id].errors += (n - ret);
> 		do {
> 			/* <<<<<<<<< offload struct is lost? */
> 			rte_pktmbuf_free(pkt_buffer[ret]);
> 		} while (++ret < n);
> 	}
>
>
I'll push a fix for this leak; I just noticed it myself this morning.


> It seems that these offload structures are only used to pass crypto
> info to the cryptodev. Would it be a problem to have an API like this?
>
>    rx_burst(port, q, mbuf_tab, crypto_tab, n);
>

I really dislike this option; there's no direct linkage between the mbuf 
and the offload operation.

> Or even:
>
>    rx_burst(port, q, crypto_tab, n);
>
>    with each *crypto_tab pointing to a mbuf
>

I looked at this but it would really hamstring any pipelining 
applications which might want to attach multiple offloads to a mbuf at a 
point in the pipeline for processing at later steps.
>
> If we really want to use mbufs to store the crypto info (is it
> something we want? for pipelining?), another idea would be to use
> the usual metadata after the mbuf. It may require to enhance this
> metadata framework to allow to better register room for metadata,
> because today anyone using metadata expects that its metadata are
> the only ones.
>
> Pros:
>   - no additional allocation for crypto offload struct
>   - no risk of leak
> Cons:
>   - room is reserved for crypto in all mbufs even if no crypto
>     is done on them
>
I have some further APIs to add in R2.3 which would allow the offload to 
be allocated in the private data of the mbuf, but they are not fully 
tested as yet. Also, I don't think it's practical from a memory 
management point of view to assume that there will always be enough 
space in the mbuf's private data, especially if a number of offloads 
were to be attached to a single mbuf.
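
For reference, the rough shape of that idea would be something like the
following (an entirely untested sketch, the helper name is made up, and
it assumes the pool was created with a large enough priv_size):

	static inline struct rte_mbuf_offload *
	rte_pktmbuf_offload_from_priv(struct rte_mbuf *m)
	{
		if (rte_pktmbuf_priv_size(m->pool) < sizeof(struct rte_mbuf_offload))
			return NULL;

		/* the private data area sits directly after the mbuf structure */
		return (struct rte_mbuf_offload *)(m + 1);
	}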

>
> Regards,
> Olivier
>

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-20 17:26                 ` Declan Doherty
@ 2015-11-23  9:10                   ` Olivier MATZ
  2015-11-23 11:52                     ` Ananyev, Konstantin
  0 siblings, 1 reply; 115+ messages in thread
From: Olivier MATZ @ 2015-11-23  9:10 UTC (permalink / raw)
  To: Declan Doherty, dev

Hi Declan,

On 11/20/2015 06:26 PM, Declan Doherty wrote:
>> The new files are called rte_mbuf_offload, but from what I understand,
>> it is more like a mbuf metadata api. What you call "offload operation"
>> is not called because an offload is attached, but because you call
>> rte_cryptodev_enqueue_burst() on it.
> 
> Maybe rte_mbuf_offload_metadata would be a better name, if not a bit
> more long winded :) The idea of this API set is to give a generic
> framework for attaching the offload operation metadata to a mbuf
> which will be retrieved at a later point, when the particular offload
> burst function is called. I guess as we only have a single offload
> device at the moment the API may look a little over the top!

Indeed, not sure rte_mbuf_offload_metadata is better.
I'm still not convinced that offload should appear in the name; it
is a bit confusing with hardware offloads (ol_flags). Also, it
suggests that work is delegated to another entity, but for instance
in this case it is just used as a storage area:

	ol = rte_pktmbuf_offload_alloc(pool, RTE_PKTMBUF_OL_CRYPTO);
	rte_crypto_op_attach_session(&ol->op.crypto, session);
	ol->op.crypto.digest.data = rte_pktmbuf_append(m, digest_len);
	ol->op.crypto.digest.phys_addr = ...;
	/* ... */
	rte_pktmbuf_offload_attach(m, ol);
	ret = rte_cryptodev_enqueue_burst(dev_id, qp_id, &m, 1);

Do you have some other examples in mind that would use this API?

>>> +/** Rearms rte_mbuf_offload default parameters */
>>> +static inline void
>>> +__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
>>> +        enum rte_mbuf_ol_op_type type)
>>> +{
>>> +    ol->m = NULL;
>>> +    ol->type = type;
>>> +
>>> +    switch (type) {
>>> +    case RTE_PKTMBUF_OL_CRYPTO:
>>> +        __rte_crypto_op_reset(&ol->op.crypto); break;
>>> +    default:
>>> +        break;
>>> +    }
>>> +}
>>
>> Would it work if several OL are registered?
> 
> I can't see any reason why it wouldn't

Sorry, I read it too quickly. I thought it was a
rte_pktmbuf_offload_detach() function. By the way, there is no
such function?
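
To make sure we are talking about the same thing, I would expect
something like this (hypothetical, since no such function exists in the
patchset today):

	static inline struct rte_mbuf_offload *
	rte_pktmbuf_offload_detach(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
	{
		struct rte_mbuf_offload **prev, *ol;

		for (prev = &m->offload_ops; (ol = *prev) != NULL; prev = &ol->next) {
			if (ol->type == type) {
				*prev = ol->next;	/* unlink from the chain */
				ol->m = NULL;
				ol->next = NULL;
				return ol;
			}
		}
		return NULL;
	}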


>> Also, what is not clear to me is how the offload structure is freed.
>> For instance, I think that calling rte_pktmbuf_free(m) on a mbuf
>> that has a offload attached would result in a leak.
>>
>> It would mean that it is not allowed to call any function that free or
>> reassemble a mbuf when an offload is attached.
> 
> We just need to walk the chain of offloads calling
> rte_pktmbuf_offload_free(), before freeing the mbuf, which will be an
> issue with any externally attached meta data. In the case of
> reassembling I don't see why we would just move the chain to the head mbuf.

Walking the chain of offloads + adding the initialization will probably
have a small cost that should be evaluated.

The freeing is probably not the only problem:
- packet duplication: are the offload infos copied? If no, it means that
  the copy is not exactly a copy
- if you chain 2 mbufs that both have offload info attached, how does it
  behave?
- if you prepend a segment to your mbuf, you need to copy the mbuf
  offload pointer, and also parse the list of offload to update the
  ol->m pointer of each element.
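
The last point alone would mean touching every element, roughly
(illustrative sketch, names made up):

	static void
	move_offload_chain(struct rte_mbuf *new_head, struct rte_mbuf *old_head)
	{
		struct rte_mbuf_offload *ol;

		new_head->offload_ops = old_head->offload_ops;
		old_head->offload_ops = NULL;

		/* each element records its owning mbuf, so all must be updated */
		for (ol = new_head->offload_ops; ol != NULL; ol = ol->next)
			ol->m = new_head;
	}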

>> It seems that these offload structures are only used to pass crypto
>> info to the cryptodev. Would it be a problem to have an API like this?
>>
>>    rx_burst(port, q, mbuf_tab, crypto_tab, n);
>>
> 
> I really dislike this option, there's no direct linkage between mbuf and
> offload operation.
> 
>> Or even:
>>
>>    rx_burst(port, q, crypto_tab, n);
>>
>>    with each *crypto_tab pointing to a mbuf
>>
> 
> I looked at this but it would really hamstring any pipelining
> applications which might want to attach multiple offloads to a mbuf at a
> point in the pipeline for processing at later steps.

As far as I can see, there is already a way to chain several crypto
operations in the crypto structure.

Another option is to use the mbuf offload API (or the m->userdata
pointer which already works for that) only in the application:

- the field is completely ignored by the mbuf api and the dpdk driver
- it is up to the application to initialize it and free it

-> it can only be used when passing mbufs from one part of the app
   to another, so it perfectly matches the pipeline use case.

Example:

app_core1:

  /* receive a mbuf */
  crypto = alloc()
  crypto->xxx = yyy:
  /* ... */
  m->userdata = crypto;
  enqueue(m, app_core2);

app_core2:

  m = dequeue();
  crypto = m->userdata;
  m->userdata = NULL;
  /* api is tx_burst(port, q, mbuf_tab, crypto_tab, n) */
  tx_burst(port, q, &m, &crypto, 1);



Regards,
Olivier

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-23  9:10                   ` Olivier MATZ
@ 2015-11-23 11:52                     ` Ananyev, Konstantin
  2015-11-23 12:16                       ` Declan Doherty
  0 siblings, 1 reply; 115+ messages in thread
From: Ananyev, Konstantin @ 2015-11-23 11:52 UTC (permalink / raw)
  To: Olivier MATZ, Doherty, Declan, dev

Hi Olivier,

> On 11/20/2015 06:26 PM, Declan Doherty wrote:
> >> The new files are called rte_mbuf_offload, but from what I understand,
> >> it is more like a mbuf metadata api. What you call "offload operation"
> >> is not called because an offload is attached, but because you call
> >> rte_cryptodev_enqueue_burst() on it.
> >
> > Maybe rte_mbuf_offload_metadata would be a better name, if not a bit
> > more long winded :) The idea of this API set is to give a generic
> > framework for attaching the offload operation metadata to a mbuf
> > which will be retrieved at a later point, when the particular offload
> > burst function is called. I guess as we only have a single offload
> > device at the moment the API may look a little over the top!
> 
> Indeed, not sure rte_mbuf_offload_metadata is better.
> I'm still not convinced that offload should appear in the name, it
> is a bit confusing with hardware offloads (ol_flags). Also, it
> suggests that a work is delegated to another entity, but for instance
> in this case it is just used as a storage area:
> 
> 	ol = rte_pktmbuf_offload_alloc(pool, RTE_PKTMBUF_OL_CRYPTO);
> 	rte_crypto_op_attach_session(&ol->op.crypto, session);
> 	ol->op.crypto.digest.data = rte_pktmbuf_append(m, digest_len);
> 	ol->op.crypto.digest.phys_addr = ...;
> 	/* ... */
> 	rte_pktmbuf_offload_attach(m, ol);
> 	ret = rte_cryptodev_enqueue_burst(dev_id, qp_id, &m, 1);
> 
> Do you have some other examples in mind that would use this API?
> 
> >>> +/** Rearms rte_mbuf_offload default parameters */
> >>> +static inline void
> >>> +__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
> >>> +        enum rte_mbuf_ol_op_type type)
> >>> +{
> >>> +    ol->m = NULL;
> >>> +    ol->type = type;
> >>> +
> >>> +    switch (type) {
> >>> +    case RTE_PKTMBUF_OL_CRYPTO:
> >>> +        __rte_crypto_op_reset(&ol->op.crypto); break;
> >>> +    default:
> >>> +        break;
> >>> +    }
> >>> +}
> >>
> >> Would it work if several OL are registered?
> >
> > I can't see any reason why it wouldn't
> 
> Sorry, I read it to quickly. I thought it was a
> rte_pktmbuf_offload_detach() function. By the way there is no
> such function?
> 
> 
> >> Also, what is not clear to me is how the offload structure is freed.
> >> For instance, I think that calling rte_pktmbuf_free(m) on a mbuf
> >> that has a offload attached would result in a leak.
> >>
> >> It would mean that it is not allowed to call any function that free or
> >> reassemble a mbuf when an offload is attached.
> >
> > We just need to walk the chain of offloads calling
> > rte_pktmbuf_offload_free(), before freeing the mbuf, which will be an
> > issue with any externally attached meta data. In the case of
> > reassembling I don't see why we would just move the chain to the head mbuf.
> 
> Walking the chain of offload + adding the initialization will probably
> have a little cost that should be evaluated.
> 
> The freeing is probably not the only problem:
> - packet duplication: are the offload infos copied? If no, it means that
>   the copy is not exactly a copy
> - if you chain 2 mbufs that both have offload info attached, how does it
>   behave?
> - if you prepend a segment to your mbuf, you need to copy the mbuf
>   offload pointer, and also parse the list of offload to update the
>   ol->m pointer of each element.
> 
> >> It seems that these offload structures are only used to pass crypto
> >> info to the cryptodev. Would it be a problem to have an API like this?
> >>
> >>    rx_burst(port, q, mbuf_tab, crypto_tab, n);
> >>
> >
> > I really dislike this option, there's no direct linkage between mbuf and
> > offload operation.
> >
> >> Or even:
> >>
> >>    rx_burst(port, q, crypto_tab, n);
> >>
> >>    with each *crypto_tab pointing to a mbuf
> >>
> >
> > I looked at this but it would really hamstring any pipelining
> > applications which might want to attach multiple offloads to a mbuf at a
> > point in the pipeline for processing at later steps.
> 
> As far as I can see, there is already a way to chain several crypto
> operations in the crypto structure.
> 
> Another option is to use the mbuf offload API (or the m->userdata
> pointer which already works for that) only in the application:
> 
> - the field is completely ignored by the mbuf api and the dpdk driver
> - it is up to the application to initialize it and free it
> 
> -> it can only be used when passing mbuf from a part of the app
>    to another, so it perfectly matches the pipeline use case.

I don't think we should start to re-use userdata.
Userdata was intended for the upper-layer app to pass/store its
private data associated with a mbuf, and we probably should keep it this way,
while mbuf_offload (or whatever we'll name it) is supposed to keep the data
necessary for crypto (and other future types of devices) to operate with a mbuf.

All your comments above about this new field being ignored by most mbuf
operations (copy/attach/append/free, etc.) are valid.
But I suppose for now we can just state that mbuf_offload is not supported by
all these mbuf ops.
I understand that the current API is probably not perfect and might need to be revised in the future.
Again, as it is a completely new feature, I suspect there will be a lot of change requests for it anyway.
But we need some generic way to pass other (non-ethdev) specific information within a mbuf
between the user level and PMDs for non-Ethernet devices (and probably between different PMDs too).

Konstantin

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-23 11:52                     ` Ananyev, Konstantin
@ 2015-11-23 12:16                       ` Declan Doherty
  2015-11-23 13:08                         ` Olivier MATZ
  0 siblings, 1 reply; 115+ messages in thread
From: Declan Doherty @ 2015-11-23 12:16 UTC (permalink / raw)
  To: Ananyev, Konstantin, Olivier MATZ, dev

On 23/11/15 11:52, Ananyev, Konstantin wrote:
> Hi Olivier,
>
>> On 11/20/2015 06:26 PM, Declan Doherty wrote:
>>>> The new files are called rte_mbuf_offload, but from what I understand,
>>>> it is more like a mbuf metadata api. What you call "offload operation"
>>>> is not called because an offload is attached, but because you call
>>>> rte_cryptodev_enqueue_burst() on it.
>>>
>>> Maybe rte_mbuf_offload_metadata would be a better name, if not a bit
>>> more long winded :) The idea of this API set is to give a generic
>>> framework for attaching the offload operation metadata to a mbuf
>>> which will be retrieved at a later point, when the particular offload
>>> burst function is called. I guess as we only have a single offload
>>> device at the moment the API may look a little over the top!
>>
>> Indeed, not sure rte_mbuf_offload_metadata is better.
>> I'm still not convinced that offload should appear in the name, it
>> is a bit confusing with hardware offloads (ol_flags). Also, it
>> suggests that a work is delegated to another entity, but for instance
>> in this case it is just used as a storage area:
>>
>> 	ol = rte_pktmbuf_offload_alloc(pool, RTE_PKTMBUF_OL_CRYPTO);
>> 	rte_crypto_op_attach_session(&ol->op.crypto, session);
>> 	ol->op.crypto.digest.data = rte_pktmbuf_append(m, digest_len);
>> 	ol->op.crypto.digest.phys_addr = ...;
>> 	/* ... */
>> 	rte_pktmbuf_offload_attach(m, ol);
>> 	ret = rte_cryptodev_enqueue_burst(dev_id, qp_id, &m, 1);
>>
>> Do you have some other examples in mind that would use this API?
>>
>>>>> +/** Rearms rte_mbuf_offload default parameters */
>>>>> +static inline void
>>>>> +__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
>>>>> +        enum rte_mbuf_ol_op_type type)
>>>>> +{
>>>>> +    ol->m = NULL;
>>>>> +    ol->type = type;
>>>>> +
>>>>> +    switch (type) {
>>>>> +    case RTE_PKTMBUF_OL_CRYPTO:
>>>>> +        __rte_crypto_op_reset(&ol->op.crypto); break;
>>>>> +    default:
>>>>> +        break;
>>>>> +    }
>>>>> +}
>>>>
>>>> Would it work if several OL are registered?
>>>
>>> I can't see any reason why it wouldn't
>>
>> Sorry, I read it to quickly. I thought it was a
>> rte_pktmbuf_offload_detach() function. By the way there is no
>> such function?
>>
>>
>>>> Also, what is not clear to me is how the offload structure is freed.
>>>> For instance, I think that calling rte_pktmbuf_free(m) on a mbuf
>>>> that has a offload attached would result in a leak.
>>>>
>>>> It would mean that it is not allowed to call any function that free or
>>>> reassemble a mbuf when an offload is attached.
>>>
>>> We just need to walk the chain of offloads calling
>>> rte_pktmbuf_offload_free(), before freeing the mbuf, which will be an
>>> issue with any externally attached meta data. In the case of
>>> reassembling I don't see why we would just move the chain to the head mbuf.
>>
>> Walking the chain of offload + adding the initialization will probably
>> have a little cost that should be evaluated.
>>
>> The freeing is probably not the only problem:
>> - packet duplication: are the offload infos copied? If no, it means that
>>    the copy is not exactly a copy
>> - if you chain 2 mbufs that both have offload info attached, how does it
>>    behave?
>> - if you prepend a segment to your mbuf, you need to copy the mbuf
>>    offload pointer, and also parse the list of offload to update the
>>    ol->m pointer of each element.
>>
>>>> It seems that these offload structures are only used to pass crypto
>>>> info to the cryptodev. Would it be a problem to have an API like this?
>>>>
>>>>     rx_burst(port, q, mbuf_tab, crypto_tab, n);
>>>>
>>>
>>> I really dislike this option, there's no direct linkage between mbuf and
>>> offload operation.
>>>
>>>> Or even:
>>>>
>>>>     rx_burst(port, q, crypto_tab, n);
>>>>
>>>>     with each *crypto_tab pointing to a mbuf
>>>>
>>>
>>> I looked at this but it would really hamstring any pipelining
>>> applications which might want to attach multiple offloads to a mbuf at a
>>> point in the pipeline for processing at later steps.
>>
>> As far as I can see, there is already a way to chain several crypto
>> operations in the crypto structure.
>>
>> Another option is to use the mbuf offload API (or the m->userdata
>> pointer which already works for that) only in the application:
>>
>> - the field is completely ignored by the mbuf api and the dpdk driver
>> - it is up to the application to initialize it and free it
>>
>> -> it can only be used when passing mbuf from a part of the app
>>     to another, so it perfectly matches the pipeline use case.
>
> I don't think we should start to re-use userdata.
> Userdata was intended for the upper layer app to pass/store its
> private data associated with mbuf, and we probably should keep it this way.
> While mbuf_offload (or whatever we'll name it) supposed to keep data
> necessary for crypto (and other future type of devices) to operate with mbuf.
>
> All your comments above about that this new field is just ignored by most mbuf
> operations (copy/attach/append/free, etc) are valid.
> But I suppose for now we can just state that mbuf_offload is not supported by
> all these mbuf ops.
> I understand that current API is probably not perfect and might need to be revised in future.
> Again, as it is completely new feature, I suspect it would be a lot of change requests for it anyway.
> But we need some generic way to pass other (not ethdev) specific information within mbuf
> between user-level and PMDs for non-ethernet devices (and probably between different PMDs too).
>
> Konstantin
>


Just to reiterate Konstantin's point above: I'm aware that a number of 
mbuf operations currently are not supported and are not aware of the 
rte_mbuf_offload, but we will be continuing development throughout 2.3 
and beyond to address this. Also, I hope we will get much more community 
feedback and interaction so we can get a more definite feature set for 
the library. By minimizing the features in this release, we will have 
more flexibility in how we develop this part of handling offloaded 
operations, and it will limit the ABI issues we will face if it turns 
out we need to change this going forward.

Thanks
Declan

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-23 12:16                       ` Declan Doherty
@ 2015-11-23 13:08                         ` Olivier MATZ
  2015-11-23 14:17                           ` Thomas Monjalon
  2015-11-23 14:33                           ` Declan Doherty
  0 siblings, 2 replies; 115+ messages in thread
From: Olivier MATZ @ 2015-11-23 13:08 UTC (permalink / raw)
  To: Declan Doherty, Ananyev, Konstantin, dev

Hi,

On 11/23/2015 01:16 PM, Declan Doherty wrote:
>> I don't think we should start to re-use userdata.
>> Userdata was intended for the upper layer app to pass/store its
>> private data associated with mbuf, and we probably should keep it this
>> way.

If the crypto API PMD takes both mbuf and crypto instead of just the
mbuf, the mbuf_offload and the userdata wouldn't be very different,
as the metadata would stay inside the application.

So, if it's application private (the dpdk does not know how to handle
it in case of mbuf free, dup, ...), the content of it should be
app-specific.

>> While mbuf_offload (or whatever we'll name it) supposed to keep data
>> necessary for crypto (and other future type of devices) to operate
>> with mbuf.

Any idea of future devices?

>> All your comments above about that this new field is just ignored by
>> most mbuf
>> operations (copy/attach/append/free, etc) are valid.
>> But I suppose for now we can just state that mbuf_offload is not
>> supported by
>> all these mbuf ops.

So, who is responsible for freeing the mbuf offload metadata?
The crypto PMD?

It means that as soon as the mbuf offload is added to the mbuf, it is
not possible to call any other dpdk function that would process the
mbuf... if we cannot call anything else before, why not just pass
the crypto argument as a parameter?

Managing offload data would even be more complex in the future if there
are more than one mbuf_offload attached to the mbuf.

>> I understand that current API is probably not perfect and might need
>> to be revised in future.

The problem is that it's not easy to change the dpdk API now.

>> Again, as it is completely new feature, I suspect it would be a lot of
>> change requests for it anyway.
>> But we need some generic way to pass other (not ethdev) specific
>> information within mbuf
>> between user-level and PMDs for non-ethernet devices (and probably
>> between different PMDs too).

If a crypto PMD needs crypto info, why not add a crypto argument?
I feel it's clearer from a user point of view.
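
For instance (a signature sketch only, not a worked proposal):

	uint16_t
	rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
			struct rte_mbuf **pkts, struct rte_crypto_op **ops,
			uint16_t nb_pkts);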

About PMD to PMD metadata, do you have any use case?


> Just to re-iterate Konstantin's point above. I'm aware that a number of
> mbuf operations currently are not supported and are not aware of the
> rte_mbuf_offload, but we will be continuing development throughout 2.3
> and beyond to address this. Also I hope we will get much more community
> feedback and interaction so we can get a more definite feature set for
> the library, and  by minimizing the features in this release, it will
> allow more flexibility on how we can develop this part of handling
> offloaded operations and it will limit the ABI issues we will face if it
> turns out we need to change this going forwards.

I can see the amount of work you've done for making the cryptodev
happen in dpdk. I also recognize that I didn't make comments very early.

If we really want to have this feature in the next release, maybe there
is a way to mark it as experimental, meaning that the API is subject to
change? What do you think?

Thomas, any comment?

Regards,
Olivier

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-23 13:08                         ` Olivier MATZ
@ 2015-11-23 14:17                           ` Thomas Monjalon
  2015-11-23 14:46                             ` Thomas Monjalon
  2015-11-23 14:33                           ` Declan Doherty
  1 sibling, 1 reply; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-23 14:17 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev

2015-11-23 14:08, Olivier MATZ:
> 2015-11-23 12:16, Declan Doherty:
> > 2015-11-23 11:52, Ananyev, Konstantin:
> >> I understand that current API is probably not perfect and might need
> >> to be revised in future.
> 
> The problem is that it's not easy to change the dpdk API now.
[...]
> > Just to re-iterate Konstantin's point above. I'm aware that a number of
> > mbuf operations currently are not supported and are not aware of the
> > rte_mbuf_offload, but we will be continuing development throughout 2.3
> > and beyond to address this. Also I hope we will get much more community
> > feedback and interaction so we can get a more definite feature set for
> > the library, and  by minimizing the features in this release, it will
> > allow more flexibility on how we can develop this part of handling
> > offloaded operations and it will limit the ABI issues we will face if it
> > turns out we need to change this going forwards.
> 
> I can see the amount of work you've done for making the cryptodev
> happen in dpdk. I also recognize that I didn't make comments very early.
> 
> If we really want to have this feature in the next release, maybe there
> is a way to mark it as experimental, meaning that the API is subject to
> change? What do you think?
> 
> Thomas, any comment?

Yes, it is totally new work and it probably needs more time to have a
design working well for most use cases.
As I already discussed with Olivier, I think it should be considered as
experimental. It means we can try it but do not consider it as a stable
API. So the deprecation process would not apply until the experimental
flag is removed.

For the release 2.2, it would be better to remove the crypto dependency
in mbuf. Do you think it is possible?

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-23 13:08                         ` Olivier MATZ
  2015-11-23 14:17                           ` Thomas Monjalon
@ 2015-11-23 14:33                           ` Declan Doherty
  1 sibling, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-23 14:33 UTC (permalink / raw)
  To: Olivier MATZ, Ananyev, Konstantin, dev

On 23/11/15 13:08, Olivier MATZ wrote:
> Hi,
>
> On 11/23/2015 01:16 PM, Declan Doherty wrote:
>>> I don't think we should start to re-use userdata.
>>> Userdata was intended for the upper layer app to pass/store its
>>> private data associated with mbuf, and we probably should keep it this
>>> way.
>
> If the crypto API PMD takes both mbuf and crypto instead of just the
> mbuf, the mbuf_offload and the userdata wouldn't be very different,
> as the metadata would stay inside the application.
>
> So, if it's application private (the dpdk does not know how to handle
> it in case of mbuf free, dup, ...), the content of it should be
> app-specific.
>
>>> While mbuf_offload (or whatever we'll name it) supposed to keep data
>>> necessary for crypto (and other future type of devices) to operate
>>> with mbuf.
>
> Any idea of future devices?

Well the QAT hardware supports compression, so I guess that might be a 
likely candidate for future work.

>
>>> All your comments above about that this new field is just ignored by
>>> most mbuf
>>> operations (copy/attach/append/free, etc) are valid.
>>> By I suppose for now we can just state that for mbuf_offload is not
>>> supported by
>>> all these mbuf ops.
>
> So, who is responsible of freeing the mbuf offload metadata?
> The crypto pmd ?
>
> It means that as soon as the mbuf offload is added to the mbuf, it is
> not possible to call any other dpdk function that would process the
> mbuf... if we cannot call anything else before, why not just passing
> the crypto argument as a parameter?
>
> Managing offload data would even be more complex in the future if there
> are more than one mbuf_offload attached to the mbuf.
>
>>> I understand that current API is probably not perfect and might need
>>> to be revised in future.
>
> The problem is that it's not easy to change the dpdk API now.
>
>>> Again, as it is completely new feature, I suspect it would be a lot of
>>> change requests for it anyway.
>>> But we need some generic way to pass other (not ethdev) specific
>>> information within mbuf
>>> between user-level and PMDs for non-ethernet devices (and probably
>>> between different PMDs too).
>
> If a crypto PMD needs a crypto info, why not adding a crypto argument?
> I feel it's clearer from a user point of view.
>
> About PMD to PMD metadata, do you have any use case?

One use case that comes to mind would be the enablement of inline IPsec. 
To enable this you would need a mechanism to attach the IPsec metadata 
(ESP header offsets etc.) to an mbuf, which the PMD can then use to 
correctly complete the TX descriptor.
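
Purely as a hypothetical illustration of the shape such metadata could
take (nothing like this exists yet):

	/* hypothetical future offload type, alongside RTE_PKTMBUF_OL_CRYPTO */
	struct rte_ipsec_offload {
		uint16_t esp_hdr_offset;	/* offset of the ESP header in the packet */
		uint16_t esp_tail_offset;	/* offset of the ESP trailer */
		uint32_t sa_index;		/* security association to use on TX */
	};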


>
>
>> Just to re-iterate Konstantin's point above. I'm aware that a number of
>> mbuf operations currently are not supported and are not aware of the
>> rte_mbuf_offload, but we will be continuing development throughout 2.3
>> and beyond to address this. Also I hope we will get much more community
>> feedback and interaction so we can get a more definite feature set for
>> the library, and  by minimizing the features in this release, it will
>> allow more flexibility on how we can develop this part of handling
>> offloaded operations and it will limit the ABI issues we will face if it
>> turns out we need to change this going forwards.
>
> I can see the amount of work you've done for making the cryptodev
> happen in dpdk. I also recognize that I didn't make comments very early.
>
> If we really want to have this feature in the next release, maybe there
> is a way to mark it as experimental, meaning that the API is subject to
> change? What do you think?

I wouldn't be against this idea. I think a mechanism for marking new 
libraries / APIs as experimental for their initial release could be very 
useful, allowing new features to be introduced with the flexibility to 
make further API changes based on user feedback, without the overhead of 
ABI management in the subsequent release.

>
> Thomas, any comment?
>
> Regards,
> Olivier
>

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-23 14:17                           ` Thomas Monjalon
@ 2015-11-23 14:46                             ` Thomas Monjalon
  2015-11-23 15:47                               ` Declan Doherty
  0 siblings, 1 reply; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-23 14:46 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-23 15:17, Thomas Monjalon:
> Yes, it is totally new work and it probably needs more time to have a
> design that works well for most use cases.
> As I already discussed with Olivier, I think it should be considered
> experimental. That means we can try it but not consider it a stable
> API. So the deprecation process would not apply until the experimental
> flag is removed.

If nobody complains, I'll apply this v7 and will send a patch to add some
experimental markers.

> For the release 2.2, it would be better to remove the crypto dependency
> in mbuf. Do you think it is possible?

Sorry, forget it, the dependency is in mbuf_offload.

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-23 14:46                             ` Thomas Monjalon
@ 2015-11-23 15:47                               ` Declan Doherty
  0 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-23 15:47 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

On 23/11/15 14:46, Thomas Monjalon wrote:
> 2015-11-23 15:17, Thomas Monjalon:
>> Yes, it is totally new work and it probably needs more time to have a
>> design that works well for most use cases.
>> As I already discussed with Olivier, I think it should be considered
>> experimental. That means we can try it but not consider it a stable
>> API. So the deprecation process would not apply until the experimental
>> flag is removed.
>
> If nobody complains, I'll apply this v7 and will send a patch to add some
> experimental markers.
>
>> For the release 2.2, it would be better to remove the crypto dependency
>> in mbuf. Do you think it is possible?
>
> Sorry, forget it, the dependency is in mbuf_offload.
>

That sounds good to me, thanks Thomas!

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 04/10] mbuf: add new macros to get the physical address of data
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 04/10] mbuf: add new macros to get the physical address of data Declan Doherty
@ 2015-11-25  0:25               ` Thomas Monjalon
  0 siblings, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-25  0:25 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-13 18:58, Declan Doherty:
> +/**
> + * A macro that returns the physical address that points to the start of the
> + * data in the mbuf
> + *
> + * @param m
> + *   The packet mbuf.
> + * @param o
> + *   The offset into the data to calculate address from.
> + */
> +#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)

The parameter o does not exist.
Wrong copy paste, I guess.
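
For reference, a sketch of the obvious fix -- keep the macro as posted and
drop the stray @param o from the comment (the offset-taking variant,
rte_pktmbuf_mtophys_offset, keeps its own @param o documentation):

/**
 * A macro that returns the physical address that points to the start of the
 * data in the mbuf
 *
 * @param m
 *   The packet mbuf.
 */
#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)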

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-11-25  0:32               ` Thomas Monjalon
  0 siblings, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-25  0:32 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-13 18:58, Declan Doherty:
>  lib/librte_cryptodev/rte_crypto.h              |  613 +++++++++++++
>  lib/librte_cryptodev/rte_cryptodev.c           | 1092 ++++++++++++++++++++++++
>  lib/librte_cryptodev/rte_cryptodev.h           |  649 ++++++++++++++

Doxygen reports some errors:

lib/librte_cryptodev/rte_crypto.h:565: warning: unable to resolve reference to `rte_crypto_hash_setup_data' for \ref command

lib/librte_cryptodev/rte_crypto.h:585: warning: argument 'm' of command @param is not found in the argument list of __rte_crypto_op_reset(struct rte_crypto_op *op)

lib/librte_cryptodev/rte_crypto.h:589: warning: unable to resolve reference to `rte_crypto_hash_params' for \ref command                                                                                                          

lib/librte_cryptodev/rte_crypto.h:593: warning: The following parameters of __rte_crypto_op_reset(struct rte_crypto_op *op) are not documented:

lib/librte_cryptodev/rte_cryptodev.h:241: warning: argument 'nb_qp_queue' of command @param is not found in the argument list of rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)

lib/librte_cryptodev/rte_cryptodev.h:256: warning: The following parameters of rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config) are not documented:

lib/librte_cryptodev/rte_cryptodev.h:563: warning: argument 'queue_id' of command @param is not found in the argument list of rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id, struct rte_mbuf **pkts, uint16_t nb_pkts)

lib/librte_cryptodev/rte_cryptodev.h:563: warning: argument 'tx_pkts' of command @param is not found in the argument list of rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id, struct rte_mbuf **pkts, uint16_t nb_pkts)

lib/librte_cryptodev/rte_cryptodev.h:593: warning: The following parameters of rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id, struct rte_mbuf **pkts, uint16_t nb_pkts) are not documented:
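
Taking the last warning as an example, a sketch of a comment block that
would match the signature Doxygen prints; the parameter descriptions are
illustrative and the uint16_t return type is an assumption by analogy with
the ethdev burst API:

/**
 * Enqueue a burst of operations for processing on a crypto device.
 *
 * @param dev_id   The identifier of the crypto device.
 * @param qp_id    The index of the queue pair to submit on.
 * @param pkts     Array of mbufs carrying the crypto operations.
 * @param nb_pkts  The number of packets in the pkts array.
 *
 * @return
 *   The number of packets actually enqueued.
 */
uint16_t
rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
		struct rte_mbuf **pkts, uint16_t nb_pkts);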

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-11-25  1:00               ` Thomas Monjalon
  2015-11-25  9:16                 ` Mcnamara, John
  2015-11-25 10:34               ` Thomas Monjalon
  2015-11-25 12:01               ` Mcnamara, John
  2 siblings, 1 reply; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-25  1:00 UTC (permalink / raw)
  To: john.mcnamara; +Cc: dev

2015-11-13 18:58, Declan Doherty:
> +Crypto Device Drivers
> +====================================

That is quite a long underline :)

It raises a question for me:
John, have you reviewed the crypto docs?

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 10/10] l2fwd-crypto: crypto
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 10/10] l2fwd-crypto: crypto Declan Doherty
@ 2015-11-25  1:03               ` Thomas Monjalon
  0 siblings, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-25  1:03 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-13 18:58, Declan Doherty:
> +               printf("\nStatistics for cryptodev %lu -------------------------"
> +                          "\nPackets enqueued: %28"PRIu64
> +                          "\nPackets dequeued: %28"PRIu64
> +                          "\nPackets errors: %30"PRIu64,
> +                          cdevid,
> +                          crypto_statistics[cdevid].enqueued,
> +                          crypto_statistics[cdevid].dequeued,
> +                          crypto_statistics[cdevid].errors);

There is a compilation error on 32-bit:

examples/l2fwd-crypto/main.c:252:10:
error: format ‘%lu’ expects argument of type ‘long unsigned int’,
but argument 2 has type ‘uint64_t {aka long long unsigned int}’
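
A minimal sketch of the usual fix for this class of error: print the
uint64_t with the PRIu64 format macro from <inttypes.h> instead of %lu,
which is only 32 bits wide on a 32-bit build:

#include <inttypes.h>
#include <stdio.h>

static void
print_cdev_header(uint64_t cdevid)
{
	/* PRIu64 expands to the right conversion on 32- and 64-bit builds */
	printf("\nStatistics for cryptodev %" PRIu64
	       " -------------------------", cdevid);
}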

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-25  1:00               ` Thomas Monjalon
@ 2015-11-25  9:16                 ` Mcnamara, John
  0 siblings, 0 replies; 115+ messages in thread
From: Mcnamara, John @ 2015-11-25  9:16 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, November 25, 2015 1:01 AM
> To: Mcnamara, John
> Cc: dev@dpdk.org; Doherty, Declan
> Subject: Re: [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new
> QAT DPDK PMD.
> 
> 2015-11-13 18:58, Declan Doherty:
> > +Crypto Device Drivers
> > +====================================
> 
> That is quite a long underline :)
> 
> It raises a question for me:
> John, have you reviewed the crypto docs?

No. I'll do it now.

John.
-- 

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-11-25 10:32               ` Thomas Monjalon
  0 siblings, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-25 10:32 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-13 18:58, Declan Doherty:
> +To build DPDK with the AESNI_MB_PMD the user is required to download the library
> +from `here <https://downloadcenter.intel.com/download/22972>`_ and compile it on
> +their user system before building DPDK.

Maybe it is worth saying that yasm must be installed
and compilation is done with "make YASM=yasm" (because of a hard-coded path).

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
  2015-11-25  1:00               ` Thomas Monjalon
@ 2015-11-25 10:34               ` Thomas Monjalon
  2015-11-25 10:49                 ` Thomas Monjalon
  2015-11-25 12:01               ` Mcnamara, John
  2 siblings, 1 reply; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-25 10:34 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-13 18:58, Declan Doherty:
> +Build and install the SRIOV-enabled QAT driver
> +
> +.. code-block:: console
> +
> +    "mkdir /QAT; cd /QAT"
> +    copy qatmux.l.2.3.0-34.tgz to this location
> +    "tar zxof qatmux.l.2.3.0-34.tgz"
> +    "export ICP_WITHOUT_IOMMU=1"
> +    "./installer.sh install QAT1.6 host"

People may want to install QAT in a specific directory to just test
build regression.
Is there an easy way to do it?

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-25 10:34               ` Thomas Monjalon
@ 2015-11-25 10:49                 ` Thomas Monjalon
  2015-11-25 10:59                   ` Declan Doherty
  0 siblings, 1 reply; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-25 10:49 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-25 11:34, Thomas Monjalon:
> 2015-11-13 18:58, Declan Doherty:
> > +Build and install the SRIOV-enabled QAT driver
> > +
> > +.. code-block:: console
> > +
> > +    "mkdir /QAT; cd /QAT"
> > +    copy qatmux.l.2.3.0-34.tgz to this location
> > +    "tar zxof qatmux.l.2.3.0-34.tgz"
> > +    "export ICP_WITHOUT_IOMMU=1"
> > +    "./installer.sh install QAT1.6 host"
> 
> People may want to install QAT in a specific directory to just test
> build regression.
> Is there an easy way to do it?

For reference, I use this script:

tar xf qatmux-2.5.0-80/QAT1.6/QAT1.6.L.2.5.0-80.tar.gz
export ICP_ROOT=$(readlink -e $qat_dir)
export ICP_ENV_DIR=$ICP_ROOT/quickassist/build_system/build_files/env_files
export ICP_TOOLS_TARGET="accelcomp"                                                                              
make -C $ICP_ROOT/quickassist

And it fails here:

qat-1.6/quickassist/adf/include/icp_adf_transport_dp.h:118:18:
error: inlining failed in call to always_inline ‘icp_adf_pollQueue’:
function body not available

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-25 10:49                 ` Thomas Monjalon
@ 2015-11-25 10:59                   ` Declan Doherty
  0 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 10:59 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

On 25/11/15 10:49, Thomas Monjalon wrote:
> 2015-11-25 11:34, Thomas Monjalon:
>> 2015-11-13 18:58, Declan Doherty:
>>> +Build and install the SRIOV-enabled QAT driver
>>> +
>>> +.. code-block:: console
>>> +
>>> +    "mkdir /QAT; cd /QAT"
>>> +    copy qatmux.l.2.3.0-34.tgz to this location
>>> +    "tar zxof qatmux.l.2.3.0-34.tgz"
>>> +    "export ICP_WITHOUT_IOMMU=1"
>>> +    "./installer.sh install QAT1.6 host"
>>
>> People may want to install QAT in a specific directory to just test
>> build regression.
>> Is there an easy way to do it?
>
> For reference, I use this script:
>
> tar xf qatmux-2.5.0-80/QAT1.6/QAT1.6.L.2.5.0-80.tar.gz
> export ICP_ROOT=$(readlink -e $qat_dir)
> export ICP_ENV_DIR=$ICP_ROOT/quickassist/build_system/build_files/env_files
> export ICP_TOOLS_TARGET="accelcomp"
> make -C $ICP_ROOT/quickassist
>
> And it fails here:
>
> qat-1.6/quickassist/adf/include/icp_adf_transport_dp.h:118:18:
> error: inlining failed in call to always_inline ‘icp_adf_pollQueue’:
> function body not available
>


Hey Thomas, I'm just following up with the team on this. The QAT PMD has
no actual build dependency on the driver being installed; it's only
required for management of the PF and allocation of the VFs which are
subsequently used within DPDK. The only external header dependencies
outside of DPDK should be on openssl/libcrypto.

Declan

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
  2015-11-25  1:00               ` Thomas Monjalon
  2015-11-25 10:34               ` Thomas Monjalon
@ 2015-11-25 12:01               ` Mcnamara, John
  2 siblings, 0 replies; 115+ messages in thread
From: Mcnamara, John @ 2015-11-25 12:01 UTC (permalink / raw)
  To: Doherty, Declan, dev

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Declan Doherty
> Sent: Friday, November 13, 2015 6:58 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT
> DPDK PMD.
> 
> This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
> hardware accelerator.




> +Crypto Device Drivers
> +====================================
> +
> +|today|
> +
> +
> +**Contents**
> +

It is best to omit the |today| since it isn't generally useful. Also, **Contents** is added automatically in the PDF output and isn't really required in the HTML output, so that can be omitted as well.





> +Quick Assist Crypto Poll Mode Driver
> +====================================
> +
> +The QAT PMD provides poll mode crypto driver support for **Intel
> +QuickAssist Technology DH895xxC** hardware accelerator. QAT PMD has
> +current been tested on Fedora 21 64-bit with gcc and on the 4.3
> kernel.org

Typo: current(ly), but it is probably clearer without that word.



> +Features
> +--------
> +QAT PMD has support for:
> +
> +Cipher algorithms:
> +* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
> +* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
> +* RTE_CRYPTO_SYM_CIPHER_AES512_CBC

The list needs to be separated from the previous line with a blank line to render correctly. It would also be worth rendering the algorithms as fixed width text.

Cipher algorithms:

* ``RTE_CRYPTO_SYM_CIPHER_AES128_CBC``
* ``RTE_CRYPTO_SYM_CIPHER_AES256_CBC``
* ``RTE_CRYPTO_SYM_CIPHER_AES512_CBC``

Same comments for next paragraph.




> +Installation
> +------------
> +To use the DPDK QAT PMD an SRIOV-enabled QAT kernel driver is required.
> +The VF devices exposed by this driver will be used by QAT PMD.
> +
> +If you are running on kernel 4.3 or greater, see instructions for
> "Installation using
> +kernel.org QAT driver".  If you're on a kernel earlier than 4.3, see
> "Installation using the
> +01.org QAT driver".

The section references don't match the section names. These could also be links like this:

If you are running on kernel 4.3 or greater, see instructions for `Installation using
kernel.org driver`_ below. If you're on a kernel earlier than 4.3, see `Installation using
01.org QAT driver`_.



> +Compiling the 01.org driver - notes:
> +If using a later kernel and the build fails with an error relating to
> strict_stroul not being available patch the following file:
> +
> +.. code-block:: console

You could "use code-block:: diff" here to render the patch nicely in the docs.



> +If build fails due to missing header files you may need to do following:
> +  *  sudo yum install zlib-devel
> +  *  sudo yum install openssl-devel

Probably should be rendered as a code block with ::


> +
> +Installation using kernel.org driver
> +------------------------------------
> +
> +Assuming you are running on at least a 4.3 kernel, you can use the stock
> kernel.org QAT
> +driver to start the QAT hardware.
> +
> +Steps below assume
> +  * running DPDK on a platform with one DH895xCC device
> +  * on a kernel at least version 4.3
> +
> +In BIOS ensure that SRIOV is enabled and VT-d is disabled.
> +
> +Ensure the QAT driver is loaded on your system, by executing:
> +    lsmod | grep qat

The commands in this section should be rendered with ::

Ensure the QAT driver is loaded on your system, by executing::

    lsmod | grep qat



> +Binding the available VFs to the DPDK UIO driver
> +------------------------------------------------
> +The unbind command below assumes bdfs of 03:01.00-03:04.07, if yours are
> different adjust the unbind command below.
> +
> +Make available to DPDK
> +
> +.. code-block:: console
> +
> +   cd $(RTE_SDK) (See http://dpdk.org/doc/quick-start to install DPDK)
> +   "modprobe uio"
> +   "insmod ./build/kmod/igb_uio.ko"
> +   "for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n
> 0000:03:0${device}.${fn} >
> /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind;done ;done"
> +   "echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id"
> +

This is too long to be rendered in PDF. Something like the following would work better in the docs while being functionally the same:

The unbind command below assumes ``bdfs`` of ``03:01.00-03:04.07``, if yours are different adjust the unbind command below::

   cd $RTE_SDK
   modprobe uio
   insmod ./build/kmod/igb_uio.ko

   for device in $(seq 1 4); do \
       for fn in $(seq 0 7); do \
           echo -n 0000:03:0${device}.${fn} > \
           /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind; \
       done; \
   done

   echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id

You can use ``lspci -vvd:443`` to confirm that all devices are now in use by the igb_uio kernel driver.


John.
-- 

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework
  2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
                               ` (9 preceding siblings ...)
  2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 10/10] l2fwd-crypto: crypto Declan Doherty
@ 2015-11-25 13:25             ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
                                 ` (10 more replies)
  10 siblings, 11 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

This series of patches defines a set of application burst oriented APIs for
asynchronous symmetric cryptographic functions within DPDK. It also contains a
poll mode driver cryptographic device framework for the implementation of
crypto devices within DPDK.

In the patch set we also have included 2 reference implementations of crypto
PMDs. Currently both implementations support AES-CBC with
HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a purely
software PMD based on Intel's multi-buffer library, which utilises both
AES-NI instructions and vector operations to accelerate crypto operations and
the second PMD utilises Intel's Quick Assist Technology (on DH895xxC) to
provide hardware accelerated crypto operations.

The API set supports two functional modes of operation:

1, A session oriented mode. In this mode the user creates a crypto session
which defines in advance all the immutable data required to perform a
particular crypto operation, including the cipher/hash algorithms and
operations to be performed as well as the keys to be used, etc. The session
is then referenced by the crypto operation data structure, which is specific
to each mbuf. It contains all mutable data about the crypto operation to be
performed, such as data offsets and lengths into the mbuf's data payload for
the cipher and hash operations.

2, A session-less mode. In this mode the user is able to provision crypto
operations on an mbuf without the need to have a cached session created in
advance, but at the cost of the overhead of calculating authentication
pre-computes and performing key expansions in-line with the crypto
operation. The crypto xform chain is directly attached to the op struct in
this mode, so the op struct now contains all of the immutable crypto
operation parameters that would normally be set within a session. Once all
mutable and immutable parameters are set, the crypto operation data structure
can be attached to the specified mbuf and enqueued on a specified crypto
device for processing.
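
A compressed, self-contained sketch of the difference between the two
modes -- the types and helpers below are placeholders, not the actual
structures from rte_crypto.h:

#include <stddef.h>
#include <stdint.h>

struct xform_chain { int placeholder; };              /* immutable params  */
struct toy_session { const struct xform_chain *x; };  /* pre-built session */

struct toy_crypto_op {
	struct toy_session *session;          /* set in session mode       */
	const struct xform_chain *xform;      /* set in session-less mode  */
	uint32_t cipher_offset, cipher_len;   /* mutable per-mbuf fields   */
};

/* Session mode: reference a session built in advance from the op. */
static inline void
op_set_session(struct toy_crypto_op *op, struct toy_session *s)
{
	op->session = s;
	op->xform = NULL;
}

/* Session-less mode: attach the xform chain directly to the op, paying the
 * key-expansion and pre-compute cost per operation instead. */
static inline void
op_set_xform(struct toy_crypto_op *op, const struct xform_chain *x)
{
	op->session = NULL;
	op->xform = x;
}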

The patch set contains the following features:
 - Crypto device APIs and device framework
 - Implementation of a software crypto PMD based on multi-buffer library
 - Implementation of a hardware crypto PMD based on Intel QAT (DH895xxC)
 - Unit and performance tests which give an example of utilising the crypto
   APIs.
 - Sample application which performs crypto operations on the IP payload of the
   packets being forwarded

Current Status:
There is no support for chained mbufs and, as mentioned above, the PMDs
currently implement support for AES128-CBC/AES192-CBC/AES256-CBC,
HMAC_SHA1/SHA256/SHA512 and AES_XCBC_MAC.

v8:
  - Doxygen comment fix for rte_pktmbuf_mtophys macro
  - Doxygen fixes for public headers in rte_crypto.h
  - QAT documentation tidy-up based on J. McNamara's comments
  - Documented the requirement to set the YASM path when building the
    multi-buffer library
  - l2fwd-crypto: fix for the 32-bit build; fix for a possible memory leak if
    rte_cryptodev_enqueue_burst fails; and handling for failure to allocate
    an rte_mbuf_offload.

v7:
  - Fix typos in the commit message of the "eal: add __rte_packed / __rte_aligned
    macros" patch
  - Include rte_mbuf_offload in the doxygen build and update the file comments
    to clarify library usage. Also moved a clean-up which was in the wrong
    patch into this rte_mbuf_offload patch.
  - Tidy up the map file for the cryptodev library.
  - Add l2fwd-crypto to the main examples makefile.
v6:
  - Fix 32-bit build issue caused by casting in the new rte_pktmbuf_mtophys_offset macro
  - Fix truncation of log messages by the new rte_pmd_debug_trace inline function

v5:
  - Make the ethdev macros for function pointer and port id checking public and
    available for use by the cryptodev. The initial two patches combine changes
    from the original cryptodev patch and the discussion in
    http://dpdk.org/ml/archives/dev/2015-November/027871.html
  - Split out the changes creating the new __rte_packed and __rte_aligned macros
    into separate patches from the main cryptodev patch set for clarity
  - Further code cleaning; removal of the currently unsupported GCM code from
    the aesni_mb PMD
 
v4:
  - Some more EOF whitespace and checkpatch fixes

v3:
  - Fixes a document build error which I missed in the v2
  - Fixes for remaining checkpatch errors
  - Disables the QAT and AESNI_MB PMDs being built by default as they have
    external library dependencies

v2: 
 - Introduces a new library to support attaching offload operations to a mbuf
 - Remove unused APIs from cryptodev
 - PMD code refactor due to the new rte_mbuf_offload structure
 - General bug fixes and code tidy up


Declan Doherty (10):
  ethdev: rename macros to have RTE_ prefix
  ethdev: make error checking macros public
  eal: add __rte_packed /__rte_aligned macros
  mbuf: add new macros to get the physical address of data
  cryptodev: Initial DPDK Crypto APIs and device framework release
  mbuf_offload: library to support attaching offloads to a mbuf
  qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  aesni_mb_pmd: Initial implementation of multi buffer based crypto
    device
  app/test: add cryptodev unit and performance tests
  l2fwd-crypto: crypto

 MAINTAINERS                                        |   14 +
 app/test/Makefile                                  |    4 +
 app/test/test.c                                    |   92 +-
 app/test/test.h                                    |   34 +-
 app/test/test_cryptodev.c                          | 1986 +++++++++++++++++++
 app/test/test_cryptodev.h                          |   68 +
 app/test/test_cryptodev_perf.c                     | 2062 ++++++++++++++++++++
 app/test/test_link_bonding.c                       |    6 +-
 app/test/test_link_bonding_mode4.c                 |    7 +-
 app/test/test_link_bonding_rssconf.c               |    7 +-
 config/common_bsdapp                               |   37 +-
 config/common_linuxapp                             |   37 +-
 doc/api/doxy-api-index.md                          |    2 +
 doc/api/doxy-api.conf                              |    2 +
 doc/guides/cryptodevs/aesni_mb.rst                 |   85 +
 doc/guides/cryptodevs/index.rst                    |   39 +
 doc/guides/cryptodevs/qat.rst                      |  219 +++
 doc/guides/index.rst                               |    1 +
 drivers/Makefile                                   |    1 +
 drivers/crypto/Makefile                            |   38 +
 drivers/crypto/aesni_mb/Makefile                   |   63 +
 drivers/crypto/aesni_mb/aesni_mb_ops.h             |  210 ++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |  669 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     |  298 +++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h |  229 +++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |    3 +
 drivers/crypto/qat/Makefile                        |   63 +
 .../qat/qat_adf/adf_transport_access_macros.h      |  174 ++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            |  316 +++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         |  404 ++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            |  306 +++
 drivers/crypto/qat/qat_adf/qat_algs.h              |  125 ++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   |  601 ++++++
 drivers/crypto/qat/qat_crypto.c                    |  561 ++++++
 drivers/crypto/qat/qat_crypto.h                    |  124 ++
 drivers/crypto/qat/qat_logs.h                      |   78 +
 drivers/crypto/qat/qat_qp.c                        |  429 ++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |    3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             |  137 ++
 examples/Makefile                                  |    1 +
 examples/l2fwd-crypto/Makefile                     |   50 +
 examples/l2fwd-crypto/main.c                       | 1489 ++++++++++++++
 lib/Makefile                                       |    2 +
 lib/librte_cryptodev/Makefile                      |   60 +
 lib/librte_cryptodev/rte_crypto.h                  |  610 ++++++
 lib/librte_cryptodev/rte_cryptodev.c               | 1092 +++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |  651 ++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h           |  549 ++++++
 lib/librte_cryptodev/rte_cryptodev_version.map     |   32 +
 lib/librte_eal/common/include/rte_dev.h            |   53 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eal/common/include/rte_memory.h         |   14 +-
 lib/librte_ether/rte_ethdev.c                      |  619 +++---
 lib/librte_ether/rte_ethdev.h                      |   26 +
 lib/librte_mbuf/rte_mbuf.h                         |   27 +
 lib/librte_mbuf_offload/Makefile                   |   52 +
 lib/librte_mbuf_offload/rte_mbuf_offload.c         |  100 +
 lib/librte_mbuf_offload/rte_mbuf_offload.h         |  302 +++
 .../rte_mbuf_offload_version.map                   |    7 +
 mk/rte.app.mk                                      |    9 +
 60 files changed, 14891 insertions(+), 389 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v8 01/10] ethdev: rename macros to have RTE_ prefix
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 02/10] ethdev: make error checking macros public Declan Doherty
                                 ` (9 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

The macros to check that the function pointers and port ids are valid
for an ethdev are potentially useful to have in a common header for
use with all PMDs. However, since they would then become externally
visible, we apply the RTE_ & RTE_ETH_ prefixes to them as appropriate.
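
A minimal, self-contained sketch of the reuse this enables: once the checks
carry the RTE_ prefix and live in a common header, any PMD-facing library
can apply them. The macros below are copied from the diff (non-debug variant
of the trace macro); my_dev and my_dev_ops are hypothetical:

#include <errno.h>

#define RTE_PMD_DEBUG_TRACE(fmt, args...)

#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
	if ((func) == NULL) { \
		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
		return (retval); \
	} \
} while (0)

struct my_dev;
struct my_dev_ops { int (*start)(struct my_dev *dev); };
struct my_dev { const struct my_dev_ops *ops; };

static int
my_dev_start(struct my_dev *dev)
{
	/* Bail out with -ENOTSUP if the driver left the op unimplemented. */
	RTE_FUNC_PTR_OR_ERR_RET(dev->ops->start, -ENOTSUP);
	return dev->ops->start(dev);
}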

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>

---
 lib/librte_ether/rte_ethdev.c | 607 +++++++++++++++++++++---------------------
 1 file changed, 304 insertions(+), 303 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b19ac9a..71775dc 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -70,58 +70,59 @@
 #include "rte_ethdev.h"
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
-#define PMD_DEBUG_TRACE(fmt, args...) do {                        \
+#define RTE_PMD_DEBUG_TRACE(fmt, args...) do { \
 		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
 	} while (0)
 #else
-#define PMD_DEBUG_TRACE(fmt, args...)
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
 #endif
 
 /* Macros for checking for restricting functions to primary instance only */
-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define PROC_PRIMARY_OR_RET() do { \
+#define RTE_PROC_PRIMARY_OR_RET() do { \
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for invalid function pointers in dev_ops structure */
-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return (retval); \
 	} \
 } while (0)
 
-#define FUNC_PTR_OR_RET(func) do { \
+#define RTE_FUNC_PTR_OR_RET(func) do { \
 	if ((func) == NULL) { \
-		PMD_DEBUG_TRACE("Function not supported\n"); \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
 		return; \
 	} \
 } while (0)
 
 /* Macros to check for valid port */
-#define VALID_PORTID_OR_ERR_RET(port_id, retval) do {		\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return retval;					\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) {  \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return retval; \
+	} \
 } while (0)
 
-#define VALID_PORTID_OR_RET(port_id) do {			\
-	if (!rte_eth_dev_is_valid_port(port_id)) {		\
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return;						\
-	}							\
+#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return; \
+	} \
 } while (0)
 
+
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 static struct rte_eth_dev_data *rte_eth_dev_data;
@@ -244,7 +245,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 
 	port_id = rte_eth_dev_find_free_port();
 	if (port_id == RTE_MAX_ETHPORTS) {
-		PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
+		RTE_PMD_DEBUG_TRACE("Reached maximum number of Ethernet ports\n");
 		return NULL;
 	}
 
@@ -252,7 +253,7 @@ rte_eth_dev_allocate(const char *name, enum rte_eth_dev_type type)
 		rte_eth_dev_data_alloc();
 
 	if (rte_eth_dev_allocated(name) != NULL) {
-		PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
+		RTE_PMD_DEBUG_TRACE("Ethernet Device with name %s already allocated!\n",
 				name);
 		return NULL;
 	}
@@ -339,7 +340,7 @@ rte_eth_dev_init(struct rte_pci_driver *pci_drv,
 	if (diag == 0)
 		return 0;
 
-	PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
+	RTE_PMD_DEBUG_TRACE("driver %s: eth_dev_init(vendor_id=0x%u device_id=0x%x) failed\n",
 			pci_drv->name,
 			(unsigned) pci_dev->id.vendor_id,
 			(unsigned) pci_dev->id.device_id);
@@ -447,10 +448,10 @@ rte_eth_dev_get_device_type(uint8_t port_id)
 static int
 rte_eth_dev_get_addr_by_port(uint8_t port_id, struct rte_pci_addr *addr)
 {
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -463,10 +464,10 @@ rte_eth_dev_get_name_by_port(uint8_t port_id, char *name)
 {
 	char *tmp;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -483,7 +484,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id)
 	int i;
 
 	if (name == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -509,7 +510,7 @@ rte_eth_dev_get_port_by_addr(const struct rte_pci_addr *addr, uint8_t *port_id)
 	struct rte_pci_device *pci_dev = NULL;
 
 	if (addr == NULL) {
-		PMD_DEBUG_TRACE("Null pointer is specified\n");
+		RTE_PMD_DEBUG_TRACE("Null pointer is specified\n");
 		return -EINVAL;
 	}
 
@@ -536,7 +537,7 @@ rte_eth_dev_is_detachable(uint8_t port_id)
 	uint32_t dev_flags;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
@@ -735,7 +736,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
 		rxq = dev->data->rx_queues;
 
@@ -766,20 +767,20 @@ rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -796,20 +797,20 @@ rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
 
 	if (dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			rx_queue_id, port_id);
 		return 0;
@@ -826,20 +827,20 @@ rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already started\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -856,20 +857,20 @@ rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
 
 	if (dev->data->tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) {
-		PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16" of device with port_id=%" PRIu8
 			" already stopped\n",
 			tx_queue_id, port_id);
 		return 0;
@@ -895,7 +896,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else { /* re-configure */
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
 		txq = dev->data->tx_queues;
 
@@ -929,19 +930,19 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of RX queues requested (%u) is greater than max supported(%d)\n",
 			nb_rx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > RTE_MAX_QUEUES_PER_PORT) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 			"Number of TX queues requested (%u) is greater than max supported(%d)\n",
 			nb_tx_q, RTE_MAX_QUEUES_PER_PORT);
 		return -EINVAL;
@@ -949,11 +950,11 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
@@ -965,22 +966,22 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
 	if (nb_rx_q > dev_info.max_rx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_queues=%d > %d\n",
 				port_id, nb_rx_q, dev_info.max_rx_queues);
 		return -EINVAL;
 	}
 	if (nb_rx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
 	if (nb_tx_q > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_queues=%d > %d\n",
 				port_id, nb_tx_q, dev_info.max_tx_queues);
 		return -EINVAL;
 	}
 	if (nb_tx_q == 0) {
-		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
+		RTE_PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
 		return -EINVAL;
 	}
 
@@ -993,7 +994,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	if ((dev_conf->intr_conf.lsc == 1) &&
 		(!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))) {
-			PMD_DEBUG_TRACE("driver %s does not support lsc\n",
+			RTE_PMD_DEBUG_TRACE("driver %s does not support lsc\n",
 					dev->data->drv_name);
 			return -EINVAL;
 	}
@@ -1005,14 +1006,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	if (dev_conf->rxmode.jumbo_frame == 1) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" > max valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
 				(unsigned)dev_info.max_rx_pktlen);
 			return -EINVAL;
 		} else if (dev_conf->rxmode.max_rx_pkt_len < ETHER_MIN_LEN) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
+			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
 				" < min valid value %u\n",
 				port_id,
 				(unsigned)dev_conf->rxmode.max_rx_pkt_len,
@@ -1032,14 +1033,14 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	diag = rte_eth_dev_rx_queue_config(dev, nb_rx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_rx_queue_config = %d\n",
 				port_id, diag);
 		return diag;
 	}
 
 	diag = rte_eth_dev_tx_queue_config(dev, nb_tx_q);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d rte_eth_dev_tx_queue_config = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		return diag;
@@ -1047,7 +1048,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
+		RTE_PMD_DEBUG_TRACE("port%d dev_configure = %d\n",
 				port_id, diag);
 		rte_eth_dev_rx_queue_config(dev, 0);
 		rte_eth_dev_tx_queue_config(dev, 0);
@@ -1086,7 +1087,7 @@ rte_eth_dev_config_restore(uint8_t port_id)
 			(dev->data->mac_pool_sel[i] & (1ULL << pool)))
 			(*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
 		else {
-			PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
 					port_id);
 			/* exit the loop but not return an error */
 			break;
@@ -1114,16 +1115,16 @@ rte_eth_dev_start(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
 
 	if (dev->data->dev_started != 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already started\n",
 			port_id);
 		return 0;
@@ -1138,7 +1139,7 @@ rte_eth_dev_start(uint8_t port_id)
 	rte_eth_dev_config_restore(port_id);
 
 	if (dev->data->dev_conf.intr_conf.lsc == 0) {
-		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 	return 0;
@@ -1151,15 +1152,15 @@ rte_eth_dev_stop(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
 
 	if (dev->data->dev_started == 0) {
-		PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
+		RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8
 			" already stopped\n",
 			port_id);
 		return;
@@ -1176,13 +1177,13 @@ rte_eth_dev_set_link_up(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_up, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_up)(dev);
 }
 
@@ -1193,13 +1194,13 @@ rte_eth_dev_set_link_down(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_set_link_down, -ENOTSUP);
 	return (*dev->dev_ops->dev_set_link_down)(dev);
 }
 
@@ -1210,12 +1211,12 @@ rte_eth_dev_close(uint8_t port_id)
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_RET();
+	RTE_PROC_PRIMARY_OR_RET();
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_close)(dev);
 
@@ -1238,24 +1239,24 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
 
 	/*
 	 * Check the size of the mbuf data buffer.
@@ -1264,7 +1265,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	 */
 	rte_eth_dev_info_get(port_id, &dev_info);
 	if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) {
-		PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
+		RTE_PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
 				mp->name, (int) mp->private_data_size,
 				(int) sizeof(struct rte_pktmbuf_pool_private));
 		return -ENOSPC;
@@ -1272,7 +1273,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	mbp_buf_size = rte_pktmbuf_data_room_size(mp);
 
 	if ((mbp_buf_size - RTE_PKTMBUF_HEADROOM) < dev_info.min_rx_bufsize) {
-		PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
+		RTE_PMD_DEBUG_TRACE("%s mbuf_data_room_size %d < %d "
 				"(RTE_PKTMBUF_HEADROOM=%d + min_rx_bufsize(dev)"
 				"=%d)\n",
 				mp->name,
@@ -1288,7 +1289,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 			nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
 			nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
 
-		PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+		RTE_PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
 			"should be: <= %hu, = %hu, and a product of %hu\n",
 			nb_rx_desc,
 			dev_info.rx_desc_lim.nb_max,
@@ -1321,24 +1322,24 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 
 	/* This function is only safe when called from the primary process
 	 * in a multi-process setup*/
-	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	if (tx_queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
 		return -EINVAL;
 	}
 
 	if (dev->data->dev_started) {
-		PMD_DEBUG_TRACE(
+		RTE_PMD_DEBUG_TRACE(
 		    "port %d must be stopped to allow configuration\n", port_id);
 		return -EBUSY;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
 
 	rte_eth_dev_info_get(port_id, &dev_info);
 
@@ -1354,10 +1355,10 @@ rte_eth_promiscuous_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
 	(*dev->dev_ops->promiscuous_enable)(dev);
 	dev->data->promiscuous = 1;
 }
@@ -1367,10 +1368,10 @@ rte_eth_promiscuous_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_disable);
 	dev->data->promiscuous = 0;
 	(*dev->dev_ops->promiscuous_disable)(dev);
 }
@@ -1380,7 +1381,7 @@ rte_eth_promiscuous_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->promiscuous;
@@ -1391,10 +1392,10 @@ rte_eth_allmulticast_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_enable);
 	(*dev->dev_ops->allmulticast_enable)(dev);
 	dev->data->all_multicast = 1;
 }
@@ -1404,10 +1405,10 @@ rte_eth_allmulticast_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->allmulticast_disable);
 	dev->data->all_multicast = 0;
 	(*dev->dev_ops->allmulticast_disable)(dev);
 }
@@ -1417,7 +1418,7 @@ rte_eth_allmulticast_get(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	return dev->data->all_multicast;
@@ -1442,13 +1443,13 @@ rte_eth_link_get(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 1);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1459,13 +1460,13 @@ rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *eth_link)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.intr_conf.lsc != 0)
 		rte_eth_dev_atomic_read_link_status(dev, eth_link);
 	else {
-		FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
+		RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
 		(*dev->dev_ops->link_update)(dev, 0);
 		*eth_link = dev->data->dev_link;
 	}
@@ -1476,12 +1477,12 @@ rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	memset(stats, 0, sizeof(*stats));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
 	(*dev->dev_ops->stats_get)(dev, stats);
 	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
 	return 0;
@@ -1492,10 +1493,10 @@ rte_eth_stats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
 	(*dev->dev_ops->stats_reset)(dev);
 }
 
@@ -1510,7 +1511,7 @@ rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstats *xstats,
 	signed xcount = 0;
 	uint64_t val, *stats_ptr;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 
@@ -1584,7 +1585,7 @@ rte_eth_xstats_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	/* implemented by the driver */
@@ -1603,11 +1604,11 @@ set_queue_stats_mapping(uint8_t port_id, uint16_t queue_id, uint8_t stat_idx,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_stats_mapping_set, -ENOTSUP);
 	return (*dev->dev_ops->queue_stats_mapping_set)
 			(dev, queue_id, stat_idx, is_rx);
 }
@@ -1641,14 +1642,14 @@ rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
 		.nb_align = 1,
 	};
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 
 	memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
 	dev_info->rx_desc_lim = lim;
 	dev_info->tx_desc_lim = lim;
 
-	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
 	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
 	dev_info->pci_dev = dev->pci_dev;
 	dev_info->driver_name = dev->data->drv_name;
@@ -1659,7 +1660,7 @@ rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_RET(port_id);
+	RTE_ETH_VALID_PORTID_OR_RET(port_id);
 	dev = &rte_eth_devices[port_id];
 	ether_addr_copy(&dev->data->mac_addrs[0], mac_addr);
 }
@@ -1670,7 +1671,7 @@ rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	*mtu = dev->data->mtu;
@@ -1683,9 +1684,9 @@ rte_eth_dev_set_mtu(uint8_t port_id, uint16_t mtu)
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
 
 	ret = (*dev->dev_ops->mtu_set)(dev, mtu);
 	if (!ret)
@@ -1699,19 +1700,19 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
-		PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
 
 	if (vlan_id > 4095) {
-		PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
+		RTE_PMD_DEBUG_TRACE("(port_id=%d) invalid vlan_id=%u > 4095\n",
 				port_id, (unsigned) vlan_id);
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_filter_set, -ENOTSUP);
 
 	return (*dev->dev_ops->vlan_filter_set)(dev, vlan_id, on);
 }
@@ -1721,14 +1722,14 @@ rte_eth_dev_set_vlan_strip_on_queue(uint8_t port_id, uint16_t rx_queue_id, int o
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid rx_queue_id=%d\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_strip_queue_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_strip_queue_set)(dev, rx_queue_id, on);
 
 	return 0;
@@ -1739,9 +1740,9 @@ rte_eth_dev_set_vlan_ether_type(uint8_t port_id, uint16_t tpid)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_tpid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_tpid_set)(dev, tpid);
 
 	return 0;
@@ -1755,7 +1756,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	int mask = 0;
 	int cur, org = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	/*check which option changed by application*/
@@ -1784,7 +1785,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	if (mask == 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -1796,7 +1797,7 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	struct rte_eth_dev *dev;
 	int ret = 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
@@ -1816,9 +1817,9 @@ rte_eth_dev_set_vlan_pvid(uint8_t port_id, uint16_t pvid, int on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_pvid_set, -ENOTSUP);
 	(*dev->dev_ops->vlan_pvid_set)(dev, pvid, on);
 
 	return 0;
@@ -1829,9 +1830,9 @@ rte_eth_dev_flow_ctrl_get(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
 	memset(fc_conf, 0, sizeof(*fc_conf));
 	return (*dev->dev_ops->flow_ctrl_get)(dev, fc_conf);
 }
@@ -1841,14 +1842,14 @@ rte_eth_dev_flow_ctrl_set(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if ((fc_conf->send_xon != 0) && (fc_conf->send_xon != 1)) {
-		PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid send_xon, only 0/1 allowed\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_set, -ENOTSUP);
 	return (*dev->dev_ops->flow_ctrl_set)(dev, fc_conf);
 }
 
@@ -1857,9 +1858,9 @@ rte_eth_dev_priority_flow_ctrl_set(uint8_t port_id, struct rte_eth_pfc_conf *pfc
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
-		PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
+		RTE_PMD_DEBUG_TRACE("Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
 
@@ -1880,7 +1881,7 @@ rte_eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (reta_size != RTE_ALIGN(reta_size, RTE_RETA_GROUP_SIZE)) {
-		PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
+		RTE_PMD_DEBUG_TRACE("Invalid reta size, should be %u aligned\n",
 							RTE_RETA_GROUP_SIZE);
 		return -EINVAL;
 	}
@@ -1905,7 +1906,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		return -EINVAL;
 
 	if (max_rxq == 0) {
-		PMD_DEBUG_TRACE("No receive queue is available\n");
+		RTE_PMD_DEBUG_TRACE("No receive queue is available\n");
 		return -EINVAL;
 	}
 
@@ -1914,7 +1915,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 		shift = i % RTE_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
-			PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
+			RTE_PMD_DEBUG_TRACE("reta_conf[%u]->reta[%u]: %u exceeds "
 				"the maximum rxq index: %u\n", idx, shift,
 				reta_conf[idx].reta[shift], max_rxq);
 			return -EINVAL;
@@ -1932,7 +1933,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	struct rte_eth_dev *dev;
 	int ret;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	/* Check mask bits */
 	ret = rte_eth_check_reta_mask(reta_conf, reta_size);
 	if (ret < 0)
@@ -1946,7 +1947,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id,
 	if (ret < 0)
 		return ret;
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_update, -ENOTSUP);
 	return (*dev->dev_ops->reta_update)(dev, reta_conf, reta_size);
 }
 
@@ -1959,7 +1960,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 	int ret;
 
 	if (port_id >= nb_ports) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
@@ -1969,7 +1970,7 @@ rte_eth_dev_rss_reta_query(uint8_t port_id,
 		return ret;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->reta_query, -ENOTSUP);
 	return (*dev->dev_ops->reta_query)(dev, reta_conf, reta_size);
 }
 
@@ -1979,16 +1980,16 @@ rte_eth_dev_rss_hash_update(uint8_t port_id, struct rte_eth_rss_conf *rss_conf)
 	struct rte_eth_dev *dev;
 	uint16_t rss_hash_protos;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	rss_hash_protos = rss_conf->rss_hf;
 	if ((rss_hash_protos != 0) &&
 	    ((rss_hash_protos & ETH_RSS_PROTO_MASK) == 0)) {
-		PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
+		RTE_PMD_DEBUG_TRACE("Invalid rss_hash_protos=0x%x\n",
 				rss_hash_protos);
 		return -EINVAL;
 	}
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_update, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_update)(dev, rss_conf);
 }
 
@@ -1998,9 +1999,9 @@ rte_eth_dev_rss_hash_conf_get(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rss_hash_conf_get, -ENOTSUP);
 	return (*dev->dev_ops->rss_hash_conf_get)(dev, rss_conf);
 }
 
@@ -2010,19 +2011,19 @@ rte_eth_dev_udp_tunnel_add(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_add, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_add)(dev, udp_tunnel);
 }
 
@@ -2032,20 +2033,20 @@ rte_eth_dev_udp_tunnel_delete(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
 	if (udp_tunnel == NULL) {
-		PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
+		RTE_PMD_DEBUG_TRACE("Invalid udp_tunnel parameter\n");
 		return -EINVAL;
 	}
 
 	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
-		PMD_DEBUG_TRACE("Invalid tunnel type\n");
+		RTE_PMD_DEBUG_TRACE("Invalid tunnel type\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->udp_tunnel_del, -ENOTSUP);
 	return (*dev->dev_ops->udp_tunnel_del)(dev, udp_tunnel);
 }
 
@@ -2054,9 +2055,9 @@ rte_eth_led_on(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_on, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_on)(dev);
 }
 
@@ -2065,9 +2066,9 @@ rte_eth_led_off(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_led_off, -ENOTSUP);
 	return (*dev->dev_ops->dev_led_off)(dev);
 }
 
@@ -2101,17 +2102,17 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	int index;
 	uint64_t pool_mask;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_add, -ENOTSUP);
 
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
 	if (pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
+		RTE_PMD_DEBUG_TRACE("pool id must be 0-%d\n", ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -2119,7 +2120,7 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr,
 	if (index < 0) {
 		index = get_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 				port_id);
 			return -ENOSPC;
 		}
@@ -2149,13 +2150,13 @@ rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr)
 	struct rte_eth_dev *dev;
 	int index;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_remove, -ENOTSUP);
 
 	index = get_mac_addr_index(port_id, addr);
 	if (index == 0) {
-		PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot remove default MAC address\n", port_id);
 		return -EADDRINUSE;
 	} else if (index < 0)
 		return 0;  /* Do nothing if address wasn't found */
@@ -2177,13 +2178,13 @@ rte_eth_dev_default_mac_addr_set(uint8_t port_id, struct ether_addr *addr)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (!is_valid_assigned_ether_addr(addr))
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mac_addr_set, -ENOTSUP);
 
 	/* Update default address in NIC data structure */
 	ether_addr_copy(addr, &dev->data->mac_addrs[0]);
@@ -2201,22 +2202,22 @@ rte_eth_dev_set_vf_rxmode(uint8_t port_id,  uint16_t vf,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:invalid VF id %d\n", vf);
 		return -EINVAL;
 	}
 
 	if (rx_mode == 0) {
-		PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
+		RTE_PMD_DEBUG_TRACE("set VF RX mode:mode mask ca not be zero\n");
 		return -EINVAL;
 	}
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx_mode, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx_mode)(dev, vf, rx_mode, on);
 }
 
@@ -2251,11 +2252,11 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 	int ret;
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	if (is_zero_ether_addr(addr)) {
-		PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
+		RTE_PMD_DEBUG_TRACE("port %d: Cannot add NULL MAC address\n",
 			port_id);
 		return -EINVAL;
 	}
@@ -2267,20 +2268,20 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr,
 
 	if (index < 0) {
 		if (!on) {
-			PMD_DEBUG_TRACE("port %d: the MAC address was not "
+			RTE_PMD_DEBUG_TRACE("port %d: the MAC address was not "
 				"set in UTA\n", port_id);
 			return -EINVAL;
 		}
 
 		index = get_hash_mac_addr_index(port_id, &null_mac_addr);
 		if (index < 0) {
-			PMD_DEBUG_TRACE("port %d: MAC address array full\n",
+			RTE_PMD_DEBUG_TRACE("port %d: MAC address array full\n",
 					port_id);
 			return -ENOSPC;
 		}
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_hash_table_set, -ENOTSUP);
 	ret = (*dev->dev_ops->uc_hash_table_set)(dev, addr, on);
 	if (ret == 0) {
 		/* Update address in NIC data structure */
@@ -2300,11 +2301,11 @@ rte_eth_dev_uc_all_hash_table_set(uint8_t port_id, uint8_t on)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->uc_all_hash_table_set, -ENOTSUP);
 	return (*dev->dev_ops->uc_all_hash_table_set)(dev, on);
 }
 
@@ -2315,18 +2316,18 @@ rte_eth_dev_set_vf_rx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
+		RTE_PMD_DEBUG_TRACE("port %d: invalid vf id\n", port_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rx)(dev, vf, on);
 }
 
@@ -2337,18 +2338,18 @@ rte_eth_dev_set_vf_tx(uint8_t port_id, uint16_t vf, uint8_t on)
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 
 	num_vfs = dev_info.max_vfs;
 	if (vf > num_vfs) {
-		PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
+		RTE_PMD_DEBUG_TRACE("set pool tx:invalid pool id=%d\n", vf);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_tx, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_tx)(dev, vf, on);
 }
 
@@ -2358,22 +2359,22 @@ rte_eth_dev_set_vf_vlan_filter(uint8_t port_id, uint16_t vlan_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
 	if (vlan_id > ETHER_MAX_VLAN_ID) {
-		PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:invalid VLAN id=%d\n",
 			vlan_id);
 		return -EINVAL;
 	}
 
 	if (vf_mask == 0) {
-		PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
+		RTE_PMD_DEBUG_TRACE("VF VLAN filter:pool_mask can not be 0\n");
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_vlan_filter, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_vlan_filter)(dev, vlan_id,
 						   vf_mask, vlan_on);
 }
@@ -2385,26 +2386,26 @@ int rte_eth_set_queue_rate_limit(uint8_t port_id, uint16_t queue_idx,
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_link link;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (queue_idx > dev_info.max_tx_queues) {
-		PMD_DEBUG_TRACE("set queue rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:port %d: "
 				"invalid queue id=%d\n", port_id, queue_idx);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set queue rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 			tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_queue_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_queue_rate_limit)(dev, queue_idx, tx_rate);
 }
 
@@ -2418,26 +2419,26 @@ int rte_eth_set_vf_rate_limit(uint8_t port_id, uint16_t vf, uint16_t tx_rate,
 	if (q_msk == 0)
 		return 0;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 	rte_eth_dev_info_get(port_id, &dev_info);
 	link = dev->data->dev_link;
 
 	if (vf > dev_info.max_vfs) {
-		PMD_DEBUG_TRACE("set VF rate limit:port %d: "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:port %d: "
 				"invalid vf id=%d\n", port_id, vf);
 		return -EINVAL;
 	}
 
 	if (tx_rate > link.link_speed) {
-		PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
+		RTE_PMD_DEBUG_TRACE("set VF rate limit:invalid tx_rate=%d, "
 				"bigger than link speed= %d\n",
 				tx_rate, link.link_speed);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_vf_rate_limit, -ENOTSUP);
 	return (*dev->dev_ops->set_vf_rate_limit)(dev, vf, tx_rate, q_msk);
 }
 
@@ -2448,14 +2449,14 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	if (mirror_conf->rule_type == 0) {
-		PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("mirror rule type can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if (mirror_conf->dst_pool >= ETH_64_POOLS) {
-		PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
+		RTE_PMD_DEBUG_TRACE("Invalid dst pool, pool id must be 0-%d\n",
 				ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
@@ -2463,18 +2464,18 @@ rte_eth_mirror_rule_set(uint8_t port_id,
 	if ((mirror_conf->rule_type & (ETH_MIRROR_VIRTUAL_POOL_UP |
 	     ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
 	    (mirror_conf->pool_mask == 0)) {
-		PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid mirror pool, pool mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	if ((mirror_conf->rule_type & ETH_MIRROR_VLAN) &&
 	    mirror_conf->vlan.vlan_mask == 0) {
-		PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
+		RTE_PMD_DEBUG_TRACE("Invalid vlan mask, vlan mask can not be 0.\n");
 		return -EINVAL;
 	}
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_set, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_set)(dev, mirror_conf, rule_id, on);
 }
@@ -2484,10 +2485,10 @@ rte_eth_mirror_rule_reset(uint8_t port_id, uint8_t rule_id)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mirror_rule_reset, -ENOTSUP);
 
 	return (*dev->dev_ops->mirror_rule_reset)(dev, rule_id);
 }
@@ -2499,12 +2500,12 @@ rte_eth_rx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
@@ -2517,13 +2518,13 @@ rte_eth_tx_burst(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return 0;
 	}
 	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id],
@@ -2535,10 +2536,10 @@ rte_eth_rx_queue_count(uint8_t port_id, uint16_t queue_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, 0);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_count, 0);
 	return (*dev->dev_ops->rx_queue_count)(dev, queue_id);
 }
 
@@ -2547,10 +2548,10 @@ rte_eth_rx_descriptor_done(uint8_t port_id, uint16_t queue_id, uint16_t offset)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_descriptor_done, -ENOTSUP);
 	return (*dev->dev_ops->rx_descriptor_done)(dev->data->rx_queues[queue_id],
 						   offset);
 }
@@ -2567,7 +2568,7 @@ rte_eth_dev_callback_register(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2607,7 +2608,7 @@ rte_eth_dev_callback_unregister(uint8_t port_id,
 	if (!cb_fn)
 		return -EINVAL;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
 
 	dev = &rte_eth_devices[port_id];
 	rte_spinlock_lock(&rte_eth_dev_cb_lock);
@@ -2670,14 +2671,14 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
@@ -2685,7 +2686,7 @@ rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
 		vec = intr_handle->intr_vec[qid];
 		rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 		if (rc && rc != -EEXIST) {
-			PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+			RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 					" op %d epfd %d vec %u\n",
 					port_id, qid, op, epfd, vec);
 		}
@@ -2728,26 +2729,26 @@ rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id,
 	int rc;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%u\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%u\n", queue_id);
 		return -EINVAL;
 	}
 
 	intr_handle = &dev->pci_dev->intr_handle;
 	if (!intr_handle->intr_vec) {
-		PMD_DEBUG_TRACE("RX Intr vector unset\n");
+		RTE_PMD_DEBUG_TRACE("RX Intr vector unset\n");
 		return -EPERM;
 	}
 
 	vec = intr_handle->intr_vec[queue_id];
 	rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
 	if (rc && rc != -EEXIST) {
-		PMD_DEBUG_TRACE("p %u q %u rx ctl error"
+		RTE_PMD_DEBUG_TRACE("p %u q %u rx ctl error"
 				" op %d epfd %d vec %u\n",
 				port_id, queue_id, op, epfd, vec);
 		return rc;
@@ -2763,13 +2764,13 @@ rte_eth_dev_rx_intr_enable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_enable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_enable)(dev, queue_id);
 }
 
@@ -2780,13 +2781,13 @@ rte_eth_dev_rx_intr_disable(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
 	return (*dev->dev_ops->rx_queue_intr_disable)(dev, queue_id);
 }
 
@@ -2795,10 +2796,10 @@ int rte_eth_dev_bypass_init(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_init, -ENOTSUP);
 	(*dev->dev_ops->bypass_init)(dev);
 	return 0;
 }
@@ -2808,10 +2809,10 @@ rte_eth_dev_bypass_state_show(uint8_t port_id, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_show)(dev, state);
 	return 0;
 }
@@ -2821,10 +2822,10 @@ rte_eth_dev_bypass_state_set(uint8_t port_id, uint32_t *new_state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_state_set)(dev, new_state);
 	return 0;
 }
@@ -2834,10 +2835,10 @@ rte_eth_dev_bypass_event_show(uint8_t port_id, uint32_t event, uint32_t *state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_state_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_show)(dev, event, state);
 	return 0;
 }
@@ -2847,11 +2848,11 @@ rte_eth_dev_bypass_event_store(uint8_t port_id, uint32_t event, uint32_t state)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_event_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_event_set)(dev, event, state);
 	return 0;
 }
@@ -2861,11 +2862,11 @@ rte_eth_dev_wd_timeout_store(uint8_t port_id, uint32_t timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_set, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_set)(dev, timeout);
 	return 0;
 }
@@ -2875,11 +2876,11 @@ rte_eth_dev_bypass_ver_show(uint8_t port_id, uint32_t *ver)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_ver_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_ver_show)(dev, ver);
 	return 0;
 }
@@ -2889,11 +2890,11 @@ rte_eth_dev_bypass_wd_timeout_show(uint8_t port_id, uint32_t *wd_timeout)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_timeout_show, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_timeout_show)(dev, wd_timeout);
 	return 0;
 }
@@ -2903,11 +2904,11 @@ rte_eth_dev_bypass_wd_reset(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->bypass_wd_reset, -ENOTSUP);
 	(*dev->dev_ops->bypass_wd_reset)(dev);
 	return 0;
 }
@@ -2918,10 +2919,10 @@ rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type,
 				RTE_ETH_FILTER_NOP, NULL);
 }
@@ -2932,10 +2933,10 @@ rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->filter_ctrl, -ENOTSUP);
 	return (*dev->dev_ops->filter_ctrl)(dev, filter_type, filter_op, arg);
 }
 
@@ -3105,18 +3106,18 @@ rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_rx_queues) {
-		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
@@ -3129,18 +3130,18 @@ rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	if (qinfo == NULL)
 		return -EINVAL;
 
 	dev = &rte_eth_devices[port_id];
 	if (queue_id >= dev->data->nb_tx_queues) {
-		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+		RTE_PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
 		return -EINVAL;
 	}
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 	dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
@@ -3154,10 +3155,10 @@ rte_eth_dev_set_mc_addr_list(uint8_t port_id,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_mc_addr_list, -ENOTSUP);
 	return dev->dev_ops->set_mc_addr_list(dev, mc_addr_set, nb_mc_addr);
 }
 
@@ -3166,10 +3167,10 @@ rte_eth_timesync_enable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_enable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_enable)(dev);
 }
 
@@ -3178,10 +3179,10 @@ rte_eth_timesync_disable(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_disable, -ENOTSUP);
 	return (*dev->dev_ops->timesync_disable)(dev);
 }
 
@@ -3191,10 +3192,10 @@ rte_eth_timesync_read_rx_timestamp(uint8_t port_id, struct timespec *timestamp,
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_rx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_rx_timestamp)(dev, timestamp, flags);
 }
 
@@ -3203,10 +3204,10 @@ rte_eth_timesync_read_tx_timestamp(uint8_t port_id, struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_tx_timestamp, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_tx_timestamp)(dev, timestamp);
 }
 
@@ -3215,10 +3216,10 @@ rte_eth_timesync_adjust_time(uint8_t port_id, int64_t delta)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_adjust_time, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_adjust_time, -ENOTSUP);
 	return (*dev->dev_ops->timesync_adjust_time)(dev, delta);
 }
 
@@ -3227,10 +3228,10 @@ rte_eth_timesync_read_time(uint8_t port_id, struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_time, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_read_time, -ENOTSUP);
 	return (*dev->dev_ops->timesync_read_time)(dev, timestamp);
 }
 
@@ -3239,10 +3240,10 @@ rte_eth_timesync_write_time(uint8_t port_id, const struct timespec *timestamp)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_write_time, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timesync_write_time, -ENOTSUP);
 	return (*dev->dev_ops->timesync_write_time)(dev, timestamp);
 }
 
@@ -3251,10 +3252,10 @@ rte_eth_dev_get_reg_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg_length, -ENOTSUP);
 	return (*dev->dev_ops->get_reg_length)(dev);
 }
 
@@ -3263,10 +3264,10 @@ rte_eth_dev_get_reg_info(uint8_t port_id, struct rte_dev_reg_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_reg, -ENOTSUP);
 	return (*dev->dev_ops->get_reg)(dev, info);
 }
 
@@ -3275,10 +3276,10 @@ rte_eth_dev_get_eeprom_length(uint8_t port_id)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom_length, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom_length)(dev);
 }
 
@@ -3287,10 +3288,10 @@ rte_eth_dev_get_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->get_eeprom)(dev, info);
 }
 
@@ -3299,10 +3300,10 @@ rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 {
 	struct rte_eth_dev *dev;
 
-	VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 
 	dev = &rte_eth_devices[port_id];
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->set_eeprom)(dev, info);
 }
 
@@ -3313,14 +3314,14 @@ rte_eth_dev_get_dcb_info(uint8_t port_id,
 	struct rte_eth_dev *dev;
 
 	if (!rte_eth_dev_is_valid_port(port_id)) {
-		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
 		return -ENODEV;
 	}
 
 	dev = &rte_eth_devices[port_id];
 	memset(dcb_info, 0, sizeof(struct rte_eth_dcb_info));
 
-	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
 	return (*dev->dev_ops->get_dcb_info)(dev, dcb_info);
 }
 
@@ -3328,7 +3329,7 @@ void
 rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev)
 {
 	if ((eth_dev == NULL) || (pci_dev == NULL)) {
-		PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
+		RTE_PMD_DEBUG_TRACE("NULL pointer eth_dev=%p pci_dev=%p\n",
 				eth_dev, pci_dev);
 		return;
 	}
-- 
2.5.0

* [dpdk-dev] [PATCH v8 02/10] ethdev: make error checking macros public
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
                                 ` (8 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

Move the function pointer and port id checking macros to the rte_ethdev
and rte_dev header files so that they can be used in the static inline
functions there. Also replace the RTE_LOG call within RTE_PMD_DEBUG_TRACE
with a call to a new rte_pmd_debug_trace() helper, so that the macro can
be built with the -pedantic flag.
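
For illustration, here is a minimal sketch of the kind of static inline
wrapper that the now-public macros enable in rte_ethdev.h; the function
itself is hypothetical and not part of this patch:

/*
 * Hypothetical header-level helper: validate the port id and the driver
 * callback before dispatching, using the macros made public here.
 */
static inline int
rte_eth_example_link_refresh(uint8_t port_id)
{
	struct rte_eth_dev *dev;

	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	dev = &rte_eth_devices[port_id];

	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->link_update, -ENOTSUP);
	return (*dev->dev_ops->link_update)(dev, 0);
}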

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>

---
 lib/librte_eal/common/include/rte_dev.h | 53 ++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.c           | 54 ---------------------------------
 lib/librte_ether/rte_ethdev.h           | 26 ++++++++++++++++
 3 files changed, 79 insertions(+), 54 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_dev.h b/lib/librte_eal/common/include/rte_dev.h
index f601d21..f1b5507 100644
--- a/lib/librte_eal/common/include/rte_dev.h
+++ b/lib/librte_eal/common/include/rte_dev.h
@@ -46,8 +46,61 @@
 extern "C" {
 #endif
 
+#include <stdio.h>
 #include <sys/queue.h>
 
+#include <rte_log.h>
+
+__attribute__((format(printf, 2, 0)))
+static inline void
+rte_pmd_debug_trace(const char *func_name, const char *fmt, ...)
+{
+	va_list ap;
+
+	va_start(ap, fmt);
+
+	char buffer[vsnprintf(NULL, 0, fmt, ap) + 1];
+
+	va_end(ap);
+
+	va_start(ap, fmt);
+	vsnprintf(buffer, sizeof(buffer), fmt, ap);
+	va_end(ap);
+
+	rte_log(RTE_LOG_ERR, RTE_LOGTYPE_PMD, "%s: %s", func_name, buffer);
+}
+
+/* Macros for checking for restricting functions to primary instance only */
+#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
+		return; \
+	} \
+} while (0)
+
+/* Macros to check for invalid function pointers */
+#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
+		return; \
+	} \
+} while (0)
+
+
 /** Double linked list of device drivers. */
 TAILQ_HEAD(rte_driver_list, rte_driver);
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 71775dc..f4648ac 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -69,60 +69,6 @@
 #include "rte_ether.h"
 #include "rte_ethdev.h"
 
-#ifdef RTE_LIBRTE_ETHDEV_DEBUG
-#define RTE_PMD_DEBUG_TRACE(fmt, args...) do { \
-		RTE_LOG(ERR, PMD, "%s: " fmt, __func__, ## args); \
-	} while (0)
-#else
-#define RTE_PMD_DEBUG_TRACE(fmt, args...)
-#endif
-
-/* Macros for checking for restricting functions to primary instance only */
-#define RTE_PROC_PRIMARY_OR_ERR_RET(retval) do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define RTE_PROC_PRIMARY_OR_RET() do { \
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
-		RTE_PMD_DEBUG_TRACE("Cannot run in secondary processes\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for invalid function pointers in dev_ops structure */
-#define RTE_FUNC_PTR_OR_ERR_RET(func, retval) do { \
-	if ((func) == NULL) { \
-		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
-		return (retval); \
-	} \
-} while (0)
-
-#define RTE_FUNC_PTR_OR_RET(func) do { \
-	if ((func) == NULL) { \
-		RTE_PMD_DEBUG_TRACE("Function not supported\n"); \
-		return; \
-	} \
-} while (0)
-
-/* Macros to check for valid port */
-#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
-	if (!rte_eth_dev_is_valid_port(port_id)) {  \
-		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return retval; \
-	} \
-} while (0)
-
-#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
-	if (!rte_eth_dev_is_valid_port(port_id)) { \
-		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
-		return; \
-	} \
-} while (0)
-
-
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 static struct rte_eth_dev_data *rte_eth_dev_data;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index e92bf8d..b51b8aa 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -172,6 +172,8 @@ extern "C" {
 
 #include <stdint.h>
 
+#include <rte_dev.h>
+
 /* Use this macro to check if LRO API is supported */
 #define RTE_ETHDEV_HAS_LRO_SUPPORT
 
@@ -931,6 +933,30 @@ struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
 TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
+#endif
+
+
+/* Macros to check for valid port */
+#define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \
+	if (!rte_eth_dev_is_valid_port(port_id)) { \
+		RTE_PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id); \
+		return; \
+	} \
+} while (0)
+
 /*
  * Definitions of all functions exported by an Ethernet driver through the
  * the generic structure of type *eth_dev_ops* supplied in the *rte_eth_dev*
-- 
2.5.0

* [dpdk-dev] [PATCH v8 03/10] eal: add __rte_packed /__rte_aligned macros
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 02/10] ethdev: make error checking macros public Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 04/10] mbuf: add new macros to get the physical address of data Declan Doherty
                                 ` (7 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

Add a new macro, __rte_aligned(a), for specifying the __aligned__
attribute, and update the existing __rte_cache_aligned macro to use it.

Also add a new __rte_packed macro for specifying the __packed__ attribute.
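
As a usage sketch, assuming only the three macros touched by this patch
(the structs below are hypothetical examples):

#include <stdint.h>
#include <rte_memory.h>

/* Packed on-wire header: no compiler padding between fields. */
struct wire_hdr {
	uint8_t  type;
	uint16_t len;
} __rte_packed;

/* Per-lcore counters on their own cache line to avoid false sharing;
 * __rte_cache_aligned now expands to __rte_aligned(RTE_CACHE_LINE_SIZE). */
struct lcore_stats {
	uint64_t rx;
	uint64_t tx;
} __rte_cache_aligned;

/* Descriptor with an explicit 64-byte alignment requirement. */
struct hw_desc {
	uint64_t addr;
	uint32_t flags;
} __rte_aligned(64);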

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 lib/librte_eal/common/include/rte_memory.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 067be10..20feed9 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -78,9 +78,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -106,7 +116,7 @@ struct rte_memseg {
 	 /**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
-- 
2.5.0

* [dpdk-dev] [PATCH v8 04/10] mbuf: add new macros to get the physical address of data
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
                                 ` (2 preceding siblings ...)
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
                                 ` (6 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
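
For illustration, a minimal sketch of feeding an mbuf's payload address to
DMA-capable hardware with the new macros; the submission routine is a
hypothetical stand-in for a device-specific call:

#include <inttypes.h>
#include <stdio.h>
#include <rte_mbuf.h>

/* Stub standing in for a device-specific DMA submission routine. */
static void
dma_submit_stub(phys_addr_t addr, uint16_t len)
{
	printf("DMA: addr=0x%" PRIx64 " len=%u\n", (uint64_t)addr, len);
}

static void
enqueue_payload_for_dma(struct rte_mbuf *m)
{
	/* Physical address of the start of the mbuf data... */
	phys_addr_t start = rte_pktmbuf_mtophys(m);
	/* ...and of the payload past a 14-byte Ethernet header. */
	phys_addr_t payload = rte_pktmbuf_mtophys_offset(m, 14);

	dma_submit_stub(start, 14);
	dma_submit_stub(payload, rte_pktmbuf_data_len(m) - 14);
}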

---
 lib/librte_mbuf/rte_mbuf.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4a93189..6a1c133 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1622,6 +1622,27 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address that points to an offset of the
+ * start of the data in the mbuf
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) \
+	(phys_addr_t)((m)->buf_physaddr + (m)->data_off + (o))
+
+/**
+ * A macro that returns the physical address that points to the start of the
+ * data in the mbuf
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
+
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
-- 
2.5.0

* [dpdk-dev] [PATCH v8 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
                                 ` (3 preceding siblings ...)
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 04/10] mbuf: add new macros to get the physical address of data Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
                                 ` (5 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

Features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs for allocation of the crypto
   operation structure used to specify the crypto operations to be
   performed on a particular mbuf.
 - Extension of mbuf to contain crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations.

Changes from the RFC:
 - Session management API changes to support specification of crypto
   transform (xform) chains using a linked list of xforms.
 - Changes to the crypto operation struct as a result of session
   management changes.
 - Some movement of common macros shared by cryptodevs and ethdevs into
   common headers.
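
As an illustrative sketch of the xform chaining described above — the
field and enum names are taken from the rte_crypto.h added by this patch,
and that header remains the authoritative reference:

#include <rte_crypto.h>

/* Describe AES-CBC encryption followed by HMAC-SHA1 digest generation
 * by linking a cipher xform to an auth xform. */
static void
build_cipher_then_auth_chain(uint8_t *cipher_key, uint8_t *auth_key)
{
	struct rte_crypto_xform auth_xform = {
		.next = NULL,
		.type = RTE_CRYPTO_XFORM_AUTH,
		.auth = {
			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
			.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
			.key = { .data = auth_key, .length = 64 },
			.digest_length = 20, /* SHA-1 digest size */
		},
	};

	struct rte_crypto_xform cipher_xform = {
		/* The session (or sessionless op) walks this singly linked
		 * list in order: cipher first, then auth. */
		.next = &auth_xform,
		.type = RTE_CRYPTO_XFORM_CIPHER,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.key = { .data = cipher_key, .length = 16 },
		},
	};

	(void)cipher_xform; /* in real code, pass to session creation */
}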

Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 MAINTAINERS                                    |    4 +
 config/common_bsdapp                           |   10 +-
 config/common_linuxapp                         |   10 +-
 doc/api/doxy-api-index.md                      |    1 +
 doc/api/doxy-api.conf                          |    1 +
 lib/Makefile                                   |    1 +
 lib/librte_cryptodev/Makefile                  |   60 ++
 lib/librte_cryptodev/rte_crypto.h              |  610 +++++++++++++
 lib/librte_cryptodev/rte_cryptodev.c           | 1092 ++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h           |  651 ++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h       |  549 ++++++++++++
 lib/librte_cryptodev/rte_cryptodev_version.map |   32 +
 lib/librte_eal/common/include/rte_log.h        |    1 +
 mk/rte.app.mk                                  |    1 +
 14 files changed, 3021 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index d6feada..9138bbb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -196,6 +196,10 @@ M: Thomas Monjalon <thomas.monjalon@6wind.com>
 F: lib/librte_ether/
 F: scripts/test-null.sh
 
+Crypto API
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_cryptodev/
+F: doc/guides/cryptodevs/
 
 Drivers
 -------
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 7df0763..3bfb3f6 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -150,6 +150,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 52173d5..cd7a2d4 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -148,6 +148,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=n
+CONFIG_RTE_CRYPTO_MAX_DEVS=64
+CONFIG_RTE_CRYPTODEV_NAME_LEN=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 9727b83..4c5c1b4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -40,6 +40,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..81fa3fc
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_cryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_cryptodev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
\ No newline at end of file
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..42343a8
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,610 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_CCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation
+	 */
+	RTE_CRYPTO_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_AUTH_AES_GCM* element of the
+	 * *rte_crypto_auth_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_auth_xform* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	phys_addr_t phys_addr;	/**< physical address of key data */
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains data relating to Cipher (Encryption and Decryption)
+ * used to create a session.
+ */
+struct rte_crypto_cipher_xform {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid.
+	 */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both
+	 * keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to total length of
+	 *    the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
+	 **/
+};
+
+/** Symmetric Authentication / Hash Algorithms */
+enum rte_crypto_auth_algorithm {
+	RTE_CRYPTO_AUTH_NULL = 1,
+	/**< NULL hash algorithm. */
+
+	RTE_CRYPTO_AUTH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_AUTH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related rte_crypto_cipher_xform structure in the
+	 * session context, or the corresponding parameter in the crypto
+	 * operation data structure's op_params MUST be set for a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_AUTH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_xform structure in the session context, or
+	 * the corresponding parameter in the crypto operation data
+	 * structure's op_params MUST be set for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_AUTH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	* is used, the RTE_CRYPTO_CIPHER_AES_GCM element of the
+	* rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	* rte_crypto_cipher_xform structure in the session context, or
+	* the corresponding parameter in the crypto operation data
+	* structure's op_params MUST be set for a session-less crypto operation.
+	*/
+	RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_AUTH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_AUTH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_AUTH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_AUTH_SHA1,
+	/**< 160 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA1_HMAC,
+	/**< HMAC using 160 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_AUTH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_AUTH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Authentication / Hash Operations */
+enum rte_crypto_auth_operation {
+	RTE_CRYPTO_AUTH_OP_VERIFY,	/**< Verify authentication digest */
+	RTE_CRYPTO_AUTH_OP_GENERATE	/**< Generate authentication digest */
+};
+
+/**
+ * Authentication / Hash transform data.
+ *
+ * This structure contains data relating to an authentication/hash crypto
+ * transform. The fields op, algo and digest_length are common to all
+ * authentication transforms and MUST be set.
+ */
+struct rte_crypto_auth_xform {
+	enum rte_crypto_auth_operation op;
+	/**< Authentication operation type */
+	enum rte_crypto_auth_algorithm algo;
+	/**< Authentication algorithm selection */
+
+	struct rte_crypto_key key;		/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the caller's responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash, then an error will be generated
+	 * by *rte_cryptodev_session_create* or by
+	 * *rte_cryptodev_enqueue_burst* if using session-less APIs.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_AUTH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op structure.
+	 */
+};
+
+/** Crypto transformation types */
+enum rte_crypto_xform_type {
+	RTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,	/**< No xform specified */
+	RTE_CRYPTO_XFORM_AUTH,			/**< Authentication xform */
+	RTE_CRYPTO_XFORM_CIPHER			/**< Cipher xform */
+};
+
+/**
+ * Crypto transform structure.
+ *
+ * This is used to specify the crypto transforms required. Multiple transforms
+ * can be chained together to specify a chain of transforms such as
+ * authentication then cipher, or cipher then authentication. Each transform
+ * structure can hold a single transform; the type field is used to specify
+ * which transform is contained within the union.
+ */
+struct rte_crypto_xform {
+	struct rte_crypto_xform *next; /**< next xform in chain */
+
+	enum rte_crypto_xform_type type; /**< xform type */
+	union {
+		struct rte_crypto_auth_xform auth;
+		/**< Authentication / hash xform */
+		struct rte_crypto_cipher_xform cipher;
+		/**< Cipher xform */
+	};
+};
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has session structure attached for immutable parameters or if all
+ * operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+/** Status of crypto operation */
+enum rte_crypto_op_status {
+	RTE_CRYPTO_OP_STATUS_SUCCESS,
+	/**< Operation completed successfully */
+	RTE_CRYPTO_OP_STATUS_NO_SUBMITTED,
+	/**< Operation not yet submitted to a cryptodev */
+	RTE_CRYPTO_OP_STATUS_ENQUEUED,
+	/**< Operation is enqueued on device */
+	RTE_CRYPTO_OP_STATUS_AUTH_FAILED,
+	/**< Authentication verification failed */
+	RTE_CRYPTO_OP_STATUS_INVALID_ARGS,
+	/**< Operation failed due to invalid arguments in request */
+	RTE_CRYPTO_OP_STATUS_ERROR,
+	/**< Error occurred while handling the operation */
+};
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. This request is used with the rte_cryptodev_enqueue_burst()
+ * call to perform a cipher, hash, or combined hash and cipher operation.
+ */
+struct rte_crypto_op {
+	enum rte_crypto_op_sess_type type;
+	enum rte_crypto_op_status status;
+
+	struct {
+		struct rte_mbuf *m;	/**< Destination mbuf */
+		uint8_t offset;		/**< Data offset */
+	} dst;
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct rte_crypto_xform *xform;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_AUTH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC)
+			  * mode of operation, this field specifies the start
+			  * of the AAD data in the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source
+			  * buffer that the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field
+			  * should be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_AUTH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128-bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member of this structure is set this is a
+		 * pointer to the location where the digest result should be
+		 * inserted (in the case of digest generation) or where the
+		 * purported digest exists (in the case of digest
+		 * verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_auth_xform structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_AUTH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up
+		 * for the session in the @ref rte_crypto_auth_xform structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_AUTH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by
+		 *   the implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written in starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   space to round this up to the nearest multiple of the
+		 *   block size (16 bytes).
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_AUTH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+		uint32_t length;	/**< Length of additional authenticated data */
+	} additional_auth;
+	/**< Additional authentication parameters */
+
+	struct rte_mempool *pool;
+	/**< mempool used to allocate crypto op */
+
+	void *user_data;
+	/**< opaque pointer for user data */
+};
+
+
+/**
+ * Reset the fields of a crypto operation to their default values.
+ *
+ * @param	op	The crypto operation to be reset.
+ */
+static inline void
+__rte_crypto_op_reset(struct rte_crypto_op *op)
+{
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+	op->dst.m = NULL;
+	op->dst.offset = 0;
+}
+
+/** Attach a session to a crypto operation */
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
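
A brief usage sketch of the definitions above (illustrative only: the
offsets, lengths and 16-byte IV are placeholder values, the op is assumed
to be allocated already, and digest/mbuf setup is omitted):

	/* Sketch: describe a cipher+hash operation where the ciphered
	 * region starts 16 bytes into the payload and the hash also
	 * covers the 16-byte header. */
	static void
	setup_crypto_op(struct rte_crypto_op *op,
			struct rte_cryptodev_session *sess,
			uint8_t *iv, phys_addr_t iv_phys)
	{
		rte_crypto_op_attach_session(op, sess);

		op->data.to_cipher.offset = 16;
		op->data.to_cipher.length = 64;	/* multiple of AES block */

		op->data.to_hash.offset = 0;
		op->data.to_hash.length = 80;	/* header + cipher text */

		op->iv.data = iv;
		op->iv.phys_addr = iv_phys;
		op->iv.length = 16;		/* AES-CBC block size */
	}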
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..edd1320
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,1092 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+#include "rte_cryptodev_pmd.h"
+
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+
+struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= { NULL },
+		.nb_devs		= 0,
+		.max_devs		= RTE_CRYPTO_MAX_DEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;		/**< Callback address */
+	void *cb_arg;				/**< Parameter for callback */
+	enum rte_cryptodev_event_type event;	/**< Interrupt event type */
+	uint32_t active;			/**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+int
+rte_cryptodev_get_dev_id(const char *name) {
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if ((strcmp(rte_cryptodev_globals->devs[i].data->name, name)
+				== 0) &&
+				(rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED))
+			return i;
+
+	return -1;
+}
+
+uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
+uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached ==
+					RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	return dev->data->socket_id;
+}
+
+static inline int
+rte_cryptodev_data_alloc(uint8_t dev_id, struct rte_cryptodev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_cryptodev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_cryptodev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_cryptodev_data));
+
+	return 0;
+}
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached ==
+				RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_CRYPTO_MAX_DEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+	uint8_t dev_id;
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_CRYPTO_MAX_DEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (cryptodev->data == NULL) {
+		struct rte_cryptodev_data *cryptodev_data =
+				cryptodev_globals.data[dev_id];
+
+		int retval = rte_cryptodev_data_alloc(dev_id, &cryptodev_data,
+				socket_id);
+
+		if (retval < 0 || cryptodev_data == NULL)
+			return NULL;
+
+		cryptodev->data = cryptodev_data;
+
+		snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN,
+				"%s", name);
+
+		cryptodev->data->dev_id = dev_id;
+		cryptodev->data->socket_id = socket_id;
+		cryptodev->data->dev_started = 0;
+
+		cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+		cryptodev->pmd_type = type;
+
+		cryptodev_globals.nb_devs++;
+	}
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	int ret;
+
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	ret = rte_cryptodev_close(cryptodev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV,
+			rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket(
+						"cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/*
+	 * Register PCI driver for physical device initialisation during
+	 * PCI probing
+	 */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+
+	return 0;
+}
+
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs,
+		int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	void **qp;
+	unsigned i;
+
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	CDEV_LOG_DEBUG("Setup %d queue pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > (dev_info.max_nb_queue_pairs)) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u",
+							nb_qpairs);
+			return -(ENOMEM);
+		}
+	} else { /* re-configure */
+		int ret;
+		uint16_t old_nb_queues = dev->data->nb_queue_pairs;
+
+		qp = dev->data->queue_pairs;
+
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release,
+				-ENOTSUP);
+
+		for (i = nb_qpairs; i < old_nb_queues; i++) {
+			ret = (*dev->dev_ops->queue_pair_release)(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
+
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	return rte_crypto_session_pool_create(dev, config->session_mp.nb_objs,
+			config->session_mp.cache_size, config->socket_id);
+}
+
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int retval;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -1;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		CDEV_LOG_ERR("Device %u must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	/* We can't close the device if there are outstanding sessions in use */
+	if (dev->data->session_pool != NULL) {
+		if (!rte_mempool_full(dev->data->session_pool)) {
+			CDEV_LOG_ERR("dev_id=%u close failed, session mempool "
+					"has sessions still in use, free "
+					"all sessions before calling close",
+					(unsigned)dev_id);
+			return -EBUSY;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	retval = (*dev->dev_ops->dev_close)(dev);
+
+	if (retval < 0)
+		return retval;
+
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/*
+	 * This function is only safe when called from the primary process
+	 * in a multi-process setup
+	 */
+	RTE_PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (dev_id >= cryptodev_globals.nb_devs) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+static void
+rte_crypto_session_init(struct rte_mempool *mp,
+		void *opaque_arg,
+		void *_sess,
+		__rte_unused unsigned i)
+{
+	struct rte_cryptodev_session *sess = _sess;
+	struct rte_cryptodev *dev = opaque_arg;
+
+	memset(sess, 0, mp->elt_size);
+
+	sess->dev_id = dev->data->dev_id;
+	sess->type = dev->dev_type;
+	sess->mp = mp;
+
+	if (dev->dev_ops->session_initialize)
+		(*dev->dev_ops->session_initialize)(mp, sess->_private);
+}
+
+static int
+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id)
+{
+	char mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	unsigned priv_sess_size;
+
+	unsigned n = snprintf(mp_name, sizeof(mp_name), "cdev_%d_sess_mp",
+			dev->data->dev_id);
+	if (n >= sizeof(mp_name)) {
+		CDEV_LOG_ERR("Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);
+	priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
+	if (priv_sess_size == 0) {
+		CDEV_LOG_ERR("%s returned an invalid private session size",
+						dev->data->name);
+		return -ENOMEM;
+	}
+
+	unsigned elt_size = sizeof(struct rte_cryptodev_session) +
+			priv_sess_size;
+
+	dev->data->session_pool = rte_mempool_lookup(mp_name);
+	if (dev->data->session_pool != NULL) {
+		if ((dev->data->session_pool->elt_size != elt_size) ||
+				(dev->data->session_pool->cache_size <
+				obj_cache_size) ||
+				(dev->data->session_pool->size < nb_objs)) {
+
+			CDEV_LOG_ERR("%s mempool already exists with different"
+					" initialization parameters", mp_name);
+			dev->data->session_pool = NULL;
+			return -ENOMEM;
+		}
+	} else {
+		dev->data->session_pool = rte_mempool_create(
+				mp_name, /* mempool name */
+				nb_objs, /* number of elements*/
+				elt_size, /* element size*/
+				obj_cache_size, /* Cache size*/
+				0, /* private data size */
+				NULL, /* obj initialization constructor */
+				NULL, /* obj initialization constructor arg */
+				rte_crypto_session_init, /* obj constructor */
+				dev, /* obj constructor arg */
+				socket_id, /* socket id */
+				0); /* flags */
+
+		if (dev->data->session_pool == NULL) {
+			CDEV_LOG_ERR("%s mempool allocation failed", mp_name);
+			return -ENOMEM;
+		}
+	}
+
+	CDEV_LOG_DEBUG("%s mempool created!", mp_name);
+	return 0;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id, struct rte_crypto_xform *xform)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_session *sess;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Allocate a session structure from the session pool */
+	if (rte_mempool_get(dev->data->session_pool, (void **)&sess)) {
+		CDEV_LOG_ERR("Couldn't get object from session mempool");
+		return NULL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);
+	if (dev->dev_ops->session_configure(dev, xform, sess->_private) ==
+			NULL) {
+		CDEV_LOG_ERR("dev_id %d failed to configure session details",
+				dev_id);
+
+		/* Return session to mempool */
+		rte_mempool_put(sess->mp, (void *)sess);
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id, struct rte_cryptodev_session *sess)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return sess;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	/* Check the session belongs to this device type */
+	if (sess->type != dev->dev_type)
+		return sess;
+
+	/* Let device implementation clear session material */
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
+	dev->dev_ops->session_clear(dev, (void *)sess->_private);
+
+	/* Return session to mempool */
+	rte_mempool_put(sess->mp, (void *)sess);
+
+	return NULL;
+}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..04bade7
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,651 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "stddef.h"
+
+#include "rte_crypto.h"
+#include "rte_dev.h"
+
+#define CRYPTODEV_NAME_NULL_PMD		("cryptodev_null_pmd")
+/**< Null crypto PMD device name */
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
+	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...)					\
+		RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...)				\
+		RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+				dev, __func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n",	\
+				__func__, __LINE__, ## args)		\
+
+#define CDEV_PMD_TRACE(fmt, args...)					\
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n",		\
+				dev, __func__, ## args)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+/**  Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+
+	unsigned max_nb_queue_pairs;
+	/**< Maximum number of queue pairs supported by device. */
+	unsigned max_nb_sessions;
+	/**< Maximum number of sessions supported by device. */
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User specified parameter to be passed to the user's
+ *			callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure is
+ * used for RDTSC counters for measuring crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;	/**< Accumulated time processing operations */
+	uint64_t t_min;		/**< Min time */
+	uint64_t t_max;		/**< Max time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;
+	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;
+	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;
+	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;
+	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf; /**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Options arguments for device.
+ *
+ * @return
+ * - On successful creation of the cryptodev the device index is returned,
+ *   which will be between 0 and rte_cryptodev_count() - 1.
+ * - In the case of a failure, returns -1.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
+
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+extern int
+rte_cryptodev_get_dev_id(const char *name);
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+extern uint8_t
+rte_cryptodev_count(void);
+
+/**
+ * Get the number of crypto devices of a given type that have been
+ * successfully initialised.
+ *
+ * @param	type	Crypto device type.
+ *
+ * @return
+ *   - The number of usable crypto devices of the given type.
+ */
+extern uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+
+/**
+ * Return the NUMA socket to which a device is connected.
+ *
+ * @param dev_id
+ *   The identifier of the device
+ * @return
+ *   The NUMA socket id to which the device is connected or
+ *   a default of zero if the socket could not be determined.
+ *   -1 if the dev_id value is out of range.
+ */
+extern int
+rte_cryptodev_socket_id(uint8_t dev_id);
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;
+	/**< Number of queue pairs to configure on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< Per-lcore object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
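+
+/*
+ * Example (illustrative only): a minimal configuration sequence. The device
+ * id, queue pair count and session mempool sizes are example values and
+ * should be sized for the application's own workload.
+ *
+ *	struct rte_cryptodev_config conf = {
+ *		.socket_id = SOCKET_ID_ANY,
+ *		.nb_queue_pairs = 2,
+ *		.session_mp = {
+ *			.nb_objs = 2048,
+ *			.cache_size = 64
+ *		}
+ *	};
+ *
+ *	if (rte_cryptodev_configure(0, &conf) < 0)
+ *		rte_exit(EXIT_FAILURE, "Failed to configure cryptodev 0");
+ */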
+
+/**
+ * Start a device.
+ *
+ * The device start step is the last one and consists of setting the configured
+ * offload features and starting the transmit and the receive units of the
+ * device.
+ * On success, all basic functions exported by the API (enqueue/dequeue and
+ * so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start()
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ */
+extern int
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a queue pair for a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pairs to set up. The
+ *				value must be in the range [0, nb_queue_pair
+ *				- 1] previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the DMA memory allocated for the queue
+ *				pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
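+
+/*
+ * Example (illustrative only): setting up each queue pair after the device
+ * has been configured. The descriptor count and queue pair total are
+ * example values.
+ *
+ *	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };
+ *	uint16_t qp_id, nb_qps = 2;
+ *
+ *	for (qp_id = 0; qp_id < nb_qps; qp_id++)
+ *		if (rte_cryptodev_queue_pair_setup(0, qp_id, &qp_conf,
+ *				SOCKET_ID_ANY) < 0)
+ *			rte_exit(EXIT_FAILURE, "Failed to setup qp %u", qp_id);
+ */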
+
+/**
+ * Start a specified queue pair of a device. It is used
+ * when deferred_start flag of the specified queue is true.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop specified queue pair of a device
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to
+ *				rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		Event interested.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
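+
+/*
+ * Example (illustrative only): registering a handler for error events; the
+ * handler body is a sketch.
+ *
+ *	static void
+ *	crypto_event_handler(uint8_t dev_id,
+ *			enum rte_cryptodev_event_type event,
+ *			__rte_unused void *cb_arg)
+ *	{
+ *		if (event == RTE_CRYPTODEV_EVENT_ERROR)
+ *			printf("Error event on cryptodev %u\n", dev_id);
+ *	}
+ *
+ *	rte_cryptodev_callback_register(0, RTE_CRYPTODEV_EVENT_ERROR,
+ *			crypto_event_handler, NULL);
+ */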
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		Event interested.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered
+ *				callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;
+	/**< Pointer to PMD receive function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/**< Pointer to PMD transmit function. */
+
+	const struct rte_cryptodev_driver *driver;
+	/**< Driver for this device */
+	struct rte_cryptodev_data *data;
+	/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;
+	/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;
+	/**< Crypto device type */
+	enum pmd_type pmd_type;
+	/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+} __rte_cache_aligned;
+
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+/**
+ *
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t socket_id;
+	/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;
+	/**< Session memory pool */
+	void **queue_pairs;
+	/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;
+	/**< Number of device queue pairs. */
+
+	void *dev_private;
+	/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+extern struct rte_cryptodev *rte_cryptodevs;
+/**
+ *
+ * Dequeue a burst of processed packets from a queue of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_crypto_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain on the queue pair. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_crypto_dequeue_burst() function
+ * until a value less than *nb_pkts* is returned.
+ *
+ * The rte_crypto_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_crypto_enqueue_burst() function is invoked to place packets
+ * on the queue *queue_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_crypto_enqueue_burst() function returns the number of packets it
+ * actually enqueued. A return value equal to *nb_pkts* means that all packets
+ * have been enqueued.
+ *
+ * Each mbuf in the *pkts* array must have a valid *rte_mbuf_offload* structure
+ * attached which contains a valid crypto operation.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair which packets are
+ *				to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				 *rte_cryptodev_configure*.
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to be processed.
+ * @param	nb_pkts		The number of packets to transmit.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue pair is full.
+ * The number of packets is 0 if the device hasn't been started.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return (*dev->enqueue_burst)(
+			dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
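+
+/*
+ * Example (illustrative only): a typical enqueue/dequeue cycle using the two
+ * inline functions above, assuming pkts[] already holds nb_pkts mbufs, each
+ * with a crypto operation attached, and that device 0 / queue pair 0 have
+ * been configured and started.
+ *
+ *	uint16_t nb_enq, nb_deq = 0;
+ *
+ *	nb_enq = rte_cryptodev_enqueue_burst(0, 0, pkts, nb_pkts);
+ *
+ *	while (nb_deq < nb_enq)
+ *		nb_deq += rte_cryptodev_dequeue_burst(0, 0, &pkts[nb_deq],
+ *				(uint16_t)(nb_enq - nb_deq));
+ */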
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialise the immutable
+ * parameters of a symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst function is
+ * used. If a session-based operation is being provisioned, each mbuf's
+ * crypto_op should contain a reference to the session pointer returned
+ * from this function. Memory to contain the session information is
+ * allocated from within a mempool managed by the cryptodev.
+ *
+ * rte_cryptodev_session_free must be called to free the allocated
+ * memory when the session is no longer required.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	xform		Crypto transform chain.
+ *
+ * @return
+ *  Pointer to the created session or NULL on failure.
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_xform *xform);
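+
+/*
+ * Example (illustrative only): creating a session for AES-CBC encryption.
+ * The xform field and enum names are assumed from the rte_crypto.h
+ * definitions in this patch set; the key material and sizes are example
+ * values.
+ *
+ *	uint8_t cipher_key[16] = { 0 };
+ *
+ *	struct rte_crypto_xform cipher_xform = {
+ *		.next = NULL,
+ *		.type = RTE_CRYPTO_XFORM_CIPHER,
+ *		.cipher = {
+ *			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+ *			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ *			.key = {
+ *				.data = cipher_key,
+ *				.length = sizeof(cipher_key)
+ *			}
+ *		}
+ *	};
+ *
+ *	struct rte_cryptodev_session *sess =
+ *			rte_cryptodev_session_create(0, &cipher_xform);
+ *	if (sess == NULL)
+ *		rte_exit(EXIT_FAILURE, "Session creation failed");
+ */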
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ *
+ * @return
+ *   NULL on successful freeing of session.
+ *   Session pointer on failure to free session.
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..d5fbe44
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,549 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These APIs are for use by crypto PMDs only and user applications should
+ * not call them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(fmt, args...)
+#endif
+
+struct rte_cryptodev_session {
+	struct {
+		uint8_t dev_id;
+		enum rte_cryptodev_type type;
+		struct rte_mempool *mp;
+	} __rte_aligned(8);
+
+	char _private[];
+};
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ *			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
+
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;	/**< Device information array */
+	struct rte_cryptodev_data *data[RTE_CRYPTO_MAX_DEVS];
+	/**< Device private data */
+	uint8_t nb_devs;		/**< Number of devices found */
+	uint8_t max_devs;		/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the named device, or NULL if
+ *     no attached device of that name is found.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	struct rte_cryptodev *dev;
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		dev = &rte_cryptodev_globals->devs[i];
+
+		if ((dev->attached == RTE_CRYPTODEV_ATTACHED) &&
+				(strcmp(dev->data->name, name) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that the crypto device index refers to a valid, attached
+ * crypto device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - If the device index is valid (1) or not (0).
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures.
+ */
+extern struct rte_cryptodev *rte_cryptodevs;
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * the generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ *	Function used to configure device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ * @return
+ * - 0 on success.
+ * - EAGAIN if can't close as device is busy
+ */
+typedef int (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return
+ * - 0 on success.
+ * - EAGAIN if can't close as device is busy
+ */
+typedef int (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		number of session objects in mempool
+ * @param	obj_cache_size	lcore object cache size, see *rte_ring_create*
+ * @param	socket_id	Socket Id to allocate mempool on.
+ *
+ * @return
+ * - 0 on success
+ * - Negative value on failure
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+
+/**
+ * Get the size of a cryptodev's private session data
+ *
+ * @param	dev		Crypto device pointer
+ *
+ * @return
+ *  - On success returns the size of the device's private session structure
+ *  - On failure returns 0
+ */
+typedef unsigned (*cryptodev_get_session_private_size_t)(
+		struct rte_cryptodev *dev);
+
+/**
+ * Initialize the private data of a Crypto session on a device.
+ *
+ * @param	mempool		Mempool that the session was allocated from
+ * @param	session_private	Pointer to cryptodev's private session structure
+ */
+typedef void (*cryptodev_initialize_session_t)(struct rte_mempool *mempool,
+		void *session_private);
+
+/**
+ * Configure a Crypto session on a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	xform		Single or chain of crypto xforms
+ * @param	priv_sess	Pointer to cryptodev's private session structure
+ *
+ * @return
+ *  - Returns private session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef void * (*cryptodev_configure_session_t)(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+/**
+ * Clear a Crypto session's private data.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	session_private	Cryptodev session private data to clear
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		void *session_private);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;
+	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;
+	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;
+	/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;
+	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;
+	/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;
+	/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;
+	/**< Get count of the queue pairs. */
+
+	cryptodev_get_session_private_size_t session_get_size;
+	/**< Return size of private session data. */
+	cryptodev_initialize_session_t session_initialize;
+	/**< Initialization function for private session data */
+	cryptodev_configure_session_t session_configure;
+	/**< Configure a Crypto session. */
+	cryptodev_free_session_t session_clear;
+	/**< Clear a Crypto sessions private data. */
+};
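+
+/*
+ * Illustrative sketch of how a PMD might populate this ops table. The
+ * mb_pmd_* functions named below are hypothetical placeholders and are not
+ * part of this patch set:
+ *
+ *	static struct rte_cryptodev_ops mb_pmd_ops = {
+ *		.dev_configure		= mb_pmd_config,
+ *		.dev_start		= mb_pmd_start,
+ *		.dev_stop		= mb_pmd_stop,
+ *		.dev_close		= mb_pmd_close,
+ *		.dev_infos_get		= mb_pmd_info_get,
+ *		.stats_get		= mb_pmd_stats_get,
+ *		.stats_reset		= mb_pmd_stats_reset,
+ *		.queue_pair_setup	= mb_pmd_qp_setup,
+ *		.queue_pair_release	= mb_pmd_qp_release,
+ *		.queue_pair_count	= mb_pmd_qp_count,
+ *		.session_get_size	= mb_pmd_session_get_size,
+ *		.session_configure	= mb_pmd_session_configure,
+ *		.session_clear		= mb_pmd_session_clear,
+ *	};
+ *
+ * Function pointers left unset (e.g. the deferred queue pair start/stop
+ * hooks) remain NULL and the corresponding API calls will not be supported
+ * by that PMD.
+ */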
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Pointer to the slot in the rte_cryptodevs array for the new device.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a) register itself as a PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b) complete the initialization sequence if the device is a virtual
+ *		device by calling the rte_cryptodev_init() directly passing a
+ *		NULL parameter for the rte_pci_device structure.
+ *
+ *   @param crypto_drv	crypto_driver structure associated with the crypto
+ *					driver.
+ *   @param type		pmd type
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
new file mode 100644
index 0000000..ff8e93d
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -0,0 +1,32 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodevs;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_close;
+	rte_cryptodev_count;
+	rte_cryptodev_count_devtype;
+	rte_cryptodev_configure;
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_info_get;
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_callback_process;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_virtual_dev_init;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+	rte_cryptodev_socket_id;
+	rte_cryptodev_start;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_stop;
+	rte_cryptodev_queue_pair_count;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+
+	local: *;
+};
\ No newline at end of file
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 724efa7..5d382bb 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -118,6 +118,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v8 06/10] mbuf_offload: library to support attaching offloads to a mbuf
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
                                 ` (4 preceding siblings ...)
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
                                 ` (4 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

This library adds support for adding a chain of offload operations to an
mbuf. It contains the definition of the rte_mbuf_offload structure as
well as helper functions for attaching offloads to mbufs and mempool
management functions.

This initial implementation supports attaching multiple offload
operations to a single mbuf, but only a single offload operation of a
specific type can be attached to that mbuf.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

---
 MAINTAINERS                                        |   4 +
 config/common_bsdapp                               |   6 +
 config/common_linuxapp                             |   6 +
 doc/api/doxy-api-index.md                          |   1 +
 doc/api/doxy-api.conf                              |   1 +
 lib/Makefile                                       |   1 +
 lib/librte_mbuf/rte_mbuf.h                         |   6 +
 lib/librte_mbuf_offload/Makefile                   |  52 ++++
 lib/librte_mbuf_offload/rte_mbuf_offload.c         | 100 +++++++
 lib/librte_mbuf_offload/rte_mbuf_offload.h         | 302 +++++++++++++++++++++
 .../rte_mbuf_offload_version.map                   |   7 +
 mk/rte.app.mk                                      |   1 +
 12 files changed, 487 insertions(+)
 create mode 100644 lib/librte_mbuf_offload/Makefile
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.c
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload.h
 create mode 100644 lib/librte_mbuf_offload/rte_mbuf_offload_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 9138bbb..dd8be0f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -191,6 +191,10 @@ F: lib/librte_mbuf/
 F: doc/guides/prog_guide/mbuf_lib.rst
 F: app/test/test_mbuf.c
 
+Packet buffer offload
+M: Declan Doherty <declan.doherty@intel.com>
+F: lib/librte_mbuf_offload/
+
 Ethernet API
 M: Thomas Monjalon <thomas.monjalon@6wind.com>
 F: lib/librte_ether/
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 3bfb3f6..e536fdf 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -335,6 +335,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index cd7a2d4..1947ce3 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -343,6 +343,12 @@ CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
 #
+# Compile librte_mbuf_offload
+#
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD=y
+CONFIG_RTE_LIBRTE_MBUF_OFFLOAD_DEBUG=n
+
+#
 # Compile librte_timer
 #
 CONFIG_RTE_LIBRTE_TIMER=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index bdb6130..199cc2c 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -104,6 +104,7 @@ There are many libraries, so their headers may be grouped by topics:
 
 - **containers**:
   [mbuf]               (@ref rte_mbuf.h),
+  [mbuf_offload]       (@ref rte_mbuf_offload.h),
   [ring]               (@ref rte_ring.h),
   [distributor]        (@ref rte_distributor.h),
   [reorder]            (@ref rte_reorder.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index 7244b8f..15bba16 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -48,6 +48,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_kvargs \
                           lib/librte_lpm \
                           lib/librte_mbuf \
+                          lib/librte_mbuf_offload \
                           lib/librte_mempool \
                           lib/librte_meter \
                           lib/librte_net \
diff --git a/lib/Makefile b/lib/Makefile
index 4c5c1b4..ef172ea 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
+DIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += librte_mbuf_offload
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 6a1c133..cb4583d 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -728,6 +728,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque rte_mbuf_offload  structure declarations */
+struct rte_mbuf_offload;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -841,6 +844,9 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+
+	/** Chain of off-load operations to perform on mbuf */
+	struct rte_mbuf_offload *offload_ops;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
diff --git a/lib/librte_mbuf_offload/Makefile b/lib/librte_mbuf_offload/Makefile
new file mode 100644
index 0000000..acdb449
--- /dev/null
+++ b/lib/librte_mbuf_offload/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mbuf_offload.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+EXPORT_MAP := rte_mbuf_offload_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) := rte_mbuf_offload.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)-include := rte_mbuf_offload.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.c b/lib/librte_mbuf_offload/rte_mbuf_offload.c
new file mode 100644
index 0000000..5c0c9dd
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.c
@@ -0,0 +1,100 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+
+#include "rte_mbuf_offload.h"
+
+/** Initialize rte_mbuf_offload structure */
+static void
+rte_pktmbuf_offload_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_mbuf_offload *ol = _op_data;
+
+	memset(_op_data, 0, mp->elt_size);
+
+	ol->type = RTE_PKTMBUF_OL_NOT_SPECIFIED;
+	ol->mp = mp;
+}
+
+
+struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id)
+{
+	struct rte_pktmbuf_offload_pool_private *priv;
+	unsigned elt_size = sizeof(struct rte_mbuf_offload) + priv_size;
+
+
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+
+	if (mp != NULL) {
+		priv = (struct rte_pktmbuf_offload_pool_private *)
+				rte_mempool_get_priv(mp);
+
+		if (priv->offload_priv_size < priv_size ||
+				mp->elt_size != elt_size ||
+				mp->cache_size < cache_size ||
+				mp->size < size)
+			return NULL;
+
+		return mp;
+	}
+
+	mp = rte_mempool_create(
+			name,
+			size,
+			elt_size,
+			cache_size,
+			sizeof(struct rte_pktmbuf_offload_pool_private),
+			NULL,
+			NULL,
+			rte_pktmbuf_offload_init,
+			NULL,
+			socket_id,
+			0);
+
+	if (mp == NULL)
+		return NULL;
+
+	priv = (struct rte_pktmbuf_offload_pool_private *)
+			rte_mempool_get_priv(mp);
+
+	priv->offload_priv_size = priv_size;
+	return mp;
+}
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload.h b/lib/librte_mbuf_offload/rte_mbuf_offload.h
new file mode 100644
index 0000000..f52f163
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload.h
@@ -0,0 +1,302 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MBUF_OFFLOAD_H_
+#define _RTE_MBUF_OFFLOAD_H_
+
+/**
+ * @file
+ * RTE mbuf offload
+ *
+ * The rte_mbuf_offload library provides the ability to specify a device
+ * generic off-load operation independent of the current Rx/Tx Ethernet
+ * offloads supported within the rte_mbuf structure, and adds support for
+ * multiple off-load operations and offload device types.
+ *
+ * The rte_mbuf_offload specifies the particular off-load operation type, such
+ * as a crypto operation, and provides a container for the operation's
+ * parameters inside the op union. These parameters are then used by the
+ * device which supports that operation to perform the specified offload.
+ *
+ * This library provides an API to create a pre-allocated mempool of offload
+ * operations, with supporting allocate and free functions. It also provides
+ * APIs for attaching an offload to an mbuf, as well as an API to retrieve a
+ * specified offload type from an mbuf offload chain.
+ */
+
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+
+
+/** packet mbuf offload operation types */
+enum rte_mbuf_ol_op_type {
+	RTE_PKTMBUF_OL_NOT_SPECIFIED = 0,
+	/**< Off-load not specified */
+	RTE_PKTMBUF_OL_CRYPTO
+	/**< Crypto offload operation */
+};
+
+/**
+ * Generic packet mbuf offload.
+ * This is used to specify an offload operation to be performed on an
+ * rte_mbuf. Multiple offload operations can be chained to the same mbuf, but
+ * only a single offload operation of a particular type can be in the chain.
+ */
+struct rte_mbuf_offload {
+	struct rte_mbuf_offload *next;	/**< next offload in chain */
+	struct rte_mbuf *m;		/**< mbuf offload is attached to */
+	struct rte_mempool *mp;		/**< mempool offload allocated from */
+
+	enum rte_mbuf_ol_op_type type;	/**< offload type */
+	union {
+		struct rte_crypto_op crypto;	/**< Crypto operation */
+	} op;
+};
+
+/** Private data structure belonging to a packet mbuf offload mempool */
+struct rte_pktmbuf_offload_pool_private {
+	uint16_t offload_priv_size;
+	/**< Size of private area in each mbuf_offload. */
+};
+
+
+/**
+ * Creates a mempool of rte_mbuf_offload objects
+ *
+ * @param	name		mempool name
+ * @param	size		number of objects in mempool
+ * @param	cache_size	cache size of objects for each core
+ * @param	priv_size	size of private data to be allocated with each
+ *				rte_mbuf_offload object
+ * @param	socket_id	Socket on which to allocate mempool objects
+ *
+ * @return
+ * - On success returns a valid mempool of rte_mbuf_offload objects
+ * - On failure returns NULL
+ */
+extern struct rte_mempool *
+rte_pktmbuf_offload_pool_create(const char *name, unsigned size,
+		unsigned cache_size, uint16_t priv_size, int socket_id);
+
+
+/**
+ * Returns the private data size allocated with each rte_mbuf_offload object
+ * by the mempool
+ *
+ * @param	mpool	rte_mbuf_offload mempool
+ *
+ * @return	private data size
+ */
+static inline uint16_t
+__rte_pktmbuf_offload_priv_size(struct rte_mempool *mpool)
+{
+	struct rte_pktmbuf_offload_pool_private *priv =
+			rte_mempool_get_priv(mpool);
+
+	return priv->offload_priv_size;
+}
+
+/**
+ * Get the offload of the specified operation type from an mbuf.
+ *
+ * @param	m		packet mbuf.
+ * @param	type		offload operation type requested.
+ *
+ * @return
+ * - On success returns a pointer to the matching rte_mbuf_offload
+ * - On failure returns NULL
+ *
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_get(struct rte_mbuf *m, enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol;
+
+	for (ol = m->offload_ops; ol != NULL; ol = ol->next)
+		if (ol->type == type)
+			return ol;
+
+	return NULL;
+}
+
+/**
+ * Attach an rte_mbuf_offload to an mbuf. Only a single offload of any one
+ * type is supported in an mbuf's chain of offloads.
+ *
+ * @param	m	packet mbuf.
+ * @param	ol	rte_mbuf_offload structure to be attached
+ *
+ * @return
+ * - On success returns a pointer to the attached offload
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_attach(struct rte_mbuf *m, struct rte_mbuf_offload *ol)
+{
+	struct rte_mbuf_offload **ol_last;
+
+	for (ol_last = &m->offload_ops;	ol_last[0] != NULL;
+			ol_last = &ol_last[0]->next)
+		if (ol_last[0]->type == ol->type)
+			return NULL;
+
+	ol_last[0] = ol;
+	ol_last[0]->m = m;
+	ol_last[0]->next = NULL;
+
+	return ol_last[0];
+}
+
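+/*
+ * Illustrative usage sketch (an assumption for documentation purposes, not
+ * part of the API): the flow below shows how the pool creation, allocation,
+ * attach and get functions are intended to compose. The pool name, sizes and
+ * the mbuf "m" are hypothetical values chosen for the example.
+ *
+ *	struct rte_mempool *ol_pool = rte_pktmbuf_offload_pool_create(
+ *			"OFFLOAD_POOL", 8192, 128,
+ *			2 * sizeof(struct rte_crypto_xform), rte_socket_id());
+ *
+ *	struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(ol_pool,
+ *			RTE_PKTMBUF_OL_CRYPTO);
+ *
+ *	if (ol == NULL || rte_pktmbuf_offload_attach(m, ol) == NULL)
+ *		rte_pktmbuf_offload_free(ol);
+ *
+ *	struct rte_mbuf_offload *crypto_ol = rte_pktmbuf_offload_get(m,
+ *			RTE_PKTMBUF_OL_CRYPTO);
+ */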
+
+/** Resets an rte_mbuf_offload to its default parameters */
+static inline void
+__rte_pktmbuf_offload_reset(struct rte_mbuf_offload *ol,
+		enum rte_mbuf_ol_op_type type)
+{
+	ol->m = NULL;
+	ol->type = type;
+
+	switch (type) {
+	case RTE_PKTMBUF_OL_CRYPTO:
+		__rte_crypto_op_reset(&ol->op.crypto); break;
+	default:
+		break;
+	}
+}
+
+/** Allocate rte_mbuf_offload from mempool */
+static inline struct rte_mbuf_offload *
+__rte_pktmbuf_offload_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_mbuf_offload *)buf;
+}
+
+/**
+ * Allocate an rte_mbuf_offload with a specified operation type from an
+ * rte_mbuf_offload mempool
+ *
+ * @param	mpool		rte_mbuf_offload mempool
+ * @param	type		offload operation type
+ *
+ * @return
+ * - On success returns a valid rte_mbuf_offload structure
+ * - On failure returns NULL
+ */
+static inline struct rte_mbuf_offload *
+rte_pktmbuf_offload_alloc(struct rte_mempool *mpool,
+		enum rte_mbuf_ol_op_type type)
+{
+	struct rte_mbuf_offload *ol = __rte_pktmbuf_offload_raw_alloc(mpool);
+
+	if (ol != NULL)
+		__rte_pktmbuf_offload_reset(ol, type);
+
+	return ol;
+}
+
+/**
+ * Free an rte_mbuf_offload structure back to its mempool
+ */
+static inline void
+rte_pktmbuf_offload_free(struct rte_mbuf_offload *ol)
+{
+	if (ol != NULL && ol->mp != NULL)
+		rte_mempool_put(ol->mp, ol);
+}
+
+/**
+ * Checks if the private data of an rte_mbuf_offload has enough capacity for
+ * the requested size
+ *
+ * @return
+ * - if sufficient space available returns pointer to start of private data
+ * - if insufficient space returns NULL
+ */
+static inline void *
+__rte_pktmbuf_offload_check_priv_data_size(struct rte_mbuf_offload *ol,
+		uint16_t size)
+{
+	uint16_t priv_size;
+
+	if (likely(ol->mp != NULL)) {
+		priv_size = __rte_pktmbuf_offload_priv_size(ol->mp);
+
+		if (likely(priv_size >= size))
+			return (void *)(ol + 1);
+	}
+	return NULL;
+}
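+
+/*
+ * Note: the private data area is laid out immediately after the
+ * rte_mbuf_offload structure itself, hence the (ol + 1) above. Its size is
+ * fixed per mempool at creation time via the priv_size parameter of
+ * rte_pktmbuf_offload_pool_create().
+ */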
+
+/**
+ * Allocate space for crypto xforms in the private data space of the
+ * rte_mbuf_offload. This also sets each xform's type to unspecified and
+ * links the xforms into a chain on the crypto operation.
+ *
+ * @return
+ * - On success returns a pointer to the first crypto xform in the crypto
+ *   operation's chain
+ * - On failure returns NULL
+ */
+static inline struct rte_crypto_xform *
+rte_pktmbuf_offload_alloc_crypto_xforms(struct rte_mbuf_offload *ol,
+		unsigned nb_xforms)
+{
+	struct rte_crypto_xform *xform;
+	void *priv_data;
+	uint16_t size;
+
+	size = sizeof(struct rte_crypto_xform) * nb_xforms;
+	priv_data = __rte_pktmbuf_offload_check_priv_data_size(ol, size);
+
+	if (priv_data == NULL)
+		return NULL;
+
+	ol->op.crypto.xform = xform = (struct rte_crypto_xform *)priv_data;
+
+	do {
+		xform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;
+		xform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;
+	} while (xform);
+
+	return ol->op.crypto.xform;
+}
+
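+/*
+ * Illustrative sketch (an assumption for documentation purposes, not part of
+ * the API): allocating a two element xform chain for a session-less
+ * cipher-then-hash operation. The mempool must have been created with a
+ * priv_size of at least 2 * sizeof(struct rte_crypto_xform) for the size
+ * check above to pass.
+ *
+ *	struct rte_crypto_xform *xform =
+ *			rte_pktmbuf_offload_alloc_crypto_xforms(ol, 2);
+ *
+ *	if (xform != NULL) {
+ *		xform->type = RTE_CRYPTO_XFORM_CIPHER;
+ *		xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+ *	}
+ */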
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MBUF_OFFLOAD_H_ */
diff --git a/lib/librte_mbuf_offload/rte_mbuf_offload_version.map b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
new file mode 100644
index 0000000..3d3b06a
--- /dev/null
+++ b/lib/librte_mbuf_offload/rte_mbuf_offload_version.map
@@ -0,0 +1,7 @@
+DPDK_2.2 {
+	global:
+
+	rte_pktmbuf_offload_pool_create;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5d382bb..2b8ddce 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,7 @@ ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),n)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF_OFFLOAD)   += -lrte_mbuf_offload
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v8 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
                                 ` (5 preceding siblings ...)
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
                                 ` (3 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.

This patch depends on a QAT PF driver for device initialization. See
the file doc/guides/cryptodevs/qat.rst for configuration details.

This patch supports a limited subset of QAT device functionality,
currently the chaining of cipher and hash operations for the
following algorithms:

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES_CBC (with 128-bit, 192-bit and 256-bit keys supported)

Hash algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC

This patchset has some limitations, which shall be addressed in a
subsequent release:
 - Chained mbufs are not supported.
 - Hash only is not supported.
 - Cipher only is not supported.
 - Only in-place is currently supported (destination address is
   the same as source address).
 - Only supports session-oriented API implementation (session-less
   APIs are not supported).

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 config/common_bsdapp                               |  14 +
 config/common_linuxapp                             |  14 +
 doc/guides/cryptodevs/index.rst                    |  38 ++
 doc/guides/cryptodevs/qat.rst                      | 219 ++++++++
 doc/guides/index.rst                               |   1 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  37 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 174 ++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 +++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 ++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 306 +++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 125 +++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 601 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 561 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    | 124 +++++
 drivers/crypto/qat/qat_logs.h                      |  78 +++
 drivers/crypto/qat/qat_qp.c                        | 429 +++++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   3 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 137 +++++
 mk/rte.app.mk                                      |   3 +
 21 files changed, 3648 insertions(+)
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index e536fdf..3302d3f 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -158,6 +158,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 1947ce3..458b014 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -156,6 +156,20 @@ CONFIG_RTE_CRYPTO_MAX_DEVS=64
 CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..8ac928c
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,38 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+=====================
+
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..1901842
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,219 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator.
+
+The QAT PMD has been tested on Fedora 21 64-bit with gcc and on the 4.3
+kernel.org Linux kernel.
+
+
+Features
+--------
+
+The QAT PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_SYM_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_SYM_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_SYM_CIPHER_AES256_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_XCBC_MAC``
+
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+
+Installation
+------------
+
+To use the DPDK QAT PMD, an SRIOV-enabled QAT kernel driver is required. The
+VF devices exposed by this driver will be used by the QAT PMD.
+
+If you are running on kernel 4.3 or greater, see instructions for
+`Installation using kernel.org driver`_ below. If you are on a kernel earlier
+than 4.3, see `Installation using 01.org QAT driver`_.
+
+
+Installation using 01.org QAT driver
+------------------------------------
+
+Download the latest QuickAssist Technology Driver from `01.org
+<https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches>`_.
+Consult the *Getting Started Guide* at the same URL for further information.
+
+The steps below assume you are:
+
+* Building on a platform with one ``DH895xCC`` device.
+* Using package ``qatmux.l.2.3.0-34.tgz``.
+* On Fedora 21 with kernel ``3.17.4-301.fc21.x86_64``.
+
+In the BIOS, ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, for example by running:
+
+* ``./installer.sh uninstall`` in the directory where it was originally installed.
+
+* or ``rmmod qat_dh895xcc; rmmod intel_qat``.
+
+Build and install the SRIOV-enabled QAT driver::
+
+    mkdir /QAT
+    cd /QAT
+    # copy qatmux.l.2.3.0-34.tgz to this location
+    tar zxof qatmux.l.2.3.0-34.tgz
+
+    export ICP_WITHOUT_IOMMU=1
+    ./installer.sh install QAT1.6 host
+
+You can use ``cat /proc/icp_dh895xcc_dev0/version`` to confirm the driver is correctly installed.
+You can use ``lspci -d:443`` to confirm that the bdfs of the 32 VF devices per ``DH895xCC`` device are available.
+
+To complete the installation, follow the instructions in `Binding the available VFs to the DPDK UIO driver`_.
+
+**Note**: If you are using a later kernel and the build fails with an error relating to ``strict_strtoull`` not being available, apply the following patch:
+
+.. code-block:: diff
+
+   /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+   + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+   + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+   + #else
+   #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+   #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+   #else
+   #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+   #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+   #else
+   #define STR_TO_64(str, base, num, endPtr)                                 \
+        do {                                                               \
+              if (str[0] == '-')                                           \
+              {                                                            \
+                   *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+              }else {                                                      \
+                   *(num) = simple_strtoull((str), &(endPtr), (base));      \
+              }                                                            \
+        } while(0)
+   + #endif
+   #endif
+   #endif
+
+
+If the build fails due to missing header files, you may need to do the following:
+
+* ``sudo yum install zlib-devel``
+* ``sudo yum install openssl-devel``
+
+If the build or install fails due to mismatching kernel sources, you may need to do the following:
+
+* ``sudo yum install kernel-headers-`uname -r```
+* ``sudo yum install kernel-src-`uname -r```
+* ``sudo yum install kernel-devel-`uname -r```
+
+
+Installation using kernel.org driver
+------------------------------------
+
+Assuming you are running on at least a 4.3 kernel, you can use the stock kernel.org QAT
+driver to start the QAT hardware.
+
+The steps below assume you are:
+
+* Running DPDK on a platform with one ``DH895xCC`` device.
+* On a kernel at least version 4.3.
+
+In the BIOS, ensure that SRIOV is enabled and VT-d is disabled.
+
+Ensure the QAT driver is loaded on your system by executing::
+
+    lsmod | grep qat
+
+You should see the following output::
+
+    qat_dh895xcc            5626  0
+    intel_qat              82336  1 qat_dh895xcc
+
+Next, you need to expose the VFs using the sysfs file system.
+
+First find the bdf of the DH895xCC device::
+
+    lspci -d:435
+
+You should see output similar to::
+
+    03:00.0 Co-processor: Intel Corporation Coleto Creek PCIe Endpoint
+
+Using the sysfs, enable the VFs::
+
+    echo 32 > /sys/bus/pci/drivers/dh895xcc/0000\:03\:00.0/sriov_numvfs
+
+If you get an error, you are likely using a QAT kernel driver from a kernel earlier than 4.3.
+
+To verify that the VFs are available for use, use ``lspci -d:443`` to confirm
+that the bdfs of the 32 VF devices per ``DH895xCC`` device are visible.
+
+To complete the installation, follow the instructions in `Binding the available VFs to the DPDK UIO driver`_.
+
+
+Binding the available VFs to the DPDK UIO driver
+------------------------------------------------
+
+The unbind command below assumes ``bdfs`` of ``03:01.00-03:04.07``; if yours are different, adjust the unbind command accordingly::
+
+   cd $RTE_SDK
+   modprobe uio
+   insmod ./build/kmod/igb_uio.ko
+
+   for device in $(seq 1 4); do \
+       for fn in $(seq 0 7); do \
+           echo -n 0000:03:0${device}.${fn} > \
+           /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind; \
+       done; \
+   done
+
+   echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id
+
+You can use ``lspci -vvd:443`` to confirm that all devices are now in use by the igb_uio kernel driver.
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..f6aecea
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,37 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..47f1c91
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,174 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) \
+			= (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
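+/*
+ * Note: every READ_CSR_xxx and WRITE_CSR_xxx helper below expands to one of
+ * these two primitives, i.e. a single volatile 32-bit load or store at
+ * (csr_base_addr + csr_offset).
+ */
+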
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
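+/*
+ * Worked example (illustration only): the ring size values above encode
+ * bytes, e.g. ADF_RING_SIZE_4K (0x06) gives (1 << 5) << 7 = 4096 bytes and
+ * ADF_DEFAULT_RING_SIZE (0x08) gives (1 << 7) << 7 = 16384 bytes. For a
+ * default sized ring of 64 byte messages, ADF_SIZE_TO_POW(ADF_MSG_SIZE_64)
+ * is 2, so ADF_MAX_INFLIGHTS(0x08, 0x02) = (((1 << 7) << 3) >> 2) - 1 = 255
+ * outstanding requests, i.e. one of the 256 slots is always kept free.
+ */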
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..498ee83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
+
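+/*
+ * Example (illustration only): with bitpos 7 and mask 0x1, as used for the
+ * common header valid flag below, QAT_FIELD_SET(flags, 1, 7, 0x1) sets bit
+ * 7 of flags and QAT_FIELD_GET(flags, 7, 0x1) reads it back as 0 or 1.
+ */
+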
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
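+/*
+ * Note on intended use: opaque_data in the request is returned unmodified in
+ * the response, which lets a driver carry a host pointer (for example the
+ * operation or mbuf handle) through the firmware round trip.
+ */
+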
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
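+/*
+ * Example (illustration only): a flat (non scatter-gather) source buffer
+ * with a 64-bit content descriptor address is described by
+ * ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+ * QAT_COMN_PTR_TYPE_FLAT).
+ */
+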
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
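+/*
+ * Note: next_curr_id packs two 4-bit slice ids, the current slice in bits
+ * 0-3 and the next slice in bits 4-7. As an illustration, a cipher slice
+ * chained to an auth slice carries ICP_QAT_FW_SLICE_CIPHER as the current
+ * id and ICP_QAT_FW_SLICE_AUTH as the next id in its control header.
+ */
+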
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..fbf2b83
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
+
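+/*
+ * Example (illustration only, one plausible combination): a chained
+ * cipher/hash request using a pointer based IV, returning the auth result
+ * and doing no partial processing could build its flags as
+ *
+ *	ICP_QAT_FW_LA_FLAGS_BUILD(0, ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS,
+ *		ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER, ICP_QAT_FW_LA_NO_PROTO,
+ *		ICP_QAT_FW_LA_NO_CMP_AUTH_RES, ICP_QAT_FW_LA_RET_AUTH_RES,
+ *		ICP_QAT_FW_LA_NO_UPDATE_STATE, ICP_QAT_FW_CIPH_IV_64BIT_PTR,
+ *		ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP,
+ *		ICP_QAT_FW_LA_PARTIAL_NONE)
+ */
+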
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..4d4d8e4
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,306 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	(((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	(((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	 (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	(algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
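+
+/*
+ * Example (illustrative): an HMAC-SHA1 config word with a 20-byte digest
+ * would be built as
+ * ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+ *	ICP_QAT_HW_AUTH_ALGO_SHA1, ICP_QAT_HW_SHA1_STATE1_SZ)
+ */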
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & ~((n) - 1))
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
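+
+/*
+ * For the Galois (GCM) hashes, state2 holds the hash key H
+ * (ICP_QAT_HW_GALOIS_H_SZ bytes) followed by the 8-byte len(A) field
+ * and the E(K, CTR0) block; the precompute code in
+ * qat_algs_build_desc.c fills this region.
+ */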
+
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	(((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
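+/*
+ * Example (illustrative): an AES-128-CBC encrypt config word is
+ * ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE,
+ *	ICP_QAT_HW_CIPHER_ALGO_AES128, ICP_QAT_HW_CIPHER_NO_CONVERT,
+ *	ICP_QAT_HW_CIPHER_ENCRYPT)
+ */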
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..76c08c0
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,125 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *    * Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *    * Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in
+ *      the documentation and/or other materials provided with the
+ *      distribution.
+ *    * Neither the name of Intel Corporation nor the names of its
+ *      contributors may be used to endorse or promote products derived
+ *      from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
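+
+/*
+ * Note: the decrypt template requests ICP_QAT_HW_CIPHER_KEY_CONVERT so
+ * that the hardware can derive the AES decryption key schedule from the
+ * supplied encryption key; the encrypt template uses the key as-is.
+ */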
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf bufers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t add_auth_data_length,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..ceaffb7
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,601 @@
+/*
+ *  This file is provided under a dual BSD/GPLv2 license.  When using or
+ *  redistributing this file, you may do so under either license.
+ *
+ *  GPL LICENSE SUMMARY
+ *  Copyright(c) 2015 Intel Corporation.
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of version 2 of the GNU General Public License as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful, but
+ *  WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  General Public License for more details.
+ *
+ *  Contact Information:
+ *  qat-linux@intel.com
+ *
+ *  BSD LICENSE
+ *  Copyright(c) 2015 Intel Corporation.
+ *  Redistribution and use in source and binary forms, with or without
+ *  modification, are permitted provided that the following conditions
+ *  are met:
+ *
+ *	* Redistributions of source code must retain the above copyright
+ *	  notice, this list of conditions and the following disclaimer.
+ *	* Redistributions in binary form must reproduce the above copyright
+ *	  notice, this list of conditions and the following disclaimer in
+ *	  the documentation and/or other materials provided with the
+ *	  distribution.
+ *	* Neither the name of Intel Corporation nor the names of its
+ *	  contributors may be used to endorse or promote products derived
+ *	  from this software without specific prior written permission.
+ *
+ *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+#include <openssl/aes.h>	/* Needed to calculate pre-compute values */
+
+/*
+ * Returns the state1 size in bytes for the given hash algo, used for the
+ * state1 size field in cd_ctrl; this is the digest size rounded up to the
+ * nearest quadword.
+ */
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_GALOIS_128_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns digest size in bytes per hash algo */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* returns block size in bytes per hash algo */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+		return 16;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
+	uint8_t digest[qat_hash_get_digest_size(
+			ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+#define HASH_XCBC_PRECOMP_KEY_NUM 3
+
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		static uint8_t qat_aes_xcbc_key_seed[
+					ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ] = {
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03,
+		};
+
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		int x;
+		AES_KEY enc_key;
+
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ, 16);
+		if (in == NULL)
+			return -EFAULT;
+		rte_memcpy(in, qat_aes_xcbc_key_seed,
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+		for (x = 0; x < HASH_XCBC_PRECOMP_KEY_NUM; x++) {
+			if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+				&enc_key) != 0) {
+				rte_free(in -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ));
+				memset(out -
+					(x * ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ),
+					0, ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ);
+				return -EFAULT;
+			}
+			AES_encrypt(in, out, &enc_key);
+			in += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+			out += ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ;
+		}
+		*p_state_len = ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		rte_free(in - x*ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ);
+		return 0;
+	} else if ((hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		uint8_t *in = NULL;
+		uint8_t *out = p_state_buf;
+		AES_KEY enc_key;
+
+		memset(p_state_buf, 0, ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ);
+		in = rte_zmalloc("working mem for key",
+				ICP_QAT_HW_GALOIS_H_SZ, 16);
+		if (in == NULL)
+			return -EFAULT;
+		memset(in, 0, ICP_QAT_HW_GALOIS_H_SZ);
+		if (AES_set_encrypt_key(auth_key, auth_keylen << 3,
+			&enc_key) != 0) {
+			rte_free(in);
+			return -EFAULT;
+		}
+		AES_encrypt(in, out, &enc_key);
+		*p_state_len = ICP_QAT_HW_GALOIS_H_SZ +
+				ICP_QAT_HW_GALOIS_LEN_A_SZ +
+				ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		rte_free(in);
+		return 0;
+	}
+
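+	/*
+	 * HMAC case: precompute the partial hashes of (key XOR ipad) and
+	 * (key XOR opad) so the hardware can resume from these saved
+	 * states rather than rehashing the padded key on every request.
+	 */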
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	}
+	rte_memcpy(ipad, auth_key, auth_keylen);
+	rte_memcpy(opad, auth_key, auth_keylen);
+
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/*
+	 * State len is a multiple of 8, so may be larger than the digest.
+	 * Put the partial hash of opad state_len bytes after state1
+	 */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/*  don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+			uint8_t *cipherkey, uint32_t cipherkeylen,
+			uint8_t *authkey, uint32_t authkeylen,
+			uint32_t add_auth_data_length,
+			uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t proto = ICP_QAT_FW_LA_NO_PROTO; /* no CCM/GCM/Snow3G */
+	uint16_t state1_size = 0;
+	uint16_t state2_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* CD setup */
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(
+			cdesc->qat_mode, cdesc->qat_cipher_alg, key_convert,
+			cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	/* Do precomputes */
+	if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(XCBC)precompute failed");
+			return -EFAULT;
+		}
+	} else if ((cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) ||
+		(cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64)) {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			cipherkey, cipherkeylen, (uint8_t *)(hash->sha.state1 +
+			ICP_QAT_HW_GALOIS_128_STATE1_SZ), &state2_size)) {
+			PMD_DRV_LOG(ERR, "(GCM)precompute failed");
+			return -EFAULT;
+		}
+		/*
+		 * Write (the length of AAD) into bytes 16-19 of state2
+		 * in big-endian format. This field is 8 bytes
+		 */
+		*(uint32_t *)&(hash->sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ]) =
+			rte_bswap32(add_auth_data_length);
+		proto = ICP_QAT_FW_LA_GCM_PROTO;
+	} else {
+		if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+			authkey, authkeylen, (uint8_t *)(hash->sha.state1),
+			&state1_size)) {
+			PMD_DRV_LOG(ERR, "(SHA)precompute failed");
+			return -EFAULT;
+		}
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	/* Configure the common header protocol flags */
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, proto);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+	hash_cd_ctrl->inner_state1_sz = state1_size;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+		hash_cd_ctrl->inner_state2_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE2_SZ;
+		hash_cd_ctrl->inner_state1_sz =
+				ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_GALOIS_H_SZ +
+						ICP_QAT_HW_GALOIS_LEN_A_SZ +
+						ICP_QAT_HW_GALOIS_E_CTR0_SZ;
+		hash_cd_ctrl->inner_state1_sz = ICP_QAT_HW_GALOIS_128_STATE1_SZ;
+		memset(hash->sha.state1, 0, ICP_QAT_HW_GALOIS_128_STATE1_SZ);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8))
+					>> 3);
+	auth_param->auth_res_sz = digestsize;
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+				ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+		PMD_DRV_LOG(ERR, "invalid param, only authenticated "
+				"encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..47b257f
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,561 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+#include <rte_mbuf_offload.h>
+#include <rte_hexdump.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t shift);
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+
+void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
+		void *session)
+{
+	struct qat_session *sess = session;
+
+	PMD_INIT_FUNC_TRACE();
+	if (sess) {
+		phys_addr_t cd_paddr = sess->cd_paddr;
+
+		memset(sess, 0, qat_crypto_sym_get_session_private_size(dev));
+		sess->cd_paddr = cd_paddr;
+	}
+}
+
+static int
+qat_get_cmd_id(const struct rte_crypto_xform *xform)
+{
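+	/*
+	 * Only chained (cipher + auth) xforms are supported: the early
+	 * return below rejects single-element chains before the
+	 * cipher-only and auth-only cases are reached.
+	 */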
+	if (xform->next == NULL)
+		return -1;
+
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_CIPHER; */
+
+	/* Authentication Only */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH && xform->next == NULL)
+		return -1; /* return ICP_QAT_FW_LA_CMD_AUTH; */
+
+	/* Cipher then Authenticate */
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+			xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+
+	/* Authenticate then Cipher */
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+
+	return -1;
+}
+
+static struct rte_crypto_auth_xform *
+qat_get_auth_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_AUTH)
+			return &xform->auth;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+qat_get_cipher_xform(struct rte_crypto_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_XFORM_CIPHER)
+			return &xform->cipher;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	struct qat_session *session = session_private;
+
+	struct rte_crypto_auth_xform *auth_xform = NULL;
+	struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+	int qat_cmd_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get requested QAT command id */
+	qat_cmd_id = qat_get_cmd_id(xform);
+	if (qat_cmd_id < 0 || qat_cmd_id >= ICP_QAT_FW_LA_CMD_DELIMITER) {
+		PMD_DRV_LOG(ERR, "Unsupported xform chain requested");
+		goto error_out;
+	}
+	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+
+	/* Get cipher xform from crypto xform chain */
+	cipher_xform = qat_get_cipher_xform(xform);
+
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+		if (qat_alg_validate_aes_key(cipher_xform->key.length,
+				&session->qat_cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			goto error_out;
+		}
+		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		break;
+	case RTE_CRYPTO_CIPHER_NULL:
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_CCM:
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+				cipher_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Cipher specified %u",
+				cipher_xform->algo);
+		goto error_out;
+	}
+
+	if (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+	/* Get authentication xform from Crypto xform chain */
+	auth_xform = qat_get_auth_xform(xform);
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_GCM:
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		break;
+	case RTE_CRYPTO_AUTH_NULL:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+	case RTE_CRYPTO_AUTH_AES_CCM:
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+	case RTE_CRYPTO_AUTH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				auth_xform->algo);
+		goto error_out;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				auth_xform->algo);
+		goto error_out;
+	}
+
+	if (qat_alg_aead_session_create_content_desc(session,
+		cipher_xform->key.data,
+		cipher_xform->key.length,
+		auth_xform->key.data,
+		auth_xform->key.length,
+		auth_xform->add_auth_data_length,
+		auth_xform->digest_length))
+		goto error_out;
+
+	return (struct rte_cryptodev_session *)session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+unsigned qat_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev __rte_unused)
+{
+	return RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+}
+
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	register struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	register uint32_t nb_pkts_sent = 0;
+	register struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	register int ret;
+	uint16_t nb_pkts_possible = nb_pkts;
+	register uint8_t *base_addr;
+	register uint32_t tail;
+	int overflow;
+
+	/* read params used a lot in main loop into registers */
+	queue = &(tmp_qp->tx_q);
+	base_addr = (uint8_t *)queue->base_addr;
+	tail = queue->tail;
+
+	/* Find how many can actually fit on the ring */
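+	/*
+	 * inflights16 is incremented optimistically; if that pushes it
+	 * past max_inflights the excess is subtracted back and only the
+	 * requests that fit are sent.
+	 */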
+	overflow = (rte_atomic16_add_return(&tmp_qp->inflights16, nb_pkts)
+				- queue->max_inflights);
+	if (overflow > 0) {
+		rte_atomic16_sub(&tmp_qp->inflights16, overflow);
+		nb_pkts_possible = nb_pkts - overflow;
+		if (nb_pkts_possible == 0)
+			return 0;
+	}
+
+	while (nb_pkts_sent != nb_pkts_possible) {
+
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			base_addr + tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			if (nb_pkts_sent == 0)
+				return 0;
+			goto kick_tail;
+		}
+
+		tail = adf_modulo(tail + queue->msg_size, queue->modulo);
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, tail);
+	queue->tail = tail;
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct rte_mbuf_offload *ol;
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
+	resp_msg = (struct icp_qat_fw_comn_resp *)
+			((uint8_t *)queue->base_addr + queue->head);
+
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG &&
+			msg_counter != nb_pkts) {
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		ol = rte_pktmbuf_offload_get(rx_mbuf, RTE_PKTMBUF_OL_CRYPTO);
+
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+					resp_msg->comn_hdr.comn_status)) {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		} else {
+			ol->op.crypto.status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+				queue->msg_size,
+				ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)
+					((uint8_t *)queue->base_addr +
+							queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
+static inline int
+qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	register struct icp_qat_fw_la_bulk_req *qat_req;
+
+	ol = rte_pktmbuf_offload_get(mbuf, RTE_PKTMBUF_OL_CRYPTO);
+	if (unlikely(ol == NULL)) {
+		PMD_DRV_LOG(ERR, "No valid crypto off-load operation attached "
+				"to (%p) mbuf.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
+				" requests; mbuf (%p) is sessionless.", mbuf);
+		return -EINVAL;
+	}
+
+	if (unlikely(ol->op.crypto.session->type != RTE_CRYPTODEV_QAT_PMD)) {
+		PMD_DRV_LOG(ERR, "Session was not created for this device");
+		return -EINVAL;
+	}
+
+	ctx = (struct qat_session *)ol->op.crypto.session->_private;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length =
+			qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr =
+			qat_req->comn_mid.src_data_addr =
+					rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = ol->op.crypto.data.to_cipher.length;
+	cipher_param->cipher_offset = ol->op.crypto.data.to_cipher.offset;
+	if (ol->op.crypto.iv.length &&
+		(ol->op.crypto.iv.length <=
+				sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array,
+				ol->op.crypto.iv.data, ol->op.crypto.iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = ol->op.crypto.iv.phys_addr;
+	}
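+	/*
+	 * A separate digest address implies the digest is written outside
+	 * the source buffer, hence NO_DIGEST_IN_BUFFER below.
+	 */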
+	if (ol->op.crypto.digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+				qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = ol->op.crypto.digest.phys_addr;
+	}
+	auth_param->auth_off = ol->op.crypto.data.to_hash.offset;
+	auth_param->auth_len = ol->op.crypto.data.to_hash.length;
+	auth_param->u1.aad_adr = ol->op.crypto.additional_auth.phys_addr;
+
+	/* (GCM) AAD length (240 max) will be at this location after precompute */
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+		ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
+		auth_param->u2.aad_sz =
+		ALIGN_POW2_ROUNDUP(ctx->cd.hash.sha.state1[
+					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
+					ICP_QAT_HW_GALOIS_H_SZ + 3], 16);
+	}
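+	/* hash_state_sz is given in 8-byte quadwords, hence the >> 3 */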
+	auth_param->hash_state_sz = (auth_param->u2.aad_sz) >> 3;
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+	rte_hexdump(stdout, "qat_req:", qat_req,
+			sizeof(struct icp_qat_fw_la_bulk_req));
+#endif
+	return 0;
+}
+
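+/* Fast modulo for power-of-2 ring sizes: returns data % (1 << shift). */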
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
+
+void qat_crypto_sym_session_init(struct rte_mempool *mp, void *priv_sess)
+{
+	struct qat_session *s = priv_sess;
+
+	PMD_INIT_FUNC_TRACE();
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return 0;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+int qat_dev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = qat_crypto_sym_qp_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void qat_dev_info_get(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *info)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+				ADF_NUM_SYM_QPS_PER_BUNDLE *
+				ADF_NUM_BUNDLES_PER_DEV;
+
+		info->max_nb_sessions = internals->max_nb_sessions;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		if (qp[i] != NULL)
+			memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..d680364
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,124 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev_pmd.h>
+#include <rte_memzone.h>
+
+/*
+ * This macro rounds up a number to be a multiple of
+ * the alignment when the alignment is a power of 2
+ */
+#define ALIGN_POW2_ROUNDUP(num, align) \
+	(((num) + (align) - 1) & ~((align) - 1))
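+/* For example, ALIGN_POW2_ROUNDUP(100, 16) == 112 */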
+
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	modulo;
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;
+	/* HW queue aka ring offset on bundle */
+};
+
+struct qat_qp {
+	void			*mmap_bar_addr;
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+int qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev,
+	uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+
+extern unsigned
+qat_crypto_sym_get_session_private_size(struct rte_cryptodev *dev);
+
+extern void
+qat_crypto_sym_session_init(struct rte_mempool *mempool, void *priv_sess);
+
+extern void *
+qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform, void *session_private);
+
+extern void
+qat_crypto_sym_clear_session(struct rte_cryptodev *dev, void *session);
+
+
+uint16_t
+qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..a909f63
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..ec5852d
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,429 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+#define ADF_SYM_TX_QUEUE_STARTOFF		2
+/* Offset from bundle start to 1st Sym Tx queue */
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static void qat_queue_delete(struct qat_queue *queue);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+			int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
+		}
+
+		PMD_DRV_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
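+	/* Pick a memzone page-size flag matching the hugepage size in use */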
+	switch (ms[0].hugepage_sz) {
+	case RTE_PGSIZE_2M:
+		memzone_flags = RTE_MEMZONE_2MB;
+		break;
+	case RTE_PGSIZE_1G:
+		memzone_flags = RTE_MEMZONE_1GB;
+		break;
+	case RTE_PGSIZE_16M:
+		memzone_flags = RTE_MEMZONE_16MB;
+		break;
+	case RTE_PGSIZE_16G:
+		memzone_flags = RTE_MEMZONE_16GB;
+		break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+		break;
+	}
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(queue_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(queue_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		ret = qat_crypto_sym_qp_release(dev, queue_pair_id);
+		if (ret < 0)
+			return ret;
+	}
+
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	if (dev->pci_dev->mem_resource[0].addr == NULL) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space "
+				"(UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >=
+			(ADF_NUM_SYM_QPS_PER_BUNDLE *
+					ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device",
+				queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp metadata",
+			sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_INIT_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+		queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%hu", queue_pair_id);
+		qat_queue_delete(&(qp->tx_q));
+		goto create_err;
+	}
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	return 0;
+
+create_err:
+	rte_free(qp);
+	return (-EFAULT);
+}
+
+int qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_qp *qp =
+			(struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return 0;
+	}
+
+	/* Don't free memory if there are still responses to be processed */
+	if (rte_atomic16_read(&(qp->inflights16)) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+	rte_free(qp);
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+	return 0;
+}
+
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id / ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id % ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id / ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id % ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number,
+		queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
+static void qat_queue_delete(struct qat_queue *queue)
+{
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		PMD_DRV_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			PMD_DRV_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		PMD_DRV_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc*desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+			socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr,
+			queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					" 0x%"PRIx64"\n",
+					queue->base_phys_addr);
+		return -EFAULT;
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc, &(queue->queue_size))
+			!= 0) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	queue->modulo = ADF_RING_SIZE_MODULO(queue->queue_size);
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u modulo %u",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights,
+				queue->modulo);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid num inflights");
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
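+/*
+ * Find the ADF CSR ring-size encoding whose capacity in bytes exactly
+ * matches msg_size * msg_num.
+ */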
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return (-EINVAL);
+}
+
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+							txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..bbaf1c8
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
\ No newline at end of file
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..e500c1e
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,137 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
+static struct rte_cryptodev_ops crypto_qat_ops = {
+		/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+		/* Crypto related operations */
+		.session_get_size	= qat_crypto_sym_get_session_private_size,
+		.session_configure	= qat_crypto_sym_configure_session,
+		.session_initialize	= qat_crypto_sym_session_init,
+		.session_clear		= qat_crypto_sym_clear_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
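+		/* DH895xCC virtual function */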
+		{
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_drv,
+			struct rte_cryptodev *cryptodev)
+{
+	struct qat_pmd_private *internals;
+
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+	internals = cryptodev->data->dev_private;
+	internals->max_nb_sessions = RTE_QAT_PMD_MAX_NB_SESSIONS;
+
+	/*
+	 * For secondary processes, we don't initialise any further as the
+	 * primary process has already done this work.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 2b8ddce..cfcb064 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -150,6 +150,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v8 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
                                 ` (6 preceding siblings ...)
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
                                 ` (2 subsequent siblings)
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.

This PMD is dependent on Intel's multi-buffer library; see the white paper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors" (ref 1) for details on the library's design and ref 2 to
download the library itself. This initial implementation is limited to
supporting the chained operations of "hash then cipher" or "cipher then
hash" for the following cipher and hash algorithms (a sketch of a
conforming xform chain follows the lists below):

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES_CBC (with 128-bit, 192-bit and 256-bit keys supported)

Authentication algorithms:
  - RTE_CRYPTO_AUTH_SHA1_HMAC
  - RTE_CRYPTO_AUTH_SHA256_HMAC
  - RTE_CRYPTO_AUTH_SHA512_HMAC
  - RTE_CRYPTO_AUTH_AES_XCBC_MAC
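
A minimal sketch of a session xform chain for "cipher then hash"
(AES-128-CBC then HMAC-SHA1), using the xform layout from this patch
set; the key buffers and the rte_crypto.h include path are illustrative
assumptions:

  #include <rte_crypto.h>

  static uint8_t aes_key[16];	/* placeholder AES-128 key */
  static uint8_t hmac_key[20];	/* placeholder HMAC-SHA1 key */

  static struct rte_crypto_xform auth_xform = {
  	.type = RTE_CRYPTO_XFORM_AUTH,
  	.next = NULL,	/* end of the chain */
  	.auth = {
  		.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
  		.key = { .data = hmac_key, .length = sizeof(hmac_key) },
  	},
  };

  static struct rte_crypto_xform cipher_xform = {
  	.type = RTE_CRYPTO_XFORM_CIPHER,
  	.next = &auth_xform,	/* cipher first, then hash */
  	.cipher = {
  		.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
  		.algo = RTE_CRYPTO_CIPHER_AES_CBC,
  		.key = { .data = aes_key, .length = sizeof(aes_key) },
  	},
  };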

Important Note:
Because the multi-buffer library is designed for accelerating IPsec
crypto operations, the digests generated for the HMAC functions are
truncated to the lengths specified by the IPsec RFCs; e.g. RFC 2404,
for using HMAC-SHA-1 with IPsec, specifies that the digest is truncated
from 20 to 12 bytes.

Build instructions:
To build DPDK with the AESNI_MB_PMD the user is required to download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where you extracted and built the
multi-buffer library, and finally CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y must
be set in config/common_linuxapp.
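
For example (assuming the library was unpacked and built under a
hypothetical ~/mb_lib directory):

  export AESNI_MULTI_BUFFER_LIB_PATH=~/mb_lib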

Current status: it does not support crypto operations across chained
mbufs, or cipher-only or hash-only operations.

ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p

ref 2: https://downloadcenter.intel.com/download/22972

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 MAINTAINERS                                        |   3 +
 config/common_bsdapp                               |   7 +
 config/common_linuxapp                             |   7 +
 doc/guides/cryptodevs/aesni_mb.rst                 |  85 +++
 doc/guides/cryptodevs/index.rst                    |   1 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/aesni_mb/Makefile                   |  63 ++
 drivers/crypto/aesni_mb/aesni_mb_ops.h             | 210 +++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         | 669 +++++++++++++++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c     | 298 +++++++++
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 229 +++++++
 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map  |   3 +
 mk/rte.app.mk                                      |   4 +
 13 files changed, 1580 insertions(+)
 create mode 100644 doc/guides/cryptodevs/aesni_mb.rst
 create mode 100644 drivers/crypto/aesni_mb/Makefile
 create mode 100644 drivers/crypto/aesni_mb/aesni_mb_ops.h
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
 create mode 100644 drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
 create mode 100644 drivers/crypto/aesni_mb/rte_pmd_aesni_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index dd8be0f..a51a660 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -305,6 +305,9 @@ Null PMD
 M: Tetsuya Mukawa <mukawa@igel.co.jp>
 F: drivers/net/null/
 
+Crypto AES-NI Multi-Buffer PMD
+M: Declan Doherty <declan.doherty@intel.com>
+F: drivers/crypto/aesni_mb/
 
 Packet processing
 -----------------
diff --git a/config/common_bsdapp b/config/common_bsdapp
index 3302d3f..6f0becb 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -171,6 +171,13 @@ CONFIG_RTE_LIBRTE_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_MAX_QAT_SESSIONS=200
 
+
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_AESNI_MB_DEBUG=n
+
 #
 # Support NIC bypass logic
 #
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 458b014..ca6adc7 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -169,6 +169,13 @@ CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
 #
 CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
 
+#
+# Compile PMD for AESNI backed device
+#
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n
+CONFIG_RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS=8
+CONFIG_RTE_AESNI_MB_PMD_MAX_NB_SESSIONS=2048
+
 #
 # Support NIC bypass logic
 #
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
new file mode 100644
index 0000000..2ff5c41
--- /dev/null
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -0,0 +1,85 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AES-NI Multi Buffer Crypto Poll Mode Driver
+============================================
+
+
+The AESNI MB PMD (**librte_pmd_aesni_mb**) provides poll mode crypto driver
+support utilising Intel's multi-buffer library; see the white paper
+`Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
+<https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-paper.html?wapkw=multi+buffer>`_.
+
+The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+AESNI MB PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES192_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To build DPDK with the AESNI_MB_PMD the user is required to download the multi-
+buffer library from `here <https://downloadcenter.intel.com/download/22972>`_
+and compile it on their system before building DPDK. When building the
+multi-buffer library it is necessary to have the YASM package installed, and
+the YASM path must be overridden when building, as a path is hard coded in
+the Makefile of the release package.
+
+.. code-block:: console
+
+	make YASM=/usr/bin/yasm
+
+The environment variable
+AESNI_MULTI_BUFFER_LIB_PATH must be exported with the path where you extracted
+and built the multi-buffer library, and finally set
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in config/common_linuxapp.
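+
+For example, assuming the library was extracted and built under a
+hypothetical ~/mb_lib directory:
+
+.. code-block:: console
+
+	export AESNI_MULTI_BUFFER_LIB_PATH=~/mb_lib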
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 8ac928c..16a5f4a 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -35,4 +35,5 @@ Crypto Device Drivers
     :maxdepth: 2
     :numbered:
 
+    aesni_mb
     qat
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index f6aecea..d07ee96 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -31,6 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
diff --git a/drivers/crypto/aesni_mb/Makefile b/drivers/crypto/aesni_mb/Makefile
new file mode 100644
index 0000000..3bf83d1
--- /dev/null
+++ b/drivers/crypto/aesni_mb/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_aesni_mb.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_aesni_version.map
+
+# external library include paths
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += rte_aesni_mb_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/aesni_mb/aesni_mb_ops.h b/drivers/crypto/aesni_mb/aesni_mb_ops.h
new file mode 100644
index 0000000..0c119bf
--- /dev/null
+++ b/drivers/crypto/aesni_mb/aesni_mb_ops.h
@@ -0,0 +1,210 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AESNI_MB_OPS_H_
+#define _AESNI_MB_OPS_H_
+
+#ifndef LINUX
+#define LINUX
+#endif
+
+#include <mb_mgr.h>
+#include <aux_funcs.h>
+
+enum aesni_mb_vector_mode {
+	RTE_AESNI_MB_NOT_SUPPORTED = 0,
+	RTE_AESNI_MB_SSE,
+	RTE_AESNI_MB_AVX,
+	RTE_AESNI_MB_AVX2
+};
+
+typedef void (*md5_one_block_t)(void *data, void *digest);
+
+typedef void (*sha1_one_block_t)(void *data, void *digest);
+typedef void (*sha224_one_block_t)(void *data, void *digest);
+typedef void (*sha256_one_block_t)(void *data, void *digest);
+typedef void (*sha384_one_block_t)(void *data, void *digest);
+typedef void (*sha512_one_block_t)(void *data, void *digest);
+
+typedef void (*aes_keyexp_128_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_192_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+typedef void (*aes_keyexp_256_t)
+		(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+typedef void (*aes_xcbc_expand_key_t)
+		(void *key, void *exp_k1, void *k2, void *k3);
+
+/** Multi-buffer library function pointer table */
+struct aesni_mb_ops {
+	struct {
+		init_mb_mgr_t init_mgr;
+		/**< Initialise scheduler  */
+		get_next_job_t get_next;
+		/**< Get next free job structure */
+		submit_job_t submit;
+		/**< Submit job to scheduler */
+		get_completed_job_t get_completed_job;
+		/**< Get completed job */
+		flush_job_t flush_job;
+		/**< flush jobs from manager */
+	} job;
+	/**< multi buffer manager functions */
+
+	struct {
+		struct {
+			md5_one_block_t md5;
+			/**< MD5 one block hash */
+			sha1_one_block_t sha1;
+			/**< SHA1 one block hash */
+			sha224_one_block_t sha224;
+			/**< SHA224 one block hash */
+			sha256_one_block_t sha256;
+			/**< SHA256 one block hash */
+			sha384_one_block_t sha384;
+			/**< SHA384 one block hash */
+			sha512_one_block_t sha512;
+			/**< SHA512 one block hash */
+		} one_block;
+		/**< one block hash functions */
+
+		struct {
+			aes_keyexp_128_t aes128;
+			/**< AES128 key expansions */
+			aes_keyexp_192_t aes192;
+			/**< AES192 key expansions */
+			aes_keyexp_256_t aes256;
+			/**< AES256 key expansions */
+
+			aes_xcbc_expand_key_t aes_xcbc;
+			/**< AES XCBC key expansions */
+		} keyexp;
+		/**< Key expansion functions */
+	} aux;
+	/**< Auxiliary functions */
+};
+
+
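+/*
+ * Per-vector-mode bindings of the multi-buffer library entry points,
+ * indexed by enum aesni_mb_vector_mode.
+ */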
+static const struct aesni_mb_ops job_ops[] = {
+		[RTE_AESNI_MB_NOT_SUPPORTED] = {
+			.job = {
+				NULL
+			},
+			.aux = {
+				.one_block = {
+					NULL
+				},
+				.keyexp = {
+					NULL
+				}
+			}
+		},
+		[RTE_AESNI_MB_SSE] = {
+			.job = {
+				init_mb_mgr_sse,
+				get_next_job_sse,
+				submit_job_sse,
+				get_completed_job_sse,
+				flush_job_sse
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_sse,
+					sha1_one_block_sse,
+					sha224_one_block_sse,
+					sha256_one_block_sse,
+					sha384_one_block_sse,
+					sha512_one_block_sse
+				},
+				.keyexp = {
+					aes_keyexp_128_sse,
+					aes_keyexp_192_sse,
+					aes_keyexp_256_sse,
+					aes_xcbc_expand_key_sse
+				}
+			}
+		},
+		[RTE_AESNI_MB_AVX] = {
+			.job = {
+				init_mb_mgr_avx,
+				get_next_job_avx,
+				submit_job_avx,
+				get_completed_job_avx,
+				flush_job_avx
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx,
+					sha1_one_block_avx,
+					sha224_one_block_avx,
+					sha256_one_block_avx,
+					sha384_one_block_avx,
+					sha512_one_block_avx
+				},
+				.keyexp = {
+					aes_keyexp_128_avx,
+					aes_keyexp_192_avx,
+					aes_keyexp_256_avx,
+					aes_xcbc_expand_key_avx
+				}
+			}
+		},
+		[RTE_AESNI_MB_AVX2] = {
+			.job = {
+				init_mb_mgr_avx2,
+				get_next_job_avx2,
+				submit_job_avx2,
+				get_completed_job_avx2,
+				flush_job_avx2
+			},
+			.aux = {
+				.one_block = {
+					md5_one_block_avx2,
+					sha1_one_block_avx2,
+					sha224_one_block_avx2,
+					sha256_one_block_avx2,
+					sha384_one_block_avx2,
+					sha512_one_block_avx2
+				},
+				.keyexp = {
+					aes_keyexp_128_avx2,
+					aes_keyexp_192_avx2,
+					aes_keyexp_256_avx2,
+					aes_xcbc_expand_key_avx2
+				}
+			}
+		}
+};
+
+
+#endif /* _AESNI_MB_OPS_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
new file mode 100644
index 0000000..d8ccf05
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -0,0 +1,669 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_mbuf_offload.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/**
+ * Global static parameter used to create a unique name for each AES-NI multi
+ * buffer crypto device.
+ */
+static unsigned unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_AESNI_MB_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+typedef void (*hash_one_block_t)(void *data, void *digest);
+typedef void (*aes_keyexp_t)(void *key, void *enc_exp_keys, void *dec_exp_keys);
+
+/**
+ * Calculate the authentication pre-computes
+ *
+ * @param one_block_hash	Function pointer to calculate digest on ipad/opad
+ * @param ipad			Inner pad output byte array
+ * @param opad			Outer pad output byte array
+ * @param hkey			Authentication key
+ * @param hkey_len		Authentication key length
+ * @param blocksize		Block size of selected hash algo
+ */
+static void
+calculate_auth_precomputes(hash_one_block_t one_block_hash,
+		uint8_t *ipad, uint8_t *opad,
+		uint8_t *hkey, uint16_t hkey_len,
+		uint16_t blocksize)
+{
+	unsigned i, length;
+
+	uint8_t ipad_buf[blocksize] __rte_aligned(16);
+	uint8_t opad_buf[blocksize] __rte_aligned(16);
+
+	/* Setup inner and outer pads */
+	memset(ipad_buf, HMAC_IPAD_VALUE, blocksize);
+	memset(opad_buf, HMAC_OPAD_VALUE, blocksize);
+
+	/* XOR hash key with inner and outer pads */
+	length = hkey_len > blocksize ? blocksize : hkey_len;
+
+	for (i = 0; i < length; i++) {
+		ipad_buf[i] ^= hkey[i];
+		opad_buf[i] ^= hkey[i];
+	}
+
+	/* Compute partial hashes */
+	(*one_block_hash)(ipad_buf, ipad);
+	(*one_block_hash)(opad_buf, opad);
+
+	/* Clean up stack */
+	memset(ipad_buf, 0, blocksize);
+	memset(opad_buf, 0, blocksize);
+}
+
+/** Get xform chain order */
+static int
+aesni_mb_get_chain_order(const struct rte_crypto_xform *xform)
+{
+	/*
+	 * Multi-buffer only supports HASH_CIPHER or CIPHER_HASH chained
+	 * operations, all other options are invalid, so we must have exactly
+	 * 2 xform structs chained together
+	 */
+	if (xform->next == NULL || xform->next->next != NULL)
+		return -1;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+		return HASH_CIPHER;
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER &&
+				xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+		return CIPHER_HASH;
+
+	return -1;
+}
+
+/** Set session authentication parameters */
+static int
+aesni_mb_set_session_auth_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	hash_one_block_t hash_oneblock_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_AUTH) {
+		MB_LOG_ERR("Crypto xform struct not of type auth");
+		return -1;
+	}
+
+	/* Set Authentication Parameters */
+	if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC) {
+		sess->auth.algo = AES_XCBC;
+		(*mb_ops->aux.keyexp.aes_xcbc)(xform->auth.key.data,
+				sess->auth.xcbc.k1_expanded,
+				sess->auth.xcbc.k2, sess->auth.xcbc.k3);
+		return 0;
+	}
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		sess->auth.algo = MD5;
+		hash_oneblock_fn = mb_ops->aux.one_block.md5;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		sess->auth.algo = SHA1;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		sess->auth.algo = SHA_224;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha224;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		sess->auth.algo = SHA_256;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		sess->auth.algo = SHA_384;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha384;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		sess->auth.algo = SHA_512;
+		hash_oneblock_fn = mb_ops->aux.one_block.sha512;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported authentication algorithm selection");
+		return -1;
+	}
+
+	/* Calculate Authentication precomputes */
+	calculate_auth_precomputes(hash_oneblock_fn,
+			sess->auth.pads.inner, sess->auth.pads.outer,
+			xform->auth.key.data,
+			xform->auth.key.length,
+			get_auth_algo_blocksize(sess->auth.algo));
+
+	return 0;
+}
+
+/** Set session cipher parameters */
+static int
+aesni_mb_set_session_cipher_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	aes_keyexp_t aes_keyexp_fn;
+
+	if (xform->type != RTE_CRYPTO_XFORM_CIPHER) {
+		MB_LOG_ERR("Crypto xform struct not of type cipher");
+		return -1;
+	}
+
+	/* Select cipher direction */
+	switch (xform->cipher.op) {
+	case RTE_CRYPTO_CIPHER_OP_ENCRYPT:
+		sess->cipher.direction = ENCRYPT;
+		break;
+	case RTE_CRYPTO_CIPHER_OP_DECRYPT:
+		sess->cipher.direction = DECRYPT;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher operation parameter");
+		return -1;
+	}
+
+	/* Select cipher mode */
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		sess->cipher.mode = CBC;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher mode parameter");
+		return -1;
+	}
+
+	/* Check key length and choose key expansion function */
+	switch (xform->cipher.key.length) {
+	case AES_128_BYTES:
+		sess->cipher.key_length_in_bytes = AES_128_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes128;
+		break;
+	case AES_192_BYTES:
+		sess->cipher.key_length_in_bytes = AES_192_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes192;
+		break;
+	case AES_256_BYTES:
+		sess->cipher.key_length_in_bytes = AES_256_BYTES;
+		aes_keyexp_fn = mb_ops->aux.keyexp.aes256;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported cipher key length");
+		return -1;
+	}
+
+	/* Expanded cipher keys */
+	(*aes_keyexp_fn)(xform->cipher.key.data,
+			sess->cipher.expanded_aes_keys.encode,
+			sess->cipher.expanded_aes_keys.decode);
+
+	return 0;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	switch (aesni_mb_get_chain_order(xform)) {
+	case HASH_CIPHER:
+		sess->chain_order = HASH_CIPHER;
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	case CIPHER_HASH:
+		sess->chain_order = CIPHER_HASH;
+		auth_xform = xform->next;
+		cipher_xform = xform;
+		break;
+	default:
+		MB_LOG_ERR("Unsupported operation chain order parameter");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
+		MB_LOG_ERR("Invalid/unsupported authentication parameters");
+		return -1;
+	}
+
+	if (aesni_mb_set_session_cipher_parameters(mb_ops, sess,
+			cipher_xform)) {
+		MB_LOG_ERR("Invalid/unsupported cipher parameters");
+		return -1;
+	}
+	return 0;
+}
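+
+/*
+ * For reference, a minimal sketch (with a hypothetical helper name) of how
+ * an application builds the xform chain consumed above, here for an
+ * AES-128-CBC encrypt / SHA1-HMAC session; key data is supplied by the
+ * caller and all fields not shown keep their zeroed defaults.
+ */
+static inline struct rte_crypto_xform *
+example_build_cipher_hash_chain(struct rte_crypto_xform *cipher_xform,
+		struct rte_crypto_xform *auth_xform,
+		uint8_t *cipher_key, uint8_t *auth_key)
+{
+	/* Cipher xform at the head of the chain gives CIPHER_HASH order */
+	*cipher_xform = (struct rte_crypto_xform) {
+		.type = RTE_CRYPTO_XFORM_CIPHER,
+		.cipher = {
+			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+			.key = { .data = cipher_key, .length = 16 },
+		},
+		.next = auth_xform,
+	};
+
+	*auth_xform = (struct rte_crypto_xform) {
+		.type = RTE_CRYPTO_XFORM_AUTH,
+		.auth = {
+			.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+			.key = { .data = auth_key, .length = 20 },
+		},
+		.next = NULL,
+	};
+
+	return cipher_xform;
+}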
+
+/** Get multi buffer session */
+static struct aesni_mb_session *
+get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *crypto_op)
+{
+	struct aesni_mb_session *sess;
+
+	if (crypto_op->type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(crypto_op->session->type !=
+				RTE_CRYPTODEV_AESNI_MB_PMD))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)crypto_op->session->_private;
+	} else {
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct aesni_mb_session *)c_sess->_private;
+
+		if (unlikely(aesni_mb_set_session_parameters(qp->ops,
+				sess, crypto_op->xform) != 0)) {
+			/* Return the session to the mempool on failure */
+			rte_mempool_put(qp->sess_mp, (void *)c_sess);
+			return NULL;
+		}
+	}
+
+	return sess;
+}
+
+/**
+ * Process a crypto operation and complete a JOB_AES_HMAC job structure for
+ * submission to the multi buffer library for processing.
+ *
+ * @param	qp	queue pair
+ * @param	m	mbuf to process
+ * @param	c_op	crypto operation to process
+ * @param	session	session associated with the crypto operation
+ *
+ * @return
+ * - Completed JOB_AES_HMAC structure pointer on success
+ * - NULL pointer if completion of JOB_AES_HMAC structure isn't possible
+ */
+static JOB_AES_HMAC *
+process_crypto_op(struct aesni_mb_qp *qp, struct rte_mbuf *m,
+		struct rte_crypto_op *c_op, struct aesni_mb_session *session)
+{
+	JOB_AES_HMAC *job;
+
+	job = (*qp->ops->job.get_next)(&qp->mb_mgr);
+	if (unlikely(job == NULL))
+		return job;
+
+	/* Set crypto operation */
+	job->chain_order = session->chain_order;
+
+	/* Set cipher parameters */
+	job->cipher_direction = session->cipher.direction;
+	job->cipher_mode = session->cipher.mode;
+
+	job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes;
+	job->aes_enc_key_expanded = session->cipher.expanded_aes_keys.encode;
+	job->aes_dec_key_expanded = session->cipher.expanded_aes_keys.decode;
+
+
+	/* Set authentication parameters */
+	job->hash_alg = session->auth.algo;
+	if (job->hash_alg == AES_XCBC) {
+		job->_k1_expanded = session->auth.xcbc.k1_expanded;
+		job->_k2 = session->auth.xcbc.k2;
+		job->_k3 = session->auth.xcbc.k3;
+	} else {
+		job->hashed_auth_key_xor_ipad = session->auth.pads.inner;
+		job->hashed_auth_key_xor_opad = session->auth.pads.outer;
+	}
+
+	/* Mutable crypto operation parameters */
+
+	/* Set digest output location */
+	if (job->cipher_direction == DECRYPT) {
+		job->auth_tag_output = (uint8_t *)rte_pktmbuf_append(m,
+				get_digest_byte_length(job->hash_alg));
+
+		if (job->auth_tag_output == NULL)
+			return NULL;
+
+		memset(job->auth_tag_output, 0,
+				get_digest_byte_length(job->hash_alg));
+	} else {
+		job->auth_tag_output = c_op->digest.data;
+	}
+
+	/*
+	 * The multi-buffer library currently only supports returning a
+	 * truncated digest length, as specified in the relevant IPsec RFCs
+	 */
+	job->auth_tag_output_len_in_bytes =
+			get_truncated_digest_byte_length(job->hash_alg);
+
+	/* Set IV parameters */
+	job->iv = c_op->iv.data;
+	job->iv_len_in_bytes = c_op->iv.length;
+
+	/* Data parameters */
+	job->src = rte_pktmbuf_mtod(m, uint8_t *);
+	job->dst = c_op->dst.m ?
+			rte_pktmbuf_mtod(c_op->dst.m, uint8_t *) +
+			c_op->dst.offset :
+			rte_pktmbuf_mtod(m, uint8_t *) +
+			c_op->data.to_cipher.offset;
+
+	job->cipher_start_src_offset_in_bytes = c_op->data.to_cipher.offset;
+	job->msg_len_to_cipher_in_bytes = c_op->data.to_cipher.length;
+
+	job->hash_start_src_offset_in_bytes = c_op->data.to_hash.offset;
+	job->msg_len_to_hash_in_bytes = c_op->data.to_hash.length;
+
+	/* Set user data to be crypto operation data struct */
+	job->user_data = m;
+	job->user_data2 = c_op;
+
+	return job;
+}
+
+/**
+ * Process a completed job and return the rte_mbuf which the job processed
+ *
+ * @param qp	queue pair the job was processed on
+ * @param job	JOB_AES_HMAC job to process
+ *
+ * @return
+ * - Returns the processed mbuf, trimmed of the digest that was appended
+ * for verification against the supplied digest in the case of a
+ * HASH_CIPHER operation
+ * - Returns NULL on invalid job
+ */
+static struct rte_mbuf *
+post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m;
+	struct rte_crypto_op *c_op;
+
+	if (job->user_data == NULL)
+		return NULL;
+
+	/* handle the retrieved job */
+	m = (struct rte_mbuf *)job->user_data;
+	c_op = (struct rte_crypto_op *)job->user_data2;
+
+	/* set status as successful by default */
+	c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	/* check if job has been processed  */
+	if (unlikely(job->status != STS_COMPLETED)) {
+		c_op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		return m;
+	} else if (job->chain_order == HASH_CIPHER) {
+		/* Verify digest if required */
+		if (memcmp(job->auth_tag_output, c_op->digest.data,
+				job->auth_tag_output_len_in_bytes) != 0)
+			c_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+		/* trim area used for digest from mbuf */
+		rte_pktmbuf_trim(m, get_digest_byte_length(job->hash_alg));
+	}
+
+	/* Free session if a session-less crypto op */
+	if (c_op->type == RTE_CRYPTO_OP_SESSIONLESS) {
+		rte_mempool_put(qp->sess_mp, c_op->session);
+		c_op->session = NULL;
+	}
+
+	return m;
+}
+
+/**
+ * Process a completed JOB_AES_HMAC job and keep processing jobs until
+ * get_completed_job returns NULL
+ *
+ * @param qp		Queue Pair to process
+ * @param job		JOB_AES_HMAC job
+ *
+ * @return
+ * - Number of processed jobs
+ */
+static unsigned
+handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+{
+	struct rte_mbuf *m = NULL;
+	unsigned processed_jobs = 0;
+
+	while (job) {
+		processed_jobs++;
+		m = post_process_mb_job(qp, job);
+		if (m)
+			rte_ring_enqueue(qp->processed_pkts, (void *)m);
+		else
+			qp->qp_stats.dequeue_err_count++;
+
+		job = (*qp->ops->job.get_completed_job)(&qp->mb_mgr);
+	}
+
+	return processed_jobs;
+}
+
+static uint16_t
+aesni_mb_pmd_enqueue_burst(void *queue_pair, struct rte_mbuf **bufs,
+		uint16_t nb_bufs)
+{
+	struct rte_mbuf_offload *ol;
+
+	struct aesni_mb_session *sess;
+	struct aesni_mb_qp *qp = queue_pair;
+
+	JOB_AES_HMAC *job = NULL;
+
+	int i, processed_jobs = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+		ol = rte_pktmbuf_offload_get(bufs[i], RTE_PKTMBUF_OL_CRYPTO);
+		if (unlikely(ol == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		sess = get_session(qp, &ol->op.crypto);
+		if (unlikely(sess == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		job = process_crypto_op(qp, bufs[i], &ol->op.crypto, sess);
+		if (unlikely(job == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			goto flush_jobs;
+		}
+
+		/* Submit Job */
+		job = (*qp->ops->job.submit)(&qp->mb_mgr);
+
+		/*
+		 * If submit returns a processed job then handle it,
+		 * before submitting subsequent jobs
+		 */
+		if (job)
+			processed_jobs += handle_completed_jobs(qp, job);
+	}
+
+	if (processed_jobs == 0)
+		goto flush_jobs;
+
+	qp->qp_stats.enqueued_count += processed_jobs;
+	return i;
+
+flush_jobs:
+	/*
+	 * If we haven't processed any jobs in submit loop, then flush jobs
+	 * queue to stop the output stalling
+	 */
+	job = (*qp->ops->job.flush_job)(&qp->mb_mgr);
+	if (job)
+		qp->qp_stats.enqueued_count += handle_completed_jobs(qp, job);
+
+	return i;
+}
+
+static uint16_t
+aesni_mb_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct aesni_mb_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
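+
+/*
+ * A minimal sketch (illustrative, with a hypothetical helper name) of how
+ * an application drives the two burst handlers above through the generic
+ * cryptodev API: enqueue and dequeue are independent calls, so the caller
+ * polls for the processed packet, as the unit tests later in this series
+ * also do. The mbuf is assumed to already have a crypto operation attached.
+ */
+static inline struct rte_mbuf *
+example_process_one(uint8_t dev_id, struct rte_mbuf *m)
+{
+	struct rte_mbuf *out = NULL;
+
+	/* Submit one packet for processing on queue pair 0 ... */
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &m, 1) != 1)
+		return NULL;
+
+	/* ... and poll until the processed packet is available */
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &out, 1) == 0)
+		rte_pause();
+
+	return out;
+}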
+
+
+static int cryptodev_aesni_mb_uninit(const char *name);
+
+static int
+cryptodev_aesni_mb_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct aesni_mb_private *internals;
+	enum aesni_mb_vector_mode vector_mode;
+
+	/* Check CPU for support for AES instruction set */
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) {
+		MB_LOG_ERR("AES instructions not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_MB_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_MB_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_MB_SSE;
+	else {
+		MB_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		MB_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct aesni_mb_private), socket_id);
+	if (dev == NULL) {
+		MB_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	dev->dev_ops = rte_aesni_mb_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
+	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+
+	/* Set vector instructions mode supported */
+	internals = dev->data->dev_private;
+
+	internals->vector_mode = vector_mode;
+	internals->max_nb_queue_pairs = RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS;
+	internals->max_nb_sessions = RTE_AESNI_MB_PMD_MAX_NB_SESSIONS;
+
+	return dev->data->dev_id;
+init_error:
+	MB_LOG_ERR("driver %s: cryptodev_aesni_create failed", name);
+
+	cryptodev_aesni_mb_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_aesni_mb_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_aesni_mb_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_aesni_mb_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing AESNI crypto device %s on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_aesni_mb_pmd_drv = {
+	.name = CRYPTODEV_NAME_AESNI_MB_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_aesni_mb_init,
+	.uninit = cryptodev_aesni_mb_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_aesni_mb_pmd_drv);
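+
+/*
+ * A minimal sketch (illustrative, with a hypothetical helper name) of
+ * creating an instance of this virtual device at runtime, as the unit
+ * tests later in this series do; an instance can also be requested on
+ * the EAL command line.
+ */
+static inline int
+example_create_aesni_mb_vdev(void)
+{
+	/* Returns a negative value if the vdev could not be initialised */
+	return rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+}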
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
new file mode 100644
index 0000000..96d22f6
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -0,0 +1,298 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_aesni_mb_pmd_private.h"
+
+/** Configure device */
+static int
+aesni_mb_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+aesni_mb_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+aesni_mb_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+aesni_mb_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+aesni_mb_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+aesni_mb_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct aesni_mb_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		dev_info->max_nb_sessions = internals->max_nb_sessions;
+	}
+}
+
+/** Release queue pair */
+static int
+aesni_mb_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair based on the dev_id and qp_id */
+static int
+aesni_mb_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct aesni_mb_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"aesni_mb_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed packets on */
+static struct rte_ring *
+aesni_mb_pmd_qp_create_processed_pkts_ring(struct aesni_mb_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			MB_LOG_INFO("Reusing existing ring %s for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		MB_LOG_ERR("Unable to reuse existing ring %s for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct aesni_mb_qp *qp = NULL;
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		aesni_mb_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("AES-NI PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return -ENOMEM;
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (aesni_mb_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->ops = &job_ops[internals->vector_mode];
+
+	qp->processed_pkts = aesni_mb_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	/* Initialise multi-buffer manager */
+	(*qp->ops->job.init_mgr)(&qp->mb_mgr);
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+aesni_mb_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+aesni_mb_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+aesni_mb_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the aesni multi-buffer session structure */
+static unsigned
+aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct aesni_mb_session);
+}
+
+/** Configure an aesni multi-buffer session from a crypto xform chain */
+static void *
+aesni_mb_pmd_session_configure(struct rte_cryptodev *dev,
+		struct rte_crypto_xform *xform,	void *sess)
+{
+	struct aesni_mb_private *internals = dev->data->dev_private;
+
+	if (unlikely(sess == NULL)) {
+		MB_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
+			sess, xform) != 0) {
+		MB_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+aesni_mb_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/*
+	 * Currently just resetting the whole data structure; we need to
+	 * investigate whether a more selective reset of the key material
+	 * would be more performant
+	 */
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
+}
+
+struct rte_cryptodev_ops aesni_mb_pmd_ops = {
+		.dev_configure		= aesni_mb_pmd_config,
+		.dev_start		= aesni_mb_pmd_start,
+		.dev_stop		= aesni_mb_pmd_stop,
+		.dev_close		= aesni_mb_pmd_close,
+
+		.stats_get		= aesni_mb_pmd_stats_get,
+		.stats_reset		= aesni_mb_pmd_stats_reset,
+
+		.dev_infos_get		= aesni_mb_pmd_info_get,
+
+		.queue_pair_setup	= aesni_mb_pmd_qp_setup,
+		.queue_pair_release	= aesni_mb_pmd_qp_release,
+		.queue_pair_start	= aesni_mb_pmd_qp_start,
+		.queue_pair_stop	= aesni_mb_pmd_qp_stop,
+		.queue_pair_count	= aesni_mb_pmd_qp_count,
+
+		.session_get_size	= aesni_mb_pmd_session_get_size,
+		.session_configure	= aesni_mb_pmd_session_configure,
+		.session_clear		= aesni_mb_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
new file mode 100644
index 0000000..2f98609
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -0,0 +1,229 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_AESNI_MB_PMD_PRIVATE_H_
+#define _RTE_AESNI_MB_PMD_PRIVATE_H_
+
+#include "aesni_mb_ops.h"
+
+#define MB_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_AESNI_MB_DEBUG
+#define MB_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+
+#define MB_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_AESNI_MB_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define MB_LOG_INFO(fmt, args...)
+#define MB_LOG_DBG(fmt, args...)
+#endif
+
+#define HMAC_IPAD_VALUE			(0x36)
+#define HMAC_OPAD_VALUE			(0x5C)
+
+static const unsigned auth_blocksize[] = {
+		[MD5]		= 64,
+		[SHA1]		= 64,
+		[SHA_224]	= 64,
+		[SHA_256]	= 64,
+		[SHA_384]	= 128,
+		[SHA_512]	= 128,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the blocksize in bytes for a specified authentication algorithm
+ *
+ * @note This function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_auth_algo_blocksize(JOB_HASH_ALG algo)
+{
+	return auth_blocksize[algo];
+}
+
+static const unsigned auth_truncated_digest_byte_lengths[] = {
+		[MD5]		= 12,
+		[SHA1]		= 12,
+		[SHA_224]	= 14,
+		[SHA_256]	= 16,
+		[SHA_384]	= 24,
+		[SHA_512]	= 32,
+		[AES_XCBC]	= 12,
+};
+
+/**
+ * Get the IPsec specified truncated length in bytes of the HMAC digest for a
+ * specified authentication algorithm
+ *
+ * @note This function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_truncated_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_truncated_digest_byte_lengths[algo];
+}
+
+static const unsigned auth_digest_byte_lengths[] = {
+		[MD5]		= 16,
+		[SHA1]		= 20,
+		[SHA_224]	= 28,
+		[SHA_256]	= 32,
+		[SHA_384]	= 48,
+		[SHA_512]	= 64,
+		[AES_XCBC]	= 16,
+};
+
+/**
+ * Get the output digest size in bytes for a specified authentication algorithm
+ *
+ * @note This function will not return a valid value for an invalid
+ * authentication algorithm
+ */
+static inline unsigned
+get_digest_byte_length(JOB_HASH_ALG algo)
+{
+	return auth_digest_byte_lengths[algo];
+}
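+
+/*
+ * For example, for SHA1-HMAC on the decrypt/verify path the PMD appends
+ * get_digest_byte_length(SHA1) = 20 bytes of space for the computed
+ * digest to the mbuf, but only the truncated
+ * get_truncated_digest_byte_length(SHA1) = 12 bytes are compared against
+ * the supplied digest, per the IPsec truncation (RFC 2404).
+ */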
+
+
+/** private data structure for each virtual AESNI device */
+struct aesni_mb_private {
+	enum aesni_mb_vector_mode vector_mode;
+	/**< CPU vector instruction set mode */
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+/** AESNI Multi buffer queue pair */
+struct aesni_mb_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	const struct aesni_mb_ops *ops;
+	/**< Vector mode dependent pointer table of the multi-buffer APIs */
+	MB_MGR mb_mgr;
+	/**< Multi-buffer instance */
+	struct rte_ring *processed_pkts;
+	/**< Ring for placing process packets */
+	struct rte_mempool *sess_mp;
+	/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;
+	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+
+/** AES-NI multi-buffer private session structure */
+struct aesni_mb_session {
+	JOB_CHAIN_ORDER chain_order;
+
+	/** Cipher Parameters */
+	struct {
+		/** Cipher direction - encrypt / decrypt */
+		JOB_CIPHER_DIRECTION direction;
+		/** Cipher mode - CBC / Counter */
+		JOB_CIPHER_MODE mode;
+
+		uint64_t key_length_in_bytes;
+
+		struct {
+			uint32_t encode[60] __rte_aligned(16);
+			/**< encode key */
+			uint32_t decode[60] __rte_aligned(16);
+			/**< decode key */
+		} expanded_aes_keys;
+		/**< Expanded AES keys - Allocating space to
+		 * contain the maximum expanded key size which
+		 * is 240 bytes for 256 bit AES, calculated as:
+		 * ((AES block size (16 bytes)) *
+		 * ((number of rounds (14)) + 1))
+		 */
+	} cipher;
+
+	/** Authentication Parameters */
+	struct {
+		JOB_HASH_ALG algo; /**< Authentication Algorithm */
+		union {
+			struct {
+				uint8_t inner[128] __rte_aligned(16);
+				/**< inner pad */
+				uint8_t outer[128] __rte_aligned(16);
+				/**< outer pad */
+			} pads;
+			/**< HMAC Authentication pads -
+			 * allocating space for the maximum pad
+			 * size supported which is 128 bytes for
+			 * SHA512
+			 */
+
+			struct {
+			    uint32_t k1_expanded[44] __rte_aligned(16);
+			    /**< k1 (expanded key). */
+			    uint8_t k2[16] __rte_aligned(16);
+			    /**< k2. */
+			    uint8_t k3[16] __rte_aligned(16);
+			    /**< k3. */
+			} xcbc;
+			/**< Expanded XCBC authentication keys */
+		};
+	} auth;
+} __rte_cache_aligned;
+
+
+/**
+ * Parse a crypto xform chain and set the private session parameters
+ */
+extern int
+aesni_mb_set_session_parameters(const struct aesni_mb_ops *mb_ops,
+		struct aesni_mb_session *sess,
+		const struct rte_crypto_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops;
+
+
+
+#endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
new file mode 100644
index 0000000..ad607bb
--- /dev/null
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_version.map
@@ -0,0 +1,3 @@
+DPDK_2.2 {
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index cfcb064..4a660e6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -153,6 +153,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 # QAT PMD has a dependency on libcrypto (from openssl) for calculating HMAC precomputes
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 
+# AESNI MULTI BUFFER is dependent on the IPSec_MB library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v8 09/10] app/test: add cryptodev unit and performance tests
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
                                 ` (7 preceding siblings ...)
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 10/10] l2fwd-crypto: crypto Declan Doherty
  2015-11-25 17:44               ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Thomas Monjalon
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

Unit tests are run by using cryptodev_qat_autotest or
cryptodev_aesni_autotest from the test app's interactive console.

Performance tests are run by using the cryptodev_qat_perftest or
cryptodev_aesni_mb_perftest command from the test app's interactive
console.

If you wish to run the tests on a QAT device, there must be one
bound to the igb_uio kernel driver.
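
For example, from the test application's interactive console (the EAL
options, binary path and console prompt shown here are illustrative and
depend on your build and setup):

  ./app/test -c 0xf -n 4
  RTE>>cryptodev_aesni_autotest
  RTE>>cryptodev_aesni_mb_perftest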

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>

Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 MAINTAINERS                          |    2 +
 app/test/Makefile                    |    4 +
 app/test/test.c                      |   92 +-
 app/test/test.h                      |   34 +-
 app/test/test_cryptodev.c            | 1986 ++++++++++++++++++++++++++++++++
 app/test/test_cryptodev.h            |   68 ++
 app/test/test_cryptodev_perf.c       | 2062 ++++++++++++++++++++++++++++++++++
 app/test/test_link_bonding.c         |    6 +-
 app/test/test_link_bonding_mode4.c   |    7 +-
 app/test/test_link_bonding_rssconf.c |    7 +-
 10 files changed, 4219 insertions(+), 49 deletions(-)
 create mode 100644 app/test/test_cryptodev.c
 create mode 100644 app/test/test_cryptodev.h
 create mode 100644 app/test/test_cryptodev_perf.c

diff --git a/MAINTAINERS b/MAINTAINERS
index a51a660..74aa169 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -204,6 +204,8 @@ Crypto API
 M: Declan Doherty <declan.doherty@intel.com>
 F: lib/librte_cryptodev
 F: docs/guides/cryptodevs
+F: app/test/test_cryptodev.c
+F: app/test/test_cryptodev_perf.c
 
 Drivers
 -------
diff --git a/app/test/Makefile b/app/test/Makefile
index de63235..ec33e1a 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -149,6 +149,10 @@ endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 CFLAGS += -O3
diff --git a/app/test/test.c b/app/test/test.c
index b94199a..f35b304 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -159,51 +159,81 @@ main(int argc, char **argv)
 int
 unit_test_suite_runner(struct unit_test_suite *suite)
 {
-	int retval, i = 0;
+	int test_success;
+	unsigned total = 0, executed = 0, skipped = 0, succeeded = 0, failed = 0;
 
 	if (suite->suite_name)
-		printf("Test Suite : %s\n", suite->suite_name);
+		printf(" + ------------------------------------------------------- +\n");
+		printf(" + Test Suite : %s\n", suite->suite_name);
 
 	if (suite->setup)
 		if (suite->setup() != 0)
-			return -1;
-
-	while (suite->unit_test_cases[i].testcase) {
-		/* Run test case setup */
-		if (suite->unit_test_cases[i].setup) {
-			retval = suite->unit_test_cases[i].setup();
-			if (retval != 0)
-				return retval;
-		}
+			goto suite_summary;
 
-		/* Run test case */
-		if (suite->unit_test_cases[i].testcase() == 0) {
-			printf("TestCase %2d: %s\n", i,
-					suite->unit_test_cases[i].success_msg ?
-					suite->unit_test_cases[i].success_msg :
-					"passed");
-		}
-		else {
-			printf("TestCase %2d: %s\n", i, suite->unit_test_cases[i].fail_msg ?
-					suite->unit_test_cases[i].fail_msg :
-					"failed");
-			return -1;
+	printf(" + ------------------------------------------------------- +\n");
+
+	while (suite->unit_test_cases[total].testcase) {
+		if (!suite->unit_test_cases[total].enabled) {
+			skipped++;
+			total++;
+			continue;
+		} else {
+			executed++;
 		}
 
-		/* Run test case teardown */
-		if (suite->unit_test_cases[i].teardown) {
-			retval = suite->unit_test_cases[i].teardown();
-			if (retval != 0)
-				return retval;
+		/* run test case setup */
+		if (suite->unit_test_cases[total].setup)
+			test_success = suite->unit_test_cases[total].setup();
+		else
+			test_success = TEST_SUCCESS;
+
+		if (test_success == TEST_SUCCESS) {
+			/* run the test case */
+			test_success = suite->unit_test_cases[total].testcase();
+			if (test_success == TEST_SUCCESS)
+				succeeded++;
+			else
+				failed++;
+		} else {
+			failed++;
 		}
 
-		i++;
+		/* run the test case teardown */
+		if (suite->unit_test_cases[total].teardown)
+			suite->unit_test_cases[total].teardown();
+
+		if (test_success == TEST_SUCCESS)
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].success_msg ?
+					suite->unit_test_cases[total].success_msg :
+					"passed");
+		else
+			printf(" + TestCase [%2d] : %s\n", total,
+					suite->unit_test_cases[total].fail_msg ?
+					suite->unit_test_cases[total].fail_msg :
+					"failed");
+
+		total++;
 	}
 
 	/* Run test suite teardown */
 	if (suite->teardown)
-		if (suite->teardown() != 0)
-			return -1;
+		suite->teardown();
+
+	goto suite_summary;
+
+suite_summary:
+	printf(" + ------------------------------------------------------- +\n");
+	printf(" + Test Suite Summary \n");
+	printf(" + Tests Total :       %2d\n", total);
+	printf(" + Tests Skipped :     %2d\n", skipped);
+	printf(" + Tests Executed :    %2d\n", executed);
+	printf(" + Tests Passed :      %2d\n", succeeded);
+	printf(" + Tests Failed :      %2d\n", failed);
+	printf(" + ------------------------------------------------------- +\n");
+
+	if (failed)
+		return -1;
 
 	return 0;
 }
diff --git a/app/test/test.h b/app/test/test.h
index 62eb51d..a2fba60 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -33,7 +33,7 @@
 
 #ifndef _TEST_H_
 #define _TEST_H_
-
+#include <stddef.h>
 #include <sys/queue.h>
 
 #define TEST_SUCCESS  (0)
@@ -64,6 +64,17 @@
 		}                                                        \
 } while (0)
 
+
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len,  msg, ...) do {	\
+	if (memcmp(a, b, len)) {                                        \
+		printf("TestCase %s() line %d failed: "              \
+			msg "\n", __func__, __LINE__, ##__VA_ARGS__);    \
+		TEST_TRACE_FAILURE(__FILE__, __LINE__, __func__);    \
+		return TEST_FAILED;                                  \
+	}                                                        \
+} while (0)
+
+
 #define TEST_ASSERT_NOT_EQUAL(a, b, msg, ...) do {               \
 		if (!(a != b)) {                                         \
 			printf("TestCase %s() line %d failed: "              \
@@ -113,27 +124,36 @@
 
 struct unit_test_case {
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	int (*testcase)(void);
 	const char *success_msg;
 	const char *fail_msg;
+	unsigned enabled;
 };
 
-#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed"}
+#define TEST_CASE(fn) { NULL, NULL, fn, #fn " succeeded", #fn " failed", 1 }
 
 #define TEST_CASE_NAMED(name, fn) { NULL, NULL, fn, name " succeeded", \
-		name " failed"}
+		name " failed", 1 }
 
 #define TEST_CASE_ST(setup, teardown, testcase)         \
 		{ setup, teardown, testcase, #testcase " succeeded",    \
-		#testcase " failed "}
+		#testcase " failed ", 1 }
+
+
+#define TEST_CASE_DISABLED(fn) { NULL, NULL, fn, #fn " succeeded", \
+	#fn " failed", 0 }
+
+#define TEST_CASE_ST_DISABLED(setup, teardown, testcase)         \
+		{ setup, teardown, testcase, #testcase " succeeded",    \
+		#testcase " failed ", 0 }
 
-#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL }
+#define TEST_CASES_END() { NULL, NULL, NULL, NULL, NULL, 0 }
 
 struct unit_test_suite {
 	const char *suite_name;
 	int (*setup)(void);
-	int (*teardown)(void);
+	void (*teardown)(void);
 	struct unit_test_case unit_test_cases[];
 };
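+
+/*
+ * Example (names are illustrative): a suite is a setup/teardown pair plus
+ * a NULL-terminated list of cases, and disabled cases are skipped but
+ * still counted by the runner, e.g.
+ *
+ *	static struct unit_test_suite example_testsuite = {
+ *		.suite_name = "Example Unit Test Suite",
+ *		.setup = example_suite_setup,
+ *		.teardown = example_suite_teardown,
+ *		.unit_test_cases = {
+ *			TEST_CASE_ST(ut_setup, ut_teardown, test_example),
+ *			TEST_CASE_DISABLED(test_not_yet_enabled),
+ *			TEST_CASES_END()	<- must be the last entry
+ *		}
+ *	};
+ *
+ * which is then executed with unit_test_suite_runner(&example_testsuite).
+ */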
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
new file mode 100644
index 0000000..fd5b7ec
--- /dev/null
+++ b/app/test/test_cryptodev.c
@@ -0,0 +1,1986 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mbuf_offload.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+static enum rte_cryptodev_type gbl_cryptodev_type;
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_ol_pool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_mbuf_offload *ol;
+	struct rte_crypto_op *op;
+
+	struct rte_mbuf *obuf, *ibuf;
+
+	uint8_t *digest;
+};
+
+/*
+ * Forward declarations.
+ */
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_param);
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst;
+
+		memset(m->buf_addr, 0, m->buf_len);
+
+		dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+
+	return m;
+}
+
+#if HEX_DUMP
+static void
+hexdump_mbuf_data(FILE *f, const char *title, struct rte_mbuf *m)
+{
+	rte_hexdump(f, title, rte_pktmbuf_mtod(m, const void *), m->data_len);
+}
+#endif
+
+static struct rte_mbuf *
+process_crypto_request(uint8_t dev_id, struct rte_mbuf *ibuf)
+{
+	struct rte_mbuf *obuf = NULL;
+#if HEX_DUMP
+	hexdump_mbuf_data(stdout, "Enqueued Packet", ibuf);
+#endif
+
+	if (rte_cryptodev_enqueue_burst(dev_id, 0, &ibuf, 1) != 1) {
+		printf("Error sending packet for encryption");
+		return NULL;
+	}
+	while (rte_cryptodev_dequeue_burst(dev_id, 0, &obuf, 1) == 0)
+		rte_pause();
+
+#if HEX_DUMP
+	if (obuf)
+		hexdump_mbuf_data(stdout, "Dequeued Packet", obuf);
+#endif
+
+	return obuf;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, dev_id = 0;
+	uint16_t qp_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	if (ts_params->mbuf_pool == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+				"CRYPTO_MBUFPOOL",
+				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+				rte_socket_id());
+		if (ts_params->mbuf_pool == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"MBUF_OFFLOAD_POOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS * sizeof(struct rte_crypto_xform),
+			rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(
+				RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int ret = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(ret >= 0,
+					"Failed to create instance %u of"
+					" pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?");
+		return TEST_FAILED;
+	}
+
+	/* Create list of valid crypto devs */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_type)
+			ts_params->valid_devs[ts_params->valid_dev_count++] = i;
+	}
+
+	if (ts_params->valid_dev_count < 1)
+		return TEST_FAILED;
+
+	/* Set up all the qps on the first of the valid devices found */
+	for (i = 0; i < 1; i++) {
+		dev_id = ts_params->valid_devs[i];
+
+		rte_cryptodev_info_get(dev_id, &info);
+
+		/*
+		 * Since we can't free and re-allocate queue memory, always
+		 * set the queues on this device up to the max size first, so
+		 * that enough memory is allocated for any later re-configures
+		 * needed by other tests
+		 */
+
+		ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+		ts_params->conf.socket_id = SOCKET_ID_ANY;
+		ts_params->conf.session_mp.nb_objs = info.max_nb_sessions;
+
+		TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+				&ts_params->conf),
+				"Failed to configure cryptodev %u with %u qps",
+				dev_id, ts_params->conf.nb_queue_pairs);
+
+		ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+		for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+					dev_id, qp_id, &ts_params->qp_conf,
+					rte_cryptodev_socket_id(dev_id)),
+					"Failed to setup queue pair %u on "
+					"cryptodev %u",
+					qp_id, dev_id);
+		}
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_pool));
+	}
+
+
+	if (ts_params->mbuf_ol_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_count(ts_params->mbuf_ol_pool));
+	}
+
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	uint16_t qp_id;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+	ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	/*
+	 * Now reconfigure queues to size we actually want to use in this
+	 * test suite.
+	 */
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs ; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0], qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+	}
+
+
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	/* free crypto session structure */
+	if (ut_params->sess) {
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				ut_params->sess);
+		ut_params->sess = NULL;
+	}
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
+	/*
+	 * free mbuf - both obuf and ibuf are usually the same,
+	 * but rte copes even if we call free twice
+	 */
+	if (ut_params->obuf) {
+		rte_pktmbuf_free(ut_params->obuf);
+		ut_params->obuf = 0;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = 0;
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+				rte_mempool_count(ts_params->mbuf_pool));
+
+	rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+static int
+test_device_configure_invalid_dev_id(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint16_t dev_id, num_devs = 0;
+
+	TEST_ASSERT((num_devs = rte_cryptodev_count()) >= 1,
+			"Need at least %d devices for test", 1);
+
+	/* valid dev_id values */
+	dev_id = ts_params->valid_devs[ts_params->valid_dev_count - 1];
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(dev_id);
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	/* invalid dev_id values */
+	dev_id = num_devs;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure: "
+			"invalid dev_num %u", dev_id);
+
+	dev_id = 0xff;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+			"Failed test for rte_cryptodev_configure:"
+			"invalid dev_num %u", dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_device_configure_invalid_queue_pair_ids(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+	/* valid - one queue pair */
+	ts_params->conf.nb_queue_pairs = 1;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* valid - max value queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed to configure cryptodev: dev_id %u, qp_id %u",
+			ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - zero queue pairs */
+	ts_params->conf.nb_queue_pairs = 0;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value supported by the nb_queue_pairs field */
+	ts_params->conf.nb_queue_pairs = UINT16_MAX;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+
+	/* invalid - max value + 1 queue pairs */
+	ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
+
+	TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf),
+			"Failed test for rte_cryptodev_configure, dev_id %u,"
+			" invalid qps: %u",
+			ts_params->valid_devs[0],
+			ts_params->conf.nb_queue_pairs);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_queue_pair_descriptor_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info dev_info;
+	struct rte_cryptodev_qp_conf qp_conf = {
+		.nb_descriptors = MAX_NUM_OPS_INFLIGHT
+	};
+
+	uint16_t qp_id;
+
+	/* Stop the device in case it's started so it can be configured */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+
+	ts_params->conf.session_mp.nb_objs = dev_info.max_nb_sessions;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
+			&ts_params->conf), "Failed to configure cryptodev %u",
+			ts_params->valid_devs[0]);
+
+
+	/*
+	 * Test various ring sizes on this device. memzones can't be
+	 * freed so are re-used if ring is released and re-created.
+	 */
+	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights "
+				"%u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = (uint32_t)(MAX_NUM_OPS_INFLIGHT / 2);
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT; /* valid */
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for "
+				"rte_cryptodev_queue_pair_setup: num_inflights"
+				" %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max supported + 2 */
+	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT + 2;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - max value of parameter */
+	qp_conf.nb_descriptors = UINT32_MAX-1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Failed test for"
+				" rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* invalid number of descriptors - default + 1 is not a supported size */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT + 1;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+				ts_params->valid_devs[0], qp_id, &qp_conf,
+				rte_cryptodev_socket_id(
+						ts_params->valid_devs[0])),
+				"Unexpectedly passed test for "
+				"rte_cryptodev_queue_pair_setup:"
+				"num_inflights %u on qp %u on cryptodev %u",
+				qp_conf.nb_descriptors, qp_id,
+				ts_params->valid_devs[0]);
+	}
+
+	/* test invalid queue pair id */
+	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;	/* valid */
+
+	qp_id = DEFAULT_NUM_QPS_PER_QAT_DEVICE;		/* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	qp_id = 0xffff; /* invalid */
+
+	TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
+			ts_params->valid_devs[0],
+			qp_id, &qp_conf,
+			rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+			"Failed test for rte_cryptodev_queue_pair_setup:"
+			"invalid qp %u on cryptodev %u",
+			qp_id, ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+/* ***** Plaintext data for tests ***** */
+
+const char catch_22_quote_1[] =
+		"There was only one catch and that was Catch-22, which "
+		"specified that a concern for one's safety in the face of "
+		"dangers that were real and immediate was the process of a "
+		"rational mind. Orr was crazy and could be grounded. All he "
+		"had to do was ask; and as soon as he did, he would no longer "
+		"be crazy and would have to fly more missions. Orr would be "
+		"crazy to fly more missions and sane if he didn't, but if he "
+		"was sane he had to fly them. If he flew them he was crazy "
+		"and didn't have to; but if he didn't want to he was sane and "
+		"had to. Yossarian was moved very deeply by the absolute "
+		"simplicity of this clause of Catch-22 and let out a "
+		"respectful whistle. \"That's some catch, that Catch-22\", he "
+		"observed. \"It's the best there is,\" Doc Daneeka agreed.";
+
+const char catch_22_quote[] =
+		"What a lousy earth! He wondered how many people were "
+		"destitute that same night even in his own prosperous country, "
+		"how many homes were shanties, how many husbands were drunk "
+		"and wives socked, and how many children were bullied, abused, "
+		"or abandoned. How many families hungered for food they could "
+		"not afford to buy? How many hearts were broken? How many "
+		"suicides would take place that same night, how many people "
+		"would go insane? How many cockroaches and landlords would "
+		"triumph? How many winners were losers, successes failures, "
+		"and rich men poor men? How many wise guys were stupid? How "
+		"many happy endings were unhappy endings? How many honest men "
+		"were liars, brave men cowards, loyal men traitors, how many "
+		"sainted men were corrupt, how many people in positions of "
+		"trust had sold their souls to bodyguards, how many had never "
+		"had souls? How many straight-and-narrow paths were crooked "
+		"paths? How many best families were worst families and how "
+		"many good people were bad people? When you added them all up "
+		"and then subtracted, you might be left with only the children, "
+		"and perhaps with Albert Einstein and an old violinist or "
+		"sculptor somewhere.";
+
+#define QUOTE_480_BYTES		(480)
+#define QUOTE_512_BYTES		(512)
+#define QUOTE_768_BYTES		(768)
+#define QUOTE_1024_BYTES	(1024)
+
+/* ***** SHA1 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA1	(DIGEST_BYTE_LENGTH_SHA1)
+
+static uint8_t hmac_sha1_key[] = {
+	0xF8, 0x2A, 0xC7, 0x54, 0xDB, 0x96, 0x18, 0xAA,
+	0xC3, 0xA1, 0x53, 0xF6, 0x1F, 0x17, 0x60, 0xBD,
+	0xDE, 0xF4, 0xDE, 0xAD };
+
+/* ***** SHA224 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA224	(DIGEST_BYTE_LENGTH_SHA224)
+
+
+/* ***** AES-CBC Cipher Tests ***** */
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+static uint8_t aes_cbc_key[] = {
+	0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+	0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A };
+
+static uint8_t aes_cbc_iv[] = {
+	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
+
+
+/* ***** AES-CBC / HMAC-SHA1 Hash Tests ***** */
+
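+/*
+ * Reference ciphertext: the 512-byte catch_22_quote encrypted with
+ * AES-128-CBC using aes_cbc_key and aes_cbc_iv above. The encrypt tests
+ * compare their output against this array; the decrypt tests use it as
+ * input and expect catch_22_quote back.
+ */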
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_ciphertext[] = {
+	0x8B, 0x4D, 0xDA, 0x1B, 0xCF, 0x04, 0xA0, 0x31,
+	0xB4, 0xBF, 0xBD, 0x68, 0x43, 0x20, 0x7E, 0x76,
+	0xB1, 0x96, 0x8B, 0xA2, 0x7C, 0xA2, 0x83, 0x9E,
+	0x39, 0x5A, 0x2F, 0x7E, 0x92, 0xB4, 0x48, 0x1A,
+	0x3F, 0x6B, 0x5D, 0xDF, 0x52, 0x85, 0x5F, 0x8E,
+	0x42, 0x3C, 0xFB, 0xE9, 0x1A, 0x24, 0xD6, 0x08,
+	0xDD, 0xFD, 0x16, 0xFB, 0xE9, 0x55, 0xEF, 0xF0,
+	0xA0, 0x8D, 0x13, 0xAB, 0x81, 0xC6, 0x90, 0x01,
+	0xB5, 0x18, 0x84, 0xB3, 0xF6, 0xE6, 0x11, 0x57,
+	0xD6, 0x71, 0xC6, 0x3C, 0x3F, 0x2F, 0x33, 0xEE,
+	0x24, 0x42, 0x6E, 0xAC, 0x0B, 0xCA, 0xEC, 0xF9,
+	0x84, 0xF8, 0x22, 0xAA, 0x60, 0xF0, 0x32, 0xA9,
+	0x75, 0x75, 0x3B, 0xCB, 0x70, 0x21, 0x0A, 0x8D,
+	0x0F, 0xE0, 0xC4, 0x78, 0x2B, 0xF8, 0x97, 0xE3,
+	0xE4, 0x26, 0x4B, 0x29, 0xDA, 0x88, 0xCD, 0x46,
+	0xEC, 0xAA, 0xF9, 0x7F, 0xF1, 0x15, 0xEA, 0xC3,
+	0x87, 0xE6, 0x31, 0xF2, 0xCF, 0xDE, 0x4D, 0x80,
+	0x70, 0x91, 0x7E, 0x0C, 0xF7, 0x26, 0x3A, 0x92,
+	0x4F, 0x18, 0x83, 0xC0, 0x8F, 0x59, 0x01, 0xA5,
+	0x88, 0xD1, 0xDB, 0x26, 0x71, 0x27, 0x16, 0xF5,
+	0xEE, 0x10, 0x82, 0xAC, 0x68, 0x26, 0x9B, 0xE2,
+	0x6D, 0xD8, 0x9A, 0x80, 0xDF, 0x04, 0x31, 0xD5,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x73, 0x02, 0x42, 0xC9, 0x23, 0x18, 0x8E, 0xB4,
+	0x6F, 0xB4, 0xA3, 0x54, 0x6E, 0x88, 0x3B, 0x62,
+	0x7C, 0x02, 0x8D, 0x4C, 0x9F, 0xC8, 0x45, 0xF4,
+	0xC9, 0xDE, 0x4F, 0xEB, 0x22, 0x83, 0x1B, 0xE4,
+	0x49, 0x37, 0xE4, 0xAD, 0xE7, 0xCD, 0x21, 0x54,
+	0xBC, 0x1C, 0xC2, 0x04, 0x97, 0xB4, 0x10, 0x61,
+	0xF0, 0xE4, 0xEF, 0x27, 0x63, 0x3A, 0xDA, 0x91,
+	0x41, 0x25, 0x62, 0x1C, 0x5C, 0xB6, 0x38, 0x4A,
+	0x88, 0x71, 0x59, 0x5A, 0x8D, 0xA0, 0x09, 0xAF,
+	0x72, 0x94, 0xD7, 0x79, 0x5C, 0x60, 0x7C, 0x8F,
+	0x4C, 0xF5, 0xD9, 0xA1, 0x39, 0x6D, 0x81, 0x28,
+	0xEF, 0x13, 0x28, 0xDF, 0xF5, 0x3E, 0xF7, 0x8E,
+	0x09, 0x9C, 0x78, 0x18, 0x79, 0xB8, 0x68, 0xD7,
+	0xA8, 0x29, 0x62, 0xAD, 0xDE, 0xE1, 0x61, 0x76,
+	0x1B, 0x05, 0x16, 0xCD, 0xBF, 0x02, 0x8E, 0xA6,
+	0x43, 0x6E, 0x92, 0x55, 0x4F, 0x60, 0x9C, 0x03,
+	0xB8, 0x4F, 0xA3, 0x02, 0xAC, 0xA8, 0xA7, 0x0C,
+	0x1E, 0xB5, 0x6B, 0xF8, 0xC8, 0x4D, 0xDE, 0xD2,
+	0xB0, 0x29, 0x6E, 0x40, 0xE6, 0xD6, 0xC9, 0xE6,
+	0xB9, 0x0F, 0xB6, 0x63, 0xF5, 0xAA, 0x2B, 0x96,
+	0xA7, 0x16, 0xAC, 0x4E, 0x0A, 0x33, 0x1C, 0xA6,
+	0xE6, 0xBD, 0x8A, 0xCF, 0x40, 0xA9, 0xB2, 0xFA,
+	0x63, 0x27, 0xFD, 0x9B, 0xD9, 0xFC, 0xD5, 0x87,
+	0x8D, 0x4C, 0xB6, 0xA4, 0xCB, 0xE7, 0x74, 0x55,
+	0xF4, 0xFB, 0x41, 0x25, 0xB5, 0x4B, 0x0A, 0x1B,
+	0xB1, 0xD6, 0xB7, 0xD9, 0x47, 0x2A, 0xC3, 0x98,
+	0x6A, 0xC4, 0x03, 0x73, 0x1F, 0x93, 0x6E, 0x53,
+	0x19, 0x25, 0x64, 0x15, 0x83, 0xF9, 0x73, 0x2A,
+	0x74, 0xB4, 0x93, 0x69, 0xC4, 0x72, 0xFC, 0x26,
+	0xA2, 0x9F, 0x43, 0x45, 0xDD, 0xB9, 0xEF, 0x36,
+	0xC8, 0x3A, 0xCD, 0x99, 0x9B, 0x54, 0x1A, 0x36,
+	0xC1, 0x59, 0xF8, 0x98, 0xA8, 0xCC, 0x28, 0x0D,
+	0x73, 0x4C, 0xEE, 0x98, 0xCB, 0x7C, 0x58, 0x7E,
+	0x20, 0x75, 0x1E, 0xB7, 0xC9, 0xF8, 0xF2, 0x0E,
+	0x63, 0x9E, 0x05, 0x78, 0x1A, 0xB6, 0xA8, 0x7A,
+	0xF9, 0x98, 0x6A, 0xA6, 0x46, 0x84, 0x2E, 0xF6,
+	0x4B, 0xDC, 0x9B, 0x8F, 0x9B, 0x8F, 0xEE, 0xB4,
+	0xAA, 0x3F, 0xEE, 0xC0, 0x37, 0x27, 0x76, 0xC7,
+	0x95, 0xBB, 0x26, 0x74, 0x69, 0x12, 0x7F, 0xF1,
+	0xBB, 0xFF, 0xAE, 0xB5, 0x99, 0x6E, 0xCB, 0x0C
+};
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest[] = {
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60,
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0x18, 0x8c, 0x1d, 0x32 };
+
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
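+	/*
+	 * Prepend space for the IV to the mbuf and copy aes_cbc_iv into it;
+	 * the cipher/hash offsets below are set so that processing starts
+	 * after the IV, on the 512-byte payload itself.
+	 */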
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
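+	/*
+	 * The multi-buffer PMD writes only a truncated digest (12 bytes for
+	 * SHA1), whereas QAT writes the full digest, so compare the length
+	 * appropriate to the device under test.
+	 */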
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+	TEST_ASSERT_NOT_NULL(rte_pktmbuf_offload_alloc_crypto_xforms(
+			ut_params->ol, 2),
+			"failed to allocate space for crypto transforms");
+
+	/* Set crypto operation data parameters */
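+	/*
+	 * Sessionless mode: the two xforms allocated above are chained
+	 * directly on the operation (op->xform and op->xform->next), carrying
+	 * the immutable cipher and auth parameters that a session would
+	 * otherwise hold.
+	 */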
+	ut_params->op->xform->type = RTE_CRYPTO_XFORM_CIPHER;
+
+	/* cipher parameters */
+	ut_params->op->xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->op->xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->op->xform->cipher.key.data = aes_cbc_key;
+	ut_params->op->xform->cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* hash parameters */
+	ut_params->op->xform->next->type = RTE_CRYPTO_XFORM_AUTH;
+
+	ut_params->op->xform->next->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->op->xform->next->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->op->xform->next->auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->op->xform->next->auth.key.data = hmac_sha1_key;
+	ut_params->op->xform->next->auth.digest_length =
+			DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
+					DIGEST_BYTE_LENGTH_SHA1,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA1_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA1);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
+			DIGEST_BYTE_LENGTH_SHA1);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA1;
+	ut_params->auth_xform.auth.key.data = hmac_sha1_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA1;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** AES-CBC / HMAC-SHA256 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+static uint8_t hmac_sha256_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+	0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest[] = {
+	0xc8, 0x57, 0x57, 0x31, 0x03, 0xe0, 0x03, 0x55,
+	0x07, 0xc8, 0x9e, 0x7f, 0x48, 0x9a, 0x61, 0x9a,
+	0x68, 0xee, 0x03, 0x0e, 0x71, 0x75, 0xc7, 0xf4,
+	0x2e, 0x45, 0x26, 0x32, 0x7c, 0x12, 0x15, 0x15 };
+
+static int
+test_AES_CBC_HMAC_SHA256_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA256 :
+					DIGEST_BYTE_LENGTH_SHA256,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_SHA256_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA256);
+	TEST_ASSERT_NOT_NULL(ut_params->digest,	"no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA256_digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-SHA512 Hash Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA512  (DIGEST_BYTE_LENGTH_SHA512)
+
+static uint8_t hmac_sha512_key[] = {
+	0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1,
+	0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA,
+	0x58, 0x34, 0x85, 0x65, 0x1C, 0x42, 0x50, 0x76,
+	0x9a, 0xaf, 0x88, 0x1b, 0xb6, 0x8f, 0xf8, 0x60,
+	0xa2, 0x5a, 0x7f, 0x3f, 0xf4, 0x72, 0x70, 0xf1,
+	0xF5, 0x35, 0x4C, 0x3B, 0xDD, 0x90, 0x65, 0xB0,
+	0x47, 0x3a, 0x75, 0x61, 0x5C, 0xa2, 0x10, 0x76,
+	0x9a, 0xaf, 0x77, 0x5b, 0xb6, 0x7f, 0xf7, 0x60 };
+
+static const uint8_t catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest[] = {
+	0x5D, 0x54, 0x66, 0xC1, 0x6E, 0xBC, 0x04, 0xB8,
+	0x46, 0xB8, 0x08, 0x6E, 0xE0, 0xF0, 0x43, 0x48,
+	0x37, 0x96, 0x9C, 0xC6, 0x9C, 0xC2, 0x1E, 0xE8,
+	0xF2, 0x0C, 0x0B, 0xEF, 0x86, 0xA2, 0xE3, 0x70,
+	0x95, 0xC8, 0xB3, 0x06, 0x47, 0xA9, 0x90, 0xE8,
+	0xA0, 0xC6, 0x72, 0x69, 0x05, 0xC0, 0x0D, 0x0E,
+	0x21, 0x96, 0x65, 0x93, 0x74, 0x43, 0x2A, 0x1D,
+	0x2E, 0xBF, 0xC2, 0xC2, 0xEE, 0xCC, 0x2F, 0x0A };
+
+static int
+test_AES_CBC_HMAC_SHA512_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote,	QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+					TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 :
+					DIGEST_BYTE_LENGTH_SHA512,
+			"Generated digest data not as expected");
+
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params);
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_digest_verify(void)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	TEST_ASSERT(test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+			ut_params) == TEST_SUCCESS,
+			"Failed to create session params");
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	return test_AES_CBC_HMAC_SHA512_decrypt_perform(ut_params->sess,
+			ut_params, ts_params);
+}
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
+		struct crypto_unittest_params *ut_params)
+{
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA512_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha512_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA512;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA512;
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_session *sess,
+		struct crypto_unittest_params *ut_params,
+		struct crypto_testsuite_params *ts_params)
+{
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+/* ***** AES-CBC / HMAC-AES_XCBC Chain Tests ***** */
+
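+/* AES-XCBC-MAC produces a 96-bit (12-byte) digest; see
+ * DIGEST_BYTE_LENGTH_AES_XCBC in test_cryptodev.h.
+ */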
+static uint8_t aes_cbc_hmac_aes_xcbc_key[] = {
+	0x87, 0x61, 0x54, 0x53, 0xC4, 0x6D, 0xDD, 0x51,
+	0xE1, 0x9F, 0x86, 0x64, 0x39, 0x0A, 0xE6, 0x59
+	};
+
+static const uint8_t  catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest[] = {
+	0xE0, 0xAC, 0x9A, 0xC4, 0x22, 0x64, 0x35, 0x89,
+	0x77, 0x1D, 0x8B, 0x75
+	};
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_encrypt_digest(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			catch_22_quote, QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->cipher_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)
+		rte_pktmbuf_prepend(ut_params->ibuf,
+				CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC,
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC + QUOTE_512_BYTES,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC,
+			"Generated digest data not as expected");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Generate test mbuf data and space for digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		(const char *)catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+		QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_HMAC_AES_XCBC_digest,
+			DIGEST_BYTE_LENGTH_AES_XCBC);
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
+	ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC;
+	ut_params->auth_xform.auth.key.length = AES_XCBC_MAC_KEY_SZ;
+	ut_params->auth_xform.auth.key.data = aes_cbc_hmac_aes_xcbc_key;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_AES_XCBC;
+
+	/* Create Crypto session */
+	ut_params->sess = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(ut_params->ibuf,
+			CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->obuf, uint8_t *) +
+			CIPHER_IV_LENGTH_AES_CBC, catch_22_quote,
+			QUOTE_512_BYTES,
+			"Ciphertext data not as expected");
+
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+
+/* ***** Crypto Device Statistics Tests ***** */
+
+static int
+test_stats(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_stats stats;
+	struct rte_cryptodev *dev;
+	cryptodev_stats_get_t temp_pfn;
+
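+	/*
+	 * Exercise the failure paths first: an out-of-range device id must
+	 * return -ENODEV, a NULL stats pointer must be rejected, and a device
+	 * whose stats_get op is temporarily cleared must return -ENOTSUP.
+	 */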
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0] + 600,
+			&stats) == -ENODEV),
+		"rte_cryptodev_stats_get invalid dev failed");
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			NULL) != 0),
+		"rte_cryptodev_stats_get invalid parameter failed");
+	dev = &rte_cryptodevs[ts_params->valid_devs[0]];
+	temp_pfn = dev->dev_ops->stats_get;
+	dev->dev_ops->stats_get = NULL;
+	TEST_ASSERT((rte_cryptodev_stats_get(ts_params->valid_devs[0], &stats)
+			== -ENOTSUP),
+		"rte_cryptodev_stats_get invalid Param failed");
+	dev->dev_ops->stats_get = temp_pfn;
+
+	/* Test expected values */
+	ut_setup();
+	test_AES_CBC_HMAC_SHA1_encrypt_digest();
+	ut_teardown();
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected dequeued stat");
+	TEST_ASSERT((stats.enqueue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueue error stat");
+	TEST_ASSERT((stats.dequeue_err_count == 0),
+		"rte_cryptodev_stats_get returned unexpected dequeue error stat");
+
+	/* resetting stats of an invalid device id should be ignored and must
+	 * not clear the stats of the valid device */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0] + 300);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+		"rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 1),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	/* check that a valid reset clears stats */
+	rte_cryptodev_stats_reset(ts_params->valid_devs[0]);
+	TEST_ASSERT_SUCCESS(rte_cryptodev_stats_get(ts_params->valid_devs[0],
+			&stats),
+					  "rte_cryptodev_stats_get failed");
+	TEST_ASSERT((stats.enqueued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+	TEST_ASSERT((stats.dequeued_count == 0),
+		"rte_cryptodev_stats_get returned unexpected enqueued stat");
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_multi_session(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	struct rte_cryptodev_info dev_info;
+	struct rte_cryptodev_session **sessions;
+
+	uint16_t i;
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+
+	/* one extra slot for the session create that is expected to fail */
+	sessions = rte_malloc(NULL, sizeof(struct rte_cryptodev_session *) *
+			(dev_info.max_nb_sessions + 1), 0);
+	TEST_ASSERT_NOT_NULL(sessions, "Failed to allocate session array");
+
+	/* Create multiple crypto sessions */
+	for (i = 0; i < dev_info.max_nb_sessions; i++) {
+		sessions[i] = rte_cryptodev_session_create(
+				ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+		TEST_ASSERT_NOT_NULL(sessions[i],
+				"Session creation failed at session number %u",
+				i);
+
+		/* Attempt to send a request on each session */
+		TEST_ASSERT_SUCCESS(test_AES_CBC_HMAC_SHA512_decrypt_perform(
+				sessions[i], ut_params, ts_params),
+				"Failed to perform decrypt on request "
+				"number %u.", i);
+	}
+
+	/* Next session create should fail */
+	sessions[i] = rte_cryptodev_session_create(ts_params->valid_devs[0],
+			&ut_params->auth_xform);
+	TEST_ASSERT_NULL(sessions[i],
+			"Session creation succeeded unexpectedly!");
+
+	for (i = 0; i < dev_info.max_nb_sessions; i++)
+		rte_cryptodev_session_free(ts_params->valid_devs[0],
+				sessions[i]);
+
+	rte_free(sessions);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_not_in_place_crypto(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_mbuf *dst_m = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+
+	TEST_ASSERT_NOT_NULL(dst_m, "Failed to allocate destination mbuf");
+
+	test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(ut_params);
+
+	/* Create crypto session */
+
+	ut_params->sess = rte_cryptodev_session_create(
+			ts_params->valid_devs[0], &ut_params->auth_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+
+	/* Generate test mbuf data and digest */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+			(const char *)
+			catch_22_quote_2_512_bytes_AES_CBC_ciphertext,
+			QUOTE_512_BYTES, 0);
+
+	ut_params->digest = (uint8_t *)rte_pktmbuf_append(ut_params->ibuf,
+			DIGEST_BYTE_LENGTH_SHA512);
+	TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+	rte_memcpy(ut_params->digest,
+			catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA512_digest,
+			DIGEST_BYTE_LENGTH_SHA512);
+
+	/* Generate Crypto op data structure */
+	ut_params->ol = rte_pktmbuf_offload_alloc(ts_params->mbuf_ol_pool,
+				RTE_PKTMBUF_OL_CRYPTO);
+	TEST_ASSERT_NOT_NULL(ut_params->ol,
+			"Failed to allocate pktmbuf offload");
+
+	ut_params->op = &ut_params->ol->op.crypto;
+
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(ut_params->op, ut_params->sess);
+
+	ut_params->op->digest.data = ut_params->digest;
+	ut_params->op->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, QUOTE_512_BYTES);
+	ut_params->op->digest.length = DIGEST_BYTE_LENGTH_SHA512;
+
+	ut_params->op->iv.data = (uint8_t *)rte_pktmbuf_prepend(
+			ut_params->ibuf, CIPHER_IV_LENGTH_AES_CBC);
+	ut_params->op->iv.phys_addr = rte_pktmbuf_mtophys_offset(
+			ut_params->ibuf, 0);
+	ut_params->op->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	rte_memcpy(ut_params->op->iv.data, aes_cbc_iv,
+			CIPHER_IV_LENGTH_AES_CBC);
+
+	ut_params->op->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_cipher.length = QUOTE_512_BYTES;
+
+	ut_params->op->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+	ut_params->op->data.to_hash.length = QUOTE_512_BYTES;
+
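+	/*
+	 * Out-of-place operation: point dst.m at a separate mbuf so the PMD
+	 * writes the result there instead of overwriting the source buffer.
+	 */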
+	ut_params->op->dst.m = dst_m;
+	ut_params->op->dst.offset = 0;
+
+	rte_pktmbuf_offload_attach(ut_params->ibuf, ut_params->ol);
+
+	/* Process crypto operation */
+	ut_params->obuf = process_crypto_request(ts_params->valid_devs[0],
+			ut_params->ibuf);
+	TEST_ASSERT_NOT_NULL(ut_params->obuf, "failed to retrieve obuf");
+
+	/* Validate obuf */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(
+			rte_pktmbuf_mtod(ut_params->op->dst.m, char *),
+			catch_22_quote,
+			QUOTE_512_BYTES,
+			"Plaintext data not as expected");
+
+	/* Validate digest verification status */
+	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
+			"Digest verification failed");
+
+	return TEST_SUCCESS;
+}
+
+
+static struct unit_test_suite cryptodev_qat_testsuite = {
+	.suite_name = "Crypto QAT Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_dev_id),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_device_configure_invalid_queue_pair_ids),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_queue_pair_descriptor_setup),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_multi_session),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown, test_stats),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static struct unit_test_suite cryptodev_aesni_mb_testsuite = {
+	.suite_name = "Crypto Device AESNI MB Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA256_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA512_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_encrypt_digest),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_AES_XCBC_decrypt_digest_verify),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_AES_CBC_HMAC_SHA1_encrypt_digest_sessionless),
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_not_in_place_crypto),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_QAT_PMD;
+	return unit_test_suite_runner(&cryptodev_qat_testsuite);
+}
+
+static struct test_command cryptodev_qat_cmd = {
+	.command = "cryptodev_qat_autotest",
+	.callback = test_cryptodev_qat,
+};
+
+static int
+test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_aesni_mb_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_cmd = {
+	.command = "cryptodev_aesni_mb_autotest",
+	.callback = test_cryptodev_aesni_mb,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_qat_cmd);
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_cmd);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
new file mode 100644
index 0000000..034393e
--- /dev/null
+++ b/app/test/test_cryptodev.h
@@ -0,0 +1,68 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef TEST_CRYPTODEV_H_
+#define TEST_CRYPTODEV_H_
+
+#define HEX_DUMP 0
+
+#define FALSE                           0
+#define TRUE                            1
+
+#define MAX_NUM_OPS_INFLIGHT            (4096)
+#define MIN_NUM_OPS_INFLIGHT            (128)
+#define DEFAULT_NUM_OPS_INFLIGHT        (128)
+
+#define MAX_NUM_QPS_PER_QAT_DEVICE      (2)
+#define DEFAULT_NUM_QPS_PER_QAT_DEVICE  (2)
+#define DEFAULT_BURST_SIZE              (64)
+#define DEFAULT_NUM_XFORMS              (2)
+#define NUM_MBUFS                       (8191)
+#define MBUF_CACHE_SIZE                 (250)
+#define MBUF_SIZE   (2048 + DIGEST_BYTE_LENGTH_SHA512 + \
+				sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+
+#define BYTE_LENGTH(x)				((x) / 8)
+/* HASH DIGEST LENGTHS */
+#define DIGEST_BYTE_LENGTH_MD5			(BYTE_LENGTH(128))
+#define DIGEST_BYTE_LENGTH_SHA1			(BYTE_LENGTH(160))
+#define DIGEST_BYTE_LENGTH_SHA224		(BYTE_LENGTH(224))
+#define DIGEST_BYTE_LENGTH_SHA256		(BYTE_LENGTH(256))
+#define DIGEST_BYTE_LENGTH_SHA384		(BYTE_LENGTH(384))
+#define DIGEST_BYTE_LENGTH_SHA512		(BYTE_LENGTH(512))
+#define DIGEST_BYTE_LENGTH_AES_XCBC		(BYTE_LENGTH(96))
+#define AES_XCBC_MAC_KEY_SZ			(16)
+
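+/* Truncated HMAC output lengths, as commonly used by IPsec (e.g.
+ * HMAC-SHA-1-96, RFC 2404, uses a 12-byte tag).
+ */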
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA1		(12)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA256		(16)
+#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512		(32)
+
+#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
new file mode 100644
index 0000000..f0cca8b
--- /dev/null
+++ b/app/test/test_cryptodev_perf.c
@@ -0,0 +1,2062 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_hexdump.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+
+#define PERF_NUM_OPS_INFLIGHT		(128)
+#define DEFAULT_NUM_REQS_TO_SUBMIT	(10000000)
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_mp;
+	struct rte_mempool *mbuf_ol_pool;
+
+	uint16_t nb_queue_pairs;
+
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+	uint8_t dev_id;
+};
+
+
+#define MAX_NUM_OF_OPS_PER_UT	(128)
+
+struct crypto_unittest_params {
+	struct rte_crypto_xform cipher_xform;
+	struct rte_crypto_xform auth_xform;
+
+	struct rte_cryptodev_session *sess;
+
+	struct rte_crypto_op *op;
+	struct rte_mbuf_offload *ol;
+
+	struct rte_mbuf *obuf[MAX_NUM_OF_OPS_PER_UT];
+	struct rte_mbuf *ibuf[MAX_NUM_OF_OPS_PER_UT];
+
+	uint8_t *digest;
+};
+
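+/*
+ * Allocate an mbuf from the given pool and fill it with the supplied
+ * string, truncated down to a multiple of blocksize (a blocksize of 0
+ * copies all len bytes).
+ */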
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+
+		rte_memcpy(dst, string, t_len);
+	}
+	return m;
+}
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static enum rte_cryptodev_type gbl_cryptodev_preftest_devtype;
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	unsigned i, nb_devs, valid_dev_id = 0;
+	uint16_t qp_id;
+
+	ts_params->mbuf_mp = rte_mempool_lookup("CRYPTO_PERF_MBUFPOOL");
+	if (ts_params->mbuf_mp == NULL) {
+		/* Not already created so create */
+		ts_params->mbuf_mp = rte_mempool_create("CRYPTO_PERF_MBUFPOOL",
+			NUM_MBUFS, MBUF_SIZE, MBUF_CACHE_SIZE,
+			sizeof(struct rte_pktmbuf_pool_private),
+			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
+			rte_socket_id(), 0);
+		if (ts_params->mbuf_mp == NULL) {
+			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
+			return TEST_FAILED;
+		}
+	}
+
+	ts_params->mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"CRYPTO_OP_POOL", NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS * sizeof(struct rte_crypto_xform),
+			rte_socket_id());
+	if (ts_params->mbuf_ol_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	/* Create 2 AESNI MB devices if required */
+	if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+		if (nb_devs < 2) {
+			for (i = nb_devs; i < 2; i++) {
+				int dev_id = rte_eal_vdev_init(
+					CRYPTODEV_NAME_AESNI_MB_PMD, NULL);
+
+				TEST_ASSERT(dev_id >= 0,
+					"Failed to create instance %u of pmd : %s",
+					i, CRYPTODEV_NAME_AESNI_MB_PMD);
+			}
+		}
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?");
+		return TEST_FAILED;
+	}
+
+	/* Search for the first device of the requested type */
+	for (i = 0; i < nb_devs; i++) {
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type == gbl_cryptodev_preftest_devtype) {
+			ts_params->dev_id = i;
+			valid_dev_id = 1;
+			break;
+		}
+	}
+
+	if (!valid_dev_id) {
+		RTE_LOG(ERR, USER1, "No crypto device of preferred type %d found\n",
+				gbl_cryptodev_preftest_devtype);
+		return TEST_FAILED;
+	}
+
+	/*
+	 * Since we can't free and re-allocate queue memory, always set the
+	 * queues on this device up to max size first so enough memory is
+	 * allocated for any later re-configuration needed by other tests.
+	 */
+
+	rte_cryptodev_info_get(ts_params->dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = DEFAULT_NUM_QPS_PER_QAT_DEVICE;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
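+	/* Size the session mempool to the device's advertised session limit. */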
+	ts_params->conf.session_mp.nb_objs = info.max_nb_sessions;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u",
+			ts_params->dev_id);
+
+
+	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
+
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	/* Now reconfigure queues to the size actually used by this testsuite. */
+	ts_params->qp_conf.nb_descriptors = PERF_NUM_OPS_INFLIGHT;
+	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
+		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+			ts_params->dev_id, qp_id,
+			&ts_params->qp_conf,
+			rte_cryptodev_socket_id(ts_params->dev_id)),
+			"Failed to setup queue pair %u on cryptodev %u",
+			qp_id, ts_params->dev_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	rte_cryptodev_stats_reset(ts_params->dev_id);
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->dev_id),
+			"Failed to start cryptodev %u",
+			ts_params->dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_cryptodev_stats stats;
+
+	unsigned i;
+
+	/* free crypto session structure */
+	if (ut_params->sess)
+		rte_cryptodev_session_free(ts_params->dev_id,
+				ut_params->sess);
+
+	/* free crypto operation structure */
+	if (ut_params->ol)
+		rte_pktmbuf_offload_free(ut_params->ol);
+
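+	/*
+	 * Free the test mbufs; the output buffer is freed in preference to the
+	 * input, as the two may refer to the same mbuf for in-place operations.
+	 */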
+	for (i = 0; i < MAX_NUM_OF_OPS_PER_UT; i++) {
+		if (ut_params->obuf[i])
+			rte_pktmbuf_free(ut_params->obuf[i]);
+		else if (ut_params->ibuf[i])
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+	}
+
+	if (ts_params->mbuf_mp != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_MBUFPOOL count %u\n",
+			rte_mempool_count(ts_params->mbuf_mp));
+
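+	/* Read back device stats; fetched for debug, not otherwise checked. */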
+	rte_cryptodev_stats_get(ts_params->dev_id, &stats);
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->dev_id);
+}
+
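+/*
+ * Plaintext source data; each test ciphers the first QUOTE_LEN_*B bytes of
+ * this quote.
+ */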
+const char plaintext_quote[] =
+		"THE COUNT OF MONTE CRISTO by Alexandre Dumas, Pere Chapter 1. "
+		"Marseilles--The Arrival. On the 24th of February, 1815, the "
+		"look-out at Notre-Dame de la Garde signalled the three-master,"
+		" the Pharaon from Smyrna, Trieste, and Naples. As usual, a "
+		"pilot put off immediately, and rounding the Chateau d'If, got "
+		"on board the vessel between Cape Morgion and Rion island. "
+		"Immediately, and according to custom, the ramparts of Fort "
+		"Saint-Jean were covered with spectators; it is always an event "
+		"at Marseilles for a ship to come into port, especially when "
+		"this ship, like the Pharaon, has been built, rigged, and laden"
+		" at the old Phocee docks, and belongs to an owner of the city."
+		" The ship drew on and had safely passed the strait, which some"
+		" volcanic shock has made between the Calasareigne and Jaros "
+		"islands; had doubled Pomegue, and approached the harbor under"
+		" topsails, jib, and spanker, but so slowly and sedately that"
+		" the idlers, with that instinct which is the forerunner of "
+		"evil, asked one another what misfortune could have happened "
+		"on board. However, those experienced in navigation saw plainly"
+		" that if any accident had occurred, it was not to the vessel "
+		"herself, for she bore down with all the evidence of being "
+		"skilfully handled, the anchor a-cockbill, the jib-boom guys "
+		"already eased off, and standing by the side of the pilot, who"
+		" was steering the Pharaon towards the narrow entrance of the"
+		" inner port, was a young man, who, with activity and vigilant"
+		" eye, watched every motion of the ship, and repeated each "
+		"direction of the pilot. The vague disquietude which prevailed "
+		"among the spectators had so much affected one of the crowd "
+		"that he did not await the arrival of the vessel in harbor, but"
+		" jumping into a small skiff, desired to be pulled alongside "
+		"the Pharaon, which he reached as she rounded into La Reserve "
+		"basin. When the young man on board saw this person approach, "
+		"he left his station by the pilot, and, hat in hand, leaned "
+		"over the ship's bulwarks. He was a fine, tall, slim young "
+		"fellow of eighteen or twenty, with black eyes, and hair as "
+		"dark as a raven's wing; and his whole appearance bespoke that "
+		"calmness and resolution peculiar to men accustomed from their "
+		"cradle to contend with danger. \"Ah, is it you, Dantes?\" "
+		"cried the man in the skiff. \"What's the matter? and why have "
+		"you such an air of sadness aboard?\" \"A great misfortune, M. "
+		"Morrel,\" replied the young man,--\"a great misfortune, for me"
+		" especially! Off Civita Vecchia we lost our brave Captain "
+		"Leclere.\" \"And the cargo?\" inquired the owner, eagerly. "
+		"\"Is all safe, M. Morrel; and I think you will be satisfied on"
+		" that head. But poor Captain Leclere--\" \"What happened to "
+		"him?\" asked the owner, with an air of considerable "
+		"resignation. \"What happened to the worthy captain?\" \"He "
+		"died.\" \"Fell into the sea?\" \"No, sir, he died of "
+		"brain-fever in dreadful agony.\" Then turning to the crew, "
+		"he said, \"Bear a hand there, to take in sail!\" All hands "
+		"obeyed, and at once the eight or ten seamen who composed the "
+		"crew, sprang to their respective stations at the spanker "
+		"brails and outhaul, topsail sheets and halyards, the jib "
+		"downhaul, and the topsail clewlines and buntlines. The young "
+		"sailor gave a look to see that his orders were promptly and "
+		"accurately obeyed, and then turned again to the owner. \"And "
+		"how did this misfortune occur?\" inquired the latter, resuming"
+		" the interrupted conversation. \"Alas, sir, in the most "
+		"unexpected manner. After a long talk with the harbor-master, "
+		"Captain Leclere left Naples greatly disturbed in mind. In "
+		"twenty-four hours he was attacked by a fever, and died three "
+		"days afterwards. We performed the usual burial service, and he"
+		" is at his rest, sewn up in his hammock with a thirty-six "
+		"pound shot at his head and his heels, off El Giglio island. "
+		"We bring to his widow his sword and cross of honor. It was "
+		"worth while, truly,\" added the young man with a melancholy "
+		"smile, \"to make war against the English for ten years, and "
+		"to die in his bed at last, like everybody else.";
+
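+/* Plaintext buffer sizes (in bytes) exercised by the performance tests. */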
+#define QUOTE_LEN_64B		(64)
+#define QUOTE_LEN_128B		(128)
+#define QUOTE_LEN_256B		(256)
+#define QUOTE_LEN_512B		(512)
+#define QUOTE_LEN_768B		(768)
+#define QUOTE_LEN_1024B		(1024)
+#define QUOTE_LEN_1280B		(1280)
+#define QUOTE_LEN_1536B		(1536)
+#define QUOTE_LEN_1792B		(1792)
+#define QUOTE_LEN_2048B		(2048)
+
+
+/* ***** AES-CBC / HMAC-SHA256 Performance Tests ***** */
+
+#define HMAC_KEY_LENGTH_SHA256	(DIGEST_BYTE_LENGTH_SHA256)
+
+#define CIPHER_KEY_LENGTH_AES_CBC	(16)
+#define CIPHER_IV_LENGTH_AES_CBC	(CIPHER_KEY_LENGTH_AES_CBC)
+
+
+static uint8_t aes_cbc_key[] = {
+		0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2,
+		0xF1, 0x35, 0x5C, 0x3B, 0xDD, 0x9A, 0x65, 0xBA };
+
+static uint8_t aes_cbc_iv[] = {
+		0xf5, 0xd3, 0x89, 0x0f, 0x47, 0x00, 0xcb, 0x52,
+		0x42, 0x1a, 0x7d, 0x3d, 0xf5, 0x82, 0x80, 0xf1 };
+
+static uint8_t hmac_sha256_key[] = {
+		0xff, 0xcb, 0x37, 0x30, 0x1d, 0x4a, 0xc2, 0x41,
+		0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A,
+		0x58, 0x34, 0x85, 0x61, 0x1C, 0x42, 0x10, 0x76,
+		0x9a, 0x4f, 0x88, 0x1b, 0xb6, 0x8f, 0xd8, 0x60 };
+
+
+/* Expected AES-CBC ciphertext outputs for each tested buffer size */
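+/*
+ * A sketch of how vectors like these could be regenerated, assuming the
+ * standard OpenSSL command-line tool is available (key/IV given as the hex
+ * bytes of aes_cbc_key and aes_cbc_iv above):
+ *   head -c 64 quote.txt | openssl enc -aes-128-cbc -nopad \
+ *       -K <key-hex> -iv <iv-hex>
+ */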
+
+static const uint8_t AES_CBC_ciphertext_64B[] = {
+		0x05, 0x15, 0x77, 0x32, 0xc9, 0x66, 0x91, 0x50,
+		0x93, 0x9f, 0xbb, 0x4e, 0x2e, 0x5a, 0x02, 0xd0,
+		0x2d, 0x9d, 0x31, 0x5d, 0xc8, 0x9e, 0x86, 0x36,
+		0x54, 0x5c, 0x50, 0xe8, 0x75, 0x54, 0x74, 0x5e,
+		0xd5, 0xa2, 0x84, 0x21, 0x2d, 0xc5, 0xf8, 0x1c,
+		0x55, 0x1a, 0xba, 0x91, 0xce, 0xb5, 0xa3, 0x1e,
+		0x31, 0xbf, 0xe9, 0xa1, 0x97, 0x5c, 0x2b, 0xd6,
+		0x57, 0xa5, 0x9f, 0xab, 0xbd, 0xb0, 0x9b, 0x9c
+};
+
+static const uint8_t AES_CBC_ciphertext_128B[] = {
+		0x79, 0x92, 0x65, 0xc8, 0xfb, 0x0a, 0xc7, 0xc4,
+		0x9b, 0x3b, 0xbe, 0x69, 0x7f, 0x7c, 0xf4, 0x4e,
+		0xa5, 0x0d, 0xf6, 0x33, 0xc4, 0xdf, 0xf3, 0x0d,
+		0xdb, 0xb9, 0x68, 0x34, 0xb0, 0x0d, 0xbd, 0xb9,
+		0xa7, 0xf3, 0x86, 0x50, 0x2a, 0xbe, 0x50, 0x5d,
+		0xb3, 0xbe, 0x72, 0xf9, 0x02, 0xb1, 0x69, 0x0b,
+		0x8c, 0x96, 0x4c, 0x3c, 0x0c, 0x1e, 0x76, 0xe5,
+		0x7e, 0x75, 0xdd, 0xd0, 0xa9, 0x75, 0x00, 0x13,
+		0x6b, 0x1e, 0xc0, 0xad, 0xfc, 0x03, 0xb5, 0x99,
+		0xdc, 0x37, 0x35, 0xfc, 0x16, 0x34, 0xfd, 0xb4,
+		0xea, 0x1e, 0xb6, 0x51, 0xdf, 0xab, 0x87, 0xd6,
+		0x87, 0x41, 0xfa, 0x1c, 0xc6, 0x78, 0xa6, 0x3c,
+		0x1d, 0x76, 0xfe, 0xff, 0x65, 0xfc, 0x63, 0x1e,
+		0x1f, 0xe2, 0x7c, 0x9b, 0xa2, 0x72, 0xc3, 0x34,
+		0x23, 0xdf, 0x01, 0xf0, 0xfd, 0x02, 0x8b, 0x97,
+		0x00, 0x2b, 0x97, 0x4e, 0xab, 0x98, 0x21, 0x3c
+};
+
+static const uint8_t AES_CBC_ciphertext_256B[] = {
+		0xc7, 0x71, 0x2b, 0xed, 0x2c, 0x97, 0x59, 0xfa,
+		0xcf, 0x5a, 0xb9, 0x31, 0x92, 0xe0, 0xc9, 0x92,
+		0xc0, 0x2d, 0xd5, 0x9c, 0x84, 0xbf, 0x70, 0x36,
+		0x13, 0x48, 0xe0, 0xb1, 0xbf, 0x6c, 0xcd, 0x91,
+		0xa0, 0xc3, 0x57, 0x6c, 0x3f, 0x0e, 0x34, 0x41,
+		0xe7, 0x9c, 0xc0, 0xec, 0x18, 0x0c, 0x05, 0x52,
+		0x78, 0xe2, 0x3c, 0x6e, 0xdf, 0xa5, 0x49, 0xc7,
+		0xf2, 0x55, 0x00, 0x8f, 0x65, 0x6d, 0x4b, 0xd0,
+		0xcb, 0xd4, 0xd2, 0x0b, 0xea, 0xf4, 0xb0, 0x85,
+		0x61, 0x9e, 0x36, 0xc0, 0x71, 0xb7, 0x80, 0xad,
+		0x40, 0x78, 0xb4, 0x70, 0x2b, 0xe8, 0x80, 0xc5,
+		0x19, 0x35, 0x96, 0x55, 0x3b, 0x40, 0x03, 0xbb,
+		0x9f, 0xa6, 0xc2, 0x82, 0x92, 0x04, 0xc3, 0xa6,
+		0x96, 0xc4, 0x7f, 0x4c, 0x3e, 0x3c, 0x79, 0x82,
+		0x88, 0x8b, 0x3f, 0x8b, 0xc5, 0x9f, 0x44, 0xbe,
+		0x71, 0xe7, 0x09, 0xa2, 0x40, 0xa2, 0x23, 0x4e,
+		0x9f, 0x31, 0xab, 0x6f, 0xdf, 0x59, 0x40, 0xe1,
+		0x12, 0x15, 0x55, 0x4b, 0xea, 0x3f, 0xa1, 0x41,
+		0x4f, 0xaf, 0xcd, 0x27, 0x2a, 0x61, 0xa1, 0x9e,
+		0x82, 0x30, 0x05, 0x05, 0x55, 0xce, 0x99, 0xd3,
+		0x8f, 0x3f, 0x86, 0x79, 0xdc, 0x9f, 0x33, 0x07,
+		0x75, 0x26, 0xc8, 0x72, 0x81, 0x0f, 0x9b, 0xf7,
+		0xb1, 0xfb, 0xd3, 0x91, 0x36, 0x08, 0xab, 0x26,
+		0x70, 0x53, 0x0c, 0x99, 0xfd, 0xa9, 0x07, 0xb4,
+		0xe9, 0xce, 0xc1, 0xd6, 0xd2, 0x2c, 0x71, 0x80,
+		0xec, 0x59, 0x61, 0x0b, 0x24, 0xf0, 0x6d, 0x33,
+		0x73, 0x45, 0x6e, 0x80, 0x03, 0x45, 0xf2, 0x76,
+		0xa5, 0x8a, 0xc9, 0xcf, 0xaf, 0x4a, 0xed, 0x35,
+		0xc0, 0x97, 0x52, 0xc5, 0x00, 0xdf, 0xef, 0xc7,
+		0x9f, 0xf2, 0xe8, 0x15, 0x3e, 0xb3, 0x30, 0xe7,
+		0x00, 0xd0, 0x4e, 0xeb, 0x79, 0xf6, 0xf6, 0xcf,
+		0xf0, 0xe7, 0x61, 0xd5, 0x3d, 0x6a, 0x73, 0x9d
+};
+
+static const uint8_t AES_CBC_ciphertext_512B[] = {
+		0xb4, 0xc6, 0xc6, 0x5f, 0x7e, 0xca, 0x05, 0x70,
+		0x21, 0x7b, 0x92, 0x9e, 0x23, 0xe7, 0x92, 0xb8,
+		0x27, 0x3d, 0x20, 0x29, 0x57, 0xfa, 0x1f, 0x26,
+		0x0a, 0x04, 0x34, 0xa6, 0xf2, 0xdc, 0x44, 0xb6,
+		0x43, 0x40, 0x62, 0xde, 0x0c, 0xde, 0x1c, 0x30,
+		0x43, 0x85, 0x0b, 0xe8, 0x93, 0x1f, 0xa1, 0x2a,
+		0x8a, 0x27, 0x35, 0x39, 0x14, 0x9f, 0x37, 0x64,
+		0x59, 0xb5, 0x0e, 0x96, 0x82, 0x5d, 0x63, 0x45,
+		0xd6, 0x93, 0x89, 0x46, 0xe4, 0x71, 0x31, 0xeb,
+		0x0e, 0xd1, 0x7b, 0xda, 0x90, 0xb5, 0x81, 0xac,
+		0x76, 0x54, 0x54, 0x85, 0x0b, 0xa9, 0x46, 0x9c,
+		0xf0, 0xfd, 0xde, 0x5d, 0xa8, 0xe3, 0xee, 0xe9,
+		0xf4, 0x9d, 0x34, 0x76, 0x39, 0xe7, 0xc3, 0x4a,
+		0x84, 0x38, 0x92, 0x61, 0xf1, 0x12, 0x9f, 0x05,
+		0xda, 0xdb, 0xc1, 0xd4, 0xb0, 0xa0, 0x27, 0x19,
+		0xa0, 0x56, 0x5d, 0x9b, 0xcc, 0x47, 0x7c, 0x15,
+		0x1d, 0x52, 0x66, 0xd5, 0xff, 0xef, 0x12, 0x23,
+		0x86, 0xe2, 0xee, 0x81, 0x2c, 0x3d, 0x7d, 0x28,
+		0xd5, 0x42, 0xdf, 0xdb, 0x75, 0x1c, 0xeb, 0xdf,
+		0x13, 0x23, 0xd5, 0x17, 0x89, 0xea, 0xd7, 0x01,
+		0xff, 0x57, 0x6a, 0x44, 0x61, 0xf4, 0xea, 0xbe,
+		0x97, 0x9b, 0xc2, 0xb1, 0x9c, 0x5d, 0xff, 0x4f,
+		0x73, 0x2d, 0x3f, 0x57, 0x28, 0x38, 0xbf, 0x3d,
+		0x9f, 0xda, 0x49, 0x55, 0x8f, 0xb2, 0x77, 0xec,
+		0x0f, 0xbc, 0xce, 0xb8, 0xc6, 0xe1, 0x03, 0xed,
+		0x35, 0x9c, 0xf2, 0x4d, 0xa4, 0x29, 0x6c, 0xd6,
+		0x6e, 0x05, 0x53, 0x46, 0xc1, 0x41, 0x09, 0x36,
+		0x0b, 0x7d, 0xf4, 0x9e, 0x0f, 0xba, 0x86, 0x33,
+		0xdd, 0xf1, 0xa7, 0xf7, 0xd5, 0x29, 0xa8, 0xa7,
+		0x4d, 0xce, 0x0c, 0xf5, 0xb4, 0x6c, 0xd8, 0x27,
+		0xb0, 0x87, 0x2a, 0x6f, 0x7f, 0x3f, 0x8f, 0xc3,
+		0xe2, 0x3e, 0x94, 0xcf, 0x61, 0x4a, 0x09, 0x3d,
+		0xf9, 0x55, 0x19, 0x31, 0xf2, 0xd2, 0x4a, 0x3e,
+		0xc1, 0xf5, 0xed, 0x7c, 0x45, 0xb0, 0x0c, 0x7b,
+		0xdd, 0xa6, 0x0a, 0x26, 0x66, 0xec, 0x85, 0x49,
+		0x00, 0x38, 0x05, 0x7c, 0x9c, 0x1c, 0x92, 0xf5,
+		0xf7, 0xdb, 0x5d, 0xbd, 0x61, 0x0c, 0xc9, 0xaf,
+		0xfd, 0x57, 0x3f, 0xee, 0x2b, 0xad, 0x73, 0xef,
+		0xa3, 0xc1, 0x66, 0x26, 0x44, 0x5e, 0xf9, 0x12,
+		0x86, 0x66, 0xa9, 0x61, 0x75, 0xa1, 0xbc, 0x40,
+		0x7f, 0xa8, 0x08, 0x02, 0xc0, 0x76, 0x0e, 0x76,
+		0xb3, 0x26, 0x3d, 0x1c, 0x40, 0x65, 0xe4, 0x18,
+		0x0f, 0x62, 0x17, 0x8f, 0x1e, 0x61, 0xb8, 0x08,
+		0x83, 0x54, 0x42, 0x11, 0x03, 0x30, 0x8e, 0xb7,
+		0xc1, 0x9c, 0xec, 0x69, 0x52, 0x95, 0xfb, 0x7b,
+		0x1a, 0x0c, 0x20, 0x24, 0xf7, 0xb8, 0x38, 0x0c,
+		0xb8, 0x7b, 0xb6, 0x69, 0x70, 0xd0, 0x61, 0xb9,
+		0x70, 0x06, 0xc2, 0x5b, 0x20, 0x47, 0xf7, 0xd9,
+		0x32, 0xc2, 0xf2, 0x90, 0xb6, 0x4d, 0xcd, 0x3c,
+		0x6d, 0x74, 0xea, 0x82, 0x35, 0x1b, 0x08, 0x44,
+		0xba, 0xb7, 0x33, 0x82, 0x33, 0x27, 0x54, 0x77,
+		0x6e, 0x58, 0xfe, 0x46, 0x5a, 0xb4, 0x88, 0x53,
+		0x8d, 0x9b, 0xb1, 0xab, 0xdf, 0x04, 0xe1, 0xfb,
+		0xd7, 0x1e, 0xd7, 0x38, 0x64, 0x54, 0xba, 0xb0,
+		0x6c, 0x84, 0x7a, 0x0f, 0xa7, 0x80, 0x6b, 0x86,
+		0xd9, 0xc9, 0xc6, 0x31, 0x95, 0xfa, 0x8a, 0x2c,
+		0x14, 0xe1, 0x85, 0x66, 0x27, 0xfd, 0x63, 0x3e,
+		0xf0, 0xfa, 0x81, 0xc9, 0x89, 0x4f, 0xe2, 0x6a,
+		0x8c, 0x17, 0xb5, 0xc7, 0x9f, 0x5d, 0x3f, 0x6b,
+		0x3f, 0xcd, 0x13, 0x7a, 0x3c, 0xe6, 0x4e, 0xfa,
+		0x7a, 0x10, 0xb8, 0x7c, 0x40, 0xec, 0x93, 0x11,
+		0x1f, 0xd0, 0x9e, 0xc3, 0x56, 0xb9, 0xf5, 0x21,
+		0x18, 0x41, 0x31, 0xea, 0x01, 0x8d, 0xea, 0x1c,
+		0x95, 0x5e, 0x56, 0x33, 0xbc, 0x7a, 0x3f, 0x6f
+};
+
+static const uint8_t AES_CBC_ciphertext_768B[] = {
+		0x3e, 0x7f, 0x9e, 0x4c, 0x88, 0x15, 0x68, 0x69,
+		0x10, 0x09, 0xe1, 0xa7, 0x0f, 0x27, 0x88, 0x2d,
+		0x90, 0x73, 0x4f, 0x67, 0xd3, 0x8b, 0xaf, 0xa1,
+		0x2c, 0x37, 0xa5, 0x6c, 0x7c, 0xbd, 0x95, 0x4c,
+		0x82, 0xcf, 0x05, 0x49, 0x16, 0x5c, 0xe7, 0x06,
+		0xd4, 0xcb, 0x55, 0x65, 0x9a, 0xd0, 0xe1, 0x46,
+		0x3a, 0x37, 0x71, 0xad, 0xb0, 0xb4, 0x99, 0x1e,
+		0x23, 0x57, 0x48, 0x96, 0x9c, 0xc5, 0xc4, 0xdb,
+		0x64, 0x3e, 0xc9, 0x7f, 0x90, 0x5a, 0xa0, 0x08,
+		0x75, 0x4c, 0x09, 0x06, 0x31, 0x6e, 0x59, 0x29,
+		0xfc, 0x2f, 0x72, 0xde, 0xf2, 0x40, 0x5a, 0xfe,
+		0xd3, 0x66, 0x64, 0xb8, 0x9c, 0xc9, 0xa6, 0x1f,
+		0xc3, 0x52, 0xcd, 0xb5, 0xd1, 0x4f, 0x43, 0x3f,
+		0xf4, 0x59, 0x25, 0xc4, 0xdd, 0x3e, 0x58, 0x7c,
+		0x21, 0xd6, 0x21, 0xce, 0xa4, 0xbe, 0x08, 0x23,
+		0x46, 0x68, 0xc0, 0x00, 0x91, 0x47, 0xca, 0x9b,
+		0xe0, 0xb4, 0xe3, 0xab, 0xbf, 0xcf, 0x68, 0x26,
+		0x97, 0x23, 0x09, 0x93, 0x64, 0x8f, 0x57, 0x59,
+		0xe2, 0x41, 0x7c, 0xa2, 0x48, 0x7e, 0xd5, 0x2c,
+		0x54, 0x09, 0x1b, 0x07, 0x94, 0xca, 0x39, 0x83,
+		0xdd, 0xf4, 0x7a, 0x1d, 0x2d, 0xdd, 0x67, 0xf7,
+		0x3c, 0x30, 0x89, 0x3e, 0xc1, 0xdc, 0x1d, 0x8f,
+		0xfc, 0xb1, 0xe9, 0x13, 0x31, 0xb0, 0x16, 0xdb,
+		0x88, 0xf2, 0x32, 0x7e, 0x73, 0xa3, 0xdf, 0x08,
+		0x6b, 0x53, 0x92, 0x08, 0xc9, 0x9d, 0x98, 0xb2,
+		0xf4, 0x8c, 0xb1, 0x95, 0xdc, 0xb6, 0xfc, 0xec,
+		0xf1, 0xc9, 0x0d, 0x6d, 0x42, 0x2c, 0xf5, 0x38,
+		0x29, 0xf4, 0xd8, 0x98, 0x0f, 0xb0, 0x81, 0xa5,
+		0xaa, 0xe6, 0x1f, 0x6e, 0x87, 0x32, 0x1b, 0x02,
+		0x07, 0x57, 0x38, 0x83, 0xf3, 0xe4, 0x54, 0x7c,
+		0xa8, 0x43, 0xdf, 0x3f, 0x42, 0xfd, 0x67, 0x28,
+		0x06, 0x4d, 0xea, 0xce, 0x1f, 0x84, 0x4a, 0xcd,
+		0x8c, 0x61, 0x5e, 0x8f, 0x61, 0xed, 0x84, 0x03,
+		0x53, 0x6a, 0x9e, 0xbf, 0x68, 0x83, 0xa7, 0x42,
+		0x56, 0x57, 0xcd, 0x45, 0x29, 0xfc, 0x7b, 0x07,
+		0xfc, 0xe9, 0xb9, 0x42, 0xfd, 0x29, 0xd5, 0xfd,
+		0x98, 0x11, 0xd1, 0x8d, 0x67, 0x29, 0x47, 0x61,
+		0xd8, 0x27, 0x37, 0x79, 0x29, 0xd1, 0x94, 0x6f,
+		0x8d, 0xf3, 0x1b, 0x3d, 0x6a, 0xb1, 0x59, 0xef,
+		0x1b, 0xd4, 0x70, 0x0e, 0xac, 0xab, 0xa0, 0x2b,
+		0x1f, 0x5e, 0x04, 0xf0, 0x0e, 0x35, 0x72, 0x90,
+		0xfc, 0xcf, 0x86, 0x43, 0xea, 0x45, 0x6d, 0x22,
+		0x63, 0x06, 0x1a, 0x58, 0xd7, 0x2d, 0xc5, 0xb0,
+		0x60, 0x69, 0xe8, 0x53, 0xc2, 0xa2, 0x57, 0x83,
+		0xc4, 0x31, 0xb4, 0xc6, 0xb3, 0xa1, 0x77, 0xb3,
+		0x1c, 0xca, 0x89, 0x3f, 0xf5, 0x10, 0x3b, 0x36,
+		0x31, 0x7d, 0x00, 0x46, 0x00, 0x92, 0xa0, 0xa0,
+		0x34, 0xd8, 0x5e, 0x62, 0xa9, 0xe0, 0x23, 0x37,
+		0x50, 0x85, 0xc7, 0x3a, 0x20, 0xa3, 0x98, 0xc0,
+		0xac, 0x20, 0x06, 0x0f, 0x17, 0x3c, 0xfc, 0x43,
+		0x8c, 0x9d, 0xec, 0xf5, 0x9a, 0x35, 0x96, 0xf7,
+		0xb7, 0x4c, 0xf9, 0x69, 0xf8, 0xd4, 0x1e, 0x9e,
+		0xf9, 0x7c, 0xc4, 0xd2, 0x11, 0x14, 0x41, 0xb9,
+		0x89, 0xd6, 0x07, 0xd2, 0x37, 0x07, 0x5e, 0x5e,
+		0xae, 0x60, 0xdc, 0xe4, 0xeb, 0x38, 0x48, 0x6d,
+		0x95, 0x8d, 0x71, 0xf2, 0xba, 0xda, 0x5f, 0x08,
+		0x9d, 0x4a, 0x0f, 0x56, 0x90, 0x64, 0xab, 0xb6,
+		0x88, 0x22, 0xa8, 0x90, 0x1f, 0x76, 0x2c, 0x83,
+		0x43, 0xce, 0x32, 0x55, 0x45, 0x84, 0x57, 0x43,
+		0xf9, 0xa8, 0xd1, 0x4f, 0xe3, 0xc1, 0x72, 0x9c,
+		0xeb, 0x64, 0xf7, 0xe4, 0x61, 0x2b, 0x93, 0xd1,
+		0x1f, 0xbb, 0x5c, 0xff, 0xa1, 0x59, 0x69, 0xcf,
+		0xf7, 0xaf, 0x58, 0x45, 0xd5, 0x3e, 0x98, 0x7d,
+		0x26, 0x39, 0x5c, 0x75, 0x3c, 0x4a, 0xbf, 0x5e,
+		0x12, 0x10, 0xb0, 0x93, 0x0f, 0x86, 0x82, 0xcf,
+		0xb2, 0xec, 0x70, 0x5c, 0x0b, 0xad, 0x5d, 0x63,
+		0x65, 0x32, 0xa6, 0x04, 0x58, 0x03, 0x91, 0x2b,
+		0xdb, 0x8f, 0xd3, 0xa3, 0x2b, 0x3a, 0xf5, 0xa1,
+		0x62, 0x6c, 0xb6, 0xf0, 0x13, 0x3b, 0x8c, 0x07,
+		0x10, 0x82, 0xc9, 0x56, 0x24, 0x87, 0xfc, 0x56,
+		0xe8, 0xef, 0x90, 0x8b, 0xd6, 0x48, 0xda, 0x53,
+		0x04, 0x49, 0x41, 0xa4, 0x67, 0xe0, 0x33, 0x24,
+		0x6b, 0x9c, 0x07, 0x55, 0x4c, 0x5d, 0xe9, 0x35,
+		0xfa, 0xbd, 0xea, 0xa8, 0x3f, 0xe9, 0xf5, 0x20,
+		0x5c, 0x60, 0x0f, 0x0d, 0x24, 0xcb, 0x1a, 0xd6,
+		0xe8, 0x5c, 0xa8, 0x42, 0xae, 0xd0, 0xd2, 0xf2,
+		0xa8, 0xbe, 0xea, 0x0f, 0x8d, 0xfb, 0x81, 0xa3,
+		0xa4, 0xef, 0xb7, 0x3e, 0x91, 0xbd, 0x26, 0x0f,
+		0x8e, 0xf1, 0xb2, 0xa5, 0x47, 0x06, 0xfa, 0x40,
+		0x8b, 0x31, 0x7a, 0x5a, 0x74, 0x2a, 0x0a, 0x7c,
+		0x62, 0x5d, 0x39, 0xa4, 0xae, 0x14, 0x85, 0x08,
+		0x5b, 0x20, 0x85, 0xf1, 0x57, 0x6e, 0x71, 0x13,
+		0x4e, 0x2b, 0x49, 0x87, 0x01, 0xdf, 0x37, 0xed,
+		0x28, 0xee, 0x4d, 0xa1, 0xf4, 0xb3, 0x3b, 0xba,
+		0x2d, 0xb3, 0x46, 0x17, 0x84, 0x80, 0x9d, 0xd7,
+		0x93, 0x1f, 0x28, 0x7c, 0xf5, 0xf9, 0xd6, 0x85,
+		0x8c, 0xa5, 0x44, 0xe9, 0x2c, 0x65, 0x51, 0x5f,
+		0x53, 0x7a, 0x09, 0xd9, 0x30, 0x16, 0x95, 0x89,
+		0x9c, 0x0b, 0xef, 0x90, 0x6d, 0x23, 0xd3, 0x48,
+		0x57, 0x3b, 0x55, 0x69, 0x96, 0xfc, 0xf7, 0x52,
+		0x92, 0x38, 0x36, 0xbf, 0xa9, 0x0a, 0xbb, 0x68,
+		0x45, 0x08, 0x25, 0xee, 0x59, 0xfe, 0xee, 0xf2,
+		0x2c, 0xd4, 0x5f, 0x78, 0x59, 0x0d, 0x90, 0xf1,
+		0xd7, 0xe4, 0x39, 0x0e, 0x46, 0x36, 0xf5, 0x75,
+		0x03, 0x3c, 0x28, 0xfb, 0xfa, 0x8f, 0xef, 0xc9,
+		0x61, 0x00, 0x94, 0xc3, 0xd2, 0x0f, 0xd9, 0xda
+};
+
+static const uint8_t AES_CBC_ciphertext_1024B[] = {
+		0x7d, 0x01, 0x7e, 0x2f, 0x92, 0xb3, 0xea, 0x72,
+		0x4a, 0x3f, 0x10, 0xf9, 0x2b, 0xb0, 0xd5, 0xb9,
+		0x19, 0x68, 0x94, 0xe9, 0x93, 0xe9, 0xd5, 0x26,
+		0x20, 0x44, 0xe2, 0x47, 0x15, 0x8d, 0x75, 0x48,
+		0x8e, 0xe4, 0x40, 0x81, 0xb5, 0x06, 0xa8, 0xb8,
+		0x0e, 0x0f, 0x3b, 0xbc, 0x5b, 0xbe, 0x3b, 0xa2,
+		0x2a, 0x0c, 0x48, 0x98, 0x19, 0xdf, 0xe9, 0x25,
+		0x75, 0xab, 0x93, 0x44, 0xb1, 0x72, 0x70, 0xbb,
+		0x20, 0xcf, 0x78, 0xe9, 0x4d, 0xc6, 0xa9, 0xa9,
+		0x84, 0x78, 0xc5, 0xc0, 0xc4, 0xc9, 0x79, 0x1a,
+		0xbc, 0x61, 0x25, 0x5f, 0xac, 0x01, 0x03, 0xb7,
+		0xef, 0x07, 0xf2, 0x62, 0x98, 0xee, 0xe3, 0xad,
+		0x94, 0x75, 0x30, 0x67, 0xb9, 0x15, 0x00, 0xe7,
+		0x11, 0x32, 0x2e, 0x6b, 0x55, 0x9f, 0xac, 0x68,
+		0xde, 0x61, 0x05, 0x80, 0x01, 0xf3, 0xad, 0xab,
+		0xaf, 0x45, 0xe0, 0xf4, 0x68, 0x5c, 0xc0, 0x52,
+		0x92, 0xc8, 0x21, 0xb6, 0xf5, 0x8a, 0x1d, 0xbb,
+		0xfc, 0x4a, 0x11, 0x62, 0xa2, 0xc4, 0xf1, 0x2d,
+		0x0e, 0xb2, 0xc7, 0x17, 0x34, 0xb4, 0x2a, 0x54,
+		0x81, 0xc2, 0x1e, 0xcf, 0x51, 0x0a, 0x76, 0x54,
+		0xf1, 0x48, 0x0d, 0x5c, 0xcd, 0x38, 0x3e, 0x38,
+		0x3e, 0xf8, 0x46, 0x1d, 0x00, 0xf5, 0x62, 0xe1,
+		0x5c, 0xb7, 0x8d, 0xce, 0xd0, 0x3f, 0xbb, 0x22,
+		0xf1, 0xe5, 0xb1, 0xa0, 0x58, 0x5e, 0x3c, 0x0f,
+		0x15, 0xd1, 0xac, 0x3e, 0xc7, 0x72, 0xc4, 0xde,
+		0x8b, 0x95, 0x3e, 0x91, 0xf7, 0x1d, 0x04, 0x9a,
+		0xc8, 0xe4, 0xbf, 0xd3, 0x22, 0xca, 0x4a, 0xdc,
+		0xb6, 0x16, 0x79, 0x81, 0x75, 0x2f, 0x6b, 0xa7,
+		0x04, 0x98, 0xa7, 0x4e, 0xc1, 0x19, 0x90, 0x33,
+		0x33, 0x3c, 0x7f, 0xdd, 0xac, 0x09, 0x0c, 0xc3,
+		0x91, 0x34, 0x74, 0xab, 0xa5, 0x35, 0x0a, 0x13,
+		0xc3, 0x56, 0x67, 0x6d, 0x1a, 0x3e, 0xbf, 0x56,
+		0x06, 0x67, 0x15, 0x5f, 0xfc, 0x8b, 0xa2, 0x3c,
+		0x5e, 0xaf, 0x56, 0x1f, 0xe3, 0x2e, 0x9d, 0x0a,
+		0xf9, 0x9b, 0xc7, 0xb5, 0x03, 0x1c, 0x68, 0x99,
+		0xfa, 0x3c, 0x37, 0x59, 0xc1, 0xf7, 0x6a, 0x83,
+		0x22, 0xee, 0xca, 0x7f, 0x7d, 0x49, 0xe6, 0x48,
+		0x84, 0x54, 0x7a, 0xff, 0xb3, 0x72, 0x21, 0xd8,
+		0x7a, 0x5d, 0xb1, 0x4b, 0xcc, 0x01, 0x6f, 0x90,
+		0xc6, 0x68, 0x1c, 0x2c, 0xa1, 0xe2, 0x74, 0x40,
+		0x26, 0x9b, 0x57, 0x53, 0xa3, 0x7c, 0x0b, 0x0d,
+		0xcf, 0x05, 0x5d, 0x62, 0x4f, 0x75, 0x06, 0x62,
+		0x1f, 0x26, 0x32, 0xaa, 0x25, 0xcc, 0x26, 0x8d,
+		0xae, 0x01, 0x47, 0xa3, 0x00, 0x42, 0xe2, 0x4c,
+		0xee, 0x29, 0xa2, 0x81, 0xa0, 0xfd, 0xeb, 0xff,
+		0x9a, 0x66, 0x6e, 0x47, 0x5b, 0xab, 0x93, 0x5a,
+		0x02, 0x6d, 0x6f, 0xf2, 0x6e, 0x02, 0x9d, 0xb1,
+		0xab, 0x56, 0xdc, 0x8b, 0x9b, 0x17, 0xa8, 0xfb,
+		0x87, 0x42, 0x7c, 0x91, 0x1e, 0x14, 0xc6, 0x6f,
+		0xdc, 0xf0, 0x27, 0x30, 0xfa, 0x3f, 0xc4, 0xad,
+		0x57, 0x85, 0xd2, 0xc9, 0x32, 0x2c, 0x13, 0xa6,
+		0x04, 0x04, 0x50, 0x05, 0x2f, 0x72, 0xd9, 0x44,
+		0x55, 0x6e, 0x93, 0x40, 0xed, 0x7e, 0xd4, 0x40,
+		0x3e, 0x88, 0x3b, 0x8b, 0xb6, 0xeb, 0xc6, 0x5d,
+		0x9c, 0x99, 0xa1, 0xcf, 0x30, 0xb2, 0xdc, 0x48,
+		0x8a, 0x01, 0xa7, 0x61, 0x77, 0x50, 0x14, 0xf3,
+		0x0c, 0x49, 0x53, 0xb3, 0xb4, 0xb4, 0x28, 0x41,
+		0x4a, 0x2d, 0xd2, 0x4d, 0x2a, 0x30, 0x31, 0x83,
+		0x03, 0x5e, 0xaa, 0xd3, 0xa3, 0xd1, 0xa1, 0xca,
+		0x62, 0xf0, 0xe1, 0xf2, 0xff, 0xf0, 0x19, 0xa6,
+		0xde, 0x22, 0x47, 0xb5, 0x28, 0x7d, 0xf7, 0x07,
+		0x16, 0x0d, 0xb1, 0x55, 0x81, 0x95, 0xe5, 0x1d,
+		0x4d, 0x78, 0xa9, 0x3e, 0xce, 0xe3, 0x1c, 0xf9,
+		0x47, 0xc8, 0xec, 0xc5, 0xc5, 0x93, 0x4c, 0x34,
+		0x20, 0x6b, 0xee, 0x9a, 0xe6, 0x86, 0x57, 0x58,
+		0xd5, 0x58, 0xf1, 0x33, 0x10, 0x29, 0x9e, 0x93,
+		0x2f, 0xf5, 0x90, 0x00, 0x17, 0x67, 0x4f, 0x39,
+		0x18, 0xe1, 0xcf, 0x55, 0x78, 0xbb, 0xe6, 0x29,
+		0x3e, 0x77, 0xd5, 0x48, 0xb7, 0x42, 0x72, 0x53,
+		0x27, 0xfa, 0x5b, 0xe0, 0x36, 0x14, 0x97, 0xb8,
+		0x9b, 0x3c, 0x09, 0x77, 0xc1, 0x0a, 0xe4, 0xa2,
+		0x63, 0xfc, 0xbe, 0x5c, 0x17, 0xcf, 0x01, 0xf5,
+		0x03, 0x0f, 0x17, 0xbc, 0x93, 0xdd, 0x5f, 0xe2,
+		0xf3, 0x08, 0xa8, 0xb1, 0x85, 0xb6, 0x34, 0x3f,
+		0x87, 0x42, 0xa5, 0x42, 0x3b, 0x0e, 0xd6, 0x83,
+		0x6a, 0xfd, 0x5d, 0xc9, 0x67, 0xd5, 0x51, 0xc9,
+		0x2a, 0x4e, 0x91, 0xb0, 0x59, 0xb2, 0x0f, 0xa2,
+		0xe6, 0x47, 0x73, 0xc2, 0xa2, 0xae, 0xbb, 0xc8,
+		0x42, 0xa3, 0x2a, 0x27, 0x29, 0x48, 0x8c, 0x54,
+		0x6c, 0xec, 0x00, 0x2a, 0x42, 0xa3, 0x7a, 0x0f,
+		0x12, 0x66, 0x6b, 0x96, 0xf6, 0xd0, 0x56, 0x4f,
+		0x49, 0x5c, 0x47, 0xec, 0x05, 0x62, 0x54, 0xb2,
+		0x64, 0x5a, 0x69, 0x1f, 0x19, 0xb4, 0x84, 0x5c,
+		0xbe, 0x48, 0x8e, 0xfc, 0x58, 0x21, 0xce, 0xfa,
+		0xaa, 0x84, 0xd2, 0xc1, 0x08, 0xb3, 0x87, 0x0f,
+		0x4f, 0xa3, 0x3a, 0xb6, 0x44, 0xbe, 0x2e, 0x9a,
+		0xdd, 0xb5, 0x44, 0x80, 0xca, 0xf4, 0xc3, 0x6e,
+		0xba, 0x93, 0x77, 0xe0, 0x53, 0xfb, 0x37, 0xfb,
+		0x88, 0xc3, 0x1f, 0x25, 0xde, 0x3e, 0x11, 0xf4,
+		0x89, 0xe7, 0xd1, 0x3b, 0xb4, 0x23, 0xcb, 0x70,
+		0xba, 0x35, 0x97, 0x7c, 0xbe, 0x84, 0x13, 0xcf,
+		0xe0, 0x4d, 0x33, 0x91, 0x71, 0x85, 0xbb, 0x4b,
+		0x97, 0x32, 0x5d, 0xa0, 0xb9, 0x8f, 0xdc, 0x27,
+		0x5a, 0xeb, 0x71, 0xf1, 0xd5, 0x0d, 0x65, 0xb4,
+		0x22, 0x81, 0xde, 0xa7, 0x58, 0x20, 0x0b, 0x18,
+		0x11, 0x76, 0x5c, 0xe6, 0x6a, 0x2c, 0x99, 0x69,
+		0xdc, 0xed, 0x67, 0x08, 0x5d, 0x5e, 0xe9, 0x1e,
+		0x55, 0x70, 0xc1, 0x5a, 0x76, 0x1b, 0x8d, 0x2e,
+		0x0d, 0xf9, 0xcc, 0x30, 0x8c, 0x44, 0x0f, 0x63,
+		0x8c, 0x42, 0x8a, 0x9f, 0x4c, 0xd1, 0x48, 0x28,
+		0x8a, 0xf5, 0x56, 0x2e, 0x23, 0x12, 0xfe, 0x67,
+		0x9a, 0x13, 0x65, 0x75, 0x83, 0xf1, 0x3c, 0x98,
+		0x07, 0x6b, 0xb7, 0x27, 0x5b, 0xf0, 0x70, 0xda,
+		0x30, 0xf8, 0x74, 0x4e, 0x7a, 0x32, 0x84, 0xcc,
+		0x0e, 0xcd, 0x80, 0x8b, 0x82, 0x31, 0x9a, 0x48,
+		0xcf, 0x75, 0x00, 0x1f, 0x4f, 0xe0, 0x8e, 0xa3,
+		0x6a, 0x2c, 0xd4, 0x73, 0x4c, 0x63, 0x7c, 0xa6,
+		0x4d, 0x5e, 0xfd, 0x43, 0x3b, 0x27, 0xe1, 0x5e,
+		0xa3, 0xa9, 0x5c, 0x3b, 0x60, 0xdd, 0xc6, 0x8d,
+		0x5a, 0xf1, 0x3e, 0x89, 0x4b, 0x24, 0xcf, 0x01,
+		0x3a, 0x2d, 0x44, 0xe7, 0xda, 0xe7, 0xa1, 0xac,
+		0x11, 0x05, 0x0c, 0xa9, 0x7a, 0x82, 0x8c, 0x5c,
+		0x29, 0x68, 0x9c, 0x73, 0x13, 0xcc, 0x67, 0x32,
+		0x11, 0x5e, 0xe5, 0xcc, 0x8c, 0xf5, 0xa7, 0x52,
+		0x83, 0x9a, 0x70, 0xef, 0xde, 0x55, 0x9c, 0xc7,
+		0x8a, 0xed, 0xad, 0x28, 0x4a, 0xc5, 0x92, 0x6d,
+		0x8e, 0x47, 0xca, 0xe3, 0xf8, 0x77, 0xb5, 0x26,
+		0x64, 0x84, 0xc2, 0xf1, 0xd7, 0xae, 0x0c, 0xb9,
+		0x39, 0x0f, 0x43, 0x6b, 0xe9, 0xe0, 0x09, 0x4b,
+		0xe5, 0xe3, 0x17, 0xa6, 0x68, 0x69, 0x46, 0xf4,
+		0xf0, 0x68, 0x7f, 0x2f, 0x1c, 0x7e, 0x4c, 0xd2,
+		0xb5, 0xc6, 0x16, 0x85, 0xcf, 0x02, 0x4c, 0x89,
+		0x0b, 0x25, 0xb0, 0xeb, 0xf3, 0x77, 0x08, 0x6a,
+		0x46, 0x5c, 0xf6, 0x2f, 0xf1, 0x24, 0xc3, 0x4d,
+		0x80, 0x60, 0x4d, 0x69, 0x98, 0xde, 0xc7, 0xa1,
+		0xf6, 0x4e, 0x18, 0x0c, 0x2a, 0xb0, 0xb2, 0xe0,
+		0x46, 0xe7, 0x49, 0x37, 0xc8, 0x5a, 0x23, 0x24,
+		0xe3, 0x0f, 0xcc, 0x92, 0xb4, 0x8d, 0xdc, 0x9e
+};
+
+static const uint8_t AES_CBC_ciphertext_1280B[] = {
+		0x91, 0x99, 0x5e, 0x9e, 0x84, 0xff, 0x59, 0x45,
+		0xc1, 0xf4, 0xbc, 0x9c, 0xb9, 0x30, 0x6c, 0x51,
+		0x73, 0x52, 0xb4, 0x44, 0x09, 0x79, 0xe2, 0x89,
+		0x75, 0xeb, 0x54, 0x26, 0xce, 0xd8, 0x24, 0x98,
+		0xaa, 0xf8, 0x13, 0x16, 0x68, 0x58, 0xc4, 0x82,
+		0x0e, 0x31, 0xd3, 0x6a, 0x13, 0x58, 0x31, 0xe9,
+		0x3a, 0xc1, 0x8b, 0xc5, 0x3f, 0x50, 0x42, 0xd1,
+		0x93, 0xe4, 0x9b, 0x65, 0x2b, 0xf4, 0x1d, 0x9e,
+		0x2d, 0xdb, 0x48, 0xef, 0x9a, 0x01, 0x68, 0xb6,
+		0xea, 0x7a, 0x2b, 0xad, 0xfe, 0x77, 0x44, 0x7e,
+		0x5a, 0xc5, 0x64, 0xb4, 0xfe, 0x5c, 0x80, 0xf3,
+		0x20, 0x7e, 0xaf, 0x5b, 0xf8, 0xd1, 0x38, 0xa0,
+		0x8d, 0x09, 0x77, 0x06, 0xfe, 0xf5, 0xf4, 0xe4,
+		0xee, 0xb8, 0x95, 0x27, 0xed, 0x07, 0xb8, 0xaa,
+		0x25, 0xb4, 0xe1, 0x4c, 0xeb, 0x3f, 0xdb, 0x39,
+		0x66, 0x28, 0x1b, 0x60, 0x42, 0x8b, 0x99, 0xd9,
+		0x49, 0xd6, 0x8c, 0xa4, 0x9d, 0xd8, 0x93, 0x58,
+		0x8f, 0xfa, 0xd3, 0xf7, 0x37, 0x9c, 0x88, 0xab,
+		0x16, 0x50, 0xfe, 0x01, 0x1f, 0x88, 0x48, 0xbe,
+		0x21, 0xa9, 0x90, 0x9e, 0x73, 0xe9, 0x82, 0xf7,
+		0xbf, 0x4b, 0x43, 0xf4, 0xbf, 0x22, 0x3c, 0x45,
+		0x47, 0x95, 0x5b, 0x49, 0x71, 0x07, 0x1c, 0x8b,
+		0x49, 0xa4, 0xa3, 0x49, 0xc4, 0x5f, 0xb1, 0xf5,
+		0xe3, 0x6b, 0xf1, 0xdc, 0xea, 0x92, 0x7b, 0x29,
+		0x40, 0xc9, 0x39, 0x5f, 0xdb, 0xbd, 0xf3, 0x6a,
+		0x09, 0x9b, 0x2a, 0x5e, 0xc7, 0x0b, 0x25, 0x94,
+		0x55, 0x71, 0x9c, 0x7e, 0x0e, 0xb4, 0x08, 0x12,
+		0x8c, 0x6e, 0x77, 0xb8, 0x29, 0xf1, 0xc6, 0x71,
+		0x04, 0x40, 0x77, 0x18, 0x3f, 0x01, 0x09, 0x9c,
+		0x23, 0x2b, 0x5d, 0x2a, 0x88, 0x20, 0x23, 0x59,
+		0x74, 0x2a, 0x67, 0x8f, 0xb7, 0xba, 0x38, 0x9f,
+		0x0f, 0xcf, 0x94, 0xdf, 0xe1, 0x8f, 0x35, 0x5e,
+		0x34, 0x0c, 0x32, 0x92, 0x2b, 0x23, 0x81, 0xf4,
+		0x73, 0xa0, 0x5a, 0x2a, 0xbd, 0xa6, 0x6b, 0xae,
+		0x43, 0xe2, 0xdc, 0x01, 0xc1, 0xc6, 0xc3, 0x04,
+		0x06, 0xbb, 0xb0, 0x89, 0xb3, 0x4e, 0xbd, 0x81,
+		0x1b, 0x03, 0x63, 0x93, 0xed, 0x4e, 0xf6, 0xe5,
+		0x94, 0x6f, 0xd6, 0xf3, 0x20, 0xf3, 0xbc, 0x30,
+		0xc5, 0xd6, 0xbe, 0x1c, 0x05, 0x34, 0x26, 0x4d,
+		0x46, 0x5e, 0x56, 0x63, 0xfb, 0xdb, 0xcd, 0xed,
+		0xb0, 0x7f, 0x83, 0x94, 0x55, 0x54, 0x2f, 0xab,
+		0xc9, 0xb7, 0x16, 0x4f, 0x9e, 0x93, 0x25, 0xd7,
+		0x9f, 0x39, 0x2b, 0x63, 0xcf, 0x1e, 0xa3, 0x0e,
+		0x28, 0x47, 0x8a, 0x5f, 0x40, 0x02, 0x89, 0x1f,
+		0x83, 0xe7, 0x87, 0xd1, 0x90, 0x17, 0xb8, 0x27,
+		0x64, 0xe1, 0xe1, 0x48, 0x5a, 0x55, 0x74, 0x99,
+		0x27, 0x9d, 0x05, 0x67, 0xda, 0x70, 0x12, 0x8f,
+		0x94, 0x96, 0xfd, 0x36, 0xa4, 0x1d, 0x22, 0xe5,
+		0x0b, 0xe5, 0x2f, 0x38, 0x55, 0xa3, 0x5d, 0x0b,
+		0xcf, 0xd4, 0xa9, 0xb8, 0xd6, 0x9a, 0x16, 0x2e,
+		0x6c, 0x4a, 0x25, 0x51, 0x7a, 0x09, 0x48, 0xdd,
+		0xf0, 0xa3, 0x5b, 0x08, 0x1e, 0x2f, 0x03, 0x91,
+		0x80, 0xe8, 0x0f, 0xe9, 0x5a, 0x2f, 0x90, 0xd3,
+		0x64, 0xed, 0xd7, 0x51, 0x17, 0x66, 0x53, 0x40,
+		0x43, 0x74, 0xef, 0x0a, 0x0d, 0x49, 0x41, 0xf2,
+		0x67, 0x6e, 0xea, 0x14, 0xc8, 0x74, 0xd6, 0xa9,
+		0xb9, 0x6a, 0xe3, 0xec, 0x7d, 0xe8, 0x6a, 0x21,
+		0x3a, 0x52, 0x42, 0xfe, 0x9a, 0x15, 0x6d, 0x60,
+		0x64, 0x88, 0xc5, 0xb2, 0x8b, 0x15, 0x2c, 0xff,
+		0xe2, 0x35, 0xc3, 0xee, 0x9f, 0xcd, 0x82, 0xd9,
+		0x14, 0x35, 0x2a, 0xb7, 0xf5, 0x2f, 0x7b, 0xbc,
+		0x01, 0xfd, 0xa8, 0xe0, 0x21, 0x4e, 0x73, 0xf9,
+		0xf2, 0xb0, 0x79, 0xc9, 0x10, 0x52, 0x8f, 0xa8,
+		0x3e, 0x3b, 0xbe, 0xc5, 0xde, 0xf6, 0x53, 0xe3,
+		0x1c, 0x25, 0x3a, 0x1f, 0x13, 0xbf, 0x13, 0xbb,
+		0x94, 0xc2, 0x97, 0x43, 0x64, 0x47, 0x8f, 0x76,
+		0xd7, 0xaa, 0xeb, 0xa4, 0x03, 0x50, 0x0c, 0x10,
+		0x50, 0xd8, 0xf7, 0x75, 0x52, 0x42, 0xe2, 0x94,
+		0x67, 0xf4, 0x60, 0xfb, 0x21, 0x9b, 0x7a, 0x05,
+		0x50, 0x7c, 0x1b, 0x4a, 0x8b, 0x29, 0xe1, 0xac,
+		0xd7, 0x99, 0xfd, 0x0d, 0x65, 0x92, 0xcd, 0x23,
+		0xa7, 0x35, 0x8e, 0x13, 0xf2, 0xe4, 0x10, 0x74,
+		0xc6, 0x4f, 0x19, 0xf7, 0x01, 0x0b, 0x46, 0xab,
+		0xef, 0x8d, 0x4a, 0x4a, 0xfa, 0xda, 0xf3, 0xfb,
+		0x40, 0x28, 0x88, 0xa2, 0x65, 0x98, 0x4d, 0x88,
+		0xc7, 0xbf, 0x00, 0xc8, 0xd0, 0x91, 0xcb, 0x89,
+		0x2f, 0xb0, 0x85, 0xfc, 0xa1, 0xc1, 0x9e, 0x83,
+		0x88, 0xad, 0x95, 0xc0, 0x31, 0xa0, 0xad, 0xa2,
+		0x42, 0xb5, 0xe7, 0x55, 0xd4, 0x93, 0x5a, 0x74,
+		0x4e, 0x41, 0xc3, 0xcf, 0x96, 0x83, 0x46, 0xa1,
+		0xb7, 0x5b, 0xb1, 0x34, 0x67, 0x4e, 0xb1, 0xd7,
+		0x40, 0x20, 0x72, 0xe9, 0xc8, 0x74, 0xb7, 0xde,
+		0x72, 0x29, 0x77, 0x4c, 0x74, 0x7e, 0xcc, 0x18,
+		0xa5, 0x8d, 0x79, 0x8c, 0xd6, 0x6e, 0xcb, 0xd9,
+		0xe1, 0x61, 0xe7, 0x36, 0xbc, 0x37, 0xea, 0xee,
+		0xd8, 0x3c, 0x5e, 0x7c, 0x47, 0x50, 0xd5, 0xec,
+		0x37, 0xc5, 0x63, 0xc3, 0xc9, 0x99, 0x23, 0x9f,
+		0x64, 0x39, 0xdf, 0x13, 0x96, 0x6d, 0xea, 0x08,
+		0x0c, 0x27, 0x2d, 0xfe, 0x0f, 0xc2, 0xa3, 0x97,
+		0x04, 0x12, 0x66, 0x0d, 0x94, 0xbf, 0xbe, 0x3e,
+		0xb9, 0xcf, 0x8e, 0xc1, 0x9d, 0xb1, 0x64, 0x17,
+		0x54, 0x92, 0x3f, 0x0a, 0x51, 0xc8, 0xf5, 0x82,
+		0x98, 0x73, 0x03, 0xc0, 0x5a, 0x51, 0x01, 0x67,
+		0xb4, 0x01, 0x04, 0x06, 0xbc, 0x37, 0xde, 0x96,
+		0x23, 0x3c, 0xce, 0x98, 0x3f, 0xd6, 0x51, 0x1b,
+		0x01, 0x83, 0x0a, 0x1c, 0xf9, 0xeb, 0x7e, 0x72,
+		0xa9, 0x51, 0x23, 0xc8, 0xd7, 0x2f, 0x12, 0xbc,
+		0x08, 0xac, 0x07, 0xe7, 0xa7, 0xe6, 0x46, 0xae,
+		0x54, 0xa3, 0xc2, 0xf2, 0x05, 0x2d, 0x06, 0x5e,
+		0xfc, 0xe2, 0xa2, 0x23, 0xac, 0x86, 0xf2, 0x54,
+		0x83, 0x4a, 0xb6, 0x48, 0x93, 0xa1, 0x78, 0xc2,
+		0x07, 0xec, 0x82, 0xf0, 0x74, 0xa9, 0x18, 0xe9,
+		0x53, 0x44, 0x49, 0xc2, 0x94, 0xf8, 0x94, 0x92,
+		0x08, 0x3f, 0xbf, 0xa6, 0xe5, 0xc6, 0x03, 0x8a,
+		0xc6, 0x90, 0x48, 0x6c, 0xee, 0xbd, 0x44, 0x92,
+		0x1f, 0x2a, 0xce, 0x1d, 0xb8, 0x31, 0xa2, 0x9d,
+		0x24, 0x93, 0xa8, 0x9f, 0x36, 0x00, 0x04, 0x7b,
+		0xcb, 0x93, 0x59, 0xa1, 0x53, 0xdb, 0x13, 0x7a,
+		0x54, 0xb1, 0x04, 0xdb, 0xce, 0x48, 0x4f, 0xe5,
+		0x2f, 0xcb, 0xdf, 0x8f, 0x50, 0x7c, 0xfc, 0x76,
+		0x80, 0xb4, 0xdc, 0x3b, 0xc8, 0x98, 0x95, 0xf5,
+		0x50, 0xba, 0x70, 0x5a, 0x97, 0xd5, 0xfc, 0x98,
+		0x4d, 0xf3, 0x61, 0x0f, 0xcf, 0xac, 0x49, 0x0a,
+		0xdb, 0xc1, 0x42, 0x8f, 0xb6, 0x29, 0xd5, 0x65,
+		0xef, 0x83, 0xf1, 0x30, 0x4b, 0x84, 0xd0, 0x69,
+		0xde, 0xd2, 0x99, 0xe5, 0xec, 0xd3, 0x90, 0x86,
+		0x39, 0x2a, 0x6e, 0xd5, 0x32, 0xe3, 0x0d, 0x2d,
+		0x01, 0x8b, 0x17, 0x55, 0x1d, 0x65, 0x57, 0xbf,
+		0xd8, 0x75, 0xa4, 0x85, 0xb6, 0x4e, 0x35, 0x14,
+		0x58, 0xe4, 0x89, 0xb8, 0x7a, 0x58, 0x86, 0x0c,
+		0xbd, 0x8b, 0x05, 0x7b, 0x63, 0xc0, 0x86, 0x80,
+		0x33, 0x46, 0xd4, 0x9b, 0xb6, 0x0a, 0xeb, 0x6c,
+		0xae, 0xd6, 0x57, 0x7a, 0xc7, 0x59, 0x33, 0xa0,
+		0xda, 0xa4, 0x12, 0xbf, 0x52, 0x22, 0x05, 0x8d,
+		0xeb, 0xee, 0xd5, 0xec, 0xea, 0x29, 0x9b, 0x76,
+		0x95, 0x50, 0x6d, 0x99, 0xe1, 0x45, 0x63, 0x09,
+		0x16, 0x5f, 0xb0, 0xf2, 0x5b, 0x08, 0x33, 0xdd,
+		0x8f, 0xb7, 0x60, 0x7a, 0x8e, 0xc6, 0xfc, 0xac,
+		0xa9, 0x56, 0x2c, 0xa9, 0x8b, 0x74, 0x33, 0xad,
+		0x2a, 0x7e, 0x96, 0xb6, 0xba, 0x22, 0x28, 0xcf,
+		0x4d, 0x96, 0xb7, 0xd1, 0xfa, 0x99, 0x4a, 0x61,
+		0xe6, 0x84, 0xd1, 0x94, 0xca, 0xf5, 0x86, 0xb0,
+		0xba, 0x34, 0x7a, 0x04, 0xcc, 0xd4, 0x81, 0xcd,
+		0xd9, 0x86, 0xb6, 0xe0, 0x5a, 0x6f, 0x9b, 0x99,
+		0xf0, 0xdf, 0x49, 0xae, 0x6d, 0xc2, 0x54, 0x67,
+		0xe0, 0xb4, 0x34, 0x2d, 0x1c, 0x46, 0xdf, 0x73,
+		0x3b, 0x45, 0x43, 0xe7, 0x1f, 0xa3, 0x36, 0x35,
+		0x25, 0x33, 0xd9, 0xc0, 0x54, 0x38, 0x6e, 0x6b,
+		0x80, 0xcf, 0x50, 0xa4, 0xb6, 0x21, 0x17, 0xfd,
+		0x9b, 0x5c, 0x36, 0xca, 0xcc, 0x73, 0x73, 0xad,
+		0xe0, 0x57, 0x77, 0x90, 0x0e, 0x7f, 0x0f, 0x87,
+		0x7f, 0xdb, 0x73, 0xbf, 0xda, 0xc2, 0xb3, 0x05,
+		0x22, 0x06, 0xf5, 0xa3, 0xfc, 0x1e, 0x8f, 0xda,
+		0xcf, 0x49, 0xd6, 0xb3, 0x66, 0x2c, 0xb5, 0x00,
+		0xaf, 0x85, 0x6e, 0xb8, 0x5b, 0x8c, 0xa1, 0xa4,
+		0x21, 0xce, 0x40, 0xf3, 0x98, 0xac, 0xec, 0x88,
+		0x62, 0x43, 0x2a, 0xac, 0xca, 0xcf, 0xb9, 0x30,
+		0xeb, 0xfc, 0xef, 0xf0, 0x6e, 0x64, 0x6d, 0xe7,
+		0x54, 0x88, 0x6b, 0x22, 0x29, 0xbe, 0xa5, 0x8c,
+		0x31, 0x23, 0x3b, 0x4a, 0x80, 0x37, 0xe6, 0xd0,
+		0x05, 0xfc, 0x10, 0x0e, 0xdd, 0xbb, 0x00, 0xc5,
+		0x07, 0x20, 0x59, 0xd3, 0x41, 0x17, 0x86, 0x46,
+		0xab, 0x68, 0xf6, 0x48, 0x3c, 0xea, 0x5a, 0x06,
+		0x30, 0x21, 0x19, 0xed, 0x74, 0xbe, 0x0b, 0x97,
+		0xee, 0x91, 0x35, 0x94, 0x1f, 0xcb, 0x68, 0x7f,
+		0xe4, 0x48, 0xb0, 0x16, 0xfb, 0xf0, 0x74, 0xdb,
+		0x06, 0x59, 0x2e, 0x5a, 0x9c, 0xce, 0x8f, 0x7d,
+		0xba, 0x48, 0xd5, 0x3f, 0x5c, 0xb0, 0xc2, 0x33,
+		0x48, 0x60, 0x17, 0x08, 0x85, 0xba, 0xff, 0xb9,
+		0x34, 0x0a, 0x3d, 0x8f, 0x21, 0x13, 0x12, 0x1b
+};
+
+static const uint8_t AES_CBC_ciphertext_1536B[] = {
+		0x89, 0x93, 0x05, 0x99, 0xa9, 0xed, 0xea, 0x62,
+		0xc9, 0xda, 0x51, 0x15, 0xce, 0x42, 0x91, 0xc3,
+		0x80, 0xc8, 0x03, 0x88, 0xc2, 0x63, 0xda, 0x53,
+		0x1a, 0xf3, 0xeb, 0xd5, 0xba, 0x6f, 0x23, 0xb2,
+		0xed, 0x8f, 0x89, 0xb1, 0xb3, 0xca, 0x90, 0x7a,
+		0xdd, 0x3f, 0xf6, 0xca, 0x86, 0x58, 0x54, 0xbc,
+		0xab, 0x0f, 0xf4, 0xab, 0x6d, 0x5d, 0x42, 0xd0,
+		0x17, 0x49, 0x17, 0xd1, 0x93, 0xea, 0xe8, 0x22,
+		0xc1, 0x34, 0x9f, 0x3a, 0x3b, 0xaa, 0xe9, 0x1b,
+		0x93, 0xff, 0x6b, 0x68, 0xba, 0xe6, 0xd2, 0x39,
+		0x3d, 0x55, 0x34, 0x8f, 0x98, 0x86, 0xb4, 0xd8,
+		0x7c, 0x0d, 0x3e, 0x01, 0x63, 0x04, 0x01, 0xff,
+		0x16, 0x0f, 0x51, 0x5f, 0x73, 0x53, 0xf0, 0x3a,
+		0x38, 0xb4, 0x4d, 0x8d, 0xaf, 0xa3, 0xca, 0x2f,
+		0x6f, 0xdf, 0xc0, 0x41, 0x6c, 0x48, 0x60, 0x1a,
+		0xe4, 0xe7, 0x8a, 0x65, 0x6f, 0x8d, 0xd7, 0xe1,
+		0x10, 0xab, 0x78, 0x5b, 0xb9, 0x69, 0x1f, 0xe0,
+		0x5c, 0xf1, 0x19, 0x12, 0x21, 0xc7, 0x51, 0xbc,
+		0x61, 0x5f, 0xc0, 0x36, 0x17, 0xc0, 0x28, 0xd9,
+		0x51, 0xcb, 0x43, 0xd9, 0xfa, 0xd1, 0xad, 0x79,
+		0x69, 0x86, 0x49, 0xc5, 0xe5, 0x69, 0x27, 0xce,
+		0x22, 0xd0, 0xe1, 0x6a, 0xf9, 0x02, 0xca, 0x6c,
+		0x34, 0xc7, 0xb8, 0x02, 0xc1, 0x38, 0x7f, 0xd5,
+		0x15, 0xf5, 0xd6, 0xeb, 0xf9, 0x30, 0x40, 0x43,
+		0xea, 0x87, 0xde, 0x35, 0xf6, 0x83, 0x59, 0x09,
+		0x68, 0x62, 0x00, 0x87, 0xb8, 0xe7, 0xca, 0x05,
+		0x0f, 0xac, 0x42, 0x58, 0x45, 0xaa, 0xc9, 0x9b,
+		0xfd, 0x2a, 0xda, 0x65, 0x33, 0x93, 0x9d, 0xc6,
+		0x93, 0x8d, 0xe2, 0xc5, 0x71, 0xc1, 0x5c, 0x13,
+		0xde, 0x7b, 0xd4, 0xb9, 0x4c, 0x35, 0x61, 0x85,
+		0x90, 0x78, 0xf7, 0x81, 0x98, 0x45, 0x99, 0x24,
+		0x58, 0x73, 0x28, 0xf8, 0x31, 0xab, 0x54, 0x2e,
+		0xc0, 0x38, 0x77, 0x25, 0x5c, 0x06, 0x9c, 0xc3,
+		0x69, 0x21, 0x92, 0x76, 0xe1, 0x16, 0xdc, 0xa9,
+		0xee, 0xb6, 0x80, 0x66, 0x43, 0x11, 0x24, 0xb3,
+		0x07, 0x17, 0x89, 0x0f, 0xcb, 0xe0, 0x60, 0xa8,
+		0x9d, 0x06, 0x4b, 0x6e, 0x72, 0xb7, 0xbc, 0x4f,
+		0xb8, 0xc0, 0x80, 0xa2, 0xfb, 0x46, 0x5b, 0x8f,
+		0x11, 0x01, 0x92, 0x9d, 0x37, 0x09, 0x98, 0xc8,
+		0x0a, 0x46, 0xae, 0x12, 0xac, 0x61, 0x3f, 0xe7,
+		0x41, 0x1a, 0xaa, 0x2e, 0xdc, 0xd7, 0x2a, 0x47,
+		0xee, 0xdf, 0x08, 0xd1, 0xff, 0xea, 0x13, 0xc6,
+		0x05, 0xdb, 0x29, 0xcc, 0x03, 0xba, 0x7b, 0x6d,
+		0x40, 0xc1, 0xc9, 0x76, 0x75, 0x03, 0x7a, 0x71,
+		0xc9, 0x5f, 0xd9, 0xe0, 0x61, 0x69, 0x36, 0x8f,
+		0xb2, 0xbc, 0x28, 0xf3, 0x90, 0x71, 0xda, 0x5f,
+		0x08, 0xd5, 0x0d, 0xc1, 0xe6, 0xbd, 0x2b, 0xc6,
+		0x6c, 0x42, 0xfd, 0xbf, 0x10, 0xe8, 0x5f, 0x87,
+		0x3d, 0x21, 0x42, 0x85, 0x01, 0x0a, 0xbf, 0x8e,
+		0x49, 0xd3, 0x9c, 0x89, 0x3b, 0xea, 0xe1, 0xbf,
+		0xe9, 0x9b, 0x5e, 0x0e, 0xb8, 0xeb, 0xcd, 0x3a,
+		0xf6, 0x29, 0x41, 0x35, 0xdd, 0x9b, 0x13, 0x24,
+		0xe0, 0x1d, 0x8a, 0xcb, 0x20, 0xf8, 0x41, 0x51,
+		0x3e, 0x23, 0x8c, 0x67, 0x98, 0x39, 0x53, 0x77,
+		0x2a, 0x68, 0xf4, 0x3c, 0x7e, 0xd6, 0xc4, 0x6e,
+		0xf1, 0x53, 0xe9, 0xd8, 0x5c, 0xc1, 0xa9, 0x38,
+		0x6f, 0x5e, 0xe4, 0xd4, 0x29, 0x1c, 0x6c, 0xee,
+		0x2f, 0xea, 0xde, 0x61, 0x71, 0x5a, 0xea, 0xce,
+		0x23, 0x6e, 0x1b, 0x16, 0x43, 0xb7, 0xc0, 0xe3,
+		0x87, 0xa1, 0x95, 0x1e, 0x97, 0x4d, 0xea, 0xa6,
+		0xf7, 0x25, 0xac, 0x82, 0x2a, 0xd3, 0xa6, 0x99,
+		0x75, 0xdd, 0xc1, 0x55, 0x32, 0x6b, 0xea, 0x33,
+		0x88, 0xce, 0x06, 0xac, 0x15, 0x39, 0x19, 0xa3,
+		0x59, 0xaf, 0x7a, 0x1f, 0xd9, 0x72, 0x5e, 0xf7,
+		0x4c, 0xf3, 0x5d, 0x6b, 0xf2, 0x16, 0x92, 0xa8,
+		0x9e, 0x3d, 0xd4, 0x4c, 0x72, 0x55, 0x4e, 0x4a,
+		0xf7, 0x8b, 0x2f, 0x67, 0x5a, 0x90, 0xb7, 0xcf,
+		0x16, 0xd3, 0x7b, 0x5a, 0x9a, 0xc8, 0x9f, 0xbf,
+		0x01, 0x76, 0x3b, 0x86, 0x2c, 0x2a, 0x78, 0x10,
+		0x70, 0x05, 0x38, 0xf9, 0xdd, 0x2a, 0x1d, 0x00,
+		0x25, 0xb7, 0x10, 0xac, 0x3b, 0x3c, 0x4d, 0x3c,
+		0x01, 0x68, 0x3c, 0x5a, 0x29, 0xc2, 0xa0, 0x1b,
+		0x95, 0x67, 0xf9, 0x0a, 0x60, 0xb7, 0x11, 0x9c,
+		0x40, 0x45, 0xd7, 0xb0, 0xda, 0x49, 0x87, 0xcd,
+		0xb0, 0x9b, 0x61, 0x8c, 0xf4, 0x0d, 0x94, 0x1d,
+		0x79, 0x66, 0x13, 0x0b, 0xc6, 0x6b, 0x19, 0xee,
+		0xa0, 0x6b, 0x64, 0x7d, 0xc4, 0xff, 0x98, 0x72,
+		0x60, 0xab, 0x7f, 0x0f, 0x4d, 0x5d, 0x6b, 0xc3,
+		0xba, 0x5e, 0x0d, 0x04, 0xd9, 0x59, 0x17, 0xd0,
+		0x64, 0xbe, 0xfb, 0x58, 0xfc, 0xed, 0x18, 0xf6,
+		0xac, 0x19, 0xa4, 0xfd, 0x16, 0x59, 0x80, 0x58,
+		0xb8, 0x0f, 0x79, 0x24, 0x60, 0x18, 0x62, 0xa9,
+		0xa3, 0xa0, 0xe8, 0x81, 0xd6, 0xec, 0x5b, 0xfe,
+		0x5b, 0xb8, 0xa4, 0x00, 0xa9, 0xd0, 0x90, 0x17,
+		0xe5, 0x50, 0x3d, 0x2b, 0x12, 0x6e, 0x2a, 0x13,
+		0x65, 0x7c, 0xdf, 0xdf, 0xa7, 0xdd, 0x9f, 0x78,
+		0x5f, 0x8f, 0x4e, 0x90, 0xa6, 0x10, 0xe4, 0x7b,
+		0x68, 0x6b, 0xfd, 0xa9, 0x6d, 0x47, 0xfa, 0xec,
+		0x42, 0x35, 0x07, 0x12, 0x3e, 0x78, 0x23, 0x15,
+		0xff, 0xe2, 0x65, 0xc7, 0x47, 0x89, 0x2f, 0x97,
+		0x7c, 0xd7, 0x6b, 0x69, 0x35, 0x79, 0x6f, 0x85,
+		0xb4, 0xa9, 0x75, 0x04, 0x32, 0x9a, 0xfe, 0xf0,
+		0xce, 0xe3, 0xf1, 0xab, 0x15, 0x47, 0xe4, 0x9c,
+		0xc1, 0x48, 0x32, 0x3c, 0xbe, 0x44, 0x72, 0xc9,
+		0xaa, 0x50, 0x37, 0xa6, 0xbe, 0x41, 0xcf, 0xe8,
+		0x17, 0x4e, 0x37, 0xbe, 0xf1, 0x34, 0x2c, 0xd9,
+		0x60, 0x48, 0x09, 0xa5, 0x26, 0x00, 0x31, 0x77,
+		0x4e, 0xac, 0x7c, 0x89, 0x75, 0xe3, 0xde, 0x26,
+		0x4c, 0x32, 0x54, 0x27, 0x8e, 0x92, 0x26, 0x42,
+		0x85, 0x76, 0x01, 0x76, 0x62, 0x4c, 0x29, 0xe9,
+		0x38, 0x05, 0x51, 0x54, 0x97, 0xa3, 0x03, 0x59,
+		0x5e, 0xec, 0x0c, 0xe4, 0x96, 0xb7, 0x15, 0xa8,
+		0x41, 0x06, 0x2b, 0x78, 0x95, 0x24, 0xf6, 0x32,
+		0xc5, 0xec, 0xd7, 0x89, 0x28, 0x1e, 0xec, 0xb1,
+		0xc7, 0x21, 0x0c, 0xd3, 0x80, 0x7c, 0x5a, 0xe6,
+		0xb1, 0x3a, 0x52, 0x33, 0x84, 0x4e, 0x32, 0x6e,
+		0x7a, 0xf6, 0x43, 0x15, 0x5b, 0xa6, 0xba, 0xeb,
+		0xa8, 0xe4, 0xff, 0x4f, 0xbd, 0xbd, 0xa8, 0x5e,
+		0xbe, 0x27, 0xaf, 0xc5, 0xf7, 0x9e, 0xdf, 0x48,
+		0x22, 0xca, 0x6a, 0x0b, 0x3c, 0xd7, 0xe0, 0xdc,
+		0xf3, 0x71, 0x08, 0xdc, 0x28, 0x13, 0x08, 0xf2,
+		0x08, 0x1d, 0x9d, 0x7b, 0xd9, 0xde, 0x6f, 0xe6,
+		0xe8, 0x88, 0x18, 0xc2, 0xcd, 0x93, 0xc5, 0x38,
+		0x21, 0x68, 0x4c, 0x9a, 0xfb, 0xb6, 0x18, 0x16,
+		0x73, 0x2c, 0x1d, 0x6f, 0x95, 0xfb, 0x65, 0x4f,
+		0x7c, 0xec, 0x8d, 0x6c, 0xa8, 0xc0, 0x55, 0x28,
+		0xc6, 0xc3, 0xea, 0xeb, 0x05, 0xf5, 0x65, 0xeb,
+		0x53, 0xe1, 0x54, 0xef, 0xb8, 0x64, 0x98, 0x2d,
+		0x98, 0x9e, 0xc8, 0xfe, 0xa2, 0x07, 0x30, 0xf7,
+		0xf7, 0xae, 0xdb, 0x32, 0xf8, 0x71, 0x9d, 0x06,
+		0xdf, 0x9b, 0xda, 0x61, 0x7d, 0xdb, 0xae, 0x06,
+		0x24, 0x63, 0x74, 0xb6, 0xf3, 0x1b, 0x66, 0x09,
+		0x60, 0xff, 0x2b, 0x29, 0xf5, 0xa9, 0x9d, 0x61,
+		0x5d, 0x55, 0x10, 0x82, 0x21, 0xbb, 0x64, 0x0d,
+		0xef, 0x5c, 0xe3, 0x30, 0x1b, 0x60, 0x1e, 0x5b,
+		0xfe, 0x6c, 0xf5, 0x15, 0xa3, 0x86, 0x27, 0x58,
+		0x46, 0x00, 0x20, 0xcb, 0x86, 0x9a, 0x52, 0x29,
+		0x20, 0x68, 0x4d, 0x67, 0x88, 0x70, 0xc2, 0x31,
+		0xd8, 0xbb, 0xa5, 0xa7, 0x88, 0x7f, 0x66, 0xbc,
+		0xaa, 0x0f, 0xe1, 0x78, 0x7b, 0x97, 0x3c, 0xb7,
+		0xd7, 0xd8, 0x04, 0xe0, 0x09, 0x60, 0xc8, 0xd0,
+		0x9e, 0xe5, 0x6b, 0x31, 0x7f, 0x88, 0xfe, 0xc3,
+		0xfd, 0x89, 0xec, 0x76, 0x4b, 0xb3, 0xa7, 0x37,
+		0x03, 0xb7, 0xc6, 0x10, 0x7c, 0x9d, 0x0c, 0x75,
+		0xd3, 0x08, 0x14, 0x94, 0x03, 0x42, 0x25, 0x26,
+		0x85, 0xf7, 0xf0, 0x90, 0x06, 0x3e, 0x6f, 0x60,
+		0x52, 0x55, 0xd5, 0x0f, 0x79, 0x64, 0x69, 0x69,
+		0x46, 0xf9, 0x7f, 0x7f, 0x03, 0xf1, 0x1f, 0xdb,
+		0x39, 0x05, 0xba, 0x4a, 0x8f, 0x17, 0xe7, 0xba,
+		0xe2, 0x07, 0x7c, 0x1d, 0x9e, 0xbc, 0x94, 0xc0,
+		0x61, 0x59, 0x8e, 0x72, 0xaf, 0xfc, 0x99, 0xe4,
+		0xd5, 0xa8, 0xee, 0x0a, 0x48, 0x2d, 0x82, 0x8b,
+		0x34, 0x54, 0x8a, 0xce, 0xc7, 0xfa, 0xdd, 0xba,
+		0x54, 0xdf, 0xb3, 0x30, 0x33, 0x73, 0x2e, 0xd5,
+		0x52, 0xab, 0x49, 0x91, 0x4e, 0x0a, 0xd6, 0x2f,
+		0x67, 0xe4, 0xdd, 0x64, 0x48, 0x16, 0xd9, 0x85,
+		0xaa, 0x52, 0xa5, 0x0b, 0xd3, 0xb4, 0x2d, 0x77,
+		0x5e, 0x52, 0x77, 0x17, 0xcf, 0xbe, 0x88, 0x04,
+		0x01, 0x52, 0xe2, 0xf1, 0x46, 0xe2, 0x91, 0x30,
+		0x65, 0xcf, 0xc0, 0x65, 0x45, 0xc3, 0x7e, 0xf4,
+		0x2e, 0xb5, 0xaf, 0x6f, 0xab, 0x1a, 0xfa, 0x70,
+		0x35, 0xb8, 0x4f, 0x2d, 0x78, 0x90, 0x33, 0xb5,
+		0x9a, 0x67, 0xdb, 0x2f, 0x28, 0x32, 0xb6, 0x54,
+		0xab, 0x4c, 0x6b, 0x85, 0xed, 0x6c, 0x3e, 0x05,
+		0x2a, 0xc7, 0x32, 0xe8, 0xf5, 0xa3, 0x7b, 0x4e,
+		0x7b, 0x58, 0x24, 0x73, 0xf7, 0xfd, 0xc7, 0xc8,
+		0x6c, 0x71, 0x68, 0xb1, 0xf6, 0xc5, 0x9e, 0x1e,
+		0xe3, 0x5c, 0x25, 0xc0, 0x5b, 0x3e, 0x59, 0xa1,
+		0x18, 0x5a, 0xe8, 0xb5, 0xd1, 0x44, 0x13, 0xa3,
+		0xe6, 0x05, 0x76, 0xd2, 0x8d, 0x6e, 0x54, 0x68,
+		0x0c, 0xa4, 0x7b, 0x8b, 0xd3, 0x8c, 0x42, 0x13,
+		0x87, 0xda, 0xdf, 0x8f, 0xa5, 0x83, 0x7a, 0x42,
+		0x99, 0xb7, 0xeb, 0xe2, 0x79, 0xe0, 0xdb, 0xda,
+		0x33, 0xa8, 0x50, 0x3a, 0xd7, 0xe7, 0xd3, 0x61,
+		0x18, 0xb8, 0xaa, 0x2d, 0xc8, 0xd8, 0x2c, 0x28,
+		0xe5, 0x97, 0x0a, 0x7c, 0x6c, 0x7f, 0x09, 0xd7,
+		0x88, 0x80, 0xac, 0x12, 0xed, 0xf8, 0xc6, 0xb5,
+		0x2d, 0xd6, 0x63, 0x9b, 0x98, 0x35, 0x26, 0xde,
+		0xf6, 0x31, 0xee, 0x7e, 0xa0, 0xfb, 0x16, 0x98,
+		0xb1, 0x96, 0x1d, 0xee, 0xe3, 0x2f, 0xfb, 0x41,
+		0xdd, 0xea, 0x10, 0x1e, 0x03, 0x89, 0x18, 0xd2,
+		0x47, 0x0c, 0xa0, 0x57, 0xda, 0x76, 0x3a, 0x37,
+		0x2c, 0xe4, 0xf9, 0x77, 0xc8, 0x43, 0x5f, 0xcb,
+		0xd6, 0x85, 0xf7, 0x22, 0xe4, 0x32, 0x25, 0xa8,
+		0xdc, 0x21, 0xc0, 0xf5, 0x95, 0xb2, 0xf8, 0x83,
+		0xf0, 0x65, 0x61, 0x15, 0x48, 0x94, 0xb7, 0x03,
+		0x7f, 0x66, 0xa1, 0x39, 0x1f, 0xdd, 0xce, 0x96,
+		0xfe, 0x58, 0x81, 0x3d, 0x41, 0x11, 0x87, 0x13,
+		0x26, 0x1b, 0x6d, 0xf3, 0xca, 0x2e, 0x2c, 0x76,
+		0xd3, 0x2f, 0x6d, 0x49, 0x70, 0x53, 0x05, 0x96,
+		0xcc, 0x30, 0x2b, 0x83, 0xf2, 0xc6, 0xb2, 0x4b,
+		0x22, 0x13, 0x95, 0x42, 0xeb, 0x56, 0x4d, 0x22,
+		0xe6, 0x43, 0x6f, 0xba, 0xe7, 0x3b, 0xe5, 0x59,
+		0xce, 0x57, 0x88, 0x85, 0xb6, 0xbf, 0x15, 0x37,
+		0xb3, 0x7a, 0x7e, 0xc4, 0xbc, 0x99, 0xfc, 0xe4,
+		0x89, 0x00, 0x68, 0x39, 0xbc, 0x5a, 0xba, 0xab,
+		0x52, 0xab, 0xe6, 0x81, 0xfd, 0x93, 0x62, 0xe9,
+		0xb7, 0x12, 0xd1, 0x18, 0x1a, 0xb9, 0x55, 0x4a,
+		0x0f, 0xae, 0x35, 0x11, 0x04, 0x27, 0xf3, 0x42,
+		0x4e, 0xca, 0xdf, 0x9f, 0x12, 0x62, 0xea, 0x03,
+		0xc0, 0xa9, 0x22, 0x7b, 0x6c, 0x6c, 0xe3, 0xdf,
+		0x16, 0xad, 0x03, 0xc9, 0xfe, 0xa4, 0xdd, 0x4f
+};
+
+static const uint8_t AES_CBC_ciphertext_1792B[] = {
+		0x59, 0xcc, 0xfe, 0x8f, 0xb4, 0x9d, 0x0e, 0xd1,
+		0x85, 0xfc, 0x9b, 0x43, 0xc1, 0xb7, 0x54, 0x67,
+		0x01, 0xef, 0xb8, 0x71, 0x36, 0xdb, 0x50, 0x48,
+		0x7a, 0xea, 0xcf, 0xce, 0xba, 0x30, 0x10, 0x2e,
+		0x96, 0x2b, 0xfd, 0xcf, 0x00, 0xe3, 0x1f, 0xac,
+		0x66, 0x14, 0x30, 0x86, 0x49, 0xdb, 0x01, 0x8b,
+		0x07, 0xdd, 0x00, 0x9d, 0x0d, 0x5c, 0x19, 0x11,
+		0xe8, 0x44, 0x2b, 0x25, 0x70, 0xed, 0x7c, 0x33,
+		0x0d, 0xe3, 0x34, 0x93, 0x63, 0xad, 0x26, 0xb1,
+		0x11, 0x91, 0x34, 0x2e, 0x1d, 0x50, 0xaa, 0xd4,
+		0xef, 0x3a, 0x6d, 0xd7, 0x33, 0x20, 0x0d, 0x3f,
+		0x9b, 0xdd, 0xc3, 0xa5, 0xc5, 0xf1, 0x99, 0xdc,
+		0xea, 0x52, 0xda, 0x55, 0xea, 0xa2, 0x7a, 0xc5,
+		0x78, 0x44, 0x4a, 0x02, 0x33, 0x19, 0x62, 0x37,
+		0xf8, 0x8b, 0xd1, 0x0c, 0x21, 0xdf, 0x40, 0x19,
+		0x81, 0xea, 0xfb, 0x1c, 0xa7, 0xcc, 0x60, 0xfe,
+		0x63, 0x25, 0x8f, 0xf3, 0x73, 0x0f, 0x45, 0xe6,
+		0x6a, 0x18, 0xbf, 0xbe, 0xad, 0x92, 0x2a, 0x1e,
+		0x15, 0x65, 0x6f, 0xef, 0x92, 0xcd, 0x0e, 0x19,
+		0x3d, 0x42, 0xa8, 0xfc, 0x0d, 0x32, 0x58, 0xe0,
+		0x56, 0x9f, 0xd6, 0x9b, 0x8b, 0xec, 0xe0, 0x45,
+		0x4d, 0x7e, 0x73, 0x87, 0xff, 0x74, 0x92, 0x59,
+		0x60, 0x13, 0x93, 0xda, 0xec, 0xbf, 0xfa, 0x20,
+		0xb6, 0xe7, 0xdf, 0xc7, 0x10, 0xf5, 0x79, 0xb4,
+		0xd7, 0xac, 0xaf, 0x2b, 0x37, 0x52, 0x30, 0x1d,
+		0xbe, 0x0f, 0x60, 0x77, 0x3d, 0x03, 0x63, 0xa9,
+		0xae, 0xb1, 0xf3, 0xca, 0xca, 0xb4, 0x21, 0xd7,
+		0x6f, 0x2e, 0x5e, 0x9b, 0x68, 0x53, 0x80, 0xab,
+		0x30, 0x23, 0x0a, 0x72, 0x6b, 0xb1, 0xd8, 0x25,
+		0x5d, 0x3a, 0x62, 0x9b, 0x4f, 0x59, 0x3b, 0x79,
+		0xa8, 0x9e, 0x08, 0x6d, 0x37, 0xb0, 0xfc, 0x42,
+		0x51, 0x25, 0x86, 0xbd, 0x54, 0x5a, 0x95, 0x20,
+		0x6c, 0xac, 0xb9, 0x30, 0x1c, 0x03, 0xc9, 0x49,
+		0x38, 0x55, 0x31, 0x49, 0xed, 0xa9, 0x0e, 0xc3,
+		0x65, 0xb4, 0x68, 0x6b, 0x07, 0x4c, 0x0a, 0xf9,
+		0x21, 0x69, 0x7c, 0x9f, 0x28, 0x80, 0xe9, 0x49,
+		0x22, 0x7c, 0xec, 0x97, 0xf7, 0x70, 0xb4, 0xb8,
+		0x25, 0xe7, 0x80, 0x2c, 0x43, 0x24, 0x8a, 0x2e,
+		0xac, 0xa2, 0x84, 0x20, 0xe7, 0xf4, 0x6b, 0x86,
+		0x37, 0x05, 0xc7, 0x59, 0x04, 0x49, 0x2a, 0x99,
+		0x80, 0x46, 0x32, 0x19, 0xe6, 0x30, 0xce, 0xc0,
+		0xef, 0x6e, 0xec, 0xe5, 0x2f, 0x24, 0xc1, 0x78,
+		0x45, 0x02, 0xd3, 0x64, 0x99, 0xf5, 0xc7, 0xbc,
+		0x8f, 0x8c, 0x75, 0xb1, 0x0a, 0xc8, 0xc3, 0xbd,
+		0x5e, 0x7e, 0xbd, 0x0e, 0xdf, 0x4b, 0x96, 0x6a,
+		0xfd, 0x03, 0xdb, 0xd1, 0x31, 0x1e, 0x27, 0xf9,
+		0xe5, 0x83, 0x9a, 0xfc, 0x13, 0x4c, 0xd3, 0x04,
+		0xdb, 0xdb, 0x3f, 0x35, 0x93, 0x4e, 0x14, 0x6b,
+		0x00, 0x5c, 0xb6, 0x11, 0x50, 0xee, 0x61, 0x5c,
+		0x10, 0x5c, 0xd0, 0x90, 0x02, 0x2e, 0x12, 0xe0,
+		0x50, 0x44, 0xad, 0x75, 0xcd, 0x94, 0xcf, 0x92,
+		0xcb, 0xe3, 0xe8, 0x77, 0x4b, 0xd7, 0x1a, 0x7c,
+		0xdd, 0x6b, 0x49, 0x21, 0x7c, 0xe8, 0x2c, 0x25,
+		0x49, 0x86, 0x1e, 0x54, 0xae, 0xfc, 0x0e, 0x80,
+		0xb1, 0xd5, 0xa5, 0x23, 0xcf, 0xcc, 0x0e, 0x11,
+		0xe2, 0x7c, 0x3c, 0x25, 0x78, 0x64, 0x03, 0xa1,
+		0xdd, 0x9f, 0x74, 0x12, 0x7b, 0x21, 0xb5, 0x73,
+		0x15, 0x3c, 0xed, 0xad, 0x07, 0x62, 0x21, 0x79,
+		0xd4, 0x2f, 0x0d, 0x72, 0xe9, 0x7c, 0x6b, 0x96,
+		0x6e, 0xe5, 0x36, 0x4a, 0xd2, 0x38, 0xe1, 0xff,
+		0x6e, 0x26, 0xa4, 0xac, 0x83, 0x07, 0xe6, 0x67,
+		0x74, 0x6c, 0xec, 0x8b, 0x4b, 0x79, 0x33, 0x50,
+		0x2f, 0x8f, 0xa0, 0x8f, 0xfa, 0x38, 0x6a, 0xa2,
+		0x3a, 0x42, 0x85, 0x15, 0x90, 0xd0, 0xb3, 0x0d,
+		0x8a, 0xe4, 0x60, 0x03, 0xef, 0xf9, 0x65, 0x8a,
+		0x4e, 0x50, 0x8c, 0x65, 0xba, 0x61, 0x16, 0xc3,
+		0x93, 0xb7, 0x75, 0x21, 0x98, 0x25, 0x60, 0x6e,
+		0x3d, 0x68, 0xba, 0x7c, 0xe4, 0xf3, 0xd9, 0x9b,
+		0xfb, 0x7a, 0xed, 0x1f, 0xb3, 0x4b, 0x88, 0x74,
+		0x2c, 0xb8, 0x8c, 0x22, 0x95, 0xce, 0x90, 0xf1,
+		0xdb, 0x80, 0xa6, 0x39, 0xae, 0x82, 0xa1, 0xef,
+		0x75, 0xec, 0xfe, 0xf1, 0xe8, 0x04, 0xfd, 0x99,
+		0x1b, 0x5f, 0x45, 0x87, 0x4f, 0xfa, 0xa2, 0x3e,
+		0x3e, 0xb5, 0x01, 0x4b, 0x46, 0xeb, 0x13, 0x9a,
+		0xe4, 0x7d, 0x03, 0x87, 0xb1, 0x59, 0x91, 0x8e,
+		0x37, 0xd3, 0x16, 0xce, 0xef, 0x4b, 0xe9, 0x46,
+		0x8d, 0x2a, 0x50, 0x2f, 0x41, 0xd3, 0x7b, 0xcf,
+		0xf0, 0xb7, 0x8b, 0x65, 0x0f, 0xa3, 0x27, 0x10,
+		0xe9, 0xa9, 0xe9, 0x2c, 0xbe, 0xbb, 0x82, 0xe3,
+		0x7b, 0x0b, 0x81, 0x3e, 0xa4, 0x6a, 0x4f, 0x3b,
+		0xd5, 0x61, 0xf8, 0x47, 0x04, 0x99, 0x5b, 0xff,
+		0xf3, 0x14, 0x6e, 0x57, 0x5b, 0xbf, 0x1b, 0xb4,
+		0x3f, 0xf9, 0x31, 0xf6, 0x95, 0xd5, 0x10, 0xa9,
+		0x72, 0x28, 0x23, 0xa9, 0x6a, 0xa2, 0xcf, 0x7d,
+		0xe3, 0x18, 0x95, 0xda, 0xbc, 0x6f, 0xe9, 0xd8,
+		0xef, 0x49, 0x3f, 0xd3, 0xef, 0x1f, 0xe1, 0x50,
+		0xe8, 0x8a, 0xc0, 0xce, 0xcc, 0xb7, 0x5e, 0x0e,
+		0x8b, 0x95, 0x80, 0xfd, 0x58, 0x2a, 0x9b, 0xc8,
+		0xb4, 0x17, 0x04, 0x46, 0x74, 0xd4, 0x68, 0x91,
+		0x33, 0xc8, 0x31, 0x15, 0x84, 0x16, 0x35, 0x03,
+		0x64, 0x6d, 0xa9, 0x4e, 0x20, 0xeb, 0xa9, 0x3f,
+		0x21, 0x5e, 0x9b, 0x09, 0xc3, 0x45, 0xf8, 0x7c,
+		0x59, 0x62, 0x29, 0x9a, 0x5c, 0xcf, 0xb4, 0x27,
+		0x5e, 0x13, 0xea, 0xb3, 0xef, 0xd9, 0x01, 0x2a,
+		0x65, 0x5f, 0x14, 0xf4, 0xbf, 0x28, 0x89, 0x3d,
+		0xdd, 0x9d, 0x52, 0xbd, 0x9e, 0x5b, 0x3b, 0xd2,
+		0xc2, 0x81, 0x35, 0xb6, 0xac, 0xdd, 0x27, 0xc3,
+		0x7b, 0x01, 0x5a, 0x6d, 0x4c, 0x5e, 0x2c, 0x30,
+		0xcb, 0x3a, 0xfa, 0xc1, 0xd7, 0x31, 0x67, 0x3e,
+		0x08, 0x6a, 0xe8, 0x8c, 0x75, 0xac, 0x1a, 0x6a,
+		0x52, 0xf7, 0x51, 0xcd, 0x85, 0x3f, 0x3c, 0xa7,
+		0xea, 0xbc, 0xd7, 0x18, 0x9e, 0x27, 0x73, 0xe6,
+		0x2b, 0x58, 0xb6, 0xd2, 0x29, 0x68, 0xd5, 0x8f,
+		0x00, 0x4d, 0x55, 0xf6, 0x61, 0x5a, 0xcc, 0x51,
+		0xa6, 0x5e, 0x85, 0xcb, 0x0b, 0xfd, 0x06, 0xca,
+		0xf5, 0xbf, 0x0d, 0x13, 0x74, 0x78, 0x6d, 0x9e,
+		0x20, 0x11, 0x84, 0x3e, 0x78, 0x17, 0x04, 0x4f,
+		0x64, 0x2c, 0x3b, 0x3e, 0x93, 0x7b, 0x58, 0x33,
+		0x07, 0x52, 0xf7, 0x60, 0x6a, 0xa8, 0x3b, 0x19,
+		0x27, 0x7a, 0x93, 0xc5, 0x53, 0xad, 0xec, 0xf6,
+		0xc8, 0x94, 0xee, 0x92, 0xea, 0xee, 0x7e, 0xea,
+		0xb9, 0x5f, 0xac, 0x59, 0x5d, 0x2e, 0x78, 0x53,
+		0x72, 0x81, 0x92, 0xdd, 0x1c, 0x63, 0xbe, 0x02,
+		0xeb, 0xa8, 0x1b, 0x2a, 0x6e, 0x72, 0xe3, 0x2d,
+		0x84, 0x0d, 0x8a, 0x22, 0xf6, 0xba, 0xab, 0x04,
+		0x8e, 0x04, 0x24, 0xdb, 0xcc, 0xe2, 0x69, 0xeb,
+		0x4e, 0xfa, 0x6b, 0x5b, 0xc8, 0xc0, 0xd9, 0x25,
+		0xcb, 0x40, 0x8d, 0x4b, 0x8e, 0xa0, 0xd4, 0x72,
+		0x98, 0x36, 0x46, 0x3b, 0x4f, 0x5f, 0x96, 0x84,
+		0x03, 0x28, 0x86, 0x4d, 0xa1, 0x8a, 0xd7, 0xb2,
+		0x5b, 0x27, 0x01, 0x80, 0x62, 0x49, 0x56, 0xb9,
+		0xa0, 0xa1, 0xe3, 0x6e, 0x22, 0x2a, 0x5d, 0x03,
+		0x86, 0x40, 0x36, 0x22, 0x5e, 0xd2, 0xe5, 0xc0,
+		0x6b, 0xfa, 0xac, 0x80, 0x4e, 0x09, 0x99, 0xbc,
+		0x2f, 0x9b, 0xcc, 0xf3, 0x4e, 0xf7, 0x99, 0x98,
+		0x11, 0x6e, 0x6f, 0x62, 0x22, 0x6b, 0x92, 0x95,
+		0x3b, 0xc3, 0xd2, 0x8e, 0x0f, 0x07, 0xc2, 0x51,
+		0x5c, 0x4d, 0xb2, 0x6e, 0xc0, 0x27, 0x73, 0xcd,
+		0x57, 0xb7, 0xf0, 0xe9, 0x2e, 0xc8, 0xe2, 0x0c,
+		0xd1, 0xb5, 0x0f, 0xff, 0xf9, 0xec, 0x38, 0xba,
+		0x97, 0xd6, 0x94, 0x9b, 0xd1, 0x79, 0xb6, 0x6a,
+		0x01, 0x17, 0xe4, 0x7e, 0xa6, 0xd5, 0x86, 0x19,
+		0xae, 0xf3, 0xf0, 0x62, 0x73, 0xc0, 0xf0, 0x0a,
+		0x7a, 0x96, 0x93, 0x72, 0x89, 0x7e, 0x25, 0x57,
+		0xf8, 0xf7, 0xd5, 0x1e, 0xe5, 0xac, 0xd6, 0x38,
+		0x4f, 0xe8, 0x81, 0xd1, 0x53, 0x41, 0x07, 0x2d,
+		0x58, 0x34, 0x1c, 0xef, 0x74, 0x2e, 0x61, 0xca,
+		0xd3, 0xeb, 0xd6, 0x93, 0x0a, 0xf2, 0xf2, 0x86,
+		0x9c, 0xe3, 0x7a, 0x52, 0xf5, 0x42, 0xf1, 0x8b,
+		0x10, 0xf2, 0x25, 0x68, 0x7e, 0x61, 0xb1, 0x19,
+		0xcf, 0x8f, 0x5a, 0x53, 0xb7, 0x68, 0x4f, 0x1a,
+		0x71, 0xe9, 0x83, 0x91, 0x3a, 0x78, 0x0f, 0xf7,
+		0xd4, 0x74, 0xf5, 0x06, 0xd2, 0x88, 0xb0, 0x06,
+		0xe5, 0xc0, 0xfb, 0xb3, 0x91, 0xad, 0xc0, 0x84,
+		0x31, 0xf2, 0x3a, 0xcf, 0x63, 0xe6, 0x4a, 0xd3,
+		0x78, 0xbe, 0xde, 0x73, 0x3e, 0x02, 0x8e, 0xb8,
+		0x3a, 0xf6, 0x55, 0xa7, 0xf8, 0x5a, 0xb5, 0x0e,
+		0x0c, 0xc5, 0xe5, 0x66, 0xd5, 0xd2, 0x18, 0xf3,
+		0xef, 0xa5, 0xc9, 0x68, 0x69, 0xe0, 0xcd, 0x00,
+		0x33, 0x99, 0x6e, 0xea, 0xcb, 0x06, 0x7a, 0xe1,
+		0xe1, 0x19, 0x0b, 0xe7, 0x08, 0xcd, 0x09, 0x1b,
+		0x85, 0xec, 0xc4, 0xd4, 0x75, 0xf0, 0xd6, 0xfb,
+		0x84, 0x95, 0x07, 0x44, 0xca, 0xa5, 0x2a, 0x6c,
+		0xc2, 0x00, 0x58, 0x08, 0x87, 0x9e, 0x0a, 0xd4,
+		0x06, 0xe2, 0x91, 0x5f, 0xb7, 0x1b, 0x11, 0xfa,
+		0x85, 0xfc, 0x7c, 0xf2, 0x0f, 0x6e, 0x3c, 0x8a,
+		0xe1, 0x0f, 0xa0, 0x33, 0x84, 0xce, 0x81, 0x4d,
+		0x32, 0x4d, 0xeb, 0x41, 0xcf, 0x5a, 0x05, 0x60,
+		0x47, 0x6c, 0x2a, 0xc4, 0x17, 0xd5, 0x16, 0x3a,
+		0xe4, 0xe7, 0xab, 0x84, 0x94, 0x22, 0xff, 0x56,
+		0xb0, 0x0c, 0x92, 0x6c, 0x19, 0x11, 0x4c, 0xb3,
+		0xed, 0x58, 0x48, 0x84, 0x2a, 0xe2, 0x19, 0x2a,
+		0xe1, 0xc0, 0x56, 0x82, 0x3c, 0x83, 0xb4, 0x58,
+		0x2d, 0xf0, 0xb5, 0x1e, 0x76, 0x85, 0x51, 0xc2,
+		0xe4, 0x95, 0x27, 0x96, 0xd1, 0x90, 0xc3, 0x17,
+		0x75, 0xa1, 0xbb, 0x46, 0x5f, 0xa6, 0xf2, 0xef,
+		0x71, 0x56, 0x92, 0xc5, 0x8a, 0x85, 0x52, 0xe4,
+		0x63, 0x21, 0x6f, 0x55, 0x85, 0x2b, 0x6b, 0x0d,
+		0xc9, 0x92, 0x77, 0x67, 0xe3, 0xff, 0x2a, 0x2b,
+		0x90, 0x01, 0x3d, 0x74, 0x63, 0x04, 0x61, 0x3c,
+		0x8e, 0xf8, 0xfc, 0x04, 0xdd, 0x21, 0x85, 0x92,
+		0x1e, 0x4d, 0x51, 0x8d, 0xb5, 0x6b, 0xf1, 0xda,
+		0x96, 0xf5, 0x8e, 0x3c, 0x38, 0x5a, 0xac, 0x9b,
+		0xba, 0x0c, 0x84, 0x5d, 0x50, 0x12, 0xc7, 0xc5,
+		0x7a, 0xcb, 0xb1, 0xfa, 0x16, 0x93, 0xdf, 0x98,
+		0xda, 0x3f, 0x49, 0xa3, 0x94, 0x78, 0x70, 0xc7,
+		0x0b, 0xb6, 0x91, 0xa6, 0x16, 0x2e, 0xcf, 0xfd,
+		0x51, 0x6a, 0x5b, 0xad, 0x7a, 0xdd, 0xa9, 0x48,
+		0x48, 0xac, 0xd6, 0x45, 0xbc, 0x23, 0x31, 0x1d,
+		0x86, 0x54, 0x8a, 0x7f, 0x04, 0x97, 0x71, 0x9e,
+		0xbc, 0x2e, 0x6b, 0xd9, 0x33, 0xc8, 0x20, 0xc9,
+		0xe0, 0x25, 0x86, 0x59, 0x15, 0xcf, 0x63, 0xe5,
+		0x99, 0xf1, 0x24, 0xf1, 0xba, 0xc4, 0x15, 0x02,
+		0xe2, 0xdb, 0xfe, 0x4a, 0xf8, 0x3b, 0x91, 0x13,
+		0x8d, 0x03, 0x81, 0x9f, 0xb3, 0x3f, 0x04, 0x03,
+		0x58, 0xc0, 0xef, 0x27, 0x82, 0x14, 0xd2, 0x7f,
+		0x93, 0x70, 0xb7, 0xb2, 0x02, 0x21, 0xb3, 0x07,
+		0x7f, 0x1c, 0xef, 0x88, 0xee, 0x29, 0x7a, 0x0b,
+		0x3d, 0x75, 0x5a, 0x93, 0xfe, 0x7f, 0x14, 0xf7,
+		0x4e, 0x4b, 0x7f, 0x21, 0x02, 0xad, 0xf9, 0x43,
+		0x29, 0x1a, 0xe8, 0x1b, 0xf5, 0x32, 0xb2, 0x96,
+		0xe6, 0xe8, 0x96, 0x20, 0x9b, 0x96, 0x8e, 0x7b,
+		0xfe, 0xd8, 0xc9, 0x9c, 0x65, 0x16, 0xd6, 0x68,
+		0x95, 0xf8, 0x22, 0xe2, 0xae, 0x84, 0x03, 0xfd,
+		0x87, 0xa2, 0x72, 0x79, 0x74, 0x95, 0xfa, 0xe1,
+		0xfe, 0xd0, 0x4e, 0x3d, 0x39, 0x2e, 0x67, 0x55,
+		0x71, 0x6c, 0x89, 0x33, 0x49, 0x0c, 0x1b, 0x46,
+		0x92, 0x31, 0x6f, 0xa6, 0xf0, 0x09, 0xbd, 0x2d,
+		0xe2, 0xca, 0xda, 0x18, 0x33, 0xce, 0x67, 0x37,
+		0xfd, 0x6f, 0xcb, 0x9d, 0xbd, 0x42, 0xbc, 0xb2,
+		0x9c, 0x28, 0xcd, 0x65, 0x3c, 0x61, 0xbc, 0xde,
+		0x9d, 0xe1, 0x2a, 0x3e, 0xbf, 0xee, 0x3c, 0xcb,
+		0xb1, 0x50, 0xa9, 0x2c, 0xbe, 0xb5, 0x43, 0xd0,
+		0xec, 0x29, 0xf9, 0x16, 0x6f, 0x31, 0xd9, 0x9b,
+		0x92, 0xb1, 0x32, 0xae, 0x0f, 0xb6, 0x9d, 0x0e,
+		0x25, 0x7f, 0x89, 0x1f, 0x1d, 0x01, 0x68, 0xab,
+		0x3d, 0xd1, 0x74, 0x5b, 0x4c, 0x38, 0x7f, 0x3d,
+		0x33, 0xa5, 0xa2, 0x9f, 0xda, 0x84, 0xa5, 0x82,
+		0x2d, 0x16, 0x66, 0x46, 0x08, 0x30, 0x14, 0x48,
+		0x5e, 0xca, 0xe3, 0xf4, 0x8c, 0xcb, 0x32, 0xc6,
+		0xf1, 0x43, 0x62, 0xc6, 0xef, 0x16, 0xfa, 0x43,
+		0xae, 0x9c, 0x53, 0xe3, 0x49, 0x45, 0x80, 0xfd,
+		0x1d, 0x8c, 0xa9, 0x6d, 0x77, 0x76, 0xaa, 0x40,
+		0xc4, 0x4e, 0x7b, 0x78, 0x6b, 0xe0, 0x1d, 0xce,
+		0x56, 0x3d, 0xf0, 0x11, 0xfe, 0x4f, 0x6a, 0x6d,
+		0x0f, 0x4f, 0x90, 0x38, 0x92, 0x17, 0xfa, 0x56,
+		0x12, 0xa6, 0xa1, 0x0a, 0xea, 0x2f, 0x50, 0xf9,
+		0x60, 0x66, 0x6c, 0x7d, 0x5a, 0x08, 0x8e, 0x3c,
+		0xf3, 0xf0, 0x33, 0x02, 0x11, 0x02, 0xfe, 0x4c,
+		0x56, 0x2b, 0x9f, 0x0c, 0xbd, 0x65, 0x8a, 0x83,
+		0xde, 0x7c, 0x05, 0x26, 0x93, 0x19, 0xcc, 0xf3,
+		0x71, 0x0e, 0xad, 0x2f, 0xb3, 0xc9, 0x38, 0x50,
+		0x64, 0xd5, 0x4c, 0x60, 0x5f, 0x02, 0x13, 0x34,
+		0xc9, 0x75, 0xc4, 0x60, 0xab, 0x2e, 0x17, 0x7d
+};
+
+static const uint8_t AES_CBC_ciphertext_2048B[] = {
+		0x8b, 0x55, 0xbd, 0xfd, 0x2b, 0x35, 0x76, 0x5c,
+		0xd1, 0x90, 0xd7, 0x6a, 0x63, 0x1e, 0x39, 0x71,
+		0x0d, 0x5c, 0xd8, 0x03, 0x00, 0x75, 0xf1, 0x07,
+		0x03, 0x8d, 0x76, 0xeb, 0x3b, 0x00, 0x1e, 0x33,
+		0x88, 0xfc, 0x8f, 0x08, 0x4d, 0x33, 0xf1, 0x3c,
+		0xee, 0xd0, 0x5d, 0x19, 0x8b, 0x3c, 0x50, 0x86,
+		0xfd, 0x8d, 0x58, 0x21, 0xb4, 0xae, 0x0f, 0x81,
+		0xe9, 0x9f, 0xc9, 0xc0, 0x90, 0xf7, 0x04, 0x6f,
+		0x39, 0x1d, 0x8a, 0x3f, 0x8d, 0x32, 0x23, 0xb5,
+		0x1f, 0xcc, 0x8a, 0x12, 0x2d, 0x46, 0x82, 0x5e,
+		0x6a, 0x34, 0x8c, 0xb1, 0x93, 0x70, 0x3b, 0xde,
+		0x55, 0xaf, 0x16, 0x35, 0x99, 0x84, 0xd5, 0x88,
+		0xc9, 0x54, 0xb1, 0xb2, 0xd3, 0xeb, 0x9e, 0x55,
+		0x9a, 0xa9, 0xa7, 0xf5, 0xda, 0x29, 0xcf, 0xe1,
+		0x98, 0x64, 0x45, 0x77, 0xf2, 0x12, 0x69, 0x8f,
+		0x78, 0xd8, 0x82, 0x41, 0xb2, 0x9f, 0xe2, 0x1c,
+		0x63, 0x9b, 0x24, 0x81, 0x67, 0x95, 0xa2, 0xff,
+		0x26, 0x9d, 0x65, 0x48, 0x61, 0x30, 0x66, 0x41,
+		0x68, 0x84, 0xbb, 0x59, 0x14, 0x8e, 0x9a, 0x62,
+		0xb6, 0xca, 0xda, 0xbe, 0x7c, 0x41, 0x52, 0x6e,
+		0x1b, 0x86, 0xbf, 0x08, 0xeb, 0x37, 0x84, 0x60,
+		0xe4, 0xc4, 0x1e, 0xa8, 0x4c, 0x84, 0x60, 0x2f,
+		0x70, 0x90, 0xf2, 0x26, 0xe7, 0x65, 0x0c, 0xc4,
+		0x58, 0x36, 0x8e, 0x4d, 0xdf, 0xff, 0x9a, 0x39,
+		0x93, 0x01, 0xcf, 0x6f, 0x6d, 0xde, 0xef, 0x79,
+		0xb0, 0xce, 0xe2, 0x98, 0xdb, 0x85, 0x8d, 0x62,
+		0x9d, 0xb9, 0x63, 0xfd, 0xf0, 0x35, 0xb5, 0xa9,
+		0x1b, 0xf9, 0xe5, 0xd4, 0x2e, 0x22, 0x2d, 0xcc,
+		0x42, 0xbf, 0x0e, 0x51, 0xf7, 0x15, 0x07, 0x32,
+		0x75, 0x5b, 0x74, 0xbb, 0x00, 0xef, 0xd4, 0x66,
+		0x8b, 0xad, 0x71, 0x53, 0x94, 0xd7, 0x7d, 0x2c,
+		0x40, 0x3e, 0x69, 0xa0, 0x4c, 0x86, 0x5e, 0x06,
+		0xed, 0xdf, 0x22, 0xe2, 0x24, 0x25, 0x4e, 0x9b,
+		0x5f, 0x49, 0x74, 0xba, 0xed, 0xb1, 0xa6, 0xeb,
+		0xae, 0x3f, 0xc6, 0x9e, 0x0b, 0x29, 0x28, 0x9a,
+		0xb6, 0xb2, 0x74, 0x58, 0xec, 0xa6, 0x4a, 0xed,
+		0xe5, 0x10, 0x00, 0x85, 0xe1, 0x63, 0x41, 0x61,
+		0x30, 0x7c, 0x97, 0xcf, 0x75, 0xcf, 0xb6, 0xf3,
+		0xf7, 0xda, 0x35, 0x3f, 0x85, 0x8c, 0x64, 0xca,
+		0xb7, 0xea, 0x7f, 0xe4, 0xa3, 0x4d, 0x30, 0x84,
+		0x8c, 0x9c, 0x80, 0x5a, 0x50, 0xa5, 0x64, 0xae,
+		0x26, 0xd3, 0xb5, 0x01, 0x73, 0x36, 0x8a, 0x92,
+		0x49, 0xc4, 0x1a, 0x94, 0x81, 0x9d, 0xf5, 0x6c,
+		0x50, 0xe1, 0x58, 0x0b, 0x75, 0xdd, 0x6b, 0x6a,
+		0xca, 0x69, 0xea, 0xc3, 0x33, 0x90, 0x9f, 0x3b,
+		0x65, 0x5d, 0x5e, 0xee, 0x31, 0xb7, 0x32, 0xfd,
+		0x56, 0x83, 0xb6, 0xfb, 0xa8, 0x04, 0xfc, 0x1e,
+		0x11, 0xfb, 0x02, 0x23, 0x53, 0x49, 0x45, 0xb1,
+		0x07, 0xfc, 0xba, 0xe7, 0x5f, 0x5d, 0x2d, 0x7f,
+		0x9e, 0x46, 0xba, 0xe9, 0xb0, 0xdb, 0x32, 0x04,
+		0xa4, 0xa7, 0x98, 0xab, 0x91, 0xcd, 0x02, 0x05,
+		0xf5, 0x74, 0x31, 0x98, 0x83, 0x3d, 0x33, 0x11,
+		0x0e, 0xe3, 0x8d, 0xa8, 0xc9, 0x0e, 0xf3, 0xb9,
+		0x47, 0x67, 0xe9, 0x79, 0x2b, 0x34, 0xcd, 0x9b,
+		0x45, 0x75, 0x29, 0xf0, 0xbf, 0xcc, 0xda, 0x3a,
+		0x91, 0xb2, 0x15, 0x27, 0x7a, 0xe5, 0xf5, 0x6a,
+		0x5e, 0xbe, 0x2c, 0x98, 0xe8, 0x40, 0x96, 0x4f,
+		0x8a, 0x09, 0xfd, 0xf6, 0xb2, 0xe7, 0x45, 0xb6,
+		0x08, 0xc1, 0x69, 0xe1, 0xb3, 0xc4, 0x24, 0x34,
+		0x07, 0x85, 0xd5, 0xa9, 0x78, 0xca, 0xfa, 0x4b,
+		0x01, 0x19, 0x4d, 0x95, 0xdc, 0xa5, 0xc1, 0x9c,
+		0xec, 0x27, 0x5b, 0xa6, 0x54, 0x25, 0xbd, 0xc8,
+		0x0a, 0xb7, 0x11, 0xfb, 0x4e, 0xeb, 0x65, 0x2e,
+		0xe1, 0x08, 0x9c, 0x3a, 0x45, 0x44, 0x33, 0xef,
+		0x0d, 0xb9, 0xff, 0x3e, 0x68, 0x9c, 0x61, 0x2b,
+		0x11, 0xb8, 0x5c, 0x47, 0x0f, 0x94, 0xf2, 0xf8,
+		0x0b, 0xbb, 0x99, 0x18, 0x85, 0xa3, 0xba, 0x44,
+		0xf3, 0x79, 0xb3, 0x63, 0x2c, 0x1f, 0x2a, 0x35,
+		0x3b, 0x23, 0x98, 0xab, 0xf4, 0x16, 0x36, 0xf8,
+		0xde, 0x86, 0xa4, 0xd4, 0x75, 0xff, 0x51, 0xf9,
+		0xeb, 0x42, 0x5f, 0x55, 0xe2, 0xbe, 0xd1, 0x5b,
+		0xb5, 0x38, 0xeb, 0xb4, 0x4d, 0xec, 0xec, 0x99,
+		0xe1, 0x39, 0x43, 0xaa, 0x64, 0xf7, 0xc9, 0xd8,
+		0xf2, 0x9a, 0x71, 0x43, 0x39, 0x17, 0xe8, 0xa8,
+		0xa2, 0xe2, 0xa4, 0x2c, 0x18, 0x11, 0x49, 0xdf,
+		0x18, 0xdd, 0x85, 0x6e, 0x65, 0x96, 0xe2, 0xba,
+		0xa1, 0x0a, 0x2c, 0xca, 0xdc, 0x5f, 0xe4, 0xf4,
+		0x35, 0x03, 0xb2, 0xa9, 0xda, 0xcf, 0xb7, 0x6d,
+		0x65, 0x82, 0x82, 0x67, 0x9d, 0x0e, 0xf3, 0xe8,
+		0x85, 0x6c, 0x69, 0xb8, 0x4c, 0xa6, 0xc6, 0x2e,
+		0x40, 0xb5, 0x54, 0x28, 0x95, 0xe4, 0x57, 0xe0,
+		0x5b, 0xf8, 0xde, 0x59, 0xe0, 0xfd, 0x89, 0x48,
+		0xac, 0x56, 0x13, 0x54, 0xb9, 0x1b, 0xf5, 0x59,
+		0x97, 0xb6, 0xb3, 0xe8, 0xac, 0x2d, 0xfc, 0xd2,
+		0xea, 0x57, 0x96, 0x57, 0xa8, 0x26, 0x97, 0x2c,
+		0x01, 0x89, 0x56, 0xea, 0xec, 0x8c, 0x53, 0xd5,
+		0xd7, 0x9e, 0xc9, 0x98, 0x0b, 0xad, 0x03, 0x75,
+		0xa0, 0x6e, 0x98, 0x8b, 0x97, 0x8d, 0x8d, 0x85,
+		0x7d, 0x74, 0xa7, 0x2d, 0xde, 0x67, 0x0c, 0xcd,
+		0x54, 0xb8, 0x15, 0x7b, 0xeb, 0xf5, 0x84, 0xb9,
+		0x78, 0xab, 0xd8, 0x68, 0x91, 0x1f, 0x6a, 0xa6,
+		0x28, 0x22, 0xf7, 0x00, 0x49, 0x00, 0xbe, 0x41,
+		0x71, 0x0a, 0xf5, 0xe7, 0x9f, 0xb4, 0x11, 0x41,
+		0x3f, 0xcd, 0xa9, 0xa9, 0x01, 0x8b, 0x6a, 0xeb,
+		0x54, 0x4c, 0x58, 0x92, 0x68, 0x02, 0x0e, 0xe9,
+		0xed, 0x65, 0x4c, 0xfb, 0x95, 0x48, 0x58, 0xa2,
+		0xaa, 0x57, 0x69, 0x13, 0x82, 0x0c, 0x2c, 0x4b,
+		0x5d, 0x4e, 0x18, 0x30, 0xef, 0x1c, 0xb1, 0x9d,
+		0x05, 0x05, 0x02, 0x1c, 0x97, 0xc9, 0x48, 0xfe,
+		0x5e, 0x7b, 0x77, 0xa3, 0x1f, 0x2a, 0x81, 0x42,
+		0xf0, 0x4b, 0x85, 0x12, 0x9c, 0x1f, 0x44, 0xb1,
+		0x14, 0x91, 0x92, 0x65, 0x77, 0xb1, 0x87, 0xa2,
+		0xfc, 0xa4, 0xe7, 0xd2, 0x9b, 0xf2, 0x17, 0xf0,
+		0x30, 0x1c, 0x8d, 0x33, 0xbc, 0x25, 0x28, 0x48,
+		0xfd, 0x30, 0x79, 0x0a, 0x99, 0x3e, 0xb4, 0x0f,
+		0x1e, 0xa6, 0x68, 0x76, 0x19, 0x76, 0x29, 0xac,
+		0x5d, 0xb8, 0x1e, 0x42, 0xd6, 0x85, 0x04, 0xbf,
+		0x64, 0x1c, 0x2d, 0x53, 0xe9, 0x92, 0x78, 0xf8,
+		0xc3, 0xda, 0x96, 0x92, 0x10, 0x6f, 0x45, 0x85,
+		0xaf, 0x5e, 0xcc, 0xa8, 0xc0, 0xc6, 0x2e, 0x73,
+		0x51, 0x3f, 0x5e, 0xd7, 0x52, 0x33, 0x71, 0x12,
+		0x6d, 0x85, 0xee, 0xea, 0x85, 0xa8, 0x48, 0x2b,
+		0x40, 0x64, 0x6d, 0x28, 0x73, 0x16, 0xd7, 0x82,
+		0xd9, 0x90, 0xed, 0x1f, 0xa7, 0x5c, 0xb1, 0x5c,
+		0x27, 0xb9, 0x67, 0x8b, 0xb4, 0x17, 0x13, 0x83,
+		0x5f, 0x09, 0x72, 0x0a, 0xd7, 0xa0, 0xec, 0x81,
+		0x59, 0x19, 0xb9, 0xa6, 0x5a, 0x37, 0x34, 0x14,
+		0x47, 0xf6, 0xe7, 0x6c, 0xd2, 0x09, 0x10, 0xe7,
+		0xdd, 0xbb, 0x02, 0xd1, 0x28, 0xfa, 0x01, 0x2c,
+		0x93, 0x64, 0x2e, 0x1b, 0x4c, 0x02, 0x52, 0xcb,
+		0x07, 0xa1, 0xb6, 0x46, 0x02, 0x80, 0xd9, 0x8f,
+		0x5c, 0x62, 0xbe, 0x78, 0x9e, 0x75, 0xc4, 0x97,
+		0x91, 0x39, 0x12, 0x65, 0xb9, 0x3b, 0xc2, 0xd1,
+		0xaf, 0xf2, 0x1f, 0x4e, 0x4d, 0xd1, 0xf0, 0x9f,
+		0xb7, 0x12, 0xfd, 0xe8, 0x75, 0x18, 0xc0, 0x9d,
+		0x8c, 0x70, 0xff, 0x77, 0x05, 0xb6, 0x1a, 0x1f,
+		0x96, 0x48, 0xf6, 0xfe, 0xd5, 0x5d, 0x98, 0xa5,
+		0x72, 0x1c, 0x84, 0x76, 0x3e, 0xb8, 0x87, 0x37,
+		0xdd, 0xd4, 0x3a, 0x45, 0xdd, 0x09, 0xd8, 0xe7,
+		0x09, 0x2f, 0x3e, 0x33, 0x9e, 0x7b, 0x8c, 0xe4,
+		0x85, 0x12, 0x4e, 0xf8, 0x06, 0xb7, 0xb1, 0x85,
+		0x24, 0x96, 0xd8, 0xfe, 0x87, 0x92, 0x81, 0xb1,
+		0xa3, 0x38, 0xb9, 0x56, 0xe1, 0xf6, 0x36, 0x41,
+		0xbb, 0xd6, 0x56, 0x69, 0x94, 0x57, 0xb3, 0xa4,
+		0xca, 0xa4, 0xe1, 0x02, 0x3b, 0x96, 0x71, 0xe0,
+		0xb2, 0x2f, 0x85, 0x48, 0x1b, 0x4a, 0x41, 0x80,
+		0x4b, 0x9c, 0xe0, 0xc9, 0x39, 0xb8, 0xb1, 0xca,
+		0x64, 0x77, 0x46, 0x58, 0xe6, 0x84, 0xd5, 0x2b,
+		0x65, 0xce, 0xe9, 0x09, 0xa3, 0xaa, 0xfb, 0x83,
+		0xa9, 0x28, 0x68, 0xfd, 0xcd, 0xfd, 0x76, 0x83,
+		0xe1, 0x20, 0x22, 0x77, 0x3a, 0xa3, 0xb2, 0x93,
+		0x14, 0x91, 0xfc, 0xe2, 0x17, 0x63, 0x2b, 0xa6,
+		0x29, 0x38, 0x7b, 0x9b, 0x8b, 0x15, 0x77, 0xd6,
+		0xaa, 0x92, 0x51, 0x53, 0x50, 0xff, 0xa0, 0x35,
+		0xa0, 0x59, 0x7d, 0xf0, 0x11, 0x23, 0x49, 0xdf,
+		0x5a, 0x21, 0xc2, 0xfe, 0x35, 0xa0, 0x1d, 0xe2,
+		0xae, 0xa2, 0x8a, 0x61, 0x5b, 0xf7, 0xf1, 0x1c,
+		0x1c, 0xec, 0xc4, 0xf6, 0xdc, 0xaa, 0xc8, 0xc2,
+		0xe5, 0xa1, 0x2e, 0x14, 0xe5, 0xc6, 0xc9, 0x73,
+		0x03, 0x78, 0xeb, 0xed, 0xe0, 0x3e, 0xc5, 0xf4,
+		0xf1, 0x50, 0xb2, 0x01, 0x91, 0x96, 0xf5, 0xbb,
+		0xe1, 0x32, 0xcd, 0xa8, 0x66, 0xbf, 0x73, 0x85,
+		0x94, 0xd6, 0x7e, 0x68, 0xc5, 0xe4, 0xed, 0xd5,
+		0xe3, 0x67, 0x4c, 0xa5, 0xb3, 0x1f, 0xdf, 0xf8,
+		0xb3, 0x73, 0x5a, 0xac, 0xeb, 0x46, 0x16, 0x24,
+		0xab, 0xca, 0xa4, 0xdd, 0x87, 0x0e, 0x24, 0x83,
+		0x32, 0x04, 0x4c, 0xd8, 0xda, 0x7d, 0xdc, 0xe3,
+		0x01, 0x93, 0xf3, 0xc1, 0x5b, 0xbd, 0xc3, 0x1d,
+		0x40, 0x62, 0xde, 0x94, 0x03, 0x85, 0x91, 0x2a,
+		0xa0, 0x25, 0x10, 0xd3, 0x32, 0x9f, 0x93, 0x00,
+		0xa7, 0x8a, 0xfa, 0x77, 0x7c, 0xaf, 0x4d, 0xc8,
+		0x7a, 0xf3, 0x16, 0x2b, 0xba, 0xeb, 0x74, 0x51,
+		0xb8, 0xdd, 0x32, 0xad, 0x68, 0x7d, 0xdd, 0xca,
+		0x60, 0x98, 0xc9, 0x9b, 0xb6, 0x5d, 0x4d, 0x3a,
+		0x66, 0x8a, 0xbe, 0x05, 0xf9, 0x0c, 0xc5, 0xba,
+		0x52, 0x82, 0x09, 0x1f, 0x5a, 0x66, 0x89, 0x69,
+		0xa3, 0x5d, 0x93, 0x50, 0x7d, 0x44, 0xc3, 0x2a,
+		0xb8, 0xab, 0xec, 0xa6, 0x5a, 0xae, 0x4a, 0x6a,
+		0xcd, 0xfd, 0xb6, 0xff, 0x3d, 0x98, 0x05, 0xd9,
+		0x5b, 0x29, 0xc4, 0x6f, 0xe0, 0x76, 0xe2, 0x3f,
+		0xec, 0xd7, 0xa4, 0x91, 0x63, 0xf5, 0x4e, 0x4b,
+		0xab, 0x20, 0x8c, 0x3a, 0x41, 0xed, 0x8b, 0x4b,
+		0xb9, 0x01, 0x21, 0xc0, 0x6d, 0xfd, 0x70, 0x5b,
+		0x20, 0x92, 0x41, 0x89, 0x74, 0xb7, 0xe9, 0x8b,
+		0xfc, 0x6d, 0x17, 0x3f, 0x7f, 0x89, 0x3d, 0x6b,
+		0x8f, 0xbc, 0xd2, 0x57, 0xe9, 0xc9, 0x6e, 0xa7,
+		0x19, 0x26, 0x18, 0xad, 0xef, 0xb5, 0x87, 0xbf,
+		0xb8, 0xa8, 0xd6, 0x7d, 0xdd, 0x5f, 0x94, 0x54,
+		0x09, 0x92, 0x2b, 0xf5, 0x04, 0xf7, 0x36, 0x69,
+		0x8e, 0xf4, 0xdc, 0x1d, 0x6e, 0x55, 0xbb, 0xe9,
+		0x13, 0x05, 0x83, 0x35, 0x9c, 0xed, 0xcf, 0x8c,
+		0x26, 0x8c, 0x7b, 0xc7, 0x0b, 0xba, 0xfd, 0xe2,
+		0x84, 0x5c, 0x2a, 0x79, 0x43, 0x99, 0xb2, 0xc3,
+		0x82, 0x87, 0xc8, 0xcd, 0x37, 0x6d, 0xa1, 0x2b,
+		0x39, 0xb2, 0x38, 0x99, 0xd9, 0xfc, 0x02, 0x15,
+		0x55, 0x21, 0x62, 0x59, 0xeb, 0x00, 0x86, 0x08,
+		0x20, 0xbe, 0x1a, 0x62, 0x4d, 0x7e, 0xdf, 0x68,
+		0x73, 0x5b, 0x5f, 0xaf, 0x84, 0x96, 0x2e, 0x1f,
+		0x6b, 0x03, 0xc9, 0xa6, 0x75, 0x18, 0xe9, 0xd4,
+		0xbd, 0xc8, 0xec, 0x9a, 0x5a, 0xb3, 0x99, 0xab,
+		0x5f, 0x7c, 0x08, 0x7f, 0x69, 0x4d, 0x52, 0xa2,
+		0x30, 0x17, 0x3b, 0x16, 0x15, 0x1b, 0x11, 0x62,
+		0x3e, 0x80, 0x4b, 0x85, 0x7c, 0x9c, 0xd1, 0x3a,
+		0x13, 0x01, 0x5e, 0x45, 0xf1, 0xc8, 0x5f, 0xcd,
+		0x0e, 0x21, 0xf5, 0x82, 0xd4, 0x7b, 0x5c, 0x45,
+		0x27, 0x6b, 0xef, 0xfe, 0xb8, 0xc0, 0x6f, 0xdc,
+		0x60, 0x7b, 0xe4, 0xd5, 0x75, 0x71, 0xe6, 0xe8,
+		0x7d, 0x6b, 0x6d, 0x80, 0xaf, 0x76, 0x41, 0x58,
+		0xb7, 0xac, 0xb7, 0x13, 0x2f, 0x81, 0xcc, 0xf9,
+		0x19, 0x97, 0xe8, 0xee, 0x40, 0x91, 0xfc, 0x89,
+		0x13, 0x1e, 0x67, 0x9a, 0xdb, 0x8f, 0x8f, 0xc7,
+		0x4a, 0xc9, 0xaf, 0x2f, 0x67, 0x01, 0x3c, 0xb8,
+		0xa8, 0x3e, 0x78, 0x93, 0x1b, 0xdf, 0xbb, 0x34,
+		0x0b, 0x1a, 0xfa, 0xc2, 0x2d, 0xc5, 0x1c, 0xec,
+		0x97, 0x4f, 0x48, 0x41, 0x15, 0x0e, 0x75, 0xed,
+		0x66, 0x8c, 0x17, 0x7f, 0xb1, 0x48, 0x13, 0xc1,
+		0xfb, 0x60, 0x06, 0xf9, 0x72, 0x41, 0x3e, 0xcf,
+		0x6e, 0xb6, 0xc8, 0xeb, 0x4b, 0x5a, 0xd2, 0x0c,
+		0x28, 0xda, 0x02, 0x7a, 0x46, 0x21, 0x42, 0xb5,
+		0x34, 0xda, 0xcb, 0x5e, 0xbd, 0x66, 0x5c, 0xca,
+		0xff, 0x52, 0x43, 0x89, 0xf9, 0x10, 0x9a, 0x9e,
+		0x9b, 0xe3, 0xb0, 0x51, 0xe9, 0xf3, 0x0a, 0x35,
+		0x77, 0x54, 0xcc, 0xac, 0xa6, 0xf1, 0x2e, 0x36,
+		0x89, 0xac, 0xc5, 0xc6, 0x62, 0x5a, 0xc0, 0x6d,
+		0xc4, 0xe1, 0xf7, 0x64, 0x30, 0xff, 0x11, 0x40,
+		0x13, 0x89, 0xd8, 0xd7, 0x73, 0x3f, 0x93, 0x08,
+		0x68, 0xab, 0x66, 0x09, 0x1a, 0xea, 0x78, 0xc9,
+		0x52, 0xf2, 0xfd, 0x93, 0x1b, 0x94, 0xbe, 0x5c,
+		0xe5, 0x00, 0x6e, 0x00, 0xb9, 0xea, 0x27, 0xaa,
+		0xb3, 0xee, 0xe3, 0xc8, 0x6a, 0xb0, 0xc1, 0x8e,
+		0x9b, 0x54, 0x40, 0x10, 0x96, 0x06, 0xe8, 0xb3,
+		0xf5, 0x55, 0x77, 0xd7, 0x5c, 0x94, 0xc1, 0x74,
+		0xf3, 0x07, 0x64, 0xac, 0x1c, 0xde, 0xc7, 0x22,
+		0xb0, 0xbf, 0x2a, 0x5a, 0xc0, 0x8f, 0x8a, 0x83,
+		0x50, 0xc2, 0x5e, 0x97, 0xa0, 0xbe, 0x49, 0x7e,
+		0x47, 0xaf, 0xa7, 0x20, 0x02, 0x35, 0xa4, 0x57,
+		0xd9, 0x26, 0x63, 0xdb, 0xf1, 0x34, 0x42, 0x89,
+		0x36, 0xd1, 0x77, 0x6f, 0xb1, 0xea, 0x79, 0x7e,
+		0x95, 0x10, 0x5a, 0xee, 0xa3, 0xae, 0x6f, 0xba,
+		0xa9, 0xef, 0x5a, 0x7e, 0x34, 0x03, 0x04, 0x07,
+		0x92, 0xd6, 0x07, 0x79, 0xaa, 0x14, 0x90, 0x97,
+		0x05, 0x4d, 0xa6, 0x27, 0x10, 0x5c, 0x25, 0x24,
+		0xcb, 0xcc, 0xf6, 0x77, 0x9e, 0x43, 0x23, 0xd4,
+		0x98, 0xef, 0x22, 0xa8, 0xad, 0xf2, 0x26, 0x08,
+		0x59, 0x69, 0xa4, 0xc3, 0x97, 0xe0, 0x5c, 0x6f,
+		0xeb, 0x3d, 0xd4, 0x62, 0x6e, 0x80, 0x61, 0x02,
+		0xf4, 0xfc, 0x94, 0x79, 0xbb, 0x4e, 0x6d, 0xd7,
+		0x30, 0x5b, 0x10, 0x11, 0x5a, 0x3d, 0xa7, 0x50,
+		0x1d, 0x9a, 0x13, 0x5f, 0x4f, 0xa8, 0xa7, 0xb6,
+		0x39, 0xc7, 0xea, 0xe6, 0x19, 0x61, 0x69, 0xc7,
+		0x9a, 0x3a, 0xeb, 0x9d, 0xdc, 0xf7, 0x06, 0x37,
+		0xbd, 0xac, 0xe3, 0x18, 0xff, 0xfe, 0x11, 0xdb,
+		0x67, 0x42, 0xb4, 0xea, 0xa8, 0xbd, 0xb0, 0x76,
+		0xd2, 0x74, 0x32, 0xc2, 0xa4, 0x9c, 0xe7, 0x60,
+		0xc5, 0x30, 0x9a, 0x57, 0x66, 0xcd, 0x0f, 0x02,
+		0x4c, 0xea, 0xe9, 0xd3, 0x2a, 0x5c, 0x09, 0xc2,
+		0xff, 0x6a, 0xde, 0x5d, 0xb7, 0xe9, 0x75, 0x6b,
+		0x29, 0x94, 0xd6, 0xf7, 0xc3, 0xdf, 0xfb, 0x70,
+		0xec, 0xb5, 0x8c, 0xb0, 0x78, 0x7a, 0xee, 0x52,
+		0x5f, 0x8c, 0xae, 0x85, 0xe5, 0x98, 0xa2, 0xb7,
+		0x7c, 0x02, 0x2a, 0xcc, 0x9e, 0xde, 0x99, 0x5f,
+		0x84, 0x20, 0xbb, 0xdc, 0xf2, 0xd2, 0x13, 0x46,
+		0x3c, 0xd6, 0x4d, 0xe7, 0x50, 0xef, 0x55, 0xc3,
+		0x96, 0x9f, 0xec, 0x6c, 0xd8, 0xe2, 0xea, 0xed,
+		0xc7, 0x33, 0xc9, 0xb3, 0x1c, 0x4f, 0x1d, 0x83,
+		0x1d, 0xe4, 0xdd, 0xb2, 0x24, 0x8f, 0xf9, 0xf5
+};
+
+
+static const uint8_t HMAC_SHA256_ciphertext_64B_digest[] = {
+		0xc5, 0x6d, 0x4f, 0x29, 0xf4, 0xd2, 0xcc, 0x87,
+		0x3c, 0x81, 0x02, 0x6d, 0x38, 0x7a, 0x67, 0x3e,
+		0x95, 0x9c, 0x5c, 0x8f, 0xda, 0x5c, 0x06, 0xe0,
+		0x65, 0xf1, 0x6c, 0x51, 0x52, 0x49, 0x3e, 0x5f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_128B_digest[] = {
+		0x76, 0x64, 0x2d, 0x69, 0x71, 0x5d, 0x6a, 0xd8,
+		0x9f, 0x74, 0x11, 0x2f, 0x58, 0xe0, 0x4a, 0x2f,
+		0x6c, 0x88, 0x5e, 0x4d, 0x9c, 0x79, 0x83, 0x1c,
+		0x8a, 0x14, 0xd0, 0x07, 0xfb, 0xbf, 0x6c, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_256B_digest[] = {
+		0x05, 0xa7, 0x44, 0xcd, 0x91, 0x8c, 0x95, 0xcf,
+		0x7b, 0x8f, 0xd3, 0x90, 0x86, 0x7e, 0x7b, 0xb9,
+		0x05, 0xd6, 0x6e, 0x7a, 0xc1, 0x7b, 0x26, 0xff,
+		0xd3, 0x4b, 0xe0, 0x22, 0x8b, 0xa8, 0x47, 0x52
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_512B_digest[] = {
+		0x08, 0xb7, 0x29, 0x54, 0x18, 0x7e, 0x97, 0x49,
+		0xc6, 0x7c, 0x9f, 0x94, 0xa5, 0x4f, 0xa2, 0x25,
+		0xd0, 0xe2, 0x30, 0x7b, 0xad, 0x93, 0xc9, 0x12,
+		0x0f, 0xf0, 0xf0, 0x71, 0xc2, 0xf6, 0x53, 0x8f
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_768B_digest[] = {
+		0xe4, 0x3e, 0x73, 0x93, 0x03, 0xaf, 0x6f, 0x9c,
+		0xca, 0x57, 0x3b, 0x4a, 0x6e, 0x83, 0x58, 0xf5,
+		0x66, 0xc2, 0xb4, 0xa7, 0xe0, 0xee, 0x63, 0x6b,
+		0x48, 0xb7, 0x50, 0x45, 0x69, 0xdf, 0x5c, 0x5b
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1024B_digest[] = {
+		0x03, 0xb9, 0x96, 0x26, 0xdc, 0x1c, 0xab, 0xe2,
+		0xf5, 0x70, 0x55, 0x15, 0x67, 0x6e, 0x48, 0x11,
+		0xe7, 0x67, 0xea, 0xfa, 0x5c, 0x6b, 0x28, 0x22,
+		0xc9, 0x0e, 0x67, 0x04, 0xb3, 0x71, 0x7f, 0x88
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1280B_digest[] = {
+		0x01, 0x91, 0xb8, 0x78, 0xd3, 0x21, 0x74, 0xa5,
+		0x1c, 0x8b, 0xd4, 0xd2, 0xc0, 0x49, 0xd7, 0xd2,
+		0x16, 0x46, 0x66, 0x85, 0x50, 0x6d, 0x08, 0xcc,
+		0xc7, 0x0a, 0xa3, 0x71, 0xcc, 0xde, 0xee, 0xdc
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1536B_digest[] = {
+		0xf2, 0xe5, 0xe9, 0x57, 0x53, 0xd7, 0x69, 0x28,
+		0x7b, 0x69, 0xb5, 0x49, 0xa3, 0x31, 0x56, 0x5f,
+		0xa4, 0xe9, 0x87, 0x26, 0x2f, 0xe0, 0x2d, 0xd6,
+		0x08, 0x44, 0x01, 0x71, 0x0c, 0x93, 0x85, 0x84
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_1792B_digest[] = {
+		0xf6, 0x57, 0x62, 0x01, 0xbf, 0x2d, 0xea, 0x4a,
+		0xef, 0x43, 0x85, 0x60, 0x18, 0xdf, 0x8b, 0xb4,
+		0x60, 0xc0, 0xfd, 0x2f, 0x90, 0x15, 0xe6, 0x91,
+		0x56, 0x61, 0x68, 0x7f, 0x5e, 0x92, 0xa8, 0xdd
+};
+
+static const uint8_t HMAC_SHA256_ciphertext_2048B_digest[] = {
+		0x81, 0x1a, 0x29, 0xbc, 0x6b, 0x9f, 0xbb, 0xb8,
+		0xef, 0x71, 0x7b, 0x1f, 0x6f, 0xd4, 0x7e, 0x68,
+		0x3a, 0x9c, 0xb9, 0x98, 0x22, 0x81, 0xfa, 0x95,
+		0xee, 0xbc, 0x7f, 0x23, 0x29, 0x88, 0x76, 0xb8
+};
+
+struct crypto_data_params {
+	const char *name;
+	uint16_t length;
+	const char *plaintext;
+	struct crypto_expected_output {
+		const uint8_t *ciphertext;
+		const uint8_t *digest;
+	} expected;
+};
+
+#define MAX_PACKET_SIZE_INDEX	10
+
+struct crypto_data_params aes_cbc_hmac_sha256_output[MAX_PACKET_SIZE_INDEX] = {
+	{ "64B", 64, &plaintext_quote[sizeof(plaintext_quote) - 1 - 64],
+		{ AES_CBC_ciphertext_64B, HMAC_SHA256_ciphertext_64B_digest } },
+	{ "128B", 128, &plaintext_quote[sizeof(plaintext_quote) - 1 - 128],
+		{ AES_CBC_ciphertext_128B, HMAC_SHA256_ciphertext_128B_digest } },
+	{ "256B", 256, &plaintext_quote[sizeof(plaintext_quote) - 1 - 256],
+		{ AES_CBC_ciphertext_256B, HMAC_SHA256_ciphertext_256B_digest } },
+	{ "512B", 512, &plaintext_quote[sizeof(plaintext_quote) - 1 - 512],
+		{ AES_CBC_ciphertext_512B, HMAC_SHA256_ciphertext_512B_digest } },
+	{ "768B", 768, &plaintext_quote[sizeof(plaintext_quote) - 1 - 768],
+		{ AES_CBC_ciphertext_768B, HMAC_SHA256_ciphertext_768B_digest } },
+	{ "1024B", 1024, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1024],
+		{ AES_CBC_ciphertext_1024B, HMAC_SHA256_ciphertext_1024B_digest } },
+	{ "1280B", 1280, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1280],
+		{ AES_CBC_ciphertext_1280B, HMAC_SHA256_ciphertext_1280B_digest } },
+	{ "1536B", 1536, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1536],
+		{ AES_CBC_ciphertext_1536B, HMAC_SHA256_ciphertext_1536B_digest } },
+	{ "1792B", 1792, &plaintext_quote[sizeof(plaintext_quote) - 1 - 1792],
+		{ AES_CBC_ciphertext_1792B, HMAC_SHA256_ciphertext_1792B_digest } },
+	{ "2048B", 2048, &plaintext_quote[sizeof(plaintext_quote) - 1 - 2048],
+		{ AES_CBC_ciphertext_2048B, HMAC_SHA256_ciphertext_2048B_digest } }
+};
+
+
+static int
+test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
+{
+	uint32_t num_to_submit = 2048;
+	struct rte_mbuf *rx_mbufs[num_to_submit], *tx_mbufs[num_to_submit];
+	uint64_t failed_polls, retries, start_cycles, end_cycles, total_cycles = 0;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, burst_size, num_sent, num_received;
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
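+	/* AES-128-CBC key: its length happens to equal the IV length (16B) */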
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+		&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	/* Generate Crypto op data structure(s) */
+	for (b = 0; b < num_to_submit ; b++) {
+		tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+				(const char *)data_params[0].expected.ciphertext,
+				data_params[0].length, 0);
+		TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+		ut_params->digest = (uint8_t *)rte_pktmbuf_append(tx_mbufs[b],
+				DIGEST_BYTE_LENGTH_SHA256);
+		TEST_ASSERT_NOT_NULL(ut_params->digest, "no room to append digest");
+
+		rte_memcpy(ut_params->digest, data_params[0].expected.digest,
+			DIGEST_BYTE_LENGTH_SHA256);
+
+		struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+				ts_params->mbuf_ol_pool, RTE_PKTMBUF_OL_CRYPTO);
+		TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+		struct rte_crypto_op *cop = &ol->op.crypto;
+
+		rte_crypto_op_attach_session(cop, ut_params->sess);
+
+		cop->digest.data = ut_params->digest;
+		cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(tx_mbufs[b],
+				data_params[0].length);
+		cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
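+		/* Prepend the IV; each mbuf then holds [ IV | payload | digest ]
+		 * and the cipher/hash offsets below skip over the IV. */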
+		cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+				CIPHER_IV_LENGTH_AES_CBC);
+		cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+		cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+		rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+		cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_cipher.length = data_params[0].length;
+
+		cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+		cop->data.to_hash.length = data_params[0].length;
+
+		rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+	}
+
+	printf("\nTest to measure the IA cycle cost using AES128_CBC_SHA256_HMAC "
+			"algorithm with a constant request size of %u.",
+			data_params[0].length);
+	printf("\nThis test will keep retries at 0 and only measure IA cycle "
+			"cost for each request.");
+	printf("\nDev No\tQP No\tNum Sent\tNum Received\tTx/Rx burst");
+	printf("\tRetries (Device Busy)\tAverage IA cycle cost "
+			"(assuming 0 retries)");
+	for (b = 2; b <= 128 ; b *= 2) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+		burst_size = b;
+		total_cycles = 0;
+		while (num_sent < num_to_submit) {
+			start_cycles = rte_rdtsc_precise();
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0,
+					&tx_mbufs[num_sent],
+					((num_to_submit-num_sent) < burst_size) ?
+					num_to_submit-num_sent : burst_size);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += (end_cycles - start_cycles);
+			/*
+			 * Wait until requests have been sent.
+			 */
+			rte_delay_ms(1);
+
+			start_cycles = rte_rdtsc_precise();
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+			end_cycles = rte_rdtsc_precise();
+			total_cycles += end_cycles - start_cycles;
+		}
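+		/*
+		 * Drain any remaining responses. The AESNI multi-buffer PMD
+		 * flushes completed operations on enqueue, so kick it with
+		 * empty bursts while polling for the outstanding requests.
+		 */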
+		while (num_received != num_to_submit) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+						0, rx_mbufs, burst_size);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+
+		printf("\n%u\t%u\t\%u\t\t%u\t\t%u", dev_num, 0,
+					num_sent, num_received, burst_size);
+		printf("\t\t%"PRIu64, retries);
+		printf("\t\t\t%"PRIu64, total_cycles/num_received);
+	}
+	printf("\n");
+
+	/* Free all of the submitted mbufs, not just the outstanding ones */
+	for (b = 0; b < num_to_submit ; b++) {
+		struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+		while (ol != NULL) {
+			struct rte_mbuf_offload *next = ol->next;
+
+			rte_pktmbuf_offload_free(ol);
+			ol = next;
+		}
+		rte_pktmbuf_free(tx_mbufs[b]);
+	}
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(uint16_t dev_num)
+{
+	uint16_t index;
+	uint32_t burst_sent, burst_received;
+	uint32_t b, num_sent, num_received, throughput;
+	uint64_t failed_polls, retries, start_cycles, end_cycles;
+	const uint64_t mhz = rte_get_tsc_hz()/1000000;
+	double mmps;
+	struct rte_mbuf *rx_mbufs[DEFAULT_BURST_SIZE], *tx_mbufs[DEFAULT_BURST_SIZE];
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct crypto_data_params *data_params = aes_cbc_hmac_sha256_output;
+
+	if (rte_cryptodev_count() == 0) {
+		printf("\nNo crypto devices available. Is kernel driver loaded?\n");
+		return TEST_FAILED;
+	}
+
+	/* Setup Cipher Parameters */
+	ut_params->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	ut_params->cipher_xform.next = &ut_params->auth_xform;
+
+	ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+	ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
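+	/* AES-128-CBC key: its length happens to equal the IV length (16B) */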
+	ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
+
+	/* Setup HMAC Parameters */
+	ut_params->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	ut_params->auth_xform.next = NULL;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
+	ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+	ut_params->auth_xform.auth.key.data = hmac_sha256_key;
+	ut_params->auth_xform.auth.key.length = HMAC_KEY_LENGTH_SHA256;
+	ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
+
+	/* Create Crypto session*/
+	ut_params->sess = rte_cryptodev_session_create(ts_params->dev_id,
+			&ut_params->cipher_xform);
+
+	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+
+	printf("\nThroughput test which will continually attempt to send "
+			"AES128_CBC_SHA256_HMAC requests with a constant burst "
+			"size of %u while varying payload sizes", DEFAULT_BURST_SIZE);
+	printf("\nDev No\tQP No\tReq Size(B)\tNum Sent\tNum Received\t"
+			"Mrps\tThoughput(Mbps)");
+	printf("\tRetries (Attempted a burst, but the device was busy)");
+	for (index = 0; index < MAX_PACKET_SIZE_INDEX; index++) {
+		num_sent = 0;
+		num_received = 0;
+		retries = 0;
+		failed_polls = 0;
+
+		/* Generate Crypto op data structure(s) */
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			tx_mbufs[b] = setup_test_string(ts_params->mbuf_mp,
+					data_params[index].plaintext,
+					data_params[index].length,
+					0);
+			TEST_ASSERT_NOT_NULL(tx_mbufs[b], "Failed to allocate tx_buf");
+
+			ut_params->digest = (uint8_t *)rte_pktmbuf_append(
+				tx_mbufs[b], DIGEST_BYTE_LENGTH_SHA256);
+			TEST_ASSERT_NOT_NULL(ut_params->digest,
+					"no room to append digest");
+
+			rte_memcpy(ut_params->digest,
+					data_params[index].expected.digest,
+					DIGEST_BYTE_LENGTH_SHA256);
+
+			struct rte_mbuf_offload *ol = rte_pktmbuf_offload_alloc(
+						ts_params->mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+			TEST_ASSERT_NOT_NULL(ol, "Failed to allocate pktmbuf offload");
+
+			struct rte_crypto_op *cop = &ol->op.crypto;
+
+			rte_crypto_op_attach_session(cop, ut_params->sess);
+
+			cop->digest.data = ut_params->digest;
+			cop->digest.phys_addr = rte_pktmbuf_mtophys_offset(
+				tx_mbufs[b], data_params[index].length);
+			cop->digest.length = DIGEST_BYTE_LENGTH_SHA256;
+
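+			/* Prepend the IV; the mbuf then holds
+			 * [ IV | plaintext | digest ] */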
+			cop->iv.data = (uint8_t *)rte_pktmbuf_prepend(tx_mbufs[b],
+					CIPHER_IV_LENGTH_AES_CBC);
+			cop->iv.phys_addr = rte_pktmbuf_mtophys(tx_mbufs[b]);
+			cop->iv.length = CIPHER_IV_LENGTH_AES_CBC;
+
+			rte_memcpy(cop->iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+
+			cop->data.to_cipher.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_cipher.length = data_params[index].length;
+
+			cop->data.to_hash.offset = CIPHER_IV_LENGTH_AES_CBC;
+			cop->data.to_hash.length = data_params[index].length;
+
+			rte_pktmbuf_offload_attach(tx_mbufs[b], ol);
+		}
+		start_cycles = rte_rdtsc_precise();
+		while (num_sent < DEFAULT_NUM_REQS_TO_SUBMIT) {
+			burst_sent = rte_cryptodev_enqueue_burst(dev_num, 0, tx_mbufs,
+				((DEFAULT_NUM_REQS_TO_SUBMIT-num_sent) < DEFAULT_BURST_SIZE) ?
+				DEFAULT_NUM_REQS_TO_SUBMIT-num_sent : DEFAULT_BURST_SIZE);
+			if (burst_sent == 0)
+				retries++;
+			else
+				num_sent += burst_sent;
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num,
+					0, rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		while (num_received != DEFAULT_NUM_REQS_TO_SUBMIT) {
+			if (gbl_cryptodev_preftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD)
+				rte_cryptodev_enqueue_burst(dev_num, 0, NULL, 0);
+
+			burst_received = rte_cryptodev_dequeue_burst(dev_num, 0,
+						rx_mbufs, DEFAULT_BURST_SIZE);
+			if (burst_received == 0)
+				failed_polls++;
+			else
+				num_received += burst_received;
+		}
+		end_cycles = rte_rdtsc_precise();
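+		/* mmps: millions of requests (mbufs) per second;
+		 * throughput (Mbps) = mmps * payload length in bits */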
+		mmps = (double)num_received*mhz/(end_cycles - start_cycles);
+		throughput = mmps*data_params[index].length*8;
+		printf("\n%u\t%u\t%u\t\t%u\t%u", dev_num, 0,
+				data_params[index].length, num_sent, num_received);
+		printf("\t%.2f\t%u", mmps, throughput);
+		printf("\t\t%"PRIu64, retries);
+		for (b = 0; b < DEFAULT_BURST_SIZE ; b++) {
+			struct rte_mbuf_offload *ol = tx_mbufs[b]->offload_ops;
+
+			while (ol != NULL) {
+				struct rte_mbuf_offload *next = ol->next;
+
+				rte_pktmbuf_offload_free(ol);
+				ol = next;
+			}
+			rte_pktmbuf_free(tx_mbufs[b]);
+		}
+	}
+	printf("\n");
+	return TEST_SUCCESS;
+}
+
+static int
+test_perf_encrypt_digest_vary_req_size(void)
+{
+	return test_perf_AES_CBC_HMAC_SHA256_encrypt_digest_vary_req_size(
+			testsuite_params.dev_id);
+}
+
+static int
+test_perf_vary_burst_size(void)
+{
+	return test_perf_crypto_qp_vary_burst_size(testsuite_params.dev_id);
+}
+
+
+static struct unit_test_suite cryptodev_testsuite  = {
+	.suite_name = "Crypto Device Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_encrypt_digest_vary_req_size),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_perf_vary_burst_size),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static int
+perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_preftest_devtype = RTE_CRYPTODEV_QAT_PMD;
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
+static struct test_command cryptodev_aesni_mb_perf_cmd = {
+	.command = "cryptodev_aesni_mb_perftest",
+	.callback = perftest_aesni_mb_cryptodev,
+};
+
+static struct test_command cryptodev_qat_perf_cmd = {
+	.command = "cryptodev_qat_perftest",
+	.callback = perftest_qat_cryptodev,
+};
+
+REGISTER_TEST_COMMAND(cryptodev_aesni_mb_perf_cmd);
+REGISTER_TEST_COMMAND(cryptodev_qat_perf_cmd);
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 388cf11..2d98958 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -4020,7 +4020,7 @@ test_close_bonded_device(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	if (test_params->pkt_eth_hdr != NULL) {
@@ -4029,7 +4029,7 @@ testsuite_teardown(void)
 	}
 
 	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	remove_slaves_and_stop_bonded_device();
 }
 
 static void
@@ -4993,7 +4993,7 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 460539d..713368d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -453,7 +453,7 @@ test_setup(void)
 	return 0;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -467,8 +467,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 /*
@@ -1390,7 +1388,8 @@ static struct unit_test_suite link_bonding_mode4_test_suite  = {
 		TEST_CASE_NAMED("test_mode4_tx_burst", test_mode4_tx_burst_wrapper),
 		TEST_CASE_NAMED("test_mode4_marker", test_mode4_marker_wrapper),
 		TEST_CASE_NAMED("test_mode4_expired", test_mode4_expired_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
 
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e6714b4..0a3162e 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -586,7 +586,7 @@ test_setup(void)
 	return TEST_SUCCESS;
 }
 
-static int
+static void
 testsuite_teardown(void)
 {
 	struct slave_conf *port;
@@ -600,8 +600,6 @@ testsuite_teardown(void)
 
 	FOR_EACH_PORT(i, port)
 		rte_eth_dev_stop(port->port_id);
-
-	return 0;
 }
 
 static int
@@ -661,7 +659,8 @@ static struct unit_test_suite link_bonding_rssconf_test_suite  = {
 		TEST_CASE_NAMED("test_setup", test_setup_wrapper),
 		TEST_CASE_NAMED("test_rss", test_rss_wrapper),
 		TEST_CASE_NAMED("test_rss_lazy", test_rss_lazy_wrapper),
-		{ NULL, NULL, NULL, NULL, NULL } /**< NULL terminate unit test array */
+
+		TEST_CASES_END()
 	}
 };
 
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* [dpdk-dev] [PATCH v8 10/10] l2fwd-crypto: crypto
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
                                 ` (8 preceding siblings ...)
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
@ 2015-11-25 13:25               ` Declan Doherty
  2015-11-25 17:44               ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Thomas Monjalon
  10 siblings, 0 replies; 115+ messages in thread
From: Declan Doherty @ 2015-11-25 13:25 UTC (permalink / raw)
  To: dev

This patch creates a new sample application, based on the l2fwd
application, which performs specified crypto operations on the IP
packet payloads it forwards.
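
An example invocation might look like the following (illustrative
only; option names as defined in the application's usage text):

    ./build/l2fwd-crypto -c 0x3 -n 4 -- -p 0x3 --cdev_type AESNI_MB \
        --chain CIPHER_HASH --cipher_algo AES_CBC --cipher_op ENCRYPT \
        --auth_algo SHA256_HMAC --auth_op GENERATE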

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>

---
 MAINTAINERS                    |    1 +
 examples/Makefile              |    1 +
 examples/l2fwd-crypto/Makefile |   50 ++
 examples/l2fwd-crypto/main.c   | 1489 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 1541 insertions(+)
 create mode 100644 examples/l2fwd-crypto/Makefile
 create mode 100644 examples/l2fwd-crypto/main.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 74aa169..0685a56 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -206,6 +206,7 @@ F: lib/librte_cryptodev
 F: docs/guides/cryptodevs
 F: app/test/test_cryptodev.c
 F: app/test/test_cryptodev_perf.c
+F: examples/l2fwd-crypto
 
 Drivers
 -------
diff --git a/examples/Makefile b/examples/Makefile
index 830e31a..2da9b49 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -75,5 +75,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_XEN_DOM0) += vhost_xen
 DIRS-y += vmdq
 DIRS-y += vmdq_dcb
 DIRS-$(CONFIG_RTE_LIBRTE_POWER) += vm_power_manager
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += l2fwd-crypto
 
 include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/l2fwd-crypto/Makefile b/examples/l2fwd-crypto/Makefile
new file mode 100644
index 0000000..e8224ca
--- /dev/null
+++ b/examples/l2fwd-crypto/Makefile
@@ -0,0 +1,50 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = l2fwd-crypto
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
new file mode 100644
index 0000000..0b4414b
--- /dev/null
+++ b/examples/l2fwd-crypto/main.c
@@ -0,0 +1,1489 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_ip.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_offload.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_per_lcore.h>
+#include <rte_prefetch.h>
+#include <rte_random.h>
+#include <rte_ring.h>
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define NB_MBUF   8192
+
+#define MAX_PKT_BURST 32
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 128
+#define RTE_TEST_TX_DESC_DEFAULT 512
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint64_t l2fwd_enabled_port_mask;
+static uint64_t l2fwd_enabled_crypto_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+
+struct pkt_buffer {
+	unsigned len;
+	struct rte_mbuf *buffer[MAX_PKT_BURST];
+};
+
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+enum l2fwd_crypto_xform_chain {
+	L2FWD_CRYPTO_CIPHER_HASH,
+	L2FWD_CRYPTO_HASH_CIPHER
+};
+
+/** l2fwd crypto application command line options */
+struct l2fwd_crypto_options {
+	unsigned portmask;
+	unsigned nb_ports_per_lcore;
+	unsigned refresh_period;
+	unsigned single_lcore:1;
+	unsigned no_stats_printing:1;
+
+	enum rte_cryptodev_type cdev_type;
+	unsigned sessionless:1;
+
+	enum l2fwd_crypto_xform_chain xform_chain;
+
+	struct rte_crypto_xform cipher_xform;
+	uint8_t ckey_data[32];
+
+	struct rte_crypto_key iv_key;
+	uint8_t ivkey_data[16];
+
+	struct rte_crypto_xform auth_xform;
+	uint8_t akey_data[128];
+};
+
+/** l2fwd crypto lcore params */
+struct l2fwd_crypto_params {
+	uint8_t dev_id;
+	uint8_t qp_id;
+
+	unsigned digest_length;
+	unsigned block_size;
+
+	struct rte_crypto_key iv_key;
+	struct rte_cryptodev_session *session;
+};
+
+/** lcore configuration */
+struct lcore_queue_conf {
+	unsigned nb_rx_ports;
+	unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+
+	unsigned nb_crypto_devs;
+	unsigned cryptodev_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct pkt_buffer crypto_pkt_buf[RTE_MAX_ETHPORTS];
+	struct pkt_buffer tx_pkt_buf[RTE_MAX_ETHPORTS];
+} __rte_cache_aligned;
+
+struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static const struct rte_eth_conf port_conf = {
+	.rxmode = {
+		.split_hdr_size = 0,
+		.header_split   = 0, /**< Header Split disabled */
+		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
+		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
+		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
+		.hw_strip_crc   = 0, /**< CRC stripped by hardware */
+	},
+	.txmode = {
+		.mq_mode = ETH_MQ_TX_NONE,
+	},
+};
+
+struct rte_mempool *l2fwd_pktmbuf_pool;
+struct rte_mempool *l2fwd_mbuf_ol_pool;
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+	uint64_t tx;
+	uint64_t rx;
+
+	uint64_t crypto_enqueued;
+	uint64_t crypto_dequeued;
+
+	uint64_t dropped;
+} __rte_cache_aligned;
+
+struct l2fwd_crypto_statistics {
+	uint64_t enqueued;
+	uint64_t dequeued;
+
+	uint64_t errors;
+} __rte_cache_aligned;
+
+struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+struct l2fwd_crypto_statistics crypto_statistics[RTE_MAX_ETHPORTS];
+
+/* A tsc-based timer responsible for triggering statistics printout */
+#define TIMER_MILLISECOND 2000000ULL /* around 1ms at 2 GHz */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+
+/* default period is 10 seconds */
+static int64_t timer_period = 10 * TIMER_MILLISECOND * 1000;
+
+uint64_t total_packets_dropped = 0, total_packets_tx = 0, total_packets_rx = 0,
+	total_packets_enqueued = 0, total_packets_dequeued = 0,
+	total_packets_errors = 0;
+
+/* Print out statistics on packets dropped */
+static void
+print_stats(void)
+{
+	unsigned portid;
+	uint64_t cdevid;
+
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+	/* Clear screen and move to top left */
+	printf("%s%s", clr, topLeft);
+
+	printf("\nPort statistics ====================================");
+
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+			continue;
+		printf("\nStatistics for port %u ------------------------------"
+			   "\nPackets sent: %32"PRIu64
+			   "\nPackets received: %28"PRIu64
+			   "\nPackets dropped: %29"PRIu64,
+			   portid,
+			   port_statistics[portid].tx,
+			   port_statistics[portid].rx,
+			   port_statistics[portid].dropped);
+
+		total_packets_dropped += port_statistics[portid].dropped;
+		total_packets_tx += port_statistics[portid].tx;
+		total_packets_rx += port_statistics[portid].rx;
+	}
+	printf("\nCrypto statistics ==================================");
+
+	for (cdevid = 0; cdevid < RTE_CRYPTO_MAX_DEVS; cdevid++) {
+		/* skip disabled ports */
+		if ((l2fwd_enabled_crypto_mask & (1lu << cdevid)) == 0)
+			continue;
+		printf("\nStatistics for cryptodev %"PRIu64
+				" -------------------------"
+			   "\nPackets enqueued: %28"PRIu64
+			   "\nPackets dequeued: %28"PRIu64
+			   "\nPackets errors: %30"PRIu64,
+			   cdevid,
+			   crypto_statistics[cdevid].enqueued,
+			   crypto_statistics[cdevid].dequeued,
+			   crypto_statistics[cdevid].errors);
+
+		total_packets_enqueued += crypto_statistics[cdevid].enqueued;
+		total_packets_dequeued += crypto_statistics[cdevid].dequeued;
+		total_packets_errors += crypto_statistics[cdevid].errors;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %22"PRIu64
+		   "\nTotal packets enqueued: %22"PRIu64
+		   "\nTotal packets dequeued: %22"PRIu64
+		   "\nTotal packets sent: %26"PRIu64
+		   "\nTotal packets dropped: %23"PRIu64
+		   "\nTotal packets crypto errors: %17"PRIu64,
+		   total_packets_rx,
+		   total_packets_enqueued,
+		   total_packets_dequeued,
+		   total_packets_tx,
+		   total_packets_dropped,
+		   total_packets_errors);
+	printf("\n====================================================\n");
+}
+
+
+
+static int
+l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+
+	pkt_buffer = (struct rte_mbuf **)
+			qconf->crypto_pkt_buf[cparams->dev_id].buffer;
+
+	ret = rte_cryptodev_enqueue_burst(cparams->dev_id, cparams->qp_id,
+			pkt_buffer, (uint16_t) n);
+	crypto_statistics[cparams->dev_id].enqueued += ret;
+	if (unlikely(ret < n)) {
+		crypto_statistics[cparams->dev_id].errors += (n - ret);
+		do {
+			rte_pktmbuf_offload_free(pkt_buffer[ret]->offload_ops);
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+static int
+l2fwd_crypto_enqueue(struct rte_mbuf *m, struct l2fwd_crypto_params *cparams)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->crypto_pkt_buf[cparams->dev_id].len;
+	qconf->crypto_pkt_buf[cparams->dev_id].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (len == MAX_PKT_BURST) {
+		l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
+		len = 0;
+	}
+
+	qconf->crypto_pkt_buf[cparams->dev_id].len = len;
+	return 0;
+}
+
+static int
+l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
+		struct rte_mbuf_offload *ol,
+		struct l2fwd_crypto_params *cparams)
+{
+	struct ether_hdr *eth_hdr;
+	struct ipv4_hdr *ip_hdr;
+
+	unsigned ipdata_offset, pad_len, data_len;
+	char *padding;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (eth_hdr->ether_type != rte_cpu_to_be_16(ETHER_TYPE_IPv4))
+		return -1;
+
+	ipdata_offset = sizeof(struct ether_hdr);
+
+	ip_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
+			ipdata_offset);
+
+	ipdata_offset += (ip_hdr->version_ihl & IPV4_HDR_IHL_MASK)
+			* IPV4_IHL_MULTIPLIER;
+
+
+	/* Zero pad data to be crypto'd so it is block aligned */
+	data_len = rte_pktmbuf_data_len(m) - ipdata_offset;
+	pad_len = data_len % cparams->block_size ? cparams->block_size -
+			(data_len % cparams->block_size) : 0;
+
+	if (pad_len) {
+		padding = rte_pktmbuf_append(m, pad_len);
+		if (unlikely(!padding))
+			return -1;
+
+		data_len += pad_len;
+		memset(padding, 0, pad_len);
+	}
+
+	/* Set crypto operation data parameters */
+	rte_crypto_op_attach_session(&ol->op.crypto, cparams->session);
+
+	/* Append space for digest to end of packet */
+	ol->op.crypto.digest.data = (uint8_t *)rte_pktmbuf_append(m,
+			cparams->digest_length);
+	ol->op.crypto.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
+			rte_pktmbuf_pkt_len(m) - cparams->digest_length);
+	ol->op.crypto.digest.length = cparams->digest_length;
+
+	ol->op.crypto.iv.data = cparams->iv_key.data;
+	ol->op.crypto.iv.phys_addr = cparams->iv_key.phys_addr;
+	ol->op.crypto.iv.length = cparams->iv_key.length;
+
+	ol->op.crypto.data.to_cipher.offset = ipdata_offset;
+	ol->op.crypto.data.to_cipher.length = data_len;
+
+	ol->op.crypto.data.to_hash.offset = ipdata_offset;
+	ol->op.crypto.data.to_hash.length = data_len;
+
+	rte_pktmbuf_offload_attach(m, ol);
+
+	return l2fwd_crypto_enqueue(m, cparams);
+}
+
+
+/* Send the burst of packets on an output interface */
+static int
+l2fwd_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint8_t port)
+{
+	struct rte_mbuf **pkt_buffer;
+	unsigned ret;
+	unsigned queueid = 0;
+
+	pkt_buffer = (struct rte_mbuf **)qconf->tx_pkt_buf[port].buffer;
+
+	ret = rte_eth_tx_burst(port, (uint16_t) queueid, pkt_buffer,
+			(uint16_t)n);
+	port_statistics[port].tx += ret;
+	if (unlikely(ret < n)) {
+		port_statistics[port].dropped += (n - ret);
+		do {
+			rte_pktmbuf_free(pkt_buffer[ret]);
+		} while (++ret < n);
+	}
+
+	return 0;
+}
+
+/* Enqueue packets for TX and prepare them to be sent */
+static int
+l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
+{
+	unsigned lcore_id, len;
+	struct lcore_queue_conf *qconf;
+
+	lcore_id = rte_lcore_id();
+
+	qconf = &lcore_queue_conf[lcore_id];
+	len = qconf->tx_pkt_buf[port].len;
+	qconf->tx_pkt_buf[port].buffer[len] = m;
+	len++;
+
+	/* enough pkts to be sent */
+	if (unlikely(len == MAX_PKT_BURST)) {
+		l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
+		len = 0;
+	}
+
+	qconf->tx_pkt_buf[port].len = len;
+	return 0;
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
+{
+	struct ether_hdr *eth;
+	void *tmp;
+	unsigned dst_port;
+
+	dst_port = l2fwd_dst_ports[portid];
+	eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	/* 02:00:00:00:00:xx */
+	tmp = &eth->d_addr.addr_bytes[0];
+	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+	/* src addr */
+	ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+
+	l2fwd_send_packet(m, (uint8_t) dst_port);
+}
+
+/** Generate random key */
+static void
+generate_random_key(uint8_t *key, unsigned length)
+{
+	unsigned i;
+
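+	/* rand() is not a cryptographically secure source; it is only
+	 * adequate for a sample application such as this one. */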
+	for (i = 0; i < length; i++)
+		key[i] = rand() & 0xff;
+}
+
+static struct rte_cryptodev_session *
+initialize_crypto_session(struct l2fwd_crypto_options *options,
+		uint8_t cdev_id)
+{
+	struct rte_crypto_xform *first_xform;
+
+	if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
+		first_xform = &options->cipher_xform;
+		first_xform->next = &options->auth_xform;
+	} else {
+		first_xform = &options->auth_xform;
+		first_xform->next = &options->cipher_xform;
+	}
+
+	/* Create the session from the assembled transform chain */
+	return rte_cryptodev_session_create(cdev_id, first_xform);
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options);
+
+/* main processing loop */
+static void
+l2fwd_main_loop(struct l2fwd_crypto_options *options)
+{
+	struct rte_mbuf *m, *pkts_burst[MAX_PKT_BURST];
+	unsigned lcore_id = rte_lcore_id();
+	uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+	unsigned i, j, portid, nb_rx;
+	struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+	struct l2fwd_crypto_params *cparams;
+	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
+
+	if (qconf->nb_rx_ports == 0) {
+		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+	l2fwd_crypto_options_print(options);
+
+	for (i = 0; i < qconf->nb_rx_ports; i++) {
+
+		portid = qconf->rx_port_list[i];
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+			portid);
+	}
+
+	for (i = 0; i < qconf->nb_crypto_devs; i++) {
+		port_cparams[i].dev_id = qconf->cryptodev_list[i];
+		port_cparams[i].qp_id = 0;
+
+		port_cparams[i].block_size = 64;
+		port_cparams[i].digest_length = 20;
+
+		port_cparams[i].iv_key.data =
+				(uint8_t *)rte_malloc(NULL, 16, 8);
+		port_cparams[i].iv_key.length = 16;
+		port_cparams[i].iv_key.phys_addr = rte_malloc_virt2phy(
+				(void *)port_cparams[i].iv_key.data);
+		generate_random_key(port_cparams[i].iv_key.data,
+				port_cparams[i].iv_key.length);
+
+		port_cparams[i].session = initialize_crypto_session(options,
+				port_cparams[i].dev_id);
+
+		if (port_cparams[i].session == NULL)
+			return;
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u cryptoid=%u\n", lcore_id,
+				port_cparams[i].dev_id);
+	}
+
+	while (1) {
+
+		cur_tsc = rte_rdtsc();
+
+		/*
+		 * TX burst queue drain
+		 */
+		diff_tsc = cur_tsc - prev_tsc;
+		if (unlikely(diff_tsc > drain_tsc)) {
+
+			for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+				if (qconf->tx_pkt_buf[portid].len == 0)
+					continue;
+				l2fwd_send_burst(&lcore_queue_conf[lcore_id],
+						 qconf->tx_pkt_buf[portid].len,
+						 (uint8_t) portid);
+				qconf->tx_pkt_buf[portid].len = 0;
+			}
+
+			/* if timer is enabled */
+			if (timer_period > 0) {
+
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >=
+						(uint64_t)timer_period)) {
+
+					/* do this only on master core */
+					if (lcore_id == rte_get_master_lcore() &&
+							!options->no_stats_printing) {
+						print_stats();
+						/* reset the timer */
+						timer_tsc = 0;
+					}
+				}
+			}
+
+			prev_tsc = cur_tsc;
+		}
+
+		/*
+		 * Read packet from RX queues
+		 */
+		for (i = 0; i < qconf->nb_rx_ports; i++) {
+			struct rte_mbuf_offload *ol;
+
+			portid = qconf->rx_port_list[i];
+
+			cparams = &port_cparams[i];
+
+			nb_rx = rte_eth_rx_burst((uint8_t) portid, 0,
+						 pkts_burst, MAX_PKT_BURST);
+
+			port_statistics[portid].rx += nb_rx;
+
+			/* Enqueue packets from Crypto device*/
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				ol = rte_pktmbuf_offload_alloc(
+						l2fwd_mbuf_ol_pool,
+						RTE_PKTMBUF_OL_CRYPTO);
+				/*
+				 * If an offload struct can't be allocated,
+				 * drop the rest of the burst; the packets
+				 * already enqueued will still be dequeued
+				 * and processed below.
+				 */
+				if (unlikely(ol == NULL)) {
+					for (; j < nb_rx; j++) {
+						rte_pktmbuf_free(pkts_burst[j]);
+						port_statistics[portid].dropped++;
+					}
+					break;
+				}
+
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				rte_prefetch0((void *)ol);
+
+				l2fwd_simple_crypto_enqueue(m, ol, cparams);
+			}
+
+			/* Dequeue packets from Crypto device */
+			nb_rx = rte_cryptodev_dequeue_burst(
+					cparams->dev_id, cparams->qp_id,
+					pkts_burst, MAX_PKT_BURST);
+			crypto_statistics[cparams->dev_id].dequeued += nb_rx;
+
+			/* Forward crypto'd packets */
+			for (j = 0; j < nb_rx; j++) {
+				m = pkts_burst[j];
+				rte_pktmbuf_offload_free(m->offload_ops);
+				rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+				l2fwd_simple_forward(m, portid);
+			}
+		}
+	}
+}
+
+static int
+l2fwd_launch_one_lcore(void *arg)
+{
+	l2fwd_main_loop((struct l2fwd_crypto_options *)arg);
+	return 0;
+}
+
+/* Display command line arguments usage */
+static void
+l2fwd_crypto_usage(const char *prgname)
+{
+	printf("%s [EAL options] -- --cdev TYPE [optional parameters]\n"
+		"  -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+		"  -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+		"  -s manage all ports from single lcore"
+		"  -t PERIOD: statistics will be refreshed each PERIOD seconds"
+		" (0 to disable, 10 default, 86400 maximum)\n"
+
+		"  --cdev AESNI_MB / QAT\n"
+		"  --chain HASH_CIPHER / CIPHER_HASH\n"
+
+		"  --cipher_algo ALGO\n"
+		"  --cipher_op ENCRYPT / DECRYPT\n"
+		"  --cipher_key KEY\n"
+
+		"  --auth ALGO\n"
+		"  --auth_op GENERATE / VERIFY\n"
+		"  --auth_key KEY\n"
+
+		"  --sessionless\n",
+	       prgname);
+}
+
+/** Parse crypto device type command line argument */
+static int
+parse_cryptodev_type(enum rte_cryptodev_type *type, char *optarg)
+{
+	if (strcmp("AESNI_MB", optarg) == 0) {
+		*type = RTE_CRYPTODEV_AESNI_MB_PMD;
+		return 0;
+	} else if (strcmp("QAT", optarg) == 0) {
+		*type = RTE_CRYPTODEV_QAT_PMD;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto chain xform command line argument */
+static int
+parse_crypto_opt_chain(struct l2fwd_crypto_options *options, char *optarg)
+{
+	if (strcmp("CIPHER_HASH", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+		return 0;
+	} else if (strcmp("HASH_CIPHER", optarg) == 0) {
+		options->xform_chain = L2FWD_CRYPTO_HASH_CIPHER;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse crypto cipher algo option command line argument */
+static int
+parse_cipher_algo(enum rte_crypto_cipher_algorithm *algo, char *optarg)
+{
+	if (strcmp("AES_CBC", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_CBC;
+		return 0;
+	} else if (strcmp("AES_GCM", optarg) == 0) {
+		*algo = RTE_CRYPTO_CIPHER_AES_GCM;
+		return 0;
+	}
+
+	printf("Cipher algorithm  not supported!\n");
+	return -1;
+}
+
+/** Parse crypto cipher operation command line argument */
+static int
+parse_cipher_op(enum rte_crypto_cipher_operation *op, char *optarg)
+{
+	if (strcmp("ENCRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+		return 0;
+	} else if (strcmp("DECRYPT", optarg) == 0) {
+		*op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+		return 0;
+	}
+
+	printf("Cipher operation not supported!\n");
+	return -1;
+}
+
+/** Parse crypto key command line argument */
+static int
+parse_key(struct rte_crypto_key *key __rte_unused,
+		unsigned length __rte_unused, char *arg __rte_unused)
+{
+	printf("Currently an unsupported argument!\n");
+	return -1;
+}
+
+/** Parse crypto authentication algorithm command line argument */
+static int
+parse_auth_algo(enum rte_crypto_auth_algorithm *algo, char *optarg)
+{
+	if (strcmp("SHA1", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1;
+		return 0;
+	} else if (strcmp("SHA1_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+		return 0;
+	} else if (strcmp("SHA224", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224;
+		return 0;
+	} else if (strcmp("SHA224_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA224_HMAC;
+		return 0;
+	} else if (strcmp("SHA256", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256;
+		return 0;
+	} else if (strcmp("SHA256_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		return 0;
+	} else if (strcmp("SHA512", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256;
+		return 0;
+	} else if (strcmp("SHA512_HMAC", optarg) == 0) {
+		*algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
+		return 0;
+	}
+
+	printf("Authentication algorithm specified not supported!\n");
+	return -1;
+}
+
+static int
+parse_auth_op(enum rte_crypto_auth_operation *op, char *optarg)
+{
+	if (strcmp("VERIFY", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_VERIFY;
+		return 0;
+	} else if (strcmp("GENERATE", optarg) == 0) {
+		*op = RTE_CRYPTO_AUTH_OP_VERIFY;
+		return 0;
+	}
+
+	printf("Authentication operation specified not supported!\n");
+	return -1;
+}
+
+/** Parse long options */
+static int
+l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
+		struct option *lgopts, int option_index)
+{
+	if (strcmp(lgopts[option_index].name, "no_stats") == 0) {
+		options->no_stats_printing = 1;
+		return 0;
+	}
+
+	if (strcmp(lgopts[option_index].name, "cdev_type") == 0)
+		return parse_cryptodev_type(&options->cdev_type, optarg);
+
+	else if (strcmp(lgopts[option_index].name, "chain") == 0)
+		return parse_crypto_opt_chain(options, optarg);
+
+	/* Cipher options */
+	else if (strcmp(lgopts[option_index].name, "cipher_algo") == 0)
+		return parse_cipher_algo(&options->cipher_xform.cipher.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_op") == 0)
+		return parse_cipher_op(&options->cipher_xform.cipher.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "cipher_key") == 0)
+		return parse_key(&options->cipher_xform.cipher.key,
+				sizeof(options->ckey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "iv") == 0)
+		return parse_key(&options->iv_key, sizeof(options->ivkey_data),
+				optarg);
+
+	/* Authentication options */
+	else if (strcmp(lgopts[option_index].name, "auth_algo") == 0)
+		return parse_auth_algo(&options->cipher_xform.auth.algo,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_op") == 0)
+		return parse_auth_op(&options->cipher_xform.auth.op,
+				optarg);
+
+	else if (strcmp(lgopts[option_index].name, "auth_key") == 0)
+		return parse_key(&options->auth_xform.auth.key,
+				sizeof(options->akey_data), optarg);
+
+	else if (strcmp(lgopts[option_index].name, "sessionless") == 0) {
+		options->sessionless = 1;
+		return 0;
+	}
+
+	return -1;
+}
+
+/** Parse port mask */
+static int
+l2fwd_crypto_parse_portmask(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long pm;
+
+	/* parse hexadecimal string */
+	pm = strtoul(q_arg, &end, 16);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		pm = 0;
+
+	options->portmask = pm;
+	if (options->portmask == 0) {
+		printf("invalid portmask specified\n");
+		return -1;
+	}
+
+	return pm;
+}
+
+/** Parse number of queues */
+static int
+l2fwd_crypto_parse_nqueue(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	unsigned long n;
+
+	/* parse decimal string */
+	n = strtoul(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+	else if (n >= MAX_RX_QUEUE_PER_LCORE)
+		n = 0;
+
+	options->nb_ports_per_lcore = n;
+	if (options->nb_ports_per_lcore == 0) {
+		printf("invalid number of ports selected\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse timer period */
+static int
+l2fwd_crypto_parse_timer_period(struct l2fwd_crypto_options *options,
+		const char *q_arg)
+{
+	char *end = NULL;
+	int n;
+
+	/* parse number string */
+	n = strtol(q_arg, &end, 10);
+	if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+		n = 0;
+
+	if (n >= MAX_TIMER_PERIOD)
+		n = 0;
+
+	options->refresh_period = n * 1000 * TIMER_MILLISECOND;
+	if (options->refresh_period == 0) {
+		printf("invalid refresh period specified\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Generate default options for application */
+static void
+l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
+{
+	srand(time(NULL));
+
+	options->portmask = 0xffffffff;
+	options->nb_ports_per_lcore = 1;
+	options->refresh_period = 10000;
+	options->single_lcore = 0;
+	options->no_stats_printing = 0;
+
+	options->cdev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+	options->sessionless = 0;
+	options->xform_chain = L2FWD_CRYPTO_CIPHER_HASH;
+
+	/* Cipher Data */
+	options->cipher_xform.type = RTE_CRYPTO_XFORM_CIPHER;
+	options->cipher_xform.next = NULL;
+
+	options->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
+	options->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+
+	generate_random_key(options->ckey_data, sizeof(options->ckey_data));
+
+	options->cipher_xform.cipher.key.data = options->ckey_data;
+	options->cipher_xform.cipher.key.phys_addr = 0;
+	options->cipher_xform.cipher.key.length = 16;
+
+
+	/* Authentication Data */
+	options->auth_xform.type = RTE_CRYPTO_XFORM_AUTH;
+	options->auth_xform.next = NULL;
+
+	options->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
+	options->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+
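+	/* SHA1-HMAC produces a 20 byte digest; use a matching key length */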
+	options->auth_xform.auth.add_auth_data_length = 0;
+	options->auth_xform.auth.digest_length = 20;
+
+	generate_random_key(options->akey_data, sizeof(options->akey_data));
+
+	options->auth_xform.auth.key.data = options->akey_data;
+	options->auth_xform.auth.key.phys_addr = 0;
+	options->auth_xform.auth.key.length = 20;
+}
+
+static void
+l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
+{
+	printf("Options:-\nn");
+	printf("portmask: %x\n", options->portmask);
+	printf("ports per lcore: %u\n", options->nb_ports_per_lcore);
+	printf("refresh period : %u\n", options->refresh_period);
+	printf("single lcore mode: %s\n",
+			options->single_lcore ? "enabled" : "disabled");
+	printf("stats_printing: %s\n",
+			options->no_stats_printing ? "disabled" : "enabled");
+
+	switch (options->cdev_type) {
+	case RTE_CRYPTODEV_AESNI_MB_PMD:
+		printf("cryptodev type: AES-NI MB PMD\n"); break;
+	case RTE_CRYPTODEV_QAT_PMD:
+		printf("cryptodev type: QAT PMD\n"); break;
+	default:
+		break;
+	}
+
+	printf("sessionless crypto: %s\n",
+			options->sessionless ? "enabled" : "disabled");
+}
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
+		int argc, char **argv)
+{
+	int opt, retval, option_index;
+	char **argvopt = argv, *prgname = argv[0];
+
+	static struct option lgopts[] = {
+			{ "no_stats", no_argument, 0, 0 },
+			{ "sessionless", no_argument, 0, 0 },
+
+			{ "cdev_type", required_argument, 0, 0 },
+			{ "chain", required_argument, 0, 0 },
+
+			{ "cipher_algo", required_argument, 0, 0 },
+			{ "cipher_op", required_argument, 0, 0 },
+			{ "cipher_key", required_argument, 0, 0 },
+
+			{ "auth_algo", required_argument, 0, 0 },
+			{ "auth_op", required_argument, 0, 0 },
+			{ "auth_key", required_argument, 0, 0 },
+
+			{ "iv", required_argument, 0, 0 },
+
+			{ NULL, 0, 0, 0 }
+	};
+
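+	/* apply defaults first, so command line options only override them */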
+	l2fwd_crypto_default_options(options);
+
+	while ((opt = getopt_long(argc, argvopt, "p:q:st:", lgopts,
+			&option_index)) != EOF) {
+		switch (opt) {
+		/* long options */
+		case 0:
+			retval = l2fwd_crypto_parse_args_long_options(options,
+					lgopts, option_index);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* portmask */
+		case 'p':
+			retval = l2fwd_crypto_parse_portmask(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* nqueue */
+		case 'q':
+			retval = l2fwd_crypto_parse_nqueue(options, optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		/* single lcore mode */
+		case 's':
+			options->single_lcore = 1;
+
+			break;
+
+		/* timer period */
+		case 't':
+			retval = l2fwd_crypto_parse_timer_period(options,
+					optarg);
+			if (retval < 0) {
+				l2fwd_crypto_usage(prgname);
+				return -1;
+			}
+			break;
+
+		default:
+			l2fwd_crypto_usage(prgname);
+			return -1;
+		}
+	}
+
+
+	if (optind >= 0)
+		argv[optind-1] = prgname;
+
+	retval = optind-1;
+	optind = 0; /* reset getopt lib */
+
+	return retval;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint8_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+	uint8_t portid, count, all_ports_up, print_flag = 0;
+	struct rte_eth_link link;
+
+	printf("\nChecking link status");
+	fflush(stdout);
+	for (count = 0; count <= MAX_CHECK_TIME; count++) {
+		all_ports_up = 1;
+		for (portid = 0; portid < port_num; portid++) {
+			if ((port_mask & (1 << portid)) == 0)
+				continue;
+			memset(&link, 0, sizeof(link));
+			rte_eth_link_get_nowait(portid, &link);
+			/* print link status if flag set */
+			if (print_flag == 1) {
+				if (link.link_status)
+					printf("Port %d Link Up - speed %u "
+						"Mbps - %s\n", (uint8_t)portid,
+						(unsigned)link.link_speed,
+				(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+					("full-duplex") : ("half-duplex\n"));
+				else
+					printf("Port %d Link Down\n",
+						(uint8_t)portid);
+				continue;
+			}
+			/* clear all_ports_up flag if any link down */
+			if (link.link_status == 0) {
+				all_ports_up = 0;
+				break;
+			}
+		}
+		/* after finally printing all link status, get out */
+		if (print_flag == 1)
+			break;
+
+		if (all_ports_up == 0) {
+			printf(".");
+			fflush(stdout);
+			rte_delay_ms(CHECK_INTERVAL);
+		}
+
+		/* set the print_flag if all ports up or timeout */
+		if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+			print_flag = 1;
+			printf("done\n");
+		}
+	}
+}
+
+static int
+initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports)
+{
+	unsigned i, cdev_id, cdev_count, enabled_cdev_count = 0;
+	int retval;
+
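+	/*
+	 * QAT devices are physical, so enough hardware instances must
+	 * already be present; the AESNI_MB PMD is a virtual device, so
+	 * create one instance per enabled Ethernet port here.
+	 */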
+	if (options->cdev_type == RTE_CRYPTODEV_QAT_PMD) {
+		if (rte_cryptodev_count() < nb_ports)
+			return -1;
+	} else if (options->cdev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+		for (i = 0; i < nb_ports; i++) {
+			int id = rte_eal_vdev_init(CRYPTODEV_NAME_AESNI_MB_PMD,
+					NULL);
+			if (id < 0)
+				return -1;
+		}
+	}
+
+	cdev_count = rte_cryptodev_count();
+	for (cdev_id = 0;
+			cdev_id < cdev_count && enabled_cdev_count < nb_ports;
+			cdev_id++) {
+		struct rte_cryptodev_qp_conf qp_conf;
+		struct rte_cryptodev_info dev_info;
+
+		struct rte_cryptodev_config conf = {
+			.nb_queue_pairs = 1,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 64
+			}
+		};
+
+		rte_cryptodev_info_get(cdev_id, &dev_info);
+
+		if (dev_info.dev_type != options->cdev_type)
+			continue;
+
+
+		retval = rte_cryptodev_configure(cdev_id, &conf);
+		if (retval < 0) {
+			printf("Failed to configure cryptodev %u", cdev_id);
+			return -1;
+		}
+
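+		/* size the queue pair's ring with 2048 descriptors */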
+		qp_conf.nb_descriptors = 2048;
+
+		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
+				SOCKET_ID_ANY);
+		if (retval < 0) {
+			printf("Failed to setup queue pair %u on cryptodev %u",
+					0, cdev_id);
+			return -1;
+		}
+
+		l2fwd_enabled_crypto_mask |= (1 << cdev_id);
+
+		enabled_cdev_count++;
+	}
+
+	return enabled_cdev_count;
+}
+
+static int
+initialize_ports(struct l2fwd_crypto_options *options)
+{
+	uint8_t last_portid, portid;
+	unsigned enabled_portcount = 0;
+	unsigned nb_ports = rte_eth_dev_count();
+
+	if (nb_ports == 0) {
+		printf("No Ethernet ports - bye\n");
+		return -1;
+	}
+
+	if (nb_ports > RTE_MAX_ETHPORTS)
+		nb_ports = RTE_MAX_ETHPORTS;
+
+	/* Reset l2fwd_dst_ports */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+		l2fwd_dst_ports[portid] = 0;
+
+	for (last_portid = 0, portid = 0; portid < nb_ports; portid++) {
+		int retval;
+
+		/* Skip ports that are not enabled */
+		if ((options->portmask & (1 << portid)) == 0)
+			continue;
+
+		/* init port */
+		printf("Initializing port %u... ", (unsigned) portid);
+		fflush(stdout);
+		retval = rte_eth_dev_configure(portid, 1, 1, &port_conf);
+		if (retval < 0) {
+			printf("Cannot configure device: err=%d, port=%u\n",
+				  retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one RX queue */
+		fflush(stdout);
+		retval = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+					     rte_eth_dev_socket_id(portid),
+					     NULL, l2fwd_pktmbuf_pool);
+		if (retval < 0) {
+			printf("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		/* init one TX queue on each port */
+		fflush(stdout);
+		retval = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+				rte_eth_dev_socket_id(portid),
+				NULL);
+		if (retval < 0) {
+			printf("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+				retval, (unsigned) portid);
+
+			return -1;
+		}
+
+		/* Start device */
+		retval = rte_eth_dev_start(portid);
+		if (retval < 0) {
+			printf("rte_eth_dev_start:err=%d, port=%u\n",
+					retval, (unsigned) portid);
+			return -1;
+		}
+
+		rte_eth_promiscuous_enable(portid);
+
+		rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+		printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+				(unsigned) portid,
+				l2fwd_ports_eth_addr[portid].addr_bytes[0],
+				l2fwd_ports_eth_addr[portid].addr_bytes[1],
+				l2fwd_ports_eth_addr[portid].addr_bytes[2],
+				l2fwd_ports_eth_addr[portid].addr_bytes[3],
+				l2fwd_ports_eth_addr[portid].addr_bytes[4],
+				l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+		/* initialize port stats */
+		memset(&port_statistics, 0, sizeof(port_statistics));
+
+		/* Setup port forwarding table */
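+		/*
+		 * Enabled ports are paired so that packets received on one
+		 * port of a pair are forwarded out of the other.
+		 */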
+		if (enabled_portcount % 2) {
+			l2fwd_dst_ports[portid] = last_portid;
+			l2fwd_dst_ports[last_portid] = portid;
+		} else {
+			last_portid = portid;
+		}
+
+		l2fwd_enabled_port_mask |= (1 << portid);
+		enabled_portcount++;
+	}
+
+	if (enabled_portcount == 1) {
+		l2fwd_dst_ports[last_portid] = last_portid;
+	} else if (enabled_portcount % 2) {
+		printf("odd number of ports in portmask- bye\n");
+		return -1;
+	}
+
+	check_all_ports_link_status(nb_ports, l2fwd_enabled_port_mask);
+
+	return enabled_portcount;
+}
+
+int
+main(int argc, char **argv)
+{
+	struct lcore_queue_conf *qconf;
+	struct l2fwd_crypto_options options;
+
+	uint8_t nb_ports, nb_cryptodevs, portid, cdev_id;
+	unsigned lcore_id, rx_lcore_id;
+	int ret, enabled_cdevcount, enabled_portcount;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+	argc -= ret;
+	argv += ret;
+
+	/* parse application arguments (after the EAL ones) */
+	ret = l2fwd_crypto_parse_args(&options, argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L2FWD-CRYPTO arguments\n");
+
+	/* create the mbuf pool */
+	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 128,
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+	if (l2fwd_pktmbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+	/* create crypto op pool */
+	l2fwd_mbuf_ol_pool = rte_pktmbuf_offload_pool_create(
+			"mbuf_offload_pool", NB_MBUF, 128, 0, rte_socket_id());
+	if (l2fwd_mbuf_ol_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");
+
+	/* Enable Ethernet ports */
+	enabled_portcount = initialize_ports(&options);
+	if (enabled_portcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial Ethernet ports\n");
+
+	nb_ports = rte_eth_dev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, portid = 0;
+			portid < nb_ports; portid++) {
+
+		/* skip ports that are not enabled */
+		if ((options.portmask & (1 << portid)) == 0)
+			continue;
+
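+		/*
+		 * In single lcore mode every port is handled by the first
+		 * enabled lcore; otherwise ports are distributed across
+		 * lcores, up to nb_ports_per_lcore on each.
+		 */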
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_rx_ports ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->rx_port_list[qconf->nb_rx_ports] = portid;
+		qconf->nb_rx_ports++;
+
+		printf("Lcore %u: RX port %u\n", rx_lcore_id, (unsigned)portid);
+	}
+
+
+	/* Enable Crypto devices */
+	enabled_cdevcount = initialize_cryptodevs(&options, enabled_portcount);
+	if (enabled_cdevcount < 1)
+		rte_exit(EXIT_FAILURE, "Failed to initial crypto devices\n");
+
+	nb_cryptodevs = rte_cryptodev_count();
+	/* Initialize the port/queue configuration of each logical core */
+	for (rx_lcore_id = 0, qconf = NULL, cdev_id = 0;
+			cdev_id < nb_cryptodevs && enabled_cdevcount;
+			cdev_id++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(cdev_id, &info);
+
+		/* skip devices of the wrong type */
+		if (options.cdev_type != info.dev_type)
+			continue;
+
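+		/*
+		 * Crypto devices are distributed across lcores using the
+		 * same policy as the Ethernet ports above.
+		 */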
+		if (options.single_lcore && qconf == NULL) {
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		} else if (!options.single_lcore) {
+			/* get the lcore_id for this port */
+			while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+			       lcore_queue_conf[rx_lcore_id].nb_crypto_devs ==
+			       options.nb_ports_per_lcore) {
+				rx_lcore_id++;
+				if (rx_lcore_id >= RTE_MAX_LCORE)
+					rte_exit(EXIT_FAILURE,
+							"Not enough cores\n");
+			}
+		}
+
+		/* Assigned a new logical core in the loop above. */
+		if (qconf != &lcore_queue_conf[rx_lcore_id])
+			qconf = &lcore_queue_conf[rx_lcore_id];
+
+		qconf->cryptodev_list[qconf->nb_crypto_devs] = cdev_id;
+		qconf->nb_crypto_devs++;
+
+		enabled_cdevcount--;
+
+		printf("Lcore %u: cryptodev %u\n", rx_lcore_id,
+				(unsigned)cdev_id);
+	}
+
+
+
+	/* launch per-lcore init on every lcore */
+	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)&options,
+			CALL_MASTER);
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		if (rte_eal_wait_lcore(lcore_id) < 0)
+			return -1;
+	}
+
+	return 0;
+}
-- 
2.5.0

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework
  2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
                                 ` (9 preceding siblings ...)
  2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 10/10] l2fwd-crypto: crypto Declan Doherty
@ 2015-11-25 17:44               ` Thomas Monjalon
  10 siblings, 0 replies; 115+ messages in thread
From: Thomas Monjalon @ 2015-11-25 17:44 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev

2015-11-25 13:25, Declan Doherty:
> This series of patches defines a set of application burst oriented APIs for
> asynchronous symmetric cryptographic functions within DPDK. It also contains a
> poll mode driver cryptographic device framework for the implementation of
> crypto devices within DPDK.
> 
> In the patch set we also have included 2 reference implementations of crypto
> PMDs. Currently both implementations support AES-CBC with
> HMAC_SHA1/SHA256/SHA512 authentication operations. The first device is a purely
> software PMD based on Intel's multi-buffer library, which utilises both
> AES-NI instructions and vector operations to accelerate crypto operations and
> the second PMD utilises Intel's Quick Assist Technology (on DH895xxC) to
> provide hardware accelerated crypto operations.

After rebase, small fixes in configs and MAINTAINERS,
Applied, thanks

It is marked as experimental in the documentation and configs.
The API needs to be more tested, discussed and documented.
So it is not stable and no deprecation process is needed to make some changes.

^ permalink raw reply	[flat|nested] 115+ messages in thread

end of thread, other threads:[~2015-11-25 17:45 UTC | newest]

Thread overview: 115+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-02 23:01 [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
2015-10-02 23:01 ` [dpdk-dev] [PATCH 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-10-21  9:24   ` Thomas Monjalon
2015-10-21 11:16     ` Declan Doherty
2015-10-02 23:01 ` [dpdk-dev] [PATCH 2/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-10-02 23:01 ` [dpdk-dev] [PATCH 3/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-10-02 23:01 ` [dpdk-dev] [PATCH 4/6] docs: add getting started guides for multi-buffer pmd and qat pmd Declan Doherty
2015-10-21 11:34   ` Thomas Monjalon
2015-10-02 23:01 ` [dpdk-dev] [PATCH 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
2015-10-02 23:01 ` [dpdk-dev] [PATCH 6/6] l2fwd-crypto: crypto Declan Doherty
2015-10-21  9:11 ` [dpdk-dev] [PATCH 0/6] Crypto API and device framework Declan Doherty
2015-10-30 12:59 ` [dpdk-dev] [PATCH v2 " Declan Doherty
2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
2015-10-30 12:59   ` [dpdk-dev] [PATCH v2 6/6] l2fwd-crypto: crypto Declan Doherty
2015-10-30 16:08   ` [dpdk-dev] [PATCH v3 0/6] Crypto API and device framework Declan Doherty
2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
2015-10-30 16:34       ` Ananyev, Konstantin
2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
2015-10-30 16:08     ` [dpdk-dev] [PATCH v3 6/6] l2fwd-crypto: crypto Declan Doherty
2015-11-03 17:45     ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Declan Doherty
2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 1/6] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 2/6] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 3/6] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 4/6] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 5/6] app/test: add cryptodev unit and performance tests Declan Doherty
2015-11-03 17:45       ` [dpdk-dev] [PATCH v4 6/6] l2fwd-crypto: crypto Declan Doherty
2015-11-03 21:20       ` [dpdk-dev] [PATCH v4 0/6] Crypto API and device framework Sergio Gonzalez Monroy
2015-11-09 20:34       ` [dpdk-dev] [PATCH v5 00/10] " Declan Doherty
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
2015-11-10 10:30           ` Bruce Richardson
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 02/10] ethdev: make error checking macros public Declan Doherty
2015-11-10 10:32           ` Bruce Richardson
2015-11-10 15:50           ` Adrien Mazarguil
2015-11-10 17:00             ` Declan Doherty
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 04/10] mbuf: add new marcos to get the physical address of data Declan Doherty
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
2015-11-09 20:34         ` [dpdk-dev] [PATCH v5 10/10] l2fwd-crypto: crypto Declan Doherty
2015-11-10 17:32         ` [dpdk-dev] [PATCH v6 00/10] Crypto API and device framework Declan Doherty
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 02/10] ethdev: make error checking macros public Declan Doherty
2015-11-10 17:38             ` Adrien Mazarguil
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
2015-11-13 15:35             ` Thomas Monjalon
2015-11-13 15:41               ` Declan Doherty
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 04/10] mbuf: add new marcos to get the physical address of data Declan Doherty
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-11-13 15:44             ` Thomas Monjalon
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
2015-11-13 15:59             ` Thomas Monjalon
2015-11-13 16:11             ` Thomas Monjalon
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-11-13 16:00             ` Thomas Monjalon
2015-11-13 16:25               ` Declan Doherty
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
2015-11-10 17:32           ` [dpdk-dev] [PATCH v6 10/10] l2fwd-crypto: crypto Declan Doherty
2015-11-13 16:03             ` Thomas Monjalon
2015-11-13 18:58           ` [dpdk-dev] [PATCH v7 00/10] Crypto API and device framework Declan Doherty
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
2015-11-17 14:44               ` Declan Doherty
2015-11-17 15:39                 ` Thomas Monjalon
2015-11-17 16:04               ` [dpdk-dev] [PATCH v7.1 " Declan Doherty
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 02/10] ethdev: make error checking macros public Declan Doherty
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 04/10] mbuf: add new marcos to get the physical address of data Declan Doherty
2015-11-25  0:25               ` Thomas Monjalon
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-11-25  0:32               ` Thomas Monjalon
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
2015-11-20 15:27               ` Olivier MATZ
2015-11-20 17:26                 ` Declan Doherty
2015-11-23  9:10                   ` Olivier MATZ
2015-11-23 11:52                     ` Ananyev, Konstantin
2015-11-23 12:16                       ` Declan Doherty
2015-11-23 13:08                         ` Olivier MATZ
2015-11-23 14:17                           ` Thomas Monjalon
2015-11-23 14:46                             ` Thomas Monjalon
2015-11-23 15:47                               ` Declan Doherty
2015-11-23 14:33                           ` Declan Doherty
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-11-25  1:00               ` Thomas Monjalon
2015-11-25  9:16                 ` Mcnamara, John
2015-11-25 10:34               ` Thomas Monjalon
2015-11-25 10:49                 ` Thomas Monjalon
2015-11-25 10:59                   ` Declan Doherty
2015-11-25 12:01               ` Mcnamara, John
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-11-25 10:32               ` Thomas Monjalon
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
2015-11-13 18:58             ` [dpdk-dev] [PATCH v7 10/10] l2fwd-crypto: crypto Declan Doherty
2015-11-25  1:03               ` Thomas Monjalon
2015-11-25 13:25             ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 01/10] ethdev: rename macros to have RTE_ prefix Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 02/10] ethdev: make error checking macros public Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 03/10] eal: add __rte_packed /__rte_aligned macros Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 04/10] mbuf: add new marcos to get the physical address of data Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 05/10] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 06/10] mbuf_offload: library to support attaching offloads to a mbuf Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 07/10] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 08/10] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 09/10] app/test: add cryptodev unit and performance tests Declan Doherty
2015-11-25 13:25               ` [dpdk-dev] [PATCH v8 10/10] l2fwd-crypto: crypto Declan Doherty
2015-11-25 17:44               ` [dpdk-dev] [PATCH v8 00/10] Crypto API and device framework Thomas Monjalon
