From: Declan Doherty <declan.doherty@intel.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release
Date: Thu, 20 Aug 2015 15:07:20 +0100
Message-ID: <1440079643-5437-2-git-send-email-declan.doherty@intel.com>
In-Reply-To: <1440079643-5437-1-git-send-email-declan.doherty@intel.com>

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: John Griffin <john.griffin@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.

Features include:
 - Crypto device configuration / management APIs
 - Definitions of supported cipher algorithms and operations.
 - Definitions of supported hash/authentication algorithms and
   operations.
 - Crypto session management APIs
 - Crypto operation data structures and APIs for allocation of the crypto
   operation structures used to specify the crypto operations to be
   performed on a particular mbuf.
 - Extension of mbuf to contain crypto operation data pointer and
   extra flags.
 - Burst enqueue / dequeue APIs for processing of crypto operations.
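
As a rough illustration (not part of the patch itself), the intended usage
flow of these APIs is sketched below. Error handling beyond the basics,
IV/digest/data offset setup and the mbuf attach / enqueue step are omitted;
the queue pair configuration contents, pool sizes and key buffers/lengths
are placeholders, and the rte_cryptodev_config fields are as used by
rte_cryptodev_configure() in rte_cryptodev.c:

#include <string.h>
#include <rte_lcore.h>
#include <rte_mempool.h>
#include "rte_crypto.h"
#include "rte_cryptodev.h"

static int
crypto_usage_sketch(uint8_t dev_id, uint8_t *aes_key, uint8_t *hmac_key)
{
	/* Configure the device with one queue pair and a session mempool,
	 * then start it */
	struct rte_cryptodev_config conf = {
		.nb_queue_pairs = 1,
		.socket_id = rte_socket_id(),
		.session_mp = { .nb_objs = 2048, .cache_size = 64 },
	};
	struct rte_cryptodev_qp_conf qp_conf;

	memset(&qp_conf, 0, sizeof(qp_conf)); /* ring sizes etc. per PMD */

	if (rte_cryptodev_configure(dev_id, &conf) < 0 ||
			rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
					rte_socket_id()) < 0 ||
			rte_cryptodev_start(dev_id) < 0)
		return -1;

	/* Create an AES-CBC cipher / HMAC-SHA1 auth session (placeholder
	 * key buffers and lengths) */
	struct rte_crypto_cipher_params cipher = {
		.op = RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT,
		.algo = RTE_CRYPTO_SYM_CIPHER_AES_CBC,
		.key = { .data = aes_key, .length = 16 },
	};
	struct rte_crypto_hash_params hash = {
		.op = RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE,
		.algo = RTE_CRYPTO_SYM_HASH_SHA1_HMAC,
		.auth_key = { .data = hmac_key, .length = 64 },
		.digest_length = 20,
	};
	struct rte_cryptodev_session *sess = rte_cryptodev_session_create(
			dev_id, &cipher, &hash,
			RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH);
	if (sess == NULL)
		return -1;

	/* Allocate a crypto op from a dedicated mempool and attach the
	 * session to it */
	struct rte_mempool *op_pool = rte_crypto_op_pool_create(
			"crypto_op_pool", 8192, 128, rte_socket_id());
	if (op_pool == NULL)
		return -1;

	struct rte_crypto_op_data *op = rte_crypto_op_alloc(op_pool);
	if (op == NULL)
		return -1;

	rte_crypto_op_attach_session(op, sess);

	/* The data offsets/lengths, IV and digest are then filled in
	 * op->data, op->iv and op->digest, the op is attached to the mbuf
	 * to be processed and submitted with rte_cryptodev_enqueue_burst()
	 * on the configured queue pair. */
	return 0;
}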

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                        |   9 +-
 config/common_linuxapp                      |   7 +
 doc/api/doxy-api-index.md                   |   1 +
 doc/api/doxy-api.conf                       |   1 +
 lib/Makefile                                |   1 +
 lib/librte_cryptodev/Makefile               |  60 ++
 lib/librte_cryptodev/rte_crypto.h           | 649 +++++++++++++++++++
 lib/librte_cryptodev/rte_crypto_version.map |  40 ++
 lib/librte_cryptodev/rte_cryptodev.c        | 966 ++++++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h        | 550 ++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev_pmd.h    | 622 ++++++++++++++++++
 lib/librte_eal/common/include/rte_log.h     |   1 +
 lib/librte_eal/common/include/rte_memory.h  |  14 +-
 lib/librte_mbuf/rte_mbuf.c                  |   1 +
 lib/librte_mbuf/rte_mbuf.h                  |  51 ++
 mk/rte.app.mk                               |   1 +
 16 files changed, 2971 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_cryptodev/Makefile
 create mode 100644 lib/librte_cryptodev/rte_crypto.h
 create mode 100644 lib/librte_cryptodev/rte_crypto_version.map
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.c
 create mode 100644 lib/librte_cryptodev/rte_cryptodev.h
 create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h

diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..ed30180 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -147,6 +147,13 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
+CONFIG_RTE_MAX_CRYPTODEVS=32
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..12a75c6 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -145,6 +145,13 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
 CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
 
 #
+# Compile generic Crypto device library
+#
+CONFIG_RTE_LIBRTE_CRYPTODEV=y
+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
+CONFIG_RTE_MAX_CRYPTODEVS=64
+
+#
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 72ac3c4..bdb6130 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:
   [dev]                (@ref rte_dev.h),
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
+  [cryptodev]          (@ref rte_cryptodev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index cfb4627..7244b8f 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cfgfile \
                           lib/librte_cmdline \
                           lib/librte_compat \
+                          lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
                           lib/librte_hash \
diff --git a/lib/Makefile b/lib/Makefile
index 2055539..9e5f484 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
new file mode 100644
index 0000000..6ed9b76
--- /dev/null
+++ b/lib/librte_cryptodev/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_cryptodev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_cryptodev.c
+
+# export include files
+SYMLINK-y-include += rte_crypto.h
+SYMLINK-y-include += rte_cryptodev.h
+SYMLINK-y-include += rte_cryptodev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_crypto_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mempool
+DEPDIRS-y += lib/librte_ring
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
new file mode 100644
index 0000000..b776609
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -0,0 +1,649 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_H_
+#define _RTE_CRYPTO_H_
+
+/**
+ * @file rte_crypto.h
+ *
+ * RTE Cryptographic Definitions
+ *
+ * Defines symmetric cipher and authentication algorithms and modes, as well
+ * as supported symmetric crypto operation combinations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+/**
+ * This enumeration lists the different crypto operation chains supported by
+ * rte crypto devices. The operation chain is defined when a session is
+ * created and cannot be changed once the session has been set up; for a
+ * session-less crypto operation it is defined within the crypto operation's
+ * op_params.
+ */
+enum rte_crypto_operation_chain {
+	RTE_CRYPTO_SYM_OP_CIPHER_ONLY,
+	/**< Cipher only operation on the data */
+	RTE_CRYPTO_SYM_OP_HASH_ONLY,
+	/**< Hash only operation on the data */
+	RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER,
+	/**<
+	 * Chain a hash followed by any cipher operation.
+	 *
+	 * If it is required that the result of the hash (i.e. the digest)
+	 * is going to be included in the data to be ciphered, then:
+	 *
+	 * - The digest MUST be placed in the destination buffer at the
+	 *   location corresponding to the end of the data region to be hashed
+	 *   (hash_start_offset + message length to hash),  i.e. there must be
+	 *   no gaps between the start of the digest and the end of the data
+	 *   region to be hashed.
+	 *
+	 * - The message length to cipher member of the rte_crypto_op_data
+	 *   structure must be equal to the overall length of the plain text,
+	 *   the digest length and any (optional) trailing data that is to be
+	 *   included.
+	 *
+	 * - The message length to cipher must be a multiple of the block
+	 *   size if a block cipher is being used - the implementation does not
+	 *   pad.
+	 */
+	RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH,
+	/**<
+	 * Chain any cipher followed by any hash operation. The hash operation
+	 * will be performed on the ciphertext resulting from the cipher
+	 * operation.
+	 */
+};
+
+/** Symmetric Cipher Algorithms */
+enum rte_crypto_cipher_algorithm {
+	RTE_CRYPTO_SYM_CIPHER_NULL = 1,
+	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
+
+	RTE_CRYPTO_SYM_CIPHER_3DES_CBC,
+	/**< Triple DES algorithm in CBC mode */
+	RTE_CRYPTO_SYM_CIPHER_3DES_CTR,
+	/**< Triple DES algorithm in CTR mode */
+	RTE_CRYPTO_SYM_CIPHER_3DES_ECB,
+	/**< Triple DES algorithm in ECB mode */
+
+	RTE_CRYPTO_SYM_CIPHER_AES_CBC,
+	/**< AES algorithm in CBC mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_CCM,
+	/**< AES algorithm in CCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_SYM_HASH_AES_CCM* element of the
+	 * *rte_crypto_hash_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_hash_params* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_SYM_CIPHER_AES_CTR,
+	/**< AES algorithm in Counter mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_ECB,
+	/**< AES algorithm in ECB mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_F8,
+	/**< AES algorithm in F8 mode */
+	RTE_CRYPTO_SYM_CIPHER_AES_GCM,
+	/**< AES algorithm in GCM mode. When this cipher algorithm is used the
+	 * *RTE_CRYPTO_SYM_HASH_AES_GCM* element of the
+	 * *rte_crypto_hash_algorithm* enum MUST be used to set up the related
+	 * *rte_crypto_hash_params* structure in the session context or in
+	 * the op_params of the crypto operation structure in the case of a
+	 * session-less crypto operation.
+	 */
+	RTE_CRYPTO_SYM_CIPHER_AES_XTS,
+	/**< AES algorithm in XTS mode */
+
+	RTE_CRYPTO_SYM_CIPHER_ARC4,
+	/**< (A)RC4 cipher algorithm */
+
+	RTE_CRYPTO_SYM_CIPHER_KASUMI_F8,
+	/**< Kasumi algorithm in F8 mode */
+
+	RTE_CRYPTO_SYM_CIPHER_SNOW3G_UEA2,
+	/**< SNOW3G algorithm in UEA2 mode */
+
+	RTE_CRYPTO_SYM_CIPHER_ZUC_EEA3
+	/**< ZUC algorithm in EEA3 mode */
+};
+
+/** Symmetric Cipher Direction */
+enum rte_crypto_cipher_operation {
+	RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT,
+	/**< Encrypt cipher operation */
+	RTE_CRYPTO_SYM_CIPHER_OP_DECRYPT
+	/**< Decrypt cipher operation */
+};
+
+/** Crypto key structure */
+struct rte_crypto_key {
+	uint8_t *data;	/**< pointer to key data */
+	size_t length;	/**< key length in bytes */
+};
+
+/**
+ * Symmetric Cipher Setup Data.
+ *
+ * This structure contains the data relating to Cipher (Encryption and
+ * Decryption) used to create a session.
+ */
+struct rte_crypto_cipher_params {
+	enum rte_crypto_cipher_operation op;
+	/**< This parameter determines if the cipher operation is an encrypt or
+	 * a decrypt operation. For the RC4 algorithm and the F8/CTR modes,
+	 * only encrypt operations are valid. */
+	enum rte_crypto_cipher_algorithm algo;
+	/**< Cipher algorithm */
+
+	struct rte_crypto_key key;
+	/**< Cipher key
+	 *
+	 * For the RTE_CRYPTO_SYM_CIPHER_AES_F8 mode of operation, key.data will
+	 * point to a concatenation of the AES encryption key followed by a
+	 * keymask. As per RFC3711, the keymask should be padded with trailing
+	 * bytes to match the length of the encryption key used.
+	 *
+	 * For AES-XTS mode of operation, two keys must be provided and
+	 * key.data must point to the two keys concatenated together (Key1 ||
+	 * Key2). The cipher key length will contain the total size of both keys.
+	 *
+	 * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),
+	 * 192 bits (24 bytes) or 256 bits (32 bytes).
+	 *
+	 * For the CCM mode of operation, the only supported key length is 128
+	 * bits (16 bytes).
+	 *
+	 * For the RTE_CRYPTO_SYM_CIPHER_AES_F8 mode of operation, key.length
+	 * should be set to the combined length of the encryption key and the
+	 * keymask. Since the keymask and the encryption key are the same size,
+	 * key.length should be set to 2 x the AES encryption key length.
+	 *
+	 * For the AES-XTS mode of operation:
+	 *  - Two keys must be provided and key.length refers to total length of
+	 *    the two keys.
+	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
+	 *  - Both keys must have the same size.
+	 **/
+};
+
+/** Symmetric Hash / Authentication Algorithms */
+enum rte_crypto_hash_algorithm {
+	RTE_CRYPTO_SYM_HASH_NONE = 0,
+	/**< No hash algorithm. */
+
+	RTE_CRYPTO_SYM_HASH_AES_CBC_MAC,
+	/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */
+	RTE_CRYPTO_SYM_HASH_AES_CCM,
+	/**< AES algorithm in CCM mode. This is an authenticated cipher. When
+	 * this hash algorithm is used, the *RTE_CRYPTO_SYM_CIPHER_AES_CCM*
+	 * element of the *rte_crypto_cipher_algorithm* enum MUST be used to
+	 * set up the related *rte_crypto_cipher_params* structure in the
+	 * session context, or the corresponding field in the crypto
+	 * operation's op_params MUST be set for a session-less crypto
+	 * operation.
+	 */
+	RTE_CRYPTO_SYM_HASH_AES_CMAC,
+	/**< AES CMAC algorithm. */
+	RTE_CRYPTO_SYM_HASH_AES_GCM,
+	/**< AES algorithm in GCM mode. When this hash algorithm
+	 * is used, the RTE_CRYPTO_SYM_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_params structure in the session context, or the
+	 * corresponding field in the crypto operation's op_params MUST be set
+	 * for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_SYM_HASH_AES_GMAC,
+	/**< AES GMAC algorithm. When this hash algorithm
+	 * is used, the RTE_CRYPTO_SYM_CIPHER_AES_GCM element of the
+	 * rte_crypto_cipher_algorithm enum MUST be used to set up the related
+	 * rte_crypto_cipher_params structure in the session context, or the
+	 * corresponding field in the crypto operation's op_params MUST be set
+	 * for a session-less crypto operation.
+	 */
+	RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC,
+	/**< AES XCBC algorithm. */
+
+	RTE_CRYPTO_SYM_HASH_KASUMI_F9,
+	/**< Kasumi algorithm in F9 mode. */
+
+	RTE_CRYPTO_SYM_HASH_MD5,
+	/**< MD5 algorithm */
+	RTE_CRYPTO_SYM_HASH_MD5_HMAC,
+	/**< HMAC using MD5 algorithm */
+
+	RTE_CRYPTO_SYM_HASH_SHA1,
+	/**< 160 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA1_HMAC,
+	/**< HMAC using 160 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA224,
+	/**< 224 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA224_HMAC,
+	/**< HMAC using 224 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA256,
+	/**< 256 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA256_HMAC,
+	/**< HMAC using 256 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA384,
+	/**< 384 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA384_HMAC,
+	/**< HMAC using 384 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA512,
+	/**< 512 bit SHA algorithm. */
+	RTE_CRYPTO_SYM_HASH_SHA512_HMAC,
+	/**< HMAC using 512 bit SHA algorithm. */
+
+	RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2,
+	/**< SNOW3G algorithm in UIA2 mode. */
+
+	RTE_CRYPTO_SYM_HASH_ZUC_EIA3,
+	/**< ZUC algorithm in EIA3 mode */
+};
+
+/** Symmetric Hash Operations */
+enum rte_crypto_hash_operation {
+	RTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY,	/**< Verify digest */
+	RTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE	/**< Generate digest */
+};
+
+/**
+ * Hash Setup Data.
+ *
+ * This structure contains the data relating to the hash side of a session.
+ * The op, algo and digest_length fields are common to all hash operations
+ * and MUST be set in each case.
+ */
+struct rte_crypto_hash_params {
+	enum rte_crypto_hash_operation op;
+	/**< Hash operation type */
+	enum rte_crypto_hash_algorithm algo;
+	/**< Hash algorithm selection */
+
+	struct rte_crypto_key auth_key;
+	/**< Authentication key data.
+	 * The authentication key length MUST be less than or equal to the
+	 * block size of the algorithm. It is the caller's responsibility to
+	 * ensure that the key length is compliant with the standard being used
+	 * (for example RFC 2104, FIPS 198a).
+	 */
+
+	uint32_t digest_length;
+	/**< Length of the digest to be returned. If the verify option is set,
+	 * this specifies the length of the digest to be compared for the
+	 * session.
+	 *
+	 * If the value is less than the maximum length allowed by the hash,
+	 * the result shall be truncated.  If the value is greater than the
+	 * maximum length allowed by the hash then an error will be generated
+	 * by *rte_cryptodev_session_create* or by the
+	 * *rte_cryptodev_enqueue_burst* if using session-less APIs.
+	 */
+
+	uint32_t add_auth_data_length;
+	/**< The length of the additional authenticated data (AAD) in bytes.
+	 * The maximum permitted value is 240 bytes, unless otherwise specified
+	 * below.
+	 *
+	 * This field must be specified when the hash algorithm is one of the
+	 * following:
+	 *
+	 * - For SNOW3G (@ref RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2), this is the
+	 *   length of the IV (which should be 16).
+	 *
+	 * - For GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM).  In this case, this is
+	 *   the length of the Additional Authenticated Data (called A, in NIST
+	 *   SP800-38D).
+	 *
+	 * - For CCM (@ref RTE_CRYPTO_SYM_HASH_AES_CCM).  In this case, this is
+	 *   the length of the associated data (called A, in NIST SP800-38C).
+	 *   Note that this does NOT include the length of any padding, or the
+	 *   18 bytes reserved at the start of the above field to store the
+	 *   block B0 and the encoded length.  The maximum permitted value in
+	 *   this case is 222 bytes.
+	 *
+	 * @note
+	 *  For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of operation
+	 *  this field is not used and should be set to 0. Instead the length
+	 *  of the AAD data is specified in the message length to hash field of
+	 *  the rte_crypto_op_data structure.
+	 */
+};
+
+/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has a session structure attached for immutable parameters or if
+ * all operation information is included in the operation's op_params.
+ */
+enum rte_crypto_op_sess_type {
+	RTE_CRYPTO_OP_WITH_SESSION,	/**< Session based crypto operation */
+	RTE_CRYPTO_OP_SESSIONLESS	/**< Session-less crypto operation */
+};
+
+/**
+ * Cryptographic Operation Data.
+ *
+ * This structure contains data relating to performing cryptographic processing
+ * on a data buffer. This request is used with the rte_cryptodev_enqueue_burst()
+ * call to perform cipher, hash, or combined cipher and hash operations.
+ */
+struct rte_crypto_op_data {
+	enum rte_crypto_op_sess_type type;
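+	/**< Operation session type, session based or session-less */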
+
+	struct rte_mbuf *dst;
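+	/**< Destination mbuf for the processed data; may be NULL if the
+	 * operation is to be done in place on the mbuf to which this
+	 * operation is attached */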
+
+	union {
+		struct rte_cryptodev_session *session;
+		/**< Handle for the initialised session context */
+		struct {
+			struct rte_crypto_cipher_params cipher;
+			struct rte_crypto_hash_params hash;
+			enum rte_crypto_operation_chain opchain;
+		} op_params;
+		/**< Session-less API crypto operation parameters */
+	};
+
+	struct  {
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for cipher processing, specified
+			  * as number of bytes from start of data in the source
+			  * buffer. The result of the cipher operation will be
+			  * written back into the output buffer starting at
+			  * this location. */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer
+			  * on which the cryptographic operation will be
+			  * computed. This must be a multiple of the block size
+			  * if a block cipher is being used. This is also the
+			  * same as the result length.
+			  *
+			  * @note
+			  * In the case of CCM @ref RTE_CRYPTO_SYM_HASH_AES_CCM,
+			  * this value should not include the length of the
+			  * padding or the length of the MAC; the driver will
+			  * compute the actual number of bytes over which the
+			  * encryption will occur, which will include these
+			  * values.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_SYM_HASH_AES_GMAC, this
+			  * field should be set to 0.
+			  */
+		} to_cipher; /**< Data offsets and length for ciphering */
+
+		struct {
+			 uint32_t offset;
+			 /**< Starting point for hash processing, specified as
+			  * number of bytes from start of packet in source
+			  * buffer.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field should be
+			  * set instead.
+			  *
+			  * @note For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of
+			  * operation, this field specifies the start of the AAD data in
+			  * the source buffer.
+			  */
+
+			 uint32_t length;
+			 /**< The message length, in bytes, of the source buffer that
+			  * the hash will be computed on.
+			  *
+			  * @note
+			  * For CCM and GCM modes of operation, this field is
+			  * ignored. The @ref additional_auth field should
+			  * be set instead.
+			  *
+			  * @note
+			  * For AES-GMAC @ref RTE_CRYPTO_SYM_HASH_AES_GMAC mode
+			  * of operation, this field specifies the length of
+			  * the AAD data in the source buffer.
+			  */
+		} to_hash; /**< Data offsets and length for authentication */
+	} data;	/**< Details of data to be operated on */
+
+	struct {
+		uint8_t *data;
+		/**< Initialisation Vector or Counter.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the Initialisation
+		 * Vector (IV) value.
+		 *
+		 * - For block ciphers in CTR mode, this is the counter.
+		 *
+		 * - For GCM mode, this is either the IV (if the length is 96
+		 * bits) or J0 (for other sizes), where J0 is as defined by
+		 * NIST SP800-38D. Regardless of the IV length, a full 16 bytes
+		 * needs to be allocated.
+		 *
+		 * - For CCM mode, the first byte is reserved, and the nonce
+		 * should be written starting at &iv[1] (to allow space for the
+		 * implementation to write in the flags in the first byte).
+		 * Note that a full 16 bytes should be allocated, even though
+		 * the length field will have a value less than this.
+		 *
+		 * - For AES-XTS, this is the 128-bit tweak, i, from IEEE Std
+		 * 1619-2007.
+		 *
+		 * For optimum performance, the data pointed to SHOULD be
+		 * 8-byte aligned.
+		 */
+		phys_addr_t phys_addr;
+		size_t length;
+		/**< Length of valid IV data.
+		 *
+		 * - For block ciphers in CBC or F8 mode, or for Kasumi in F8
+		 * mode, or for SNOW3G in UEA2 mode, this is the length of the
+		 * IV (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For block ciphers in CTR mode, this is the length of the
+		 * counter (which must be the same as the block length of the
+		 * cipher).
+		 *
+		 * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in
+		 * which case data points to J0.
+		 *
+		 * - For CCM mode, this is the length of the nonce, which can
+		 * be in the range 7 to 13 inclusive.
+		 */
+	} iv;	/**< Initialisation vector parameters */
+
+	struct {
+		uint8_t *data;
+		/**< If this member of this structure is set this is a
+		 * pointer to the location where the digest result should be
+		 * inserted (in the case of digest generation) or where the
+		 * purported digest exists (in the case of digest
+		 * verification).
+		 *
+		 * At session creation time, the client specified the digest
+		 * result length with the digest_length member of the @ref
+		 * rte_crypto_hash_params structure. For physical crypto
+		 * devices the caller must allocate at least digest_length of
+		 * physically contiguous memory at this location.
+		 *
+		 * For digest generation, the digest result will overwrite
+		 * any data at this location.
+		 *
+		 * @note
+		 * For GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM), for
+		 * "digest result" read "authentication tag T".
+		 *
+		 * If this member is not set the digest result is understood
+		 * to be in the destination buffer for digest generation, and
+		 * in the source buffer for digest verification. The location
+		 * of the digest result in this case is immediately following
+		 * the region over which the digest is computed.
+		 */
+		phys_addr_t phys_addr;	/**< Physical address of digest */
+		uint32_t length;	/**< Length of digest */
+	} digest; /**< Digest parameters */
+
+	struct {
+		uint8_t *data;
+		/**< Pointer to Additional Authenticated Data (AAD) needed for
+		 * authenticated cipher mechanisms (CCM and GCM), and to the IV
+		 * for SNOW3G authentication
+		 * (@ref RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2). For other
+		 * authentication mechanisms this pointer is ignored.
+		 *
+		 * The length of the data pointed to by this field is set up for
+		 * the session in the @ref rte_crypto_hash_params structure
+		 * as part of the @ref rte_cryptodev_session_create function
+		 * call.  This length must not exceed 240 bytes.
+		 *
+		 * Specifically for CCM (@ref RTE_CRYPTO_SYM_HASH_AES_CCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the nonce should be written starting at an offset of one
+		 *   byte into the array, leaving room for the implementation
+		 *   to write in the flags to the first byte.
+		 *
+		 * - the additional authentication data itself should be
+		 *   written starting at an offset of 18 bytes into the array,
+		 *   leaving room for the length encoding in the first two
+		 *   bytes of the second block.
+		 *
+		 * - the array should be big enough to hold the above fields,
+		 *   plus any padding to round this up to the nearest multiple
+		 *   of the block size (16 bytes).  Padding will be added by the
+		 *   implementation.
+		 *
+		 * Finally, for GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM), the
+		 * caller should setup this field as follows:
+		 *
+		 * - the AAD is written in starting at byte 0
+		 * - the array must be big enough to hold the AAD, plus any
+		 *   padding to round this up to the nearest multiple of the
+		 *   block size (16 bytes).  Padding will be added by the
+		 *    implementation.
+		 *
+		 * @note
+		 * For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of
+		 * operation, this field is not used and should be set to 0.
+		 * Instead the AAD data should be placed in the source buffer.
+		 */
+		phys_addr_t phys_addr;	/**< physical address */
+	} additional_auth; /**< Additional authentication parameters */
+
+	struct rte_mempool *pool;	/**< mempool used to allocate crypto op */
+};
+
+/**
+ * Reset the fields of a crypto operation data structure to their default
+ * values, marking it as session-less.
+ *
+ * @param op
+ *   The crypto operation to be reset.
+ */
+static inline void
+rte_crypto_op_reset(struct rte_crypto_op_data *op)
+{
+
+	op->type = RTE_CRYPTO_OP_SESSIONLESS;
+}
+
+static inline struct rte_crypto_op_data *
+__rte_crypto_op_raw_alloc(struct rte_mempool *mp)
+{
+	void *buf = NULL;
+
+	if (rte_mempool_get(mp, &buf) < 0)
+		return NULL;
+
+	return (struct rte_crypto_op_data *)buf;
+}
+
+/**
+ * Allocate a crypto operation structure from a mempool, which is used to
+ * define the crypto processing to be done on a packet.
+ *
+ * @param	mp	Mempool to allocate the crypto operation from, created
+ *			with rte_crypto_op_pool_create().
+ *
+ * @return	Pointer to the allocated crypto operation, or NULL on failure.
+ */
+static inline struct rte_crypto_op_data *
+rte_crypto_op_alloc(struct rte_mempool *mp)
+{
+	struct rte_crypto_op_data *op;
+
+	if ((op = __rte_crypto_op_raw_alloc(mp)) != NULL)
+		rte_crypto_op_reset(op);
+	return op;
+}
+
+
+/**
+ * Free a crypto operation structure, returning it to its originating mempool.
+ *
+ * @param	op	Crypto operation data structure to be freed
+ */
+static inline void
+rte_crypto_op_free(struct rte_crypto_op_data *op)
+{
+	if (op != NULL) {
+		rte_mempool_put(op->pool, op);
+	}
+}
+
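+/**
+ * Create a mempool of crypto operation data structures.
+ *
+ * @param	name		Name of the mempool to be created
+ * @param	n		Number of elements in the mempool
+ * @param	cache_size	Size of the per-lcore object cache
+ * @param	socket_id	Socket identifier to allocate the mempool on
+ *
+ * @return	Pointer to the created mempool, or NULL on failure
+ */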
+extern struct rte_mempool *
+rte_crypto_op_pool_create(const char *name, unsigned n, unsigned cache_size,
+		int socket_id);
+
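+/**
+ * Attach a session to a crypto operation, marking it as session based.
+ *
+ * @param	op	Crypto operation data structure
+ * @param	sess	Session to attach to the operation
+ */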
+static inline void
+rte_crypto_op_attach_session(struct rte_crypto_op_data *op,
+		struct rte_cryptodev_session *sess)
+{
+	op->session = sess;
+	op->type = RTE_CRYPTO_OP_WITH_SESSION;
+}
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTO_H_ */
diff --git a/lib/librte_cryptodev/rte_crypto_version.map b/lib/librte_cryptodev/rte_crypto_version.map
new file mode 100644
index 0000000..c93fcad
--- /dev/null
+++ b/lib/librte_cryptodev/rte_crypto_version.map
@@ -0,0 +1,40 @@
+DPDK_2.2 {
+	global:
+
+	rte_cryptodev_create_vdev;
+	rte_cryptodev_get_dev_id;
+	rte_cryptodev_count;
+	rte_cryptodev_configure;
+	rte_cryptodev_start;
+	rte_cryptodev_stop;
+	rte_cryptodev_close;
+	rte_cryptodev_queue_pair_setup;
+	rte_cryptodev_queue_pair_start;
+	rte_cryptodev_queue_pair_stop;
+	rte_cryptodev_queue_pair_count;
+	rte_cryptodev_stats_get;
+	rte_cryptodev_stats_reset;
+	rte_cryptodev_info_get;
+	rte_cryptodev_callback_register;
+	rte_cryptodev_callback_unregister;
+	rte_cryptodev_enqueue_burst;
+	rte_cryptodev_dequeue_burst;
+	rte_crypto_op_pool_create;
+	rte_cryptodev_session_create;
+	rte_cryptodev_session_free;
+
+	rte_cryptodev_pmd_get_dev;
+	rte_cryptodev_pmd_get_named_dev;
+	rte_cryptodev_pmd_is_valid_dev;
+	rte_cryptodev_pmd_allocate;
+	rte_cryptodev_pmd_virtual_dev_init;
+	rte_cryptodev_pmd_release_device;
+	rte_cryptodev_pmd_attach;
+	rte_cryptodev_pmd_detach;
+	rte_cryptodev_pmd_driver_register;
+	rte_cryptodev_pmd_socket_id;
+	rte_cryptodev_pmd_callback_process;
+
+	local: *;
+};
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
new file mode 100644
index 0000000..a1797ce
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -0,0 +1,966 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_errno.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "rte_crypto.h"
+#include "rte_cryptodev.h"
+
+
+
+/* Macros to check for invalid function pointers in dev_ops structure */
+#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
+	if ((func) == NULL) { \
+		CDEV_LOG_ERR("Function not supported"); \
+		return retval; \
+	} \
+} while (0)
+
+#define PROC_PRIMARY_OR_RET() do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		CDEV_LOG_ERR("Cannot run in secondary processes"); \
+		return; \
+	} \
+} while (0)
+
+#define PROC_PRIMARY_OR_ERR_RET(retval) do { \
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) { \
+		CDEV_LOG_ERR("Cannot run in secondary processes"); \
+		return retval; \
+	} \
+} while (0)
+
+#define FUNC_PTR_OR_RET(func) do { \
+	if ((func) == NULL) { \
+		CDEV_LOG_ERR("Function not supported"); \
+		return; \
+	} \
+} while (0)
+
+struct rte_cryptodev rte_crypto_devices[RTE_MAX_CRYPTODEVS];
+
+static struct rte_cryptodev_global cryptodev_globals = {
+		.devs			= &rte_crypto_devices[0],
+		.data			= NULL,
+		.nb_devs		= 0,
+		.max_devs		= RTE_MAX_CRYPTODEVS
+};
+
+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;
+
+/* spinlock for crypto device callbacks */
+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_cryptodev_callback {
+	TAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */
+	rte_cryptodev_cb_fn cb_fn;                /**< Callback address */
+	void *cb_arg;                           /**< Parameter for callback */
+	enum rte_cryptodev_event_type event;          /**< Interrupt event type */
+	uint32_t active;                        /**< Callback is executing */
+};
+
+int
+rte_cryptodev_create_vdev(const char *name, const char *args)
+{
+	return rte_eal_vdev_init(name, args);
+}
+
+
+static inline void
+rte_cryptodev_data_alloc(int socket_id)
+{
+	const unsigned flags = 0;
+	const struct rte_memzone *mz;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve("rte_cryptodev_data",
+				cryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data),
+				socket_id, flags);
+	} else
+		mz = rte_memzone_lookup("rte_cryptodev_data");
+	if (mz == NULL)
+		rte_panic("Cannot allocate memzone for the crypto device data");
+
+	cryptodev_globals.data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(cryptodev_globals.data, 0,
+				cryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data));
+}
+
+
+static uint8_t
+rte_cryptodev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_MAX_CRYPTODEVS; dev_id++) {
+		if (rte_crypto_devices[dev_id].attached == RTE_CRYPTODEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_MAX_CRYPTODEVS;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)
+{
+	uint8_t dev_id;
+	struct rte_cryptodev *cryptodev;
+
+	dev_id = rte_cryptodev_find_free_device_index();
+	if (dev_id == RTE_MAX_CRYPTODEVS) {
+		CDEV_LOG_ERR("Reached maximum number of crypto devices");
+		return NULL;
+	}
+
+	if (cryptodev_globals.data == NULL)
+		rte_cryptodev_data_alloc(socket_id);
+
+	if (rte_cryptodev_pmd_get_named_dev(name) != NULL) {
+		CDEV_LOG_ERR("Crypto device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	cryptodev = rte_cryptodev_pmd_get_dev(dev_id);
+	cryptodev->data = &cryptodev_globals.data[dev_id];
+	snprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s", name);
+	cryptodev->data->dev_id = dev_id;
+	cryptodev->attached = RTE_CRYPTODEV_ATTACHED;
+	cryptodev->pmd_type = type;
+	cryptodev_globals.nb_devs++;
+
+	return cryptodev;
+}
+
+static inline int
+rte_cryptodev_create_unique_device_name(char *name, size_t size,
+		struct rte_pci_device *pci_dev)
+{
+	int ret;
+
+	if ((name == NULL) || (pci_dev == NULL))
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%d:%d.%d",
+			pci_dev->addr.bus, pci_dev->addr.devid,
+			pci_dev->addr.function);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
+{
+	if (cryptodev == NULL)
+		return -EINVAL;
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+	return 0;
+}
+
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_cryptodev *cryptodev;
+
+	/* allocate device structure */
+	cryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);
+	if (cryptodev == NULL)
+		return NULL;
+
+	/* allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc("cryptodev private structure",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE);
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	/* initialise user call-back tail queue */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	return cryptodev;
+}
+
+static int
+rte_cryptodev_init(struct rte_pci_driver *pci_drv,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	int retval;
+
+	cryptodrv = (struct rte_cryptodev_driver *)pci_drv;
+	if (cryptodrv == NULL)
+			return -ENODEV;
+
+	/* Create unique Crypto device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV, rte_socket_id());
+	if (cryptodev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		cryptodev->data->dev_private =
+				rte_zmalloc_socket("cryptodev private structure",
+						cryptodrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE, rte_socket_id());
+
+		if (cryptodev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device data");
+	}
+
+	cryptodev->pci_dev = pci_dev;
+	cryptodev->driver = cryptodrv;
+
+	/* init user callbacks */
+	TAILQ_INIT(&(cryptodev->link_intr_cbs));
+
+	/* Invoke PMD device initialization function */
+	retval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);
+	if (retval == 0)
+		return 0;
+
+	CDEV_LOG_ERR("driver %s: crypto_dev_init(vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->name,
+			(unsigned) pci_dev->id.vendor_id,
+			(unsigned) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->attached = RTE_CRYPTODEV_DETACHED;
+	cryptodev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+static int
+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)
+{
+	const struct rte_cryptodev_driver *cryptodrv;
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Create unique device name using PCI address */
+	rte_cryptodev_create_unique_device_name(cryptodev_name,
+			sizeof(cryptodev_name), pci_dev);
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;
+	if (cryptodrv == NULL)
+			return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*cryptodrv->cryptodev_uninit) {
+		ret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);
+		if (ret)
+			return ret;
+	}
+
+	/* free crypto device */
+	rte_cryptodev_pmd_release_device(cryptodev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(cryptodev->data->dev_private);
+
+	cryptodev->pci_dev = NULL;
+	cryptodev->driver = NULL;
+	cryptodev->data = NULL;
+
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,
+		enum pmd_type type)
+{
+	/* Call crypto device initialization directly if device is virtual */
+	if (type == PMD_VDEV)
+		return rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,
+				NULL);
+
+	/* Register PCI driver for physical device initialisation during
+	 * PCI probing */
+	cryptodrv->pci_drv.devinit = rte_cryptodev_init;
+	cryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;
+	rte_eal_pci_register(&cryptodrv->pci_drv);
+	return 0;
+}
+
+int
+rte_cryptodev_pmd_attach(const char *devargs __rte_unused,
+			uint8_t *dev_id __rte_unused)
+{
+	RTE_LOG(ERR, EAL, "Hotplug support isn't enabled\n");
+	return -1;
+}
+
+/* detach the device, then store the name of the device */
+int
+rte_cryptodev_pmd_detach(uint8_t dev_id __rte_unused,
+			char *name __rte_unused)
+{
+	RTE_LOG(ERR, EAL, "Hotplug support isn't enabled\n");
+	return -1;
+}
+
+uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	dev = &rte_crypto_devices[dev_id];
+	return dev->data->nb_queue_pairs;
+}
+
+static int
+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs, int socket_id)
+{
+	struct rte_cryptodev_info dev_info;
+	uint16_t old_nb_queues = dev->data->nb_queue_pairs;
+	void **qp;
+	unsigned i;
+
+	if ((dev == NULL) || (nb_qpairs < 1)) {
+		CDEV_LOG_ERR("invalid param: dev %p, nb_queues %u",
+							dev, nb_qpairs);
+		return -EINVAL;
+	}
+
+	CDEV_LOG_DEBUG("Setup %d queue pairs on device %u",
+			nb_qpairs, dev->data->dev_id);
+
+
+	memset(&dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+
+	if (nb_qpairs > (dev_info.max_queue_pairs)) {
+		CDEV_LOG_ERR("Invalid num queue_pairs (%u) for dev %u",
+				nb_qpairs, dev->data->dev_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->queue_pairs == NULL) { /* first time configuration */
+		dev->data->queue_pairs = rte_zmalloc_socket(
+				"cryptodev->queue_pairs",
+				sizeof(dev->data->queue_pairs[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE, socket_id);
+
+		if (dev->data->queue_pairs == NULL) {
+			dev->data->nb_queue_pairs = 0;
+			CDEV_LOG_ERR("failed to get memory for qp meta data, "
+							"nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+	} else { /* re-configure */
+		FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release, -ENOTSUP);
+
+		qp = dev->data->queue_pairs;
+
+		for (i = nb_qpairs; i < old_nb_queues; i++)
+			(*dev->dev_ops->queue_pair_release)(dev, i);
+		qp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,
+				RTE_CACHE_LINE_SIZE);
+		if (qp == NULL) {
+			CDEV_LOG_ERR("failed to realloc qp meta data,"
+						" nb_queues %u", nb_qpairs);
+			return -(ENOMEM);
+		}
+		if (nb_qpairs > old_nb_queues) {
+			uint16_t new_qs = nb_qpairs - old_nb_queues;
+
+			memset(qp + old_nb_queues, 0,
+				sizeof(qp[0]) * new_qs);
+		}
+
+		dev->data->queue_pairs = qp;
+
+	}
+	dev->data->nb_queue_pairs = nb_qpairs;
+	return 0;
+}
+
+int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_start(dev, queue_pair_id);
+
+}
+
+int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);
+
+	return dev->dev_ops->queue_pair_stop(dev, queue_pair_id);
+
+}
+
+int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return (-EBUSY);
+	}
+
+	/* Setup new number of queue pairs and reconfigure device. */
+	diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
+			config->socket_id);
+	if (diag != 0) {
+		CDEV_LOG_ERR("dev%d rte_crypto_dev_queue_pairs_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup Session mempool for device */
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_mp_create, -ENOTSUP);
+	return (*dev->dev_ops->session_mp_create)(dev,
+				config->session_mp.nb_objs,
+				config->session_mp.cache_size,
+				config->socket_id);
+}
+
+static void
+rte_cryptodev_config_restore(uint8_t dev_id __rte_unused)
+{
+}
+
+int
+rte_cryptodev_start(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+	int diag;
+
+	CDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	rte_cryptodev_config_restore(dev_id);
+
+	return 0;
+}
+
+void
+rte_cryptodev_stop(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+void
+rte_cryptodev_close(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_RET();
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_close);
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_close)(dev);
+}
+
+int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct rte_cryptodev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (queue_pair_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", queue_pair_id);
+		return (-EINVAL);
+	}
+
+	if (dev->data->dev_started) {
+		CDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
+
+	return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
+			socket_id);
+}
+
+
+int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return (-ENODEV);
+	}
+
+	if (stats == NULL) {
+		CDEV_LOG_ERR("Invalid stats ptr");
+		return -EINVAL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	memset(stats, 0, sizeof(*stats));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
+	(*dev->dev_ops->stats_get)(dev, stats);
+	return 0;
+}
+
+void
+rte_cryptodev_stats_reset(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);
+	(*dev->dev_ops->stats_reset)(dev);
+}
+
+
+void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
+{
+	struct rte_cryptodev *dev;
+
+	if (dev_id >= cryptodev_globals.nb_devs) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	memset(dev_info, 0, sizeof(struct rte_cryptodev_info));
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.name;
+}
+
+
+int
+rte_cryptodev_callback_register(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *user_cb;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+			user_cb->cb_arg == cb_arg &&
+			user_cb->event == event) {
+			break;
+		}
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_cryptodev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ((user_cb == NULL) ? -ENOMEM : 0);
+}
+
+int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+			enum rte_cryptodev_event_type event,
+			rte_cryptodev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret;
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_callback *cb, *next;
+
+	if (!cb_fn)
+		return (-EINVAL);
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+
+	ret = 0;
+	for (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb->cb_arg != (void *)-1 &&
+				cb->cb_arg != cb_arg))
+			continue;
+
+		/*
+		 * if this callback is not executing right now,
+		 * then remove it.
+		 */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);
+			rte_free(cb);
+		} else {
+			ret = -EAGAIN;
+		}
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+	return ret;
+}
+
+void
+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+	enum rte_cryptodev_event_type event)
+{
+	struct rte_cryptodev_callback *cb_lst;
+	struct rte_cryptodev_callback dev_cb;
+
+	rte_spinlock_lock(&rte_cryptodev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+						dev_cb.cb_arg);
+		rte_spinlock_lock(&rte_cryptodev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
+}
+
+
+struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_cipher_params *cipher_setup_data,
+		struct rte_crypto_hash_params *hash_setup_data,
+		enum rte_crypto_operation_chain op_chain)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	return dev->dev_ops->session_create(dev, cipher_setup_data,
+			hash_setup_data, op_chain);
+}
+
+void
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+
+	dev->dev_ops->session_destroy(dev, session);
+}
+
+
+static void
+rte_crypto_op_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *_op_data,
+		__rte_unused unsigned i)
+{
+	struct rte_crypto_op_data *op_data = _op_data;
+
+	memset(op_data, 0, mp->elt_size);
+
+	op_data->pool = mp;
+}
+
+static void
+rte_crypto_op_pool_init(__rte_unused struct rte_mempool *mp,
+		__rte_unused void *opaque_arg)
+{
+}
+
+struct rte_mempool *
+rte_crypto_op_pool_create(const char *name, unsigned n, unsigned cache_size,
+		int socket_id)
+{
+	/* lookup mempool in case already allocated */
+	struct rte_mempool *mp = rte_mempool_lookup(name);
+	if (mp != NULL) {
+		if (mp->elt_size != sizeof(struct rte_crypto_op_data) ||
+				mp->cache_size < cache_size ||
+				mp->size < n) {
+			mp = NULL;
+			CDEV_LOG_ERR("%s mempool already exists with "
+					"incompatible initialisation parameters",
+					name);
+			return NULL;
+		}
+		CDEV_LOG_DEBUG("%s mempool already exists, reusing!", name);
+		return mp;
+	}
+
+	mp = rte_mempool_create(name,	/* mempool name */
+			n,			/* number of elements*/
+			sizeof(struct rte_crypto_op_data),/* element size*/
+			cache_size,			/* Cache size*/
+			0,				/* private data size */
+			rte_crypto_op_pool_init,	/* pool initialisation constructor */
+			NULL,				/* pool initialisation constructor argument */
+			rte_crypto_op_init,		/* obj constructor */
+			NULL,				/* obj constructor argument */
+			socket_id,			/* socket id */
+			0);				/* flags */
+
+	if (mp == NULL) {
+		CDEV_LOG_ERR("failed to allocate %s mempool", name);
+		return NULL;
+	}
+
+
+	CDEV_LOG_DEBUG("%s mempool created!", name);
+	return mp;
+}
+
+
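
An illustrative usage sketch for the operation pool API above (editorial, not part of the patch): the pool is typically created once at start-up and shared by the lcores submitting crypto operations. The pool name, sizes and helper function are hypothetical, and the rte_crypto_op_pool_create() prototype is assumed to be exported by rte_crypto.h.

#include <rte_mempool.h>
#include "rte_crypto.h"

#define APP_NB_CRYPTO_OPS	(4096)	/* illustrative pool size */
#define APP_OP_CACHE_SIZE	(128)	/* illustrative per-lcore cache */

static struct rte_mempool *app_crypto_op_pool;

static int
app_create_crypto_op_pool(int socket_id)
{
	/* create (or reuse) the crypto operation mempool */
	app_crypto_op_pool = rte_crypto_op_pool_create("app_crypto_op_pool",
			APP_NB_CRYPTO_OPS, APP_OP_CACHE_SIZE, socket_id);

	return (app_crypto_op_pool == NULL) ? -1 : 0;
}
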
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
new file mode 100644
index 0000000..d7694ad
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -0,0 +1,550 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_H_
+#define _RTE_CRYPTODEV_H_
+
+/**
+ * @file rte_cryptodev.h
+ *
+ * RTE Cryptographic Device APIs
+ *
+ * Defines RTE Crypto Device APIs for the provisioning of cipher and
+ * authentication operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "stddef.h"
+
+#include "rte_crypto.h"
+#include "rte_cryptodev_pmd.h"
+
+/**  Crypto device information */
+struct rte_cryptodev_info {
+	const char *driver_name;		/**< Driver name. */
+	enum rte_cryptodev_type dev_type;	/**< Device type */
+	struct rte_pci_device *pci_dev;		/**< PCI information. */
+	uint16_t max_queue_pairs;		/**< Maximum number of queues
+						* pairs supported by device.
+						*/
+};
+
+/** Definitions of Crypto device event types */
+enum rte_cryptodev_event_type {
+	RTE_CRYPTODEV_EVENT_UNKNOWN,	/**< unknown event type */
+	RTE_CRYPTODEV_EVENT_ERROR,	/**< error interrupt event */
+	RTE_CRYPTODEV_EVENT_MAX		/**< max value of this enum */
+};
+
+/** Crypto device queue pair configuration structure. */
+struct rte_cryptodev_qp_conf {
+	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
+};
+
+/**
+ * Typedef for application callback function to be registered by application
+ * software for notification of device events
+ *
+ * @param	dev_id	Crypto device identifier
+ * @param	event	Crypto device event to register for notification of.
+ * @param	cb_arg	User specified parameter to be passed to the user's
+ *			callback function.
+ */
+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,
+		enum rte_cryptodev_event_type event, void *cb_arg);
+
+#ifdef RTE_CRYPTODEV_PERF
+/**
+ * Crypto Device performance counter statistics structure. This structure is
+ * used for RDTSC counters for measuring crypto operations.
+ */
+struct rte_cryptodev_perf_stats {
+	uint64_t t_accumlated;		/**< Accumulated time processing operations */
+	uint64_t t_min;			/**< Min time */
+	uint64_t t_max;			/**< Max time */
+};
+#endif
+
+/** Crypto Device statistics */
+struct rte_cryptodev_stats {
+	uint64_t enqueued_count;	/**< Count of all operations enqueued */
+	uint64_t dequeued_count;	/**< Count of all operations dequeued */
+
+	uint64_t enqueue_err_count;	/**< Total error count on operations enqueued */
+	uint64_t dequeue_err_count;	/**< Total error count on operations dequeued */
+
+#ifdef RTE_CRYPTODEV_DETAILED_STATS
+	struct {
+		uint64_t encrypt_ops;	/**< Count of encrypt operations */
+		uint64_t encrypt_bytes;	/**< Number of bytes encrypted */
+
+		uint64_t decrypt_ops;	/**< Count of decrypt operations */
+		uint64_t decrypt_bytes;	/**< Number of bytes decrypted */
+	} cipher; /**< Cipher operations stats */
+
+	struct {
+		uint64_t generate_ops;	/**< Count of generate operations */
+		uint64_t bytes_hashed;	/**< Number of bytes hashed */
+
+		uint64_t verify_ops;	/**< Count of verify operations */
+		uint64_t bytes_verified;/**< Number of bytes verified */
+	} hash;	 /**< Hash operations stats */
+#endif
+
+#ifdef RTE_CRYPTODEV_PERF
+	struct rte_cryptodev_perf_stats op_perf;	/**< Operations stats */
+#endif
+} __rte_cache_aligned;
+
+/**
+ * Create a virtual crypto device
+ *
+ * @param	name	Cryptodev PMD name of device to be created.
+ * @param	args	Option arguments for the device.
+ *
+ * @return
+ * - On successful creation of the cryptodev the device index is returned,
+ *   which will be between 0 and rte_cryptodev_count().
+ * - In the case of a failure, returns -1.
+ */
+extern int
+rte_cryptodev_create_vdev(const char *name, const char *args);
+
+/**
+ * Get the device identifier for the named crypto device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - Returns crypto device identifier on success.
+ *   - Return -1 on failure to find named crypto device.
+ */
+static inline int
+rte_cryptodev_get_dev_id(const char *name) {
+	unsigned i;
+
+	if (name == NULL)
+		return -1;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (strcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0 &&
+				rte_cryptodev_globals->devs[i].attached ==
+						RTE_CRYPTODEV_ATTACHED)
+			return i;
+
+	return -1;
+}
+
+/**
+ * Get the total number of crypto devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   - The total number of usable crypto devices.
+ */
+static inline uint8_t
+rte_cryptodev_count(void)
+{
+	return rte_cryptodev_globals->nb_devs;
+}
+
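+/**
+ * Get the number of attached crypto devices of a particular type.
+ *
+ * @param	type	Type of the crypto devices to count.
+ *
+ * @return
+ *   - The number of attached crypto devices of the specified type.
+ */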
+static inline uint8_t
+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+{
+	uint8_t i, dev_count = 0;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
+		if (rte_cryptodev_globals->devs[i].dev_type == type &&
+			rte_cryptodev_globals->devs[i].attached == RTE_CRYPTODEV_ATTACHED)
+			dev_count++;
+
+	return dev_count;
+}
+
+/**
+ * Return the NUMA socket to which a device is connected.
+ *
+ * @param dev_id
+ *   The identifier of the device
+ * @return
+ *   The NUMA socket id to which the device is connected or
+ *   a default of zero if the socket could not be determined.
+ *   -1 is returned if the dev_id value is out of range.
+ */
+static inline int
+rte_cryptodev_socket_id(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+		return -1;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+	if (dev->pci_dev)
+		return dev->pci_dev->numa_node;
+	else
+		return 0;
+}
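
A short discovery sketch using the inline helpers above (illustrative only, not from the patch; the helper name is hypothetical).

static int
app_find_cryptodev_on_socket(int socket_id)
{
	uint8_t dev_id, nb_devs = rte_cryptodev_count();

	for (dev_id = 0; dev_id < nb_devs; dev_id++) {
		/* prefer a device attached to the requested NUMA socket */
		if (rte_cryptodev_socket_id(dev_id) == socket_id)
			return dev_id;
	}

	return -1;	/* no suitable device found */
}
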
+
+/**
+ * Configure a device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * @param	dev_id		The identifier of the device to configure.
+ * @param	config		The crypto device configuration structure, which
+ *				specifies amongst other things the number of
+ *				queue pairs to set up for the device.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+extern int
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
+
+/**
+ * Start a device.
+ *
+ * The device start step is the last one and consists of setting the configured
+ * offload features and starting the transmit and the receive units of the
+ * device.
+ * On success, all basic functions exported by the API (enqueue/dequeue,
+ * statistics, and so on) can be invoked.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+extern int
+rte_cryptodev_start(uint8_t dev_id);
+
+/**
+ * Stop a device. The device can be restarted with a call to
+ * rte_cryptodev_start()
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stop(uint8_t dev_id);
+
+/**
+ * Close a device. The device cannot be restarted!
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_close(uint8_t dev_id);
+
+/**
+ * Allocate and set up a receive queue pair for a device.
+ *
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	queue_pair_id	The index of the queue pair to set up. The
+ *				value must be in the range [0, nb_queue_pair
+ *				- 1] previously supplied to
+ *				rte_cryptodev_configure().
+ * @param	qp_conf		The pointer to the configuration data to be
+ *				used for the queue pair. NULL value is
+ *				allowed, in which case default configuration
+ *				will be used.
+ * @param	socket_id	The *socket_id* argument is the socket
+ *				identifier in case of NUMA. The value can be
+ *				*SOCKET_ID_ANY* if there is no NUMA constraint
+ *				for the DMA memory allocated for the receive
+ *				queue pair.
+ *
+ * @return
+ *   - 0: Success, queue pair correctly set up.
+ *   - <0: Queue pair configuration failed
+ */
+extern int
+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
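
Putting the configure, queue pair setup and start calls together, a minimal initialisation sketch could look as follows (illustrative only; the helper name, descriptor count and session pool sizes are arbitrary assumptions).

static int
app_init_cryptodev(uint8_t dev_id, int socket_id)
{
	struct rte_cryptodev_config conf = {
		.socket_id = socket_id,
		.nb_queue_pairs = 1,
		.session_mp = { .nb_objs = 2048, .cache_size = 64 },
	};
	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 4096 };
	int ret;

	/* configure must be called before any other device function */
	ret = rte_cryptodev_configure(dev_id, &conf);
	if (ret < 0)
		return ret;

	ret = rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf, socket_id);
	if (ret < 0)
		return ret;

	return rte_cryptodev_start(dev_id);
}
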
+
+/**
+ * Start a specified queue pair of a device. It is used
+ * when deferred_start flag of the specified queue is true.
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to start. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly started.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Stop specified queue pair of a device
+ *
+ * @param	dev_id		The identifier of the device
+ * @param	queue_pair_id	The index of the queue pair to stop. The value
+ *				must be in the range [0, nb_queue_pair - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @return
+ *   - 0: Success, the queue pair is correctly stopped.
+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int
+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);
+
+/**
+ * Get the number of queue pairs on a specific crypto device
+ *
+ * @param	dev_id		Crypto device identifier.
+ * @return
+ *   - The number of configured queue pairs.
+ */
+extern uint16_t
+rte_cryptodev_queue_pair_count(uint8_t dev_id);
+
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	stats		A pointer to a structure of type
+ *				*rte_cryptodev_stats* to be filled with the
+ *				values of device counters.
+ * @return
+ *   - Zero if successful.
+ *   - Non-zero otherwise.
+ */
+extern int
+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);
+
+/**
+ * Reset the general I/O statistics of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ */
+extern void
+rte_cryptodev_stats_reset(uint8_t dev_id);
+
+/**
+ * Retrieve the contextual information of a device.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	dev_info	A pointer to a structure of type
+ *				*rte_cryptodev_info* to be filled with the
+ *				contextual information of the device.
+ */
+extern void
+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);
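
For reference, a small monitoring sketch using the statistics and info calls above (illustrative; the helper name and print format are arbitrary).

#include <stdio.h>
#include <inttypes.h>

static void
app_dump_cryptodev(uint8_t dev_id)
{
	struct rte_cryptodev_info info;
	struct rte_cryptodev_stats stats;

	rte_cryptodev_info_get(dev_id, &info);
	printf("dev %u: driver %s, max queue pairs %u\n", dev_id,
			info.driver_name ? info.driver_name : "unknown",
			info.max_queue_pairs);

	if (rte_cryptodev_stats_get(dev_id, &stats) == 0)
		printf("  enq %" PRIu64 " (err %" PRIu64 "), deq %" PRIu64
				" (err %" PRIu64 ")\n",
				stats.enqueued_count, stats.enqueue_err_count,
				stats.dequeued_count, stats.dequeue_err_count);
}
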
+
+
+/**
+ * Register a callback function for specific device id.
+ *
+ * @param	dev_id		Device id.
+ * @param	event		Event interested.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_register(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	event		Event interested.
+ * @param	cb_fn		User supplied callback function to be called.
+ * @param	cb_arg		Pointer to the parameters for the registered callback.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+extern int
+rte_cryptodev_callback_unregister(uint8_t dev_id,
+		enum rte_cryptodev_event_type event,
+		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
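
A registration sketch for the event callback API (illustrative only; the handler and counter are hypothetical).

static unsigned app_crypto_error_count;

/* hypothetical application handler for device error events */
static void
app_crypto_event_cb(uint8_t dev_id, enum rte_cryptodev_event_type event,
		void *cb_arg)
{
	unsigned *counter = cb_arg;

	(void)dev_id;
	if (event == RTE_CRYPTODEV_EVENT_ERROR)
		(*counter)++;
}

static int
app_register_error_cb(uint8_t dev_id)
{
	return rte_cryptodev_callback_register(dev_id,
			RTE_CRYPTODEV_EVENT_ERROR,
			app_crypto_event_cb, &app_crypto_error_count);
}
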
+
+/**
+ *
+ * Dequeue a burst of processed packets from a queue pair of the crypto device.
+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are
+ * supplied in the *pkts* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of packets
+ * actually dequeued, which is the number of *rte_mbuf* data structures
+ * effectively supplied into the *pkts* array.
+ *
+ * A return value equal to *nb_pkts* indicates that the queue contained
+ * at least *nb_pkts* packets, and this is likely to signify that other
+ * processed packets remain in the queue. Applications implementing
+ * a "retrieve as many processed packets as possible" policy can check this
+ * specific case and keep invoking the rte_cryptodev_dequeue_burst() function
+ * until a value less than *nb_pkts* is returned.
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of pointers to
+ *				*rte_mbuf* structures that must be large enough
+ *				to store *nb_pkts* pointers in it.
+ * @param	nb_pkts		The maximum number of packets to dequeue.
+ *
+ * @return
+ *   - The number of packets actually dequeued, which is the number
+ *   of pointers to *rte_mbuf* structures effectively supplied to the
+ *   *pkts* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_crypto_devices[dev_id];
+
+	nb_pkts = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+
+	return nb_pkts;
+}
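
The "retrieve as many processed packets as possible" policy described above might be implemented as below (illustrative; app_consume_packet() is a hypothetical application handler).

#define APP_BURST_SZ	(32)

static void app_consume_packet(struct rte_mbuf *m);	/* hypothetical */

static void
app_drain_queue_pair(uint8_t dev_id, uint16_t qp_id)
{
	struct rte_mbuf *pkts[APP_BURST_SZ];
	uint16_t nb, i;

	do {
		nb = rte_cryptodev_dequeue_burst(dev_id, qp_id, pkts,
				APP_BURST_SZ);

		for (i = 0; i < nb; i++) {
			/* drop packets whose digest verification failed */
			if (pkts[i]->ol_flags & PKT_RX_CRYPTO_DIGEST_BAD)
				rte_pktmbuf_free(pkts[i]);
			else
				app_consume_packet(pkts[i]);
		}
	} while (nb == APP_BURST_SZ);	/* full burst: more may be waiting */
}
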
+
+/**
+ * Enqueue a burst of packets for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place packets
+ * on the queue pair *qp_id* of the device designated by its *dev_id*.
+ *
+ * The *nb_pkts* parameter is the number of packets to process, which are
+ * supplied in the *pkts* array of *rte_mbuf* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of packets it
+ * actually enqueued. A return value equal to *nb_pkts* means that all packets
+ * have been enqueued.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair on which the packets
+ *				are to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to rte_cryptodev_configure().
+ * @param	pkts		The address of an array of *nb_pkts* pointers
+ *				to *rte_mbuf* structures which contain the
+ *				packets to be processed.
+ * @param	nb_pkts		The number of packets to enqueue.
+ *
+ * @return
+ * The number of packets actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_pkts* parameter when the
+ * crypto device's queue pair is full or has been filled up.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_mbuf **pkts, uint16_t nb_pkts)
+{
+	struct rte_cryptodev *dev = &rte_crypto_devices[dev_id];
+
+	return (*dev->enqueue_burst)
+			(dev->data->queue_pairs[qp_id], pkts, nb_pkts);
+}
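
A corresponding enqueue sketch that handles partial enqueues (illustrative; a real application would usually bound the retries or drop on persistent backpressure).

static uint16_t
app_enqueue_all(uint8_t dev_id, uint16_t qp_id,
		struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t sent = 0;

	/* retry until the whole burst has been accepted by the device */
	while (sent < nb_pkts)
		sent += rte_cryptodev_enqueue_burst(dev_id, qp_id,
				pkts + sent, nb_pkts - sent);

	return sent;
}
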
+
+
+
+/**
+ * Initialise a session for symmetric cryptographic operations.
+ *
+ * This function is used by the client to initialise the immutable
+ * parameters of a symmetric cryptographic operation.
+ * To perform the operation the rte_cryptodev_enqueue_burst() function is
+ * used. Each mbuf should contain a reference to the session
+ * pointer returned from this function.
+ * Memory to contain the session information is allocated by the
+ * implementation.
+ * An upper limit on the number of sessions that may be created is
+ * defined by a build configuration constant.
+ * rte_cryptodev_session_free() must be called to free the allocated
+ * memory when the session information is no longer needed.
+ *
+ * @param	dev_id			The device identifier.
+ * @param	cipher_setup_data	The parameters associated with the
+ *					cipher operation. This may be NULL.
+ * @param	hash_setup_data		The parameters associated with the hash
+ *					operation. This may be NULL.
+ * @param	op_chain		Specifies the crypto operation chaining,
+ *					cipher and/or hash and the order in
+ *					which they are performed.
+ *
+ * @return
+ *  Pointer to the created session or NULL
+ */
+extern struct rte_cryptodev_session *
+rte_cryptodev_session_create(uint8_t dev_id,
+		struct rte_crypto_cipher_params *cipher_setup_data,
+		struct rte_crypto_hash_params *hash_setup_data,
+		enum rte_crypto_operation_chain op_chain);
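
A session set-up sketch (illustrative only): the algorithm, key and digest fields of the two parameter structures live in rte_crypto.h and are deliberately omitted here; the helper name is hypothetical.

#include <string.h>

static struct rte_cryptodev_session *
app_create_session(uint8_t dev_id, enum rte_crypto_operation_chain op_chain)
{
	struct rte_crypto_cipher_params cipher_params;
	struct rte_crypto_hash_params hash_params;

	memset(&cipher_params, 0, sizeof(cipher_params));
	memset(&hash_params, 0, sizeof(hash_params));

	/* fill in the cipher/hash algorithm, key and digest fields
	 * (defined in rte_crypto.h) here - omitted from this sketch */

	return rte_cryptodev_session_create(dev_id, &cipher_params,
			&hash_params, op_chain);
}
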
+
+
+/**
+ * Free the memory associated with a previously allocated session.
+ *
+ * @param	dev_id		The device identifier.
+ * @param	session		Session pointer previously allocated by
+ *				*rte_cryptodev_session_create*.
+ */
+extern void
+rte_cryptodev_session_free(uint8_t dev_id,
+		struct rte_cryptodev_session *session);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_H_ */
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
new file mode 100644
index 0000000..e6fdd1c
--- /dev/null
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -0,0 +1,622 @@
+/*-
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTODEV_PMD_H_
+#define _RTE_CRYPTODEV_PMD_H_
+
+/** @file
+ * RTE Crypto PMD APIs
+ *
+ * @note
+ * These APIs are for crypto PMDs only and user applications should not call
+ * them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_log.h>
+
+#include "rte_crypto.h"
+
+struct rte_cryptodev_stats;
+struct rte_cryptodev_info;
+struct rte_cryptodev_qp_conf;
+
+enum rte_cryptodev_event_type;
+
+#define RTE_CRYPTODEV_NAME_MAX_LEN	(64)
+/**< Max length of name of crypto PMD */
+
+
+/* Logging Macros */
+
+#define CDEV_LOG_ERR(fmt, args...) do { \
+	RTE_LOG(ERR, CRYPTODEV, "%s() line %u: " fmt "\n", \
+			__func__, __LINE__, ## args); \
+	} while (0)
+
+#define CDEV_PMD_LOG_ERR(dev, fmt, args...) do { \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			dev, __func__, __LINE__, ## args); \
+	} while (0)
+
+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG
+#define CDEV_LOG_DEBUG(fmt, args...) do {                        \
+		RTE_LOG(DEBUG, CRYPTODEV, "%s() line %u: " fmt "\n", \
+				__func__, __LINE__, ## args); \
+	} while (0)
+
+#define CDEV_PMD_TRACE(fmt, args...) do {                        \
+		RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s: " fmt "\n", dev, __func__, ## args); \
+	} while (0)
+
+#else
+#define CDEV_LOG_DEBUG(fmt, args...)
+#define CDEV_PMD_TRACE(fmt, args...)
+#endif
+
+#define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
+/**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
+/**< Intel QAT PMD device name */
+
+/** Crypto device type */
+enum rte_cryptodev_type {
+	RTE_CRYPTODEV_AESNI_MB_PMD = 1,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
+};
+
+#define RTE_CRYPTODEV_DETACHED  (0)
+#define RTE_CRYPTODEV_ATTACHED  (1)
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,
+		uint16_t nb_pkts);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+
+struct rte_cryptodev_callback;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+
+
+/**
+ *
+ * The data part, with no function pointers, associated with each crypto device.
+ *
+ * This structure is safe to place in shared memory to be common among different
+ * processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;				/**< Device ID for this instance */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];	/**< Unique identifier name */
+	uint8_t dev_started : 1;		/**< Device state: STARTED(1)/STOPPED(0) */
+
+	void **queue_pairs;		/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;	/**< Number of device queue pairs. */
+
+	void *dev_private;		/**< PMD-specific private data */
+};
+
+
+struct rte_cryptodev_driver;
+struct rte_cryptodev;
+
+/**
+ * Initialisation function of a crypto driver invoked for each matching
+ * crypto PCI device detected during the PCI probing phase.
+ *
+ * @param	drv	The pointer to the [matching] crypto driver structure
+ *			supplied by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,
+		struct rte_cryptodev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param	drv	The pointer to the [matching] driver structure supplied
+ * 			by the PMD when it registered itself.
+ * @param	dev	The dev pointer is the address of the *rte_cryptodev*
+ *			structure associated with the matching device and which
+ *			has been [automatically] allocated in the
+ *			*rte_crypto_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver  *drv,
+				struct rte_cryptodev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *crypto_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *cryptodev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_cryptodev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned dev_private_size;	/**< Size of device private data. */
+
+	cryptodev_init_t cryptodev_init;	/**< Device init function. */
+	cryptodev_uninit_t cryptodev_uninit;	/**< Device uninit function. */
+};
+
+/** The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;	/**< Pointer to PMD receive function. */
+	enqueue_pkt_burst_t enqueue_burst;	/**< Pointer to PMD transmit function. */
+
+	const struct rte_cryptodev_driver *driver;	/**< Driver for this device */
+	struct rte_cryptodev_data *data;		/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;		/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;			/**< PCI info. supplied by probing */
+
+	enum rte_cryptodev_type dev_type;		/**< Crypto device type */
+	enum pmd_type pmd_type;				/**< PMD type - PDEV / VDEV */
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	uint8_t attached : 1;	/**< Flag indicating the device is attached */
+};
+
+/** Crypto device configuration structure */
+struct rte_cryptodev_config {
+	int socket_id;			/**< Socket to allocate resources on */
+	uint16_t nb_queue_pairs;	/**< Number of queue pairs to configure
+					* on device */
+
+	struct {
+		uint32_t nb_objs;	/**< Number of objects in mempool */
+		uint32_t cache_size;	/**< l-core object cache size */
+	} session_mp;		/**< Session mempool configuration */
+};
+
+/** Global structure used for maintaining state of allocated crypto devices */
+struct rte_cryptodev_global {
+	struct rte_cryptodev *devs;		/**< Device information array */
+	struct rte_cryptodev_data *data;	/**< Device private data */
+	uint8_t nb_devs;			/**< Number of devices found */
+	uint8_t max_devs;			/**< Max number of devices */
+};
+
+/** pointer to global crypto devices data structure. */
+extern struct rte_cryptodev_global *rte_cryptodev_globals;
+
+/**
+ * Get the rte_cryptodev structure device pointer for the device. Assumes a
+ * valid device index.
+ *
+ * @param	dev_id	Device ID value to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_dev(uint8_t dev_id)
+{
+	return &rte_cryptodev_globals->devs[dev_id];
+}
+
+/**
+ * Get the rte_cryptodev structure device pointer for the named device.
+ *
+ * @param	name	device name to select the device structure.
+ *
+ * @return
+ *   - The rte_cryptodev structure pointer for the given device ID.
+ */
+static inline struct rte_cryptodev *
+rte_cryptodev_pmd_get_named_dev(const char *name)
+{
+	unsigned i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_cryptodev_globals->max_devs; i++) {
+		if (rte_cryptodev_globals->devs[i].attached == RTE_CRYPTODEV_ATTACHED &&
+				strcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0)
+			return &rte_cryptodev_globals->devs[i];
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate if the crypto device index is valid attached crypto device.
+ *
+ * @param	dev_id	Crypto device index.
+ *
+ * @return
+ *   - If the device index is valid (1) or not (0).
+ */
+static inline unsigned
+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_cryptodev *dev = NULL;
+
+	if (dev_id >= rte_cryptodev_globals->nb_devs)
+		return 0;
+
+	dev = rte_cryptodev_pmd_get_dev(dev_id);
+	if (dev->attached != RTE_CRYPTODEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * The pool of rte_cryptodev structures. The size of the pool
+ * is configured at compile-time in the <rte_cryptodev.c> file.
+ */
+extern struct rte_cryptodev rte_crypto_devices[];
+
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * the generic structure of type *crypto_dev_ops* supplied in the
+ * *rte_cryptodev* structure associated with a device.
+ */
+
+/**
+ *	Function used to configure device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to start a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns 0 on success
+ */
+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to stop a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);
+
+/**
+ * Function used to close a configured device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_close_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	stats	Pointer to crypto device stats structure to populate
+ */
+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_stats *stats);
+
+
+/**
+ * Function used to reset statistics of a device.
+ *
+ * @param	dev	Crypto device pointer
+ */
+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);
+
+
+/**
+ * Function used to get specific information of a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	dev_info	Pointer to the info structure to populate
+ */
+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,
+				struct rte_cryptodev_info *dev_info);
+
+/**
+ * Start queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Stop queue pair of a device.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
+				uint16_t qp_id);
+
+/**
+ * Setup a queue pair for a device.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	qp_id		Queue Pair Index
+ * @param	qp_conf		Queue configuration structure
+ * @param	socket_id	Socket Index
+ *
+ * @return	Returns 0 on success.
+ */
+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id,	const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id);
+
+/**
+ * Release memory resources allocated by given queue pair.
+ *
+ * @param	dev	Crypto device pointer
+ * @param	qp_id	Queue Pair Index
+ */
+typedef void (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,
+		uint16_t qp_id);
+
+/**
+ * Get number of available queue pairs of a device.
+ *
+ * @param	dev	Crypto device pointer
+ *
+ * @return	Returns number of queue pairs on success.
+ */
+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);
+
+/**
+ * Create a session mempool to allocate sessions from
+ *
+ * @param	dev		Crypto device pointer
+ * @param	nb_objs		Number of session objects in the mempool
+ * @param	obj_cache_size	l-core object cache size, see *rte_ring_create*
+ * @param	socket_id	Socket Id to allocate the mempool on.
+ *
+ * @return
+ * - On success returns 0
+ * - On failure returns a negative value
+ */
+typedef int (*cryptodev_create_session_pool_t)(
+		struct rte_cryptodev *dev, unsigned nb_objs,
+		unsigned obj_cache_size, int socket_id);
+
+/**
+ * Create a Crypto session on a device.
+ *
+ * @param	dev			Crypto device pointer
+ * @param	cipher_setup_data	Cipher operation parameters
+ * @param	hash_setup_data		Hash operation parameters
+ * @param	op_chain		Operation chaining
+ *
+ * @return
+ *  - Returns cryptodev session structure on success.
+ *  - Returns NULL on failure.
+ */
+typedef struct rte_cryptodev_session * (*cryptodev_create_session_t)(
+		struct rte_cryptodev *dev,
+		struct rte_crypto_cipher_params *cipher_setup_data,
+		struct rte_crypto_hash_params *hash_setup_data,
+		enum rte_crypto_operation_chain op_chain);
+
+/**
+ * Free a Crypto session.
+ *
+ * @param	dev		Crypto device pointer
+ * @param	session		Cryptodev session structure to free
+ */
+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,
+		struct rte_cryptodev_session *session);
+
+
+/** Crypto device operations function pointer table */
+struct rte_cryptodev_ops {
+	cryptodev_configure_t dev_configure;	/**< Configure device. */
+	cryptodev_start_t dev_start;		/**< Start device. */
+	cryptodev_stop_t dev_stop;		/**< Stop device. */
+	cryptodev_close_t dev_close;		/**< Close device. */
+
+	cryptodev_info_get_t dev_infos_get;	/**< Get device info. */
+
+	cryptodev_stats_get_t stats_get;	/**< Get generic device statistics. */
+	cryptodev_stats_reset_t stats_reset;	/**< Reset generic device statistics. */
+
+	cryptodev_queue_pair_setup_t queue_pair_setup;		/**< Set up a device queue pair. */
+	cryptodev_queue_pair_release_t queue_pair_release;	/**< Release a queue pair. */
+	cryptodev_queue_pair_start_t queue_pair_start;		/**< Start a queue pair. */
+	cryptodev_queue_pair_stop_t queue_pair_stop;		/**< Stop a queue pair. */
+	cryptodev_queue_pair_count_t queue_pair_count;		/**< Get count of the queue pairs. */
+
+	cryptodev_create_session_pool_t session_mp_create;	/**< Create a session mempool to allocate sessions from */
+	cryptodev_create_session_t session_create;		/**< Create a Crypto session. */
+	cryptodev_free_session_t session_destroy;		/**< Destroy a Crypto session. */
+};
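
For a PMD author, the ops table is typically populated with static functions, for example as in this partial sketch of a hypothetical "null" PMD (the remaining callbacks are omitted; all names are assumptions).

static int  null_crypto_dev_configure(struct rte_cryptodev *dev);
static int  null_crypto_dev_start(struct rte_cryptodev *dev);
static void null_crypto_dev_stop(struct rte_cryptodev *dev);
static void null_crypto_dev_close(struct rte_cryptodev *dev);
static void null_crypto_dev_info_get(struct rte_cryptodev *dev,
		struct rte_cryptodev_info *dev_info);

static struct rte_cryptodev_ops null_crypto_pmd_ops = {
	.dev_configure	= null_crypto_dev_configure,
	.dev_start	= null_crypto_dev_start,
	.dev_stop	= null_crypto_dev_stop,
	.dev_close	= null_crypto_dev_close,
	.dev_infos_get	= null_crypto_dev_info_get,
	/* stats, queue pair and session callbacks omitted from this sketch */
};
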
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Allocates a new cryptodev slot for a crypto device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param	name		Unique identifier name for each device
+ * @param	type		Device type of this Crypto device
+ * @param	socket_id	Socket to allocate resources on.
+ * @return
+ *   - Slot in the rte_crypto_devices array for the new device;
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int  socket_id);
+
+/**
+ * Creates a new virtual crypto device and returns the pointer
+ * to that device.
+ *
+ * @param	name			PMD type name
+ * @param	dev_private_size	Size of crypto PMDs private data
+ * @param	socket_id		Socket to allocate resources on.
+ *
+ * @return
+ *   - Cryptodev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_cryptodev *
+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Function for internal use by dummy drivers primarily, e.g. ring-based
+ * driver.
+ * Release the specified cryptodev device.
+ *
+ * @param cryptodev
+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+extern int
+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
+
+/**
+ * Attach a new device specified by arguments.
+ *
+ * @param devargs
+ *  A pointer to a string array describing the new device
+ *  to be attached. The string should be a pci address like
+ *  '0000:01:00.0' or virtual device name like 'crypto_pcap0'.
+ * @param dev_id
+ *  A pointer to the identifier of the device actually attached.
+ * @return
+ *  0 on success and dev_id is filled, negative on error
+ */
+extern int
+rte_cryptodev_pmd_attach(const char *devargs, uint8_t *dev_id);
+
+/**
+ * Detach a device specified by identifier.
+ *
+ * @param dev_id
+ *   The identifier of the device to detach.
+ * @param devname
+ *  A pointer to a buffer that is filled with the name of the device actually
+ *  detached.
+ * @return
+ *  0 on success and devname is filled, negative on error
+ */
+extern int
+rte_cryptodev_pmd_detach(uint8_t dev_id, char *devname);
+
+/**
+ * Register a Crypto [Poll Mode] driver.
+ *
+ * Function invoked by the initialization function of a Crypto driver
+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:
+ *
+ *	a - register itself as PCI driver if the crypto device is a physical
+ *		device, by invoking the rte_eal_pci_register() function to
+ *		register the *pci_drv* structure embedded in the *crypto_drv*
+ *		structure, after having stored the address of the
+ *		rte_cryptodev_init() function in the *devinit* field of the
+ *		*pci_drv* structure.
+ *
+ *		During the PCI probing phase, the rte_cryptodev_init()
+ *		function is invoked for each PCI [device] matching the
+ *		embedded PCI identifiers provided by the driver.
+ *
+ *	b - complete the initialisation sequence if the device is a virtual
+ *		device by calling the rte_cryptodev_init() directly passing a
+ *		NULL parameter for the rte_pci_device structure.
+ *
+ *   @param crypto_drv	crypto_driver structure associated with the crypto
+ *					driver.
+ *   @param type		pmd type
+ */
+extern int
+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,
+		enum pmd_type type);
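
A registration sketch for a virtual PMD (illustrative only; the driver name, private data size and init function are hypothetical, and the PMD_VDEV value is assumed to come from rte_dev.h).

static int null_crypto_dev_init(struct rte_cryptodev_driver *drv,
		struct rte_cryptodev *dev);	/* hypothetical init function */

static struct rte_cryptodev_driver null_crypto_pmd_drv = {
	.pci_drv = {
		.name = "null_crypto_pmd",	/* hypothetical PMD name */
	},
	.dev_private_size = 64,			/* placeholder private data size */
	.cryptodev_init = null_crypto_dev_init,
};

/* typically called from the PMD's initialisation hook */
static int
null_crypto_pmd_register(void)
{
	return rte_cryptodev_pmd_driver_register(&null_crypto_pmd_drv,
			PMD_VDEV);
}
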
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device.
+ *
+ * @param	dev	Pointer to cryptodev struct
+ * @param	event	Crypto device interrupt event type.
+ *
+ * @return
+ *  void
+ */
+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
+				enum rte_cryptodev_event_type event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_CRYPTODEV_PMD_H_ */
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index ede0dca..2e47e7f 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..c8e1d8a 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment.
+ */
+#define __rte_align(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_align(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -104,7 +114,7 @@ struct rte_memseg {
 	 /**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index e416312..a8e9dbc 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -281,6 +281,7 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
 const char *rte_get_tx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
+	case PKT_TX_CRYPTO_OP: return "PKT_TX_CRYPTO_OP";
 	case PKT_TX_VLAN_PKT: return "PKT_TX_VLAN_PKT";
 	case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
 	case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 8c2db1b..0468b7c 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -108,10 +108,12 @@ extern "C" {
 #define PKT_RX_FDIR_ID       (1ULL << 13) /**< FD id reported if FDIR match. */
 #define PKT_RX_FDIR_FLX      (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
 #define PKT_RX_QINQ_PKT      (1ULL << 15)  /**< RX packet with double VLAN stripped. */
+#define PKT_RX_CRYPTO_DIGEST_BAD (1ULL << 16) /**< Crypto hash digest verification failed. */
 /* add new RX flags here */
 
 /* add new TX flags here */
 
+#define PKT_TX_CRYPTO_OP	(1ULL << 48) /**< Valid Crypto Operation attached to mbuf */
 /**
  * Second VLAN insertion (QinQ) flag.
  */
@@ -740,6 +742,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+/** Opaque accelerator operations declarations */
+struct rte_crypto_op_data;
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -867,6 +872,8 @@ struct rte_mbuf {
 
 	/** Timesync flags for use with IEEE1588. */
 	uint16_t timesync;
+	/** Crypto accelerator operation. */
+	struct rte_crypto_op_data *crypto_op;
 } __rte_cache_aligned;
 
 static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
@@ -1648,6 +1655,33 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)
 
 /**
+ * A macro that returns the physical address of the data in the mbuf,
+ * offset by o bytes from the start of the mbuf data.
+ *
+ * Before using this macro, the user must ensure that the mbuf data
+ * length is large enough to read data at the given offset.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param o
+ *   The offset into the data to calculate the address from.
+ */
+#define rte_pktmbuf_mtophys_offset(m, o) ((phys_addr_t)((char *)(m)->buf_physaddr + (m)->data_off) + (o))
+
+/**
+ * A macro that returns the physical address of the data in the mbuf.
+ *
+ * Before using this macro, the user must ensure that the mbuf data
+ * length is large enough to read its data.
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)
+/**
  * A macro that returns the length of the packet.
  *
  * The value can be read or assigned.
@@ -1816,6 +1850,23 @@ static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
  */
 void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len);
 
+
+
+/**
+ * Attach a crypto operation to a mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param op
+ *   The crypto operation data structure to attach.
+ */ 
+static inline void
+rte_pktmbuf_attach_crypto_op(struct rte_mbuf *m, struct rte_crypto_op_data *op)
+{
+	m->crypto_op = op;
+	m->ol_flags |= PKT_TX_CRYPTO_OP;
+}
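
An attach sketch combining the operation pool and the helper above (illustrative; a raw rte_mempool_get() is used here, although rte_crypto.h may provide a dedicated allocation helper, and the operation fields themselves are omitted).

#include <errno.h>

static int
app_attach_crypto_op(struct rte_mempool *op_pool, struct rte_mbuf *m)
{
	struct rte_crypto_op_data *op;

	/* take an operation from the pool created with
	 * rte_crypto_op_pool_create() */
	if (rte_mempool_get(op_pool, (void **)&op) < 0)
		return -ENOMEM;

	/* fill in the cipher/hash fields of *op (defined in rte_crypto.h),
	 * omitted from this sketch, then hand it to the mbuf */
	rte_pktmbuf_attach_crypto_op(m, op);

	return 0;
}
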
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 3871205..c7ee033 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -114,6 +114,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lcryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MALLOC)         += -lrte_malloc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
-- 
1.9.3


Thread overview: 8+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2015-08-20 14:07 [dpdk-dev] [PATCH 0/4] A proposed DPDK Crypto API and device framework Declan Doherty
2015-08-20 14:07 ` Declan Doherty [this message]
2015-08-20 19:07   ` [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release Neil Horman
2015-08-21 14:02     ` Declan Doherty
2015-09-15 16:36     ` [dpdk-dev] [PATCH] cryptodev: changes to crypto operation APIs to support non prescriptive chaining of crypto transforms in a crypto operation. app/test: updates to cryptodev unit tests to support new xform chaining APIs. aesni_mb_pmd: updates to device to support API changes Declan Doherty
2015-08-20 14:07 ` [dpdk-dev] [PATCH 2/4] qat_crypto_pmd: Addition of a new QAT DPDK PMD Declan Doherty
2015-08-20 14:07 ` [dpdk-dev] [PATCH 3/4] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-08-20 14:07 ` [dpdk-dev] [PATCH 4/4] app/test: add cryptodev unit and performance tests Declan Doherty
