From: Declan Doherty <declan.doherty@intel.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH 2/4] qat_crypto_pmd: Addition of a new QAT DPDK PMD.
Date: Thu, 20 Aug 2015 15:07:21 +0100
Message-ID: <1440079643-5437-3-git-send-email-declan.doherty@intel.com>
In-Reply-To: <1440079643-5437-1-git-send-email-declan.doherty@intel.com>

From: John Griffin <john.griffin@intel.com>

Co-authored-by: Des O Dea <des.j.o.dea@intel.com>
Co-authored-by: Fiona Trahe <fiona.trahe@intel.com>

This patch adds a PMD for the Intel QuickAssist Technology DH895xxC
hardware accelerator.
This PMD will adhere to the cryptodev API (contained in a previous patch).
This patch depends on a QAT PF driver which may be downloaded from
01.org (see doc/guides/cryptodevs/qat.rst, added by this patch, for
installation instructions).

This is a limited patch set which supports chaining of cipher and
hash operations. The following algorithms are supported:
Cipher algorithms:
 -   RTE_CRYPTO_SYM_CIPHER_AES128_CBC
 -   RTE_CRYPTO_SYM_CIPHER_AES256_CBC
 -   RTE_CRYPTO_SYM_CIPHER_AES512_CBC
Hash algorithms:
 -   RTE_CRYPTO_SYM_HASH_SHA1_HMAC
 -   RTE_CRYPTO_SYM_HASH_SHA256_HMAC
 -   RTE_CRYPTO_SYM_HASH_SHA512_HMAC

Some limitations of this patch set, which shall be addressed in a
subsequent release:
 -   Chained mbufs are not supported.
 -   Hash only is not supported.
 -   Cipher only is not supported.
 -   Only in-place is currently supported (destination address is the
     same as source address).
 -   Only supports session-oriented API implementation (session-less
     APIs are not supported).
 -   Not performance tuned.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_bsdapp                               |  13 +
 config/common_linuxapp                             |  15 +-
 doc/guides/cryptodevs/index.rst                    |  42 ++
 doc/guides/cryptodevs/qat.rst                      | 155 +++++++
 doc/guides/index.rst                               |   1 +
 drivers/Makefile                                   |   1 +
 drivers/crypto/Makefile                            |  38 ++
 drivers/crypto/qat/Makefile                        |  63 +++
 .../qat/qat_adf/adf_transport_access_macros.h      | 173 ++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw.h            | 316 ++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h         | 404 ++++++++++++++++++
 drivers/crypto/qat/qat_adf/icp_qat_hw.h            | 305 ++++++++++++++
 drivers/crypto/qat/qat_adf/qat_algs.h              | 124 ++++++
 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c   | 462 ++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.c                    | 469 +++++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h                    |  99 +++++
 drivers/crypto/qat/qat_logs.h                      |  78 ++++
 drivers/crypto/qat/qat_qp.c                        | 372 ++++++++++++++++
 drivers/crypto/qat/rte_pmd_qat_version.map         |   5 +
 drivers/crypto/qat/rte_qat_cryptodev.c             | 128 ++++++
 mk/rte.app.mk                                      |   3 +
 21 files changed, 3265 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/cryptodevs/index.rst
 create mode 100644 doc/guides/cryptodevs/qat.rst
 create mode 100644 drivers/crypto/Makefile
 create mode 100644 drivers/crypto/qat/Makefile
 create mode 100644 drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
 create mode 100644 drivers/crypto/qat/qat_adf/icp_qat_hw.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs.h
 create mode 100644 drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 create mode 100644 drivers/crypto/qat/qat_logs.h
 create mode 100644 drivers/crypto/qat/qat_qp.c
 create mode 100644 drivers/crypto/qat/rte_pmd_qat_version.map
 create mode 100644 drivers/crypto/qat/rte_qat_cryptodev.c

diff --git a/config/common_bsdapp b/config/common_bsdapp
index ed30180..8fcc004 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -154,6 +154,19 @@ CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
 CONFIG_RTE_MAX_CRYPTOPORTS=32
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=y
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=200
+
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 12a75c6..7199c95 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -152,6 +152,19 @@ CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y
 CONFIG_RTE_MAX_CRYPTODEVS=64
 
 #
+# Compile PMD for QuickAssist based devices
+#
+CONFIG_RTE_LIBRTE_PMD_QAT=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=y
+CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=y
+#
+# Number of sessions to create in the session memory pool
+# on a single QuickAssist device.
+#
+CONFIG_RTE_LIBRTE_PMD_QAT_MAX_SESSIONS=4096
+
 # Support NIC bypass logic
 #
 CONFIG_RTE_NIC_BYPASS=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
new file mode 100644
index 0000000..1c31697
--- /dev/null
+++ b/doc/guides/cryptodevs/index.rst
@@ -0,0 +1,42 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Crypto Device Drivers
+====================================
+
+|today|
+
+
+**Contents**
+
+.. toctree::
+    :maxdepth: 2
+    :numbered:
+
+    qat
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
new file mode 100644
index 0000000..e09145d
--- /dev/null
+++ b/doc/guides/cryptodevs/qat.rst
@@ -0,0 +1,155 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Quick Assist Crypto Poll Mode Driver
+====================================
+
+
+The QAT PMD provides poll mode crypto driver support for the **Intel
+QuickAssist Technology DH895xxC** hardware accelerator. The QAT PMD has
+currently only been tested on Fedora 21 64-bit with gcc.
+
+Features
+--------
+
+The QAT PMD has support for:
+
+Cipher algorithms:
+
+* RTE_CRYPTO_SYM_CIPHER_AES128_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES256_CBC
+* RTE_CRYPTO_SYM_CIPHER_AES512_CBC
+
+Hash algorithms:
+
+* RTE_CRYPTO_SYM_HASH_SHA1_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA256_HMAC
+* RTE_CRYPTO_SYM_HASH_SHA512_HMAC
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+* Hash only is not supported.
+* Cipher only is not supported.
+* Only in-place is currently supported (destination address is the same as source address).
+* Only supports session-oriented API implementation (session-less APIs are not supported).
+* Not performance tuned.
+
+Installation
+------------
+
+To use the DPDK QAT PMD, an SRIOV-enabled QAT kernel driver is required.
+The VF devices exposed by this driver will be used by the QAT PMD.
+Future kernel versions will provide this as standard; in the interim, the
+following steps are necessary to load this driver.
+
+
+Download the latest QuickAssist Technology Driver from 01.org:
+https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches
+Consult the Getting Started Guide at the same URL for further information.
+
+The steps below assume you are:
+
+* building on a platform with one DH895xCC device
+* using package qatmux.l.2.3.0-34.tgz
+* on Fedora 21 with kernel 3.17.4-301.fc21.x86_64
+
+In BIOS ensure that SRIOV is enabled and VT-d is disabled.
+
+Uninstall any existing QAT driver, e.g. by running either:
+
+* ``./installer.sh uninstall`` in the directory where it was originally installed, or
+* ``rmmod qat_dh895xcc; rmmod intel_qat``
+
+Build and install the SRIOV-enabled QAT driver:
+
+.. code-block:: console
+
+    mkdir /QAT; cd /QAT
+    # copy qatmux.l.2.3.0-34.tgz to this location
+    tar zxof qatmux.l.2.3.0-34.tgz
+    export ICP_WITHOUT_IOMMU=1
+    ./installer.sh install QAT1.6 host
+
+You can use "cat /proc/icp_dh895xcc_dev0/version" to confirm the driver is correctly installed.
+You can use "lspci -d:443" to confirm the bdf of the 32 VF devices available per DH895xCC device. 
+
+The unbind command below assumes bdfs of 02:01.00-02:04.07, if yours are different adjust the unbind command below. 
+
+Make the devices available to DPDK:
+
+.. code-block:: console
+
+   cd $RTE_SDK   # see http://dpdk.org/doc/quick-start to install DPDK
+   modprobe uio
+   insmod ./build/kmod/igb_uio.ko
+   for device in $(seq 1 4); do for fn in $(seq 0 7); do echo -n 0000:02:0${device}.${fn} > /sys/bus/pci/devices/0000\:02\:0${device}.${fn}/driver/unbind; done; done
+   echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id
+
+You can use "lspci -vvd:443" to confirm that all devices are now in use by igb_uio kernel driver
+
+
+Notes:
+
+If using a later kernel and the build fails with an error relating to
+``strict_strtoul`` not being available, patch the following file:
+
+.. code-block:: console
+
+  /QAT/QAT1.6/quickassist/utilities/downloader/Target_CoreLibs/uclo/include/linux/uclo_platform.h
+  + #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,18,5)
+  + #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (kstrtoul((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  + #else
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,38)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; if (strict_strtoull((str), (base), (num))) printk("Error strtoull convert %s\n", str); }
+  #else 
+  #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
+  #define STR_TO_64(str, base, num, endPtr) {endPtr=NULL; strict_strtoll((str), (base), (num));}
+  #else
+  #define STR_TO_64(str, base, num, endPtr)                                 \
+       do {                                                               \
+             if (str[0] == '-')                                           \
+             {                                                            \
+                  *(num) = -(simple_strtoull((str+1), &(endPtr), (base))); \
+             }else {                                                      \
+                  *(num) = simple_strtoull((str), &(endPtr), (base));      \
+             }                                                            \
+       } while(0)
+  + #endif
+  #endif
+  #endif
+
+
+If the build fails due to missing header files, you may need to do the following:
+
+* sudo yum install zlib-devel
+* sudo yum install openssl-devel
+
+If the build or install fails due to mismatched kernel sources, you may need to do the following:
+
+* sudo yum install kernel-headers-`uname -r`
+* sudo yum install kernel-src-`uname -r`
+* sudo yum install kernel-devel-`uname -r`
+
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 439c7e3..c5d7a9f 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -42,6 +42,7 @@ Contents:
    xen/index
    prog_guide/index
    nics/index
+   cryptodevs/index
    sample_app_ug/index
    testpmd_app_ug/index
    faq/index
diff --git a/drivers/Makefile b/drivers/Makefile
index b60eb5e..6ec67f6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,5 +32,6 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
+DIRS-y += crypto
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
new file mode 100644
index 0000000..eeb998e
--- /dev/null
+++ b/drivers/crypto/Makefile
@@ -0,0 +1,38 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+
+include $(RTE_SDK)/mk/rte.sharelib.mk
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/qat/Makefile b/drivers/crypto/qat/Makefile
new file mode 100644
index 0000000..e027ff9
--- /dev/null
+++ b/drivers/crypto/qat/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_qat.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += $(WERROR_FLAGS)
+
+# external library include paths
+CFLAGS += -I$(SRCDIR)/qat_adf
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_crypto.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_qp.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat_adf/qat_algs_build_desc.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += rte_qat_cryptodev.c
+
+# export include files
+SYMLINK-y-include +=
+
+# versioning export map
+EXPORT_MAP := rte_pmd_qat_version.map
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += lib/librte_cryptodev
+
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
new file mode 100644
index 0000000..d2b79c6
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/adf_transport_access_macros.h
@@ -0,0 +1,173 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef ADF_TRANSPORT_ACCESS_MACROS_H
+#define ADF_TRANSPORT_ACCESS_MACROS_H
+
+/* CSR write macro */
+#define ADF_CSR_WR(csrAddr, csrOffset, val) \
+	(void)((*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)) = (val)))
+
+/* CSR read macro */
+#define ADF_CSR_RD(csrAddr, csrOffset) \
+	(*((volatile uint32_t *)(((uint8_t *)csrAddr) + csrOffset)))
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE	0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A
+#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05
+#define ADF_COALESCING_MIN_TIME 0x1FF
+#define ADF_COALESCING_MAX_TIME 0xFFFFF
+#define ADF_COALESCING_DEF_TIME 0x27FF
+#define ADF_RING_NEAR_WATERMARK_512 0x08
+#define ADF_RING_NEAR_WATERMARK_0 0x00
+#define ADF_RING_EMPTY_SIG 0x7F7F7F7F
+
+/* Valid internal ring size values */
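+/* Encoded as bytes = 2^(value + 6): 0x06 -> 4KB, 0x08 -> 16KB, 0x10 -> 4MB */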
+#define ADF_RING_SIZE_128 0x01
+#define ADF_RING_SIZE_256 0x02
+#define ADF_RING_SIZE_512 0x03
+#define ADF_RING_SIZE_4K 0x06
+#define ADF_RING_SIZE_16K 0x08
+#define ADF_RING_SIZE_4M 0x10
+#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
+#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
+#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+
+#define ADF_NUM_BUNDLES_PER_DEV         1
+#define ADF_NUM_SYM_QPS_PER_BUNDLE      2
+
+/* Valid internal msg size values */
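+/* Encoded as bytes = value * 32: 0x01 -> 32B, 0x02 -> 64B, 0x04 -> 128B */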
+#define ADF_MSG_SIZE_32 0x01
+#define ADF_MSG_SIZE_64 0x02
+#define ADF_MSG_SIZE_128 0x04
+#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32
+#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128
+
+/* Size to bytes conversion macros for ring and msg size values */
+#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5)
+#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5)
+#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7)
+#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
+
+/* Minimum ring buffer size for memory allocation */
+#define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \
+				ADF_RING_SIZE_4K : SIZE)
+#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6)
+#define ADF_SIZE_TO_POW(SIZE) ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | \
+				SIZE) & ~0x4)
+/* Max outstanding requests */
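+/* e.g. a 16KB ring (0x08) of 64-byte msgs (0x02): 16384/64 - 1 = 255 */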
+#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \
+	((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1)
+#define BUILD_RING_CONFIG(size)	\
+	((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) \
+	| (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \
+	((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM)	\
+	| (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \
+	| size)
+#define BUILD_RING_BASE_ADDR(addr, size) \
+	((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size))
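+/* Per-ring CSRs sit at (bank * ADF_RING_BUNDLE_SIZE) + reg + (ring << 2) */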
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_HEAD + (ring << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_RING_TAIL + (ring << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+	ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_CONFIG + (ring << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+	uint32_t l_base = 0, u_base = 0; \
+	l_base = (uint32_t)(value & 0xFFFFFFFF); \
+	u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_LBASE + (ring << 2), l_base);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_UBASE + (ring << 2), u_base);	\
+} while (0)
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_HEAD + (ring << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+		ADF_RING_CSR_RING_TAIL + (ring << 2), value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0);	\
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+	ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_COL_CTL, \
+			ADF_RING_CSR_INT_COL_CTL_ENABLE | value)
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+	ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \
+			ADF_RING_CSR_INT_FLAG_AND_COL, value)
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw.h b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
new file mode 100644
index 0000000..cc96d45
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw.h
@@ -0,0 +1,316 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_H_
+#define _ICP_QAT_FW_H_
+#include <linux/types.h>
+#include "icp_qat_hw.h"
+
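+/* Generic bit-field helpers, e.g. QAT_FIELD_GET(0xA5, 4, 0xF) == 0xA */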
+#define QAT_FIELD_SET(flags, val, bitpos, mask) \
+{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
+		(((val) & (mask)) << (bitpos))) ; }
+
+#define QAT_FIELD_GET(flags, bitpos, mask) \
+	(((flags) >> (bitpos)) & (mask))
+
+#define ICP_QAT_FW_REQ_DEFAULT_SZ 128
+#define ICP_QAT_FW_RESP_DEFAULT_SZ 32
+#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8
+#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF
+#define ICP_QAT_FW_NUM_LONGWORDS_1 1
+#define ICP_QAT_FW_NUM_LONGWORDS_2 2
+#define ICP_QAT_FW_NUM_LONGWORDS_3 3
+#define ICP_QAT_FW_NUM_LONGWORDS_4 4
+#define ICP_QAT_FW_NUM_LONGWORDS_5 5
+#define ICP_QAT_FW_NUM_LONGWORDS_6 6
+#define ICP_QAT_FW_NUM_LONGWORDS_7 7
+#define ICP_QAT_FW_NUM_LONGWORDS_10 10
+#define ICP_QAT_FW_NUM_LONGWORDS_13 13
+#define ICP_QAT_FW_NULL_REQ_SERV_ID 1
+
+enum icp_qat_fw_comn_resp_serv_id {
+	ICP_QAT_FW_COMN_RESP_SERV_NULL,
+	ICP_QAT_FW_COMN_RESP_SERV_CPM_FW,
+	ICP_QAT_FW_COMN_RESP_SERV_DELIMITER
+};
+
+enum icp_qat_fw_comn_request_id {
+	ICP_QAT_FW_COMN_REQ_NULL = 0,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7,
+	ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9,
+	ICP_QAT_FW_COMN_REQ_DELIMITER
+};
+
+struct icp_qat_fw_comn_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t serv_specif_fields[4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_comn_req_mid {
+	uint64_t opaque_data;
+	uint64_t src_data_addr;
+	uint64_t dest_data_addr;
+	uint32_t src_length;
+	uint32_t dst_length;
+};
+
+struct icp_qat_fw_comn_req_cd_ctrl {
+	uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5];
+};
+
+struct icp_qat_fw_comn_req_hdr {
+	uint8_t resrvd1;
+	uint8_t service_cmd_id;
+	uint8_t service_type;
+	uint8_t hdr_flags;
+	uint16_t serv_specif_flags;
+	uint16_t comn_req_flags;
+};
+
+struct icp_qat_fw_comn_req_rqpars {
+	uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13];
+};
+
+struct icp_qat_fw_comn_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+struct icp_qat_fw_comn_error {
+	uint8_t xlat_err_code;
+	uint8_t cmp_err_code;
+};
+
+struct icp_qat_fw_comn_resp_hdr {
+	uint8_t resrvd1;
+	uint8_t service_id;
+	uint8_t response_type;
+	uint8_t hdr_flags;
+	struct icp_qat_fw_comn_error comn_error;
+	uint8_t comn_status;
+	uint8_t cmd_id;
+};
+
+struct icp_qat_fw_comn_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_hdr;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1
+#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7
+#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_type
+
+#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_type = val
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id
+
+#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \
+	icp_qat_fw_comn_req_hdr_t.service_cmd_id = val
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \
+	ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags)
+
+#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \
+	ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \
+	QAT_FIELD_GET(hdr_flags, \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \
+	(hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK)
+
+#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \
+	QAT_FIELD_SET((hdr_t.hdr_flags), (val), \
+	ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \
+	ICP_QAT_FW_COMN_VALID_FLAG_MASK)
+
+#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \
+	(((valid) & ICP_QAT_FW_COMN_VALID_FLAG_MASK) << \
+	 ICP_QAT_FW_COMN_VALID_FLAG_BITPOS)
+
+#define QAT_COMN_PTR_TYPE_BITPOS 0
+#define QAT_COMN_PTR_TYPE_MASK 0x1
+#define QAT_COMN_CD_FLD_TYPE_BITPOS 1
+#define QAT_COMN_CD_FLD_TYPE_MASK 0x1
+#define QAT_COMN_PTR_TYPE_FLAT 0x0
+#define QAT_COMN_PTR_TYPE_SGL 0x1
+#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
+#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
+
+#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
+	((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
+	 | (((ptr) & QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS))
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_PTR_TYPE_BITPOS, \
+			QAT_COMN_PTR_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_COMN_CD_FLD_TYPE_BITPOS, \
+			QAT_COMN_CD_FLD_TYPE_MASK)
+
+#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4
+#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0
+#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0
+#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F
+
+#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	 & ICP_QAT_FW_COMN_NEXT_ID_MASK)); }
+
+#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+	{ ((cd_ctrl_hdr_t)->next_curr_id) = ((((cd_ctrl_hdr_t)->next_curr_id) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)); }
+
+#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7
+#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5
+#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1
+#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4
+#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3
+#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1
+
+#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \
+	((((crypto) & QAT_COMN_RESP_CRYPTO_STATUS_MASK) << \
+	QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \
+	(((comp) & QAT_COMN_RESP_CMP_STATUS_MASK) << \
+	QAT_COMN_RESP_CMP_STATUS_BITPOS) | \
+	(((xlat) & QAT_COMN_RESP_XLAT_STATUS_MASK) << \
+	QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \
+	(((eolb) & QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) << \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS))
+
+#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \
+	QAT_COMN_RESP_CRYPTO_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_STATUS_BITPOS, \
+	QAT_COMN_RESP_CMP_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_XLAT_STATUS_BITPOS, \
+	QAT_COMN_RESP_XLAT_STATUS_MASK)
+
+#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \
+	QAT_FIELD_GET(status, QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \
+	QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK)
+
+#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0
+#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0
+#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1
+#define ERR_CODE_NO_ERROR 0
+#define ERR_CODE_INVALID_BLOCK_TYPE -1
+#define ERR_CODE_NO_MATCH_ONES_COMP -2
+#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3
+#define ERR_CODE_INCOMPLETE_LEN -4
+#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5
+#define ERR_CODE_RPT_GT_SPEC_LEN -6
+#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7
+#define ERR_CODE_INV_DIS_CODE_LEN -8
+#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9
+#define ERR_CODE_DIS_TOO_FAR_BACK -10
+#define ERR_CODE_OVERFLOW_ERROR -11
+#define ERR_CODE_SOFT_ERROR -12
+#define ERR_CODE_FATAL_ERROR -13
+#define ERR_CODE_SSM_ERROR -14
+#define ERR_CODE_ENDPOINT_ERROR -15
+
+enum icp_qat_fw_slice {
+	ICP_QAT_FW_SLICE_NULL = 0,
+	ICP_QAT_FW_SLICE_CIPHER = 1,
+	ICP_QAT_FW_SLICE_AUTH = 2,
+	ICP_QAT_FW_SLICE_DRAM_RD = 3,
+	ICP_QAT_FW_SLICE_DRAM_WR = 4,
+	ICP_QAT_FW_SLICE_COMP = 5,
+	ICP_QAT_FW_SLICE_XLAT = 6,
+	ICP_QAT_FW_SLICE_DELIMITER
+};
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
new file mode 100644
index 0000000..7671465
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_fw_la.h
@@ -0,0 +1,404 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_FW_LA_H_
+#define _ICP_QAT_FW_LA_H_
+#include "icp_qat_fw.h"
+
+enum icp_qat_fw_la_cmd_id {
+	ICP_QAT_FW_LA_CMD_CIPHER = 0,
+	ICP_QAT_FW_LA_CMD_AUTH = 1,
+	ICP_QAT_FW_LA_CMD_CIPHER_HASH = 2,
+	ICP_QAT_FW_LA_CMD_HASH_CIPHER = 3,
+	ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM = 4,
+	ICP_QAT_FW_LA_CMD_TRNG_TEST = 5,
+	ICP_QAT_FW_LA_CMD_SSL3_KEY_DERIVE = 6,
+	ICP_QAT_FW_LA_CMD_TLS_V1_1_KEY_DERIVE = 7,
+	ICP_QAT_FW_LA_CMD_TLS_V1_2_KEY_DERIVE = 8,
+	ICP_QAT_FW_LA_CMD_MGF1 = 9,
+	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
+	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+};
+
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_ICV_VER_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+#define ICP_QAT_FW_LA_TRNG_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
+#define ICP_QAT_FW_LA_TRNG_STATUS_FAIL ICP_QAT_FW_COMN_STATUS_FLAG_ERROR
+
+struct icp_qat_fw_la_bulk_req {
+	struct icp_qat_fw_comn_req_hdr comn_hdr;
+	struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars;
+	struct icp_qat_fw_comn_req_mid comn_mid;
+	struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars;
+	struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl;
+};
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS 1
+#define ICP_QAT_FW_LA_GCM_IV_LEN_NOT_12_OCTETS 0
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS 12
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO 1
+#define QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK 0x1
+#define QAT_LA_GCM_IV_LEN_FLAG_BITPOS 11
+#define QAT_LA_GCM_IV_LEN_FLAG_MASK 0x1
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER 1
+#define ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER 0
+#define QAT_LA_DIGEST_IN_BUFFER_BITPOS	10
+#define QAT_LA_DIGEST_IN_BUFFER_MASK 0x1
+#define ICP_QAT_FW_LA_SNOW_3G_PROTO 4
+#define ICP_QAT_FW_LA_GCM_PROTO	2
+#define ICP_QAT_FW_LA_CCM_PROTO	1
+#define ICP_QAT_FW_LA_NO_PROTO 0
+#define QAT_LA_PROTO_BITPOS 7
+#define QAT_LA_PROTO_MASK 0x7
+#define ICP_QAT_FW_LA_CMP_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_CMP_AUTH_RES 0
+#define QAT_LA_CMP_AUTH_RES_BITPOS 6
+#define QAT_LA_CMP_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_RET_AUTH_RES 1
+#define ICP_QAT_FW_LA_NO_RET_AUTH_RES 0
+#define QAT_LA_RET_AUTH_RES_BITPOS 5
+#define QAT_LA_RET_AUTH_RES_MASK 0x1
+#define ICP_QAT_FW_LA_UPDATE_STATE 1
+#define ICP_QAT_FW_LA_NO_UPDATE_STATE 0
+#define QAT_LA_UPDATE_STATE_BITPOS 4
+#define QAT_LA_UPDATE_STATE_MASK 0x1
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_CD_SETUP 0
+#define ICP_QAT_FW_CIPH_AUTH_CFG_OFFSET_IN_SHRAM_CP 1
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS 3
+#define QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK 0x1
+#define ICP_QAT_FW_CIPH_IV_64BIT_PTR 0
+#define ICP_QAT_FW_CIPH_IV_16BYTE_DATA 1
+#define QAT_LA_CIPH_IV_FLD_BITPOS 2
+#define QAT_LA_CIPH_IV_FLD_MASK   0x1
+#define ICP_QAT_FW_LA_PARTIAL_NONE 0
+#define ICP_QAT_FW_LA_PARTIAL_START 1
+#define ICP_QAT_FW_LA_PARTIAL_MID 3
+#define ICP_QAT_FW_LA_PARTIAL_END 2
+#define QAT_LA_PARTIAL_BITPOS 0
+#define QAT_LA_PARTIAL_MASK 0x3
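+/* LA serv_specif_flags layout: [12]=ZUC proto, [11]=GCM IV len,
+ * [10]=digest in buffer, [9:7]=proto, [6]=cmp auth, [5]=ret auth,
+ * [4]=update state, [3]=ciph/auth cfg offset, [2]=cipher IV field,
+ * [1:0]=partial state */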
+#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
+	cmp_auth, ret_auth, update_state, \
+	ciph_iv, ciphcfg, partial) \
+	(((zuc_proto & QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK) << \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS) | \
+	((gcm_iv_len & QAT_LA_GCM_IV_LEN_FLAG_MASK) << \
+	QAT_LA_GCM_IV_LEN_FLAG_BITPOS) | \
+	((auth_rslt & QAT_LA_DIGEST_IN_BUFFER_MASK) << \
+	QAT_LA_DIGEST_IN_BUFFER_BITPOS) | \
+	((proto & QAT_LA_PROTO_MASK) << \
+	QAT_LA_PROTO_BITPOS)	| \
+	((cmp_auth & QAT_LA_CMP_AUTH_RES_MASK) << \
+	QAT_LA_CMP_AUTH_RES_BITPOS) | \
+	((ret_auth & QAT_LA_RET_AUTH_RES_MASK) << \
+	QAT_LA_RET_AUTH_RES_BITPOS) | \
+	((update_state & QAT_LA_UPDATE_STATE_MASK) << \
+	QAT_LA_UPDATE_STATE_BITPOS) | \
+	((ciph_iv & QAT_LA_CIPH_IV_FLD_MASK) << \
+	QAT_LA_CIPH_IV_FLD_BITPOS) | \
+	((ciphcfg & QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK) << \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS) | \
+	((partial & QAT_LA_PARTIAL_MASK) << \
+	QAT_LA_PARTIAL_BITPOS))
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PROTO_BITPOS, QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_GET(flags) \
+	QAT_FIELD_GET(flags, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_IV_FLD_BITPOS, \
+	QAT_LA_CIPH_IV_FLD_MASK)
+
+#define ICP_QAT_FW_LA_CIPH_AUTH_CFG_OFFSET_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CIPH_AUTH_CFG_OFFSET_BITPOS, \
+	QAT_LA_CIPH_AUTH_CFG_OFFSET_MASK)
+
+#define ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_FW_LA_ZUC_3G_PROTO_FLAG_BITPOS, \
+	QAT_FW_LA_ZUC_3G_PROTO_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_GCM_IV_LEN_FLAG_BITPOS, \
+	QAT_LA_GCM_IV_LEN_FLAG_MASK)
+
+#define ICP_QAT_FW_LA_PROTO_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PROTO_BITPOS, \
+	QAT_LA_PROTO_MASK)
+
+#define ICP_QAT_FW_LA_CMP_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_CMP_AUTH_RES_BITPOS, \
+	QAT_LA_CMP_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_RET_AUTH_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_RET_AUTH_RES_BITPOS, \
+	QAT_LA_RET_AUTH_RES_MASK)
+
+#define ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_DIGEST_IN_BUFFER_BITPOS, \
+	QAT_LA_DIGEST_IN_BUFFER_MASK)
+
+#define ICP_QAT_FW_LA_UPDATE_STATE_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_UPDATE_STATE_BITPOS, \
+	QAT_LA_UPDATE_STATE_MASK)
+
+#define ICP_QAT_FW_LA_PARTIAL_SET(flags, val) \
+	QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
+	QAT_LA_PARTIAL_MASK)
+
+struct icp_qat_fw_cipher_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} s1;
+	} u;
+};
+
+struct icp_qat_fw_cipher_auth_req_hdr_cd_pars {
+	union {
+		struct {
+			uint64_t content_desc_addr;
+			uint16_t content_desc_resrvd1;
+			uint8_t content_desc_params_sz;
+			uint8_t content_desc_hdr_resrvd2;
+			uint32_t content_desc_resrvd3;
+		} s;
+		struct {
+			uint32_t cipher_key_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		} sl;
+	} u;
+};
+
+struct icp_qat_fw_cipher_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t cipher_padding_sz;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+	uint32_t resrvd3[ICP_QAT_FW_NUM_LONGWORDS_3];
+};
+
+struct icp_qat_fw_auth_cd_ctrl_hdr {
+	uint32_t resrvd1;
+	uint8_t resrvd2;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id;
+	uint8_t resrvd3;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd4;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+struct icp_qat_fw_cipher_auth_cd_ctrl_hdr {
+	uint8_t cipher_state_sz;
+	uint8_t cipher_key_sz;
+	uint8_t cipher_cfg_offset;
+	uint8_t next_curr_id_cipher;
+	uint8_t cipher_padding_sz;
+	uint8_t hash_flags;
+	uint8_t hash_cfg_offset;
+	uint8_t next_curr_id_auth;
+	uint8_t resrvd1;
+	uint8_t outer_prefix_sz;
+	uint8_t final_sz;
+	uint8_t inner_res_sz;
+	uint8_t resrvd2;
+	uint8_t inner_state1_sz;
+	uint8_t inner_state2_offset;
+	uint8_t inner_state2_sz;
+	uint8_t outer_config_offset;
+	uint8_t outer_state1_sz;
+	uint8_t outer_res_sz;
+	uint8_t outer_prefix_offset;
+};
+
+#define ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED 1
+#define ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED 0
+#define ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX	240
+#define ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET \
+	(sizeof(struct icp_qat_fw_la_cipher_req_params))
+#define ICP_QAT_FW_CIPHER_REQUEST_PARAMETERS_OFFSET (0)
+
+struct icp_qat_fw_la_cipher_req_params {
+	uint32_t cipher_offset;
+	uint32_t cipher_length;
+	union {
+		uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4];
+		struct {
+			uint64_t cipher_IV_ptr;
+			uint64_t resrvd1;
+		} s;
+	} u;
+};
+
+struct icp_qat_fw_la_auth_req_params {
+	uint32_t auth_off;
+	uint32_t auth_len;
+	union {
+		uint64_t auth_partial_st_prefix;
+		uint64_t aad_adr;
+	} u1;
+	uint64_t auth_res_addr;
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint8_t hash_state_sz;
+	uint8_t auth_res_sz;
+} __rte_packed;
+
+struct icp_qat_fw_la_auth_req_params_resrvd_flds {
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_6];
+	union {
+		uint8_t inner_prefix_sz;
+		uint8_t aad_sz;
+	} u2;
+	uint8_t resrvd1;
+	uint16_t resrvd2;
+};
+
+struct icp_qat_fw_la_resp {
+	struct icp_qat_fw_comn_resp_hdr comn_resp;
+	uint64_t opaque_data;
+	uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4];
+};
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) & \
+	  ICP_QAT_FW_COMN_NEXT_ID_MASK) >> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_CIPHER_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_CIPHER_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_cipher = \
+	((((cd_ctrl_hdr_t)->next_curr_id_cipher) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_GET(cd_ctrl_hdr_t) \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) & ICP_QAT_FW_COMN_NEXT_ID_MASK) \
+	>> (ICP_QAT_FW_COMN_NEXT_ID_BITPOS))
+
+#define ICP_QAT_FW_AUTH_NEXT_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK) | \
+	((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK)) }
+
+#define ICP_QAT_FW_AUTH_CURR_ID_GET(cd_ctrl_hdr_t) \
+	(((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_CURR_ID_MASK)
+
+#define ICP_QAT_FW_AUTH_CURR_ID_SET(cd_ctrl_hdr_t, val) \
+{ (cd_ctrl_hdr_t)->next_curr_id_auth = \
+	((((cd_ctrl_hdr_t)->next_curr_id_auth) \
+	& ICP_QAT_FW_COMN_NEXT_ID_MASK) | \
+	((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) }
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/icp_qat_hw.h b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
new file mode 100644
index 0000000..7f68557
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/icp_qat_hw.h
@@ -0,0 +1,305 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_HW_H_
+#define _ICP_QAT_HW_H_
+
+enum icp_qat_hw_ae_id {
+	ICP_QAT_HW_AE_0 = 0,
+	ICP_QAT_HW_AE_1 = 1,
+	ICP_QAT_HW_AE_2 = 2,
+	ICP_QAT_HW_AE_3 = 3,
+	ICP_QAT_HW_AE_4 = 4,
+	ICP_QAT_HW_AE_5 = 5,
+	ICP_QAT_HW_AE_6 = 6,
+	ICP_QAT_HW_AE_7 = 7,
+	ICP_QAT_HW_AE_8 = 8,
+	ICP_QAT_HW_AE_9 = 9,
+	ICP_QAT_HW_AE_10 = 10,
+	ICP_QAT_HW_AE_11 = 11,
+	ICP_QAT_HW_AE_DELIMITER = 12
+};
+
+enum icp_qat_hw_qat_id {
+	ICP_QAT_HW_QAT_0 = 0,
+	ICP_QAT_HW_QAT_1 = 1,
+	ICP_QAT_HW_QAT_2 = 2,
+	ICP_QAT_HW_QAT_3 = 3,
+	ICP_QAT_HW_QAT_4 = 4,
+	ICP_QAT_HW_QAT_5 = 5,
+	ICP_QAT_HW_QAT_DELIMITER = 6
+};
+
+enum icp_qat_hw_auth_algo {
+	ICP_QAT_HW_AUTH_ALGO_NULL = 0,
+	ICP_QAT_HW_AUTH_ALGO_SHA1 = 1,
+	ICP_QAT_HW_AUTH_ALGO_MD5 = 2,
+	ICP_QAT_HW_AUTH_ALGO_SHA224 = 3,
+	ICP_QAT_HW_AUTH_ALGO_SHA256 = 4,
+	ICP_QAT_HW_AUTH_ALGO_SHA384 = 5,
+	ICP_QAT_HW_AUTH_ALGO_SHA512 = 6,
+	ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7,
+	ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8,
+	ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10,
+	ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11,
+	ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12,
+	ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13,
+	ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14,
+	ICP_QAT_HW_AUTH_RESERVED_1 = 15,
+	ICP_QAT_HW_AUTH_RESERVED_2 = 16,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
+	ICP_QAT_HW_AUTH_RESERVED_3 = 18,
+	ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
+	ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+};
+
+enum icp_qat_hw_auth_mode {
+	ICP_QAT_HW_AUTH_MODE0 = 0,
+	ICP_QAT_HW_AUTH_MODE1 = 1,
+	ICP_QAT_HW_AUTH_MODE2 = 2,
+	ICP_QAT_HW_AUTH_MODE_DELIMITER = 3
+};
+
+struct icp_qat_hw_auth_config {
+	uint32_t config;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_MODE_BITPOS 4
+#define QAT_AUTH_MODE_MASK 0xF
+#define QAT_AUTH_ALGO_BITPOS 0
+#define QAT_AUTH_ALGO_MASK 0xF
+#define QAT_AUTH_CMP_BITPOS 8
+#define QAT_AUTH_CMP_MASK 0x7F
+#define QAT_AUTH_SHA3_PADDING_BITPOS 16
+#define QAT_AUTH_SHA3_PADDING_MASK 0x1
+#define QAT_AUTH_ALGO_SHA3_BITPOS 22
+#define QAT_AUTH_ALGO_SHA3_MASK 0x3
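+/* Builds the 32-bit auth config word: algo in bits 0-3 (upper SHA3 algo
+ * bits at 22-23), mode in bits 4-7, comparator length in bits 8-14 and
+ * the SHA3 padding flag at bit 16.
+ */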
+#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \
+	(((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \
+	((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \
+	(((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) << \
+	 QAT_AUTH_ALGO_SHA3_BITPOS) | \
+	 (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \
+	(algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? 1 : 0) \
+	& QAT_AUTH_SHA3_PADDING_MASK) << QAT_AUTH_SHA3_PADDING_BITPOS) | \
+	((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS))
+
+struct icp_qat_hw_auth_counter {
+	uint32_t counter;
+	uint32_t reserved;
+};
+
+#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF
+#define QAT_AUTH_COUNT_BITPOS 0
+#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \
+	(((val) & QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS)
+
+struct icp_qat_hw_auth_setup {
+	struct icp_qat_hw_auth_config auth_config;
+	struct icp_qat_hw_auth_counter auth_counter;
+};
+
+#define QAT_HW_DEFAULT_ALIGNMENT 8
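+/* Round val up to the next multiple of n; n must be a power of two. */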
+#define QAT_HW_ROUND_UP(val, n) (((val) + ((n) - 1)) & (~((n) - 1)))
+#define ICP_QAT_HW_NULL_STATE1_SZ 32
+#define ICP_QAT_HW_MD5_STATE1_SZ 16
+#define ICP_QAT_HW_SHA1_STATE1_SZ 20
+#define ICP_QAT_HW_SHA224_STATE1_SZ 32
+#define ICP_QAT_HW_SHA256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32
+#define ICP_QAT_HW_SHA384_STATE1_SZ 64
+#define ICP_QAT_HW_SHA512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64
+#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28
+#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48
+#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16
+#define ICP_QAT_HW_AES_F9_STATE1_SZ 32
+#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16
+#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_NULL_STATE2_SZ 32
+#define ICP_QAT_HW_MD5_STATE2_SZ 16
+#define ICP_QAT_HW_SHA1_STATE2_SZ 20
+#define ICP_QAT_HW_SHA224_STATE2_SZ 32
+#define ICP_QAT_HW_SHA256_STATE2_SZ 32
+#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0
+#define ICP_QAT_HW_SHA384_STATE2_SZ 64
+#define ICP_QAT_HW_SHA512_STATE2_SZ 64
+#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0
+#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0
+#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16
+#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16
+#define ICP_QAT_HW_F9_IK_SZ 16
+#define ICP_QAT_HW_F9_FK_SZ 16
+#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ (ICP_QAT_HW_F9_IK_SZ + \
+	ICP_QAT_HW_F9_FK_SZ)
+#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
+#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
+#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_GALOIS_H_SZ 16
+#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
+#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+
+struct icp_qat_hw_auth_sha512 {
+	struct icp_qat_hw_auth_setup inner_setup;
+	uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ];
+	struct icp_qat_hw_auth_setup outer_setup;
+	uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ];
+};
+
+struct icp_qat_hw_auth_algo_blk {
+	struct icp_qat_hw_auth_sha512 sha;
+};
+
+#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0
+#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF
+
+enum icp_qat_hw_cipher_algo {
+	ICP_QAT_HW_CIPHER_ALGO_NULL = 0,
+	ICP_QAT_HW_CIPHER_ALGO_DES = 1,
+	ICP_QAT_HW_CIPHER_ALGO_3DES = 2,
+	ICP_QAT_HW_CIPHER_ALGO_AES128 = 3,
+	ICP_QAT_HW_CIPHER_ALGO_AES192 = 4,
+	ICP_QAT_HW_CIPHER_ALGO_AES256 = 5,
+	ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6,
+	ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7,
+	ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8,
+	ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
+	ICP_QAT_HW_CIPHER_DELIMITER = 10
+};
+
+enum icp_qat_hw_cipher_mode {
+	ICP_QAT_HW_CIPHER_ECB_MODE = 0,
+	ICP_QAT_HW_CIPHER_CBC_MODE = 1,
+	ICP_QAT_HW_CIPHER_CTR_MODE = 2,
+	ICP_QAT_HW_CIPHER_F8_MODE = 3,
+	ICP_QAT_HW_CIPHER_XTS_MODE = 6,
+	ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7
+};
+
+struct icp_qat_hw_cipher_config {
+	uint32_t val;
+	uint32_t reserved;
+};
+
+enum icp_qat_hw_cipher_dir {
+	ICP_QAT_HW_CIPHER_ENCRYPT = 0,
+	ICP_QAT_HW_CIPHER_DECRYPT = 1,
+};
+
+enum icp_qat_hw_cipher_convert {
+	ICP_QAT_HW_CIPHER_NO_CONVERT = 0,
+	ICP_QAT_HW_CIPHER_KEY_CONVERT = 1,
+};
+
+#define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_MASK 0xF
+#define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_MASK 0xF
+#define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_MASK 0x1
+#define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_MASK 0x1
+#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
+#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
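+/* Builds the 32-bit cipher config word: algo in bits 0-3, mode in bits
+ * 4-7, direction (encrypt/decrypt) at bit 8 and the key-convert flag at
+ * bit 9.
+ */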
+#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \
+	(((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \
+	((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \
+	((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \
+	((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS))
+#define ICP_QAT_HW_DES_BLK_SZ 8
+#define ICP_QAT_HW_3DES_BLK_SZ 8
+#define ICP_QAT_HW_NULL_BLK_SZ 8
+#define ICP_QAT_HW_AES_BLK_SZ 16
+#define ICP_QAT_HW_KASUMI_BLK_SZ 8
+#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_NULL_KEY_SZ 256
+#define ICP_QAT_HW_DES_KEY_SZ 8
+#define ICP_QAT_HW_3DES_KEY_SZ 24
+#define ICP_QAT_HW_AES_128_KEY_SZ 16
+#define ICP_QAT_HW_AES_192_KEY_SZ 24
+#define ICP_QAT_HW_AES_256_KEY_SZ 32
+#define ICP_QAT_HW_AES_128_F8_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_192_F8_KEY_SZ (ICP_QAT_HW_AES_192_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_F8_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_128_XTS_KEY_SZ (ICP_QAT_HW_AES_128_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_AES_256_XTS_KEY_SZ (ICP_QAT_HW_AES_256_KEY_SZ * \
+	QAT_CIPHER_MODE_XTS_KEY_SZ_MULT)
+#define ICP_QAT_HW_KASUMI_KEY_SZ 16
+#define ICP_QAT_HW_KASUMI_F8_KEY_SZ (ICP_QAT_HW_KASUMI_KEY_SZ * \
+	QAT_CIPHER_MODE_F8_KEY_SZ_MULT)
+#define ICP_QAT_HW_ARC4_KEY_SZ 256
+#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16
+#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16
+#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16
+#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2
+#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024
+
+struct icp_qat_hw_cipher_aes256_f8 {
+	struct icp_qat_hw_cipher_config cipher_config;
+	uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ];
+};
+
+struct icp_qat_hw_cipher_algo_blk {
+	struct icp_qat_hw_cipher_aes256_f8 aes;
+} __rte_cache_aligned;
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
new file mode 100644
index 0000000..3968d52
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -0,0 +1,124 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+    * Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#ifndef _ICP_QAT_ALGS_H_
+#define _ICP_QAT_ALGS_H_
+#include <rte_memory.h>
+#include "icp_qat_hw.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_NO_CONVERT, \
+					ICP_QAT_HW_CIPHER_ENCRYPT)
+
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
+	ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
+					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+					ICP_QAT_HW_CIPHER_DECRYPT)
+
+struct qat_alg_buf {
+	uint32_t len;
+	uint32_t resrvd;
+	uint64_t addr;
+} __rte_packed;
+
+struct qat_alg_buf_list {
+	uint64_t resrvd;
+	uint32_t num_bufs;
+	uint32_t num_mapped_bufs;
+	struct qat_alg_buf bufers[];
+} __rte_packed __rte_cache_aligned;
+
+/* Common content descriptor */
+struct qat_alg_cd {
+	struct icp_qat_hw_cipher_algo_blk cipher;
+	struct icp_qat_hw_auth_algo_blk hash;
+} __rte_packed __rte_cache_aligned;
+
+struct qat_session {
+	enum icp_qat_fw_la_cmd_id qat_cmd;
+	enum icp_qat_hw_cipher_algo qat_cipher_alg;
+	enum icp_qat_hw_cipher_dir qat_dir;
+	enum icp_qat_hw_cipher_mode qat_mode;
+	enum icp_qat_hw_auth_algo qat_hash_alg;
+	struct qat_alg_cd cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	uint8_t salt[ICP_QAT_HW_AES_BLK_SZ];
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+struct qat_alg_ablkcipher_cd {
+	struct icp_qat_hw_cipher_algo_blk *cd;
+	phys_addr_t cd_paddr;
+	struct icp_qat_fw_la_bulk_req fw_req;
+	struct qat_crypto_instance *inst;
+	rte_spinlock_t lock;	/* protects this struct */
+};
+
+int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg);
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cd,
+					uint8_t *enckey, uint32_t enckeylen,
+					uint8_t *authkey, uint32_t authkeylen,
+					uint32_t digestsize);
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header);
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cd,
+					int alg, const uint8_t *key,
+					unsigned int keylen);
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
+
+#endif
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
new file mode 100644
index 0000000..7d5c9d3
--- /dev/null
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -0,0 +1,462 @@
+/*
+  This file is provided under a dual BSD/GPLv2 license.  When using or
+  redistributing this file, you may do so under either license.
+
+  GPL LICENSE SUMMARY
+  Copyright(c) 2015 Intel Corporation.
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of version 2 of the GNU General Public License as
+  published by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  Contact Information:
+  qat-linux@intel.com
+
+  BSD LICENSE
+  Copyright(c) 2015 Intel Corporation.
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions
+  are met:
+
+	* Redistributions of source code must retain the above copyright
+	  notice, this list of conditions and the following disclaimer.
+	* Redistributions in binary form must reproduce the above copyright
+	  notice, this list of conditions and the following disclaimer in
+	  the documentation and/or other materials provided with the
+	  distribution.
+	* Neither the name of Intel Corporation nor the names of its
+	  contributors may be used to endorse or promote products derived
+	  from this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#include <rte_memcpy.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include "../qat_logs.h"
+#include "qat_algs.h"
+
+#include <openssl/sha.h>	/* Needed to calculate pre-compute values */
+
+/* Returns the size in bytes per hash algo for the state1 size field in
+ * cd_ctrl. This is the digest size rounded up to the nearest quadword.
+ */
+static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA1_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA256_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum state1 size in this case */
+		return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
+						QAT_HW_DEFAULT_ALIGNMENT);
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* Returns the digest size in bytes per hash algo. */
+static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return ICP_QAT_HW_SHA1_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return ICP_QAT_HW_SHA256_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum digest size in this case */
+		return ICP_QAT_HW_SHA512_STATE1_SZ;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
+/* Returns the block size in bytes per hash algo. */
+static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
+{
+	switch (qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		return SHA_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		return SHA256_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		return SHA512_CBLOCK;
+	case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
+		/* return maximum block size in this case */
+		return SHA512_CBLOCK;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
+		return -EFAULT;
+	}
+	return -EFAULT;
+}
+
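+/* The partial_hash_* helpers below run a single compression-function
+ * pass over one block via the OpenSSL SHAx_Transform() routines and copy
+ * out the raw intermediate state. Unlike SHAx_Final(), no length padding
+ * is applied, which is what the HMAC precompute requires.
+ */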
+static int partial_hash_sha1(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA_CTX ctx;
+
+	if (!SHA1_Init(&ctx))
+		return -EFAULT;
+	SHA1_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha256(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA256_CTX ctx;
+
+	if (!SHA256_Init(&ctx))
+		return -EFAULT;
+	SHA256_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA256_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_sha512(uint8_t *data_in, uint8_t *data_out)
+{
+	SHA512_CTX ctx;
+
+	if (!SHA512_Init(&ctx))
+		return -EFAULT;
+	SHA512_Transform(&ctx, data_in);
+	rte_memcpy(data_out, &ctx, SHA512_DIGEST_LENGTH);
+	return 0;
+}
+
+static int partial_hash_compute(enum icp_qat_hw_auth_algo hash_alg,
+			uint8_t *data_in,
+			uint8_t *data_out)
+{
+	int digest_size;
+	uint8_t digest[qat_hash_get_digest_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint32_t *hash_state_out_be32;
+	uint64_t *hash_state_out_be64;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	digest_size = qat_hash_get_digest_size(hash_alg);
+	if (digest_size <= 0)
+		return -EFAULT;
+
+	hash_state_out_be32 = (uint32_t *)data_out;
+	hash_state_out_be64 = (uint64_t *)data_out;
+
+	switch (hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		if (partial_hash_sha1(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		if (partial_hash_sha256(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 2; i++, hash_state_out_be32++)
+			*hash_state_out_be32 =
+				rte_bswap32(*(((uint32_t *)digest)+i));
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		if (partial_hash_sha512(data_in, digest))
+			return -EFAULT;
+		for (i = 0; i < digest_size >> 3; i++, hash_state_out_be64++)
+			*hash_state_out_be64 =
+				rte_bswap64(*(((uint64_t *)digest)+i));
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid hash alg %u", hash_alg);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+#define HMAC_IPAD_VALUE	0x36
+#define HMAC_OPAD_VALUE	0x5c
+
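+/* HMAC(key, msg) = H((key ^ opad) || H((key ^ ipad) || msg)). This
+ * precompute hashes one block each of (key ^ ipad) and (key ^ opad) so
+ * the hardware can resume from the resulting partial states rather than
+ * re-hashing the padded key on every request.
+ */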
+static int qat_alg_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
+				const uint8_t *auth_key,
+				uint16_t auth_keylen,
+				uint8_t *p_state_buf,
+				uint16_t *p_state_len)
+{
+	int block_size;
+	uint8_t ipad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	uint8_t opad[qat_hash_get_block_size(ICP_QAT_HW_AUTH_ALGO_DELIMITER)];
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+	block_size = qat_hash_get_block_size(hash_alg);
+	if (block_size <= 0)
+		return -EFAULT;
+	/* init ipad and opad from key and xor with fixed values */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+
+	if (auth_keylen > (unsigned int)block_size) {
+		PMD_DRV_LOG(ERR, "invalid keylen %u", auth_keylen);
+		return -EFAULT;
+	}
+	rte_memcpy(ipad, auth_key, auth_keylen);
+	rte_memcpy(opad, auth_key, auth_keylen);
+
+	for (i = 0; i < block_size; i++) {
+		uint8_t *ipad_ptr = ipad + i;
+		uint8_t *opad_ptr = opad + i;
+		*ipad_ptr ^= HMAC_IPAD_VALUE;
+		*opad_ptr ^= HMAC_OPAD_VALUE;
+	}
+
+	/* do partial hash of ipad and copy to state1 */
+	if (partial_hash_compute(hash_alg, ipad, p_state_buf)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "ipad precompute failed");
+		return -EFAULT;
+	}
+
+	/* The state length is a multiple of 8, so it may be larger than the
+	 * digest. Put the partial hash of opad state_len bytes after state1.
+	 */
+	*p_state_len = qat_hash_get_state1_size(hash_alg);
+	if (partial_hash_compute(hash_alg, opad, p_state_buf + *p_state_len)) {
+		memset(ipad, 0, block_size);
+		memset(opad, 0, block_size);
+		PMD_DRV_LOG(ERR, "opad precompute failed");
+		return -EFAULT;
+	}
+
+	/*  don't leave data lying around */
+	memset(ipad, 0, block_size);
+	memset(opad, 0, block_size);
+	return 0;
+}
+
+void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
+{
+	PMD_INIT_FUNC_TRACE();
+	header->hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	header->comn_req_flags =
+		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
+					QAT_COMN_PTR_TYPE_FLAT);
+	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+				  ICP_QAT_FW_LA_PARTIAL_NONE);
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
+	ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_PROTO);
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
+}
+
+int qat_alg_aead_session_create_content_desc(struct qat_session *cdesc,
+				uint8_t *cipherkey, uint32_t cipherkeylen,
+				uint8_t *authkey, uint32_t authkeylen,
+				uint32_t digestsize)
+{
+	struct qat_alg_cd *content_desc = &cdesc->cd;
+	struct icp_qat_hw_cipher_algo_blk *cipher = &content_desc->cipher;
+	struct icp_qat_hw_auth_algo_blk *hash = &content_desc->hash;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *auth_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		sizeof(struct icp_qat_fw_la_cipher_req_params));
+	enum icp_qat_hw_cipher_convert key_convert;
+	uint16_t state_size = 0;
+
+	PMD_INIT_FUNC_TRACE();
+	/* CD setup */
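+	/* For encryption no key conversion is needed and the digest is
+	 * produced and returned; for decryption the key is converted for
+	 * the decrypt direction and the computed digest is compared
+	 * against the one in the buffer rather than returned.
+	 */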
+	if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) {
+		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
+	} else {
+		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+				ICP_QAT_FW_LA_NO_RET_AUTH_RES);
+		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+				   ICP_QAT_FW_LA_CMP_AUTH_RES);
+	}
+
+	cipher->aes.cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(cdesc->qat_mode,
+			cdesc->qat_cipher_alg, key_convert, cdesc->qat_dir);
+	memcpy(cipher->aes.key, cipherkey, cipherkeylen);
+
+	hash->sha.inner_setup.auth_config.reserved = 0;
+	hash->sha.inner_setup.auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1,
+				cdesc->qat_hash_alg, digestsize);
+	hash->sha.inner_setup.auth_counter.counter =
+		rte_bswap32(qat_hash_get_block_size(cdesc->qat_hash_alg));
+
+	if (qat_alg_do_precomputes(cdesc->qat_hash_alg,
+		authkey, authkeylen, (uint8_t *)(hash->sha.state1), &state_size)) {
+		PMD_DRV_LOG(ERR, "precomputes failed");
+		return -EFAULT;
+	}
+
+	/* Request template setup */
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = cdesc->qat_cmd;
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+					   ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = sizeof(struct qat_alg_cd) >> 3;
+
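+	/* Sizes and offsets in the content descriptor control blocks below
+	 * are expressed in quadwords, hence the >> 3 conversions from
+	 * bytes.
+	 */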
+	/* Cipher CD config setup */
+	cipher_cd_ctrl->cipher_key_sz = cipherkeylen >> 3;
+	cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cipher_cd_ctrl->cipher_cfg_offset = 0;
+
+	/* Auth CD config setup */
+	hash_cd_ctrl->hash_cfg_offset = ((char *)hash - (char *)cipher) >> 3;
+	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	hash_cd_ctrl->inner_res_sz = digestsize;
+	hash_cd_ctrl->final_sz = digestsize;
+
+	switch (cdesc->qat_hash_alg) {
+	case ICP_QAT_HW_AUTH_ALGO_SHA1:
+		hash_cd_ctrl->inner_state2_sz =
+			RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA256:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA256_STATE2_SZ;
+		break;
+	case ICP_QAT_HW_AUTH_ALGO_SHA512:
+		hash_cd_ctrl->inner_state2_sz = ICP_QAT_HW_SHA512_STATE2_SZ;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "invalid HASH alg %u", cdesc->qat_hash_alg);
+		return -EFAULT;
+	}
+	hash_cd_ctrl->inner_state1_sz = state_size;
+	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+			((sizeof(struct icp_qat_hw_auth_setup) +
+			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8)) >> 3);
+	auth_param->auth_res_sz = digestsize;
+
+	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_AUTH);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+	} else {
+		PMD_DRV_LOG(ERR, "invalid param, only authenticated "
+				"encryption supported");
+		return -EFAULT;
+	}
+	return 0;
+}
+
+static void qat_alg_ablkcipher_init_com(struct icp_qat_fw_la_bulk_req *req,
+					struct icp_qat_hw_cipher_algo_blk *cd,
+					const uint8_t *key, unsigned int keylen)
+{
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *header = &req->comn_hdr;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_memcpy(cd->aes.key, key, keylen);
+	qat_alg_init_common_hdr(header);
+	header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER;
+	cd_pars->u.s.content_desc_params_sz =
+				sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3;
+	/* Cipher CD config setup */
+	cd_ctrl->cipher_key_sz = keylen >> 3;
+	cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+	cd_ctrl->cipher_cfg_offset = 0;
+	ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_CIPHER);
+	ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR);
+}
+
+void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *enc_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, enc_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_ENC(alg);
+}
+
+void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_cd *cdesc,
+					int alg, const uint8_t *key,
+					unsigned int keylen)
+{
+	struct icp_qat_hw_cipher_algo_blk *dec_cd = cdesc->cd;
+	struct icp_qat_fw_la_bulk_req *req = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars;
+
+	PMD_INIT_FUNC_TRACE();
+	qat_alg_ablkcipher_init_com(req, dec_cd, key, keylen);
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	dec_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_CBC_DEC(alg);
+}
+
+int qat_alg_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
+{
+	switch (key_len) {
+	case ICP_QAT_HW_AES_128_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+		break;
+	case ICP_QAT_HW_AES_192_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES192;
+		break;
+	case ICP_QAT_HW_AES_256_KEY_SZ:
+		*alg = ICP_QAT_HW_CIPHER_ALGO_AES256;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000..d026562
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,469 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <strings.h>
+#include <string.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+
+#include "qat_logs.h"
+#include "qat_algs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift);
+static inline int qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg);
+static void qat_crypto_sessionbuf_init(struct rte_mempool *mp, void *opaque_arg,
+		void *_s, unsigned i);
+
+void qat_crypto_sym_destroy_session(struct rte_cryptodev *dev,
+		struct rte_cryptodev_session *session)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	if (session != NULL && internals->sess_mp != NULL)
+		rte_mempool_put(internals->sess_mp, session);
+}
+
+struct rte_cryptodev_session *
+qat_crypto_sym_create_session(struct rte_cryptodev *dev,
+		struct rte_crypto_cipher_params *cipher_setup_data,
+		struct rte_crypto_hash_params *hash_setup_data,
+		enum rte_crypto_operation_chain op_type)
+{
+	struct qat_session *session;
+	struct qat_pmd_private *internals = dev->data->dev_private;
+	enum icp_qat_hw_cipher_algo cipher_alg;
+	enum icp_qat_hw_auth_algo hash_alg;
+	enum icp_qat_hw_cipher_mode cipher_mode;
+	uint32_t digest_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (hash_setup_data == NULL || cipher_setup_data == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters - currently only "
+				"authenticated encryption supported");
+		return NULL;
+	}
+	switch (cipher_setup_data->algo) {
+	case RTE_CRYPTO_SYM_CIPHER_AES_CBC:
+		if (qat_alg_validate_aes_key(cipher_setup_data->key.length, &cipher_alg) != 0) {
+			PMD_DRV_LOG(ERR, "Invalid AES cipher key size");
+			return NULL;
+		}
+		cipher_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		break;
+	case RTE_CRYPTO_SYM_CIPHER_NULL:
+	case RTE_CRYPTO_SYM_CIPHER_3DES_ECB:
+	case RTE_CRYPTO_SYM_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_SYM_CIPHER_AES_ECB:
+	case RTE_CRYPTO_SYM_CIPHER_AES_CTR:
+	case RTE_CRYPTO_SYM_CIPHER_AES_GCM:
+	case RTE_CRYPTO_SYM_CIPHER_AES_CCM:
+	case RTE_CRYPTO_SYM_CIPHER_KASUMI_F8:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+						cipher_setup_data->algo);
+		return NULL;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Cipher specified %u",
+						cipher_setup_data->algo);
+		return NULL;
+	}
+	switch (hash_setup_data->algo) {
+	case RTE_CRYPTO_SYM_HASH_SHA1_HMAC:
+		hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA256_HMAC:
+		hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		break;
+	case RTE_CRYPTO_SYM_HASH_SHA512_HMAC:
+		hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		break;
+
+	case RTE_CRYPTO_SYM_HASH_NONE:
+	case RTE_CRYPTO_SYM_HASH_SHA1:
+	case RTE_CRYPTO_SYM_HASH_SHA256:
+	case RTE_CRYPTO_SYM_HASH_SHA512:
+	case RTE_CRYPTO_SYM_HASH_SHA224:
+	case RTE_CRYPTO_SYM_HASH_SHA224_HMAC:
+	case RTE_CRYPTO_SYM_HASH_SHA384:
+	case RTE_CRYPTO_SYM_HASH_SHA384_HMAC:
+	case RTE_CRYPTO_SYM_HASH_MD5:
+	case RTE_CRYPTO_SYM_HASH_MD5_HMAC:
+	case RTE_CRYPTO_SYM_HASH_AES_XCBC_MAC:
+	case RTE_CRYPTO_SYM_HASH_AES_CCM:
+	case RTE_CRYPTO_SYM_HASH_AES_GCM:
+	case RTE_CRYPTO_SYM_HASH_KASUMI_F9:
+	case RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2:
+	case RTE_CRYPTO_SYM_HASH_AES_CMAC:
+	case RTE_CRYPTO_SYM_HASH_AES_GMAC:
+	case RTE_CRYPTO_SYM_HASH_AES_CBC_MAC:
+	case RTE_CRYPTO_SYM_HASH_ZUC_EIA3:
+		PMD_DRV_LOG(ERR, "Crypto: Unsupported hash alg %u",
+				hash_setup_data->algo);
+		return NULL;
+	default:
+		PMD_DRV_LOG(ERR, "Crypto: Undefined Hash algo %u specified",
+				hash_setup_data->algo);
+		return NULL;
+	}
+
+	if (rte_mempool_get(internals->sess_mp, (void **)&session)) {
+		PMD_DRV_LOG(ERR, "Crypto: Failed to get session memory");
+		return NULL;
+	}
+
+	session->qat_cipher_alg = cipher_alg;
+	session->qat_hash_alg = hash_alg;
+	session->qat_mode = cipher_mode;
+	digest_size = hash_setup_data->digest_length;
+
+	if (cipher_setup_data->op == RTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT)
+		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	else
+		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+
+	if (op_type == RTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER)
+		session->qat_cmd = ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+	else if (op_type == RTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH)
+		session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+	else {
+		PMD_DRV_LOG(ERR, "Crypto: Invalid operation chaining - "
+				"only authenticated encryption supported");
+		goto error_out;
+	}
+	qat_alg_aead_session_create_content_desc(session,
+		cipher_setup_data->key.data,
+		cipher_setup_data->key.length,
+		hash_setup_data->auth_key.data,
+		hash_setup_data->auth_key.length,
+		digest_size);
+	return (struct rte_cryptodev_session *)session;
+
+error_out:
+	rte_mempool_put(internals->sess_mp, session);
+	return NULL;
+}
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+		unsigned nb_objs, unsigned obj_cache_size, int socket_id)
+{
+	struct qat_pmd_private *internals = dev->data->dev_private;
+	uint16_t qat_session_size = RTE_ALIGN_CEIL(sizeof(struct qat_session), 8);
+
+	unsigned n = snprintf(internals->sess_mp_name,
+			sizeof(internals->sess_mp_name), "qat_pmd_%d_sess_mp",
+			dev->data->dev_id);
+
+	if (n >= sizeof(internals->sess_mp_name)) {
+		PMD_DRV_LOG(ERR, "Unable to create unique name for session mempool");
+		return -ENOMEM;
+	}
+	internals->sess_mp = rte_mempool_lookup(internals->sess_mp_name);
+	if (internals->sess_mp != NULL) {
+		if (internals->sess_mp->elt_size != qat_session_size ||
+				internals->sess_mp->cache_size < obj_cache_size ||
+				internals->sess_mp->size < nb_objs) {
+
+			PMD_DRV_LOG(ERR, "%s mempool already exists with different "
+						"initialisation parameters",
+						internals->sess_mp_name);
+			return -ENOMEM;
+		}
+		return 0;
+	}
+
+	internals->sess_mp = rte_mempool_create(
+			internals->sess_mp_name,	/* mempool name */
+			nb_objs,			/* number of elements */
+			qat_session_size,		/* element size */
+			obj_cache_size,			/* per-lcore cache size */
+			0,				/* private data size */
+			NULL,				/* mempool constructor */
+			NULL,				/* mempool constructor argument */
+			qat_crypto_sessionbuf_init,	/* object constructor */
+			NULL,				/* object constructor argument */
+			socket_id,			/* socket id */
+			0);				/* flags */
+
+	if (internals->sess_mp == NULL) {
+		PMD_DRV_LOG(ERR, "%s mempool allocation failed",
+				internals->sess_mp_name);
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+uint16_t qat_crypto_pkt_tx_burst(void *qp, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t nb_pkts_sent = 0;
+	struct rte_mbuf **cur_tx_pkt = tx_pkts;
+	int ret = 0;
+
+	queue = &(tmp_qp->tx_q);
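+	/* inflights16 counts requests placed on the tx ring but not yet
+	 * drained from the paired rx ring; backing off once it reaches
+	 * max_inflights keeps the producer from wrapping over responses
+	 * still in flight.
+	 */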
+	while (nb_pkts_sent != nb_pkts) {
+		if (rte_atomic16_add_return(&tmp_qp->inflights16, 1) > queue->max_inflights) {
+			rte_atomic16_sub(&tmp_qp->inflights16, 1);
+			if (nb_pkts_sent == 0)
+				return 0;
+			goto kick_tail;
+		}
+		ret = qat_alg_write_mbuf_entry(*cur_tx_pkt,
+			(uint8_t *)queue->base_addr + queue->tail);
+		if (ret != 0) {
+			tmp_qp->stats.enqueue_err_count++;
+			/* The request was not enqueued; undo its inflight count. */
+			rte_atomic16_sub(&tmp_qp->inflights16, 1);
+			if (nb_pkts_sent == 0)
+				return 0;
+			goto kick_tail;
+		}
+
+		queue->tail = adf_modulo(queue->tail +
+				queue->msg_size,
+				ADF_RING_SIZE_MODULO(queue->queue_size));
+		nb_pkts_sent++;
+		cur_tx_pkt++;
+	}
+kick_tail:
+	WRITE_CSR_RING_TAIL(tmp_qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue->tail);
+	tmp_qp->stats.enqueued_count += nb_pkts_sent;
+	return nb_pkts_sent;
+}
+
+uint16_t
+qat_crypto_pkt_rx_burst(void *qp, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	uint32_t msg_counter = 0;
+	struct rte_mbuf *rx_mbuf;
+	struct icp_qat_fw_comn_resp *resp_msg;
+
+	queue = &(tmp_qp->rx_q);
+
+	resp_msg = (struct icp_qat_fw_comn_resp *)((uint8_t *)queue->base_addr + queue->head);
+	while (*(uint32_t *)resp_msg != ADF_RING_EMPTY_SIG && msg_counter != nb_pkts) {
+		rx_mbuf = (struct rte_mbuf *)(resp_msg->opaque_data);
+		if (ICP_QAT_FW_COMN_STATUS_FLAG_OK !=
+				ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(
+						resp_msg->comn_hdr.comn_status)) {
+			rx_mbuf->ol_flags |= PKT_RX_CRYPTO_DIGEST_BAD;
+		}
+		*(uint32_t *)resp_msg = ADF_RING_EMPTY_SIG;
+		queue->head = adf_modulo(queue->head +
+					queue->msg_size,
+					ADF_RING_SIZE_MODULO(queue->queue_size));
+		resp_msg = (struct icp_qat_fw_comn_resp *)((uint8_t *)queue->base_addr + queue->head);
+
+		*rx_pkts = rx_mbuf;
+		rx_pkts++;
+		msg_counter++;
+	}
+	if (msg_counter > 0) {
+		WRITE_CSR_RING_HEAD(tmp_qp->mmap_bar_addr,
+					queue->hw_bundle_number,
+					queue->hw_queue_number, queue->head);
+		rte_atomic16_sub(&tmp_qp->inflights16, msg_counter);
+		tmp_qp->stats.dequeued_count += msg_counter;
+	}
+	return msg_counter;
+}
+
+static inline int qat_alg_write_mbuf_entry(struct rte_mbuf *mbuf, uint8_t *out_msg)
+{
+	struct rte_crypto_op_data *rte_op_data = mbuf->crypto_op;
+	struct qat_session *ctx;
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	struct icp_qat_fw_la_bulk_req *qat_req;
+
+	if (unlikely(rte_op_data->type == RTE_CRYPTO_OP_SESSIONLESS)) {
+		PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented requests "
+				"mbuf (%p) is sessionless.", mbuf);
+		return -EINVAL;
+	}
+	ctx = (struct qat_session *)rte_op_data->session;
+	qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
+	*qat_req = ctx->fw_req;
+	qat_req->comn_mid.opaque_data = (uint64_t)mbuf;
+
+	/*
+	 * The following code assumes:
+	 * - single entry buffer.
+	 * - always in place.
+	 */
+	qat_req->comn_mid.dst_length = qat_req->comn_mid.src_length = mbuf->data_len;
+	qat_req->comn_mid.dest_data_addr = qat_req->comn_mid.src_data_addr
+							= rte_pktmbuf_mtophys(mbuf);
+
+	cipher_param = (void *)&qat_req->serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param + sizeof(*cipher_param));
+
+	cipher_param->cipher_length = rte_op_data->data.to_cipher.length;
+	cipher_param->cipher_offset = rte_op_data->data.to_cipher.offset;
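+	/* An IV that fits the request's 16-byte inline array is copied in;
+	 * otherwise it is referenced by its physical address instead.
+	 */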
+	if (rte_op_data->iv.length &&
+		(rte_op_data->iv.length <= sizeof(cipher_param->u.cipher_IV_array))) {
+		rte_memcpy(cipher_param->u.cipher_IV_array, rte_op_data->iv.data,
+							rte_op_data->iv.length);
+	} else {
+		ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(qat_req->comn_hdr.serv_specif_flags,
+				ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+		cipher_param->u.s.cipher_IV_ptr = rte_op_data->iv.phys_addr;
+	}
+	if (rte_op_data->digest.phys_addr) {
+		ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(qat_req->comn_hdr.serv_specif_flags,
+					ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
+		auth_param->auth_res_addr = rte_op_data->digest.phys_addr;
+	}
+	auth_param->auth_off = rte_op_data->data.to_hash.offset;
+	auth_param->auth_len = rte_op_data->data.to_hash.length;
+	return 0;
+}
+
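+/* Ring sizes are powers of two, so data mod 2^shift can be computed as
+ * data - ((data >> shift) << shift), i.e. masking off the top bits.
+ */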
+static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
+{
+	uint32_t div = data >> shift;
+	uint32_t mult = div << shift;
+
+	return data - mult;
+}
+
+static void qat_crypto_sessionbuf_init(struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		 void *_s,
+		 __rte_unused unsigned i)
+{
+	struct qat_session *s = _s;
+
+	PMD_INIT_FUNC_TRACE();
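+	/* Resolve the content descriptor's physical address once, at pool
+	 * population time, so the data path never needs to translate it.
+	 */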
+	s->cd_paddr = rte_mempool_virt2phy(mp, &s->cd);
+}
+
+int qat_dev_config(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+int qat_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+	return -ENOTSUP;
+}
+
+void qat_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+void qat_dev_close(__rte_unused struct rte_cryptodev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+void qat_dev_info_get(__rte_unused struct rte_cryptodev *dev,
+						struct rte_cryptodev_info *info)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (info != NULL) {
+		info->max_queue_pairs =
+				ADF_NUM_SYM_QPS_PER_BUNDLE *
+				ADF_NUM_BUNDLES_PER_DEV;
+		info->dev_type = RTE_CRYPTODEV_QAT_PMD;
+	}
+}
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid stats ptr NULL");
+		return;
+	}
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		if (qp[i] == NULL) {
+			PMD_DRV_LOG(DEBUG, "Uninitialised queue pair");
+			continue;
+		}
+
+		stats->enqueued_count += qp[i]->stats.enqueued_count;
+		stats->dequeued_count += qp[i]->stats.dequeued_count;
+		stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+	}
+}
+
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev)
+{
+	int i;
+	struct qat_qp **qp = (struct qat_qp **)(dev->data->queue_pairs);
+
+	PMD_INIT_FUNC_TRACE();
+	for (i = 0; i < dev->data->nb_queue_pairs; i++)
+		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
+	PMD_DRV_LOG(DEBUG, "QAT crypto: stats cleared");
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000..1be3f2f
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,99 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev.h>
+#include <rte_memzone.h>
+
+/**
+ * Structure associated with each queue.
+ */
+struct qat_queue {
+	char		memz_name[RTE_MEMZONE_NAMESIZE];
+	void		*base_addr;		/* Base address */
+	phys_addr_t	base_phys_addr;		/* Queue physical address */
+	uint32_t	head;			/* Shadow copy of the head */
+	uint32_t	tail;			/* Shadow copy of the tail */
+	uint32_t	msg_size;
+	uint16_t	max_inflights;
+	uint32_t	queue_size;
+	uint8_t		hw_bundle_number;
+	uint8_t		hw_queue_number;	 /* HW queue aka ring offset on bundle */
+};
+
+struct qat_qp {
+	void			*mmap_bar_addr;
+	rte_atomic16_t		inflights16;
+	struct	qat_queue	tx_q;
+	struct	qat_queue	rx_q;
+	struct	rte_cryptodev_stats stats;
+} __rte_cache_aligned;
+
+/** private data structure for each QAT device */
+struct qat_pmd_private {
+	char sess_mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *sess_mp;
+};
+
+int qat_dev_config(struct rte_cryptodev *dev);
+int qat_dev_start(struct rte_cryptodev *dev);
+void qat_dev_stop(struct rte_cryptodev *dev);
+void qat_dev_close(struct rte_cryptodev *dev);
+void qat_dev_info_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_info *info);
+
+void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats);
+void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+void qat_crypto_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+int
+qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
+	unsigned nb_objs, unsigned obj_cache_size, int socket_id);
+struct rte_cryptodev_session *
+qat_crypto_sym_create_session(struct rte_cryptodev *dev,
+	struct rte_crypto_cipher_params *cipher_setup_data,
+	struct rte_crypto_hash_params *hash_setup_data,
+	enum rte_crypto_operation_chain op_type);
+void qat_crypto_sym_destroy_session(struct rte_cryptodev *dev __rte_unused,
+	struct rte_cryptodev_session *session);
+
+uint16_t qat_crypto_pkt_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t qat_crypto_pkt_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_logs.h b/drivers/crypto/qat/qat_logs.h
new file mode 100644
index 0000000..04293e3
--- /dev/null
+++ b/drivers/crypto/qat/qat_logs.h
@@ -0,0 +1,78 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _QAT_LOGS_H_
+#define _QAT_LOGS_H_
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \
+		"PMD: %s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_QAT_DEBUG_DRIVER
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
new file mode 100644
index 0000000..57aa461
--- /dev/null
+++ b/drivers/crypto/qat/qat_qp.c
@@ -0,0 +1,372 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_cryptodev.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+
+#include "qat_logs.h"
+#include "qat_crypto.h"
+#include "adf_transport_access_macros.h"
+
+#define ADF_MAX_SYM_DESC			4096
+#define ADF_MIN_SYM_DESC			128
+#define ADF_SYM_TX_RING_DESC_SIZE		128
+#define ADF_SYM_RX_RING_DESC_SIZE		32
+#define ADF_SYM_TX_QUEUE_STARTOFF		2 /* Offset from bundle start to 1st Sym Tx queue */
+#define ADF_SYM_RX_QUEUE_STARTOFF		10
+#define ADF_ARB_REG_SLOT			0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+	uint32_t queue_size_bytes);
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t id, uint32_t nb_desc,
+	int socket_id);
+static int qat_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint32_t nb_desc, uint8_t desc_size,
+	int socket_id);
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+    uint32_t *queue_size_for_csr);
+static void adf_configure_queues(struct qat_qp *queue);
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr);
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr);
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *qp_name, uint32_t queue_size, int socket_id)
+{
+	const struct rte_memzone *mz;
+	unsigned memzone_flags = 0;
+	const struct rte_memseg *ms;
+
+	PMD_INIT_FUNC_TRACE();
+	mz = rte_memzone_lookup(qp_name);
+	if (mz != NULL) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			PMD_DRV_LOG(DEBUG, "re-use memzone already allocated for %s", qp_name);
+			return mz;
+		} else {
+			PMD_DRV_LOG(ERR, "Incompatible memzone already allocated %s, "
+					"size %u, socket %d. Requested size %u, socket %u",
+					qp_name, (uint32_t)mz->len, mz->socket_id,
+					queue_size, socket_id);
+			return NULL;
+		}
+	}
+
+	PMD_DRV_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					qp_name, queue_size, socket_id);
+	ms = rte_eal_get_physmem_layout();
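+	/* Hint the memzone page size from the first memseg's hugepage size
+	 * so the ring memory stays physically contiguous.
+	 */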
+	switch (ms[0].hugepage_sz) {
+	case RTE_PGSIZE_2M:
+		memzone_flags = RTE_MEMZONE_2MB;
+		break;
+	case RTE_PGSIZE_1G:
+		memzone_flags = RTE_MEMZONE_1GB;
+		break;
+	case RTE_PGSIZE_16M:
+		memzone_flags = RTE_MEMZONE_16MB;
+		break;
+	case RTE_PGSIZE_16G:
+		memzone_flags = RTE_MEMZONE_16GB;
+		break;
+	default:
+		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+	}
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return rte_memzone_reserve_bounded(qp_name, queue_size,
+		socket_id, 0, RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
+#else
+	return rte_memzone_reserve_aligned(qp_name, queue_size, socket_id,
+		memzone_flags, queue_size);
+#endif
+}
+
+int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp *qp;
+
+	PMD_INIT_FUNC_TRACE();
+	if ((qp_conf->nb_descriptors > ADF_MAX_SYM_DESC) ||
+		(qp_conf->nb_descriptors < ADF_MIN_SYM_DESC)) {
+		PMD_DRV_LOG(ERR, "Can't create qp for %u descriptors",
+				qp_conf->nb_descriptors);
+		return (-EINVAL);
+	}
+
+	/* mem_resource is an array member, so only its contents can be NULL */
+	if (dev->pci_dev->mem_resource[0].addr == NULL) {
+		PMD_DRV_LOG(ERR, "Could not find VF config space "
+				"(UIO driver attached?).");
+		return (-EINVAL);
+	}
+
+	if (queue_pair_id >=
+			(ADF_NUM_SYM_QPS_PER_BUNDLE * ADF_NUM_BUNDLES_PER_DEV)) {
+		PMD_DRV_LOG(ERR, "qp_id %u invalid for this device",
+				queue_pair_id);
+		return (-EINVAL);
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[queue_pair_id] != NULL) {
+		qat_crypto_sym_qp_release(dev, queue_pair_id);
+		dev->data->queue_pairs[queue_pair_id] = NULL;
+	}
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc("qat PMD qp queue", sizeof(*qp), RTE_CACHE_LINE_SIZE);
+	if (qp == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to alloc mem for qp struct");
+		return (-ENOMEM);
+	}
+	qp->mmap_bar_addr = dev->pci_dev->mem_resource[0].addr;
+	rte_atomic16_init(&qp->inflights16);
+
+	if (qat_tx_queue_create(dev, &(qp->tx_q),
+			queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Tx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+
+	if (qat_rx_queue_create(dev, &(qp->rx_q),
+			queue_pair_id, qp_conf->nb_descriptors, socket_id) != 0) {
+		PMD_DRV_LOG(ERR, "Rx queue create failed "
+				"queue_pair_id=%u", queue_pair_id);
+		goto create_err;
+	}
+	dev->data->queue_pairs[queue_pair_id] = qp;
+	adf_configure_queues(qp);
+	adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr);
+	return 0;
+
+create_err:
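+	/* The qp struct is freed, but any ring memzone already reserved stays
+	 * registered and is re-used by name on the next setup attempt, so it
+	 * is not leaked permanently. */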
+	rte_free(qp);
+	return (-EFAULT);
+}
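+
+/*
+ * Illustrative sketch only (not part of the driver): an application would
+ * normally reach the setup above through the generic cryptodev API proposed
+ * in patch 1/4, along the lines of:
+ *
+ *	struct rte_cryptodev_qp_conf qp_conf = {
+ *		.nb_descriptors = ADF_MAX_SYM_DESC
+ *	};
+ *
+ *	if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
+ *			rte_socket_id()) < 0)
+ *		rte_exit(EXIT_FAILURE, "qp setup failed");
+ */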
+
+void qat_crypto_sym_qp_release(struct rte_cryptodev *dev,
+		uint16_t queue_pair_id)
+{
+	struct qat_qp *qp = (struct qat_qp *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+	if (qp == NULL) {
+		PMD_DRV_LOG(DEBUG, "qp already freed");
+		return;
+	}
+
+	adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr);
+	/* Clear the dev data slot so the released qp is not left dangling */
+	dev->data->queue_pairs[queue_pair_id] = NULL;
+	rte_free(qp);
+}
+
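+/*
+ * qp_id maps onto the device layout as a bundle number
+ * (qp_id / ADF_NUM_SYM_QPS_PER_BUNDLE) plus a ring within that bundle
+ * (the remainder, offset by the fixed Tx or Rx ring start position).
+ */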
+static int qat_tx_queue_create(struct rte_cryptodev *dev,
+	struct qat_queue *queue, uint8_t qp_id,
+	uint32_t nb_desc, int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id / ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id % ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_TX_QUEUE_STARTOFF;
+	PMD_DRV_LOG(DEBUG, "TX ring for %u msgs: qp_id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number, queue->hw_queue_number);
+
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_TX_RING_DESC_SIZE, socket_id);
+}
+
+static int qat_rx_queue_create(struct rte_cryptodev *dev,
+		struct qat_queue *queue, uint8_t qp_id, uint32_t nb_desc,
+		int socket_id)
+{
+	PMD_INIT_FUNC_TRACE();
+	queue->hw_bundle_number = qp_id / ADF_NUM_SYM_QPS_PER_BUNDLE;
+	queue->hw_queue_number = (qp_id % ADF_NUM_SYM_QPS_PER_BUNDLE) +
+						ADF_SYM_RX_QUEUE_STARTOFF;
+
+	PMD_DRV_LOG(DEBUG, "RX ring for %u msgs: qp id %d, bundle %u, ring %u",
+		nb_desc, qp_id, queue->hw_bundle_number, queue->hw_queue_number);
+	return qat_queue_create(dev, queue, nb_desc,
+				ADF_SYM_RX_RING_DESC_SIZE, socket_id);
+}
+
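+/*
+ * Common ring creation: reserve DMA memory for the ring, check that its
+ * base is naturally aligned to the ring size, translate the size to its
+ * CSR encoding and program the ring base address register.
+ */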
+static int
+qat_queue_create(struct rte_cryptodev *dev, struct qat_queue *queue,
+		uint32_t nb_desc, uint8_t desc_size, int socket_id)
+{
+	uint64_t queue_base;
+	void *io_addr;
+	const struct rte_memzone *qp_mz;
+	uint32_t queue_size_bytes = nb_desc*desc_size;
+
+	PMD_INIT_FUNC_TRACE();
+	if (desc_size > ADF_MSG_SIZE_TO_BYTES(ADF_MAX_MSG_SIZE)) {
+		PMD_DRV_LOG(ERR, "Invalid descriptor size %d", desc_size);
+		return (-EINVAL);
+	}
+
+	/*
+	 * Allocate a memzone for the queue - create a unique name.
+	 */
+	snprintf(queue->memz_name, sizeof(queue->memz_name), "%s_%s_%d_%d_%d",
+		dev->driver->pci_drv.name, "qp_mem", dev->data->dev_id,
+		queue->hw_bundle_number, queue->hw_queue_number);
+	qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+			socket_id);
+	if (qp_mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate ring memzone");
+		return (-ENOMEM);
+	}
+
+	queue->base_addr = (char *)qp_mz->addr;
+	queue->base_phys_addr = qp_mz->phys_addr;
+	if (qat_qp_check_queue_alignment(queue->base_phys_addr,
+			queue_size_bytes)) {
+		PMD_DRV_LOG(ERR, "Invalid alignment on queue create "
+					"0x%"PRIx64, queue->base_phys_addr);
+		return (-EFAULT);
+	}
+
+	if (adf_verify_queue_size(desc_size, nb_desc,
+			&(queue->queue_size)) != 0) {
+		PMD_DRV_LOG(ERR, "Invalid queue size for %u descriptors "
+				"of size %u", nb_desc, desc_size);
+		return (-EINVAL);
+	}
+
+	queue->max_inflights = ADF_MAX_INFLIGHTS(queue->queue_size,
+					ADF_BYTES_TO_MSG_SIZE(desc_size));
+	PMD_DRV_LOG(DEBUG, "RING size in CSR: %u, in bytes %u, nb msgs %u,"
+				" msg_size %u, max_inflights %u ",
+				queue->queue_size, queue_size_bytes,
+				nb_desc, desc_size, queue->max_inflights);
+
+	if (queue->max_inflights < 2) {
+		PMD_DRV_LOG(ERR, "Invalid number of inflights %u "
+				"(must be at least 2)", queue->max_inflights);
+		return (-EINVAL);
+	}
+	queue->head = 0;
+	queue->tail = 0;
+	queue->msg_size = desc_size;
+
+	/*
+	 * Write an unused pattern to the queue memory.
+	 */
+	memset(queue->base_addr, 0x7F, queue_size_bytes);
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+					queue->queue_size);
+	io_addr = dev->pci_dev->mem_resource[0].addr;
+
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_base);
+	return 0;
+}
+
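+/*
+ * The hardware requires a ring's physical base address to be naturally
+ * aligned to the ring size; sizes are powers of two, so this reduces to
+ * checking that the low (size - 1) bits of the address are zero.
+ */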
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	PMD_INIT_FUNC_TRACE();
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return (-EINVAL);
+	return 0;
+}
+
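+/*
+ * The ring size is programmed into the CSR as a power-of-two exponent;
+ * walk the valid exponent range looking for the value whose byte size
+ * matches msg_size * msg_num exactly.
+ */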
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	PMD_INIT_FUNC_TRACE();
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	PMD_DRV_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return (-EINVAL);
+}
+
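+/* Set this Tx ring's bit in its bundle's arbitration enable register */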
+static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
+static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr)
+{
+	uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	PMD_INIT_FUNC_TRACE();
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	/* Clear, rather than XOR-toggle, the enable bit so that disabling an
+	 * already-disabled ring cannot re-enable arbitration for it. */
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+}
+
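+/*
+ * Program the ring config CSRs for the pair; the response ring is given
+ * a near-full watermark of 512 and a near-empty watermark of 0.
+ */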
+static void adf_configure_queues(struct qat_qp *qp)
+{
+	uint32_t queue_config;
+	struct qat_queue *queue = &qp->tx_q;
+
+	PMD_INIT_FUNC_TRACE();
+	queue_config = BUILD_RING_CONFIG(queue->queue_size);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+
+	queue = &qp->rx_q;
+	queue_config =
+			BUILD_RESP_RING_CONFIG(queue->queue_size,
+					ADF_RING_NEAR_WATERMARK_512,
+					ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number,
+			queue->hw_queue_number, queue_config);
+}
diff --git a/drivers/crypto/qat/rte_pmd_qat_version.map b/drivers/crypto/qat/rte_pmd_qat_version.map
new file mode 100644
index 0000000..fcf5bb3
--- /dev/null
+++ b/drivers/crypto/qat/rte_pmd_qat_version.map
@@ -0,0 +1,5 @@
+DPDK_2.0 {
+	global:
+
+	local: *;
+};
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
new file mode 100644
index 0000000..b7e9c62
--- /dev/null
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -0,0 +1,128 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev.h>
+
+#include "qat_crypto.h"
+#include "qat_logs.h"
+
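+/* Dispatch table mapping the generic cryptodev operations onto this PMD */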
+static struct rte_cryptodev_ops crypto_qat_ops = {
+
+/* Device related operations */
+		.dev_configure		= qat_dev_config,
+		.dev_start		= qat_dev_start,
+		.dev_stop		= qat_dev_stop,
+		.dev_close		= qat_dev_close,
+		.dev_infos_get		= qat_dev_info_get,
+
+		.stats_get		= qat_crypto_sym_stats_get,
+		.stats_reset		= qat_crypto_sym_stats_reset,
+		.queue_pair_setup	= qat_crypto_sym_qp_setup,
+		.queue_pair_release	= qat_crypto_sym_qp_release,
+		.queue_pair_start	= NULL,
+		.queue_pair_stop	= NULL,
+		.queue_pair_count	= NULL,
+
+/* Crypto related operations */
+		.session_mp_create	= qat_pmd_session_mempool_create,
+		.session_create		= qat_crypto_sym_create_session,
+		.session_destroy	= qat_crypto_sym_destroy_session
+};
+
+/*
+ * The set of PCI devices this driver supports
+ */
+
+static struct rte_pci_id pci_id_qat_map[] = {
+		{
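+			/* QAT DH895xCC virtual function */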
+			.vendor_id = 0x8086,
+			.device_id = 0x0443,
+			.subsystem_vendor_id = PCI_ANY_ID,
+			.subsystem_device_id = PCI_ANY_ID
+		},
+		{.device_id = 0},
+};
+
+static int
+crypto_qat_dev_init(struct rte_cryptodev_driver *crypto_drv __rte_unused,
+			struct rte_cryptodev *cryptodev)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "Found crypto device at %02x:%02x.%x",
+		cryptodev->pci_dev->addr.bus,
+		cryptodev->pci_dev->addr.devid,
+		cryptodev->pci_dev->addr.function);
+
+	cryptodev->dev_ops = &crypto_qat_ops;
+
+	cryptodev->enqueue_burst = qat_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = qat_crypto_pkt_rx_burst;
+
+	/*
+	 * For secondary processes, don't initialise any further as the
+	 * primary process has already done this work.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(DEBUG, "Device already initialised by primary process");
+		return 0;
+	}
+
+	return 0;
+}
+
+static struct rte_cryptodev_driver rte_qat_pmd = {
+	{
+		.name = "rte_qat_pmd",
+		.id_table = pci_id_qat_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	},
+	.cryptodev_init = crypto_qat_dev_init,
+	.dev_private_size = sizeof(struct qat_pmd_private),
+};
+
+static int
+rte_qat_pmd_init(const char *name __rte_unused, const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	return rte_cryptodev_pmd_driver_register(&rte_qat_pmd, PMD_PDEV);
+}
+
+static struct rte_driver pmd_qat_drv = {
+	.type = PMD_PDEV,
+	.init = rte_qat_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(pmd_qat_drv);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c7ee033..5502cc4 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -145,6 +145,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
 
+# QAT PMD has a dependency on libcrypto (from OpenSSL) for calculating HMAC precomputes
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
1.9.3


Thread overview: 8+ messages
2015-08-20 14:07 [dpdk-dev] [PATCH 0/4] A proposed DPDK Crypto API and device framework Declan Doherty
2015-08-20 14:07 ` [dpdk-dev] [PATCH 1/4] cryptodev: Initial DPDK Crypto APIs and device framework release Declan Doherty
2015-08-20 19:07   ` Neil Horman
2015-08-21 14:02     ` Declan Doherty
2015-09-15 16:36     ` [dpdk-dev] [PATCH] cryptodev: changes to crypto operation APIs to support non prescriptive chaining of crypto transforms in a crypto operation. app/test: updates to cryptodev unit tests to support new xform chaining APIs. aesni_mb_pmd: updates to device to support API changes Declan Doherty
2015-08-20 14:07 ` Declan Doherty [this message]
2015-08-20 14:07 ` [dpdk-dev] [PATCH 3/4] aesni_mb_pmd: Initial implementation of multi buffer based crypto device Declan Doherty
2015-08-20 14:07 ` [dpdk-dev] [PATCH 4/4] app/test: add cryptodev unit and performance tests Declan Doherty
