DPDK patches and discussions
* [dpdk-dev] [PATCH] pmd/snow3g: add new SNOW 3G SW PMD
@ 2016-01-29 14:15 Pablo de Lara
  2016-03-07 14:07 ` [dpdk-dev] [PATCH v2] " Pablo de Lara
  0 siblings, 1 reply; 9+ messages in thread
From: Pablo de Lara @ 2016-01-29 14:15 UTC (permalink / raw)
  To: dev

Add a new software PMD based on the libsso software library,
which implements the SNOW 3G UEA2 (ciphering) and UIA2
(authentication) wireless algorithms in software.

This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_CIPHER_SNOW3G_UEA2
- RTE_CRYPTO_AUTH_SNOW3G_UIA2

The SNOW 3G hash and cipher algorithms exposed by this crypto PMD
are implemented by Intel's libsso software library. For library
download and build instructions, see the included documentation
(doc/guides/cryptodevs/snow3g.rst).
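
As an illustration only (not part of this patch), a "cipher then hash"
operation would be requested by chaining two transforms. The structure
and field names below are the ones read by the PMD code in this patch;
the operation values (RTE_CRYPTO_CIPHER_OP_ENCRYPT /
RTE_CRYPTO_AUTH_OP_GENERATE), the key "length" field and the 16-byte
key sizes are assumptions rather than something this patch defines:

    /* Hypothetical application-side sketch, not driver code. */
    uint8_t cipher_key[16];   /* UEA2 cipher key (placeholder) */
    uint8_t auth_key[16];     /* UIA2 integrity key (placeholder) */

    struct rte_crypto_xform auth_xform = {
        .next = NULL,
        .type = RTE_CRYPTO_XFORM_AUTH,
        .auth = {
            .op = RTE_CRYPTO_AUTH_OP_GENERATE,   /* assumed enum value */
            .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
            .key = { .data = auth_key, .length = sizeof(auth_key) },
        },
    };

    struct rte_crypto_xform cipher_xform = {
        .next = &auth_xform,                     /* cipher first, then hash */
        .type = RTE_CRYPTO_XFORM_CIPHER,
        .cipher = {
            .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,  /* assumed enum value */
            .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
            .key = { .data = cipher_key, .length = sizeof(cipher_key) },
        },
    };

snow3g_set_session_parameters() below parses exactly this kind of chain
and selects the SNOW3G_OP_CIPHER_AUTH path.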

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 MAINTAINERS                                      |   4 +
 config/common_linuxapp                           |  10 +-
 doc/guides/cryptodevs/index.rst                  |   1 +
 doc/guides/cryptodevs/snow3g.rst                 |  69 +++
 doc/guides/rel_notes/release_2_3.rst             |   4 +
 drivers/crypto/Makefile                          |   3 +-
 drivers/crypto/snow3g/Makefile                   |  64 +++
 drivers/crypto/snow3g/rte_pmd_snow3g_version.map |   3 +
 drivers/crypto/snow3g/rte_snow3g_pmd.c           | 508 +++++++++++++++++++++++
 drivers/crypto/snow3g/rte_snow3g_pmd_ops.c       | 291 +++++++++++++
 drivers/crypto/snow3g/rte_snow3g_pmd_private.h   | 107 +++++
 lib/librte_cryptodev/rte_crypto.h                |   4 +
 lib/librte_cryptodev/rte_cryptodev.h             |   5 +-
 mk/rte.app.mk                                    |   6 +-
 14 files changed, 1075 insertions(+), 4 deletions(-)
 create mode 100644 doc/guides/cryptodevs/snow3g.rst
 create mode 100644 drivers/crypto/snow3g/Makefile
 create mode 100644 drivers/crypto/snow3g/rte_pmd_snow3g_version.map
 create mode 100644 drivers/crypto/snow3g/rte_snow3g_pmd.c
 create mode 100644 drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
 create mode 100644 drivers/crypto/snow3g/rte_snow3g_pmd_private.h

diff --git a/MAINTAINERS b/MAINTAINERS
index b90aeea..69d27a9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -355,6 +355,10 @@ F: drivers/crypto/aesni_mb/
 Intel QuickAssist
 F: drivers/crypto/qat/
 
+SNOW 3G PMD
+M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
+F: drivers/crypto/snow3g
+
 
 Packet processing
 -----------------
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 74bc515..49aa274 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -356,6 +356,14 @@ CONFIG_RTE_AESNI_MB_PMD_MAX_NB_QUEUE_PAIRS=8
 CONFIG_RTE_AESNI_MB_PMD_MAX_NB_SESSIONS=2048
 
 #
+# Compile PMD for SNOW 3G device
+#
+CONFIG_RTE_LIBRTE_PMD_SNOW3G=n
+CONFIG_RTE_LIBRTE_PMD_SNOW3G_DEBUG=n
+CONFIG_RTE_SNOW3G_PMD_MAX_NB_QUEUE_PAIRS=8
+CONFIG_RTE_SNOW3G_PMD_MAX_NB_SESSIONS=2048
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 16a5f4a..071e7d2 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -36,4 +36,5 @@ Crypto Device Drivers
     :numbered:
 
     aesni_mb
+    snow3g
     qat
diff --git a/doc/guides/cryptodevs/snow3g.rst b/doc/guides/cryptodevs/snow3g.rst
new file mode 100644
index 0000000..9e81eeb
--- /dev/null
+++ b/doc/guides/cryptodevs/snow3g.rst
@@ -0,0 +1,69 @@
+..  BSD LICENSE
+    Copyright(c) 2016 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+SNOW 3G Crypto Poll Mode Driver
+===============================
+
+The SNOW 3G PMD (**librte_pmd_snow3g**) provides poll mode crypto driver
+support for the Intel libsso library, which implements the F8 and F9 functions
+for the SNOW 3G UEA2 cipher and UIA2 hash algorithms.
+
+Features
+--------
+
+The SNOW 3G PMD has support for:
+
+Cipher algorithm:
+
+* RTE_CRYPTO_CIPHER_SNOW3G_UEA2
+
+Hash algorithm:
+
+* RTE_CRYPTO_AUTH_SNOW3G_UIA2
+
+Limitations
+-----------
+
+* Chained mbufs are not supported.
+
+Installation
+------------
+
+To build DPDK with the SNOW 3G PMD, the user is required to request
+the export-controlled libsso library (by sending a request to
+`DPDKUser_software_access@intel.com`) and to compile it
+on their system before building DPDK:
+
+.. code-block:: console
+
+	make -f Makefile_snow3g
+
+The environment variable LIBSSO_PATH must be exported with the path
+where you extracted and built the libsso library. Finally, set
+CONFIG_RTE_LIBRTE_PMD_SNOW3G=y in config/common_linuxapp.
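+
+For example, assuming the library was extracted and built under an
+illustrative path ``/path/to/libsso`` (adjust to your own location),
+one possible sequence for the DPDK side is:
+
+.. code-block:: console
+
+    export LIBSSO_PATH=/path/to/libsso
+    # set CONFIG_RTE_LIBRTE_PMD_SNOW3G=y in config/common_linuxapp, then:
+    make install T=x86_64-native-linuxapp-gcc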
diff --git a/doc/guides/rel_notes/release_2_3.rst b/doc/guides/rel_notes/release_2_3.rst
index 99de186..dcd8274 100644
--- a/doc/guides/rel_notes/release_2_3.rst
+++ b/doc/guides/rel_notes/release_2_3.rst
@@ -3,6 +3,10 @@ DPDK Release 2.3
 
 New Features
 ------------
+* **Added SNOW 3G SW PMD**
+
+  A new Crypto PMD has been added, which provides SNOW 3G UEA2 ciphering
+  and SNOW 3G UIA2 hashing.
 
 
 Resolved Issues
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index d07ee96..0636960 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -33,6 +33,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 
 include $(RTE_SDK)/mk/rte.sharelib.mk
 include $(RTE_SDK)/mk/rte.subdir.mk
\ No newline at end of file
diff --git a/drivers/crypto/snow3g/Makefile b/drivers/crypto/snow3g/Makefile
new file mode 100644
index 0000000..ee58270
--- /dev/null
+++ b/drivers/crypto/snow3g/Makefile
@@ -0,0 +1,64 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(LIBSSO_PATH),)
+$(error "Please define LIBSSO_PATH environment variable")
+endif
+
+# library name
+LIB = librte_pmd_snow3g.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_snow3g_version.map
+
+# external library include paths
+CFLAGS += -I$(LIBSSO_PATH)
+CFLAGS += -I$(LIBSSO_PATH)/include
+CFLAGS += -I$(LIBSSO_PATH)/build
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += rte_snow3g_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += rte_snow3g_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/snow3g/rte_pmd_snow3g_version.map b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
new file mode 100644
index 0000000..3871202
--- /dev/null
+++ b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
@@ -0,0 +1,3 @@
+DPDK_2.3 {
+	local: *;
+};
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
new file mode 100644
index 0000000..0579614
--- /dev/null
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -0,0 +1,508 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_mbuf_offload.h>
+
+#include "rte_snow3g_pmd_private.h"
+
+#define SNOW3G_MAX_BURST 16
+
+/**
+ * Global static parameter used to create a unique name for each SNOW 3G
+ * crypto device.
+ */
+static unsigned unique_name_id;
+
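+/** Create a unique cryptodev name, using the global counter above. */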
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", CRYPTODEV_NAME_SNOW3G_PMD,
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+/** Get xform chain order. */
+static enum snow3g_operation
+snow3g_get_mode(const struct rte_crypto_xform *xform)
+{
+	if (xform == NULL)
+		return SNOW3G_OP_NOT_SUPPORTED;
+
+	if (xform->next)
+		if (xform->next->next != NULL)
+			return SNOW3G_OP_NOT_SUPPORTED;
+
+	if (xform->type == RTE_CRYPTO_XFORM_AUTH) {
+		if (xform->next == NULL)
+			return SNOW3G_OP_ONLY_AUTH;
+		else if (xform->next->type == RTE_CRYPTO_XFORM_CIPHER)
+			return SNOW3G_OP_AUTH_CIPHER;
+		else
+			return SNOW3G_OP_NOT_SUPPORTED;
+	}
+
+	if (xform->type == RTE_CRYPTO_XFORM_CIPHER) {
+		if (xform->next == NULL)
+			return SNOW3G_OP_ONLY_CIPHER;
+		else if (xform->next->type == RTE_CRYPTO_XFORM_AUTH)
+			return SNOW3G_OP_CIPHER_AUTH;
+		else
+			return SNOW3G_OP_NOT_SUPPORTED;
+	}
+
+	return SNOW3G_OP_NOT_SUPPORTED;
+}
+
+
+/** Parse crypto xform chain and set private session parameters. */
+int
+snow3g_set_session_parameters(struct snow3g_session *sess,
+		const struct rte_crypto_xform *xform)
+{
+	const struct rte_crypto_xform *auth_xform = NULL;
+	const struct rte_crypto_xform *cipher_xform = NULL;
+	int mode;
+
+	/* Select Crypto operation - hash then cipher / cipher then hash */
+	mode = snow3g_get_mode(xform);
+
+	switch (mode) {
+	case SNOW3G_OP_CIPHER_AUTH:
+		auth_xform = xform->next;
+
+		/* Fall-through */
+	case SNOW3G_OP_ONLY_CIPHER:
+		cipher_xform = xform;
+		break;
+	case SNOW3G_OP_AUTH_CIPHER:
+		cipher_xform = xform->next;
+		/* Fall-through */
+	case SNOW3G_OP_ONLY_AUTH:
+		auth_xform = xform;
+	}
+
+	if (mode == SNOW3G_OP_NOT_SUPPORTED) {
+		SNOW3G_LOG_ERR("Unsupported operation chain order parameter");
+		return -EINVAL;
+	}
+
+	if (cipher_xform) {
+		/* Only SNOW 3G UEA2 supported */
+		if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_SNOW3G_UEA2)
+			return -EINVAL;
+		/* Initialize key */
+		sso_snow3g_init_key_sched(cipher_xform->cipher.key.data,
+				&sess->pKeySched_cipher);
+	}
+
+	if (auth_xform) {
+		/* Only SNOW 3G UIA2 supported */
+		if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_SNOW3G_UIA2)
+			return -EINVAL;
+		sess->auth_op = auth_xform->auth.op;
+		/* Initialize key */
+		sso_snow3g_init_key_sched(auth_xform->auth.key.data,
+				&sess->pKeySched_hash);
+	}
+
+
+	sess->op = mode;
+
+	return 0;
+}
+
+/** Get SNOW 3G session. */
+static struct snow3g_session *
+snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *crypto_op)
+{
+	struct snow3g_session *sess;
+
+	if (crypto_op->type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(crypto_op->session->type !=
+				RTE_CRYPTODEV_SNOW3G_PMD))
+			return NULL;
+
+		sess = (struct snow3g_session *)crypto_op->session->_private;
+	} else  {
+		struct rte_cryptodev_session *c_sess = NULL;
+
+		if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
+			return NULL;
+
+		sess = (struct snow3g_session *)c_sess->_private;
+
+		if (unlikely(snow3g_set_session_parameters(sess,
+				crypto_op->xform) != 0))
+			return NULL;
+	}
+
+	return sess;
+}
+
+/** Encrypt/decrypt mbufs with same cipher key. */
+static void
+process_snow3g_cipher_op(struct rte_mbuf **mbufs,
+		struct rte_crypto_op **c_op, struct snow3g_session *session,
+		uint8_t num_mbufs)
+{
+	unsigned i;
+	uint8_t *src[SNOW3G_MAX_BURST], *dst[SNOW3G_MAX_BURST];
+	uint8_t *IV[SNOW3G_MAX_BURST];
+	uint32_t num_bytes[SNOW3G_MAX_BURST];
+
+	for (i = 0; i < num_mbufs; i++) {
+		src[i] = rte_pktmbuf_mtod(mbufs[i], uint8_t *) +
+				c_op[i]->data.to_cipher.offset;
+		dst[i] = c_op[i]->dst.m ?
+				rte_pktmbuf_mtod(c_op[i]->dst.m, uint8_t *) +
+				c_op[i]->dst.offset :
+				rte_pktmbuf_mtod(mbufs[i], uint8_t *) +
+				c_op[i]->data.to_cipher.offset;
+		IV[i] = c_op[i]->iv.data;
+		num_bytes[i] = c_op[i]->data.to_cipher.length;
+	}
+
+	sso_snow3g_f8_n_buffer(&session->pKeySched_cipher, IV, src, dst,
+			num_bytes, num_mbufs);
+}
+
+/** Generate/verify hash from mbufs with same hash key. */
+static void
+process_snow3g_hash_op(struct rte_mbuf **mbufs,
+		struct rte_crypto_op **c_op, struct snow3g_session *session,
+		uint8_t num_mbufs)
+{
+	unsigned i;
+	uint8_t *src, *dst;
+	uint32_t length_in_bits;
+
+	for (i = 0; i < num_mbufs; i++) {
+		length_in_bits = c_op[i]->data.to_hash.length * 8;
+
+		src = rte_pktmbuf_mtod(mbufs[i], uint8_t *) +
+				c_op[i]->data.to_hash.offset;
+
+		if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+			dst = (uint8_t *)rte_pktmbuf_append(mbufs[i],
+					c_op[i]->digest.length);
+
+			sso_snow3g_f9_1_buffer(&session->pKeySched_hash,
+					c_op[i]->additional_auth.data, src,
+					length_in_bits, dst);
+			/* Verify digest. */
+			if (memcmp(dst, c_op[i]->digest.data,
+					c_op[i]->digest.length) != 0)
+				c_op[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+			/* Trim area used for digest from mbuf. */
+			rte_pktmbuf_trim(mbufs[i], c_op[i]->digest.length);
+		} else  {
+			dst = c_op[i]->digest.data;
+
+			sso_snow3g_f9_1_buffer(&session->pKeySched_hash,
+					c_op[i]->additional_auth.data, src,
+					length_in_bits, dst);
+		}
+	}
+}
+
+/** Process a batch of mbufs which share the same session. */
+static int
+process_bufs(struct rte_mbuf **mbufs,
+		struct rte_crypto_op **c_op, struct snow3g_session *session,
+		struct snow3g_qp *qp, uint8_t num_mbufs)
+{
+	unsigned i;
+	unsigned processed_mbufs = num_mbufs;
+
+	switch (session->op) {
+	case SNOW3G_OP_ONLY_CIPHER:
+		process_snow3g_cipher_op(mbufs, c_op,
+				session, num_mbufs);
+		break;
+	case SNOW3G_OP_ONLY_AUTH:
+		process_snow3g_hash_op(mbufs, c_op, session,
+				num_mbufs);
+		break;
+	case SNOW3G_OP_CIPHER_AUTH:
+		process_snow3g_cipher_op(mbufs, c_op, session,
+				num_mbufs);
+		process_snow3g_hash_op(mbufs, c_op, session,
+				num_mbufs);
+		break;
+	case SNOW3G_OP_AUTH_CIPHER:
+		process_snow3g_hash_op(mbufs, c_op, session,
+				num_mbufs);
+		process_snow3g_cipher_op(mbufs, c_op, session,
+				num_mbufs);
+		break;
+	default:
+		/* Operation not supported. */
+		processed_mbufs = 0;
+	}
+
+	for (i = 0; i < num_mbufs; i++) {
+		/* Free session if a session-less crypto op. */
+		if (c_op[i]->type == RTE_CRYPTO_OP_SESSIONLESS) {
+			rte_mempool_put(qp->sess_mp, c_op[i]->session);
+			c_op[i]->session = NULL;
+		}
+	}
+
+	return processed_mbufs;
+}
+
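+/** Enqueue burst: process crypto ops, batching mbufs that share a session. */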
+static uint16_t
+snow3g_pmd_enqueue_burst(void *queue_pair, struct rte_mbuf **bufs,
+		uint16_t nb_bufs)
+{
+	struct rte_crypto_op *c_op[SNOW3G_MAX_BURST];
+	struct rte_mbuf_offload *curr_ol;
+	struct rte_crypto_op *curr_c_op;
+
+	struct snow3g_session *prev_sess = NULL, *curr_sess = NULL;
+	struct snow3g_qp *qp = queue_pair;
+	unsigned i, n;
+	unsigned buf_burst_index = 0;
+	unsigned burst_size = 0;
+	uint16_t enqueued_pkts = 0;
+
+	for (i = 0; i < nb_bufs; i++) {
+		curr_ol = rte_pktmbuf_offload_get(bufs[i],
+				RTE_PKTMBUF_OL_CRYPTO);
+		if (unlikely(curr_ol == NULL)) {
+			qp->qp_stats.enqueue_err_count++;
+			return enqueued_pkts;
+		}
+
+		curr_c_op = &curr_ol->op.crypto;
+
+		/* Set status as successful by default. */
+		curr_c_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		/* Sanity checks. */
+		if (curr_c_op->iv.length != 16 && curr_c_op->iv.length != 0) {
+			curr_c_op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			SNOW3G_LOG_ERR("iv");
+			return enqueued_pkts;
+		}
+
+		if (curr_c_op->additional_auth.length != 16 &&
+				curr_c_op->additional_auth.length != 0) {
+			curr_c_op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			SNOW3G_LOG_ERR("additional authentication data");
+			return enqueued_pkts;
+		}
+
+		if (curr_c_op->digest.length != 4 &&
+			curr_c_op->digest.length != 0) {
+			curr_c_op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+			SNOW3G_LOG_ERR("digest");
+			return enqueued_pkts;
+		}
+
+		curr_sess = snow3g_get_session(qp, curr_c_op);
+		if (unlikely(curr_sess == NULL ||
+				curr_sess->op == SNOW3G_OP_NOT_SUPPORTED)) {
+			curr_c_op->status =
+					RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+			qp->qp_stats.enqueue_err_count++;
+			return enqueued_pkts;
+		}
+
+		/* Batch mbufs that share the same session. */
+		if (prev_sess == NULL) {
+			prev_sess = curr_sess;
+			c_op[burst_size++] = curr_c_op;
+		} else if (curr_sess == prev_sess) {
+			c_op[burst_size++] = curr_c_op;
+			/*
+			 * When there are enough mbufs to process in a batch,
+			 * process them, and start a new batch.
+			 */
+			if (burst_size == SNOW3G_MAX_BURST) {
+				if (process_bufs(&bufs[buf_burst_index], c_op,
+						prev_sess, qp, burst_size) == 0)
+					return enqueued_pkts;
+				n = rte_ring_enqueue_burst(qp->processed_pkts,
+						(void **)&bufs[buf_burst_index],
+						burst_size);
+				qp->qp_stats.enqueued_count += n;
+				enqueued_pkts += n;
+				if (n < burst_size)
+					return enqueued_pkts;
+				burst_size = 0;
+				buf_burst_index = i + 1;
+
+				prev_sess = NULL;
+			}
+		} else {
+			/*
+			 * Different session, process the mbufs
+			 * of the previous session.
+			 */
+			if (process_bufs(&bufs[buf_burst_index], c_op,
+					prev_sess, qp, burst_size) == 0)
+				return enqueued_pkts;
+			n = rte_ring_enqueue_burst(qp->processed_pkts,
+					(void **)&bufs[buf_burst_index],
+					burst_size);
+			qp->qp_stats.enqueued_count += n;
+			enqueued_pkts += n;
+			if (n < burst_size)
+				return enqueued_pkts;
+			burst_size = 0;
+
+			prev_sess = curr_sess;
+			buf_burst_index = i;
+			c_op[burst_size++] = curr_c_op;
+		}
+	}
+
+	if (burst_size != 0) {
+		/* Process the mbufs of the last session. */
+		if (process_bufs(&bufs[buf_burst_index], c_op, prev_sess, qp,
+				burst_size) == 0)
+			return enqueued_pkts;
+		n = rte_ring_enqueue_burst(qp->processed_pkts,
+				(void **)&bufs[buf_burst_index], burst_size);
+
+		qp->qp_stats.enqueued_count += n;
+		enqueued_pkts += n;
+	}
+
+	return enqueued_pkts;
+}
+
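+/** Dequeue burst of processed mbufs from the queue pair's ring. */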
+static uint16_t
+snow3g_pmd_dequeue_burst(void *queue_pair,
+		struct rte_mbuf **bufs,	uint16_t nb_bufs)
+{
+	struct snow3g_qp *qp = queue_pair;
+
+	unsigned nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)bufs, nb_bufs);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+static int cryptodev_snow3g_uninit(const char *name);
+
+static int
+cryptodev_snow3g_create(const char *name, unsigned socket_id)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct snow3g_private *internals;
+
+	/* Create a unique device name. */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		SNOW3G_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct snow3g_private), socket_id);
+	if (dev == NULL) {
+		SNOW3G_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_SNOW3G_PMD;
+	dev->dev_ops = rte_snow3g_pmd_ops;
+
+	/* Register RX/TX burst functions for data path. */
+	dev->dequeue_burst = snow3g_pmd_dequeue_burst;
+	dev->enqueue_burst = snow3g_pmd_enqueue_burst;
+
+	internals = dev->data->dev_private;
+
+	internals->max_nb_queue_pairs = RTE_SNOW3G_PMD_MAX_NB_QUEUE_PAIRS;
+	internals->max_nb_sessions = RTE_SNOW3G_PMD_MAX_NB_SESSIONS;
+
+	return dev->data->dev_id;
+init_error:
+	SNOW3G_LOG_ERR("driver %s: cryptodev_snow3g_create failed", name);
+
+	cryptodev_snow3g_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+
+static int
+cryptodev_snow3g_init(const char *name,
+		const char *params __rte_unused)
+{
+	RTE_LOG(INFO, PMD, "Initialising %s\n", name);
+
+	return cryptodev_snow3g_create(name, rte_socket_id());
+}
+
+static int
+cryptodev_snow3g_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Closing SNOW3G crypto device %s"
+			" on numa socket %u\n",
+			name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_snow3g_pmd_drv = {
+	.name = CRYPTODEV_NAME_SNOW3G_PMD,
+	.type = PMD_VDEV,
+	.init = cryptodev_snow3g_init,
+	.uninit = cryptodev_snow3g_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_snow3g_pmd_drv);
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
new file mode 100644
index 0000000..b902018
--- /dev/null
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -0,0 +1,291 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_snow3g_pmd_private.h"
+
+/** Configure device */
+static int
+snow3g_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+snow3g_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+snow3g_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+snow3g_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+snow3g_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct snow3g_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+snow3g_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct snow3g_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+
+/** Get device info */
+static void
+snow3g_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct snow3g_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
+		dev_info->max_nb_sessions = internals->max_nb_sessions;
+	}
+}
+
+/** Release queue pair */
+static int
+snow3g_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair, based on the dev_id and qp_id */
+static int
+snow3g_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct snow3g_qp *qp)
+{
+	unsigned n = snprintf(qp->name, sizeof(qp->name),
+			"snow3g_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place processed packets on */
+static struct rte_ring *
+snow3g_pmd_qp_create_processed_pkts_ring(struct snow3g_qp *qp,
+		unsigned ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			SNOW3G_LOG_INFO("Reusing existing ring %s"
+					" for processed packets",
+					 qp->name);
+			return r;
+		}
+
+		SNOW3G_LOG_ERR("Unable to reuse existing ring %s"
+				" for processed packets",
+				 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+/** Setup a queue pair */
+static int
+snow3g_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct snow3g_qp *qp = NULL;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		snow3g_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("SNOW3G PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (snow3g_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->processed_pkts = snow3g_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+snow3g_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+snow3g_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+snow3g_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the SNOW 3G session structure */
+static unsigned
+snow3g_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct snow3g_session);
+}
+
+/** Configure a SNOW 3G session from a crypto xform chain */
+static void *
+snow3g_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
+		struct rte_crypto_xform *xform,	void *sess)
+{
+	if (unlikely(sess == NULL)) {
+		SNOW3G_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (snow3g_set_session_parameters(sess, xform) != 0) {
+		SNOW3G_LOG_ERR("failed to configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+snow3g_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/*
+	 * Currently just resetting the whole data structure; it needs to be
+	 * investigated whether a more selective reset of the key material
+	 * would be more performant.
+	 */
+	if (sess)
+		memset(sess, 0, sizeof(struct snow3g_session));
+}
+
+struct rte_cryptodev_ops snow3g_pmd_ops = {
+		.dev_configure      = snow3g_pmd_config,
+		.dev_start          = snow3g_pmd_start,
+		.dev_stop           = snow3g_pmd_stop,
+		.dev_close          = snow3g_pmd_close,
+
+		.stats_get          = snow3g_pmd_stats_get,
+		.stats_reset        = snow3g_pmd_stats_reset,
+
+		.dev_infos_get      = snow3g_pmd_info_get,
+
+		.queue_pair_setup   = snow3g_pmd_qp_setup,
+		.queue_pair_release = snow3g_pmd_qp_release,
+		.queue_pair_start   = snow3g_pmd_qp_start,
+		.queue_pair_stop    = snow3g_pmd_qp_stop,
+		.queue_pair_count   = snow3g_pmd_qp_count,
+
+		.session_get_size   = snow3g_pmd_session_get_size,
+		.session_configure  = snow3g_pmd_session_configure,
+		.session_clear      = snow3g_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_snow3g_pmd_ops = &snow3g_pmd_ops;
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_private.h b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
new file mode 100644
index 0000000..d09ed9d
--- /dev/null
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
@@ -0,0 +1,107 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SNOW3G_PMD_PRIVATE_H_
+#define _RTE_SNOW3G_PMD_PRIVATE_H_
+
+#include <sso_snow3g.h>
+
+#define SNOW3G_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			CRYPTODEV_NAME_SNOW3G_PMD, \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_PMD_SNOW3G_DEBUG
+#define SNOW3G_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_SNOW3G_PMD, \
+			__func__, __LINE__, ## args)
+
+#define SNOW3G_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			CRYPTODEV_NAME_SNOW3G_PMD, \
+			__func__, __LINE__, ## args)
+#else
+#define SNOW3G_LOG_INFO(fmt, args...)
+#define SNOW3G_LOG_DBG(fmt, args...)
+#endif
+
+/** private data structure for each virtual SNOW 3G device */
+struct snow3g_private {
+	unsigned max_nb_queue_pairs;
+	/**< Max number of queue pairs supported by device */
+	unsigned max_nb_sessions;
+	/**< Max number of sessions supported by device */
+};
+
+/** SNOW 3G buffer queue pair */
+struct snow3g_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	struct rte_ring *processed_pkts;
+	/**< Ring for placing processed packets */
+	struct rte_mempool *sess_mp;
+	/**< Session Mempool */
+	struct rte_cryptodev_stats qp_stats;
+	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+enum snow3g_operation {
+	SNOW3G_OP_ONLY_CIPHER,
+	SNOW3G_OP_ONLY_AUTH,
+	SNOW3G_OP_CIPHER_AUTH,
+	SNOW3G_OP_AUTH_CIPHER,
+	SNOW3G_OP_NOT_SUPPORTED
+};
+
+/** SNOW 3G private session structure */
+struct snow3g_session {
+	enum snow3g_operation op;
+	enum rte_crypto_auth_operation auth_op;
+	sso_snow3g_key_schedule_t pKeySched_cipher;
+	sso_snow3g_key_schedule_t pKeySched_hash;
+} __rte_cache_aligned;
+
+
+extern int
+snow3g_set_session_parameters(struct snow3g_session *sess,
+		const struct rte_crypto_xform *xform);
+
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_snow3g_pmd_ops;
+
+
+
+#endif /* _RTE_SNOW3G_PMD_PRIVATE_H_ */
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index 42343a8..1c93e78 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -353,6 +353,10 @@ enum rte_crypto_op_status {
 	/**< Operation is enqueued on device */
 	RTE_CRYPTO_OP_STATUS_AUTH_FAILED,
 	/**< Authentication verification failed */
+	RTE_CRYPTO_OP_STATUS_INVALID_SESSION,
+	/**< Operation failed due to invalid session arguments, or,
+	 * in session-less mode, because a session could not be created
+	 */
 	RTE_CRYPTO_OP_STATUS_INVALID_ARGS,
 	/**< Operation failed due to invalid arguments in request */
 	RTE_CRYPTO_OP_STATUS_ERROR,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 892375d..36e3245 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -1,6 +1,6 @@
 /*-
  *
- *   Copyright(c) 2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -57,6 +57,8 @@ extern "C" {
 /**< Null crypto PMD device name */
 #define CRYPTODEV_NAME_AESNI_MB_PMD	("cryptodev_aesni_mb_pmd")
 /**< AES-NI Multi buffer PMD device name */
+#define CRYPTODEV_NAME_SNOW3G_PMD	("cryptodev_snow3g_pmd")
+/**< SNOW 3G PMD device name */
 #define CRYPTODEV_NAME_QAT_PMD		("cryptodev_qat_pmd")
 /**< Intel QAT PMD device name */
 
@@ -64,6 +66,7 @@ extern "C" {
 enum rte_cryptodev_type {
 	RTE_CRYPTODEV_NULL_PMD = 1,	/**< Null crypto PMD */
 	RTE_CRYPTODEV_AESNI_MB_PMD,	/**< AES-NI multi buffer PMD */
+	RTE_CRYPTODEV_SNOW3G_PMD,	/**< SNOW 3G PMD */
 	RTE_CRYPTODEV_QAT_PMD,		/**< QAT PMD */
 };
 
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8ecab41..da1d01e 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
 #   Copyright(c) 2014-2015 6WIND S.A.
 #   All rights reserved.
 #
@@ -159,6 +159,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
 
+# SNOW3G PMD is dependent on the LIBSSO library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -lrte_pmd_snow3g
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -L$(LIBSSO_PATH)/build -lsso
+
 endif # ! $(CONFIG_RTE_BUILD_SHARED_LIB)
 
 endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-- 
2.5.0

Thread overview: 9+ messages
2016-01-29 14:15 [dpdk-dev] [PATCH] pmd/snow3g: add new SNOW 3G SW PMD Pablo de Lara
2016-03-07 14:07 ` [dpdk-dev] [PATCH v2] " Pablo de Lara
2016-03-07 19:48   ` [dpdk-dev] [PATCH v3] " Pablo de Lara
2016-03-08 14:10     ` Jain, Deepak K
2016-03-10 16:33     ` [dpdk-dev] [PATCH v4] " Pablo de Lara
2016-03-10 16:54       ` Jain, Deepak K
2016-03-10 23:00         ` Thomas Monjalon
2016-03-10 22:56       ` Thomas Monjalon
2016-03-07 19:48   ` [dpdk-dev] [PATCH v2] " De Lara Guarch, Pablo
