DPDK patches and discussions
* [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
@ 2016-12-02 14:15 Fan Zhang
  2016-12-02 14:31 ` Thomas Monjalon
                   ` (2 more replies)
  0 siblings, 3 replies; 42+ messages in thread
From: Fan Zhang @ 2016-12-02 14:15 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, roy.fan.zhang

This patch provides the initial implementation of the scheduler poll mode
driver using the DPDK cryptodev framework.

The scheduler PMD schedules and enqueues crypto ops to the hardware
and/or software crypto devices attached to it (its slaves). Dequeuing
from the slave(s), and the optional reordering of the dequeued crypto
ops, are then carried out by the scheduler.

The scheduler PMD can be used to fill the throughput gap between a
physical core and the existing cryptodevs to increase the overall
performance. For example, if a physical core can process crypto ops
faster than a single cryptodev can consume them, the scheduler PMD can
be used to attach more than one cryptodev.

This initial implementation is limited to supporting the following
scheduling modes:

- CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst the attached
    software slave cryptodevs). To set this mode, one or more software
    cryptodevs must have been attached to the scheduler.

- CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst the attached
    hardware slave cryptodevs (QAT)). To set this mode, one or more QAT
    devices must have been attached to the scheduler.

Build instructions:
To build DPDK with the CRYPTO_SCHEDULER_PMD, set
CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base.

Notice:
The scheduler PMD shares the same EAL command-line options as the other
cryptodevs. In addition, it accepts one extra option, "enable_reorder".
When this option is set to "yes", the dequeued crypto ops are reordered
back into their original enqueue order; setting it to "no" disables the
feature. For example, the following EAL command-line fragment creates a
scheduler PMD with the crypto op reordering feature enabled:

... --vdev "crypto_scheduler_pmd,enable_reorder=yes" ...

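For illustration, the runtime API added by this patch can be driven as
below. This is a minimal sketch rather than part of the patch; the
device IDs are assumptions (a scheduler vdev with ID 0 and two
pre-configured slave cryptodevs with IDs 1 and 2):

    uint8_t scheduler_id = 0;   /* assumed scheduler cryptodev ID */

    /* attach the pre-configured slave cryptodevs */
    if (rte_cryptodev_scheduler_attach_dev(scheduler_id, 1) < 0 ||
            rte_cryptodev_scheduler_attach_dev(scheduler_id, 2) < 0)
        rte_exit(EXIT_FAILURE, "failed to attach slaves\n");

    /* select the software round robin scheduling mode */
    if (rte_cryptodev_scheduler_set_mode(scheduler_id,
            CRYPTO_SCHED_SW_ROUND_ROBIN_MODE) < 0)
        rte_exit(EXIT_FAILURE, "failed to set scheduling mode\n");
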
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 config/common_base                                 |   6 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/scheduler/Makefile                  |  64 +++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 387 +++++++++++++++++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h |  90 ++++
 .../scheduler/rte_pmd_crypto_scheduler_version.map |   8 +
 drivers/crypto/scheduler/scheduler_pmd.c           | 475 +++++++++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_ops.c       | 335 +++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_private.h   | 137 ++++++
 lib/librte_cryptodev/rte_cryptodev.h               |   2 +
 mk/rte.app.mk                                      |   3 +-
 11 files changed, 1507 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/scheduler/Makefile
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
 create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h

diff --git a/config/common_base b/config/common_base
index 4bff83a..79d120d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -400,6 +400,12 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
 
 #
+# Compile PMD for Crypto Scheduler device
+#
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=n
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
+
+#
 # Compile PMD for ZUC device
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 745c614..cdd3c94 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -38,6 +38,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/scheduler/Makefile b/drivers/crypto/scheduler/Makefile
new file mode 100644
index 0000000..d8e1ff5
--- /dev/null
+++ b/drivers/crypto/scheduler/Makefile
@@ -0,0 +1,64 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_crypto_scheduler.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_crypto_scheduler_version.map
+
+#
+# Export include files
+#
+SYMLINK-y-include += rte_cryptodev_scheduler.h
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += rte_cryptodev_scheduler.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_ring
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_reorder
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
new file mode 100644
index 0000000..e04596c
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -0,0 +1,387 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_jhash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_cryptodev_scheduler.h>
+
+#include "scheduler_pmd_private.h"
+
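+
+/* Reserve an unused queue pair on the given slave device. The global
+ * dev_qp_map hash records every (dev_id, qp_id) pair already claimed,
+ * so two scheduler instances never share a slave queue pair.
+ */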
+static int
+request_qp(uint8_t dev_id)
+{
+	struct rte_cryptodev_info dev_info;
+	struct slave_info key = {dev_id, 0};
+	uint16_t i;
+
+	if (!dev_qp_map) {
+		struct rte_hash_parameters hash_param = {0};
+
+		hash_param.name = "scheduler_dev_qp_map";
+		hash_param.entries = 1024;
+		hash_param.key_len = sizeof(key);
+		hash_param.hash_func = rte_jhash;
+		hash_param.hash_func_init_val = 0;
+		hash_param.socket_id = SOCKET_ID_ANY;
+
+		dev_qp_map = rte_hash_create(&hash_param);
+		if (!dev_qp_map) {
+			CS_LOG_ERR("not enough memory to create hash table");
+			return -ENOMEM;
+		}
+	}
+
+	rte_cryptodev_info_get(dev_id, &dev_info);
+
+	for (i = 0; i < dev_info.max_nb_queue_pairs; i++) {
+		key.qp_id = i;
+
+		/* skip queue pairs already claimed */
+		if (rte_hash_lookup(dev_qp_map, &key) >= 0)
+			continue;
+
+		if (rte_hash_add_key(dev_qp_map, &key) < 0) {
+			CS_LOG_ERR("not enough memory to insert hash "
+				"entry");
+			return -ENOMEM;
+		}
+
+		break;
+	}
+
+	if (i == dev_info.max_nb_queue_pairs) {
+		CS_LOG_ERR("all queue pairs of cdev %u has already been "
+			"occupied", dev_id);
+		return -1;
+	}
+
+	return i;
+}
+
+static int
+update_reorder_buff(uint8_t dev_id, struct scheduler_private *internal)
+{
+	char reorder_buff_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	uint32_t reorder_buff_size = (internal->nb_slaves[SCHED_HW_CDEV] +
+			internal->nb_slaves[SCHED_SW_CDEV]) *
+			PER_SLAVE_BUFF_SIZE;
+
+	if (!internal->use_reorder)
+		return 0;
+
+	if (reorder_buff_size == 0) {
+		if (internal->reorder_buff)
+			rte_reorder_free(internal->reorder_buff);
+		internal->reorder_buff = NULL;
+		return 0;
+	}
+
+	if (internal->reorder_buff)
+		rte_reorder_free(internal->reorder_buff);
+
+	if (snprintf(reorder_buff_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+		"%s_rb_%u", RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
+		dev_id) < 0) {
+		CS_LOG_ERR("failed to create unique reorder buffer name");
+		return -EFAULT;
+	}
+
+	internal->reorder_buff = rte_reorder_create(
+		reorder_buff_name, rte_socket_id(),
+		reorder_buff_size);
+
+	if (internal->reorder_buff == NULL) {
+		CS_LOG_ERR("failed to allocate reorder buffer");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/** Update the scheduler PMD's capabilities with the attaching device's
+ *  capabilities.
+ *  After each device is attached, the scheduler's capabilities must be
+ *  the common capability set of all its slaves.
+ */
+static int
+update_sched_capabilities(struct scheduler_private *internal,
+	const struct rte_cryptodev_capabilities *attach_caps)
+{
+	struct rte_cryptodev_capabilities *cap;
+	const struct rte_cryptodev_capabilities *a_cap;
+	uint32_t nb_caps = 0;
+	uint32_t nb_attached_caps = 0, nb_common_caps;
+	uint32_t cap_size = sizeof(struct rte_cryptodev_capabilities);
+	uint32_t i;
+
+	/* find out how many caps the scheduler already has */
+	while (internal->capabilities[nb_attached_caps].op !=
+		RTE_CRYPTO_OP_TYPE_UNDEFINED)
+		nb_attached_caps++;
+
+	/* find out how many capabilities the cdev-to-be-attached has */
+	while (attach_caps[nb_caps].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
+		nb_caps++;
+
+	nb_common_caps = nb_attached_caps;
+
+	/* init, memcpy whole */
+	if (nb_attached_caps == 0) {
+		if (nb_caps >= MAX_CAP_NUM) {
+			CS_LOG_ERR("too many capability items");
+			return -ENOMEM;
+		}
+
+		memset(internal->capabilities, 0, cap_size * MAX_CAP_NUM);
+
+		rte_memcpy(internal->capabilities, attach_caps,
+			cap_size * nb_caps);
+		return 0;
+	}
+
+
+	/* find common capabilities between slave-to-be-attached and self */
+	i = 0;
+
+	while (internal->capabilities[i].op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {
+		cap = &internal->capabilities[i];
+		uint32_t j = 0;
+
+		while (attach_caps[j].op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {
+			a_cap = &attach_caps[j];
+
+			if (a_cap->op != cap->op || a_cap->sym.xform_type !=
+				cap->sym.xform_type) {
+				j++;
+				continue;
+			}
+
+			if (a_cap->sym.xform_type == RTE_CRYPTO_SYM_XFORM_AUTH)
+				if (a_cap->sym.auth.algo !=
+					cap->sym.auth.algo) {
+					j++;
+					continue;
+				}
+
+			if (a_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				if (a_cap->sym.cipher.algo !=
+					cap->sym.cipher.algo) {
+					j++;
+					continue;
+				}
+
+			break;
+		}
+
+		if (j == nb_caps)
+			nb_common_caps--;
+
+		i++;
+	}
+
+	/* no common capabilities, quit */
+	if (nb_common_caps == 0) {
+		CS_LOG_ERR("incompatible capabilities");
+		return -1;
+	}
+
+	/* remove any scheduler capabilities that do not exist in the cdev */
+	i = 0;
+	while (internal->capabilities[i].op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {
+		cap = &internal->capabilities[i];
+		uint32_t j = 0;
+
+		while (attach_caps[j].op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {
+			a_cap = &attach_caps[j];
+
+			if (a_cap->op != cap->op || a_cap->sym.xform_type !=
+				cap->sym.xform_type) {
+				j++;
+				continue;
+			}
+
+			if (a_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_AUTH) {
+				if (a_cap->sym.auth.algo !=
+					cap->sym.auth.algo) {
+					j++;
+					continue;
+				}
+
+				/* update the digest size range of the
+				 * scheduler, as the AESNI-MB PMD only
+				 * uses truncated digest sizes.
+				 */
+				cap->sym.auth.digest_size.min =
+					a_cap->sym.auth.digest_size.min <
+					cap->sym.auth.digest_size.min ?
+					a_cap->sym.auth.digest_size.min :
+					cap->sym.auth.digest_size.min;
+				cap->sym.auth.digest_size.max =
+					a_cap->sym.auth.digest_size.max <
+					cap->sym.auth.digest_size.max ?
+					a_cap->sym.auth.digest_size.max :
+					cap->sym.auth.digest_size.max;
+
+				break;
+			}
+
+			if (a_cap->sym.xform_type ==
+				RTE_CRYPTO_SYM_XFORM_CIPHER)
+				if (a_cap->sym.cipher.algo !=
+					cap->sym.cipher.algo) {
+					j++;
+					continue;
+				}
+
+			break;
+		}
+
+		if (j == nb_caps) {
+			uint32_t k;
+
+			for (k = i + 1; k < nb_attached_caps; k++)
+				rte_memcpy(&internal->capabilities[k - 1],
+					&internal->capabilities[k], cap_size);
+
+			memset(&internal->capabilities[
+				nb_attached_caps - 1], 0, cap_size);
+
+			nb_attached_caps--;
+
+			/* entries were shifted down, re-check index i */
+			continue;
+		}
+
+		i++;
+	}
+
+	return 0;
+}
+
+/** Attach a device to the scheduler. */
+int
+rte_cryptodev_scheduler_attach_dev(uint8_t dev_id, uint8_t slave_dev_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(dev_id);
+	struct rte_cryptodev *slave_dev =
+		rte_cryptodev_pmd_get_dev(slave_dev_id);
+	struct scheduler_private *internal;
+	struct slave_info *slave;
+	struct rte_cryptodev_info dev_info;
+	uint8_t *idx;
+	int status;
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	internal = (struct scheduler_private *)dev->data->dev_private;
+
+	rte_cryptodev_info_get(slave_dev_id, &dev_info);
+
+	if (dev_info.feature_flags & RTE_CRYPTODEV_FF_HW_ACCELERATED)
+		idx = &internal->nb_slaves[SCHED_HW_CDEV];
+	else
+		idx = &internal->nb_slaves[SCHED_SW_CDEV];
+
+	if (*idx >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("too many devices attached");
+		return -ENOMEM;
+	}
+
+	slave = (dev_info.feature_flags & RTE_CRYPTODEV_FF_HW_ACCELERATED) ?
+		&internal->slaves[SCHED_HW_CDEV][*idx] :
+		&internal->slaves[SCHED_SW_CDEV][*idx];
+
+	if (update_sched_capabilities(internal, dev_info.capabilities) < 0) {
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	slave->dev_id = slave_dev_id;
+	status = request_qp(slave_dev_id);
+	if (status < 0)
+		return -EFAULT;
+	slave->qp_id = (uint16_t)status;
+
+	internal->max_nb_sessions = dev_info.sym.max_nb_sessions <
+		internal->max_nb_sessions ?
+		dev_info.sym.max_nb_sessions : internal->max_nb_sessions;
+
+	dev->feature_flags |= slave_dev->feature_flags;
+
+	*idx += 1;
+
+	return update_reorder_buff(dev_id, internal);
+}
+
+
+int
+rte_cryptodev_scheduler_set_mode(uint8_t dev_id,
+	enum crypto_scheduling_mode mode)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(dev_id);
+	struct scheduler_private *internal = dev->data->dev_private;
+
+	if (mode < CRYPTO_SCHED_SW_ROUND_ROBIN_MODE ||
+		mode >= CRYPTO_SCHED_N_MODES)
+		return -1;
+
+	if (mode == CRYPTO_SCHED_SW_ROUND_ROBIN_MODE) {
+		if (internal->nb_slaves[SCHED_SW_CDEV] == 0)
+			return -1;
+		internal->use_dev_type = SCHED_SW_CDEV;
+	}
+
+	if (mode == CRYPTO_SCHED_HW_ROUND_ROBIN_MODE) {
+		if (internal->nb_slaves[SCHED_HW_CDEV] == 0)
+			return -1;
+		internal->use_dev_type = SCHED_HW_CDEV;
+	}
+
+	scheduler_update_rx_tx_ops(dev, mode, internal->use_reorder);
+
+	internal->mode = mode;
+
+	return 0;
+}
+
+void
+rte_cryptodev_scheduler_get_mode(uint8_t dev_id,
+		enum crypto_scheduling_mode *mode)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(dev_id);
+	struct scheduler_private *internal = dev->data->dev_private;
+
+	if (!mode)
+		return;
+
+	*mode = internal->mode;
+}
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
new file mode 100644
index 0000000..5775037
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
@@ -0,0 +1,90 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_H
+#define _RTE_CRYPTO_SCHEDULER_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Crypto scheduler PMD operation modes
+ */
+enum crypto_scheduling_mode {
+	/** Round robin amongst all attached software slave cdevs */
+	CRYPTO_SCHED_SW_ROUND_ROBIN_MODE = 1,
+	/** Round robin amongst all attached hardware slave cdevs */
+	CRYPTO_SCHED_HW_ROUND_ROBIN_MODE,
+	CRYPTO_SCHED_N_MODES /**< Number of modes */
+};
+
+/**
+ * Attach a pre-configured crypto device to the scheduler
+ *
+ * @param	dev_id		The target scheduler device ID
+ * @param	slave_dev_id	The crypto device ID to be attached
+ *
+ * @return
+ *	0 if attaching is successful, a negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_attach_dev(uint8_t dev_id, uint8_t slave_dev_id);
+
+/**
+ * Set the scheduling mode
+ *
+ * @param	dev_id		The target scheduler device ID
+ * @param	mode		The scheduling mode to be set
+ *
+ * @return
+ *	0 if the mode is set successfully, a negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_set_mode(uint8_t dev_id,
+		enum crypto_scheduling_mode mode);
+
+/**
+ * Get the current scheduling mode
+ *
+ * @param	dev_id		The target scheduler device ID
+ * @param	mode		Pointer to which the current scheduling
+ *				mode is written
+ */
+void
+rte_cryptodev_scheduler_get_mode(uint8_t dev_id,
+		enum crypto_scheduling_mode *mode);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_H */
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
new file mode 100644
index 0000000..dab1bfe
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -0,0 +1,8 @@
+DPDK_17.02 {
+	global:
+
+	rte_cryptodev_scheduler_attach_dev;
+	rte_cryptodev_scheduler_set_mode;
+	rte_cryptodev_scheduler_get_mode;
+
+} DPDK_17.02;
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
new file mode 100644
index 0000000..37a8b64
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -0,0 +1,475 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_vdev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+#include "scheduler_pmd_private.h"
+
+#define SCHEDULER_MAX_NB_QP_ARG		"max_nb_queue_pairs"
+#define SCHEDULER_MAX_NB_SESS_ARG	"max_nb_sessions"
+#define SCHEDULER_SOCKET_ID		"socket_id"
+#define SCHEDULER_ENABLE_REORDER_ARG	"enable_reorder"
+
+static const char *scheduler_vdev_valid_params[] = {
+	SCHEDULER_MAX_NB_QP_ARG,
+	SCHEDULER_MAX_NB_SESS_ARG,
+	SCHEDULER_SOCKET_ID,
+	SCHEDULER_ENABLE_REORDER_ARG,
+	NULL
+};
+
+/** Round robin mode burst enqueue */
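+/* The whole burst is forwarded to one slave, selected by round robin.
+ * A scheduler session holds one pre-created session per slave, so each
+ * op's session pointer is first swapped for the session belonging to
+ * the selected slave.
+ */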
+static uint16_t
+scheduler_enqueue_burst_rr(void *queue_pair,
+	struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	uint16_t i, processed_ops;
+	struct scheduler_qp *qp = (struct scheduler_qp *)queue_pair;
+	struct scheduler_private *internal = qp->dev_priv;
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+	uint8_t dev_type_idx = internal->use_dev_type;
+	uint8_t dev_idx = internal->last_enq_idx[dev_type_idx];
+
+	for (i = 0; i < nb_ops && i < 4; i++)
+		rte_prefetch0(ops[i]->sym->session);
+
+	for (i = 0; i < nb_ops - 8; i += 4) {
+		sess0 = (struct scheduler_session *)
+			ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+			ops[i + 1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+			ops[i + 2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+			ops[i + 3]->sym->session->_private;
+
+		ops[i]->sym->session =
+			sess0->slave_sesses[dev_type_idx][dev_idx];
+		ops[i + 1]->sym->session =
+			sess1->slave_sesses[dev_type_idx][dev_idx];
+		ops[i + 2]->sym->session =
+			sess2->slave_sesses[dev_type_idx][dev_idx];
+		ops[i + 3]->sym->session =
+			sess3->slave_sesses[dev_type_idx][dev_idx];
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->session);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+			ops[i]->sym->session->_private;
+		ops[i]->sym->session =
+			sess0->slave_sesses[dev_type_idx][dev_idx];
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(
+		internal->slaves[dev_type_idx][dev_idx].dev_id,
+		internal->slaves[dev_type_idx][dev_idx].qp_id,
+		ops, nb_ops);
+
+	internal->last_enq_idx[dev_type_idx] += 1;
+
+	if (unlikely(internal->last_enq_idx[dev_type_idx] >=
+		internal->nb_slaves[dev_type_idx]))
+		internal->last_enq_idx[dev_type_idx] = 0;
+
+	qp->stats.enqueued_count += processed_ops;
+
+	return processed_ops;
+}
+
+/** Round robin mode burst dequeue without post-reorder */
+static uint16_t
+scheduler_dequeue_burst_rr_no_reorder(void *queue_pair,
+	struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	uint16_t nb_deq_ops;
+	struct scheduler_qp *qp = (struct scheduler_qp *)queue_pair;
+	struct scheduler_private *internal = qp->dev_priv;
+	uint8_t dev_type_idx = internal->use_dev_type;
+	uint8_t dev_idx = internal->last_deq_idx[dev_type_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(
+		internal->slaves[dev_type_idx][dev_idx].dev_id,
+		internal->slaves[dev_type_idx][dev_idx].qp_id,
+		ops, nb_ops);
+
+	internal->last_deq_idx[dev_type_idx] += 1;
+	if (unlikely(internal->last_deq_idx[dev_type_idx] >=
+		internal->nb_slaves[dev_type_idx]))
+		internal->last_deq_idx[dev_type_idx] = 0;
+
+	qp->stats.dequeued_count += nb_deq_ops;
+
+	return nb_deq_ops;
+}
+
+/** Round robin mode burst dequeue with post-reorder */
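+/* Each dequeued op pointer is stashed at the head of its source mbuf's
+ * buffer (buf_addr) and the mbuf is tagged with a scheduler-wide
+ * sequence number, so rte_reorder_insert()/rte_reorder_drain() can
+ * emit the ops in sequence before they are handed back to the caller.
+ */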
+static uint16_t
+scheduler_dequeue_burst_rr_reorder(void *queue_pair,
+	struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	uint16_t i, nb_deq_ops;
+	const uint16_t nb_op_ops = nb_ops;
+	struct scheduler_qp *qp = (struct scheduler_qp *)queue_pair;
+	struct scheduler_private *internal = qp->dev_priv;
+	struct rte_mbuf *reorder_mbufs[nb_op_ops];
+	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	struct rte_crypto_op *op_ops[nb_op_ops];
+	struct rte_reorder_buffer *reorder_buff =
+		(struct rte_reorder_buffer *)internal->reorder_buff;
+	uint8_t dev_type_idx = internal->use_dev_type;
+	uint8_t dev_idx = internal->last_deq_idx[dev_type_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(
+		internal->slaves[dev_type_idx][dev_idx].dev_id,
+		internal->slaves[dev_type_idx][dev_idx].qp_id,
+		op_ops, nb_ops);
+
+	internal->last_deq_idx[dev_type_idx] += 1;
+	if (unlikely(internal->last_deq_idx[dev_type_idx] >=
+		internal->nb_slaves[dev_type_idx]))
+		internal->last_deq_idx[dev_type_idx] = 0;
+
+	for (i = 0; i < nb_deq_ops && i < 4; i++)
+		rte_prefetch0(op_ops[i]->sym->m_src);
+
+	for (i = 0; i < nb_deq_ops - 8; i += 4) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		mbuf1 = op_ops[i + 1]->sym->m_src;
+		mbuf2 = op_ops[i + 2]->sym->m_src;
+		mbuf3 = op_ops[i + 3]->sym->m_src;
+
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+		rte_memcpy(mbuf1->buf_addr, &op_ops[i + 1],
+			sizeof(op_ops[i + 1]));
+		rte_memcpy(mbuf2->buf_addr, &op_ops[i + 2],
+			sizeof(op_ops[i + 2]));
+		rte_memcpy(mbuf3->buf_addr, &op_ops[i + 3],
+			sizeof(op_ops[i + 3]));
+
+		mbuf0->seqn = internal->seqn++;
+		mbuf1->seqn = internal->seqn++;
+		mbuf2->seqn = internal->seqn++;
+		mbuf3->seqn = internal->seqn++;
+
+		rte_reorder_insert(reorder_buff, mbuf0);
+		rte_reorder_insert(reorder_buff, mbuf1);
+		rte_reorder_insert(reorder_buff, mbuf2);
+		rte_reorder_insert(reorder_buff, mbuf3);
+
+		rte_prefetch0(op_ops[i + 4]->sym->m_src);
+		rte_prefetch0(op_ops[i + 5]->sym->m_src);
+		rte_prefetch0(op_ops[i + 6]->sym->m_src);
+		rte_prefetch0(op_ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_deq_ops; i++) {
+		mbuf0 = op_ops[i]->sym->m_src;
+
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+
+		mbuf0->seqn = internal->seqn++;
+
+		rte_reorder_insert(reorder_buff, mbuf0);
+	}
+
+	nb_deq_ops = rte_reorder_drain(reorder_buff, reorder_mbufs,
+		nb_ops);
+
+	for (i = 0; i < nb_deq_ops && i < 4; i++)
+		rte_prefetch0(reorder_mbufs[i]);
+
+	for (i = 0; i < nb_deq_ops - 8; i += 4) {
+		ops[i] = *(struct rte_crypto_op **)
+			reorder_mbufs[i]->buf_addr;
+		ops[i + 1] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 1]->buf_addr;
+		ops[i + 2] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 2]->buf_addr;
+		ops[i + 3] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 3]->buf_addr;
+
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 1]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 2]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 3]->buf_addr = NULL;
+
+		rte_prefetch0(reorder_mbufs[i + 4]);
+		rte_prefetch0(reorder_mbufs[i + 5]);
+		rte_prefetch0(reorder_mbufs[i + 6]);
+		rte_prefetch0(reorder_mbufs[i + 7]);
+	}
+
+	for (; i < nb_deq_ops; i++) {
+		ops[i] = *(struct rte_crypto_op **)
+			reorder_mbufs[i]->buf_addr;
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+	}
+
+	qp->stats.dequeued_count += nb_deq_ops;
+
+	return nb_deq_ops;
+}
+
+int
+scheduler_update_rx_tx_ops(struct rte_cryptodev *dev,
+	enum crypto_scheduling_mode mode, uint32_t use_reorder)
+{
+	switch (mode) {
+	case CRYPTO_SCHED_SW_ROUND_ROBIN_MODE:
+	case CRYPTO_SCHED_HW_ROUND_ROBIN_MODE:
+		dev->enqueue_burst = scheduler_enqueue_burst_rr;
+		if (use_reorder)
+			dev->dequeue_burst =
+				scheduler_dequeue_burst_rr_reorder;
+		else
+			dev->dequeue_burst =
+				scheduler_dequeue_burst_rr_no_reorder;
+		break;
+	default:
+		return -1;
+	}
+
+	return 0;
+}
+
+static uint32_t unique_name_id;
+
+static int
+cryptodev_scheduler_create(const char *name,
+	struct rte_crypto_vdev_init_params *init_params,
+	const uint8_t enable_reorder)
+{
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct scheduler_private *internal;
+	struct rte_cryptodev *dev;
+
+	if (snprintf(crypto_dev_name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%u",
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD), unique_name_id++) < 0) {
+		CS_LOG_ERR("driver %s: failed to create unique cryptodev "
+			"name", name);
+		return -EFAULT;
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct scheduler_private),
+			init_params->socket_id);
+	if (dev == NULL) {
+		CS_LOG_ERR("driver %s: failed to create cryptodev vdev",
+			name);
+		return -EFAULT;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+	dev->dev_ops = rte_crypto_scheduler_pmd_ops;
+
+	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO;
+
+	internal = dev->data->dev_private;
+	internal->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
+	internal->max_nb_sessions = UINT32_MAX;
+	internal->use_reorder = enable_reorder;
+
+	/* register rx/tx burst functions for data path
+	 * by default the software round robin mode is adopted
+	 */
+	return scheduler_update_rx_tx_ops(dev, CRYPTO_SCHED_SW_ROUND_ROBIN_MODE,
+		internal->use_reorder);
+}
+
+static int
+cryptodev_scheduler_remove(const char *name)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_private *internal;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	dev = rte_cryptodev_pmd_get_named_dev(name);
+	if (dev == NULL)
+		return -EINVAL;
+
+	internal = dev->data->dev_private;
+
+	if (internal->reorder_buff)
+		rte_reorder_free(internal->reorder_buff);
+
+	RTE_LOG(INFO, PMD, "Closing Crypto Scheduler device %s on numa "
+		"socket %u\n", name, rte_socket_id());
+
+	return 0;
+}
+
+/** Parse integer from integer argument */
+static int
+parse_integer_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	int *i = (int *) extra_args;
+
+	*i = atoi(value);
+	if (*i < 0) {
+		CDEV_LOG_ERR("Argument has to be positive.");
+		return -1;
+	}
+
+	return 0;
+}
+
+/* Parse reorder enable/disable argument */
+static int
+scheduler_parse_enable_reorder_kvarg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	if (value == NULL || extra_args == NULL)
+		return -1;
+
+	if (strcmp(value, "yes") == 0)
+		*(uint8_t *)extra_args = 1;
+	else if (strcmp(value, "no") == 0)
+		*(uint8_t *)extra_args = 0;
+	else
+		return -1;
+
+	return 0;
+}
+
+static uint8_t
+number_of_sockets(void)
+{
+	int sockets = 0;
+	int i;
+	const struct rte_memseg *ms = rte_eal_get_physmem_layout();
+
+	for (i = 0; ((i < RTE_MAX_MEMSEG) && (ms[i].addr != NULL)); i++) {
+		if (sockets < ms[i].socket_id)
+			sockets = ms[i].socket_id;
+	}
+
+	/* Number of sockets = maximum socket_id + 1 */
+	return ++sockets;
+}
+
+static int
+scheduler_parse_init_params(struct rte_crypto_vdev_init_params *params,
+	uint8_t *enable_reorder, const char *input_args)
+{
+	struct rte_kvargs *kvlist = NULL;
+	int ret = 0;
+
+	if (params == NULL)
+		return -EINVAL;
+
+	if (!input_args)
+		return 0;
+
+	kvlist = rte_kvargs_parse(input_args,
+			scheduler_vdev_valid_params);
+	if (kvlist == NULL)
+		return -1;
+
+	ret = rte_kvargs_process(kvlist, SCHEDULER_MAX_NB_QP_ARG,
+		&parse_integer_arg, &params->max_nb_queue_pairs);
+	if (ret < 0)
+		goto free_kvlist;
+
+	ret = rte_kvargs_process(kvlist, SCHEDULER_MAX_NB_SESS_ARG,
+		&parse_integer_arg, &params->max_nb_sessions);
+	if (ret < 0)
+		goto free_kvlist;
+
+	ret = rte_kvargs_process(kvlist, SCHEDULER_SOCKET_ID,
+		&parse_integer_arg, &params->socket_id);
+	if (ret < 0)
+		goto free_kvlist;
+
+	if (params->socket_id >= number_of_sockets()) {
+		CDEV_LOG_ERR("Invalid socket id specified to create "
+			"the virtual crypto device on");
+		ret = -EINVAL;
+		goto free_kvlist;
+	}
+
+	ret = rte_kvargs_process(kvlist, SCHEDULER_ENABLE_REORDER_ARG,
+		&scheduler_parse_enable_reorder_kvarg, enable_reorder);
+	if (ret < 0)
+		goto free_kvlist;
+
+free_kvlist:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static int
+cryptodev_scheduler_probe(const char *name, const char *input_args)
+{
+	struct rte_crypto_vdev_init_params init_params = {
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+		rte_socket_id()
+	};
+	uint8_t enable_reorder = 0;
+
+	if (scheduler_parse_init_params(&init_params, &enable_reorder,
+			input_args) < 0)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.max_nb_sessions);
+
+	return cryptodev_scheduler_create(name, &init_params, enable_reorder);
+}
+
+static struct rte_vdev_driver cryptodev_scheduler_pmd_drv = {
+	.probe = cryptodev_scheduler_probe,
+	.remove = cryptodev_scheduler_remove
+};
+
+RTE_PMD_REGISTER_VDEV(CRYPTODEV_NAME_SCHEDULER_PMD,
+	cryptodev_scheduler_pmd_drv);
+RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
+	"max_nb_queue_pairs=<int> "
+	"max_nb_sessions=<int> "
+	"socket_id=<int> "
+	"enable_reorder=yes/no");
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
new file mode 100644
index 0000000..a98a127
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -0,0 +1,335 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_config.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_reorder.h>
+
+#include "../scheduler/scheduler_pmd_private.h"
+
+/** Configure device */
+static int
+scheduler_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+scheduler_pmd_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_private *internal = dev->data->dev_private;
+	uint32_t i, j;
+
+	/* TODO: this may cause one dev to be started multiple times. So
+	 * far all devs' start functions simply return 0, so it does not
+	 * matter yet. However, whenever a new dev driver that does not
+	 * allow its start function to be called more than once is added,
+	 * this needs to be updated.
+	 */
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < internal->nb_slaves[i]; j++) {
+			int status = rte_cryptodev_start(
+				internal->slaves[i][j].dev_id);
+			if (status < 0) {
+				CS_LOG_ERR("cannot start device %u",
+					internal->slaves[i][j].dev_id);
+				return status;
+			}
+		}
+
+	return 0;
+}
+
+/** Stop device */
+static void
+scheduler_pmd_stop(struct rte_cryptodev *dev)
+{
+	struct scheduler_private *internal = dev->data->dev_private;
+	uint32_t i, j;
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < internal->nb_slaves[i]; j++)
+			rte_cryptodev_stop(internal->slaves[i][j].dev_id);
+}
+
+/** Close device */
+static int
+scheduler_pmd_close(struct rte_cryptodev *dev)
+{
+	struct scheduler_private *internal = dev->data->dev_private;
+	uint32_t i, j;
+
+	for (i = 0; i < 2; i++)
+		for (j = 0; j < internal->nb_slaves[i]; j++) {
+			int status = rte_cryptodev_close(
+				internal->slaves[i][j].dev_id);
+			if (status < 0) {
+				CS_LOG_ERR("cannot close device %u",
+					internal->slaves[i][j].dev_id);
+				return status;
+			}
+		}
+
+	return 0;
+}
+
+/** Get device statistics */
+static void
+scheduler_pmd_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct scheduler_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->stats.enqueued_count;
+		stats->dequeued_count += qp->stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+scheduler_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct scheduler_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->stats, 0, sizeof(qp->stats));
+	}
+}
+
+/** Get device info */
+static void
+scheduler_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct scheduler_private *internal = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->feature_flags = dev->feature_flags;
+		dev_info->capabilities = internal->capabilities;
+		dev_info->max_nb_queue_pairs = internal->max_nb_queue_pairs;
+		dev_info->sym.max_nb_sessions = internal->max_nb_sessions;
+	}
+}
+
+/** Release queue pair */
+static int
+scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	__rte_unused const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct scheduler_qp *qp = NULL;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		scheduler_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("CRYPTO-SCHEDULER PMD Queue Pair",
+		sizeof(*qp), RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return -ENOMEM;
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+	memset(&qp->stats, 0, sizeof(qp->stats));
+
+	if (snprintf(qp->name, sizeof(qp->name),
+		"scheduler_pmd_%u_qp_%u", dev->data->dev_id,
+		qp->id) >= (int)sizeof(qp->name)) {
+		CS_LOG_ERR("unable to create unique name for queue pair");
+		rte_free(qp);
+		return -EFAULT;
+	}
+
+	qp->dev_priv = dev->data->dev_private;
+
+	return 0;
+}
+
+/** Start queue pair */
+static int
+scheduler_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+scheduler_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+scheduler_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+static unsigned
+scheduler_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct scheduler_session);
+}
+
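+/* Create or clear the slave sessions backing a scheduler session: when
+ * 'create' is non-zero a symmetric session is configured on every
+ * attached slave; when it is zero, existing slave sessions are cleared.
+ */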
+static int
+config_slave_sessions(struct scheduler_private *internal,
+	struct rte_crypto_sym_xform *xform,
+	struct scheduler_session *sess,
+	uint32_t create)
+{
+
+	uint32_t i, j;
+
+	for (i = 0; i < 2; i++) {
+		for (j = 0; j < internal->nb_slaves[i]; j++) {
+			uint8_t dev_id = internal->slaves[i][j].dev_id;
+			struct rte_cryptodev *dev = &rte_cryptodev_globals->
+				devs[dev_id];
+
+			/* clear */
+			if (!create) {
+				if (!sess->slave_sesses[i][j])
+					continue;
+
+				dev->dev_ops->session_clear(dev,
+					(void *)sess->slave_sesses[i][j]);
+				sess->slave_sesses[i][j] = NULL;
+
+				continue;
+			}
+
+			/* configure */
+			if (sess->slave_sesses[i][j] == NULL)
+				sess->slave_sesses[i][j] =
+					rte_cryptodev_sym_session_create(
+						dev_id, xform);
+			else
+				sess->slave_sesses[i][j] =
+					dev->dev_ops->session_configure(dev,
+						xform,
+						sess->slave_sesses[i][j]);
+
+			if (!sess->slave_sesses[i][j]) {
+				CS_LOG_ERR("unabled to config sym session");
+				config_slave_sessions(internal, NULL, sess, 0);
+				return -1;
+			}
+		}
+
+		for (j = internal->nb_slaves[i]; j < MAX_SLAVES_NUM; j++)
+			sess->slave_sesses[i][j] = NULL;
+	}
+
+	return 0;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+scheduler_pmd_session_clear(struct rte_cryptodev *dev,
+	void *sess)
+{
+	struct scheduler_private *internal = dev->data->dev_private;
+
+	config_slave_sessions(internal, NULL, sess, 0);
+
+	memset(sess, 0, sizeof(struct scheduler_session));
+}
+
+static void *
+scheduler_pmd_session_configure(struct rte_cryptodev *dev,
+	struct rte_crypto_sym_xform *xform, void *sess)
+{
+	struct scheduler_private *internal = dev->data->dev_private;
+
+	if (config_slave_sessions(internal, xform, sess, 1) < 0) {
+		CS_LOG_ERR("unabled to config sym session");
+		scheduler_pmd_session_clear(dev, sess);
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_ops scheduler_pmd_ops = {
+		.dev_configure		= scheduler_pmd_config,
+		.dev_start		= scheduler_pmd_start,
+		.dev_stop		= scheduler_pmd_stop,
+		.dev_close		= scheduler_pmd_close,
+
+		.stats_get		= scheduler_pmd_stats_get,
+		.stats_reset		= scheduler_pmd_stats_reset,
+
+		.dev_infos_get		= scheduler_pmd_info_get,
+
+		.queue_pair_setup	= scheduler_pmd_qp_setup,
+		.queue_pair_release	= scheduler_pmd_qp_release,
+		.queue_pair_start	= scheduler_pmd_qp_start,
+		.queue_pair_stop	= scheduler_pmd_qp_stop,
+		.queue_pair_count	= scheduler_pmd_qp_count,
+
+		.session_get_size	= scheduler_pmd_session_get_size,
+		.session_configure	= scheduler_pmd_session_configure,
+		.session_clear		= scheduler_pmd_session_clear,
+};
+
+struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
new file mode 100644
index 0000000..db605b8
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -0,0 +1,137 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _SCHEDULER_PMD_PRIVATE_H
+#define _SCHEDULER_PMD_PRIVATE_H
+
+#include <rte_hash.h>
+#include <rte_cryptodev_scheduler.h>
+
+/** Maximum number of slave devices attached per scheduler */
+#ifndef MAX_SLAVES_NUM
+#define MAX_SLAVES_NUM				(8)
+#endif
+
+/** Maximum number of capability entries per scheduler */
+#ifndef MAX_CAP_NUM
+#define MAX_CAP_NUM				(32)
+#endif
+
+/** Maximum crypto op burst size */
+#ifndef MAX_OP_BURST_NUM
+#define	MAX_OP_BURST_NUM			(32)
+#endif
+
+#define PER_SLAVE_BUFF_SIZE			(256)
+
+#define CS_LOG_ERR(fmt, args...)					\
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",		\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTO_SCHEDULER_DEBUG
+#define CS_LOG_INFO(fmt, args...)					\
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#define CS_LOG_DBG(fmt, args...)					\
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+#else
+#define CS_LOG_INFO(fmt, args...)
+#define CS_LOG_DBG(fmt, args...)
+#endif
+
+/* global hash table storing occupied cdev/qp info, defined in
+ * rte_cryptodev_scheduler.c
+ */
+extern struct rte_hash *dev_qp_map;
+
+struct slave_info {
+	uint8_t dev_id;
+	uint16_t qp_id;
+};
+
+#define SCHED_SW_CDEV	0
+#define SCHED_HW_CDEV	1
+
+/* function pointer for different modes' enqueue/dequeue ops */
+typedef uint16_t (*sched_enq_deq_t)(void *queue_pair,
+	struct rte_crypto_op **ops, uint16_t nb_ops);
+
+struct scheduler_private {
+	struct slave_info slaves[2][MAX_SLAVES_NUM];
+	uint8_t nb_slaves[2];
+	uint8_t last_enq_idx[2];
+	uint8_t last_deq_idx[2];
+
+	void *reorder_buff;
+
+	sched_enq_deq_t enqueue;
+	sched_enq_deq_t dequeue;
+
+	enum crypto_scheduling_mode mode;
+
+	uint32_t seqn;
+	uint8_t use_dev_type;
+
+	uint8_t use_reorder;
+
+	struct rte_cryptodev_capabilities
+		capabilities[MAX_CAP_NUM];
+	uint32_t max_nb_queue_pairs;
+	uint32_t max_nb_sessions;
+} __rte_cache_aligned;
+
+struct scheduler_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	struct rte_cryptodev_stats stats;
+	/**< Queue pair statistics */
+	struct scheduler_private *dev_priv;
+} __rte_cache_aligned;
+
+struct scheduler_session {
+	struct rte_cryptodev_sym_session *slave_sesses[2][MAX_SLAVES_NUM];
+};
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
+
+int
+scheduler_update_rx_tx_ops(struct rte_cryptodev *dev,
+	enum crypto_scheduling_mode mode, uint32_t use_reorder);
+
+#endif /* _SCHEDULER_PMD_PRIVATE_H */
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 8f63e8f..3aa70af 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -66,6 +66,7 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ZUC_PMD		crypto_zuc
 /**< KASUMI PMD device name */
+#define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
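+/**< Scheduler PMD device name */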
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -77,6 +78,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
+	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
 };
 
 extern const char **rte_cyptodev_names;
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f75f0e2..ee34688 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -70,7 +70,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT)           += -lrte_port
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PDUMP)          += -lrte_pdump
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)    += -lrte_distributor
-_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_METER)          += -lrte_meter
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED)          += -lrte_sched
@@ -98,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE)        += -lrte_cmdline
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
+_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lrte_pmd_xenvirt -lxenstore
@@ -145,6 +145,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -lrte_pmd_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER)  += -lrte_pmd_crypto_scheduler
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.7.4


* Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
  2016-12-02 14:15 [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd Fan Zhang
@ 2016-12-02 14:31 ` Thomas Monjalon
  2016-12-02 14:57   ` Bruce Richardson
  2017-01-03 17:08 ` [dpdk-dev] [PATCH v2] " Fan Zhang
  2017-01-03 17:16 ` [dpdk-dev] [PATCH v3] " Fan Zhang
  2 siblings, 1 reply; 42+ messages in thread
From: Thomas Monjalon @ 2016-12-02 14:31 UTC (permalink / raw)
  To: Fan Zhang; +Cc: dev, declan.doherty

2016-12-02 14:15, Fan Zhang:
> This patch provides the initial implementation of the scheduler poll mode
> driver using the DPDK cryptodev framework.
> 
> The scheduler PMD schedules and enqueues crypto ops to the hardware
> and/or software crypto devices attached to it (its slaves). Dequeuing
> from the slave(s), and the optional reordering of the dequeued crypto
> ops, are then carried out by the scheduler.
> 
> The scheduler PMD can be used to fill the throughput gap between a
> physical core and the existing cryptodevs to increase the overall
> performance. For example, if a physical core can process crypto ops
> faster than a single cryptodev can consume them, the scheduler PMD can
> be used to attach more than one cryptodev.
> 
> This initial implementation is limited to supporting the following
> scheduling modes:
> 
> - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst the attached
>     software slave cryptodevs). To set this mode, one or more software
>     cryptodevs must have been attached to the scheduler.
> 
> - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst the attached
>     hardware slave cryptodevs (QAT)). To set this mode, one or more QAT
>     devices must have been attached to the scheduler.

Could it be implemented on top of the eventdev API?


* Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
  2016-12-02 14:31 ` Thomas Monjalon
@ 2016-12-02 14:57   ` Bruce Richardson
  2016-12-02 16:22     ` Declan Doherty
  0 siblings, 1 reply; 42+ messages in thread
From: Bruce Richardson @ 2016-12-02 14:57 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Fan Zhang, dev, declan.doherty

On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
> 2016-12-02 14:15, Fan Zhang:
> > This patch provides the initial implementation of the scheduler poll mode
> > driver using the DPDK cryptodev framework.
> > 
> > The scheduler PMD schedules and enqueues crypto ops to the hardware
> > and/or software crypto devices attached to it (its slaves). Dequeuing
> > from the slave(s), and the optional reordering of the dequeued crypto
> > ops, are then carried out by the scheduler.
> > 
> > The scheduler PMD can be used to fill the throughput gap between a
> > physical core and the existing cryptodevs to increase the overall
> > performance. For example, if a physical core can process crypto ops
> > faster than a single cryptodev can consume them, the scheduler PMD can
> > be used to attach more than one cryptodev.
> > 
> > This initial implementation is limited to supporting the following
> > scheduling modes:
> > 
> > - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst the attached
> >     software slave cryptodevs). To set this mode, one or more software
> >     cryptodevs must have been attached to the scheduler.
> > 
> > - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst the attached
> >     hardware slave cryptodevs (QAT)). To set this mode, one or more QAT
> >     devices must have been attached to the scheduler.
> 
> Could it be implemented on top of the eventdev API?
> 
Not really. The eventdev API is for different types of scheduling
between multiple sources that are all polling for packets, compared to
this, which is more analogous - as I understand it - to the bonding PMD
for ethdev.

To make something like this work with an eventdev API you would need to
use one of the following models:
* have worker cores for offloading packets to the different crypto
  blocks pulling from the eventdev APIs. This would make it difficult to
  do any "smart" scheduling of crypto operations between the blocks,
  e.g. that one crypto instance may be better at certain types of
  operations than another.
* move the logic in this driver into an existing eventdev instance,
  which uses the eventdev api rather than the crypto APIs and so has an
  extra level of "structure abstraction" that has to be worked through.
  It's just not really a good fit.

So for this workload, I believe the pseudo-cryptodev instance is the
best way to go.

/Bruce

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
  2016-12-02 14:57   ` Bruce Richardson
@ 2016-12-02 16:22     ` Declan Doherty
  2016-12-05 15:12       ` Neil Horman
  0 siblings, 1 reply; 42+ messages in thread
From: Declan Doherty @ 2016-12-02 16:22 UTC (permalink / raw)
  To: Bruce Richardson, Thomas Monjalon; +Cc: Fan Zhang, dev

On 02/12/16 14:57, Bruce Richardson wrote:
> On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
>> 2016-12-02 14:15, Fan Zhang:
>>> This patch provides the initial implementation of the scheduler poll mode
>>> driver using DPDK cryptodev framework.
>>>
>>> Scheduler PMD is used to schedule and enqueue the crypto ops to the
>>> hardware and/or software crypto devices attached to it (slaves). The
>>> dequeue operation from the slave(s), and the possible dequeued crypto op
>>> reordering, are then carried out by the scheduler.
>>>
>>> The scheduler PMD can be used to fill the throughput gap between the
>>> physical core and the existing cryptodevs to increase the overall
>>> performance. For example, if a physical core has higher crypto op
>>> processing rate than a cryptodev, the scheduler PMD can be introduced to
>>> attach more than one cryptodevs.
>>>
>>> This initial implementation is limited to supporting the following
>>> scheduling modes:
>>>
>>> - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst attached software
>>>     slave cryptodevs, to set this mode, the scheduler should have been
>>>     attached 1 or more software cryptodevs.
>>>
>>> - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst attached hardware
>>>     slave cryptodevs (QAT), to set this mode, the scheduler should have
>>>     been attached 1 or more QATs.
>>
>> Could it be implemented on top of the eventdev API?
>>
> Not really. The eventdev API is for different types of scheduling
> between multiple sources that are all polling for packets, compared to
> this, which is more analogous - as I understand it - to the bonding PMD
> for ethdev.
>
> To make something like this work with an eventdev API you would need to
> use one of the following models:
> * have worker cores for offloading packets to the different crypto
>   blocks pulling from the eventdev APIs. This would make it difficult to
>   do any "smart" scheduling of crypto operations between the blocks,
>   e.g. that one crypto instance may be better at certain types of
>   operations than another.
> * move the logic in this driver into an existing eventdev instance,
>   which uses the eventdev api rather than the crypto APIs and so has an
>   extra level of "structure abstraction" that has to be worked through.
>   It's just not really a good fit.
>
> So for this workload, I believe the pseudo-cryptodev instance is the
> best way to go.
>
> /Bruce
>


As Bruce says this is much more analogous to the ethdev bonding driver, 
the main idea is to allow different crypto op scheduling mechanisms to 
be defined transparently to an application. This could be load-balancing 
across multiple hw crypto devices, or having a software crypto device to 
act as a backup device for a hw accelerator if it becomes 
oversubscribed. I think the main advantage of a crypto-scheduler 
approach is that the data path of the application doesn't need to 
have any knowledge that scheduling is happening at all, it is just using 
a different crypto device id, which then manages the distribution of 
crypto work.
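
To put the transparency point in concrete terms, here is a minimal
sketch of the application data path under an assumed scheduler device
id. Everything below is the standard cryptodev burst API; the dev/qp
ids and the burst size are illustrative assumptions, and nothing in
the data path is scheduler-specific:

#include <rte_cryptodev.h>

#define BURST_SZ 32

/* sched_dev_id is assumed to be the scheduler's cryptodev id obtained
 * at init time; the code is identical to driving a single cryptodev. */
static void
process_burst(uint8_t sched_dev_id, uint16_t qp_id,
                struct rte_crypto_op **ops, uint16_t nb_ops)
{
        struct rte_crypto_op *deq_ops[BURST_SZ];
        uint16_t enq, deq;

        /* Slave selection happens behind this call. */
        enq = rte_cryptodev_enqueue_burst(sched_dev_id, qp_id, ops, nb_ops);

        /* Ops come back possibly restored to enqueue order by the
         * scheduler's reorder logic. */
        deq = rte_cryptodev_dequeue_burst(sched_dev_id, qp_id,
                        deq_ops, BURST_SZ);

        /* ... retry ops[enq..nb_ops-1], process deq_ops[0..deq-1] ... */
}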

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
  2016-12-02 16:22     ` Declan Doherty
@ 2016-12-05 15:12       ` Neil Horman
  2016-12-07 12:42         ` Declan Doherty
  0 siblings, 1 reply; 42+ messages in thread
From: Neil Horman @ 2016-12-05 15:12 UTC (permalink / raw)
  To: Declan Doherty; +Cc: Bruce Richardson, Thomas Monjalon, Fan Zhang, dev

On Fri, Dec 02, 2016 at 04:22:16PM +0000, Declan Doherty wrote:
> On 02/12/16 14:57, Bruce Richardson wrote:
> > On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
> > > 2016-12-02 14:15, Fan Zhang:
> > > > This patch provides the initial implementation of the scheduler poll mode
> > > > driver using DPDK cryptodev framework.
> > > > 
> > > > Scheduler PMD is used to schedule and enqueue the crypto ops to the
> > > > hardware and/or software crypto devices attached to it (slaves). The
> > > > dequeue operation from the slave(s), and the possible dequeued crypto op
> > > > reordering, are then carried out by the scheduler.
> > > > 
> > > > The scheduler PMD can be used to fill the throughput gap between the
> > > > physical core and the existing cryptodevs to increase the overall
> > > > performance. For example, if a physical core has higher crypto op
> > > > processing rate than a cryptodev, the scheduler PMD can be introduced to
> > > > attach more than one cryptodevs.
> > > > 
> > > > This initial implementation is limited to supporting the following
> > > > scheduling modes:
> > > > 
> > > > - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst attached software
> > > >     slave cryptodevs, to set this mode, the scheduler should have been
> > > >     attached 1 or more software cryptodevs.
> > > > 
> > > > - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst attached hardware
> > > >     slave cryptodevs (QAT), to set this mode, the scheduler should have
> > > >     been attached 1 or more QATs.
> > > 
> > > Could it be implemented on top of the eventdev API?
> > > 
> > Not really. The eventdev API is for different types of scheduling
> > between multiple sources that are all polling for packets, compared to
> > this, which is more analogous - as I understand it - to the bonding PMD
> > for ethdev.
> > 
> > To make something like this work with an eventdev API you would need to
> > use one of the following models:
> > * have worker cores for offloading packets to the different crypto
> >   blocks pulling from the eventdev APIs. This would make it difficult to
> >   do any "smart" scheduling of crypto operations between the blocks,
> >   e.g. that one crypto instance may be better at certain types of
> >   operations than another.
> > * move the logic in this driver into an existing eventdev instance,
> >   which uses the eventdev api rather than the crypto APIs and so has an
> >   extra level of "structure abstraction" that has to be worked through.
> >   It's just not really a good fit.
> > 
> > So for this workload, I believe the pseudo-cryptodev instance is the
> > best way to go.
> > 
> > /Bruce
> > 
> 
> 
> As Bruce says this is much more analogous to the ethdev bonding driver, the
> main idea is to allow different crypto op scheduling mechanisms to be
> defined transparently to an application. This could be load-balancing across
> multiple hw crypto devices, or having a software crypto device to act as a
> backup device for a hw accelerator if it becomes oversubscribed. I think the
> main advantage of a crypto-scheduler approach is that the data path of
> the application doesn't need to have any knowledge that scheduling is
> happening at all, it is just using a different crypto device id, which
> then manages the distribution of crypto work.
> 
> 
> 
This is a good deal like the bonding pmd, and so from a certain standpoint it
makes sense to do this, but whereas the bonding pmd is meant to create a single
path to a logical network over several physical networks, this pmd really only
focuses on maximizing throughput, and for that we already have tools.  As Thomas
mentions, there is the eventdev library, but from my view the distributor
library already fits this bill.  It already is a basic framework to process
mbufs in parallel according to whatever policy you want to implement, which
sounds like exactly what the goal of this pmd is.  

Neil
 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
  2016-12-05 15:12       ` Neil Horman
@ 2016-12-07 12:42         ` Declan Doherty
  2016-12-07 14:16           ` Neil Horman
  0 siblings, 1 reply; 42+ messages in thread
From: Declan Doherty @ 2016-12-07 12:42 UTC (permalink / raw)
  To: Neil Horman; +Cc: Bruce Richardson, Thomas Monjalon, Fan Zhang, dev

On 05/12/16 15:12, Neil Horman wrote:
> On Fri, Dec 02, 2016 at 04:22:16PM +0000, Declan Doherty wrote:
>> On 02/12/16 14:57, Bruce Richardson wrote:
>>> On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
>>>> 2016-12-02 14:15, Fan Zhang:
>>>>> This patch provides the initial implementation of the scheduler poll mode
>>>>> driver using DPDK cryptodev framework.
>>>>>
>>>>> Scheduler PMD is used to schedule and enqueue the crypto ops to the
>>>>> hardware and/or software crypto devices attached to it (slaves). The
>>>>> dequeue operation from the slave(s), and the possible dequeued crypto op
>>>>> reordering, are then carried out by the scheduler.
>>>>>
>>>>> The scheduler PMD can be used to fill the throughput gap between the
>>>>> physical core and the existing cryptodevs to increase the overall
>>>>> performance. For example, if a physical core has higher crypto op
>>>>> processing rate than a cryptodev, the scheduler PMD can be introduced to
>>>>> attach more than one cryptodevs.
>>>>>
>>>>> This initial implementation is limited to supporting the following
>>>>> scheduling modes:
>>>>>
>>>>> - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst attached software
>>>>>     slave cryptodevs, to set this mode, the scheduler should have been
>>>>>     attached 1 or more software cryptodevs.
>>>>>
>>>>> - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst attached hardware
>>>>>     slave cryptodevs (QAT), to set this mode, the scheduler should have
>>>>>     been attached 1 or more QATs.
>>>>
>>>> Could it be implemented on top of the eventdev API?
>>>>
>>> Not really. The eventdev API is for different types of scheduling
>>> between multiple sources that are all polling for packets, compared to
>>> this, which is more analogous - as I understand it - to the bonding PMD
>>> for ethdev.
>>>
>>> To make something like this work with an eventdev API you would need to
>>> use one of the following models:
>>> * have worker cores for offloading packets to the different crypto
>>>   blocks pulling from the eventdev APIs. This would make it difficult to
>>>   do any "smart" scheduling of crypto operations between the blocks,
>>>   e.g. that one crypto instance may be better at certain types of
>>>   operations than another.
>>> * move the logic in this driver into an existing eventdev instance,
>>>   which uses the eventdev api rather than the crypto APIs and so has an
>>>   extra level of "structure abstraction" that has to be worked through.
>>>   It's just not really a good fit.
>>>
>>> So for this workload, I believe the pseudo-cryptodev instance is the
>>> best way to go.
>>>
>>> /Bruce
>>>
>>
>>
>> As Bruce says this is much more analogous to the ethdev bonding driver, the
>> main idea is to allow different crypto op scheduling mechanisms to be
>> defined transparently to an application. This could be load-balancing across
>> multiple hw crypto devices, or having a software crypto device to act as a
>> backup device for a hw accelerator if it becomes oversubscribed. I think the
>> main advantage of a crypto-scheduler approach is that the data path of
>> the application doesn't need to have any knowledge that scheduling is
>> happening at all, it is just using a different crypto device id, which
>> then manages the distribution of crypto work.
>>
>>
>>
> This is a good deal like the bonding pmd, and so from a certain standpoint it
> makes sense to do this, but whereas the bonding pmd is meant to create a single
> path to a logical network over several physical networks, this pmd really only
> focuses on maximizing throughput, and for that we already have tools.  As Thomas
> mentions, there is the eventdev library, but from my view the distributor
> library already fits this bill.  It already is a basic framework to process
> mbufs in parallel according to whatever policy you want to implement, which
> sounds like exactly what the goal of this pmd is.
>
> Neil
>
>

Hey Neil,

this is actually intended to act and look a good deal like the ethernet 
bonding device, but for handling the crypto scheduling use cases.

For example, take the case where multiple hw accelerators may be 
available. We want to provide user applications with a mechanism to 
transparently balance work across all devices without having to manage 
the load balancing details or the guaranteeing of ordering of the 
processed ops on the dequeue_burst side. In this case the application 
would just use the crypto dev_id of the scheduler and it would look 
after balancing the workload across the available hw accelerators.


+-------------------+
|  Crypto Sch PMD   |
|                   |
| ORDERING / RR SCH |
+-------------------+
         ^ ^ ^
         | | |
       +-+ | +-------------------------------+
       |   +---------------+                 |
       |                   |                 |
       V                   V                 V
+---------------+ +---------------+ +---------------+
| Crypto HW PMD | | Crypto HW PMD | | Crypto HW PMD |
+---------------+ +---------------+ +---------------+
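
(As a hedged sketch only: the round-robin mode pictured above might
reduce to something like the following on the enqueue side. The context
struct and names are invented for illustration and are not the actual
internals of the patch.)

/* Illustrative scheduler state: the attached slave cryptodev ids and
 * a running index; each burst goes to the next slave in turn. */
struct rr_sched_ctx {
        uint8_t slave_ids[8];
        unsigned int nb_slaves;
        unsigned int last;
};

static uint16_t
rr_enqueue_burst(struct rr_sched_ctx *ctx, uint16_t qp_id,
                struct rte_crypto_op **ops, uint16_t nb_ops)
{
        ctx->last = (ctx->last + 1) % ctx->nb_slaves;
        /* Hand the whole burst to the selected slave's queue pair. */
        return rte_cryptodev_enqueue_burst(ctx->slave_ids[ctx->last],
                        qp_id, ops, nb_ops);
}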

Another use case we hope to support is migration of processing from one 
device to another where a hw and sw crypto pmd can be bound to the same 
crypto scheduler and the crypto processing could be transparently 
migrated from the hw to sw pmd. This would allow for hw accelerators to 
be hot-plug attached/detached in a Guest VM.

+----------------+
| Crypto Sch PMD |
|                |
| MIGRATION SCH  |
+----------------+
       | |
       | +-----------------+
       |                   |
       V                   V
+---------------+ +---------------+
| Crypto HW PMD | | Crypto SW PMD |
|   (Active)    | |   (Inactive)  |
+---------------+ +---------------+
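
(Again purely illustrative, with invented field names: in a migration
mode all traffic goes to whichever slave is currently active, so
flipping one index migrates processing without the data path noticing.)

struct mig_sched_ctx {
        uint8_t slave_ids[2];   /* [0] = hw pmd, [1] = sw pmd */
        unsigned int active;    /* index of the active slave */
};

static uint16_t
mig_enqueue_burst(struct mig_sched_ctx *ctx, uint16_t qp_id,
                struct rte_crypto_op **ops, uint16_t nb_ops)
{
        return rte_cryptodev_enqueue_burst(ctx->slave_ids[ctx->active],
                        qp_id, ops, nb_ops);
}

/* On hw hot-unplug, or ahead of a planned migration, fail over to
 * the sw slave. */
static void
mig_switch_to_sw(struct mig_sched_ctx *ctx)
{
        ctx->active = 1;
}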

The main point is that this isn't envisaged as just a mechanism for 
scheduling crypto workloads across multiple cores, but a framework for 
allowing different scheduling mechanisms to be introduced, to handle 
different crypto scheduling problems, and done so in a way which is 
completely transparent to the data path of an application. Like the eth 
bonding driver we want to support creating the crypto scheduler from EAL 
options, which allow specification of the scheduling mode and the crypto 
pmds which are to be bound to that crypto scheduler.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
  2016-12-07 12:42         ` Declan Doherty
@ 2016-12-07 14:16           ` Neil Horman
  2016-12-07 14:46             ` Richardson, Bruce
  0 siblings, 1 reply; 42+ messages in thread
From: Neil Horman @ 2016-12-07 14:16 UTC (permalink / raw)
  To: Declan Doherty; +Cc: Bruce Richardson, Thomas Monjalon, Fan Zhang, dev

On Wed, Dec 07, 2016 at 12:42:15PM +0000, Declan Doherty wrote:
> On 05/12/16 15:12, Neil Horman wrote:
> > On Fri, Dec 02, 2016 at 04:22:16PM +0000, Declan Doherty wrote:
> > > On 02/12/16 14:57, Bruce Richardson wrote:
> > > > On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
> > > > > 2016-12-02 14:15, Fan Zhang:
> > > > > > This patch provides the initial implementation of the scheduler poll mode
> > > > > > driver using DPDK cryptodev framework.
> > > > > > 
> > > > > > Scheduler PMD is used to schedule and enqueue the crypto ops to the
> > > > > > hardware and/or software crypto devices attached to it (slaves). The
> > > > > > dequeue operation from the slave(s), and the possible dequeued crypto op
> > > > > > reordering, are then carried out by the scheduler.
> > > > > > 
> > > > > > The scheduler PMD can be used to fill the throughput gap between the
> > > > > > physical core and the existing cryptodevs to increase the overall
> > > > > > performance. For example, if a physical core has higher crypto op
> > > > > > processing rate than a cryptodev, the scheduler PMD can be introduced to
> > > > > > attach more than one cryptodevs.
> > > > > > 
> > > > > > This initial implementation is limited to supporting the following
> > > > > > scheduling modes:
> > > > > > 
> > > > > > - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst attached software
> > > > > >     slave cryptodevs, to set this mode, the scheduler should have been
> > > > > >     attached 1 or more software cryptodevs.
> > > > > > 
> > > > > > - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst attached hardware
> > > > > >     slave cryptodevs (QAT), to set this mode, the scheduler should have
> > > > > >     been attached 1 or more QATs.
> > > > > 
> > > > > Could it be implemented on top of the eventdev API?
> > > > > 
> > > > Not really. The eventdev API is for different types of scheduling
> > > > between multiple sources that are all polling for packets, compared to
> > > > this, which is more analogous - as I understand it - to the bonding PMD
> > > > for ethdev.
> > > > 
> > > > To make something like this work with an eventdev API you would need to
> > > > use one of the following models:
> > > > * have worker cores for offloading packets to the different crypto
> > > >   blocks pulling from the eventdev APIs. This would make it difficult to
> > > >   do any "smart" scheduling of crypto operations between the blocks,
> > > >   e.g. that one crypto instance may be better at certain types of
> > > >   operations than another.
> > > > * move the logic in this driver into an existing eventdev instance,
> > > >   which uses the eventdev api rather than the crypto APIs and so has an
> > > >   extra level of "structure abstraction" that has to be worked through.
> > > >   It's just not really a good fit.
> > > > 
> > > > So for this workload, I believe the pseudo-cryptodev instance is the
> > > > best way to go.
> > > > 
> > > > /Bruce
> > > > 
> > > 
> > > 
> > > As Bruce says this is much more analogous to the ethdev bonding driver, the
> > > main idea is to allow different crypto op scheduling mechanisms to be
> > > defined transparently to an application. This could be load-balancing across
> > > multiple hw crypto devices, or having a software crypto device to act as a
> > > backup device for a hw accelerator if it becomes oversubscribed. I think the
> > > main advantage of a crypto-scheduler approach is that the data path of
> > > the application doesn't need to have any knowledge that scheduling is
> > > happening at all, it is just using a different crypto device id, which
> > > then manages the distribution of crypto work.
> > > 
> > > 
> > > 
> > This is a good deal like the bonding pmd, and so from a certain standpoint it
> > makes sense to do this, but whereas the bonding pmd is meant to create a single
> > path to a logical network over several physical networks, this pmd really only
> > focuses on maximizing throughput, and for that we already have tools.  As Thomas
> > mentions, there is the eventdev library, but from my view the distributor
> > library already fits this bill.  It already is a basic framework to process
> > mbufs in parallel according to whatever policy you want to implement, which
> > sounds like exactly what the goal of this pmd is.
> > 
> > Neil
> > 
> > 
> 
> Hey Neil,
> 
> this is actually intended to act and look a good deal like the ethernet
> bonding device, but for handling the crypto scheduling use cases.
> 
> For example, take the case where multiple hw accelerators may be available.
> We want to provide user applications with a mechanism to transparently
> balance work across all devices without having to manage the load balancing
> details or the guaranteeing of ordering of the processed ops on the
> dequeue_burst side. In this case the application would just use the crypto
> dev_id of the scheduler and it would look after balancing the workload
> across the available hw accelerators.
> 
> 
> +-------------------+
> |  Crypto Sch PMD   |
> |                   |
> | ORDERING / RR SCH |
> +-------------------+
>         ^ ^ ^
>         | | |
>       +-+ | +-------------------------------+
>       |   +---------------+                 |
>       |                   |                 |
>       V                   V                 V
> +---------------+ +---------------+ +---------------+
> | Crypto HW PMD | | Crypto HW PMD | | Crypto HW PMD |
> +---------------+ +---------------+ +---------------+
> 
> Another use case we hope to support is migration of processing from one
> device to another where a hw and sw crypto pmd can be bound to the same
> crypto scheduler and the crypto processing could be transparently migrated
> from the hw to sw pmd. This would allow for hw accelerators to be
> hot-plug attached/detached in a Guest VM.
> 
> +----------------+
> | Crypto Sch PMD |
> |                |
> | MIGRATION SCH  |
> +----------------+
>       | |
>       | +-----------------+
>       |                   |
>       V                   V
> +---------------+ +---------------+
> | Crypto HW PMD | | Crypto SW PMD |
> |   (Active)    | |   (Inactive)  |
> +---------------+ +---------------+
> 
> The main point is that this isn't envisaged as just a mechanism for scheduling
> crypto workloads across multiple cores, but a framework for allowing
> different scheduling mechanisms to be introduced, to handle different crypto
> scheduling problems, and done so in a way which is completely transparent
> to the data path of an application. Like the eth bonding driver we want to
> support creating the crypto scheduler from EAL options, which allow
> specification of the scheduling mode and the crypto pmds which are to be
> bound to that crypto scheduler.
> 
> 
I get what it's for, that much is pretty clear.  But whereas the bonding driver
benefits from creating a single device interface for the purposes of properly
routing traffic through the network stack without exposing that complexity to
the using application, this pmd provides only aggregation according to various
policies.  This is exactly what the distributor library was built for, and it
seems like a re-invention of the wheel to ignore that.  At the very least, you
should implement this pmd on top of the distributor library.  If that is
impractical, then I somewhat question why we have the distributor library at
all.

Neil
 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
  2016-12-07 14:16           ` Neil Horman
@ 2016-12-07 14:46             ` Richardson, Bruce
  2016-12-07 16:04               ` Declan Doherty
  0 siblings, 1 reply; 42+ messages in thread
From: Richardson, Bruce @ 2016-12-07 14:46 UTC (permalink / raw)
  To: Neil Horman, Doherty, Declan; +Cc: Thomas Monjalon, Zhang, Roy Fan, dev



> -----Original Message-----
> From: Neil Horman [mailto:nhorman@tuxdriver.com]
> Sent: Wednesday, December 7, 2016 2:17 PM
> To: Doherty, Declan <declan.doherty@intel.com>
> Cc: Richardson, Bruce <bruce.richardson@intel.com>; Thomas Monjalon
> <thomas.monjalon@6wind.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto
> pmd
> 
> On Wed, Dec 07, 2016 at 12:42:15PM +0000, Declan Doherty wrote:
> > On 05/12/16 15:12, Neil Horman wrote:
> > > On Fri, Dec 02, 2016 at 04:22:16PM +0000, Declan Doherty wrote:
> > > > On 02/12/16 14:57, Bruce Richardson wrote:
> > > > > On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
> > > > > > 2016-12-02 14:15, Fan Zhang:
> > > > > > > This patch provides the initial implementation of the
> > > > > > > scheduler poll mode driver using DPDK cryptodev framework.
> > > > > > >
> > > > > > > Scheduler PMD is used to schedule and enqueue the crypto ops
> > > > > > > to the hardware and/or software crypto devices attached to
> > > > > > > it (slaves). The dequeue operation from the slave(s), and
> > > > > > > the possible dequeued crypto op reordering, are then carried
> out by the scheduler.
> > > > > > >
> > > > > > > The scheduler PMD can be used to fill the throughput gap
> > > > > > > between the physical core and the existing cryptodevs to
> > > > > > > increase the overall performance. For example, if a physical
> > > > > > > core has higher crypto op processing rate than a cryptodev,
> > > > > > > the scheduler PMD can be introduced to attach more than one
> cryptodevs.
> > > > > > >
> > > > > > > This initial implementation is limited to supporting the
> > > > > > > following scheduling modes:
> > > > > > >
> > > > > > > - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst
> attached software
> > > > > > >     slave cryptodevs, to set this mode, the scheduler should
> have been
> > > > > > >     attached 1 or more software cryptodevs.
> > > > > > >
> > > > > > > - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst
> attached hardware
> > > > > > >     slave cryptodevs (QAT), to set this mode, the scheduler
> should have
> > > > > > >     been attached 1 or more QATs.
> > > > > >
> > > > > > Could it be implemented on top of the eventdev API?
> > > > > >
> > > > > Not really. The eventdev API is for different types of
> > > > > scheduling between multiple sources that are all polling for
> > > > > packets, compared to this, which is more analogous - as I
> > > > > understand it - to the bonding PMD for ethdev.
> > > > >
> > > > > To make something like this work with an eventdev API you would
> > > > > need to use one of the following models:
> > > > > * have worker cores for offloading packets to the different crypto
> > > > >   blocks pulling from the eventdev APIs. This would make it
> difficult to
> > > > >   do any "smart" scheduling of crypto operations between the
> blocks,
> > > > >   e.g. that one crypto instance may be better at certain types of
> > > > >   operations than another.
> > > > > * move the logic in this driver into an existing eventdev
> instance,
> > > > >   which uses the eventdev api rather than the crypto APIs and so
> has an
> > > > >   extra level of "structure abstraction" that has to be worked
> through.
> > > > >   It's just not really a good fit.
> > > > >
> > > > > So for this workload, I believe the pseudo-cryptodev instance is
> > > > > the best way to go.
> > > > >
> > > > > /Bruce
> > > > >
> > > >
> > > >
> > > > As Bruce says this is much more analogous to the ethdev bonding
> > > > driver, the main idea is to allow different crypto op scheduling
> > > > mechanisms to be defined transparently to an application. This
> > > > could be load-balancing across multiple hw crypto devices, or
> > > > having a software crypto device to act as a backup device for a hw
> > > > accelerator if it becomes oversubscribed. I think the main
> > > > advantage of a crypto-scheduler approach is that the data path
> > > > of the application doesn't need to have any knowledge that
> > > > scheduling is happening at all, it is just using a different crypto
> device id, which then manages the distribution of crypto work.
> > > >
> > > >
> > > >
> > > This is a good deal like the bonding pmd, and so from a certain
> > > standpoint it makes sense to do this, but whereas the bonding pmd is
> > > meant to create a single path to a logical network over several
> > > physical networks, this pmd really only focuses on maximizing
> > > throughput, and for that we already have tools.  As Thomas mentions,
> > > there is the eventdev library, but from my view the distributor
> > > library already fits this bill.  It already is a basic framework to
> > > process mbufs in parallel according to whatever policy you want to
> implement, which sounds like exactly what the goal of this pmd is.
> > >
> > > Neil
> > >
> > >
> >
> > Hey Neil,
> >
> > this is actually intended to act and look a good deal like the
> > ethernet bonding device, but for handling the crypto scheduling use cases.
> >
> > For example, take the case where multiple hw accelerators may be
> available.
> > We want to provide user applications with a mechanism to transparently
> > balance work across all devices without having to manage the load
> > balancing details or the guaranteeing of ordering of the processed ops
> > on the dequeue_burst side. In this case the application would just use
> > the crypto dev_id of the scheduler and it would look after balancing
> > the workload across the available hw accelerators.
> >
> >
> > +-------------------+
> > |  Crypto Sch PMD   |
> > |                   |
> > | ORDERING / RR SCH |
> > +-------------------+
> >         ^ ^ ^
> >         | | |
> >       +-+ | +-------------------------------+
> >       |   +---------------+                 |
> >       |                   |                 |
> >       V                   V                 V
> > +---------------+ +---------------+ +---------------+
> > | Crypto HW PMD | | Crypto HW PMD | | Crypto HW PMD |
> > +---------------+ +---------------+ +---------------+
> >
> > Another use case we hope to support is migration of processing from
> > one device to another where a hw and sw crypto pmd can be bound to the
> > same crypto scheduler and the crypto processing could be
> > transparently migrated from the hw to sw pmd. This would allow for hw
> > accelerators to be hot-plug attached/detached in a Guest VM.
> >
> > +----------------+
> > | Crypto Sch PMD |
> > |                |
> > | MIGRATION SCH  |
> > +----------------+
> >       | |
> >       | +-----------------+
> >       |                   |
> >       V                   V
> > +---------------+ +---------------+
> > | Crypto HW PMD | | Crypto SW PMD |
> > |   (Active)    | |   (Inactive)  |
> > +---------------+ +---------------+
> >
> > The main point is that this isn't envisaged as just a mechanism for
> > scheduling crypto workloads across multiple cores, but a framework
> > for allowing different scheduling mechanisms to be introduced, to
> > handle different crypto scheduling problems, and done so in a way
> > which is completely transparent to the data path of an application.
> > Like the eth bonding driver we want to support creating the crypto
> > scheduler from EAL options, which allow specification of the
> > scheduling mode and the crypto pmds which are to be bound to that crypto
> scheduler.
> >
> >
> I get what it's for, that much is pretty clear.  But whereas the bonding
> driver benefits from creating a single device interface for the purposes
> of properly routing traffic through the network stack without exposing
> that complexity to the using application, this pmd provides only
> aggregation according to various policies.  This is exactly what the
> distributor library was built for, and it seems like a re-invention of the
> wheel to ignore that.  At the very least, you should implement this pmd on
> top of the distributor library.  If that is impractical, then I somewhat
> question why we have the distributor library at all.
> 
> Neil
> 

Hi Neil,

The distributor library and the eventdev framework are not the solution here, as, firstly, the crypto devices are not cores, in the same way that ethdevs are not cores, and the distributor library is for evenly distributing work among cores. Sure, some crypto implementations may be software only, but many aren't, and those that are software still appear to software as a device that must be used as if it were a HW device. In the same way that to use distributor to load balance traffic between various TX ports is not a suitable solution - because you need to use cores to do the work "bridging" between the distributor/eventdev and the ethdev device - similarly here, if we distribute traffic using the distributor, you need cores to pull those packets from the distributor and offload them to the crypto devices. To use the distributor library in place of this vpmd, we'd need crypto devices which are aware of how to talk to the distributor, and use its protocols for pushing/pulling packets, or else we are pulling in extra core cycles to do bridging work.

Secondly, the distributor and eventdev libraries are designed for doing flow-based (generally atomic) packet distribution. Load balancing between crypto devices is not generally based on flows, but rather on other factors like packet size, offload cost per device, etc. To distributor/eventdev, all workers are equal, but for working with devices, for crypto offload or nic transmission, that is plainly not the case. In short, the distribution problems that are being solved by the distributor and eventdev libraries are fundamentally different from those being solved by this vpmd. They would be the wrong tool for the job.
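
(To illustrate that last point with an assumed policy - none of this is
from the patch - a per-op decision can be keyed on payload size rather
than on a flow: small ops, where the fixed offload cost of the hw
device dominates, stay on a sw slave, while large ops go to hw.)

#include <rte_crypto.h>
#include <rte_mbuf.h>

/* Illustrative cutoff only; a real policy might also weigh queue
 * depth or a measured per-device offload cost. */
#define SW_CUTOFF_BYTES 512

static uint8_t
pick_slave(struct rte_crypto_op *op, uint8_t sw_dev_id, uint8_t hw_dev_id)
{
        if (rte_pktmbuf_pkt_len(op->sym->m_src) < SW_CUTOFF_BYTES)
                return sw_dev_id;
        return hw_dev_id;
}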

I would agree with the previous statements that this driver is far closer in functionality to the bonded ethdev driver than anything else. It makes multiple devices appear as a single one while hiding the complexity of the multiple devices from the using application. In the same way as the bonded ethdev driver has different modes for active-backup, and for active-active for increased throughput, this vpmd for crypto can have the exact same modes - multiple active bonded devices for higher performance operation, or two devices in active backup to enable migration when using SR-IOV as described by Declan above.

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
  2016-12-07 14:46             ` Richardson, Bruce
@ 2016-12-07 16:04               ` Declan Doherty
  2016-12-08 14:57                 ` Neil Horman
  0 siblings, 1 reply; 42+ messages in thread
From: Declan Doherty @ 2016-12-07 16:04 UTC (permalink / raw)
  To: Richardson, Bruce, Neil Horman; +Cc: Thomas Monjalon, Zhang, Roy Fan, dev

On 07/12/16 14:46, Richardson, Bruce wrote:
>
>
>> -----Original Message-----
>> From: Neil Horman [mailto:nhorman@tuxdriver.com]
>> Sent: Wednesday, December 7, 2016 2:17 PM
>> To: Doherty, Declan <declan.doherty@intel.com>
>> Cc: Richardson, Bruce <bruce.richardson@intel.com>; Thomas Monjalon
>> <thomas.monjalon@6wind.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
>> dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto
>> pmd
>>
>> On Wed, Dec 07, 2016 at 12:42:15PM +0000, Declan Doherty wrote:
>>> On 05/12/16 15:12, Neil Horman wrote:
>>>> On Fri, Dec 02, 2016 at 04:22:16PM +0000, Declan Doherty wrote:
>>>>> On 02/12/16 14:57, Bruce Richardson wrote:
>>>>>> On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
>>>>>>> 2016-12-02 14:15, Fan Zhang:
>>>>>>>> This patch provides the initial implementation of the
>>>>>>>> scheduler poll mode driver using DPDK cryptodev framework.
>>>>>>>>
>>>>>>>> Scheduler PMD is used to schedule and enqueue the crypto ops
>>>>>>>> to the hardware and/or software crypto devices attached to
>>>>>>>> it (slaves). The dequeue operation from the slave(s), and
>>>>>>>> the possible dequeued crypto op reordering, are then carried
>> out by the scheduler.
>>>>>>>>
>>>>>>>> The scheduler PMD can be used to fill the throughput gap
>>>>>>>> between the physical core and the existing cryptodevs to
>>>>>>>> increase the overall performance. For example, if a physical
>>>>>>>> core has higher crypto op processing rate than a cryptodev,
>>>>>>>> the scheduler PMD can be introduced to attach more than one
>> cryptodevs.
>>>>>>>>
>>>>>>>> This initial implementation is limited to supporting the
>>>>>>>> following scheduling modes:
>>>>>>>>
>>>>>>>> - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst
>> attached software
>>>>>>>>     slave cryptodevs, to set this mode, the scheduler should
>> have been
>>>>>>>>     attached 1 or more software cryptodevs.
>>>>>>>>
>>>>>>>> - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst
>> attached hardware
>>>>>>>>     slave cryptodevs (QAT), to set this mode, the scheduler
>> should have
>>>>>>>>     been attached 1 or more QATs.
>>>>>>>
>>>>>>> Could it be implemented on top of the eventdev API?
>>>>>>>
>>>>>> Not really. The eventdev API is for different types of
>>>>>> scheduling between multiple sources that are all polling for
>>>>>> packets, compared to this, which is more analogous - as I
>>>>>> understand it - to the bonding PMD for ethdev.
>>>>>>
>>>>>> To make something like this work with an eventdev API you would
>>>>>> need to use one of the following models:
>>>>>> * have worker cores for offloading packets to the different crypto
>>>>>>   blocks pulling from the eventdev APIs. This would make it
>> difficult to
>>>>>>   do any "smart" scheduling of crypto operations between the
>> blocks,
>>>>>>   e.g. that one crypto instance may be better at certain types of
>>>>>>   operations than another.
>>>>>> * move the logic in this driver into an existing eventdev
>> instance,
>>>>>>   which uses the eventdev api rather than the crypto APIs and so
>> has an
>>>>>>   extra level of "structure abstraction" that has to be worked
>> through.
>>>>>>   It's just not really a good fit.
>>>>>>
>>>>>> So for this workload, I believe the pseudo-cryptodev instance is
>>>>>> the best way to go.
>>>>>>
>>>>>> /Bruce
>>>>>>
>>>>>
>>>>>
>>>>> As Bruce says this is much more analogous to the ethdev bonding
>>>>> driver, the main idea is to allow different crypto op scheduling
>>>>> mechanisms to be defined transparently to an application. This
>>>>> could be load-balancing across multiple hw crypto devices, or
>>>>> having a software crypto device to act as a backup device for a hw
>>>>> accelerator if it becomes oversubscribed. I think the main
>>>>> advantage of a crypto-scheduler approach is that the data path
>>>>> of the application doesn't need to have any knowledge that
>>>>> scheduling is happening at all, it is just using a different crypto
>> device id, which then manages the distribution of crypto work.
>>>>>
>>>>>
>>>>>
>>>> This is a good deal like the bonding pmd, and so from a certain
>>>> standpoint it makes sense to do this, but whereas the bonding pmd is
>>>> meant to create a single path to a logical network over several
>>>> physical networks, this pmd really only focuses on maximizing
>>>> throughput, and for that we already have tools.  As Thomas mentions,
>>>> there is the eventdev library, but from my view the distributor
>>>> library already fits this bill.  It already is a basic framework to
>>>> process mbufs in parallel according to whatever policy you want to
>> implement, which sounds like exactly what the goal of this pmd is.
>>>>
>>>> Neil
>>>>
>>>>
>>>
>>> Hey Neil,
>>>
>>> this is actually intended to act and look a good deal like the
>>> ethernet bonding device, but for handling the crypto scheduling use cases.
>>>
>>> For example, take the case where multiple hw accelerators may be
>> available.
>>> We want to provide user applications with a mechanism to transparently
>>> balance work across all devices without having to manage the load
>>> balancing details or the guaranteeing of ordering of the processed ops
>>> on the dequeue_burst side. In this case the application would just use
>>> the crypto dev_id of the scheduler and it would look after balancing
>>> the workload across the available hw accelerators.
>>>
>>>
>>> +-------------------+
>>> |  Crypto Sch PMD   |
>>> |                   |
>>> | ORDERING / RR SCH |
>>> +-------------------+
>>>         ^ ^ ^
>>>         | | |
>>>       +-+ | +-------------------------------+
>>>       |   +---------------+                 |
>>>       |                   |                 |
>>>       V                   V                 V
>>> +---------------+ +---------------+ +---------------+
>>> | Crypto HW PMD | | Crypto HW PMD | | Crypto HW PMD |
>>> +---------------+ +---------------+ +---------------+
>>>
>>> Another use case we hope to support is migration of processing from
>>> one device to another where a hw and sw crypto pmd can be bound to the
>>> same crypto scheduler and the crypto processing could be
>>> transparently migrated from the hw to sw pmd. This would allow for hw
>>> accelerators to be hot-plug attached/detached in a Guest VM.
>>>
>>> +----------------+
>>> | Crypto Sch PMD |
>>> |                |
>>> | MIGRATION SCH  |
>>> +----------------+
>>>       | |
>>>       | +-----------------+
>>>       |                   |
>>>       V                   V
>>> +---------------+ +---------------+
>>> | Crypto HW PMD | | Crypto SW PMD |
>>> |   (Active)    | |   (Inactive)  |
>>> +---------------+ +---------------+
>>>
>>> The main point is that this isn't envisaged as just a mechanism for
>>> scheduling crypto workloads across multiple cores, but a framework
>>> for allowing different scheduling mechanisms to be introduced, to
>>> handle different crypto scheduling problems, and done so in a way
>>> which is completely transparent to the data path of an application.
>>> Like the eth bonding driver we want to support creating the crypto
>>> scheduler from EAL options, which allow specification of the
>>> scheduling mode and the crypto pmds which are to be bound to that crypto
>> scheduler.
>>>
>>>
>> I get what it's for, that much is pretty clear.  But whereas the bonding
>> driver benefits from creating a single device interface for the purposes
>> of properly routing traffic through the network stack without exposing
>> that complexity to the using application, this pmd provides only
>> aggregation according to various policies.  This is exactly what the
>> distributor library was built for, and it seems like a re-invention of the
>> wheel to ignore that.  At the very least, you should implement this pmd on
>> top of the distributor library.  If that is impractical, then I somewhat
>> question why we have the distributor library at all.
>>
>> Neil
>>
>
> Hi Neil,
>
> The distributor library and the eventdev framework are not the solution here, as, firstly, the crypto devices are not cores, in the same way that ethdevs are not cores, and the distributor library is for evenly distributing work among cores. Sure, some crypto implementations may be software only, but many aren't, and those that are software still appear to software as a device that must be used as if it were a HW device. In the same way that to use distributor to load balance traffic between various TX ports is not a suitable solution - because you need to use cores to do the work "bridging" between the distributor/eventdev and the ethdev device - similarly here, if we distribute traffic using the distributor, you need cores to pull those packets from the distributor and offload them to the crypto devices. To use the distributor library in place of this vpmd, we'd need crypto devices which are aware of how to talk to the distributor, and use its protocols for pushing/pulling packets, or else we are pulling in extra core cycles to do bridging work.
>
> Secondly, the distributor and eventdev libraries are designed for doing flow-based (generally atomic) packet distribution. Load balancing between crypto devices is not generally based on flows, but rather on other factors like packet size, offload cost per device, etc. To distributor/eventdev, all workers are equal, but for working with devices, for crypto offload or nic transmission, that is plainly not the case. In short, the distribution problems that are being solved by the distributor and eventdev libraries are fundamentally different from those being solved by this vpmd. They would be the wrong tool for the job.
>
> I would agree with the previous statements that this driver is far closer in functionality to the bonded ethdev driver than anything else. It makes multiple devices appear as a single one while hiding the complexity of the multiple devices from the using application. In the same way as the bonded ethdev driver has different modes for active-backup, and for active-active for increased throughput, this vpmd for crypto can have the exact same modes - multiple active bonded devices for higher performance operation, or two devices in active backup to enable migration when using SR-IOV as described by Declan above.
>
> Regards,
> /Bruce
>

I think that having scheduler in the pmd name here may be somewhat of a 
loaded term and is muddying the waters of the problem we are trying to 
address. I think if we were to rename this to crypto_bond_pmd it may 
make our intent for what we want this pmd to achieve clearer.

Neil, in most of the initial scheduling use cases we want to address 
with this pmd, we are looking to schedule within the context 
of a single lcore on multiple hw accelerators or a mix of hw 
accelerators and sw pmds and therefore using the distributor or the 
eventdev wouldn't add a lot of value.

Declan

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd
  2016-12-07 16:04               ` Declan Doherty
@ 2016-12-08 14:57                 ` Neil Horman
  0 siblings, 0 replies; 42+ messages in thread
From: Neil Horman @ 2016-12-08 14:57 UTC (permalink / raw)
  To: Declan Doherty; +Cc: Richardson, Bruce, Thomas Monjalon, Zhang, Roy Fan, dev

On Wed, Dec 07, 2016 at 04:04:17PM +0000, Declan Doherty wrote:
> On 07/12/16 14:46, Richardson, Bruce wrote:
> > 
> > 
> > > -----Original Message-----
> > > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > > Sent: Wednesday, December 7, 2016 2:17 PM
> > > To: Doherty, Declan <declan.doherty@intel.com>
> > > Cc: Richardson, Bruce <bruce.richardson@intel.com>; Thomas Monjalon
> > > <thomas.monjalon@6wind.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> > > dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto
> > > pmd
> > > 
> > > On Wed, Dec 07, 2016 at 12:42:15PM +0000, Declan Doherty wrote:
> > > > On 05/12/16 15:12, Neil Horman wrote:
> > > > > On Fri, Dec 02, 2016 at 04:22:16PM +0000, Declan Doherty wrote:
> > > > > > On 02/12/16 14:57, Bruce Richardson wrote:
> > > > > > > On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
> > > > > > > > 2016-12-02 14:15, Fan Zhang:
> > > > > > > > > This patch provides the initial implementation of the
> > > > > > > > > scheduler poll mode driver using DPDK cryptodev framework.
> > > > > > > > > 
> > > > > > > > > Scheduler PMD is used to schedule and enqueue the crypto ops
> > > > > > > > > to the hardware and/or software crypto devices attached to
> > > > > > > > > it (slaves). The dequeue operation from the slave(s), and
> > > > > > > > > the possible dequeued crypto op reordering, are then carried
> > > out by the scheduler.
> > > > > > > > > 
> > > > > > > > > The scheduler PMD can be used to fill the throughput gap
> > > > > > > > > between the physical core and the existing cryptodevs to
> > > > > > > > > increase the overall performance. For example, if a physical
> > > > > > > > > core has higher crypto op processing rate than a cryptodev,
> > > > > > > > > the scheduler PMD can be introduced to attach more than one
> > > cryptodevs.
> > > > > > > > > 
> > > > > > > > > This initial implementation is limited to supporting the
> > > > > > > > > following scheduling modes:
> > > > > > > > > 
> > > > > > > > > - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst
> > > attached software
> > > > > > > > >     slave cryptodevs, to set this mode, the scheduler should
> > > have been
> > > > > > > > >     attached 1 or more software cryptodevs.
> > > > > > > > > 
> > > > > > > > > - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst
> > > attached hardware
> > > > > > > > >     slave cryptodevs (QAT), to set this mode, the scheduler
> > > should have
> > > > > > > > >     been attached 1 or more QATs.
> > > > > > > > 
> > > > > > > > Could it be implemented on top of the eventdev API?
> > > > > > > > 
> > > > > > > Not really. The eventdev API is for different types of
> > > > > > > scheduling between multiple sources that are all polling for
> > > > > > > packets, compared to this, which is more analogous - as I
> > > > > > > understand it - to the bonding PMD for ethdev.
> > > > > > > 
> > > > > > > To make something like this work with an eventdev API you would
> > > > > > > need to use one of the following models:
> > > > > > > * have worker cores for offloading packets to the different crypto
> > > > > > >   blocks pulling from the eventdev APIs. This would make it
> > > difficult to
> > > > > > >   do any "smart" scheduling of crypto operations between the
> > > blocks,
> > > > > > >   e.g. that one crypto instance may be better at certain types of
> > > > > > >   operations than another.
> > > > > > > * move the logic in this driver into an existing eventdev
> > > instance,
> > > > > > >   which uses the eventdev api rather than the crypto APIs and so
> > > has an
> > > > > > >   extra level of "structure abstraction" that has to be worked
> > > through.
> > > > > > >   It's just not really a good fit.
> > > > > > > 
> > > > > > > So for this workload, I believe the pseudo-cryptodev instance is
> > > > > > > the best way to go.
> > > > > > > 
> > > > > > > /Bruce
> > > > > > > 
> > > > > > 
> > > > > > 
> > > > > > As Bruce says this is much more analogous to the ethdev bonding
> > > > > > driver, the main idea is to allow different crypto op scheduling
> > > > > > mechanisms to be defined transparently to an application. This
> > > > > > could be load-balancing across multiple hw crypto devices, or
> > > > > > having a software crypto device to act as a backup device for a hw
> > > > > > accelerator if it becomes oversubscribed. I think the main
> > > > > > advantage of a crypto-scheduler approach is that the data path
> > > > > > of the application doesn't need to have any knowledge that
> > > > > > scheduling is happening at all, it is just using a different crypto
> > > device id, which then manages the distribution of crypto work.
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > This is a good deal like the bonding pmd, and so from a certain
> > > > > standpoint it makes sense to do this, but whereas the bonding pmd is
> > > > > meant to create a single path to a logical network over several
> > > > > physical networks, this pmd really only focuses on maximizing
> > > > > throughput, and for that we already have tools.  As Thomas mentions,
> > > > > there is the eventdev library, but from my view the distributor
> > > > > library already fits this bill.  It already is a basic framework to
> > > > > process mbufs in parallel according to whatever policy you want to
> > > implement, which sounds like exactly what the goal of this pmd is.
> > > > > 
> > > > > Neil
> > > > > 
> > > > > 
> > > > 
> > > > Hey Neil,
> > > > 
> > > > this is actually intended to act and look a good deal like the
> > > > ethernet bonding device, but for handling the crypto scheduling use cases.
> > > > 
> > > > For example, take the case where multiple hw accelerators may be
> > > available.
> > > > We want to provide user applications with a mechanism to transparently
> > > > balance work across all devices without having to manage the load
> > > > balancing details or the guaranteeing of ordering of the processed ops
> > > > on the dequeue_burst side. In this case the application would just use
> > > > the crypto dev_id of the scheduler and it would look after balancing
> > > > the workload across the available hw accelerators.
> > > > 
> > > > 
> > > > +-------------------+
> > > > |  Crypto Sch PMD   |
> > > > |                   |
> > > > | ORDERING / RR SCH |
> > > > +-------------------+
> > > >         ^ ^ ^
> > > >         | | |
> > > >       +-+ | +-------------------------------+
> > > >       |   +---------------+                 |
> > > >       |                   |                 |
> > > >       V                   V                 V
> > > > +---------------+ +---------------+ +---------------+
> > > > | Crypto HW PMD | | Crypto HW PMD | | Crypto HW PMD |
> > > > +---------------+ +---------------+ +---------------+
> > > > 
> > > > Another use case we hope to support is migration of processing from
> > > > one device to another where a hw and sw crypto pmd can be bound to the
> > > > same crypto scheduler and the crypto processing could be
> > > > transparently migrated from the hw to sw pmd. This would allow for hw
> > > > accelerators to be hot-plug attached/detached in a Guest VM.
> > > > 
> > > > +----------------+
> > > > | Crypto Sch PMD |
> > > > |                |
> > > > | MIGRATION SCH  |
> > > > +----------------+
> > > >       | |
> > > >       | +-----------------+
> > > >       |                   |
> > > >       V                   V
> > > > +---------------+ +---------------+
> > > > | Crypto HW PMD | | Crypto SW PMD |
> > > > |   (Active)    | |   (Inactive)  |
> > > > +---------------+ +---------------+
> > > > 
> > > > The main point is that this isn't envisaged as just a mechanism for
> > > > scheduling crypto workloads across multiple cores, but a framework
> > > > for allowing different scheduling mechanisms to be introduced, to
> > > > handle different crypto scheduling problems, and done so in a way
> > > > which is completely transparent to the data path of an application.
> > > > Like the eth bonding driver we want to support creating the crypto
> > > > scheduler from EAL options, which allow specification of the
> > > > scheduling mode and the crypto pmds which are to be bound to that crypto
> > > scheduler.
> > > > 
> > > > 
> > > I get what it's for, that much is pretty clear.  But whereas the bonding
> > > driver benefits from creating a single device interface for the purposes
> > > of properly routing traffic through the network stack without exposing
> > > that complexity to the using application, this pmd provides only
> > > aggregation according to various policies.  This is exactly what the
> > > distributor library was built for, and it seems like a re-invention of the
> > > wheel to ignore that.  At the very least, you should implement this pmd on
> > > top of the distributor library.  If that is impractical, then I somewhat
> > > question why we have the distributor library at all.
> > > 
> > > Neil
> > > 
> > 
> > Hi Neil,
> > 
> > The distributor library and the eventdev framework are not the solution here, as, firstly, the crypto devices are not cores, in the same way that ethdevs are not cores, and the distributor library is for evenly distributing work among cores. Sure, some crypto implementations may be software only, but many aren't, and those that are software still appear to software as a device that must be used as if it were a HW device. In the same way that to use distributor to load balance traffic between various TX ports is not a suitable solution - because you need to use cores to do the work "bridging" between the distributor/eventdev and the ethdev device - similarly here, if we distribute traffic using the distributor, you need cores to pull those packets from the distributor and offload them to the crypto devices. To use the distributor library in place of this vpmd, we'd need crypto devices which are aware of how to talk to the distributor, and use its protocols for pushing/pulling packets, or else we are pulling in extra core cycles to do bridging work.
> > 
> > Secondly, the distributor and eventdev libraries are designed for doing flow-based (generally atomic) packet distribution. Load balancing between crypto devices is not generally based on flows, but rather on other factors like packet size, offload cost per device, etc. To distributor/eventdev, all workers are equal, but for working with devices, for crypto offload or nic transmission, that is plainly not the case. In short, the distribution problems that are being solved by the distributor and eventdev libraries are fundamentally different from those being solved by this vpmd. They would be the wrong tool for the job.
> > 
> > I would agree with the previous statements that this driver is far closer in functionality to the bonded ethdev driver than anything else. It makes multiple devices appear as a single one while hiding the complexity of the multiple devices from the using application. In the same way as the bonded ethdev driver has different modes for active-backup, and for active-active for increased throughput, this vpmd for crypto can have the exact same modes - multiple active bonded devices for higher-performance operation, or two devices in active-backup to enable migration when using SR-IOV as described by Declan above.
> > 
> > Regards,
> > /Bruce
> > 
> 
> I think that having "scheduler" in the pmd name here may be somewhat of a
> loaded term and is muddying the waters of the problem we are trying to
> address. If we were to rename this to crypto_bond_pmd, it may make our
> intent for what we want this pmd to achieve clearer.
> 
> Neil, in most of the initial scheduling use cases we want to address with
> this pmd, we are looking to schedule within the context of a single lcore
> on multiple hw accelerators, or a mix of hw accelerators and sw pmds, and
> therefore using the distributor or the eventdev wouldn't add a lot of
> value.
> 
> Declan

Ok, these are fair points, and I'll concede to them.  That said, it still seems
like a waste to me to ignore the 80% functionality overlap to be had here.  That
is to say, the distributor library does a lot of work that both this pmd and the
bonding pmd could benefit from.  Perhaps it's worth looking at how to enhance the
distributor library such that worker tasks can be affined to a single cpu, and
the worker assignment can be used as indexed device assignment (the idea being
that a single worker task might represent multiple worker ids in the distributor
library).  That way such a crypto aggregator pmd or the bonding pmd's
implementation is little more than setting tags in mbufs according to appropriate
policy.

Neil

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v2] Scheduler: add driver for scheduler crypto pmd
  2016-12-02 14:15 [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd Fan Zhang
  2016-12-02 14:31 ` Thomas Monjalon
@ 2017-01-03 17:08 ` Fan Zhang
  2017-01-03 17:16 ` [dpdk-dev] [PATCH v3] " Fan Zhang
  2 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-03 17:08 UTC (permalink / raw)
  To: dev; +Cc: pablo.de.lara.guarch, declan.doherty, roy.fan.zhang

This patch provides the initial implementation of the scheduler poll mode
driver using DPDK cryptodev framework.

Scheduler PMD is used to schedule and enqueue the crypto ops to the
hardware and/or software crypto devices attached to it (slaves). The
dequeue operation from the slave(s), and the possible dequeued crypto op
reordering, are then carried out by the scheduler.

As the initial version, the scheduler PMD currently supports only the
round-robin mode, which distributes the enqueued burst of crypto ops
among its slaves in a round-robin manner. This mode may help to fill
the throughput gap between the physical core and the existing cryptodevs
to increase the overall performance. Moreover, the scheduler PMD
provides APIs for users to create their own schedulers, as sketched below.
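
As a rough illustration of those APIs, the following sketch shows how a
user-defined scheduler could be declared and loaded, based on the
structures introduced by rte_cryptodev_scheduler.h in this patch. All
"my_*" names are hypothetical and the callback bodies are omitted:

    /* hypothetical scheduler ops, matching the function pointer types
     * declared in rte_cryptodev_scheduler_operations.h below */
    static struct rte_cryptodev_scheduler_ops my_sched_ops = {
            .slave_attach = my_slave_attach,
            .slave_detach = my_slave_detach,
            .scheduler_start = my_scheduler_start,
            .scheduler_stop = my_scheduler_stop,
            .config_queue_pair = my_config_queue_pair,
            .create_private_ctx = my_create_private_ctx,
    };

    static struct rte_cryptodev_scheduler my_scheduler = {
            .name = "my-scheduler",
            .description = "example user-defined scheduler",
            .ops = &my_sched_ops,
    };

    /* the scheduler device must be stopped when this is called */
    rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
            &my_scheduler);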

Build instructions:
To build DPDK with CRYPTO_SCHEDULER_PMD the user is required to set
CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base

Notice:
- Scheduler PMD shares the same EAL command-line options as other
  cryptodevs. However, apart from socket_id, the rest of the cryptodev
  options are ignored. The scheduler PMD's max_nb_queue_pairs and
  max_nb_sessions options are set to the minimum of the attached slaves'
  values. For example, if a scheduler cryptodev has 2 cryptodevs attached
  with max_nb_queue_pairs of 2 and 8 respectively, the scheduler
  cryptodev's max_nb_queue_pairs will be automatically updated to 2.

- The scheduler cryptodev cannot be started unless the scheduling mode
  is set and at least one slave is attached. Also, to configure the
  scheduler at run-time, e.g. to attach/detach slave(s), change the
  scheduling mode, or enable/disable crypto op ordering, one should stop
  the scheduler first; otherwise an error will be returned.
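
For example, a typical run-time configuration sequence using the APIs
added by this patch could look as follows (the device IDs here are
hypothetical, and the mode set function name is spelled exactly as in
this patch):

    uint8_t scheduler_id = 0, qat_id = 1, aesni_mb_id = 2;

    /* configure while the scheduler device is stopped */
    rte_cryptodev_scheduler_slave_attach(scheduler_id, qat_id);
    rte_cryptodev_scheduler_slave_attach(scheduler_id, aesni_mb_id);
    rte_crpytodev_scheduler_mode_set(scheduler_id,
            CDEV_SCHED_MODE_ROUNDROBIN);
    rte_cryptodev_scheduler_ordering_set(scheduler_id, 1);

    /* now the scheduler device can be started */
    rte_cryptodev_start(scheduler_id);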

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_base                                 |  10 +-
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/scheduler/Makefile                  |  67 +++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 598 +++++++++++++++++++++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 183 +++++++
 .../scheduler/rte_cryptodev_scheduler_ioctls.h     |  92 ++++
 .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++
 .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 +
 drivers/crypto/scheduler/scheduler_pmd.c           | 168 ++++++
 drivers/crypto/scheduler/scheduler_pmd_ops.c       | 495 +++++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_private.h   | 122 +++++
 drivers/crypto/scheduler/scheduler_roundrobin.c    | 419 +++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |   4 +
 mk/rte.app.mk                                      |   3 +-
 14 files changed, 2242 insertions(+), 3 deletions(-)
 create mode 100644 drivers/crypto/scheduler/Makefile
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_ioctls.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
 create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h
 create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c

diff --git a/config/common_base b/config/common_base
index 4bff83a..a3783a6 100644
--- a/config/common_base
+++ b/config/common_base
@@ -358,7 +358,7 @@ CONFIG_RTE_CRYPTODEV_NAME_LEN=64
 #
 # Compile PMD for QuickAssist based devices
 #
-CONFIG_RTE_LIBRTE_PMD_QAT=n
+CONFIG_RTE_LIBRTE_PMD_QAT=y
 CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_INIT=n
 CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_RX=n
@@ -372,7 +372,7 @@ CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
 #
 # Compile PMD for AESNI backed device
 #
-CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
+CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y
 CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n
 
 #
@@ -400,6 +400,12 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
 
 #
+# Compile PMD for Crypto Scheduler device
+#
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
+
+#
 # Compile PMD for ZUC device
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 745c614..cdd3c94 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -38,6 +38,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/scheduler/Makefile b/drivers/crypto/scheduler/Makefile
new file mode 100644
index 0000000..976a565
--- /dev/null
+++ b/drivers/crypto/scheduler/Makefile
@@ -0,0 +1,67 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_crypto_scheduler.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_crypto_scheduler_version.map
+
+#
+# Export include files
+#
+SYMLINK-y-include += rte_cryptodev_scheduler_ioctls.h
+SYMLINK-y-include += rte_cryptodev_scheduler_operations.h
+SYMLINK-y-include += rte_cryptodev_scheduler.h
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += rte_cryptodev_scheduler.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_roundrobin.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_ring
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_reorder
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
new file mode 100644
index 0000000..d2d068c
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -0,0 +1,598 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_jhash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_cryptodev_scheduler.h>
+#include <rte_malloc.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Update the scheduler pmd's capabilities with the attaching device's
+ *  capabilities.
+ *  For each device to be attached, the scheduler's capabilities should be
+ *  the common capability set of all slaves.
+ **/
+static uint32_t
+sync_caps(struct rte_cryptodev_capabilities *caps,
+		uint32_t nb_caps,
+		const struct rte_cryptodev_capabilities *slave_caps)
+{
+	uint32_t sync_nb_caps = nb_caps, nb_slave_caps = 0;
+	uint32_t i;
+
+	while (slave_caps[nb_slave_caps].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
+		nb_slave_caps++;
+
+	if (nb_caps == 0) {
+		rte_memcpy(caps, slave_caps, sizeof(*caps) * nb_slave_caps);
+		return nb_slave_caps;
+	}
+
+	for (i = 0; i < sync_nb_caps; i++) {
+		struct rte_cryptodev_capabilities *cap = &caps[i];
+		uint32_t j;
+
+		for (j = 0; j < nb_slave_caps; j++) {
+			const struct rte_cryptodev_capabilities *s_cap =
+					&slave_caps[j];
+
+			if (s_cap->op != cap->op || s_cap->sym.xform_type !=
+					cap->sym.xform_type)
+				continue;
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_AUTH) {
+				if (s_cap->sym.auth.algo !=
+						cap->sym.auth.algo)
+					continue;
+
+				cap->sym.auth.digest_size.min =
+					s_cap->sym.auth.digest_size.min <
+					cap->sym.auth.digest_size.min ?
+					s_cap->sym.auth.digest_size.min :
+					cap->sym.auth.digest_size.min;
+				cap->sym.auth.digest_size.max =
+					s_cap->sym.auth.digest_size.max <
+					cap->sym.auth.digest_size.max ?
+					s_cap->sym.auth.digest_size.max :
+					cap->sym.auth.digest_size.max;
+
+			}
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				if (s_cap->sym.cipher.algo !=
+						cap->sym.cipher.algo)
+					continue;
+
+			/* no common cap found */
+			break;
+		}
+
+		if (j < nb_slave_caps)
+			continue;
+
+		/* remove an uncommon cap from the array */
+		for (j = i; j < sync_nb_caps - 1; j++)
+			rte_memcpy(&caps[j], &caps[j+1], sizeof(*cap));
+
+		memset(&caps[sync_nb_caps - 1], 0, sizeof(*cap));
+		sync_nb_caps--;
+	}
+
+	return sync_nb_caps;
+}
+
+static int
+update_scheduler_capability(struct scheduler_ctx *sched_ctx)
+{
+	struct rte_cryptodev_capabilities tmp_caps[256] = {0};
+	uint32_t nb_caps = 0, i;
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		nb_caps = sync_caps(tmp_caps, nb_caps, dev_info.capabilities);
+		if (nb_caps == 0)
+			return -1;
+	}
+
+	sched_ctx->capabilities = rte_zmalloc_socket(NULL,
+			sizeof(struct rte_cryptodev_capabilities) *
+			(nb_caps + 1), 0, SOCKET_ID_ANY);
+	if (!sched_ctx->capabilities)
+		return -ENOMEM;
+
+	rte_memcpy(sched_ctx->capabilities, tmp_caps,
+			sizeof(struct rte_cryptodev_capabilities) * nb_caps);
+
+	return 0;
+}
+
+static void
+update_scheduler_feature_flag(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	dev->feature_flags = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		dev->feature_flags |= dev_info.feature_flags;
+	}
+}
+
+static void
+update_max_nb_qp(struct scheduler_ctx *sched_ctx)
+{
+	uint32_t i;
+	uint32_t max_nb_qp;
+
+	if (!sched_ctx->nb_slaves)
+		return;
+
+	max_nb_qp = sched_ctx->nb_slaves ? UINT32_MAX : 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+		max_nb_qp = dev_info.max_nb_queue_pairs < max_nb_qp ?
+				dev_info.max_nb_queue_pairs : max_nb_qp;
+	}
+
+	sched_ctx->max_nb_queue_pairs = max_nb_qp;
+}
+
+/** Attach a device to the scheduler. */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	struct scheduler_slave *slave;
+	struct rte_cryptodev_info dev_info;
+	uint32_t i;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+	if (sched_ctx->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves attached");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++)
+		if (sched_ctx->slaves[i].dev_id == slave_id) {
+			CS_LOG_ERR("Slave already added");
+			return -ENOTSUP;
+		}
+
+	slave = &sched_ctx->slaves[sched_ctx->nb_slaves];
+
+	rte_cryptodev_info_get(slave_id, &dev_info);
+
+	slave->dev_id = slave_id;
+	slave->dev_type = dev_info.dev_type;
+	sched_ctx->nb_slaves++;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		slave->dev_id = 0;
+		slave->dev_type = 0;
+		sched_ctx->nb_slaves--;
+
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	uint32_t i, slave_pos;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	for (slave_pos = 0; slave_pos < sched_ctx->nb_slaves; slave_pos++)
+		if (sched_ctx->slaves[slave_pos].dev_id == slave_id)
+			break;
+	if (slave_pos == sched_ctx->nb_slaves) {
+		CS_LOG_ERR("Cannot find slave");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->ops.slave_detach(dev, slave_id) < 0) {
+		CS_LOG_ERR("Failed to detach slave");
+		return -ENOTSUP;
+	}
+
+	for (i = slave_pos; i < sched_ctx->nb_slaves - 1; i++) {
+		memcpy(&sched_ctx->slaves[i], &sched_ctx->slaves[i+1],
+				sizeof(struct scheduler_slave));
+	}
+	memset(&sched_ctx->slaves[sched_ctx->nb_slaves - 1], 0,
+			sizeof(struct scheduler_slave));
+	sched_ctx->nb_slaves--;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	int ret;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (mode == sched_ctx->mode && mode != CDEV_SCHED_MODE_USERDEFINED)
+		return 0;
+
+	switch (mode) {
+	case CDEV_SCHED_MODE_ROUNDROBIN:
+		if (rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
+				roundrobin_scheduler) < 0) {
+			CS_LOG_ERR("Failed to load scheduler");
+			return -1;
+		}
+		break;
+	case CDEV_SCHED_MODE_MIGRATION:
+	case CDEV_SCHED_MODE_FALLBACK:
+	default:
+		CS_LOG_ERR("Not yet supported");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	ret = (*sched_ctx->ops.create_private_ctx)(dev);
+	if (ret < 0) {
+		CS_LOG_ERR("Unable to create scheduler private context");
+		return ret;
+	}
+
+	sched_ctx->mode = mode;
+
+	return 0;
+}
+
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return sched_ctx->mode;
+}
+
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	sched_ctx->reordering_enabled = enable_reorder;
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return (int)sched_ctx->reordering_enabled;
+}
+
+int
+rte_cryptodev_scheduler_ioctl(uint8_t scheduler_id, uint16_t ioctl_id,
+		void *ioctl_param) {
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (ioctl_id >= sched_ctx->ioctl_count) {
+		CS_LOG_ERR("Invalid IOCTL ID");
+		return -EINVAL;
+	}
+
+	return (*(sched_ctx->ioctls[ioctl_id]->ioctl))(ioctl_param);
+}
+
+int
+rte_cryptodev_scheduler_ioctl_count(uint8_t scheduler_id) {
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return sched_ctx->ioctl_count;
+}
+
+int
+rte_cryptodev_scheduler_ioctl_list(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler_ioctl_description **ioctls_desc,
+		uint16_t nb_ioctls)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	uint32_t i;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (nb_ioctls > sched_ctx->ioctl_count) {
+		CS_LOG_ERR("Invalid IOCTL number");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < nb_ioctls; i++) {
+		ioctls_desc[i]->request_id = sched_ctx->ioctls[i]->id;
+		ioctls_desc[i]->name = sched_ctx->ioctls[i]->name;
+		ioctls_desc[i]->description = sched_ctx->ioctls[i]->description;
+	}
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler) {
+
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	size_t size;
+
+	/* check device stopped */
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Device should be stopped before loading scheduler");
+		return -EBUSY;
+	}
+
+	strncpy(sched_ctx->name, scheduler->name,
+			RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
+	strncpy(sched_ctx->description, scheduler->description,
+			RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
+
+	/* load scheduler instance ioctls */
+	if (sched_ctx->ioctls)
+		rte_free(sched_ctx->ioctls);
+	if (scheduler->nb_ioctls) {
+		size = sizeof(struct rte_cryptodev_scheduler_ioctl) *
+				scheduler->nb_ioctls;
+		sched_ctx->ioctls = rte_zmalloc_socket(NULL, size, 0,
+				SOCKET_ID_ANY);
+		if (!sched_ctx->ioctls) {
+			CS_LOG_ERR("Failed to allocate memory");
+			return -ENOMEM;
+		}
+	}
+
+
+	for (i = 0; i < scheduler->nb_ioctls; i++) {
+		struct rte_cryptodev_scheduler_ioctl *ioctl =
+				sched_ctx->ioctls[scheduler->ioctls[i]->id];
+
+		strncpy(ioctl->name, scheduler->ioctls[i]->name,
+				RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
+		strncpy(ioctl->description, scheduler->ioctls[i]->description,
+				RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
+		ioctl->ioctl = scheduler->ioctls[i]->ioctl;
+	}
+
+	sched_ctx->ioctl_count = scheduler->nb_ioctls;
+
+	/* load scheduler instance options */
+	if (sched_ctx->options)
+		rte_free(sched_ctx->options);
+	if (scheduler->nb_options) {
+		size = sizeof(struct rte_cryptodev_scheduler_option) *
+				scheduler->nb_options;
+		sched_ctx->options = rte_zmalloc_socket(NULL, size, 0,
+				SOCKET_ID_ANY);
+		if (!sched_ctx->options) {
+			CS_LOG_ERR("Failed to allocate memory");
+			return -ENOMEM;
+		}
+	}
+
+	for (i = 0; i < scheduler->nb_options; i++) {
+		struct rte_cryptodev_scheduler_option *option =
+				sched_ctx->options[i];
+
+		strncpy(option->name, scheduler->options[i]->name,
+				RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
+		strncpy(option->description, scheduler->options[i]->description,
+				RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
+		option->option_parser = scheduler->options[i]->option_parser;
+	}
+	sched_ctx->nb_options = scheduler->nb_options;
+
+	/* load scheduler instance operations functions */
+	sched_ctx->ops.config_queue_pair = scheduler->ops->config_queue_pair;
+	sched_ctx->ops.create_private_ctx = scheduler->ops->create_private_ctx;
+	sched_ctx->ops.scheduler_start = scheduler->ops->scheduler_start;
+	sched_ctx->ops.scheduler_stop = scheduler->ops->scheduler_stop;
+	sched_ctx->ops.slave_attach = scheduler->ops->slave_attach;
+	sched_ctx->ops.slave_detach = scheduler->ops->slave_detach;
+
+	return 0;
+}
+
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
new file mode 100644
index 0000000..ee5eeb4
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
@@ -0,0 +1,183 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_H
+#define _RTE_CRYPTO_SCHEDULER_H
+
+#include <rte_cryptodev_scheduler_ioctls.h>
+#include <rte_cryptodev_scheduler_operations.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Crypto scheduler PMD operation modes
+ */
+enum rte_cryptodev_scheduler_mode {
+	CDEV_SCHED_MODE_NOT_SET = 0,
+	CDEV_SCHED_MODE_USERDEFINED,
+	CDEV_SCHED_MODE_ROUNDROBIN,
+	CDEV_SCHED_MODE_MIGRATION,
+	CDEV_SCHED_MODE_FALLBACK,
+	CDEV_SCHED_MODE_MULTICORE,
+
+	CDEV_SCHED_MODE_COUNT /* number of modes */
+};
+
+#define RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN	(64)
+#define RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN	(256)
+
+struct rte_cryptodev_scheduler;
+
+/**
+ * Load a user defined scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	scheduler	Pointer to the user defined scheduler
+ *
+ * @return
+ *	0 if loading successful, negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler);
+
+/**
+ * Attach a pre-configured crypto device to the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	Crypto device ID to be attached
+ *
+ * @return
+ *	0 if attaching successful, negative int if otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Detach an attached crypto device from the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	Crypto device ID to be detached
+ *
+ * @return
+ *	0 if detaching successful, negative int if otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Set the scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	mode		The scheduling mode
+ *
+ * @return
+ *	0 if setting successful, negative integer if otherwise.
+ */
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode);
+
+/**
+ * Get the current scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @return	The current scheduling mode
+ */
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id);
+
+/**
+ * Set the crypto ops reordering feature on/off
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	enable_reorder	Set the crypto op reordering feature
+ *				0: disable reordering
+ *				1: enable reordering
+ *
+ * @return
+ *	0 if setting successful, negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder);
+
+/**
+ * Get the current crypto ops reordering feature
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	0 if reordering is disabled
+ *	1 if reordering is enabled
+ *	negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id);
+
+typedef int (*rte_cryptodev_scheduler_option_parser)(
+		const char *key, const char *value, void *extra_args);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_enqueue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_dequeue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+struct rte_cryptodev_scheduler_option {
+	char name[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN];
+
+	rte_cryptodev_scheduler_option_parser option_parser;
+};
+
+struct rte_cryptodev_scheduler {
+	const char *name;
+	const char *description;
+	struct rte_cryptodev_scheduler_option **options;
+	unsigned nb_options;
+
+	struct rte_cryptodev_scheduler_ioctl **ioctls;
+	unsigned nb_ioctls;
+
+	struct rte_cryptodev_scheduler_ops *ops;
+};
+
+extern struct rte_cryptodev_scheduler *roundrobin_scheduler;
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_H */
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_ioctls.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_ioctls.h
new file mode 100644
index 0000000..c19a9d3
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_ioctls.h
@@ -0,0 +1,92 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _RTE_CRYPTODEV_SCHEDULER_IOCTLS
+#define _RTE_CRYPTODEV_SCHEDULER_IOCTLS
+
+#include <rte_cryptodev_scheduler.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_CRYPTODEV_SCHEDULER_IOCTL_NAME_MAX_LEN	(64)
+#define RTE_CRYPTODEV_SCHEDULER_IOCTL_DESC_MAX_LEN	(256)
+
+#define RTE_CRYPTODEV_SCHEDULER_MAX_NB_IOCTLS	(8)
+
+#define CDEV_SCHED_IOCTL_LIVE_MIGRATION_SCHED_STATE_GET		(1)
+#define CDEV_SCHED_IOCTL_LIVE_MIGRATION_SCHED_MIGRATE		(2)
+#define CDEV_SCHED_IOCTL_FALLBACK_SCHED_SET_PRIMARY		(3)
+
+struct ioctl_migration_scheduler_state_get {
+	uint8_t slave_id;
+	/**< Active crypto device id */
+	enum migration_scheduler_state {
+		MIGRATION_SCHEDULER_STATE_ACTIVE,
+		MIGRATION_SCHEDULER_STATE_AWAITING_MIGRATE,
+		MIGRATION_SCHEDULER_STATE_MIGRATION
+	} state;
+	/**< Migration Scheduler State */
+};
+
+int
+rte_cryptodev_scheduler_ioctl(uint8_t scheduler_id, uint16_t request_id,
+		void *request_params);
+
+int
+rte_cryptodev_scheduler_ioctl_count(uint8_t scheduler_id);
+
+struct rte_cryptodev_scheduler_ioctl_description {
+	uint16_t request_id;
+	const char *name;
+	const char *description;
+};
+
+int
+rte_cryptodev_scheduler_ioctl_list(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler_ioctl_description **ioctls_desc,
+		uint16_t nb_ioctls);
+
+typedef int (*rte_cryptodev_scheduler_ioctl_fn)(void *request_params);
+
+struct rte_cryptodev_scheduler_ioctl {
+	int id;
+	char name[RTE_CRYPTODEV_SCHEDULER_IOCTL_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_IOCTL_DESC_MAX_LEN];
+
+	rte_cryptodev_scheduler_ioctl_fn ioctl;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTODEV_SCHEDULER_IOCTLS */
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
new file mode 100644
index 0000000..ab8595b
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
@@ -0,0 +1,71 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+#define _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+#include <rte_cryptodev.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int (*rte_cryptodev_scheduler_slave_attach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+typedef int (*rte_cryptodev_scheduler_slave_detach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+
+typedef int (*rte_cryptodev_scheduler_start_t)(struct rte_cryptodev *dev);
+typedef int (*rte_cryptodev_scheduler_stop_t)(struct rte_cryptodev *dev);
+
+typedef int (*rte_cryptodev_scheduler_config_queue_pair)(
+		struct rte_cryptodev *dev, uint16_t qp_id);
+
+typedef int (*rte_cryptodev_scheduler_create_private_ctx)(
+		struct rte_cryptodev *dev);
+
+struct rte_cryptodev_scheduler_ops {
+	rte_cryptodev_scheduler_slave_attach_t slave_attach;
+	rte_cryptodev_scheduler_slave_detach_t slave_detach;
+
+	rte_cryptodev_scheduler_start_t scheduler_start;
+	rte_cryptodev_scheduler_stop_t scheduler_stop;
+
+	rte_cryptodev_scheduler_config_queue_pair config_queue_pair;
+
+	rte_cryptodev_scheduler_create_private_ctx create_private_ctx;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_OPERATIONS_H */
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
new file mode 100644
index 0000000..0510f68
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -0,0 +1,12 @@
+DPDK_17.02 {
+	global:
+
+	rte_cryptodev_scheduler_load_user_scheduler;
+	rte_cryptodev_scheduler_slave_attach;
+	rte_cryptodev_scheduler_slave_detach;
+	rte_crpytodev_scheduler_mode_set;
+	rte_crpytodev_scheduler_mode_get;
+	rte_cryptodev_scheduler_ordering_set;
+	rte_cryptodev_scheduler_ordering_get;
+
+} DPDK_17.02;
\ No newline at end of file
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
new file mode 100644
index 0000000..0c13b55
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -0,0 +1,168 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_vdev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+#include "scheduler_pmd_private.h"
+
+static uint16_t
+scheduler_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_enqueue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static uint16_t
+scheduler_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_dequeue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static uint32_t unique_name_id;
+
+static int
+cryptodev_scheduler_create(const char *name,
+	struct rte_crypto_vdev_init_params *init_params)
+{
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (snprintf(crypto_dev_name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%u",
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD), unique_name_id++) < 0) {
+		CS_LOG_ERR("driver %s: failed to create unique cryptodev "
+			"name", name);
+		return -EFAULT;
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct scheduler_ctx),
+			init_params->socket_id);
+	if (dev == NULL) {
+		CS_LOG_ERR("driver %s: failed to create cryptodev vdev",
+			name);
+		return -EFAULT;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+	dev->dev_ops = rte_crypto_scheduler_pmd_ops;
+
+	dev->enqueue_burst = scheduler_enqueue_burst;
+	dev->dequeue_burst = scheduler_dequeue_burst;
+
+	sched_ctx = dev->data->dev_private;
+	sched_ctx->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
+
+	return 0;
+}
+
+static int
+cryptodev_scheduler_remove(const char *name)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	dev = rte_cryptodev_pmd_get_named_dev(name);
+	if (dev == NULL)
+		return -EINVAL;
+
+	sched_ctx = dev->data->dev_private;
+
+	if (sched_ctx->nb_slaves) {
+		uint32_t i;
+
+		for (i = 0; i < sched_ctx->nb_slaves; i++)
+			rte_cryptodev_scheduler_slave_detach(dev->data->dev_id,
+					sched_ctx->slaves[i].dev_id);
+	}
+
+	RTE_LOG(INFO, PMD, "Closing Crypto Scheduler device %s on numa "
+		"socket %u\n", name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+cryptodev_scheduler_probe(const char *name, const char *input_args)
+{
+	struct rte_crypto_vdev_init_params init_params = {
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+		rte_socket_id()
+	};
+
+	rte_cryptodev_parse_vdev_init_params(&init_params, input_args);
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.max_nb_sessions);
+
+	return cryptodev_scheduler_create(name, &init_params);
+}
+
+static struct rte_vdev_driver cryptodev_scheduler_pmd_drv = {
+	.probe = cryptodev_scheduler_probe,
+	.remove = cryptodev_scheduler_remove
+};
+
+RTE_PMD_REGISTER_VDEV(CRYPTODEV_NAME_SCHEDULER_PMD,
+	cryptodev_scheduler_pmd_drv);
+RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
+	"max_nb_queue_pairs=<int> "
+	"max_nb_sessions=<int> "
+	"socket_id=<int>");
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
new file mode 100644
index 0000000..972a355
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -0,0 +1,495 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_config.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_reorder.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Configure device */
+static int
+scheduler_pmd_config(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+	int ret = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_configure)(slave_dev);
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
+static int
+update_reorder_buff(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (sched_ctx->reordering_enabled) {
+		char reorder_buff_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+		uint32_t buff_size = sched_ctx->nb_slaves * PER_SLAVE_BUFF_SIZE;
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (!buff_size)
+			return 0;
+
+		if (snprintf(reorder_buff_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"%s_rb_%u_%u", RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
+			dev->data->dev_id, qp_id) < 0) {
+			CS_LOG_ERR("failed to create unique reorder buffer "
+					"name");
+			return -ENOMEM;
+		}
+
+		qp_ctx->reorder_buf = rte_reorder_create(reorder_buff_name,
+				rte_socket_id(), buff_size);
+		if (!qp_ctx->reorder_buf) {
+			CS_LOG_ERR("failed to create reorder buffer");
+			return -ENOMEM;
+		}
+	} else {
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+	}
+
+	return 0;
+}
+
+/** Start device */
+static int
+scheduler_pmd_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	if (dev->data->dev_started)
+		return 0;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = update_reorder_buff(dev, i);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to update reorder buffer");
+			return ret;
+		}
+	}
+
+	if (sched_ctx->mode == CDEV_SCHED_MODE_NOT_SET) {
+		CS_LOG_ERR("Scheduler mode is not set");
+		return -1;
+	}
+
+	if (!sched_ctx->nb_slaves) {
+		CS_LOG_ERR("No slave in the scheduler");
+		return -1;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.slave_attach, -ENOTSUP);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if ((*sched_ctx->ops.slave_attach)(dev, slave_dev_id) < 0) {
+			CS_LOG_ERR("Failed to attach slave");
+			return -ENOTSUP;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.scheduler_start, -ENOTSUP);
+
+	if ((*sched_ctx->ops.scheduler_start)(dev) < 0) {
+		CS_LOG_ERR("Scheduler start failed");
+		return -1;
+	}
+
+	/* start all slaves */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_start)(slave_dev);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to start slave dev %u",
+					slave_dev_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/** Stop device */
+static void
+scheduler_pmd_stop(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+
+	if (!dev->data->dev_started)
+		return;
+
+	/* stop all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->dev_stop)(slave_dev);
+	}
+
+	if (*sched_ctx->ops.scheduler_stop)
+		(*sched_ctx->ops.scheduler_stop)(dev);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if (*sched_ctx->ops.slave_detach)
+			(*sched_ctx->ops.slave_detach)(dev, slave_dev_id);
+	}
+}
+
+/** Close device */
+static int
+scheduler_pmd_close(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+	int ret;
+
+	/* the dev should be stopped before being closed */
+	if (dev->data->dev_started)
+		return -EBUSY;
+
+	/* close all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_close)(slave_dev);
+		if (ret < 0)
+			return ret;
+	}
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (qp_ctx->private_qp_ctx) {
+			rte_free(qp_ctx->private_qp_ctx);
+			qp_ctx->private_qp_ctx = NULL;
+		}
+	}
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	if (sched_ctx->ioctls)
+		rte_free(sched_ctx->ioctls);
+
+	if (sched_ctx->options)
+		rte_free(sched_ctx->options);
+
+	return 0;
+}
+
+/** Get device statistics */
+static void
+scheduler_pmd_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+		struct rte_cryptodev_stats slave_stats = {0};
+
+		(*slave_dev->dev_ops->stats_get)(slave_dev, &slave_stats);
+
+		stats->enqueued_count += slave_stats.enqueued_count;
+		stats->dequeued_count += slave_stats.dequeued_count;
+
+		stats->enqueue_err_count += slave_stats.enqueue_err_count;
+		stats->dequeue_err_count += slave_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+scheduler_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->stats_reset)(slave_dev);
+	}
+}
+
+/** Get device info */
+static void
+scheduler_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned max_nb_sessions = sched_ctx->nb_slaves ? UINT32_MAX : 0;
+	unsigned i;
+
+	if (!dev_info)
+		return;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev_info slave_info;
+
+		rte_cryptodev_info_get(slave_dev_id, &slave_info);
+		max_nb_sessions = slave_info.sym.max_nb_sessions <
+				max_nb_sessions ?
+				slave_info.sym.max_nb_sessions :
+				max_nb_sessions;
+	}
+
+	dev_info->dev_type = dev->dev_type;
+	dev_info->feature_flags = dev->feature_flags;
+	dev_info->capabilities = sched_ctx->capabilities;
+	dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
+	dev_info->sym.max_nb_sessions = max_nb_sessions;
+}
+
+/** Release queue pair */
+static int
+scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (!qp_ctx)
+		return 0;
+
+	if (qp_ctx->reorder_buf)
+		rte_reorder_free(qp_ctx->reorder_buf);
+	if (qp_ctx->private_qp_ctx)
+		rte_free(qp_ctx->private_qp_ctx);
+
+	rte_free(qp_ctx);
+	dev->data->queue_pairs[qp_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	__rte_unused const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx;
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	if (snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"CRYPTO_SCHE PMD %u QP %u",
+			dev->data->dev_id, qp_id) < 0) {
+		CS_LOG_ERR("Failed to create unique queue pair name");
+		return -EFAULT;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		scheduler_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp_ctx = rte_zmalloc_socket(name, sizeof(*qp_ctx), RTE_CACHE_LINE_SIZE,
+			socket_id);
+	if (qp_ctx == NULL)
+		return -ENOMEM;
+
+	dev->data->queue_pairs[qp_id] = qp_ctx;
+
+	if (*sched_ctx->ops.config_queue_pair) {
+		if ((*sched_ctx->ops.config_queue_pair)(dev, qp_id) < 0) {
+			CS_LOG_ERR("Unable to configure queue pair");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/** Start queue pair */
+static int
+scheduler_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+scheduler_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+scheduler_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+static unsigned
+scheduler_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct scheduler_session);
+}
+
+static int
+config_slave_sess(struct scheduler_ctx *sched_ctx,
+		struct rte_crypto_sym_xform *xform,
+		struct scheduler_session *sess,
+		uint32_t create)
+{
+	unsigned i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct scheduler_slave *slave = &sched_ctx->slaves[i];
+		struct rte_cryptodev *dev = &rte_cryptodev_globals->
+				devs[slave->dev_id];
+
+		if (sess->sessions[i]) {
+			if (create)
+				continue;
+			/* !create */
+			(*dev->dev_ops->session_clear)(dev,
+					(void *)sess->sessions[i]);
+			sess->sessions[i] = NULL;
+		} else {
+			if (!create)
+				continue;
+			/* create */
+			sess->sessions[i] =
+					rte_cryptodev_sym_session_create(
+							slave->dev_id, xform);
+			if (!sess->sessions[i]) {
+				config_slave_sess(sched_ctx, NULL, sess, 0);
+				return -1;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+scheduler_pmd_session_clear(struct rte_cryptodev *dev,
+	void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	config_slave_sess(sched_ctx, NULL, sess, 0);
+
+	memset(sess, 0, sizeof(struct scheduler_session));
+}
+
+static void *
+scheduler_pmd_session_configure(struct rte_cryptodev *dev,
+	struct rte_crypto_sym_xform *xform, void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	if (config_slave_sess(sched_ctx, xform, sess, 1) < 0) {
+		CS_LOG_ERR("unable to configure sym session");
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_ops scheduler_pmd_ops = {
+		.dev_configure		= scheduler_pmd_config,
+		.dev_start		= scheduler_pmd_start,
+		.dev_stop		= scheduler_pmd_stop,
+		.dev_close		= scheduler_pmd_close,
+
+		.stats_get		= scheduler_pmd_stats_get,
+		.stats_reset		= scheduler_pmd_stats_reset,
+
+		.dev_infos_get		= scheduler_pmd_info_get,
+
+		.queue_pair_setup	= scheduler_pmd_qp_setup,
+		.queue_pair_release	= scheduler_pmd_qp_release,
+		.queue_pair_start	= scheduler_pmd_qp_start,
+		.queue_pair_stop	= scheduler_pmd_qp_stop,
+		.queue_pair_count	= scheduler_pmd_qp_count,
+
+		.session_get_size	= scheduler_pmd_session_get_size,
+		.session_configure	= scheduler_pmd_session_configure,
+		.session_clear		= scheduler_pmd_session_clear,
+};
+
+struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
new file mode 100644
index 0000000..550fdcc
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -0,0 +1,122 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _SCHEDULER_PMD_PRIVATE_H
+#define _SCHEDULER_PMD_PRIVATE_H
+
+#include <rte_hash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+#include <rte_cryptodev_scheduler_ioctls.h>
+
+/** Maximum number of slave devices attached to one scheduler device */
+#ifndef MAX_SLAVES_NUM
+#define MAX_SLAVES_NUM				(8)
+#endif
+
+#define PER_SLAVE_BUFF_SIZE			(256)
+
+#define CS_LOG_ERR(fmt, args...)					\
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",		\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTO_SCHEDULER_DEBUG
+#define CS_LOG_INFO(fmt, args...)					\
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#define CS_LOG_DBG(fmt, args...)					\
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+#else
+#define CS_LOG_INFO(fmt, args...)
+#define CS_LOG_DBG(fmt, args...)
+#endif
+
+struct scheduler_slave {
+	uint8_t dev_id;
+	uint16_t qp_id;
+	uint32_t nb_inflight_cops;
+
+	enum rte_cryptodev_type dev_type;
+};
+
+struct scheduler_ctx {
+	void *private_ctx;
+	/**< private scheduler context pointer */
+
+	struct rte_cryptodev_capabilities *capabilities;
+	unsigned nb_capabilities;
+
+	unsigned max_nb_queue_pairs;
+
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	unsigned nb_slaves;
+
+	enum rte_cryptodev_scheduler_mode mode;
+
+	uint32_t ioctl_count;
+	struct rte_cryptodev_scheduler_ioctl **ioctls;
+
+	uint32_t nb_options;
+	struct rte_cryptodev_scheduler_option **options;
+
+	struct rte_cryptodev_scheduler_ops ops;
+
+	uint8_t reordering_enabled;
+
+	char name[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN];
+} __rte_cache_aligned;
+
+struct scheduler_qp_ctx {
+	void *private_qp_ctx;
+
+	rte_cryptodev_scheduler_burst_enqueue_t schedule_enqueue;
+	rte_cryptodev_scheduler_burst_dequeue_t schedule_dequeue;
+
+	struct rte_reorder_buffer *reorder_buf;
+	uint32_t seqn;
+} __rte_cache_aligned;
+
+struct scheduler_session {
+	struct rte_cryptodev_sym_session *sessions[MAX_SLAVES_NUM];
+};
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
+
+#endif /* _SCHEDULER_PMD_PRIVATE_H */
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
new file mode 100644
index 0000000..be0b7fd
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -0,0 +1,419 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_scheduler_operations.h>
+
+#include "scheduler_pmd_private.h"
+
+struct roundrobin_scheduler_ctx {
+};
+
+struct rr_scheduler_qp_ctx {
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	unsigned nb_slaves;
+
+	unsigned last_enq_slave_idx;
+	unsigned last_deq_slave_idx;
+};
+
+static uint16_t
+schedule_enqueue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++)
+		rte_prefetch0(ops[i]->sym->session);
+
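+	/*
+	 * Swap each op's scheduler session for the session created on the
+	 * chosen slave, 4 ops per iteration, before handing the whole
+	 * burst to that slave.
+	 */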
+	for (i = 0; i < nb_ops - 8; i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->session);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_enq_slave_idx = 0;
+
+	return processed_ops;
+}
+
+static uint16_t
+schedule_enqueue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			gen_qp_ctx->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++) {
+		rte_prefetch0(ops[i]->sym->session);
+		rte_prefetch0(ops[i]->sym->m_src);
+	}
+
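+	/*
+	 * As in schedule_enqueue(), swap in the slave's session; in
+	 * addition, tag each source mbuf with an ascending sequence
+	 * number, which the reorder buffer uses on dequeue.
+	 */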
+	for (i = 0; i < nb_ops - 8; i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 1]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 2]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+		ops[i + 3]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 4]->sym->m_src);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->m_src);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->m_src);
+		rte_prefetch0(ops[i + 7]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_enq_slave_idx = 0;
+
+	return processed_ops;
+}
+
+
+static uint16_t
+schedule_dequeue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	struct scheduler_slave *slave;
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t nb_deq_ops;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	last_slave_idx += 1;
+	if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+		last_slave_idx = 0;
+
+	rr_qp_ctx->last_deq_slave_idx = last_slave_idx;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	return nb_deq_ops;
+}
+
+static uint16_t
+schedule_dequeue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = (struct scheduler_qp_ctx *)qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx = (gen_qp_ctx->private_qp_ctx);
+	struct scheduler_slave *slave;
+	struct rte_reorder_buffer *reorder_buff = gen_qp_ctx->reorder_buf;
+	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	uint16_t nb_deq_ops, nb_drained_mbufs;
+	const uint16_t nb_op_ops = nb_ops;
+	struct rte_crypto_op *op_ops[nb_op_ops];
+	struct rte_mbuf *reorder_mbufs[nb_op_ops];
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t i;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, op_ops, nb_ops);
+
+	rr_qp_ctx->last_deq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_deq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	for (i = 0; i < nb_deq_ops && i < 4; i++)
+		rte_prefetch0(op_ops[i]->sym->m_src);
+
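+	/*
+	 * Stash each op pointer at the start of its source mbuf's buffer
+	 * (buf_addr), so the op can be recovered after the mbuf has passed
+	 * through the reorder buffer in sequence-number order.
+	 */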
+	for (i = 0; i < nb_deq_ops - 8; i += 4) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		mbuf1 = op_ops[i + 1]->sym->m_src;
+		mbuf2 = op_ops[i + 2]->sym->m_src;
+		mbuf3 = op_ops[i + 3]->sym->m_src;
+
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+		rte_memcpy(mbuf1->buf_addr, &op_ops[i+1], sizeof(op_ops[i+1]));
+		rte_memcpy(mbuf2->buf_addr, &op_ops[i+2], sizeof(op_ops[i+2]));
+		rte_memcpy(mbuf3->buf_addr, &op_ops[i+3], sizeof(op_ops[i+3]));
+
+		rte_reorder_insert(reorder_buff, mbuf0);
+		rte_reorder_insert(reorder_buff, mbuf1);
+		rte_reorder_insert(reorder_buff, mbuf2);
+		rte_reorder_insert(reorder_buff, mbuf3);
+
+		rte_prefetch0(op_ops[i + 4]->sym->m_src);
+		rte_prefetch0(op_ops[i + 5]->sym->m_src);
+		rte_prefetch0(op_ops[i + 6]->sym->m_src);
+		rte_prefetch0(op_ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_deq_ops; i++) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+		rte_reorder_insert(reorder_buff, mbuf0);
+	}
+
+	nb_drained_mbufs = rte_reorder_drain(reorder_buff, reorder_mbufs,
+			nb_ops);
+	for (i = 0; i < nb_drained_mbufs && i < 4; i++)
+		rte_prefetch0(reorder_mbufs[i]);
+
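+	/*
+	 * Drain the in-order mbufs from the reorder buffer, recover the op
+	 * pointers stashed in each mbuf's buf_addr and clear the stash.
+	 */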
+	for (i = 0; i < nb_drained_mbufs - 8; i += 4) {
+		ops[i] = *(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr;
+		ops[i + 1] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 1]->buf_addr;
+		ops[i + 2] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 2]->buf_addr;
+		ops[i + 3] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 3]->buf_addr;
+
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 1]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 2]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 3]->buf_addr = NULL;
+
+		rte_prefetch0(reorder_mbufs[i + 4]);
+		rte_prefetch0(reorder_mbufs[i + 5]);
+		rte_prefetch0(reorder_mbufs[i + 6]);
+		rte_prefetch0(reorder_mbufs[i + 7]);
+	}
+
+	for (; i < nb_drained_mbufs; i++) {
+		ops[i] = *(struct rte_crypto_op **)
+			reorder_mbufs[i]->buf_addr;
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+	}
+
+	return nb_drained_mbufs;
+}
+
+static int
+slave_attach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+slave_detach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+scheduler_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+		struct rr_scheduler_qp_ctx *rr_qp_ctx =
+				qp_ctx->private_qp_ctx;
+		uint32_t j;
+		uint16_t qp_id = rr_qp_ctx->slaves[0].qp_id;
+
+		memset(rr_qp_ctx->slaves, 0, MAX_SLAVES_NUM *
+				sizeof(struct scheduler_slave));
+		for (j = 0; j < sched_ctx->nb_slaves; j++) {
+			rr_qp_ctx->slaves[j].dev_id =
+					sched_ctx->slaves[j].dev_id;
+			rr_qp_ctx->slaves[j].qp_id = qp_id;
+		}
+
+		rr_qp_ctx->nb_slaves = sched_ctx->nb_slaves;
+
+		rr_qp_ctx->last_enq_slave_idx = 0;
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+		if (sched_ctx->reordering_enabled) {
+			qp_ctx->schedule_enqueue = &schedule_enqueue_ordering;
+			qp_ctx->schedule_dequeue = &schedule_dequeue_ordering;
+		} else {
+			qp_ctx->schedule_enqueue = &schedule_enqueue;
+			qp_ctx->schedule_dequeue = &schedule_dequeue;
+		}
+	}
+
+	return 0;
+}
+
+static int
+scheduler_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+static int
+scheduler_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+	struct rr_scheduler_qp_ctx *rr_qp_ctx;
+
+	rr_qp_ctx = rte_zmalloc_socket(NULL, sizeof(*rr_qp_ctx), 0,
+			rte_socket_id());
+	if (!rr_qp_ctx) {
+		CS_LOG_ERR("failed to allocate memory for private queue pair");
+		return -ENOMEM;
+	}
+
+	qp_ctx->private_qp_ctx = (void *)rr_qp_ctx;
+
+	return 0;
+}
+
+static int
+scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+struct rte_cryptodev_scheduler_ops ops = {
+	slave_attach,
+	slave_detach,
+	scheduler_start,
+	scheduler_stop,
+	scheduler_config_qp,
+	scheduler_create_private_ctx
+};
+
+struct rte_cryptodev_scheduler scheduler = {
+		.name = "roundrobin-scheduler",
+		.description = "scheduler which will round robin burst across "
+				"slave crypto devices",
+		.options = NULL,
+		.ops = &ops,
+		.ioctls = NULL
+};
+
+
+struct rte_cryptodev_scheduler *roundrobin_scheduler = &scheduler;
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 8f63e8f..61a3ce0 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -66,6 +66,7 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ZUC_PMD		crypto_zuc
 /**< KASUMI PMD device name */
+#define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -77,6 +78,9 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
+	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+
+	RTE_CRYPTODEV_TYPE_COUNT
 };
 
 extern const char **rte_cyptodev_names;
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f75f0e2..ee34688 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -70,7 +70,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT)           += -lrte_port
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PDUMP)          += -lrte_pdump
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)    += -lrte_distributor
-_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_METER)          += -lrte_meter
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED)          += -lrte_sched
@@ -98,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE)        += -lrte_cmdline
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
+_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lrte_pmd_xenvirt -lxenstore
@@ -145,6 +145,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -lrte_pmd_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER)  += -lrte_pmd_crypto_scheduler
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.7.4


* [dpdk-dev] [PATCH v3] Scheduler: add driver for scheduler crypto pmd
  2016-12-02 14:15 [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd Fan Zhang
  2016-12-02 14:31 ` Thomas Monjalon
  2017-01-03 17:08 ` [dpdk-dev] [PATCH v2] " Fan Zhang
@ 2017-01-03 17:16 ` Fan Zhang
  2017-01-17 10:57   ` [dpdk-dev] [PATCH v4] " Fan Zhang
  2 siblings, 1 reply; 42+ messages in thread
From: Fan Zhang @ 2017-01-03 17:16 UTC (permalink / raw)
  To: dev; +Cc: pablo.de.lara.guarch, declan.doherty, roy.fan.zhang

This patch provides the initial implementation of the scheduler poll mode
driver using DPDK cryptodev framework.

Scheduler PMD is used to schedule and enqueue the crypto ops to the
hardware and/or software crypto devices attached to it (slaves). The
dequeue operation from the slave(s), and the possible dequeued crypto op
reordering, are then carried out by the scheduler.

In this initial version, the scheduler PMD supports only the
round-robin mode, which distributes the enqueued burst of crypto ops
among its slaves in a round-robin manner. This mode may help to fill
the throughput gap between the physical core and the existing
cryptodevs, increasing the overall performance. Moreover, the
scheduler PMD provides APIs for users to create their own schedulers
(see the usage sketch below).

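For illustration only, a minimal usage sketch of the public APIs added
by this patch is given below. The device IDs are hypothetical (the
scheduler and its slaves are assumed to have been created beforehand,
e.g. via the EAL --vdev options), and error handling is omitted:

    uint8_t scheduler_id = 2;	/* hypothetical device IDs */
    uint8_t slave0_id = 0, slave1_id = 1;

    /* attach two pre-configured slaves, then select the built-in
     * round-robin scheduler */
    rte_cryptodev_scheduler_slave_attach(scheduler_id, slave0_id);
    rte_cryptodev_scheduler_slave_attach(scheduler_id, slave1_id);
    rte_crpytodev_scheduler_mode_set(scheduler_id,
		CDEV_SCHED_MODE_ROUNDROBIN);
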
Build instructions:
To build DPDK with the CRYPTO_SCHEDULER_PMD, the user is required to set
CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base

Notice:
- Scheduler PMD shares the same EAL commandline options as other
  cryptodevs. However, apart from socket_id, the rest of the cryptodev
  options are ignored. The scheduler PMD's max_nb_queue_pairs and
  max_nb_sessions values are set to the minimum of the attached
  slaves' values. For example, if a scheduler cryptodev has 2
  cryptodevs attached with max_nb_queue_pairs of 2 and 8 respectively,
  the scheduler cryptodev's max_nb_queue_pairs is automatically
  updated to 2.

- The scheduler cryptodev cannot be started unless the scheduling mode
  is set and at least one slave is attached. Also, to reconfigure the
  scheduler at run time, e.g. to attach/detach slave(s), change the
  scheduling mode, or enable/disable crypto op reordering, one should
  stop the scheduler first, otherwise an error is returned; see the
  sketch after this list.

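Again for illustration only, a sketch of such a run-time
reconfiguration (reusing the hypothetical device IDs above; the two
scheduler calls below return -EBUSY if the device is still started):

    rte_cryptodev_stop(scheduler_id);

    /* enable dequeued crypto op reordering and drop one slave */
    rte_cryptodev_scheduler_ordering_set(scheduler_id, 1);
    rte_cryptodev_scheduler_slave_detach(scheduler_id, slave1_id);

    rte_cryptodev_start(scheduler_id);
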
Changes in v3:
Fixed config/common_base.

Changes in v2:
New approaches in API to suit future scheduling modes.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_base                                 |   6 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/scheduler/Makefile                  |  67 +++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 598 +++++++++++++++++++++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 183 +++++++
 .../scheduler/rte_cryptodev_scheduler_ioctls.h     |  92 ++++
 .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++
 .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 +
 drivers/crypto/scheduler/scheduler_pmd.c           | 168 ++++++
 drivers/crypto/scheduler/scheduler_pmd_ops.c       | 495 +++++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_private.h   | 122 +++++
 drivers/crypto/scheduler/scheduler_roundrobin.c    | 419 +++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |   4 +
 mk/rte.app.mk                                      |   3 +-
 14 files changed, 2240 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/scheduler/Makefile
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_ioctls.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
 create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h
 create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c

diff --git a/config/common_base b/config/common_base
index 4bff83a..79d120d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -400,6 +400,12 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
 
 #
+# Compile PMD for Crypto Scheduler device
+#
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=n
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
+
+#
 # Compile PMD for ZUC device
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 745c614..cdd3c94 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -38,6 +38,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/scheduler/Makefile b/drivers/crypto/scheduler/Makefile
new file mode 100644
index 0000000..976a565
--- /dev/null
+++ b/drivers/crypto/scheduler/Makefile
@@ -0,0 +1,67 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_crypto_scheduler.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_crypto_scheduler_version.map
+
+#
+# Export include files
+#
+SYMLINK-y-include += rte_cryptodev_scheduler_ioctls.h
+SYMLINK-y-include += rte_cryptodev_scheduler_operations.h
+SYMLINK-y-include += rte_cryptodev_scheduler.h
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += rte_cryptodev_scheduler.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_roundrobin.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_ring
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_reorder
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
new file mode 100644
index 0000000..d2d068c
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -0,0 +1,598 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_jhash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_cryptodev_scheduler.h>
+#include <rte_malloc.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Update the scheduler PMD's capabilities with the attached device's
+ *  capabilities.
+ *  After each device is attached, the scheduler's capability set
+ *  should be the common (intersected) capability set of all slaves.
+ */
+static uint32_t
+sync_caps(struct rte_cryptodev_capabilities *caps,
+		uint32_t nb_caps,
+		const struct rte_cryptodev_capabilities *slave_caps)
+{
+	uint32_t sync_nb_caps = nb_caps, nb_slave_caps = 0;
+	uint32_t i;
+
+	while (slave_caps[nb_slave_caps].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
+		nb_slave_caps++;
+
+	if (nb_caps == 0) {
+		rte_memcpy(caps, slave_caps, sizeof(*caps) * nb_slave_caps);
+		return nb_slave_caps;
+	}
+
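+	/*
+	 * Keep only the intersection: any already-synced cap without a
+	 * matching cap in this slave's list is removed from the array.
+	 */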
+	for (i = 0; i < sync_nb_caps; i++) {
+		struct rte_cryptodev_capabilities *cap = &caps[i];
+		uint32_t j;
+
+		for (j = 0; j < nb_slave_caps; j++) {
+			const struct rte_cryptodev_capabilities *s_cap =
+					&slave_caps[j];
+
+			if (s_cap->op != cap->op || s_cap->sym.xform_type !=
+					cap->sym.xform_type)
+				continue;
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_AUTH) {
+				if (s_cap->sym.auth.algo !=
+						cap->sym.auth.algo)
+					continue;
+
+				cap->sym.auth.digest_size.min =
+					s_cap->sym.auth.digest_size.min >
+					cap->sym.auth.digest_size.min ?
+					s_cap->sym.auth.digest_size.min :
+					cap->sym.auth.digest_size.min;
+				cap->sym.auth.digest_size.max =
+					s_cap->sym.auth.digest_size.max <
+					cap->sym.auth.digest_size.max ?
+					s_cap->sym.auth.digest_size.max :
+					cap->sym.auth.digest_size.max;
+
+			}
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				if (s_cap->sym.cipher.algo !=
+						cap->sym.cipher.algo)
+					continue;
+
+			/* common cap found, keep it */
+			break;
+		}
+
+		if (j < nb_slave_caps)
+			continue;
+
+		/* remove an uncommon cap from the array */
+		for (j = i; j < sync_nb_caps - 1; j++)
+			rte_memcpy(&caps[j], &caps[j+1], sizeof(*cap));
+
+		memset(&caps[sync_nb_caps - 1], 0, sizeof(*cap));
+		sync_nb_caps--;
+	}
+
+	return sync_nb_caps;
+}
+
+static int
+update_scheduler_capability(struct scheduler_ctx *sched_ctx)
+{
+	struct rte_cryptodev_capabilities tmp_caps[256] = {0};
+	uint32_t nb_caps = 0, i;
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		nb_caps = sync_caps(tmp_caps, nb_caps, dev_info.capabilities);
+		if (nb_caps == 0)
+			return -1;
+	}
+
+	sched_ctx->capabilities = rte_zmalloc_socket(NULL,
+			sizeof(struct rte_cryptodev_capabilities) *
+			(nb_caps + 1), 0, SOCKET_ID_ANY);
+	if (!sched_ctx->capabilities)
+		return -ENOMEM;
+
+	rte_memcpy(sched_ctx->capabilities, tmp_caps,
+			sizeof(struct rte_cryptodev_capabilities) * nb_caps);
+
+	return 0;
+}
+
+static void
+update_scheduler_feature_flag(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	dev->feature_flags = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		dev->feature_flags |= dev_info.feature_flags;
+	}
+}
+
+static void
+update_max_nb_qp(struct scheduler_ctx *sched_ctx)
+{
+	uint32_t i;
+	uint32_t max_nb_qp;
+
+	if (!sched_ctx->nb_slaves)
+		return;
+
+	max_nb_qp = UINT32_MAX;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+		max_nb_qp = dev_info.max_nb_queue_pairs < max_nb_qp ?
+				dev_info.max_nb_queue_pairs : max_nb_qp;
+	}
+
+	sched_ctx->max_nb_queue_pairs = max_nb_qp;
+}
+
+/** Attach a device to the scheduler. */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	struct scheduler_slave *slave;
+	struct rte_cryptodev_info dev_info;
+	uint32_t i;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+	if (sched_ctx->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves attached");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++)
+		if (sched_ctx->slaves[i].dev_id == slave_id) {
+			CS_LOG_ERR("Slave already added");
+			return -ENOTSUP;
+		}
+
+	slave = &sched_ctx->slaves[sched_ctx->nb_slaves];
+
+	rte_cryptodev_info_get(slave_id, &dev_info);
+
+	slave->dev_id = slave_id;
+	slave->dev_type = dev_info.dev_type;
+	sched_ctx->nb_slaves++;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		slave->dev_id = 0;
+		slave->dev_type = 0;
+		sched_ctx->nb_slaves--;
+
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	uint32_t i, slave_pos;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	for (slave_pos = 0; slave_pos < sched_ctx->nb_slaves; slave_pos++)
+		if (sched_ctx->slaves[slave_pos].dev_id == slave_id)
+			break;
+	if (slave_pos == sched_ctx->nb_slaves) {
+		CS_LOG_ERR("Cannot find slave");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->ops.slave_detach(dev, slave_id) < 0) {
+		CS_LOG_ERR("Failed to detach slave");
+		return -ENOTSUP;
+	}
+
+	for (i = slave_pos; i < sched_ctx->nb_slaves - 1; i++) {
+		memcpy(&sched_ctx->slaves[i], &sched_ctx->slaves[i+1],
+				sizeof(struct scheduler_slave));
+	}
+	memset(&sched_ctx->slaves[sched_ctx->nb_slaves - 1], 0,
+			sizeof(struct scheduler_slave));
+	sched_ctx->nb_slaves--;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	int ret;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (mode == sched_ctx->mode && mode != CDEV_SCHED_MODE_USERDEFINED)
+		return 0;
+
+	switch (mode) {
+	case CDEV_SCHED_MODE_ROUNDROBIN:
+		if (rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
+				roundrobin_scheduler) < 0) {
+			CS_LOG_ERR("Failed to load scheduler");
+			return -1;
+		}
+		break;
+	case CDEV_SCHED_MODE_MIGRATION:
+	case CDEV_SCHED_MODE_FALLBACK:
+	default:
+		CS_LOG_ERR("Not yet supported");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	ret = (*sched_ctx->ops.create_private_ctx)(dev);
+	if (ret < 0) {
+		CS_LOG_ERR("Unable to create scheduler private context");
+		return ret;
+	}
+
+	sched_ctx->mode = mode;
+
+	return 0;
+}
+
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return sched_ctx->mode;
+}
+
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	sched_ctx->reordering_enabled = enable_reorder;
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return (int)sched_ctx->reordering_enabled;
+}
+
+int
+rte_cryptodev_scheduler_ioctl(uint8_t scheduler_id, uint16_t ioctl_id,
+		void *ioctl_param)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (ioctl_id >= sched_ctx->ioctl_count) {
+		CS_LOG_ERR("Invalid IOCTL ID");
+		return -EINVAL;
+	}
+
+	return (*(sched_ctx->ioctls[ioctl_id]->ioctl))(ioctl_param);
+}
+
+int
+rte_cryptodev_scheduler_ioctl_count(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return sched_ctx->ioctl_count;
+}
+
+int
+rte_cryptodev_scheduler_ioctl_list(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler_ioctl_description **ioctls_desc,
+		uint16_t nb_ioctls)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	uint32_t i;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (nb_ioctls > sched_ctx->ioctl_count) {
+		CS_LOG_ERR("Invalid IOCTL number");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < nb_ioctls; i++) {
+		ioctls_desc[i]->request_id = sched_ctx->ioctls[i]->id;
+		ioctls_desc[i]->name = sched_ctx->ioctls[i]->name;
+		ioctls_desc[i]->description = sched_ctx->ioctls[i]->description;
+	}
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	size_t size;
+
+	/* check device stopped */
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Device should be stopped before loading scheduler");
+		return -EBUSY;
+	}
+
+	strncpy(sched_ctx->name, scheduler->name,
+			RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
+	strncpy(sched_ctx->description, scheduler->description,
+			RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
+
+	/* load scheduler instance ioctls */
+	if (sched_ctx->ioctls)
+		rte_free(sched_ctx->ioctls);
+	if (scheduler->nb_ioctls) {
+		size = sizeof(struct rte_cryptodev_scheduler_ioctl) *
+				scheduler->nb_ioctls;
+		sched_ctx->ioctls = rte_zmalloc_socket(NULL, size, 0,
+				SOCKET_ID_ANY);
+		if (!sched_ctx->ioctls) {
+			CS_LOG_ERR("Failed to allocate memory");
+			return -ENOMEM;
+		}
+	}
+
+	for (i = 0; i < scheduler->nb_ioctls; i++) {
+		struct rte_cryptodev_scheduler_ioctl *ioctl =
+				sched_ctx->ioctls[scheduler->ioctls[i]->id];
+
+		strncpy(ioctl->name, scheduler->ioctls[i]->name,
+				RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
+		strncpy(ioctl->description, scheduler->ioctls[i]->description,
+				RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
+		ioctl->ioctl = scheduler->ioctls[i]->ioctl;
+	}
+
+	sched_ctx->ioctl_count = scheduler->nb_ioctls;
+
+	/* load scheduler instance options */
+	if (sched_ctx->options)
+		rte_free(sched_ctx->options);
+	if (scheduler->nb_options) {
+		size = sizeof(struct rte_cryptodev_scheduler_option) *
+				scheduler->nb_options;
+		sched_ctx->options = rte_zmalloc_socket(NULL, size, 0,
+				SOCKET_ID_ANY);
+		if (!sched_ctx->options) {
+			CS_LOG_ERR("Failed to allocate memory");
+			return -ENOMEM;
+		}
+	}
+
+	for (i = 0; i < scheduler->nb_options; i++) {
+		struct rte_cryptodev_scheduler_option *option =
+				sched_ctx->options[i];
+
+		strncpy(option->name, scheduler->options[i]->name,
+				RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
+		strncpy(option->description, scheduler->options[i]->description,
+				RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
+		option->option_parser = scheduler->options[i]->option_parser;
+	}
+	sched_ctx->nb_options = scheduler->nb_options;
+
+	/* load scheduler instance operations functions */
+	sched_ctx->ops.config_queue_pair = scheduler->ops->config_queue_pair;
+	sched_ctx->ops.create_private_ctx = scheduler->ops->create_private_ctx;
+	sched_ctx->ops.scheduler_start = scheduler->ops->scheduler_start;
+	sched_ctx->ops.scheduler_stop = scheduler->ops->scheduler_stop;
+	sched_ctx->ops.slave_attach = scheduler->ops->slave_attach;
+	sched_ctx->ops.slave_detach = scheduler->ops->slave_detach;
+
+	return 0;
+}
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
new file mode 100644
index 0000000..ee5eeb4
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
@@ -0,0 +1,183 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_H
+#define _RTE_CRYPTO_SCHEDULER_H
+
+#include <rte_cryptodev_scheduler_ioctls.h>
+#include <rte_cryptodev_scheduler_operations.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Crypto scheduler PMD operation modes
+ */
+enum rte_cryptodev_scheduler_mode {
+	CDEV_SCHED_MODE_NOT_SET = 0,
+	CDEV_SCHED_MODE_USERDEFINED,
+	CDEV_SCHED_MODE_ROUNDROBIN,
+	CDEV_SCHED_MODE_MIGRATION,
+	CDEV_SCHED_MODE_FALLBACK,
+	CDEV_SCHED_MODE_MULTICORE,
+
+	CDEV_SCHED_MODE_COUNT /* number of modes */
+};
+
+#define RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN	(64)
+#define RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN	(256)
+
+struct rte_cryptodev_scheduler;
+
+/**
+ * Load a user defined scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	scheduler	Pointer to the user defined scheduler
+ *
+ * @return
+ *	0 if loading successful, negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler);
+
+/**
+ * Attach a pre-configured crypto device to the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	The crypto device ID to be attached
+ *
+ * @return
+ *	0 if attaching successful, negative int if otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Detach an attached crypto device from the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	The crypto device ID to be detached
+ *
+ * @return
+ *	0 if detaching successful, negative int if otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Set the scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	mode		The scheduling mode
+ *
+ * @return
+ *	0 if setting successful, negative integer if otherwise.
+ */
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode);
+
+/**
+ * Get the current scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	The current scheduling mode, or a negative integer on error.
+ */
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id);
+
+/**
+ * Set the crypto ops reordering feature on/off
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	enable_reorder	Set the crypto op reordering feature
+ *				0: disable reordering
+ *				1: enable reordering
+ *
+ * @return
+ *	0 if setting successful, negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder);
+
+/**
+ * Get the current crypto ops reordering feature
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	0 if reordering is disabled
+ *	1 if reordering is enabled
+ *	negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id);
+
+typedef int (*rte_cryptodev_scheduler_option_parser)(
+		const char *key, const char *value, void *extra_args);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_enqueue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_dequeue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+struct rte_cryptodev_scheduler_option {
+	char name[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN];
+
+	rte_cryptodev_scheduler_option_parser option_parser;
+};
+
+struct rte_cryptodev_scheduler {
+	const char *name;
+	const char *description;
+	struct rte_cryptodev_scheduler_option **options;
+	unsigned nb_options;
+
+	struct rte_cryptodev_scheduler_ioctl **ioctls;
+	unsigned nb_ioctls;
+
+	struct rte_cryptodev_scheduler_ops *ops;
+};
+
+extern struct rte_cryptodev_scheduler *roundrobin_scheduler;
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_H */
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_ioctls.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_ioctls.h
new file mode 100644
index 0000000..c19a9d3
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_ioctls.h
@@ -0,0 +1,92 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _RTE_CRYPTODEV_SCHEDULER_IOCTLS
+#define _RTE_CRYPTODEV_SCHEDULER_IOCTLS
+
+#include <rte_cryptodev_scheduler.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_CRYPTODEV_SCHEDULER_IOCTL_NAME_MAX_LEN	(64)
+#define RTE_CRYPTODEV_SCHEDULER_IOCTL_DESC_MAX_LEN	(256)
+
+#define RTE_CRYPTODEV_SCHEDULER_MAX_NB_IOCTLS	(8)
+
+#define CDEV_SCHED_IOCTL_LIVE_MIGRATION_SCHED_STATE_GET		(1)
+#define CDEV_SCHED_IOCTL_LIVE_MIGRATION_SCHED_MIGRATE		(2)
+#define CDEV_SCHED_IOCTL_FALLBACK_SCHED_SET_PRIMARY		(3)
+
+struct ioctl_migration_scheduler_state_get {
+	uint8_t slave_id;
+	/**< Active crypto device id */
+	enum migration_scheduler_state {
+		MIGRATION_SCHEDULER_STATE_ACTIVE,
+		MIGRATION_SCHEDULER_STATE_AWAITING_MIGRATE,
+		MIGRATION_SCHEDULER_STATE_MIGRATION
+	} state;
+	/**< Migration Scheduler State */
+};
+
+int
+rte_cryptodev_scheduler_ioctl(uint8_t scheduler_id, uint16_t request_id,
+		void *request_params);
+
+int
+rte_cryptodev_scheduler_ioctl_count(uint8_t scheduler_id);
+
+struct rte_cryptodev_scheduler_ioctl_description {
+	uint16_t request_id;
+	const char *name;
+	const char *description;
+};
+
+int
+rte_cryptodev_scheduler_ioctl_list(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler_ioctl_description **ioctls_desc,
+		uint16_t nb_ioctls);
+
+typedef int (*rte_cryptodev_scheduler_ioctl_fn)(void *request_params);
+
+struct rte_cryptodev_scheduler_ioctl {
+	int id;
+	char name[RTE_CRYPTODEV_SCHEDULER_IOCTL_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_IOCTL_DESC_MAX_LEN];
+
+	rte_cryptodev_scheduler_ioctl_fn ioctl;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTODEV_SCHEDULER_IOCTLS */
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
new file mode 100644
index 0000000..ab8595b
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
@@ -0,0 +1,71 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+#define _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+#include <rte_cryptodev.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int (*rte_cryptodev_scheduler_slave_attach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+typedef int (*rte_cryptodev_scheduler_slave_detach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+
+typedef int (*rte_cryptodev_scheduler_start_t)(struct rte_cryptodev *dev);
+typedef int (*rte_cryptodev_scheduler_stop_t)(struct rte_cryptodev *dev);
+
+typedef int (*rte_cryptodev_scheduler_config_queue_pair)(
+		struct rte_cryptodev *dev, uint16_t qp_id);
+
+typedef int (*rte_cryptodev_scheduler_create_private_ctx)(
+		struct rte_cryptodev *dev);
+
+struct rte_cryptodev_scheduler_ops {
+	rte_cryptodev_scheduler_slave_attach_t slave_attach;
+	rte_cryptodev_scheduler_slave_attach_t slave_detach;
+
+	rte_cryptodev_scheduler_start_t scheduler_start;
+	rte_cryptodev_scheduler_stop_t scheduler_stop;
+
+	rte_cryptodev_scheduler_config_queue_pair config_queue_pair;
+
+	rte_cryptodev_scheduler_create_private_ctx create_private_ctx;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_OPERATIONS_H */
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
new file mode 100644
index 0000000..0510f68
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -0,0 +1,12 @@
+DPDK_17.02 {
+	global:
+
+	rte_cryptodev_scheduler_load_user_scheduler;
+	rte_cryptodev_scheduler_slave_attach;
+	rte_cryptodev_scheduler_slave_detach;
+	rte_crpytodev_scheduler_mode_set;
+	rte_crpytodev_scheduler_mode_get;
+	rte_cryptodev_scheduler_ordering_set;
+	rte_cryptodev_scheduler_ordering_get;
+
+} DPDK_17.02;
\ No newline at end of file
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
new file mode 100644
index 0000000..0c13b55
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -0,0 +1,168 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_vdev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+#include "scheduler_pmd_private.h"
+
+static uint16_t
+scheduler_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_enqueue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static uint16_t
+scheduler_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_dequeue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
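+/* monotonically increasing counter used to build unique vdev names */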
+static uint32_t unique_name_id;
+
+static int
+cryptodev_scheduler_create(const char *name,
+	struct rte_crypto_vdev_init_params *init_params)
+{
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (snprintf(crypto_dev_name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%u",
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD), unique_name_id++) < 0) {
+		CS_LOG_ERR("driver %s: failed to create unique cryptodev "
+			"name", name);
+		return -EFAULT;
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct scheduler_ctx),
+			init_params->socket_id);
+	if (dev == NULL) {
+		CS_LOG_ERR("driver %s: failed to create cryptodev vdev",
+			name);
+		return -EFAULT;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+	dev->dev_ops = rte_crypto_scheduler_pmd_ops;
+
+	dev->enqueue_burst = scheduler_enqueue_burst;
+	dev->dequeue_burst = scheduler_dequeue_burst;
+
+	sched_ctx = dev->data->dev_private;
+	sched_ctx->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
+
+	return 0;
+}
+
+static int
+cryptodev_scheduler_remove(const char *name)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	dev = rte_cryptodev_pmd_get_named_dev(name);
+	if (dev == NULL)
+		return -EINVAL;
+
+	sched_ctx = dev->data->dev_private;
+
+	if (sched_ctx->nb_slaves) {
+		uint32_t i;
+
+		for (i = 0; i < sched_ctx->nb_slaves; i++)
+			rte_cryptodev_scheduler_slave_detach(dev->data->dev_id,
+					sched_ctx->slaves[i].dev_id);
+	}
+
+	RTE_LOG(INFO, PMD, "Closing Crypto Scheduler device %s on numa "
+		"socket %u\n", name, rte_socket_id());
+
+	return 0;
+}
+
+static int
+cryptodev_scheduler_probe(const char *name, const char *input_args)
+{
+	struct rte_crypto_vdev_init_params init_params = {
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+		rte_socket_id()
+	};
+
+	rte_cryptodev_parse_vdev_init_params(&init_params, input_args);
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.max_nb_sessions);
+
+	return cryptodev_scheduler_create(name, &init_params);
+}
+
+static struct rte_vdev_driver cryptodev_scheduler_pmd_drv = {
+	.probe = cryptodev_scheduler_probe,
+	.remove = cryptodev_scheduler_remove
+};
+
+RTE_PMD_REGISTER_VDEV(CRYPTODEV_NAME_SCHEDULER_PMD,
+	cryptodev_scheduler_pmd_drv);
+RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
+	"max_nb_queue_pairs=<int> "
+	"max_nb_sessions=<int> "
+	"socket_id=<int>");
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
new file mode 100644
index 0000000..972a355
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -0,0 +1,495 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_config.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_reorder.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Configure device */
+static int
+scheduler_pmd_config(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+	int ret = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_configure)(slave_dev);
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
+static int
+update_reorder_buff(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (sched_ctx->reordering_enabled) {
+		char reorder_buff_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+		uint32_t buff_size = sched_ctx->nb_slaves * PER_SLAVE_BUFF_SIZE;
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (!buff_size)
+			return 0;
+
+		if (snprintf(reorder_buff_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"%s_rb_%u_%u", RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
+			dev->data->dev_id, qp_id) < 0) {
+			CS_LOG_ERR("failed to create unique reorder buffer "
+					"name");
+			return -ENOMEM;
+		}
+
+		qp_ctx->reorder_buf = rte_reorder_create(reorder_buff_name,
+				rte_socket_id(), buff_size);
+		if (!qp_ctx->reorder_buf) {
+			CS_LOG_ERR("failed to create reorder buffer");
+			return -ENOMEM;
+		}
+	} else {
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+	}
+
+	return 0;
+}
+
+/** Start device */
+static int
+scheduler_pmd_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	if (dev->data->dev_started)
+		return 0;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = update_reorder_buff(dev, i);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to update reorder buffer");
+			return ret;
+		}
+	}
+
+	if (sched_ctx->mode == CDEV_SCHED_MODE_NOT_SET) {
+		CS_LOG_ERR("Scheduler mode is not set");
+		return -1;
+	}
+
+	if (!sched_ctx->nb_slaves) {
+		CS_LOG_ERR("No slave in the scheduler");
+		return -1;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.slave_attach, -ENOTSUP);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if ((*sched_ctx->ops.slave_attach)(dev, slave_dev_id) < 0) {
+			CS_LOG_ERR("Failed to attach slave");
+			return -ENOTSUP;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.scheduler_start, -ENOTSUP);
+
+	if ((*sched_ctx->ops.scheduler_start)(dev) < 0) {
+		CS_LOG_ERR("Scheduler start failed");
+		return -1;
+	}
+
+	/* start all slaves */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_start)(slave_dev);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to start slave dev %u",
+					slave_dev_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/** Stop device */
+static void
+scheduler_pmd_stop(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+
+	if (!dev->data->dev_started)
+		return;
+
+	/* stop all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->dev_stop)(slave_dev);
+	}
+
+	if (*sched_ctx->ops.scheduler_stop)
+		(*sched_ctx->ops.scheduler_stop)(dev);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if (*sched_ctx->ops.slave_detach)
+			(*sched_ctx->ops.slave_detach)(dev, slave_dev_id);
+	}
+}
+
+/** Close device */
+static int
+scheduler_pmd_close(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+	int ret;
+
+	/* the dev should be stopped before being closed */
+	if (dev->data->dev_started)
+		return -EBUSY;
+
+	/* close all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_close)(slave_dev);
+		if (ret < 0)
+			return ret;
+	}
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (qp_ctx->private_qp_ctx) {
+			rte_free(qp_ctx->private_qp_ctx);
+			qp_ctx->private_qp_ctx = NULL;
+		}
+	}
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	if (sched_ctx->ioctls)
+		rte_free(sched_ctx->ioctls);
+
+	if (sched_ctx->options)
+		rte_free(sched_ctx->options);
+
+	return 0;
+}
+
+/** Get device statistics */
+static void
+scheduler_pmd_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+		struct rte_cryptodev_stats slave_stats = {0};
+
+		(*slave_dev->dev_ops->stats_get)(slave_dev, &slave_stats);
+
+		stats->enqueued_count += slave_stats.enqueued_count;
+		stats->dequeued_count += slave_stats.dequeued_count;
+
+		stats->enqueue_err_count += slave_stats.enqueue_err_count;
+		stats->dequeue_err_count += slave_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+scheduler_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->stats_reset)(slave_dev);
+	}
+}
+
+/** Get device info */
+static void
+scheduler_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	unsigned max_nb_sessions = sched_ctx->nb_slaves ? UINT32_MAX : 0;
+	unsigned i;
+
+	if (!dev_info)
+		return;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev_info slave_info;
+
+		rte_cryptodev_info_get(slave_dev_id, &slave_info);
+		max_nb_sessions = slave_info.sym.max_nb_sessions <
+				max_nb_sessions ?
+				slave_info.sym.max_nb_sessions :
+				max_nb_sessions;
+	}
+
+	dev_info->dev_type = dev->dev_type;
+	dev_info->feature_flags = dev->feature_flags;
+	dev_info->capabilities = sched_ctx->capabilities;
+	dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
+	dev_info->sym.max_nb_sessions = max_nb_sessions;
+}
+
+/** Release queue pair */
+static int
+scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (!qp_ctx)
+		return 0;
+
+	if (qp_ctx->reorder_buf)
+		rte_reorder_free(qp_ctx->reorder_buf);
+	if (qp_ctx->private_qp_ctx)
+		rte_free(qp_ctx->private_qp_ctx);
+
+	rte_free(qp_ctx);
+	dev->data->queue_pairs[qp_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	__rte_unused const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx;
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	if (snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"CRYPTO_SCHED PMD %u QP %u",
+			dev->data->dev_id, qp_id) < 0) {
+		CS_LOG_ERR("Failed to create unique queue pair name");
+		return -EFAULT;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		scheduler_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp_ctx = rte_zmalloc_socket(name, sizeof(*qp_ctx), RTE_CACHE_LINE_SIZE,
+			socket_id);
+	if (qp_ctx == NULL)
+		return -ENOMEM;
+
+	dev->data->queue_pairs[qp_id] = qp_ctx;
+
+	if (*sched_ctx->ops.config_queue_pair) {
+		if ((*sched_ctx->ops.config_queue_pair)(dev, qp_id) < 0) {
+			CS_LOG_ERR("Unable to configure queue pair");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/** Start queue pair */
+static int
+scheduler_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+scheduler_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+scheduler_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+static unsigned
+scheduler_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct scheduler_session);
+}
+
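+/*
+ * A scheduler session fans out to one session per attached slave: when
+ * create is set, a missing per-slave session is created on that slave's
+ * device; when create is unset, the existing per-slave sessions are
+ * cleared. On any creation failure the already-created slave sessions
+ * are rolled back.
+ */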
+static int
+config_slave_sess(struct scheduler_ctx *sched_ctx,
+		struct rte_crypto_sym_xform *xform,
+		struct scheduler_session *sess,
+		uint32_t create)
+{
+	unsigned i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct scheduler_slave *slave = &sched_ctx->slaves[i];
+		struct rte_cryptodev *dev = &rte_cryptodev_globals->
+				devs[slave->dev_id];
+
+		if (sess->sessions[i]) {
+			if (create)
+				continue;
+			/* !create */
+			(*dev->dev_ops->session_clear)(dev,
+					(void *)sess->sessions[i]);
+			sess->sessions[i] = NULL;
+		} else {
+			if (!create)
+				continue;
+			/* create */
+			sess->sessions[i] =
+					rte_cryptodev_sym_session_create(
+							slave->dev_id, xform);
+			if (!sess->sessions[i]) {
+				config_slave_sess(sched_ctx, NULL, sess, 0);
+				return -1;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+scheduler_pmd_session_clear(struct rte_cryptodev *dev,
+	void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	config_slave_sess(sched_ctx, NULL, sess, 0);
+
+	memset(sess, 0, sizeof(struct scheduler_session));
+}
+
+static void *
+scheduler_pmd_session_configure(struct rte_cryptodev *dev,
+	struct rte_crypto_sym_xform *xform, void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	if (config_slave_sess(sched_ctx, xform, sess, 1) < 0) {
+		CS_LOG_ERR("unable to configure sym session");
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_ops scheduler_pmd_ops = {
+		.dev_configure		= scheduler_pmd_config,
+		.dev_start		= scheduler_pmd_start,
+		.dev_stop		= scheduler_pmd_stop,
+		.dev_close		= scheduler_pmd_close,
+
+		.stats_get		= scheduler_pmd_stats_get,
+		.stats_reset		= scheduler_pmd_stats_reset,
+
+		.dev_infos_get		= scheduler_pmd_info_get,
+
+		.queue_pair_setup	= scheduler_pmd_qp_setup,
+		.queue_pair_release	= scheduler_pmd_qp_release,
+		.queue_pair_start	= scheduler_pmd_qp_start,
+		.queue_pair_stop	= scheduler_pmd_qp_stop,
+		.queue_pair_count	= scheduler_pmd_qp_count,
+
+		.session_get_size	= scheduler_pmd_session_get_size,
+		.session_configure	= scheduler_pmd_session_configure,
+		.session_clear		= scheduler_pmd_session_clear,
+};
+
+struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
new file mode 100644
index 0000000..550fdcc
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -0,0 +1,122 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _SCHEDULER_PMD_PRIVATE_H
+#define _SCHEDULER_PMD_PRIVATE_H
+
+#include <rte_hash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+#include <rte_cryptodev_scheduler_ioctls.h>
+
+/** Maximum number of slave crypto devices attached per scheduler */
+#ifndef MAX_SLAVES_NUM
+#define MAX_SLAVES_NUM				(8)
+#endif
+
+#define PER_SLAVE_BUFF_SIZE			(256)
+
+#define CS_LOG_ERR(fmt, args...)					\
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",		\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTO_SCHEDULER_DEBUG
+#define CS_LOG_INFO(fmt, args...)					\
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#define CS_LOG_DBG(fmt, args...)					\
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+#else
+#define CS_LOG_INFO(fmt, args...)
+#define CS_LOG_DBG(fmt, args...)
+#endif
+
+struct scheduler_slave {
+	uint8_t dev_id;
+	uint16_t qp_id;
+	uint32_t nb_inflight_cops;
+
+	enum rte_cryptodev_type dev_type;
+};
+
+struct scheduler_ctx {
+	void *private_ctx;
+	/**< private scheduler context pointer */
+
+	struct rte_cryptodev_capabilities *capabilities;
+	unsigned nb_capabilities;
+
+	unsigned max_nb_queue_pairs;
+
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	unsigned nb_slaves;
+
+	enum rte_cryptodev_scheduler_mode mode;
+
+	uint32_t ioctl_count;
+	struct rte_cryptodev_scheduler_ioctl **ioctls;
+
+	uint32_t nb_options;
+	struct rte_cryptodev_scheduler_option **options;
+
+	struct rte_cryptodev_scheduler_ops ops;
+
+	uint8_t reordering_enabled;
+
+	char name[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN];
+} __rte_cache_aligned;
+
+struct scheduler_qp_ctx {
+	void *private_qp_ctx;
+
+	rte_cryptodev_scheduler_burst_enqueue_t schedule_enqueue;
+	rte_cryptodev_scheduler_burst_dequeue_t schedule_dequeue;
+
+	struct rte_reorder_buffer *reorder_buf;
+	uint32_t seqn;
+} __rte_cache_aligned;
+
+struct scheduler_session {
+	struct rte_cryptodev_sym_session *sessions[MAX_SLAVES_NUM];
+};
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
+
+#endif /* _SCHEDULER_PMD_PRIVATE_H */
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
new file mode 100644
index 0000000..be0b7fd
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -0,0 +1,419 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_scheduler_operations.h>
+
+#include "scheduler_pmd_private.h"
+
+struct roundrobin_scheduler_ctx {
+};
+
+struct rr_scheduler_qp_ctx {
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	unsigned nb_slaves;
+
+	unsigned last_enq_slave_idx;
+	unsigned last_deq_slave_idx;
+};
+
+static uint16_t
+schedule_enqueue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++)
+		rte_prefetch0(ops[i]->sym->session);
+
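+	/*
+	 * Main loop is 4-way unrolled: each op's scheduler session is
+	 * swapped for the chosen slave's own session while the sessions
+	 * of the ops 4-7 ahead are prefetched. The nb_ops - 8 bound
+	 * keeps the prefetched indexes inside the burst; the tail loop
+	 * below handles the remaining ops.
+	 */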
+	for (i = 0; i < nb_ops - 8; i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->session);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_enq_slave_idx = 0;
+
+	return processed_ops;
+}
+
+static uint16_t
+schedule_enqueue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			gen_qp_ctx->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++) {
+		rte_prefetch0(ops[i]->sym->session);
+		rte_prefetch0(ops[i]->sym->m_src);
+	}
+
+	for (i = 0; i < nb_ops - 8; i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 1]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 2]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+		ops[i + 3]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 4]->sym->m_src);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->m_src);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->m_src);
+		rte_prefetch0(ops[i + 7]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_enq_slave_idx = 0;
+
+	return processed_ops;
+}
+
+
+static uint16_t
+schedule_dequeue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	struct scheduler_slave *slave;
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t nb_deq_ops;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	last_slave_idx += 1;
+	if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+		last_slave_idx = 0;
+
+	rr_qp_ctx->last_deq_slave_idx = last_slave_idx;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	return nb_deq_ops;
+}
+
+static uint16_t
+schedule_dequeue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = (struct scheduler_qp_ctx *)qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx = (gen_qp_ctx->private_qp_ctx);
+	struct scheduler_slave *slave;
+	struct rte_reorder_buffer *reorder_buff = gen_qp_ctx->reorder_buf;
+	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	uint16_t nb_deq_ops, nb_drained_mbufs;
+	const uint16_t nb_op_ops = nb_ops;
+	struct rte_crypto_op *op_ops[nb_op_ops];
+	struct rte_mbuf *reorder_mbufs[nb_op_ops];
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t i;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, op_ops, nb_ops);
+
+	rr_qp_ctx->last_deq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_deq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	for (i = 0; i < nb_deq_ops && i < 4; i++)
+		rte_prefetch0(op_ops[i]->sym->m_src);
+
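+	/*
+	 * rte_reorder works on mbufs rather than crypto ops: stash each
+	 * dequeued op's pointer in the headroom (buf_addr) of its source
+	 * mbuf, insert the mbuf into the reorder buffer (ordered by the
+	 * seqn assigned at enqueue), then recover and clear the pointers
+	 * when draining below.
+	 */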
+	for (i = 0; i < nb_deq_ops - 8; i += 4) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		mbuf1 = op_ops[i + 1]->sym->m_src;
+		mbuf2 = op_ops[i + 2]->sym->m_src;
+		mbuf3 = op_ops[i + 3]->sym->m_src;
+
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+		rte_memcpy(mbuf1->buf_addr, &op_ops[i+1], sizeof(op_ops[i+1]));
+		rte_memcpy(mbuf2->buf_addr, &op_ops[i+2], sizeof(op_ops[i+2]));
+		rte_memcpy(mbuf3->buf_addr, &op_ops[i+3], sizeof(op_ops[i+3]));
+
+		rte_reorder_insert(reorder_buff, mbuf0);
+		rte_reorder_insert(reorder_buff, mbuf1);
+		rte_reorder_insert(reorder_buff, mbuf2);
+		rte_reorder_insert(reorder_buff, mbuf3);
+
+		rte_prefetch0(op_ops[i + 4]->sym->m_src);
+		rte_prefetch0(op_ops[i + 5]->sym->m_src);
+		rte_prefetch0(op_ops[i + 6]->sym->m_src);
+		rte_prefetch0(op_ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_deq_ops; i++) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+		rte_reorder_insert(reorder_buff, mbuf0);
+	}
+
+	nb_drained_mbufs = rte_reorder_drain(reorder_buff, reorder_mbufs,
+			nb_ops);
+	for (i = 0; i < nb_drained_mbufs && i < 4; i++)
+		rte_prefetch0(reorder_mbufs[i]);
+
+	for (i = 0; i < nb_drained_mbufs - 8; i += 4) {
+		ops[i] = *(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr;
+		ops[i + 1] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 1]->buf_addr;
+		ops[i + 2] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 2]->buf_addr;
+		ops[i + 3] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 3]->buf_addr;
+
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 1]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 2]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 3]->buf_addr = NULL;
+
+		rte_prefetch0(reorder_mbufs[i + 4]);
+		rte_prefetch0(reorder_mbufs[i + 5]);
+		rte_prefetch0(reorder_mbufs[i + 6]);
+		rte_prefetch0(reorder_mbufs[i + 7]);
+	}
+
+	for (; i < nb_drained_mbufs; i++) {
+		ops[i] = *(struct rte_crypto_op **)
+			reorder_mbufs[i]->buf_addr;
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+	}
+
+	return nb_drained_mbufs;
+}
+
+static int
+slave_attach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+slave_detach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+scheduler_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+		struct rr_scheduler_qp_ctx *rr_qp_ctx =
+				qp_ctx->private_qp_ctx;
+		uint32_t j;
+		uint16_t qp_id = rr_qp_ctx->slaves[0].qp_id;
+
+		memset(rr_qp_ctx->slaves, 0, MAX_SLAVES_NUM *
+				sizeof(struct scheduler_slave));
+		for (j = 0; j < sched_ctx->nb_slaves; j++) {
+			rr_qp_ctx->slaves[j].dev_id =
+					sched_ctx->slaves[j].dev_id;
+			rr_qp_ctx->slaves[j].qp_id = qp_id;
+		}
+
+		rr_qp_ctx->nb_slaves = sched_ctx->nb_slaves;
+
+		rr_qp_ctx->last_enq_slave_idx = 0;
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+		if (sched_ctx->reordering_enabled) {
+			qp_ctx->schedule_enqueue = &schedule_enqueue_ordering;
+			qp_ctx->schedule_dequeue = &schedule_dequeue_ordering;
+		} else {
+			qp_ctx->schedule_enqueue = &schedule_enqueue;
+			qp_ctx->schedule_dequeue = &schedule_dequeue;
+		}
+	}
+
+	return 0;
+}
+
+static int
+scheduler_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+static int
+scheduler_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+	struct rr_scheduler_qp_ctx *rr_qp_ctx;
+
+	rr_qp_ctx = rte_zmalloc_socket(NULL, sizeof(*rr_qp_ctx), 0,
+			rte_socket_id());
+	if (!rr_qp_ctx) {
+		CS_LOG_ERR("failed allocate memory for private queue pair");
+		return -ENOMEM;
+	}
+
+	qp_ctx->private_qp_ctx = (void *)rr_qp_ctx;
+
+	return 0;
+}
+
+static int
+scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+struct rte_cryptodev_scheduler_ops ops = {
+	slave_attach,
+	slave_detach,
+	scheduler_start,
+	scheduler_stop,
+	scheduler_config_qp,
+	scheduler_create_private_ctx
+};
+
+struct rte_cryptodev_scheduler scheduler = {
+		.name = "roundrobin-scheduler",
+		.description = "scheduler which will round robin burst across "
+				"slave crypto devices",
+		.options = NULL,
+		.ops = &ops,
+		.ioctls = NULL
+};
+
+
+struct rte_cryptodev_scheduler *roundrobin_scheduler = &scheduler;
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 8f63e8f..61a3ce0 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -66,6 +66,7 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ZUC_PMD		crypto_zuc
 /**< KASUMI PMD device name */
+#define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -77,6 +78,9 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
+	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+
+	RTE_CRYPTODEV_TYPE_COUNT
 };
 
 extern const char **rte_cyptodev_names;
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f75f0e2..ee34688 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -70,7 +70,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT)           += -lrte_port
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PDUMP)          += -lrte_pdump
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)    += -lrte_distributor
-_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_METER)          += -lrte_meter
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED)          += -lrte_sched
@@ -98,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE)        += -lrte_cmdline
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
+_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lrte_pmd_xenvirt -lxenstore
@@ -145,6 +145,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -lrte_pmd_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER)  += -lrte_pmd_crypto_scheduler
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v4] Scheduler: add driver for scheduler crypto pmd
  2017-01-03 17:16 ` [dpdk-dev] [PATCH v3] " Fan Zhang
@ 2017-01-17 10:57   ` Fan Zhang
  2017-01-17 13:19     ` [dpdk-dev] [PATCH v5] crypto/scheduler: " Fan Zhang
  0 siblings, 1 reply; 42+ messages in thread
From: Fan Zhang @ 2017-01-17 10:57 UTC (permalink / raw)
  To: dev; +Cc: pablo.de.lara.guarch, Declan Doherty

This patch provides the initial implementation of the scheduler poll mode
driver using DPDK cryptodev framework.

Scheduler PMD is used to schedule and enqueue the crypto ops to the
hardware and/or software crypto devices attached to it (slaves). The
dequeue operation from the slave(s), and the possible dequeued crypto op
reordering, are then carried out by the scheduler.

As the initial version, the scheduler PMD currently supports only the
Round-robin mode, which distributes the enqueued burst of crypto ops
among its slaves in a round-robin manner. This mode may help to fill
the throughput gap between the physical core and the existing cryptodevs
to increase the overall performance. Moreover, the scheduler PMD
provides APIs for users to create their own schedulers.

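For illustration, a minimal sketch of driving the scheduler through
these APIs is shown below. The function names are those exported in the
version map of this patch; the slave id variables and the round-robin
mode enum name are assumptions for the example, and error checks are
omitted:

  /* sched_id is the scheduler's cryptodev id, e.g. obtained with
   * rte_cryptodev_get_dev_id(); slave ids are assumed */
  rte_cryptodev_scheduler_slave_attach(sched_id, aesni_mb_1_id);
  rte_cryptodev_scheduler_slave_attach(sched_id, aesni_mb_2_id);

  /* select the only scheduling mode in this version (enum name assumed) */
  rte_crpytodev_scheduler_mode_set(sched_id, CDEV_SCHED_MODE_ROUNDROBIN);

  /* optionally enable dequeued crypto op reordering */
  rte_cryptodev_scheduler_ordering_set(sched_id, 1);
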
Build instructions:
To build DPDK with the CRYPTO_SCHEDULER_PMD, the user is required to set
CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base

Notice:
- The scheduler PMD shares the same EAL command-line options as other
  cryptodevs. However, apart from socket_id, the rest of the cryptodev
  options are ignored. The scheduler PMD's max_nb_queue_pairs and
  max_nb_sessions options are set to the minimum values found among the
  attached slaves'. For example, if a scheduler cryptodev has 2
  cryptodevs attached with max_nb_queue_pairs of 2 and 8, respectively,
  the scheduler cryptodev's max_nb_queue_pairs will be automatically
  updated to 2.

- In addition, an extra option "slave" is added. The user can attach one
  or more slave cryptodevs initially by passing their names with this
  option. Here is an example:

  ... --vdev "crypto_aesni_mb_pmd,name=aesni_mb_1" --vdev "crypto_aesni_
  mb_pmd,name=aesni_mb_2" --vdev "crypto_scheduler_pmd,slave=aesni_mb_1,
  slave=aesni_mb_2" ...

  Remember that the software cryptodevs to be attached must be declared
  before the scheduler PMD; otherwise the scheduler will fail to locate
  the slave(s) and report an error.

- The scheduler cryptodev cannot be started unless the scheduling mode
  is set and at least one slave is attached. Also, to configure the
  scheduler at run time, e.g. to attach/detach slave(s), change the
  scheduling mode, or enable/disable crypto op ordering, one must stop
  the scheduler first, otherwise an error will be returned; see the
  sketch below.

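A minimal sketch of such a run-time reconfiguration (same assumptions
as the sketch above) would be:

  rte_cryptodev_stop(sched_id);              /* must stop first */
  rte_cryptodev_scheduler_slave_detach(sched_id, slave_id);
  rte_cryptodev_scheduler_ordering_set(sched_id, 0);
  rte_cryptodev_start(sched_id);             /* resume scheduling */
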
Changes in v4:
Fixed a few bugs.
Added slave EAL command-line option support.

Changes in v3:
Fixed config/common_base.

Changes in v2:
New approaches in API to suit future scheduling modes.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_base                                 |   6 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/scheduler/Makefile                  |  66 +++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 461 +++++++++++++++++++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 167 +++++++
 .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++
 .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 +
 drivers/crypto/scheduler/scheduler_pmd.c           | 360 +++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_ops.c       | 489 +++++++++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_private.h   | 115 +++++
 drivers/crypto/scheduler/scheduler_roundrobin.c    | 417 ++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |   4 +
 mk/rte.app.mk                                      |   3 +-
 13 files changed, 2171 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/scheduler/Makefile
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
 create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h
 create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c

diff --git a/config/common_base b/config/common_base
index 8e9dcfa..3d33a2d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -409,6 +409,12 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
 
 #
+# Compile PMD for Crypto Scheduler device
+#
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=n
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
+
+#
 # Compile PMD for ZUC device
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 745c614..cdd3c94 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -38,6 +38,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/scheduler/Makefile b/drivers/crypto/scheduler/Makefile
new file mode 100644
index 0000000..a7c5026
--- /dev/null
+++ b/drivers/crypto/scheduler/Makefile
@@ -0,0 +1,66 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2015 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_crypto_scheduler.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_crypto_scheduler_version.map
+
+#
+# Export include files
+#
+SYMLINK-y-include += rte_cryptodev_scheduler_operations.h
+SYMLINK-y-include += rte_cryptodev_scheduler.h
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += rte_cryptodev_scheduler.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_roundrobin.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_ring
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_reorder
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
new file mode 100644
index 0000000..89ff11d
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -0,0 +1,461 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_jhash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_cryptodev_scheduler.h>
+#include <rte_malloc.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Update the scheduler PMD's capabilities with an attaching device's
+ *  capabilities.
+ *  After each device is attached, the scheduler's capabilities must
+ *  remain the common capability set of all attached slaves.
+ **/
+static uint32_t
+sync_caps(struct rte_cryptodev_capabilities *caps,
+		uint32_t nb_caps,
+		const struct rte_cryptodev_capabilities *slave_caps)
+{
+	uint32_t sync_nb_caps = nb_caps, nb_slave_caps = 0;
+	uint32_t i;
+
+	while (slave_caps[nb_slave_caps].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
+		nb_slave_caps++;
+
+	if (nb_caps == 0) {
+		rte_memcpy(caps, slave_caps, sizeof(*caps) * nb_slave_caps);
+		return nb_slave_caps;
+	}
+
+	for (i = 0; i < sync_nb_caps; i++) {
+		struct rte_cryptodev_capabilities *cap = &caps[i];
+		uint32_t j;
+
+		for (j = 0; j < nb_slave_caps; j++) {
+			const struct rte_cryptodev_capabilities *s_cap =
+					&slave_caps[j];
+
+			if (s_cap->op != cap->op || s_cap->sym.xform_type !=
+					cap->sym.xform_type)
+				continue;
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_AUTH) {
+				if (s_cap->sym.auth.algo !=
+						cap->sym.auth.algo)
+					continue;
+
+				cap->sym.auth.digest_size.min =
+					s_cap->sym.auth.digest_size.min <
+					cap->sym.auth.digest_size.min ?
+					s_cap->sym.auth.digest_size.min :
+					cap->sym.auth.digest_size.min;
+				cap->sym.auth.digest_size.max =
+					s_cap->sym.auth.digest_size.max <
+					cap->sym.auth.digest_size.max ?
+					s_cap->sym.auth.digest_size.max :
+					cap->sym.auth.digest_size.max;
+
+			}
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				if (s_cap->sym.cipher.algo !=
+						cap->sym.cipher.algo)
+					continue;
+
+			/* a common cap is found, keep it */
+			break;
+		}
+
+		if (j < nb_slave_caps)
+			continue;
+
+		/* remove an uncommon cap from the array */
+		for (j = i; j < sync_nb_caps - 1; j++)
+			rte_memcpy(&caps[j], &caps[j+1], sizeof(*cap));
+
+		memset(&caps[sync_nb_caps - 1], 0, sizeof(*cap));
+		sync_nb_caps--;
+	}
+
+	return sync_nb_caps;
+}
+
+static int
+update_scheduler_capability(struct scheduler_ctx *sched_ctx)
+{
+	struct rte_cryptodev_capabilities tmp_caps[256] = {0};
+	uint32_t nb_caps = 0, i;
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		nb_caps = sync_caps(tmp_caps, nb_caps, dev_info.capabilities);
+		if (nb_caps == 0)
+			return -1;
+	}
+
+	sched_ctx->capabilities = rte_zmalloc_socket(NULL,
+			sizeof(struct rte_cryptodev_capabilities) *
+			(nb_caps + 1), 0, SOCKET_ID_ANY);
+	if (!sched_ctx->capabilities)
+		return -ENOMEM;
+
+	rte_memcpy(sched_ctx->capabilities, tmp_caps,
+			sizeof(struct rte_cryptodev_capabilities) * nb_caps);
+
+	return 0;
+}
+
+static void
+update_scheduler_feature_flag(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	dev->feature_flags = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		dev->feature_flags |= dev_info.feature_flags;
+	}
+}
+
+static void
+update_max_nb_qp(struct scheduler_ctx *sched_ctx)
+{
+	uint32_t i;
+	uint32_t max_nb_qp;
+
+	if (!sched_ctx->nb_slaves)
+		return;
+
+	max_nb_qp = UINT32_MAX;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+		max_nb_qp = dev_info.max_nb_queue_pairs < max_nb_qp ?
+				dev_info.max_nb_queue_pairs : max_nb_qp;
+	}
+
+	sched_ctx->max_nb_queue_pairs = max_nb_qp;
+}
+
+/** Attach a device to the scheduler. */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	struct scheduler_slave *slave;
+	struct rte_cryptodev_info dev_info;
+	uint32_t i;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+	if (sched_ctx->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves attached");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++)
+		if (sched_ctx->slaves[i].dev_id == slave_id) {
+			CS_LOG_ERR("Slave already added");
+			return -ENOTSUP;
+		}
+
+	slave = &sched_ctx->slaves[sched_ctx->nb_slaves];
+
+	rte_cryptodev_info_get(slave_id, &dev_info);
+
+	slave->dev_id = slave_id;
+	slave->dev_type = dev_info.dev_type;
+	sched_ctx->nb_slaves++;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		slave->dev_id = 0;
+		slave->dev_type = 0;
+		sched_ctx->nb_slaves--;
+
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	uint32_t i, slave_pos;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	for (slave_pos = 0; slave_pos < sched_ctx->nb_slaves; slave_pos++)
+		if (sched_ctx->slaves[slave_pos].dev_id == slave_id)
+			break;
+	if (slave_pos == sched_ctx->nb_slaves) {
+		CS_LOG_ERR("Cannot find slave");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->ops.slave_detach(dev, slave_id) < 0) {
+		CS_LOG_ERR("Failed to detach slave");
+		return -ENOTSUP;
+	}
+
+	for (i = slave_pos; i < sched_ctx->nb_slaves - 1; i++) {
+		memcpy(&sched_ctx->slaves[i], &sched_ctx->slaves[i+1],
+				sizeof(struct scheduler_slave));
+	}
+	memset(&sched_ctx->slaves[sched_ctx->nb_slaves - 1], 0,
+			sizeof(struct scheduler_slave));
+	sched_ctx->nb_slaves--;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	int ret;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (mode == sched_ctx->mode && mode != CDEV_SCHED_MODE_USERDEFINED)
+		return 0;
+
+	switch (mode) {
+	case CDEV_SCHED_MODE_ROUNDROBIN:
+		if (rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
+				roundrobin_scheduler) < 0) {
+			CS_LOG_ERR("Failed to load scheduler");
+			return -1;
+		}
+		break;
+	case CDEV_SCHED_MODE_MIGRATION:
+	case CDEV_SCHED_MODE_FALLBACK:
+	default:
+		CS_LOG_ERR("Not yet supported");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->private_ctx) {
+		rte_free(sched_ctx->private_ctx);
+		sched_ctx->private_ctx = NULL;
+	}
+
+	ret = (*sched_ctx->ops.create_private_ctx)(dev);
+	if (ret < 0) {
+		CS_LOG_ERR("Unable to create scheduler private context");
+		return ret;
+	}
+
+	sched_ctx->mode = mode;
+
+	return 0;
+}
+
+enum rte_cryptodev_scheduler_mode
+rte_cryptodev_scheduler_mode_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return sched_ctx->mode;
+}
+
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	sched_ctx->reordering_enabled = enable_reorder;
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return (int)sched_ctx->reordering_enabled;
+}
+
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	/* check device stopped */
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Device should be stopped before loading scheduler");
+		return -EBUSY;
+	}
+
+	strncpy(sched_ctx->name, scheduler->name,
+			RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN - 1);
+	strncpy(sched_ctx->description, scheduler->description,
+			RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN - 1);
+
+	/* load scheduler instance operations functions */
+	sched_ctx->ops.config_queue_pair = scheduler->ops->config_queue_pair;
+	sched_ctx->ops.create_private_ctx = scheduler->ops->create_private_ctx;
+	sched_ctx->ops.scheduler_start = scheduler->ops->scheduler_start;
+	sched_ctx->ops.scheduler_stop = scheduler->ops->scheduler_stop;
+	sched_ctx->ops.slave_attach = scheduler->ops->slave_attach;
+	sched_ctx->ops.slave_detach = scheduler->ops->slave_detach;
+
+	return 0;
+}
+
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
new file mode 100644
index 0000000..b57c690
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
@@ -0,0 +1,167 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_H
+#define _RTE_CRYPTO_SCHEDULER_H
+
+#include <rte_cryptodev_scheduler_operations.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Crypto scheduler PMD operation modes
+ */
+enum rte_cryptodev_scheduler_mode {
+	CDEV_SCHED_MODE_NOT_SET = 0,
+	CDEV_SCHED_MODE_USERDEFINED,
+	CDEV_SCHED_MODE_ROUNDROBIN,
+	CDEV_SCHED_MODE_MIGRATION,
+	CDEV_SCHED_MODE_FALLBACK,
+	CDEV_SCHED_MODE_MULTICORE,
+
+	CDEV_SCHED_MODE_COUNT /* number of modes */
+};
+
+#define RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN	(64)
+#define RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN	(256)
+
+struct rte_cryptodev_scheduler;
+
+/**
+ * Load a user defined scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	scheduler	Pointer to the user defined scheduler
+ *
+ * @return
+ *	0 if loading successful, negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler);
+
+/**
+ * Attach a pre-configured crypto device to the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	Crypto device ID to be attached
+ *
+ * @return
+ *	0 if attaching successful, negative int if otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Detach an attached crypto device from the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	Crypto device ID to be detached
+ *
+ * @return
+ *	0 if detaching successful, negative int if otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Set the scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	mode		The scheduling mode
+ *
+ * @return
+ *	0 if setting successful, negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode);
+
+/**
+ * Get the current scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	The scheduling mode currently in use
+ */
+enum rte_cryptodev_scheduler_mode
+rte_cryptodev_scheduler_mode_get(uint8_t scheduler_id);
+
+/**
+ * Set the crypto ops reordering feature on/off
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	enable_reorder	Set the crypto op reordering feature
+ *				0: disable reordering
+ *				1: enable reordering
+ *
+ * @return
+ *	0 if setting successful, negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder);
+
+/**
+ * Get the current crypto ops reordering feature
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	0 if reordering is disabled
+ *	1 if reordering is enabled
+ *	negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_enqueue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_dequeue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+struct rte_cryptodev_scheduler {
+	const char *name;
+	const char *description;
+
+	struct rte_cryptodev_scheduler_ops *ops;
+};
+
+extern struct rte_cryptodev_scheduler *roundrobin_scheduler;
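+
+/*
+ * Minimal usage sketch (illustrative only; the slave device IDs and
+ * the vdev name used here are assumptions): attach two pre-configured
+ * slave cryptodevs to a scheduler vdev, then enable round-robin mode
+ * with crypto op reordering:
+ *
+ *	int sched_id = rte_cryptodev_get_dev_id("crypto_scheduler_pmd");
+ *
+ *	rte_cryptodev_scheduler_slave_attach(sched_id, 0);
+ *	rte_cryptodev_scheduler_slave_attach(sched_id, 1);
+ *	rte_cryptodev_scheduler_mode_set(sched_id,
+ *			CDEV_SCHED_MODE_ROUNDROBIN);
+ *	rte_cryptodev_scheduler_ordering_set(sched_id, 1);
+ */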
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_H */
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
new file mode 100644
index 0000000..ab8595b
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
@@ -0,0 +1,71 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+#define _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int (*rte_cryptodev_scheduler_slave_attach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+typedef int (*rte_cryptodev_scheduler_slave_detach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+
+typedef int (*rte_cryptodev_scheduler_start_t)(struct rte_cryptodev *dev);
+typedef int (*rte_cryptodev_scheduler_stop_t)(struct rte_cryptodev *dev);
+
+typedef int (*rte_cryptodev_scheduler_config_queue_pair)(
+		struct rte_cryptodev *dev, uint16_t qp_id);
+
+typedef int (*rte_cryptodev_scheduler_create_private_ctx)(
+		struct rte_cryptodev *dev);
+
+struct rte_cryptodev_scheduler_ops {
+	rte_cryptodev_scheduler_slave_attach_t slave_attach;
+	rte_cryptodev_scheduler_slave_detach_t slave_detach;
+
+	rte_cryptodev_scheduler_start_t scheduler_start;
+	rte_cryptodev_scheduler_stop_t scheduler_stop;
+
+	rte_cryptodev_scheduler_config_queue_pair config_queue_pair;
+
+	rte_cryptodev_scheduler_create_private_ctx create_private_ctx;
+};
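+
+/*
+ * Illustrative sketch only: a user defined scheduler provides its
+ * callbacks through this structure; the my_* names below are
+ * assumptions, not part of this patch:
+ *
+ *	static struct rte_cryptodev_scheduler_ops my_sched_ops = {
+ *		my_slave_attach, my_slave_detach,
+ *		my_scheduler_start, my_scheduler_stop,
+ *		my_config_queue_pair, my_create_private_ctx
+ *	};
+ */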
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_OPERATIONS_H */
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
new file mode 100644
index 0000000..0510f68
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -0,0 +1,12 @@
+DPDK_17.02 {
+	global:
+
+	rte_cryptodev_scheduler_load_user_scheduler;
+	rte_cryptodev_scheduler_slave_attach;
+	rte_cryptodev_scheduler_slave_detach;
+	rte_cryptodev_scheduler_mode_set;
+	rte_cryptodev_scheduler_mode_get;
+	rte_cryptodev_scheduler_ordering_set;
+	rte_cryptodev_scheduler_ordering_get;
+
+} DPDK_17.02;
\ No newline at end of file
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
new file mode 100644
index 0000000..2485797
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -0,0 +1,360 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_vdev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+#include "scheduler_pmd_private.h"
+
+struct scheduler_init_params {
+	struct rte_crypto_vdev_init_params def_p;
+	uint32_t nb_slaves;
+	uint8_t slaves[MAX_SLAVES_NUM];
+};
+
+#define RTE_CRYPTODEV_VDEV_NAME			("name")
+#define RTE_CRYPTODEV_VDEV_SLAVE		("slave")
+#define RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG	("max_nb_queue_pairs")
+#define RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG	("max_nb_sessions")
+#define RTE_CRYPTODEV_VDEV_SOCKET_ID		("socket_id")
+
+const char *scheduler_valid_params[] = {
+	RTE_CRYPTODEV_VDEV_NAME,
+	RTE_CRYPTODEV_VDEV_SLAVE,
+	RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
+	RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
+	RTE_CRYPTODEV_VDEV_SOCKET_ID
+};
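+
+/*
+ * Example vdev argument string (illustrative; the slave value must
+ * name an already-created cryptodev):
+ *   --vdev "crypto_scheduler_pmd,slave=<cryptodev name>,socket_id=0"
+ */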
+
+static uint16_t
+scheduler_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_enqueue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static uint16_t
+scheduler_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_dequeue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static int
+attach_init_slaves(uint8_t scheduler_id,
+		const uint8_t *slaves, const uint8_t nb_slaves)
+{
+	uint8_t i;
+
+	for (i = 0; i < nb_slaves; i++) {
+		struct rte_cryptodev *dev =
+				rte_cryptodev_pmd_get_dev(slaves[i]);
+		int status = rte_cryptodev_scheduler_slave_attach(
+				scheduler_id, slaves[i]);
+
+		if (status < 0 || !dev) {
+			CS_LOG_ERR("Failed to attach slave cryptodev "
+					"%u.\n", slaves[i]);
+			return status;
+		}
+
+		RTE_LOG(INFO, PMD, "  Attached slave cryptodev %s\n",
+				dev->data->name);
+	}
+
+	return 0;
+}
+
+static int
+cryptodev_scheduler_create(const char *name,
+	struct scheduler_init_params *init_params)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (init_params->def_p.name[0] == '\0') {
+		int ret = rte_cryptodev_pmd_create_dev_name(
+				init_params->def_p.name,
+				RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+
+		if (ret < 0) {
+			CS_LOG_ERR("failed to create unique name");
+			return ret;
+		}
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(init_params->def_p.name,
+			sizeof(struct scheduler_ctx),
+			init_params->def_p.socket_id);
+	if (dev == NULL) {
+		CS_LOG_ERR("driver %s: failed to create cryptodev vdev",
+			name);
+		return -EFAULT;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+	dev->dev_ops = rte_crypto_scheduler_pmd_ops;
+
+	dev->enqueue_burst = scheduler_enqueue_burst;
+	dev->dequeue_burst = scheduler_dequeue_burst;
+
+	sched_ctx = dev->data->dev_private;
+	sched_ctx->max_nb_queue_pairs =
+			init_params->def_p.max_nb_queue_pairs;
+
+	return attach_init_slaves(dev->data->dev_id, init_params->slaves,
+			init_params->nb_slaves);
+}
+
+static int
+cryptodev_scheduler_remove(const char *name)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	dev = rte_cryptodev_pmd_get_named_dev(name);
+	if (dev == NULL)
+		return -EINVAL;
+
+	sched_ctx = dev->data->dev_private;
+
+	if (sched_ctx->nb_slaves) {
+		uint32_t i;
+
+		for (i = 0; i < sched_ctx->nb_slaves; i++)
+			rte_cryptodev_scheduler_slave_detach(dev->data->dev_id,
+					sched_ctx->slaves[i].dev_id);
+	}
+
+	RTE_LOG(INFO, PMD, "Closing Crypto Scheduler device %s on numa "
+		"socket %u\n", name, rte_socket_id());
+
+	return 0;
+}
+
+static uint8_t
+number_of_sockets(void)
+{
+	int sockets = 0;
+	int i;
+	const struct rte_memseg *ms = rte_eal_get_physmem_layout();
+
+	for (i = 0; ((i < RTE_MAX_MEMSEG) && (ms[i].addr != NULL)); i++) {
+		if (sockets < ms[i].socket_id)
+			sockets = ms[i].socket_id;
+	}
+
+	/* Number of sockets = maximum socket_id + 1 */
+	return ++sockets;
+}
+
+/** Parse integer from integer argument */
+static int
+parse_integer_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	int *i = (int *) extra_args;
+
+	*i = atoi(value);
+	if (*i < 0) {
+		CS_LOG_ERR("Argument has to be non-negative.\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse name */
+static int
+parse_name_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	struct rte_crypto_vdev_init_params *params = extra_args;
+
+	if (strlen(value) >= RTE_CRYPTODEV_NAME_MAX_LEN - 1) {
+		CS_LOG_ERR("Invalid name %s, should be less than "
+				"%u bytes.\n", value,
+				RTE_CRYPTODEV_NAME_MAX_LEN - 1);
+		return -1;
+	}
+
+	strncpy(params->name, value, RTE_CRYPTODEV_NAME_MAX_LEN);
+
+	return 0;
+}
+
+/** Parse slave */
+static int
+parse_slave_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	struct scheduler_init_params *param = extra_args;
+	struct rte_cryptodev *dev =
+			rte_cryptodev_pmd_get_named_dev(value);
+
+	if (!dev) {
+		RTE_LOG(ERR, PMD, "Invalid slave name %s.\n", value);
+		return -1;
+	}
+
+	if (param->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves.\n");
+		return -1;
+	}
+
+	param->slaves[param->nb_slaves] = dev->data->dev_id;
+	param->nb_slaves++;
+
+	return 0;
+}
+
+static int
+scheduler_parse_init_params(struct scheduler_init_params *params,
+		const char *input_args)
+{
+	struct rte_kvargs *kvlist = NULL;
+	int ret = 0;
+
+	if (params == NULL)
+		return -EINVAL;
+
+	if (input_args) {
+		kvlist = rte_kvargs_parse(input_args,
+				scheduler_valid_params);
+		if (kvlist == NULL)
+			return -1;
+
+		ret = rte_kvargs_process(kvlist,
+				RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
+				&parse_integer_arg,
+				&params->def_p.max_nb_queue_pairs);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist,
+				RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
+				&parse_integer_arg,
+				&params->def_p.max_nb_sessions);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SOCKET_ID,
+				&parse_integer_arg,
+				&params->def_p.socket_id);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_NAME,
+				&parse_name_arg,
+				&params->def_p);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SLAVE,
+				&parse_slave_arg, params);
+		if (ret < 0)
+			goto free_kvlist;
+
+		if (params->def_p.socket_id >= number_of_sockets()) {
+			CS_LOG_ERR("Invalid socket id specified to create "
+				"the virtual crypto device on");
+			ret = -EINVAL;
+			goto free_kvlist;
+		}
+	}
+
+free_kvlist:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static int
+cryptodev_scheduler_probe(const char *name, const char *input_args)
+{
+	struct scheduler_init_params init_params = {
+		.def_p = {
+			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+			rte_socket_id(),
+			""
+		},
+		.nb_slaves = 0,
+		.slaves = {0}
+	};
+
+	if (scheduler_parse_init_params(&init_params, input_args) < 0)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.def_p.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.def_p.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.def_p.max_nb_sessions);
+	if (init_params.def_p.name[0] != '\0')
+		RTE_LOG(INFO, PMD, "  User defined name = %s\n",
+			init_params.def_p.name);
+
+	return cryptodev_scheduler_create(name, &init_params);
+}
+
+static struct rte_vdev_driver cryptodev_scheduler_pmd_drv = {
+	.probe = cryptodev_scheduler_probe,
+	.remove = cryptodev_scheduler_remove
+};
+
+RTE_PMD_REGISTER_VDEV(CRYPTODEV_NAME_SCHEDULER_PMD,
+	cryptodev_scheduler_pmd_drv);
+RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
+	"name=<string> "
+	"slave=<name> "
+	"max_nb_queue_pairs=<int> "
+	"max_nb_sessions=<int> "
+	"socket_id=<int>");
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
new file mode 100644
index 0000000..23b8498
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -0,0 +1,489 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_config.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_reorder.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Configure device */
+static int
+scheduler_pmd_config(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_configure)(slave_dev);
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
+static int
+update_reorder_buff(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (sched_ctx->reordering_enabled) {
+		char reorder_buff_name[RTE_CRYPTODEV_NAME_MAX_LEN];
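+		/* size the buffer to hold one full burst per slave */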
+		uint32_t buff_size = sched_ctx->nb_slaves * PER_SLAVE_BUFF_SIZE;
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (!buff_size)
+			return 0;
+
+		if (snprintf(reorder_buff_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"%s_rb_%u_%u", RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
+			dev->data->dev_id, qp_id) < 0) {
+			CS_LOG_ERR("failed to create unique reorder buffer "
+					"name");
+			return -ENOMEM;
+		}
+
+		qp_ctx->reorder_buf = rte_reorder_create(reorder_buff_name,
+				rte_socket_id(), buff_size);
+		if (!qp_ctx->reorder_buf) {
+			CS_LOG_ERR("failed to create reorder buffer");
+			return -ENOMEM;
+		}
+	} else {
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+	}
+
+	return 0;
+}
+
+/** Start device */
+static int
+scheduler_pmd_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	if (dev->data->dev_started)
+		return 0;
+
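+	/* (re)build the per-qp reorder buffers first so they match the
+	 * current reordering setting and slave count
+	 */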
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = update_reorder_buff(dev, i);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to update reorder buffer");
+			return ret;
+		}
+	}
+
+	if (sched_ctx->mode == CDEV_SCHED_MODE_NOT_SET) {
+		CS_LOG_ERR("Scheduler mode is not set");
+		return -1;
+	}
+
+	if (!sched_ctx->nb_slaves) {
+		CS_LOG_ERR("No slave in the scheduler");
+		return -1;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.slave_attach, -ENOTSUP);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if ((*sched_ctx->ops.slave_attach)(dev, slave_dev_id) < 0) {
+			CS_LOG_ERR("Failed to attach slave");
+			return -ENOTSUP;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.scheduler_start, -ENOTSUP);
+
+	if ((*sched_ctx->ops.scheduler_start)(dev) < 0) {
+		CS_LOG_ERR("Scheduler start failed");
+		return -1;
+	}
+
+	/* start all slaves */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_start)(slave_dev);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to start slave dev %u",
+					slave_dev_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/** Stop device */
+static void
+scheduler_pmd_stop(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	if (!dev->data->dev_started)
+		return;
+
+	/* stop all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->dev_stop)(slave_dev);
+	}
+
+	if (*sched_ctx->ops.scheduler_stop)
+		(*sched_ctx->ops.scheduler_stop)(dev);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if (*sched_ctx->ops.slave_detach)
+			(*sched_ctx->ops.slave_detach)(dev, slave_dev_id);
+	}
+}
+
+/** Close device */
+static int
+scheduler_pmd_close(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	/* the dev should be stopped before being closed */
+	if (dev->data->dev_started)
+		return -EBUSY;
+
+	/* close all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_close)(slave_dev);
+		if (ret < 0)
+			return ret;
+	}
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (qp_ctx->private_qp_ctx) {
+			rte_free(qp_ctx->private_qp_ctx);
+			qp_ctx->private_qp_ctx = NULL;
+		}
+	}
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	return 0;
+}
+
+/** Get device statistics */
+static void
+scheduler_pmd_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+		struct rte_cryptodev_stats slave_stats = {0};
+
+		(*slave_dev->dev_ops->stats_get)(slave_dev, &slave_stats);
+
+		stats->enqueued_count += slave_stats.enqueued_count;
+		stats->dequeued_count += slave_stats.dequeued_count;
+
+		stats->enqueue_err_count += slave_stats.enqueue_err_count;
+		stats->dequeue_err_count += slave_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+scheduler_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->stats_reset)(slave_dev);
+	}
+}
+
+/** Get device info */
+static void
+scheduler_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t max_nb_sessions = sched_ctx->nb_slaves ? UINT32_MAX : 0;
+	uint32_t i;
+
+	if (!dev_info)
+		return;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev_info slave_info;
+
+		rte_cryptodev_info_get(slave_dev_id, &slave_info);
+		max_nb_sessions = slave_info.sym.max_nb_sessions <
+				max_nb_sessions ?
+				slave_info.sym.max_nb_sessions :
+				max_nb_sessions;
+	}
+
+	dev_info->dev_type = dev->dev_type;
+	dev_info->feature_flags = dev->feature_flags;
+	dev_info->capabilities = sched_ctx->capabilities;
+	dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
+	dev_info->sym.max_nb_sessions = max_nb_sessions;
+}
+
+/** Release queue pair */
+static int
+scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (!qp_ctx)
+		return 0;
+
+	if (qp_ctx->reorder_buf)
+		rte_reorder_free(qp_ctx->reorder_buf);
+	if (qp_ctx->private_qp_ctx)
+		rte_free(qp_ctx->private_qp_ctx);
+
+	rte_free(qp_ctx);
+	dev->data->queue_pairs[qp_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	__rte_unused const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx;
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	if (snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"CRYPTO_SCHED PMD %u QP %u",
+			dev->data->dev_id, qp_id) < 0) {
+		CS_LOG_ERR("Failed to create unique queue pair name");
+		return -EFAULT;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		scheduler_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp_ctx = rte_zmalloc_socket(name, sizeof(*qp_ctx), RTE_CACHE_LINE_SIZE,
+			socket_id);
+	if (qp_ctx == NULL)
+		return -ENOMEM;
+
+	dev->data->queue_pairs[qp_id] = qp_ctx;
+
+	if (*sched_ctx->ops.config_queue_pair) {
+		if ((*sched_ctx->ops.config_queue_pair)(dev, qp_id) < 0) {
+			CS_LOG_ERR("Unable to configure queue pair");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/** Start queue pair */
+static int
+scheduler_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+scheduler_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+scheduler_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+static uint32_t
+scheduler_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct scheduler_session);
+}
+
+static int
+config_slave_sess(struct scheduler_ctx *sched_ctx,
+		struct rte_crypto_sym_xform *xform,
+		struct scheduler_session *sess,
+		uint32_t create)
+{
+	uint32_t i;
+
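+	/* create != 0: open a session on every slave for this xform;
+	 * create == 0: tear down any slave sessions (also used to roll
+	 * back on partial failure)
+	 */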
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct scheduler_slave *slave = &sched_ctx->slaves[i];
+		struct rte_cryptodev *dev = &rte_cryptodev_globals->
+				devs[slave->dev_id];
+
+		if (sess->sessions[i]) {
+			if (create)
+				continue;
+			/* !create */
+			(*dev->dev_ops->session_clear)(dev,
+					(void *)sess->sessions[i]);
+			sess->sessions[i] = NULL;
+		} else {
+			if (!create)
+				continue;
+			/* create */
+			sess->sessions[i] =
+					rte_cryptodev_sym_session_create(
+							slave->dev_id, xform);
+			if (!sess->sessions[i]) {
+				config_slave_sess(sched_ctx, NULL, sess, 0);
+				return -1;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+scheduler_pmd_session_clear(struct rte_cryptodev *dev,
+	void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	config_slave_sess(sched_ctx, NULL, sess, 0);
+
+	memset(sess, 0, sizeof(struct scheduler_session));
+}
+
+static void *
+scheduler_pmd_session_configure(struct rte_cryptodev *dev,
+	struct rte_crypto_sym_xform *xform, void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	if (config_slave_sess(sched_ctx, xform, sess, 1) < 0) {
+		CS_LOG_ERR("unable to configure sym session");
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_ops scheduler_pmd_ops = {
+		.dev_configure		= scheduler_pmd_config,
+		.dev_start		= scheduler_pmd_start,
+		.dev_stop		= scheduler_pmd_stop,
+		.dev_close		= scheduler_pmd_close,
+
+		.stats_get		= scheduler_pmd_stats_get,
+		.stats_reset		= scheduler_pmd_stats_reset,
+
+		.dev_infos_get		= scheduler_pmd_info_get,
+
+		.queue_pair_setup	= scheduler_pmd_qp_setup,
+		.queue_pair_release	= scheduler_pmd_qp_release,
+		.queue_pair_start	= scheduler_pmd_qp_start,
+		.queue_pair_stop	= scheduler_pmd_qp_stop,
+		.queue_pair_count	= scheduler_pmd_qp_count,
+
+		.session_get_size	= scheduler_pmd_session_get_size,
+		.session_configure	= scheduler_pmd_session_configure,
+		.session_clear		= scheduler_pmd_session_clear,
+};
+
+struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
new file mode 100644
index 0000000..93d620b
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -0,0 +1,115 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _SCHEDULER_PMD_PRIVATE_H
+#define _SCHEDULER_PMD_PRIVATE_H
+
+#include <rte_hash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+/** Maximum number of slave devices that can be attached to one scheduler */
+#ifndef MAX_SLAVES_NUM
+#define MAX_SLAVES_NUM				(8)
+#endif
+
+#define PER_SLAVE_BUFF_SIZE			(256)
+
+#define CS_LOG_ERR(fmt, args...)					\
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",		\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTO_SCHEDULER_DEBUG
+#define CS_LOG_INFO(fmt, args...)					\
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#define CS_LOG_DBG(fmt, args...)					\
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+#else
+#define CS_LOG_INFO(fmt, args...)
+#define CS_LOG_DBG(fmt, args...)
+#endif
+
+struct scheduler_slave {
+	uint8_t dev_id;
+	uint16_t qp_id;
+	uint32_t nb_inflight_cops;
+
+	enum rte_cryptodev_type dev_type;
+};
+
+struct scheduler_ctx {
+	void *private_ctx;
+	/**< private scheduler context pointer */
+
+	struct rte_cryptodev_capabilities *capabilities;
+	uint32_t nb_capabilities;
+
+	uint32_t max_nb_queue_pairs;
+
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	uint32_t nb_slaves;
+
+	enum rte_cryptodev_scheduler_mode mode;
+
+	struct rte_cryptodev_scheduler_ops ops;
+
+	uint8_t reordering_enabled;
+
+	char name[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN];
+} __rte_cache_aligned;
+
+struct scheduler_qp_ctx {
+	void *private_qp_ctx;
+
+	rte_cryptodev_scheduler_burst_enqueue_t schedule_enqueue;
+	rte_cryptodev_scheduler_burst_dequeue_t schedule_dequeue;
+
+	struct rte_reorder_buffer *reorder_buf;
+	uint32_t seqn;
+} __rte_cache_aligned;
+
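+/** per-slave session handles; entries follow sched_ctx->slaves[] order */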
+struct scheduler_session {
+	struct rte_cryptodev_sym_session *sessions[MAX_SLAVES_NUM];
+};
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
+
+#endif /* _SCHEDULER_PMD_PRIVATE_H */
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
new file mode 100644
index 0000000..abdacc6
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -0,0 +1,417 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_scheduler_operations.h>
+
+#include "scheduler_pmd_private.h"
+
+struct rr_scheduler_qp_ctx {
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	uint32_t nb_slaves;
+
+	uint32_t last_enq_slave_idx;
+	uint32_t last_deq_slave_idx;
+};
+
+static uint16_t
+schedule_enqueue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++)
+		rte_prefetch0(ops[i]->sym->session);
+
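+	/* handle ops four at a time, prefetching the sessions of the
+	 * next four; the conservative (nb_ops - 8) bound leaves the
+	 * tail to the scalar loop below
+	 */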
+	for (i = 0; i < nb_ops - 8; i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->session);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_enq_slave_idx = 0;
+
+	return processed_ops;
+}
+
+static uint16_t
+schedule_enqueue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			gen_qp_ctx->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++) {
+		rte_prefetch0(ops[i]->sym->session);
+		rte_prefetch0(ops[i]->sym->m_src);
+	}
+
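+	/* tag each op's source mbuf with a sequence number so the
+	 * dequeue side can restore the original order through the
+	 * reorder buffer
+	 */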
+	for (i = 0; i < nb_ops - 8; i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 1]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 2]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+		ops[i + 3]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 4]->sym->m_src);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->m_src);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->m_src);
+		rte_prefetch0(ops[i + 7]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_enq_slave_idx = 0;
+
+	return processed_ops;
+}
+
+
+static uint16_t
+schedule_dequeue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	struct scheduler_slave *slave;
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t nb_deq_ops;
+
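+	/* if the current slave has nothing inflight, scan forward for
+	 * the next slave that does; a full wrap-around means there is
+	 * nothing to dequeue
+	 */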
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	last_slave_idx += 1;
+	if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+		last_slave_idx = 0;
+
+	rr_qp_ctx->last_deq_slave_idx = last_slave_idx;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	return nb_deq_ops;
+}
+
+static uint16_t
+schedule_dequeue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = (struct scheduler_qp_ctx *)qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx = (gen_qp_ctx->private_qp_ctx);
+	struct scheduler_slave *slave;
+	struct rte_reorder_buffer *reorder_buff = gen_qp_ctx->reorder_buf;
+	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	uint16_t nb_deq_ops, nb_drained_mbufs;
+	const uint16_t nb_op_ops = nb_ops;
+	struct rte_crypto_op *op_ops[nb_op_ops];
+	struct rte_mbuf *reorder_mbufs[nb_op_ops];
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t i;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, op_ops, nb_ops);
+
+	rr_qp_ctx->last_deq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_deq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	for (i = 0; i < nb_deq_ops && i < 4; i++)
+		rte_prefetch0(op_ops[i]->sym->m_src);
+
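+	/* stash each op pointer at the start of its source mbuf's buffer
+	 * so the op can be recovered after the mbuf passes through the
+	 * reorder buffer
+	 */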
+	for (i = 0; i < nb_deq_ops - 8; i += 4) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		mbuf1 = op_ops[i + 1]->sym->m_src;
+		mbuf2 = op_ops[i + 2]->sym->m_src;
+		mbuf3 = op_ops[i + 3]->sym->m_src;
+
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+		rte_memcpy(mbuf1->buf_addr, &op_ops[i+1], sizeof(op_ops[i+1]));
+		rte_memcpy(mbuf2->buf_addr, &op_ops[i+2], sizeof(op_ops[i+2]));
+		rte_memcpy(mbuf3->buf_addr, &op_ops[i+3], sizeof(op_ops[i+3]));
+
+		rte_reorder_insert(reorder_buff, mbuf0);
+		rte_reorder_insert(reorder_buff, mbuf1);
+		rte_reorder_insert(reorder_buff, mbuf2);
+		rte_reorder_insert(reorder_buff, mbuf3);
+
+		rte_prefetch0(op_ops[i + 4]->sym->m_src);
+		rte_prefetch0(op_ops[i + 5]->sym->m_src);
+		rte_prefetch0(op_ops[i + 6]->sym->m_src);
+		rte_prefetch0(op_ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_deq_ops; i++) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+		rte_reorder_insert(reorder_buff, mbuf0);
+	}
+
+	nb_drained_mbufs = rte_reorder_drain(reorder_buff, reorder_mbufs,
+			nb_ops);
+	for (i = 0; i < nb_drained_mbufs && i < 4; i++)
+		rte_prefetch0(reorder_mbufs[i]);
+
+	for (i = 0; i < nb_drained_mbufs - 8; i += 4) {
+		ops[i] = *(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr;
+		ops[i + 1] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 1]->buf_addr;
+		ops[i + 2] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 2]->buf_addr;
+		ops[i + 3] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 3]->buf_addr;
+
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 1]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 2]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 3]->buf_addr = NULL;
+
+		rte_prefetch0(reorder_mbufs[i + 4]);
+		rte_prefetch0(reorder_mbufs[i + 5]);
+		rte_prefetch0(reorder_mbufs[i + 6]);
+		rte_prefetch0(reorder_mbufs[i + 7]);
+	}
+
+	for (; i < nb_drained_mbufs; i++) {
+		ops[i] = *(struct rte_crypto_op **)
+			reorder_mbufs[i]->buf_addr;
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+	}
+
+	return nb_drained_mbufs;
+}
+
+static int
+slave_attach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+slave_detach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+scheduler_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+		struct rr_scheduler_qp_ctx *rr_qp_ctx =
+				qp_ctx->private_qp_ctx;
+		uint32_t j;
+
+		memset(rr_qp_ctx->slaves, 0, MAX_SLAVES_NUM *
+				sizeof(struct scheduler_slave));
+		/* map this scheduler qp to the same qp index on each slave */
+		for (j = 0; j < sched_ctx->nb_slaves; j++) {
+			rr_qp_ctx->slaves[j].dev_id =
+					sched_ctx->slaves[j].dev_id;
+			rr_qp_ctx->slaves[j].qp_id = i;
+		}
+
+		rr_qp_ctx->nb_slaves = sched_ctx->nb_slaves;
+
+		rr_qp_ctx->last_enq_slave_idx = 0;
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+		if (sched_ctx->reordering_enabled) {
+			qp_ctx->schedule_enqueue = &schedule_enqueue_ordering;
+			qp_ctx->schedule_dequeue = &schedule_dequeue_ordering;
+		} else {
+			qp_ctx->schedule_enqueue = &schedule_enqueue;
+			qp_ctx->schedule_dequeue = &schedule_dequeue;
+		}
+	}
+
+	return 0;
+}
+
+static int
+scheduler_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+static int
+scheduler_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+	struct rr_scheduler_qp_ctx *rr_qp_ctx;
+
+	rr_qp_ctx = rte_zmalloc_socket(NULL, sizeof(*rr_qp_ctx), 0,
+			rte_socket_id());
+	if (!rr_qp_ctx) {
+		CS_LOG_ERR("failed to allocate memory for private queue pair");
+		return -ENOMEM;
+	}
+
+	qp_ctx->private_qp_ctx = (void *)rr_qp_ctx;
+
+	return 0;
+}
+
+static int
+scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+struct rte_cryptodev_scheduler_ops ops = {
+	slave_attach,
+	slave_detach,
+	scheduler_start,
+	scheduler_stop,
+	scheduler_config_qp,
+	scheduler_create_private_ctx
+};
+
+struct rte_cryptodev_scheduler scheduler = {
+		.name = "roundrobin-scheduler",
+		.description = "scheduler which will round-robin bursts across "
+				"slave crypto devices",
+		.ops = &ops
+};
+
+struct rte_cryptodev_scheduler *roundrobin_scheduler = &scheduler;
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index aa4539a..f4cddff 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -66,6 +66,7 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ZUC_PMD		crypto_zuc
 /**< ZUC PMD device name */
+#define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -77,6 +78,9 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
+	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+
+	RTE_CRYPTODEV_TYPE_COUNT
 };
 
 extern const char **rte_cyptodev_names;
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f75f0e2..ee34688 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -70,7 +70,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT)           += -lrte_port
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PDUMP)          += -lrte_pdump
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)    += -lrte_distributor
-_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_METER)          += -lrte_meter
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED)          += -lrte_sched
@@ -98,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE)        += -lrte_cmdline
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
+_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lrte_pmd_xenvirt -lxenstore
@@ -145,6 +145,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -lrte_pmd_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER)  += -lrte_pmd_crypto_scheduler
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v5] crypto/scheduler: add driver for scheduler crypto pmd
  2017-01-17 10:57   ` [dpdk-dev] [PATCH v4] " Fan Zhang
@ 2017-01-17 13:19     ` Fan Zhang
  2017-01-17 14:09       ` Declan Doherty
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
  0 siblings, 2 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-17 13:19 UTC (permalink / raw)
  To: dev; +Cc: pablo.de.lara.guarch, Declan Doherty

This patch provides the initial implementation of the scheduler poll mode
driver using DPDK cryptodev framework.

Scheduler PMD is used to schedule and enqueue the crypto ops to the
hardware and/or software crypto devices attached to it (slaves). The
dequeue operation from the slave(s), and the possible dequeued crypto op
reordering, are then carried out by the scheduler.

In this initial version, the scheduler PMD supports only the
Round-robin mode, which distributes each enqueued burst of crypto ops
among its slaves in a round-robin manner. This mode can help fill the
throughput gap between the physical core and the existing cryptodevs
and thus increase the overall performance. Moreover, the scheduler PMD
provides APIs for users to create their own schedulers; a minimal
skeleton is sketched below.
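
A user-defined scheduler is a rte_cryptodev_scheduler structure loaded
with rte_cryptodev_scheduler_load_user_scheduler() while the device is
stopped. The callback fields below follow
rte_cryptodev_scheduler_operations.h; the my_* functions and the
scheduler_id value are hypothetical placeholders:

  static struct rte_cryptodev_scheduler_ops my_sched_ops = {
          .slave_attach = my_slave_attach,
          .slave_detach = my_slave_detach,
          .scheduler_start = my_scheduler_start,
          .scheduler_stop = my_scheduler_stop,
          .config_queue_pair = my_config_queue_pair,
          .create_private_ctx = my_create_private_ctx,
  };

  static struct rte_cryptodev_scheduler my_scheduler = {
          .name = "my-scheduler",
          .description = "example user-defined scheduler",
          .ops = &my_sched_ops,
  };

  rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
          &my_scheduler);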

Build instructions:
To build DPDK with the CRYPTO_SCHEDULER_PMD, the user is required to set
CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base

Notice:
- Scheduler PMD shares the same EAL command-line options as other
  cryptodevs. However, apart from socket_id, the rest of the cryptodev
  options are ignored. The scheduler PMD's max_nb_queue_pairs and
  max_nb_sessions options are set to the minimum of the attached
  slaves' values. For example, if a scheduler cryptodev has 2
  cryptodevs attached with max_nb_queue_pairs of 2 and 8 respectively,
  the scheduler cryptodev's max_nb_queue_pairs will be automatically
  updated to 2.

- In addition, an extra option "slave" is added. The user can attach one
  or more slave cryptodevs initially by passing their names with this
  option. Here is an example:

  ... --vdev "crypto_aesni_mb_pmd,name=aesni_mb_1" \
      --vdev "crypto_aesni_mb_pmd,name=aesni_mb_2" \
      --vdev "crypto_scheduler_pmd,slave=aesni_mb_1,slave=aesni_mb_2" ...

  Note that the slave cryptodevs to be attached must be declared before
  the scheduler PMD, otherwise the scheduler will fail to locate the
  slave(s) and report an error.

- The scheduler cryptodev cannot be started unless the scheduling mode
  is set and at least one slave is attached. Also, to configure the
  scheduler at run time, e.g. attach/detach slave(s), change the
  scheduling mode, or enable/disable crypto op ordering, one should stop
  the scheduler first, otherwise an error will be returned; see the
  sketch after this list.
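
- To make the run-time configuration flow concrete, here is a minimal
  C sketch under the above constraints (scheduler_id and slave_id are
  hypothetical device IDs, and error checking is omitted):

  /* the scheduler must be stopped while being (re)configured */
  rte_cryptodev_stop(scheduler_id);
  rte_cryptodev_scheduler_slave_attach(scheduler_id, slave_id);
  rte_crpytodev_scheduler_mode_set(scheduler_id,
          CDEV_SCHED_MODE_ROUNDROBIN);
  rte_cryptodev_scheduler_ordering_set(scheduler_id, 1);
  rte_cryptodev_start(scheduler_id);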

Changes in v5:
Fixed EOF whitespace warning.
Updated Copyright.

Changes in v4:
Fixed a few bugs.
Added slave EAL commandline option support.

Changes in v3:
Fixed config/common_base.

Changes in v2:
New approaches in API to suit future scheduling modes.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 config/common_base                                 |   6 +
 drivers/crypto/Makefile                            |   1 +
 drivers/crypto/scheduler/Makefile                  |  66 +++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 460 +++++++++++++++++++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 167 +++++++
 .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++
 .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 +
 drivers/crypto/scheduler/scheduler_pmd.c           | 360 +++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_ops.c       | 489 +++++++++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_private.h   | 115 +++++
 drivers/crypto/scheduler/scheduler_roundrobin.c    | 417 ++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |   4 +
 mk/rte.app.mk                                      |   3 +-
 13 files changed, 2170 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/scheduler/Makefile
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
 create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h
 create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c

diff --git a/config/common_base b/config/common_base
index 8e9dcfa..3d33a2d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -409,6 +409,12 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
 
 #
+# Compile PMD for Crypto Scheduler device
+#
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=n
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
+
+#
 # Compile PMD for ZUC device
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 745c614..cdd3c94 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -38,6 +38,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/scheduler/Makefile b/drivers/crypto/scheduler/Makefile
new file mode 100644
index 0000000..6a7ac6a
--- /dev/null
+++ b/drivers/crypto/scheduler/Makefile
@@ -0,0 +1,66 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_crypto_scheduler.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_crypto_scheduler_version.map
+
+#
+# Export include files
+#
+SYMLINK-y-include += rte_cryptodev_scheduler_operations.h
+SYMLINK-y-include += rte_cryptodev_scheduler.h
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += rte_cryptodev_scheduler.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_roundrobin.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_ring
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_reorder
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
new file mode 100644
index 0000000..e44bb47
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -0,0 +1,460 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_jhash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_cryptodev_scheduler.h>
+#include <rte_malloc.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Update the scheduler PMD's capabilities with an attaching device's
+ *  capabilities.
+ *  After each device is attached, the scheduler's capabilities must be
+ *  the common subset of all the attached slaves' capabilities.
+ **/
+static uint32_t
+sync_caps(struct rte_cryptodev_capabilities *caps,
+		uint32_t nb_caps,
+		const struct rte_cryptodev_capabilities *slave_caps)
+{
+	uint32_t sync_nb_caps = nb_caps, nb_slave_caps = 0;
+	uint32_t i;
+
+	while (slave_caps[nb_slave_caps].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
+		nb_slave_caps++;
+
+	if (nb_caps == 0) {
+		rte_memcpy(caps, slave_caps, sizeof(*caps) * nb_slave_caps);
+		return nb_slave_caps;
+	}
+
+	for (i = 0; i < sync_nb_caps; i++) {
+		struct rte_cryptodev_capabilities *cap = &caps[i];
+		uint32_t j;
+
+		for (j = 0; j < nb_slave_caps; j++) {
+			const struct rte_cryptodev_capabilities *s_cap =
+					&slave_caps[j];
+
+			if (s_cap->op != cap->op || s_cap->sym.xform_type !=
+					cap->sym.xform_type)
+				continue;
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_AUTH) {
+				if (s_cap->sym.auth.algo !=
+						cap->sym.auth.algo)
+					continue;
+
+				cap->sym.auth.digest_size.min =
+					s_cap->sym.auth.digest_size.min <
+					cap->sym.auth.digest_size.min ?
+					s_cap->sym.auth.digest_size.min :
+					cap->sym.auth.digest_size.min;
+				cap->sym.auth.digest_size.max =
+					s_cap->sym.auth.digest_size.max <
+					cap->sym.auth.digest_size.max ?
+					s_cap->sym.auth.digest_size.max :
+					cap->sym.auth.digest_size.max;
+
+			}
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				if (s_cap->sym.cipher.algo !=
+						cap->sym.cipher.algo)
+					continue;
+
+			/* a common cap is found, keep this cap */
+			break;
+		}
+
+		if (j < nb_slave_caps)
+			continue;
+
+		/* remove an uncommon cap from the array */
+		for (j = i; j < sync_nb_caps - 1; j++)
+			rte_memcpy(&caps[j], &caps[j+1], sizeof(*cap));
+
+		memset(&caps[sync_nb_caps - 1], 0, sizeof(*cap));
+		sync_nb_caps--;
+	}
+
+	return sync_nb_caps;
+}
+
+static int
+update_scheduler_capability(struct scheduler_ctx *sched_ctx)
+{
+	struct rte_cryptodev_capabilities tmp_caps[256] = {0};
+	uint32_t nb_caps = 0, i;
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		nb_caps = sync_caps(tmp_caps, nb_caps, dev_info.capabilities);
+		if (nb_caps == 0)
+			return -1;
+	}
+
+	sched_ctx->capabilities = rte_zmalloc_socket(NULL,
+			sizeof(struct rte_cryptodev_capabilities) *
+			(nb_caps + 1), 0, SOCKET_ID_ANY);
+	if (!sched_ctx->capabilities)
+		return -ENOMEM;
+
+	rte_memcpy(sched_ctx->capabilities, tmp_caps,
+			sizeof(struct rte_cryptodev_capabilities) * nb_caps);
+
+	return 0;
+}
+
+static void
+update_scheduler_feature_flag(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	dev->feature_flags = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		dev->feature_flags |= dev_info.feature_flags;
+	}
+}
+
+static void
+update_max_nb_qp(struct scheduler_ctx *sched_ctx)
+{
+	uint32_t i;
+	uint32_t max_nb_qp;
+
+	if (!sched_ctx->nb_slaves)
+		return;
+
+	max_nb_qp = sched_ctx->nb_slaves ? UINT32_MAX : 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+		max_nb_qp = dev_info.max_nb_queue_pairs < max_nb_qp ?
+				dev_info.max_nb_queue_pairs : max_nb_qp;
+	}
+
+	sched_ctx->max_nb_queue_pairs = max_nb_qp;
+}
+
+/** Attach a device to the scheduler. */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	struct scheduler_slave *slave;
+	struct rte_cryptodev_info dev_info;
+	uint32_t i;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+	if (sched_ctx->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves attached");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++)
+		if (sched_ctx->slaves[i].dev_id == slave_id) {
+			CS_LOG_ERR("Slave already added");
+			return -ENOTSUP;
+		}
+
+	slave = &sched_ctx->slaves[sched_ctx->nb_slaves];
+
+	rte_cryptodev_info_get(slave_id, &dev_info);
+
+	slave->dev_id = slave_id;
+	slave->dev_type = dev_info.dev_type;
+	sched_ctx->nb_slaves++;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		slave->dev_id = 0;
+		slave->dev_type = 0;
+		sched_ctx->nb_slaves--;
+
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	uint32_t i, slave_pos;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	for (slave_pos = 0; slave_pos < sched_ctx->nb_slaves; slave_pos++)
+		if (sched_ctx->slaves[slave_pos].dev_id == slave_id)
+			break;
+	if (slave_pos == sched_ctx->nb_slaves) {
+		CS_LOG_ERR("Cannot find slave");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->ops.slave_detach(dev, slave_id) < 0) {
+		CS_LOG_ERR("Failed to detach slave");
+		return -ENOTSUP;
+	}
+
+	for (i = slave_pos; i < sched_ctx->nb_slaves - 1; i++) {
+		memcpy(&sched_ctx->slaves[i], &sched_ctx->slaves[i+1],
+				sizeof(struct scheduler_slave));
+	}
+	memset(&sched_ctx->slaves[sched_ctx->nb_slaves - 1], 0,
+			sizeof(struct scheduler_slave));
+	sched_ctx->nb_slaves--;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	int ret;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (mode == sched_ctx->mode && mode != CDEV_SCHED_MODE_USERDEFINED)
+		return 0;
+
+	switch (mode) {
+	case CDEV_SCHED_MODE_ROUNDROBIN:
+		if (rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
+				roundrobin_scheduler) < 0) {
+			CS_LOG_ERR("Failed to load scheduler");
+			return -1;
+		}
+		break;
+	case CDEV_SCHED_MODE_MIGRATION:
+	case CDEV_SCHED_MODE_FALLBACK:
+	default:
+		CS_LOG_ERR("Not yet supported");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	ret = (*sched_ctx->ops.create_private_ctx)(dev);
+	if (ret < 0) {
+		CS_LOG_ERR("Unable to create scheduler private context");
+		return ret;
+	}
+
+	sched_ctx->mode = mode;
+
+	return 0;
+}
+
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return sched_ctx->mode;
+}
+
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	sched_ctx->reordering_enabled = enable_reorder;
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return (int)sched_ctx->reordering_enabled;
+}
+
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler) {
+
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	/* check device stopped */
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Device should be stopped before loading scheduler");
+		return -EBUSY;
+	}
+
+	strncpy(sched_ctx->name, scheduler->name,
+			RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
+	strncpy(sched_ctx->description, scheduler->description,
+			RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
+
+	/* load scheduler instance operations functions */
+	sched_ctx->ops.config_queue_pair = scheduler->ops->config_queue_pair;
+	sched_ctx->ops.create_private_ctx = scheduler->ops->create_private_ctx;
+	sched_ctx->ops.scheduler_start = scheduler->ops->scheduler_start;
+	sched_ctx->ops.scheduler_stop = scheduler->ops->scheduler_stop;
+	sched_ctx->ops.slave_attach = scheduler->ops->slave_attach;
+	sched_ctx->ops.slave_detach = scheduler->ops->slave_detach;
+
+	return 0;
+}
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
new file mode 100644
index 0000000..a3957ec
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
@@ -0,0 +1,167 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_H
+#define _RTE_CRYPTO_SCHEDULER_H
+
+#include <rte_cryptodev_scheduler_operations.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Crypto scheduler PMD operation modes
+ */
+enum rte_cryptodev_scheduler_mode {
+	CDEV_SCHED_MODE_NOT_SET = 0,
+	CDEV_SCHED_MODE_USERDEFINED,
+	CDEV_SCHED_MODE_ROUNDROBIN,
+	CDEV_SCHED_MODE_MIGRATION,
+	CDEV_SCHED_MODE_FALLBACK,
+	CDEV_SCHED_MODE_MULTICORE,
+
+	CDEV_SCHED_MODE_COUNT /* number of modes */
+};
+
+#define RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN	(64)
+#define RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN	(256)
+
+struct rte_cryptodev_scheduler;
+
+/**
+ * Load a user defined scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	scheduler	Pointer to the user defined scheduler
+ *
+ * @return
+ *	0 if loading successful, a negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler);
+
+/**
+ * Attach a pre-configured crypto device to the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	The crypto device ID to be attached
+ *
+ * @return
+ *	0 if attaching successful, a negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Detach an attached crypto device from the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	The crypto device ID to be detached
+ *
+ * @return
+ *	0 if detaching successful, a negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Set the scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	mode		The scheduling mode
+ *
+ * @return
+ *	0 if setting successful, a negative integer otherwise.
+ */
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode);
+
+/**
+ * Get the current scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	The current scheduling mode, or a negative value on error.
+ */
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id);
+
+/**
+ * Set the crypto ops reordering feature on/off
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	enable_reorder	Set the crypto op reordering feature on/off:
+ *				0: disable reordering
+ *				1: enable reordering
+ *
+ * @return
+ *	0 if setting successful, a negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder);
+
+/**
+ * Get the current crypto ops reordering feature
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	0 if reordering is disabled
+ *	1 if reordering is enabled
+ *	a negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_enqueue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_dequeue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+struct rte_cryptodev_scheduler {
+	const char *name;
+	const char *description;
+
+	struct rte_cryptodev_scheduler_ops *ops;
+};
+
+extern struct rte_cryptodev_scheduler *roundrobin_scheduler;
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_H */
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
new file mode 100644
index 0000000..93cf123
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
@@ -0,0 +1,71 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+#define _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+
+#include <rte_cryptodev.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int (*rte_cryptodev_scheduler_slave_attach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+typedef int (*rte_cryptodev_scheduler_slave_detach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+
+typedef int (*rte_cryptodev_scheduler_start_t)(struct rte_cryptodev *dev);
+typedef int (*rte_cryptodev_scheduler_stop_t)(struct rte_cryptodev *dev);
+
+typedef int (*rte_cryptodev_scheduler_config_queue_pair)(
+		struct rte_cryptodev *dev, uint16_t qp_id);
+
+typedef int (*rte_cryptodev_scheduler_create_private_ctx)(
+		struct rte_cryptodev *dev);
+
+struct rte_cryptodev_scheduler_ops {
+	rte_cryptodev_scheduler_slave_attach_t slave_attach;
+	rte_cryptodev_scheduler_slave_attach_t slave_detach;
+
+	rte_cryptodev_scheduler_start_t scheduler_start;
+	rte_cryptodev_scheduler_stop_t scheduler_stop;
+
+	rte_cryptodev_scheduler_config_queue_pair config_queue_pair;
+
+	rte_cryptodev_scheduler_create_private_ctx create_private_ctx;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_OPERATIONS_H */
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
new file mode 100644
index 0000000..09e589a
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -0,0 +1,12 @@
+DPDK_17.02 {
+	global:
+
+	rte_cryptodev_scheduler_load_user_scheduler;
+	rte_cryptodev_scheduler_slave_attach;
+	rte_cryptodev_scheduler_slave_detach;
+	rte_crpytodev_scheduler_mode_set;
+	rte_crpytodev_scheduler_mode_get;
+	rte_cryptodev_scheduler_ordering_set;
+	rte_cryptodev_scheduler_ordering_get;
+
+} DPDK_17.02;
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
new file mode 100644
index 0000000..5108572
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -0,0 +1,360 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_vdev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+#include "scheduler_pmd_private.h"
+
+struct scheduler_init_params {
+	struct rte_crypto_vdev_init_params def_p;
+	uint32_t nb_slaves;
+	uint8_t slaves[MAX_SLAVES_NUM];
+};
+
+#define RTE_CRYPTODEV_VDEV_NAME			("name")
+#define RTE_CRYPTODEV_VDEV_SLAVE		("slave")
+#define RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG	("max_nb_queue_pairs")
+#define RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG	("max_nb_sessions")
+#define RTE_CRYPTODEV_VDEV_SOCKET_ID		("socket_id")
+
+const char *scheduler_valid_params[] = {
+	RTE_CRYPTODEV_VDEV_NAME,
+	RTE_CRYPTODEV_VDEV_SLAVE,
+	RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
+	RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
+	RTE_CRYPTODEV_VDEV_SOCKET_ID
+};
+
+static uint16_t
+scheduler_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_enqueue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static uint16_t
+scheduler_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_dequeue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static int
+attach_init_slaves(uint8_t scheduler_id,
+		const uint8_t *slaves, const uint8_t nb_slaves)
+{
+	uint8_t i;
+
+	for (i = 0; i < nb_slaves; i++) {
+		struct rte_cryptodev *dev =
+				rte_cryptodev_pmd_get_dev(slaves[i]);
+		int status = rte_cryptodev_scheduler_slave_attach(
+				scheduler_id, slaves[i]);
+
+		if (status < 0 || !dev) {
+			CS_LOG_ERR("Failed to attach slave cryptodev "
+					"%u.\n", slaves[i]);
+			return status;
+		}
+
+		RTE_LOG(INFO, PMD, "  Attached slave cryptodev %s\n",
+				dev->data->name);
+	}
+
+	return 0;
+}
+
+static int
+cryptodev_scheduler_create(const char *name,
+	struct scheduler_init_params *init_params)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (init_params->def_p.name[0] == '\0') {
+		int ret = rte_cryptodev_pmd_create_dev_name(
+				init_params->def_p.name,
+				RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+
+		if (ret < 0) {
+			CS_LOG_ERR("failed to create unique name");
+			return ret;
+		}
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(init_params->def_p.name,
+			sizeof(struct scheduler_ctx),
+			init_params->def_p.socket_id);
+	if (dev == NULL) {
+		CS_LOG_ERR("driver %s: failed to create cryptodev vdev",
+			name);
+		return -EFAULT;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+	dev->dev_ops = rte_crypto_scheduler_pmd_ops;
+
+	dev->enqueue_burst = scheduler_enqueue_burst;
+	dev->dequeue_burst = scheduler_dequeue_burst;
+
+	sched_ctx = dev->data->dev_private;
+	sched_ctx->max_nb_queue_pairs =
+			init_params->def_p.max_nb_queue_pairs;
+
+	return attach_init_slaves(dev->data->dev_id, init_params->slaves,
+			init_params->nb_slaves);
+}
+
+static int
+cryptodev_scheduler_remove(const char *name)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	dev = rte_cryptodev_pmd_get_named_dev(name);
+	if (dev == NULL)
+		return -EINVAL;
+
+	sched_ctx = dev->data->dev_private;
+
+	if (sched_ctx->nb_slaves) {
+		uint32_t i;
+
+		for (i = 0; i < sched_ctx->nb_slaves; i++)
+			rte_cryptodev_scheduler_slave_detach(dev->data->dev_id,
+					sched_ctx->slaves[i].dev_id);
+	}
+
+	RTE_LOG(INFO, PMD, "Closing Crypto Scheduler device %s on numa "
+		"socket %u\n", name, rte_socket_id());
+
+	return 0;
+}
+
+static uint8_t
+number_of_sockets(void)
+{
+	int sockets = 0;
+	int i;
+	const struct rte_memseg *ms = rte_eal_get_physmem_layout();
+
+	for (i = 0; ((i < RTE_MAX_MEMSEG) && (ms[i].addr != NULL)); i++) {
+		if (sockets < ms[i].socket_id)
+			sockets = ms[i].socket_id;
+	}
+
+	/* Number of sockets = maximum socket_id + 1 */
+	return ++sockets;
+}
+
+/** Parse integer from integer argument */
+static int
+parse_integer_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	int *i = (int *) extra_args;
+
+	*i = atoi(value);
+	if (*i < 0) {
+		CS_LOG_ERR("Argument has to be non-negative.\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse name */
+static int
+parse_name_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	struct rte_crypto_vdev_init_params *params = extra_args;
+
+	if (strlen(value) >= RTE_CRYPTODEV_NAME_MAX_LEN - 1) {
+		CS_LOG_ERR("Invalid name %s, should be less than "
+				"%u bytes.\n", value,
+				RTE_CRYPTODEV_NAME_MAX_LEN - 1);
+		return -1;
+	}
+
+	strncpy(params->name, value, RTE_CRYPTODEV_NAME_MAX_LEN);
+
+	return 0;
+}
+
+/** Parse slave */
+static int
+parse_slave_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	struct scheduler_init_params *param = extra_args;
+	struct rte_cryptodev *dev =
+			rte_cryptodev_pmd_get_named_dev(value);
+
+	if (!dev) {
+		RTE_LOG(ERR, PMD, "Invalid slave name %s.\n", value);
+		return -1;
+	}
+
+	if (param->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves.\n");
+		return -1;
+	}
+
+	param->slaves[param->nb_slaves] = dev->data->dev_id;
+	param->nb_slaves++;
+
+	return 0;
+}
+
+static int
+scheduler_parse_init_params(struct scheduler_init_params *params,
+		const char *input_args)
+{
+	struct rte_kvargs *kvlist = NULL;
+	int ret = 0;
+
+	if (params == NULL)
+		return -EINVAL;
+
+	if (input_args) {
+		kvlist = rte_kvargs_parse(input_args,
+				scheduler_valid_params);
+		if (kvlist == NULL)
+			return -1;
+
+		ret = rte_kvargs_process(kvlist,
+				RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
+				&parse_integer_arg,
+				&params->def_p.max_nb_queue_pairs);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist,
+				RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
+				&parse_integer_arg,
+				&params->def_p.max_nb_sessions);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SOCKET_ID,
+				&parse_integer_arg,
+				&params->def_p.socket_id);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_NAME,
+				&parse_name_arg,
+				&params->def_p);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SLAVE,
+				&parse_slave_arg, params);
+		if (ret < 0)
+			goto free_kvlist;
+
+		if (params->def_p.socket_id >= number_of_sockets()) {
+			CDEV_LOG_ERR("Invalid socket id specified to create "
+				"the virtual crypto device on");
+			ret = -EINVAL;
+			goto free_kvlist;
+		}
+	}
+
+free_kvlist:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static int
+cryptodev_scheduler_probe(const char *name, const char *input_args)
+{
+	struct scheduler_init_params init_params = {
+		.def_p = {
+			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+			rte_socket_id(),
+			""
+		},
+		.nb_slaves = 0,
+		.slaves = {0}
+	};
+
+	scheduler_parse_init_params(&init_params, input_args);
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.def_p.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.def_p.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.def_p.max_nb_sessions);
+	if (init_params.def_p.name[0] != '\0')
+		RTE_LOG(INFO, PMD, "  User defined name = %s\n",
+			init_params.def_p.name);
+
+	return cryptodev_scheduler_create(name, &init_params);
+}
+
+static struct rte_vdev_driver cryptodev_scheduler_pmd_drv = {
+	.probe = cryptodev_scheduler_probe,
+	.remove = cryptodev_scheduler_remove
+};
+
+RTE_PMD_REGISTER_VDEV(CRYPTODEV_NAME_SCHEDULER_PMD,
+	cryptodev_scheduler_pmd_drv);
+RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
+	"max_nb_queue_pairs=<int> "
+	"max_nb_sessions=<int> "
+	"socket_id=<int>");
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
new file mode 100644
index 0000000..af6d8fe
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -0,0 +1,489 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_config.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_reorder.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Configure device */
+static int
+scheduler_pmd_config(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_configure)(slave_dev);
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
+static int
+update_reorder_buff(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (sched_ctx->reordering_enabled) {
+		char reorder_buff_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+		uint32_t buff_size = sched_ctx->nb_slaves * PER_SLAVE_BUFF_SIZE;
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (!buff_size)
+			return 0;
+
+		if (snprintf(reorder_buff_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"%s_rb_%u_%u", RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
+			dev->data->dev_id, qp_id) < 0) {
+			CS_LOG_ERR("failed to create unique reorder buffer "
+					"name");
+			return -ENOMEM;
+		}
+
+		qp_ctx->reorder_buf = rte_reorder_create(reorder_buff_name,
+				rte_socket_id(), buff_size);
+		if (!qp_ctx->reorder_buf) {
+			CS_LOG_ERR("failed to create reorder buffer");
+			return -ENOMEM;
+		}
+	} else {
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+	}
+
+	return 0;
+}
+
+/** Start device */
+static int
+scheduler_pmd_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	if (dev->data->dev_started)
+		return 0;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = update_reorder_buff(dev, i);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to update reorder buffer");
+			return ret;
+		}
+	}
+
+	if (sched_ctx->mode == CDEV_SCHED_MODE_NOT_SET) {
+		CS_LOG_ERR("Scheduler mode is not set");
+		return -1;
+	}
+
+	if (!sched_ctx->nb_slaves) {
+		CS_LOG_ERR("No slave in the scheduler");
+		return -1;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.slave_attach, -ENOTSUP);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if ((*sched_ctx->ops.slave_attach)(dev, slave_dev_id) < 0) {
+			CS_LOG_ERR("Failed to attach slave");
+			return -ENOTSUP;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.scheduler_start, -ENOTSUP);
+
+	if ((*sched_ctx->ops.scheduler_start)(dev) < 0) {
+		CS_LOG_ERR("Scheduler start failed");
+		return -1;
+	}
+
+	/* start all slaves */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_start)(slave_dev);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to start slave dev %u",
+					slave_dev_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/** Stop device */
+static void
+scheduler_pmd_stop(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	if (!dev->data->dev_started)
+		return;
+
+	/* stop all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->dev_stop)(slave_dev);
+	}
+
+	if (*sched_ctx->ops.scheduler_stop)
+		(*sched_ctx->ops.scheduler_stop)(dev);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if (*sched_ctx->ops.slave_detach)
+			(*sched_ctx->ops.slave_detach)(dev, slave_dev_id);
+	}
+}
+
+/** Close device */
+static int
+scheduler_pmd_close(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	/* the dev should be stopped before being closed */
+	if (dev->data->dev_started)
+		return -EBUSY;
+
+	/* close all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_close)(slave_dev);
+		if (ret < 0)
+			return ret;
+	}
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (qp_ctx->private_qp_ctx) {
+			rte_free(qp_ctx->private_qp_ctx);
+			qp_ctx->private_qp_ctx = NULL;
+		}
+	}
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	return 0;
+}
+
+/** Get device statistics */
+static void
+scheduler_pmd_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+		struct rte_cryptodev_stats slave_stats = {0};
+
+		(*slave_dev->dev_ops->stats_get)(slave_dev, &slave_stats);
+
+		stats->enqueued_count += slave_stats.enqueued_count;
+		stats->dequeued_count += slave_stats.dequeued_count;
+
+		stats->enqueue_err_count += slave_stats.enqueue_err_count;
+		stats->dequeue_err_count += slave_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+scheduler_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->stats_reset)(slave_dev);
+	}
+}
+
+/** Get device info */
+static void
+scheduler_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t max_nb_sessions = sched_ctx->nb_slaves ? UINT32_MAX : 0;
+	uint32_t i;
+
+	if (!dev_info)
+		return;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev_info slave_info;
+
+		rte_cryptodev_info_get(slave_dev_id, &slave_info);
+		max_nb_sessions = slave_info.sym.max_nb_sessions <
+				max_nb_sessions ?
+				slave_info.sym.max_nb_sessions :
+				max_nb_sessions;
+	}
+
+	dev_info->dev_type = dev->dev_type;
+	dev_info->feature_flags = dev->feature_flags;
+	dev_info->capabilities = sched_ctx->capabilities;
+	dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
+	dev_info->sym.max_nb_sessions = max_nb_sessions;
+}
+
+/** Release queue pair */
+static int
+scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (!qp_ctx)
+		return 0;
+
+	if (qp_ctx->reorder_buf)
+		rte_reorder_free(qp_ctx->reorder_buf);
+	if (qp_ctx->private_qp_ctx)
+		rte_free(qp_ctx->private_qp_ctx);
+
+	rte_free(qp_ctx);
+	dev->data->queue_pairs[qp_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	__rte_unused const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx;
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	if (snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"CRYPTO_SCHE PMD %u QP %u",
+			dev->data->dev_id, qp_id) < 0) {
+		CS_LOG_ERR("Failed to create unique queue pair name");
+		return -EFAULT;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		scheduler_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp_ctx = rte_zmalloc_socket(name, sizeof(*qp_ctx), RTE_CACHE_LINE_SIZE,
+			socket_id);
+	if (qp_ctx == NULL)
+		return -ENOMEM;
+
+	dev->data->queue_pairs[qp_id] = qp_ctx;
+
+	if (*sched_ctx->ops.config_queue_pair) {
+		if ((*sched_ctx->ops.config_queue_pair)(dev, qp_id) < 0) {
+			CS_LOG_ERR("Unable to configure queue pair");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/** Start queue pair */
+static int
+scheduler_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+scheduler_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+scheduler_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+static uint32_t
+scheduler_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct scheduler_session);
+}
+
+static int
+config_slave_sess(struct scheduler_ctx *sched_ctx,
+		struct rte_crypto_sym_xform *xform,
+		struct scheduler_session *sess,
+		uint32_t create)
+{
+	uint32_t i;
+
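+	/*
+	 * Dual-purpose walk over all slaves: when create is set, a session
+	 * is created on each slave device; when create is cleared, the
+	 * existing slave sessions are cleared instead.
+	 */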
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct scheduler_slave *slave = &sched_ctx->slaves[i];
+		struct rte_cryptodev *dev = &rte_cryptodev_globals->
+				devs[slave->dev_id];
+
+		if (sess->sessions[i]) {
+			if (create)
+				continue;
+			/* !create */
+			(*dev->dev_ops->session_clear)(dev,
+					(void *)sess->sessions[i]);
+			sess->sessions[i] = NULL;
+		} else {
+			if (!create)
+				continue;
+			/* create */
+			sess->sessions[i] =
+					rte_cryptodev_sym_session_create(
+							slave->dev_id, xform);
+			if (!sess->sessions[i]) {
+				config_slave_sess(sched_ctx, NULL, sess, 0);
+				return -1;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+scheduler_pmd_session_clear(struct rte_cryptodev *dev,
+	void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	config_slave_sess(sched_ctx, NULL, sess, 0);
+
+	memset(sess, 0, sizeof(struct scheduler_session));
+}
+
+static void *
+scheduler_pmd_session_configure(struct rte_cryptodev *dev,
+	struct rte_crypto_sym_xform *xform, void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	if (config_slave_sess(sched_ctx, xform, sess, 1) < 0) {
+		CS_LOG_ERR("unable to configure sym session");
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_ops scheduler_pmd_ops = {
+		.dev_configure		= scheduler_pmd_config,
+		.dev_start		= scheduler_pmd_start,
+		.dev_stop		= scheduler_pmd_stop,
+		.dev_close		= scheduler_pmd_close,
+
+		.stats_get		= scheduler_pmd_stats_get,
+		.stats_reset		= scheduler_pmd_stats_reset,
+
+		.dev_infos_get		= scheduler_pmd_info_get,
+
+		.queue_pair_setup	= scheduler_pmd_qp_setup,
+		.queue_pair_release	= scheduler_pmd_qp_release,
+		.queue_pair_start	= scheduler_pmd_qp_start,
+		.queue_pair_stop	= scheduler_pmd_qp_stop,
+		.queue_pair_count	= scheduler_pmd_qp_count,
+
+		.session_get_size	= scheduler_pmd_session_get_size,
+		.session_configure	= scheduler_pmd_session_configure,
+		.session_clear		= scheduler_pmd_session_clear,
+};
+
+struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
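
For readers following along: the session_configure hook above means a single
application-side session create on the scheduler device fans out to one
session per attached slave (via config_slave_sess()). A minimal usage
sketch, assuming a hypothetical scheduler_dev_id and a prepared xform
chain, against the two-argument session API this patch is written for:

	#include <rte_cryptodev.h>

	/* Illustrative helper, not part of this patch: one create call on
	 * the scheduler; the PMD internally creates a session on every
	 * attached slave inside the returned session's private data. */
	static struct rte_cryptodev_sym_session *
	create_sched_session(uint8_t scheduler_dev_id,
			struct rte_crypto_sym_xform *xform)
	{
		return rte_cryptodev_sym_session_create(scheduler_dev_id,
				xform);
	}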
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
new file mode 100644
index 0000000..ac4690e
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -0,0 +1,115 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _SCHEDULER_PMD_PRIVATE_H
+#define _SCHEDULER_PMD_PRIVATE_H
+
+#include <rte_hash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+/** Maximum number of bonded slave devices per scheduler device */
+#ifndef MAX_SLAVES_NUM
+#define MAX_SLAVES_NUM				(8)
+#endif
+
+#define PER_SLAVE_BUFF_SIZE			(256)
+
+#define CS_LOG_ERR(fmt, args...)					\
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",		\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTO_SCHEDULER_DEBUG
+#define CS_LOG_INFO(fmt, args...)					\
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#define CS_LOG_DBG(fmt, args...)					\
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+#else
+#define CS_LOG_INFO(fmt, args...)
+#define CS_LOG_DBG(fmt, args...)
+#endif
+
+struct scheduler_slave {
+	uint8_t dev_id;
+	uint16_t qp_id;
+	uint32_t nb_inflight_cops;
+
+	enum rte_cryptodev_type dev_type;
+};
+
+struct scheduler_ctx {
+	void *private_ctx;
+	/**< private scheduler context pointer */
+
+	struct rte_cryptodev_capabilities *capabilities;
+	uint32_t nb_capabilities;
+
+	uint32_t max_nb_queue_pairs;
+
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	uint32_t nb_slaves;
+
+	enum rte_cryptodev_scheduler_mode mode;
+
+	struct rte_cryptodev_scheduler_ops ops;
+
+	uint8_t reordering_enabled;
+
+	char name[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN];
+} __rte_cache_aligned;
+
+struct scheduler_qp_ctx {
+	void *private_qp_ctx;
+
+	rte_cryptodev_scheduler_burst_enqueue_t schedule_enqueue;
+	rte_cryptodev_scheduler_burst_dequeue_t schedule_dequeue;
+
+	struct rte_reorder_buffer *reorder_buf;
+	uint32_t seqn;
+} __rte_cache_aligned;
+
+struct scheduler_session {
+	struct rte_cryptodev_sym_session *sessions[MAX_SLAVES_NUM];
+};
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
+
+#endif /* _SCHEDULER_PMD_PRIVATE_H */
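
The schedule_enqueue/schedule_dequeue pointers in scheduler_qp_ctx above are
the entire per-queue-pair dispatch mechanism: a scheduling mode installs its
own burst hooks there when the device starts. A minimal sketch of such a
hook, with a hypothetical single-slave private context (example_qp_ctx and
its fields are illustrative, not part of this patch):

	#include <rte_cryptodev.h>
	#include "scheduler_pmd_private.h"

	struct example_qp_ctx {
		uint8_t slave_dev_id;
		uint16_t slave_qp_id;
	};

	static uint16_t
	example_schedule_enqueue(void *qp_ctx, struct rte_crypto_op **ops,
			uint16_t nb_ops)
	{
		struct scheduler_qp_ctx *gen_qp_ctx = qp_ctx;
		struct example_qp_ctx *ctx = gen_qp_ctx->private_qp_ctx;

		/* a real mode must also swap each op's scheduler session
		 * for the matching slave session, as the round-robin mode
		 * below does */
		return rte_cryptodev_enqueue_burst(ctx->slave_dev_id,
				ctx->slave_qp_id, ops, nb_ops);
	}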
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
new file mode 100644
index 0000000..c5ff6f5
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -0,0 +1,417 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_scheduler_operations.h>
+
+#include "scheduler_pmd_private.h"
+
+struct roundrobin_scheduler_ctx {
+};
+
+struct rr_scheduler_qp_ctx {
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	uint32_t nb_slaves;
+
+	uint32_t last_enq_slave_idx;
+	uint32_t last_deq_slave_idx;
+};
+
+static uint16_t
+schedule_enqueue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++)
+		rte_prefetch0(ops[i]->sym->session);
+
+	for (i = 0; i < nb_ops - 8; i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->session);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_enq_slave_idx = 0;
+
+	return processed_ops;
+}
+
+static uint16_t
+schedule_enqueue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			gen_qp_ctx->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++) {
+		rte_prefetch0(ops[i]->sym->session);
+		rte_prefetch0(ops[i]->sym->m_src);
+	}
+
+	for (i = 0; i < nb_ops - 8; i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 1]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 2]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+		ops[i + 3]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 4]->sym->m_src);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->m_src);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->m_src);
+		rte_prefetch0(ops[i + 7]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_enq_slave_idx = 0;
+
+	return processed_ops;
+}
+
+
+static uint16_t
+schedule_dequeue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	struct scheduler_slave *slave;
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t nb_deq_ops;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	last_slave_idx += 1;
+	if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+		last_slave_idx = 0;
+
+	rr_qp_ctx->last_deq_slave_idx = last_slave_idx;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	return nb_deq_ops;
+}
+
+static uint16_t
+schedule_dequeue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = (struct scheduler_qp_ctx *)qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx = (gen_qp_ctx->private_qp_ctx);
+	struct scheduler_slave *slave;
+	struct rte_reorder_buffer *reorder_buff = gen_qp_ctx->reorder_buf;
+	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	uint16_t nb_deq_ops, nb_drained_mbufs;
+	const uint16_t nb_op_ops = nb_ops;
+	struct rte_crypto_op *op_ops[nb_op_ops];
+	struct rte_mbuf *reorder_mbufs[nb_op_ops];
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t i;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, op_ops, nb_ops);
+
+	rr_qp_ctx->last_deq_slave_idx += 1;
+	if (unlikely(rr_qp_ctx->last_deq_slave_idx >= rr_qp_ctx->nb_slaves))
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	for (i = 0; i < nb_deq_ops && i < 4; i++)
+		rte_prefetch0(op_ops[i]->sym->m_src);
+
+	for (i = 0; i < nb_deq_ops - 8; i += 4) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		mbuf1 = op_ops[i + 1]->sym->m_src;
+		mbuf2 = op_ops[i + 2]->sym->m_src;
+		mbuf3 = op_ops[i + 3]->sym->m_src;
+
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+		rte_memcpy(mbuf1->buf_addr, &op_ops[i+1], sizeof(op_ops[i+1]));
+		rte_memcpy(mbuf2->buf_addr, &op_ops[i+2], sizeof(op_ops[i+2]));
+		rte_memcpy(mbuf3->buf_addr, &op_ops[i+3], sizeof(op_ops[i+3]));
+
+		rte_reorder_insert(reorder_buff, mbuf0);
+		rte_reorder_insert(reorder_buff, mbuf1);
+		rte_reorder_insert(reorder_buff, mbuf2);
+		rte_reorder_insert(reorder_buff, mbuf3);
+
+		rte_prefetch0(op_ops[i + 4]->sym->m_src);
+		rte_prefetch0(op_ops[i + 5]->sym->m_src);
+		rte_prefetch0(op_ops[i + 6]->sym->m_src);
+		rte_prefetch0(op_ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_deq_ops; i++) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
+		rte_reorder_insert(reorder_buff, mbuf0);
+	}
+
+	nb_drained_mbufs = rte_reorder_drain(reorder_buff, reorder_mbufs,
+			nb_ops);
+	for (i = 0; i < nb_drained_mbufs && i < 4; i++)
+		rte_prefetch0(reorder_mbufs[i]);
+
+	for (i = 0; i < nb_drained_mbufs - 8; i += 4) {
+		ops[i] = *(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr;
+		ops[i + 1] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 1]->buf_addr;
+		ops[i + 2] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 2]->buf_addr;
+		ops[i + 3] = *(struct rte_crypto_op **)
+			reorder_mbufs[i + 3]->buf_addr;
+
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 1]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 2]->buf_addr = NULL;
+		*(struct rte_crypto_op **)reorder_mbufs[i + 3]->buf_addr = NULL;
+
+		rte_prefetch0(reorder_mbufs[i + 4]);
+		rte_prefetch0(reorder_mbufs[i + 5]);
+		rte_prefetch0(reorder_mbufs[i + 6]);
+		rte_prefetch0(reorder_mbufs[i + 7]);
+	}
+
+	for (; i < nb_drained_mbufs; i++) {
+		ops[i] = *(struct rte_crypto_op **)
+			reorder_mbufs[i]->buf_addr;
+		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
+	}
+
+	return nb_drained_mbufs;
+}
+
+static int
+slave_attach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+slave_detach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+scheduler_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+		struct rr_scheduler_qp_ctx *rr_qp_ctx =
+				qp_ctx->private_qp_ctx;
+		uint32_t j;
+		uint16_t qp_id = rr_qp_ctx->slaves[0].qp_id;
+
+		memset(rr_qp_ctx->slaves, 0, MAX_SLAVES_NUM *
+				sizeof(struct scheduler_slave));
+		for (j = 0; j < sched_ctx->nb_slaves; j++) {
+			rr_qp_ctx->slaves[j].dev_id =
+					sched_ctx->slaves[j].dev_id;
+			rr_qp_ctx->slaves[j].qp_id = qp_id;
+		}
+
+		rr_qp_ctx->nb_slaves = sched_ctx->nb_slaves;
+
+		rr_qp_ctx->last_enq_slave_idx = 0;
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+		if (sched_ctx->reordering_enabled) {
+			qp_ctx->schedule_enqueue = &schedule_enqueue_ordering;
+			qp_ctx->schedule_dequeue = &schedule_dequeue_ordering;
+		} else {
+			qp_ctx->schedule_enqueue = &schedule_enqueue;
+			qp_ctx->schedule_dequeue = &schedule_dequeue;
+		}
+	}
+
+	return 0;
+}
+
+static int
+scheduler_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+static int
+scheduler_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+	struct rr_scheduler_qp_ctx *rr_qp_ctx;
+
+	rr_qp_ctx = rte_zmalloc_socket(NULL, sizeof(*rr_qp_ctx), 0,
+			rte_socket_id());
+	if (!rr_qp_ctx) {
+		CS_LOG_ERR("failed allocate memory for private queue pair");
+		return -ENOMEM;
+	}
+
+	qp_ctx->private_qp_ctx = (void *)rr_qp_ctx;
+
+	return 0;
+}
+
+static int
+scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+struct rte_cryptodev_scheduler_ops ops = {
+	slave_attach,
+	slave_detach,
+	scheduler_start,
+	scheduler_stop,
+	scheduler_config_qp,
+	scheduler_create_private_ctx
+};
+
+struct rte_cryptodev_scheduler scheduler = {
+		.name = "roundrobin-scheduler",
+		.description = "scheduler which will round robin burst across "
+				"slave crypto devices",
+		.ops = &ops
+};
+
+
+struct rte_cryptodev_scheduler *roundrobin_scheduler = &scheduler;
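
The ordering variants above lean entirely on librte_reorder: the enqueue
side stamps each op's source mbuf with a running sequence number, and the
dequeue side reinserts the mbufs so the drain call hands them back in
order. A standalone sketch of that library flow, assuming an arbitrary
buffer name and size:

	#include <rte_mbuf.h>
	#include <rte_reorder.h>

	/* the buffer is created once per queue pair, e.g.:
	 * rte_reorder_create("example_reorder", rte_socket_id(), 512) */
	static void
	reorder_example(struct rte_reorder_buffer *b, struct rte_mbuf *m,
			uint32_t *seqn)
	{
		struct rte_mbuf *out[32];
		unsigned int n;

		/* enqueue path: stamp a running sequence number, insert */
		m->seqn = (*seqn)++;
		rte_reorder_insert(b, m);

		/* dequeue path: drain returns mbufs in sequence order,
		 * holding back any whose predecessors are still in flight */
		n = rte_reorder_drain(b, out, 32);
		(void)n;
	}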
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index f4e66e6..379b8e5 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -66,6 +66,7 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ZUC_PMD		crypto_zuc
 /**< ZUC PMD device name */
+#define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -77,6 +78,9 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
+	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
+
+	RTE_CRYPTODEV_TYPE_COUNT
 };
 
 extern const char **rte_cyptodev_names;
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f75f0e2..ee34688 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -70,7 +70,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT)           += -lrte_port
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PDUMP)          += -lrte_pdump
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)    += -lrte_distributor
-_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_METER)          += -lrte_meter
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED)          += -lrte_sched
@@ -98,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE)        += -lrte_cmdline
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
+_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lrte_pmd_xenvirt -lxenstore
@@ -145,6 +145,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -lrte_pmd_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER)  += -lrte_pmd_crypto_scheduler
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v5] crypto/scheduler: add driver for scheduler crypto pmd
  2017-01-17 13:19     ` [dpdk-dev] [PATCH v5] crypto/scheduler: " Fan Zhang
@ 2017-01-17 14:09       ` Declan Doherty
  2017-01-17 20:21         ` Thomas Monjalon
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
  1 sibling, 1 reply; 42+ messages in thread
From: Declan Doherty @ 2017-01-17 14:09 UTC (permalink / raw)
  To: Fan Zhang, dev; +Cc: pablo.de.lara.guarch

On 17/01/17 13:19, Fan Zhang wrote:
> This patch provides the initial implementation of the scheduler poll mode
> driver using DPDK cryptodev framework.
>
> Scheduler PMD is used to schedule and enqueue the crypto ops to the
> hardware and/or software crypto devices attached to it (slaves). The
> dequeue operation from the slave(s), and the possible dequeued crypto op
> reordering, are then carried out by the scheduler.
>
> As the initial version, the scheduler PMD currently supports only the
> Round-robin mode, which distributes the enqueued burst of crypto ops
> among its slaves in a round-robin manner. This mode may help to fill
> the throughput gap between the physical core and the existing cryptodevs
> to increase the overall performance. Moreover, the scheduler PMD
> provides APIs for users to create their own schedulers.
>
> Build instructions:
> To build DPDK with CRYPTO_SCHEDULER_PMD the user is required to set
> CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base
>
> Notice:
> - Scheduler PMD shares the same EAL commandline options as other
>   cryptodevs. However, apart from socket_id, the rest of the cryptodev
>   options are ignored. The scheduler PMD's max_nb_queue_pairs and
>   max_nb_sessions options are set to the minimum of the attached
>   slaves' values. For example, if a scheduler cryptodev has 2 slave
>   cryptodevs attached with max_nb_queue_pairs of 2 and 8, respectively,
>   its max_nb_queue_pairs will be automatically updated to 2.
>
> - In addition, an extra option "slave" is added. The user can attach one
>   or more slave cryptodevs initially by passing their names with this
>   option. Here is an example:
>
>   ... --vdev "crypto_aesni_mb_pmd,name=aesni_mb_1" --vdev "crypto_aesni_
>   mb_pmd,name=aesni_mb_2" --vdev "crypto_scheduler_pmd,slave=aesni_mb_1,
>   slave=aesni_mb_2" ...
>
>   Remember that the software cryptodevs to be attached shall be declared
>   before the scheduler PMD, otherwise the scheduler will fail to locate
>   the slave(s) and report an error.
>
> - The scheduler cryptodev cannot be started unless the scheduling mode
>   is set and at least one slave is attached. Also, to reconfigure the
>   scheduler at run time, such as attaching/detaching slave(s), changing
>   the scheduling mode, or enabling/disabling crypto op reordering, one
>   should stop the scheduler first, otherwise an error will be returned
>   (see the sketch just below).
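
That stop/reconfigure/start rule maps onto the API added by this patch as
in the following minimal sketch (scheduler_id and slave_id are assumed,
already-created cryptodev IDs; rte_crpytodev_scheduler_mode_set is the
spelling this patch exports):

	/* the scheduler must be stopped before reconfiguration */
	rte_cryptodev_stop(scheduler_id);

	rte_cryptodev_scheduler_slave_attach(scheduler_id, slave_id);
	rte_crpytodev_scheduler_mode_set(scheduler_id,
			CDEV_SCHED_MODE_ROUNDROBIN);
	rte_cryptodev_scheduler_ordering_set(scheduler_id, 1);

	rte_cryptodev_start(scheduler_id);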
>
> Changes in v5:
> Fixed EOF whitespace warning.
> Updated Copyright.
>
> Changes in v4:
> Fixed a few bugs.
> Added slave EAL commandline option support.
>
> Changes in v3:
> Fixed config/common_base.
>
> Changes in v2:
> New approaches in API to suit future scheduling modes.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> ---
>  config/common_base                                 |   6 +
>  drivers/crypto/Makefile                            |   1 +
>  drivers/crypto/scheduler/Makefile                  |  66 +++
>  drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 460 +++++++++++++++++++
>  drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 167 +++++++
>  .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++
>  .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 +
>  drivers/crypto/scheduler/scheduler_pmd.c           | 360 +++++++++++++++
>  drivers/crypto/scheduler/scheduler_pmd_ops.c       | 489 +++++++++++++++++++++
>  drivers/crypto/scheduler/scheduler_pmd_private.h   | 115 +++++
>  drivers/crypto/scheduler/scheduler_roundrobin.c    | 417 ++++++++++++++++++
>  lib/librte_cryptodev/rte_cryptodev.h               |   4 +
>  mk/rte.app.mk                                      |   3 +-
>  13 files changed, 2170 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/crypto/scheduler/Makefile
>  create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c
>  create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
>  create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
>  create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
>  create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c
>  create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c
>  create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h
>  create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c
>
> diff --git a/config/common_base b/config/common_base
> index 8e9dcfa..3d33a2d 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -409,6 +409,12 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
>  CONFIG_RTE_LIBRTE_PMD_KASUMI_DEBUG=n
>
>  #
> +# Compile PMD for Crypto Scheduler device
> +#
> +CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=n
> +CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
> +
> +#
>  # Compile PMD for ZUC device
>  #
>  CONFIG_RTE_LIBRTE_PMD_ZUC=n
> diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
> index 745c614..cdd3c94 100644
> --- a/drivers/crypto/Makefile
> +++ b/drivers/crypto/Makefile
> @@ -38,6 +38,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
> +DIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
>
>  include $(RTE_SDK)/mk/rte.subdir.mk
> diff --git a/drivers/crypto/scheduler/Makefile b/drivers/crypto/scheduler/Makefile
> new file mode 100644
> index 0000000..6a7ac6a
> --- /dev/null
> +++ b/drivers/crypto/scheduler/Makefile
> @@ -0,0 +1,66 @@
> +#   BSD LICENSE
> +#
> +#   Copyright(c) 2017 Intel Corporation. All rights reserved.
> +#
> +#   Redistribution and use in source and binary forms, with or without
> +#   modification, are permitted provided that the following conditions
> +#   are met:
> +#
> +#     * Redistributions of source code must retain the above copyright
> +#       notice, this list of conditions and the following disclaimer.
> +#     * Redistributions in binary form must reproduce the above copyright
> +#       notice, this list of conditions and the following disclaimer in
> +#       the documentation and/or other materials provided with the
> +#       distribution.
> +#     * Neither the name of Intel Corporation nor the names of its
> +#       contributors may be used to endorse or promote products derived
> +#       from this software without specific prior written permission.
> +#
> +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_pmd_crypto_scheduler.a
> +
> +# build flags
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +
> +# library version
> +LIBABIVER := 1
> +
> +# versioning export map
> +EXPORT_MAP := rte_pmd_crypto_scheduler_version.map
> +
> +#
> +# Export include files
> +#
> +SYMLINK-y-include += rte_cryptodev_scheduler_operations.h
> +SYMLINK-y-include += rte_cryptodev_scheduler.h
> +
> +# library source files
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd.c
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd_ops.c
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += rte_cryptodev_scheduler.c
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_roundrobin.c
> +
> +# library dependencies
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_eal
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mbuf
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mempool
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_ring
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_reorder
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_cryptodev
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> new file mode 100644
> index 0000000..e44bb47
> --- /dev/null
> +++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> @@ -0,0 +1,460 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +#include <rte_jhash.h>
> +#include <rte_reorder.h>
> +#include <rte_cryptodev.h>
> +#include <rte_cryptodev_pmd.h>
> +#include <rte_cryptodev_scheduler.h>
> +#include <rte_malloc.h>
> +
> +#include "scheduler_pmd_private.h"
> +
> +/** Update the scheduler PMD's capabilities with the attached device's
> + *  capabilities.
> + *  After each device is attached, the scheduler's capabilities must be
> + *  the common capability set of all attached slaves.
> + **/
> +static uint32_t
> +sync_caps(struct rte_cryptodev_capabilities *caps,
> +		uint32_t nb_caps,
> +		const struct rte_cryptodev_capabilities *slave_caps)
> +{
> +	uint32_t sync_nb_caps = nb_caps, nb_slave_caps = 0;
> +	uint32_t i;
> +
> +	while (slave_caps[nb_slave_caps].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
> +		nb_slave_caps++;
> +
> +	if (nb_caps == 0) {
> +		rte_memcpy(caps, slave_caps, sizeof(*caps) * nb_slave_caps);
> +		return nb_slave_caps;
> +	}
> +
> +	for (i = 0; i < sync_nb_caps; i++) {
> +		struct rte_cryptodev_capabilities *cap = &caps[i];
> +		uint32_t j;
> +
> +		for (j = 0; j < nb_slave_caps; j++) {
> +			const struct rte_cryptodev_capabilities *s_cap =
> +					&slave_caps[j];
> +
> +			if (s_cap->op != cap->op || s_cap->sym.xform_type !=
> +					cap->sym.xform_type)
> +				continue;
> +
> +			if (s_cap->sym.xform_type ==
> +					RTE_CRYPTO_SYM_XFORM_AUTH) {
> +				if (s_cap->sym.auth.algo !=
> +						cap->sym.auth.algo)
> +					continue;
> +
> +				cap->sym.auth.digest_size.min =
> +					s_cap->sym.auth.digest_size.min <
> +					cap->sym.auth.digest_size.min ?
> +					s_cap->sym.auth.digest_size.min :
> +					cap->sym.auth.digest_size.min;
> +				cap->sym.auth.digest_size.max =
> +					s_cap->sym.auth.digest_size.max <
> +					cap->sym.auth.digest_size.max ?
> +					s_cap->sym.auth.digest_size.max :
> +					cap->sym.auth.digest_size.max;
> +
> +			}
> +
> +			if (s_cap->sym.xform_type ==
> +					RTE_CRYPTO_SYM_XFORM_CIPHER)
> +				if (s_cap->sym.cipher.algo !=
> +						cap->sym.cipher.algo)
> +					continue;
> +
> +			/* matching slave cap found, keep this cap */
> +			break;
> +		}
> +
> +		if (j < nb_slave_caps)
> +			continue;
> +
> +		/* remove an uncommon cap from the array */
> +		for (j = i; j < sync_nb_caps - 1; j++)
> +			rte_memcpy(&caps[j], &caps[j+1], sizeof(*cap));
> +
> +		memset(&caps[sync_nb_caps - 1], 0, sizeof(*cap));
> +		sync_nb_caps--;
> +	}
> +
> +	return sync_nb_caps;
> +}
> +
> +static int
> +update_scheduler_capability(struct scheduler_ctx *sched_ctx)
> +{
> +	struct rte_cryptodev_capabilities tmp_caps[256] = {0};
> +	uint32_t nb_caps = 0, i;
> +
> +	if (sched_ctx->capabilities)
> +		rte_free(sched_ctx->capabilities);
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		struct rte_cryptodev_info dev_info;
> +
> +		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
> +
> +		nb_caps = sync_caps(tmp_caps, nb_caps, dev_info.capabilities);
> +		if (nb_caps == 0)
> +			return -1;
> +	}
> +
> +	sched_ctx->capabilities = rte_zmalloc_socket(NULL,
> +			sizeof(struct rte_cryptodev_capabilities) *
> +			(nb_caps + 1), 0, SOCKET_ID_ANY);
> +	if (!sched_ctx->capabilities)
> +		return -ENOMEM;
> +
> +	rte_memcpy(sched_ctx->capabilities, tmp_caps,
> +			sizeof(struct rte_cryptodev_capabilities) * nb_caps);
> +
> +	return 0;
> +}
> +
> +static void
> +update_scheduler_feature_flag(struct rte_cryptodev *dev)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	uint32_t i;
> +
> +	dev->feature_flags = 0;
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		struct rte_cryptodev_info dev_info;
> +
> +		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
> +
> +		dev->feature_flags |= dev_info.feature_flags;
> +	}
> +}
> +
> +static void
> +update_max_nb_qp(struct scheduler_ctx *sched_ctx)
> +{
> +	uint32_t i;
> +	uint32_t max_nb_qp;
> +
> +	if (!sched_ctx->nb_slaves)
> +		return;
> +
> +	max_nb_qp = sched_ctx->nb_slaves ? UINT32_MAX : 0;
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		struct rte_cryptodev_info dev_info;
> +
> +		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
> +		max_nb_qp = dev_info.max_nb_queue_pairs < max_nb_qp ?
> +				dev_info.max_nb_queue_pairs : max_nb_qp;
> +	}
> +
> +	sched_ctx->max_nb_queue_pairs = max_nb_qp;
> +}
> +
> +/** Attach a device to the scheduler. */
> +int
> +rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
> +{
> +	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
> +	struct scheduler_ctx *sched_ctx;
> +	struct scheduler_slave *slave;
> +	struct rte_cryptodev_info dev_info;
> +	uint32_t i;
> +
> +	if (!dev) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->data->dev_started) {
> +		CS_LOG_ERR("Illegal operation");
> +		return -EBUSY;
> +	}
> +
> +	sched_ctx = dev->data->dev_private;
> +	if (sched_ctx->nb_slaves >= MAX_SLAVES_NUM) {
> +		CS_LOG_ERR("Too many slaves attached");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++)
> +		if (sched_ctx->slaves[i].dev_id == slave_id) {
> +			CS_LOG_ERR("Slave already added");
> +			return -ENOTSUP;
> +		}
> +
> +	slave = &sched_ctx->slaves[sched_ctx->nb_slaves];
> +
> +	rte_cryptodev_info_get(slave_id, &dev_info);
> +
> +	slave->dev_id = slave_id;
> +	slave->dev_type = dev_info.dev_type;
> +	sched_ctx->nb_slaves++;
> +
> +	if (update_scheduler_capability(sched_ctx) < 0) {
> +		slave->dev_id = 0;
> +		slave->dev_type = 0;
> +		sched_ctx->nb_slaves--;
> +
> +		CS_LOG_ERR("capabilities update failed");
> +		return -ENOTSUP;
> +	}
> +
> +	update_scheduler_feature_flag(dev);
> +
> +	update_max_nb_qp(sched_ctx);
> +
> +	return 0;
> +}
> +
> +int
> +rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
> +{
> +	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
> +	struct scheduler_ctx *sched_ctx;
> +	uint32_t i, slave_pos;
> +
> +	if (!dev) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->data->dev_started) {
> +		CS_LOG_ERR("Illegal operation");
> +		return -EBUSY;
> +	}
> +
> +	sched_ctx = dev->data->dev_private;
> +
> +	for (slave_pos = 0; slave_pos < sched_ctx->nb_slaves; slave_pos++)
> +		if (sched_ctx->slaves[slave_pos].dev_id == slave_id)
> +			break;
> +	if (slave_pos == sched_ctx->nb_slaves) {
> +		CS_LOG_ERR("Cannot find slave");
> +		return -ENOTSUP;
> +	}
> +
> +	if (sched_ctx->ops.slave_detach(dev, slave_id) < 0) {
> +		CS_LOG_ERR("Failed to detach slave");
> +		return -ENOTSUP;
> +	}
> +
> +	for (i = slave_pos; i < sched_ctx->nb_slaves - 1; i++) {
> +		memcpy(&sched_ctx->slaves[i], &sched_ctx->slaves[i+1],
> +				sizeof(struct scheduler_slave));
> +	}
> +	memset(&sched_ctx->slaves[sched_ctx->nb_slaves - 1], 0,
> +			sizeof(struct scheduler_slave));
> +	sched_ctx->nb_slaves--;
> +
> +	if (update_scheduler_capability(sched_ctx) < 0) {
> +		CS_LOG_ERR("capabilities update failed");
> +		return -ENOTSUP;
> +	}
> +
> +	update_scheduler_feature_flag(dev);
> +
> +	update_max_nb_qp(sched_ctx);
> +
> +	return 0;
> +}
> +
> +int
> +rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
> +		enum rte_cryptodev_scheduler_mode mode)
> +{
> +	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
> +	struct scheduler_ctx *sched_ctx;
> +	int ret;
> +
> +	if (!dev) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->data->dev_started) {
> +		CS_LOG_ERR("Illegal operation");
> +		return -EBUSY;
> +	}
> +
> +	sched_ctx = dev->data->dev_private;
> +
> +	if (mode == sched_ctx->mode && mode != CDEV_SCHED_MODE_USERDEFINED)
> +		return 0;
> +
> +	switch (mode) {
> +	case CDEV_SCHED_MODE_ROUNDROBIN:
> +		if (rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
> +				roundrobin_scheduler) < 0) {
> +			CS_LOG_ERR("Failed to load scheduler");
> +			return -1;
> +		}
> +		break;
> +	case CDEV_SCHED_MODE_MIGRATION:
> +	case CDEV_SCHED_MODE_FALLBACK:
> +	default:
> +		CS_LOG_ERR("Not yet supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (sched_ctx->private_ctx)
> +		rte_free(sched_ctx->private_ctx);
> +
> +	ret = (*sched_ctx->ops.create_private_ctx)(dev);
> +	if (ret < 0) {
> +		CS_LOG_ERR("Unable to create scheduler private context");
> +		return ret;
> +	}
> +
> +	sched_ctx->mode = mode;
> +
> +	return 0;
> +}
> +
> +enum rte_cryptodev_scheduler_mode
> +rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id)
> +{
> +	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
> +	struct scheduler_ctx *sched_ctx;
> +
> +	if (!dev) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	sched_ctx = dev->data->dev_private;
> +
> +	return sched_ctx->mode;
> +}
> +
> +int
> +rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
> +		uint32_t enable_reorder)
> +{
> +	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
> +	struct scheduler_ctx *sched_ctx;
> +
> +	if (!dev) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->data->dev_started) {
> +		CS_LOG_ERR("Illegal operation");
> +		return -EBUSY;
> +	}
> +
> +	sched_ctx = dev->data->dev_private;
> +
> +	sched_ctx->reordering_enabled = enable_reorder;
> +
> +	return 0;
> +}
> +
> +int
> +rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
> +{
> +	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
> +	struct scheduler_ctx *sched_ctx;
> +
> +	if (!dev) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
> +		CS_LOG_ERR("Operation not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	sched_ctx = dev->data->dev_private;
> +
> +	return (int)sched_ctx->reordering_enabled;
> +}
> +
> +int
> +rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
> +		struct rte_cryptodev_scheduler *scheduler) {
> +
> +	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +
> +	/* check device stopped */
> +	if (dev->data->dev_started) {
> +		CS_LOG_ERR("Device should be stopped before loading scheduler");
> +		return -EBUSY;
> +	}
> +
> +	strncpy(sched_ctx->name, scheduler->name,
> +			RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
> +	strncpy(sched_ctx->description, scheduler->description,
> +			RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
> +
> +	/* load scheduler instance operations functions */
> +	sched_ctx->ops.config_queue_pair = scheduler->ops->config_queue_pair;
> +	sched_ctx->ops.create_private_ctx = scheduler->ops->create_private_ctx;
> +	sched_ctx->ops.scheduler_start = scheduler->ops->scheduler_start;
> +	sched_ctx->ops.scheduler_stop = scheduler->ops->scheduler_stop;
> +	sched_ctx->ops.slave_attach = scheduler->ops->slave_attach;
> +	sched_ctx->ops.slave_detach = scheduler->ops->slave_detach;
> +
> +	return 0;
> +}
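
Since load_user_scheduler() above only copies a name, a description and six
callbacks, registering a user-defined mode is mechanical. A hedged sketch
(every my_* identifier is hypothetical; the callback signatures come from
rte_cryptodev_scheduler_operations.h, and a real mode would install its
burst hooks from its scheduler_start callback, as the round-robin mode
does):

	static int my_attach(struct rte_cryptodev *dev, uint8_t id)
	{ (void)dev; (void)id; return 0; }
	static int my_detach(struct rte_cryptodev *dev, uint8_t id)
	{ (void)dev; (void)id; return 0; }
	static int my_start(struct rte_cryptodev *dev)
	{ (void)dev; return 0; }
	static int my_stop(struct rte_cryptodev *dev)
	{ (void)dev; return 0; }
	static int my_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)
	{ (void)dev; (void)qp_id; return 0; }
	static int my_create_ctx(struct rte_cryptodev *dev)
	{ (void)dev; return 0; }

	static struct rte_cryptodev_scheduler_ops my_ops = {
		my_attach, my_detach,
		my_start, my_stop,
		my_config_qp, my_create_ctx
	};

	static struct rte_cryptodev_scheduler my_scheduler = {
		.name = "my-scheduler",
		.description = "illustrative user-defined scheduler",
		.ops = &my_ops,
	};

	/* with the scheduler device stopped: */
	rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
			&my_scheduler);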
> diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
> new file mode 100644
> index 0000000..a3957ec
> --- /dev/null
> +++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
> @@ -0,0 +1,167 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_CRYPTO_SCHEDULER_H
> +#define _RTE_CRYPTO_SCHEDULER_H
> +
> +#include <rte_cryptodev_scheduler_operations.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * Crypto scheduler PMD operation modes
> + */
> +enum rte_cryptodev_scheduler_mode {
> +	CDEV_SCHED_MODE_NOT_SET = 0,
> +	CDEV_SCHED_MODE_USERDEFINED,
> +	CDEV_SCHED_MODE_ROUNDROBIN,
> +	CDEV_SCHED_MODE_MIGRATION,
> +	CDEV_SCHED_MODE_FALLBACK,
> +	CDEV_SCHED_MODE_MULTICORE,
> +
> +	CDEV_SCHED_MODE_COUNT /* number of modes */
> +};
> +
> +#define RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN	(64)
> +#define RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN	(256)
> +
> +struct rte_cryptodev_scheduler;
> +
> +/**
> + * Load a user defined scheduler
> + *
> + * @param	scheduler_id	The target scheduler device ID
> + *		scheduler	Pointer to the user defined scheduler
> + *
> + * @return
> + *	0 if loading successful, negative integer if otherwise.
> + */
> +int
> +rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
> +		struct rte_cryptodev_scheduler *scheduler);
> +
> +/**
> + * Attach a pre-configured crypto device to the scheduler
> + *
> + * @param	scheduler_id	The target scheduler device ID
> + *		slave_id	crypto device ID to be attached
> + *
> + * @return
> + *	0 if attaching successful, negative int if otherwise.
> + */
> +int
> +rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id);
> +
> +/**
> + * Detach an attached crypto device from the scheduler
> + *
> + * @param	scheduler_id	The target scheduler device ID
> + *		slave_id	crypto device ID to be detached
> + *
> + * @return
> + *	0 if detaching successful, negative int if otherwise.
> + */
> +int
> +rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id);
> +
> +/**
> + * Set the scheduling mode
> + *
> + * @param	scheduler_id	The target scheduler device ID
> + *		mode		The scheduling mode
> + *
> + * @return
> + *	0 if setting the mode is successful, negative integer if otherwise.
> + */
> +int
> +rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
> +		enum rte_cryptodev_scheduler_mode mode);
> +
> +/**
> + * Get the current scheduling mode
> + *
> + * @param	scheduler_id	The target scheduler device ID
> + *
> + * @return
> + *	The currently set scheduling mode
> + */
> +enum rte_cryptodev_scheduler_mode
> +rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id);
> +
> +/**
> + * Set the crypto ops reordering feature on/off
> + *
> + * @param	scheduler_id	The target scheduler device ID
> + *		enable_reorder	set the crypto op reordering feature
> + *				0: disable reordering
> + *				1: enable reordering
> + *
> + * @return
> + *	0 if setting successful, negative integer if otherwise.
> + */
> +int
> +rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
> +		uint32_t enable_reorder);
> +
> +/**
> + * Get the current crypto ops reordering feature
> + *
> + * @param	scheduler_id	The target scheduler device ID
> + *
> + * @return
> + *	0 if reordering is disabled
> + *	1 if reordering is enabled
> + *	negative integer if otherwise.
> + */
> +int
> +rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id);
> +
> +typedef uint16_t (*rte_cryptodev_scheduler_burst_enqueue_t)(void *qp_ctx,
> +		struct rte_crypto_op **ops, uint16_t nb_ops);
> +
> +typedef uint16_t (*rte_cryptodev_scheduler_burst_dequeue_t)(void *qp_ctx,
> +		struct rte_crypto_op **ops, uint16_t nb_ops);
> +
> +struct rte_cryptodev_scheduler {
> +	const char *name;
> +	const char *description;
> +
> +	struct rte_cryptodev_scheduler_ops *ops;
> +};
> +
> +extern struct rte_cryptodev_scheduler *roundrobin_scheduler;
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +#endif /* _RTE_CRYPTO_SCHEDULER_H */
> diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
> new file mode 100644
> index 0000000..93cf123
> --- /dev/null
> +++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
> @@ -0,0 +1,71 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
> +#define _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
> +
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +typedef int (*rte_cryptodev_scheduler_slave_attach_t)(
> +		struct rte_cryptodev *dev, uint8_t slave_id);
> +typedef int (*rte_cryptodev_scheduler_slave_detach_t)(
> +		struct rte_cryptodev *dev, uint8_t slave_id);
> +
> +typedef int (*rte_cryptodev_scheduler_start_t)(struct rte_cryptodev *dev);
> +typedef int (*rte_cryptodev_scheduler_stop_t)(struct rte_cryptodev *dev);
> +
> +typedef int (*rte_cryptodev_scheduler_config_queue_pair)(
> +		struct rte_cryptodev *dev, uint16_t qp_id);
> +
> +typedef int (*rte_cryptodev_scheduler_create_private_ctx)(
> +		struct rte_cryptodev *dev);
> +
> +struct rte_cryptodev_scheduler_ops {
> +	rte_cryptodev_scheduler_slave_attach_t slave_attach;
> +	rte_cryptodev_scheduler_slave_detach_t slave_detach;
> +
> +	rte_cryptodev_scheduler_start_t scheduler_start;
> +	rte_cryptodev_scheduler_stop_t scheduler_stop;
> +
> +	rte_cryptodev_scheduler_config_queue_pair config_queue_pair;
> +
> +	rte_cryptodev_scheduler_create_private_ctx create_private_ctx;
> +};
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +#endif /* _RTE_CRYPTO_SCHEDULER_OPERATIONS_H */
> diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
> new file mode 100644
> index 0000000..09e589a
> --- /dev/null
> +++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
> @@ -0,0 +1,12 @@
> +DPDK_17.02 {
> +	global:
> +
> +	rte_cryptodev_scheduler_load_user_scheduler;
> +	rte_cryptodev_scheduler_slave_attach;
> +	rte_cryptodev_scheduler_slave_detach;
> +	rte_crpytodev_scheduler_mode_set;
> +	rte_crpytodev_scheduler_mode_get;
> +	rte_cryptodev_scheduler_ordering_set;
> +	rte_cryptodev_scheduler_ordering_get;
> +
> +} DPDK_17.02;
> diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
> new file mode 100644
> index 0000000..5108572
> --- /dev/null
> +++ b/drivers/crypto/scheduler/scheduler_pmd.c
> @@ -0,0 +1,360 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +#include <rte_common.h>
> +#include <rte_hexdump.h>
> +#include <rte_cryptodev.h>
> +#include <rte_cryptodev_pmd.h>
> +#include <rte_vdev.h>
> +#include <rte_malloc.h>
> +#include <rte_cpuflags.h>
> +#include <rte_reorder.h>
> +#include <rte_cryptodev_scheduler.h>
> +
> +#include "scheduler_pmd_private.h"
> +
> +struct scheduler_init_params {
> +	struct rte_crypto_vdev_init_params def_p;
> +	uint32_t nb_slaves;
> +	uint8_t slaves[MAX_SLAVES_NUM];
> +};
> +
> +#define RTE_CRYPTODEV_VDEV_NAME			("name")
> +#define RTE_CRYPTODEV_VDEV_SLAVE		("slave")
> +#define RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG	("max_nb_queue_pairs")
> +#define RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG	("max_nb_sessions")
> +#define RTE_CRYPTODEV_VDEV_SOCKET_ID		("socket_id")
> +
> +const char *scheduler_valid_params[] = {
> +	RTE_CRYPTODEV_VDEV_NAME,
> +	RTE_CRYPTODEV_VDEV_SLAVE,
> +	RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
> +	RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
> +	RTE_CRYPTODEV_VDEV_SOCKET_ID
> +};
> +
> +static uint16_t
> +scheduler_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
> +		uint16_t nb_ops)
> +{
> +	struct scheduler_qp_ctx *qp_ctx = queue_pair;
> +	uint16_t processed_ops;
> +
> +	processed_ops = (*qp_ctx->schedule_enqueue)(qp_ctx, ops,
> +			nb_ops);
> +
> +	return processed_ops;
> +}
> +
> +static uint16_t
> +scheduler_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
> +		uint16_t nb_ops)
> +{
> +	struct scheduler_qp_ctx *qp_ctx = queue_pair;
> +	uint16_t processed_ops;
> +
> +	processed_ops = (*qp_ctx->schedule_dequeue)(qp_ctx, ops,
> +			nb_ops);
> +
> +	return processed_ops;
> +}
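
These two thin wrappers are the whole fast path: applications drive the
scheduler with the ordinary cryptodev burst API, and whichever mode was
loaded does the distribution. A small application-side sketch
(scheduler_id and the use of queue pair 0 are assumptions):

	#include <rte_cryptodev.h>

	static void
	drive_scheduler(uint8_t scheduler_id, struct rte_crypto_op **burst,
			uint16_t n)
	{
		/* dispatched through qp_ctx->schedule_enqueue */
		uint16_t nb_enq = rte_cryptodev_enqueue_burst(scheduler_id,
				0, burst, n);

		/* collected (and, with ordering enabled, reordered) by
		 * qp_ctx->schedule_dequeue */
		uint16_t nb_deq = rte_cryptodev_dequeue_burst(scheduler_id,
				0, burst, n);

		(void)nb_enq;
		(void)nb_deq;
	}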
> +
> +static int
> +attach_init_slaves(uint8_t scheduler_id,
> +		const uint8_t *slaves, const uint8_t nb_slaves)
> +{
> +	uint8_t i;
> +
> +	for (i = 0; i < nb_slaves; i++) {
> +		struct rte_cryptodev *dev =
> +				rte_cryptodev_pmd_get_dev(slaves[i]);
> +		int status;
> +
> +		if (!dev) {
> +			CS_LOG_ERR("Failed to locate slave cryptodev "
> +					"%u.\n", slaves[i]);
> +			return -EINVAL;
> +		}
> +
> +		status = rte_cryptodev_scheduler_slave_attach(
> +				scheduler_id, slaves[i]);
> +		if (status < 0) {
> +			CS_LOG_ERR("Failed to attach slave cryptodev "
> +					"%u.\n", slaves[i]);
> +			return status;
> +		}
> +
> +		RTE_LOG(INFO, PMD, "  Attached slave cryptodev %s\n",
> +				dev->data->name);
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +cryptodev_scheduler_create(const char *name,
> +	struct scheduler_init_params *init_params)
> +{
> +	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
> +	struct rte_cryptodev *dev;
> +	struct scheduler_ctx *sched_ctx;
> +
> +	if (init_params->def_p.name[0] == '\0') {
> +		int ret = rte_cryptodev_pmd_create_dev_name(
> +				crypto_dev_name,
> +				RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
> +
> +		if (ret < 0) {
> +			CS_LOG_ERR("failed to create unique name");
> +			return ret;
> +		}
> +	} else
> +		snprintf(crypto_dev_name, RTE_CRYPTODEV_NAME_MAX_LEN,
> +				"%s", init_params->def_p.name);
> +
> +	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
> +			sizeof(struct scheduler_ctx),
> +			init_params->def_p.socket_id);
> +	if (dev == NULL) {
> +		CS_LOG_ERR("driver %s: failed to create cryptodev vdev",
> +			name);
> +		return -EFAULT;
> +	}
> +
> +	dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
> +	dev->dev_ops = rte_crypto_scheduler_pmd_ops;
> +
> +	dev->enqueue_burst = scheduler_enqueue_burst;
> +	dev->dequeue_burst = scheduler_dequeue_burst;
> +
> +	sched_ctx = dev->data->dev_private;
> +	sched_ctx->max_nb_queue_pairs =
> +			init_params->def_p.max_nb_queue_pairs;
> +
> +	return attach_init_slaves(dev->data->dev_id, init_params->slaves,
> +			init_params->nb_slaves);
> +}
> +
> +static int
> +cryptodev_scheduler_remove(const char *name)
> +{
> +	struct rte_cryptodev *dev;
> +	struct scheduler_ctx *sched_ctx;
> +
> +	if (name == NULL)
> +		return -EINVAL;
> +
> +	dev = rte_cryptodev_pmd_get_named_dev(name);
> +	if (dev == NULL)
> +		return -EINVAL;
> +
> +	sched_ctx = dev->data->dev_private;
> +
> +	if (sched_ctx->nb_slaves) {
> +		uint32_t i;
> +
> +		for (i = 0; i < sched_ctx->nb_slaves; i++)
> +			rte_cryptodev_scheduler_slave_detach(dev->data->dev_id,
> +					sched_ctx->slaves[i].dev_id);
> +	}
> +
> +	RTE_LOG(INFO, PMD, "Closing Crypto Scheduler device %s on numa "
> +		"socket %u\n", name, rte_socket_id());
> +
> +	return 0;
> +}
> +
> +static uint8_t
> +number_of_sockets(void)
> +{
> +	int sockets = 0;
> +	int i;
> +	const struct rte_memseg *ms = rte_eal_get_physmem_layout();
> +
> +	for (i = 0; ((i < RTE_MAX_MEMSEG) && (ms[i].addr != NULL)); i++) {
> +		if (sockets < ms[i].socket_id)
> +			sockets = ms[i].socket_id;
> +	}
> +
> +	/* Number of sockets = maximum socket_id + 1 */
> +	return ++sockets;
> +}
> +
> +/** Parse integer from integer argument */
> +static int
> +parse_integer_arg(const char *key __rte_unused,
> +		const char *value, void *extra_args)
> +{
> +	int *i = (int *) extra_args;
> +
> +	*i = atoi(value);
> +	if (*i < 0) {
> +		CS_LOG_ERR("Argument has to be positive.\n");
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
> +/** Parse name */
> +static int
> +parse_name_arg(const char *key __rte_unused,
> +		const char *value, void *extra_args)
> +{
> +	struct rte_crypto_vdev_init_params *params = extra_args;
> +
> +	if (strlen(value) >= RTE_CRYPTODEV_NAME_MAX_LEN - 1) {
> +		CS_LOG_ERR("Invalid name %s, should be less than "
> +				"%u bytes.\n", value,
> +				RTE_CRYPTODEV_NAME_MAX_LEN - 1);
> +		return -1;
> +	}
> +
> +	strncpy(params->name, value, RTE_CRYPTODEV_NAME_MAX_LEN);
> +
> +	return 0;
> +}
> +
> +/** Parse slave */
> +static int
> +parse_slave_arg(const char *key __rte_unused,
> +		const char *value, void *extra_args)
> +{
> +	struct scheduler_init_params *param = extra_args;
> +	struct rte_cryptodev *dev =
> +			rte_cryptodev_pmd_get_named_dev(value);
> +
> +	if (!dev) {
> +		RTE_LOG(ERR, PMD, "Invalid slave name %s.\n", value);
> +		return -1;
> +	}
> +
> +	if (param->nb_slaves >= MAX_SLAVES_NUM) {
> +		CS_LOG_ERR("Too many slaves.\n");
> +		return -1;
> +	}
> +
> +	param->slaves[param->nb_slaves] = dev->data->dev_id;
> +	param->nb_slaves++;
> +
> +	return 0;
> +}
> +
> +static int
> +scheduler_parse_init_params(struct scheduler_init_params *params,
> +		const char *input_args)
> +{
> +	struct rte_kvargs *kvlist = NULL;
> +	int ret = 0;
> +
> +	if (params == NULL)
> +		return -EINVAL;
> +
> +	if (input_args) {
> +		kvlist = rte_kvargs_parse(input_args,
> +				scheduler_valid_params);
> +		if (kvlist == NULL)
> +			return -1;
> +
> +		ret = rte_kvargs_process(kvlist,
> +				RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
> +				&parse_integer_arg,
> +				&params->def_p.max_nb_queue_pairs);
> +		if (ret < 0)
> +			goto free_kvlist;
> +
> +		ret = rte_kvargs_process(kvlist,
> +				RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
> +				&parse_integer_arg,
> +				&params->def_p.max_nb_sessions);
> +		if (ret < 0)
> +			goto free_kvlist;
> +
> +		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SOCKET_ID,
> +				&parse_integer_arg,
> +				&params->def_p.socket_id);
> +		if (ret < 0)
> +			goto free_kvlist;
> +
> +		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_NAME,
> +				&parse_name_arg,
> +				&params->def_p);
> +		if (ret < 0)
> +			goto free_kvlist;
> +
> +		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SLAVE,
> +				&parse_slave_arg, params);
> +		if (ret < 0)
> +			goto free_kvlist;
> +
> +		if (params->def_p.socket_id >= number_of_sockets()) {
> +			CDEV_LOG_ERR("Invalid socket id specified to create "
> +				"the virtual crypto device on");
> +			ret = -EINVAL;
> +			goto free_kvlist;
> +		}
> +	}
> +
> +free_kvlist:
> +	rte_kvargs_free(kvlist);
> +	return ret;
> +}
> +
> +static int
> +cryptodev_scheduler_probe(const char *name, const char *input_args)
> +{
> +	struct scheduler_init_params init_params = {
> +		.def_p = {
> +			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
> +			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
> +			rte_socket_id(),
> +			""
> +		},
> +		.nb_slaves = 0,
> +		.slaves = {0}
> +	};
> +
> +	if (scheduler_parse_init_params(&init_params, input_args) < 0) {
> +		CS_LOG_ERR("failed to parse initialisation arguments");
> +		return -EINVAL;
> +	}
> +
> +	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
> +			init_params.def_p.socket_id);
> +	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
> +			init_params.def_p.max_nb_queue_pairs);
> +	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
> +			init_params.def_p.max_nb_sessions);
> +	if (init_params.def_p.name[0] != '\0')
> +		RTE_LOG(INFO, PMD, "  User defined name = %s\n",
> +			init_params.def_p.name);
> +
> +	return cryptodev_scheduler_create(name, &init_params);
> +}
> +
> +static struct rte_vdev_driver cryptodev_scheduler_pmd_drv = {
> +	.probe = cryptodev_scheduler_probe,
> +	.remove = cryptodev_scheduler_remove
> +};
> +
> +RTE_PMD_REGISTER_VDEV(CRYPTODEV_NAME_SCHEDULER_PMD,
> +	cryptodev_scheduler_pmd_drv);
> +RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
> +	"max_nb_queue_pairs=<int> "
> +	"max_nb_sessions=<int> "
> +	"socket_id=<int>");
> diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> new file mode 100644
> index 0000000..af6d8fe
> --- /dev/null
> +++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> @@ -0,0 +1,489 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +#include <string.h>
> +
> +#include <rte_config.h>
> +#include <rte_common.h>
> +#include <rte_malloc.h>
> +#include <rte_dev.h>
> +#include <rte_cryptodev.h>
> +#include <rte_cryptodev_pmd.h>
> +#include <rte_reorder.h>
> +
> +#include "scheduler_pmd_private.h"
> +
> +/** Configure device */
> +static int
> +scheduler_pmd_config(struct rte_cryptodev *dev)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	uint32_t i;
> +	int ret = 0;
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +		struct rte_cryptodev *slave_dev =
> +				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +
> +		ret = (*slave_dev->dev_ops->dev_configure)(slave_dev);
> +		if (ret < 0)
> +			break;
> +	}
> +
> +	return ret;
> +}
> +
> +static int
> +update_reorder_buff(struct rte_cryptodev *dev, uint16_t qp_id)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
> +
> +	if (sched_ctx->reordering_enabled) {
> +		char reorder_buff_name[RTE_CRYPTODEV_NAME_MAX_LEN];
> +		uint32_t buff_size = sched_ctx->nb_slaves * PER_SLAVE_BUFF_SIZE;
> +
> +		if (qp_ctx->reorder_buf) {
> +			rte_reorder_free(qp_ctx->reorder_buf);
> +			qp_ctx->reorder_buf = NULL;
> +		}
> +
> +		if (!buff_size)
> +			return 0;
> +
> +		if (snprintf(reorder_buff_name, RTE_CRYPTODEV_NAME_MAX_LEN,
> +			"%s_rb_%u_%u", RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
> +			dev->data->dev_id, qp_id) < 0) {
> +			CS_LOG_ERR("failed to create unique reorder buffer "
> +					"name");
> +			return -ENOMEM;
> +		}
> +
> +		qp_ctx->reorder_buf = rte_reorder_create(reorder_buff_name,
> +				rte_socket_id(), buff_size);
> +		if (!qp_ctx->reorder_buf) {
> +			CS_LOG_ERR("failed to create reorder buffer");
> +			return -ENOMEM;
> +		}
> +	} else {
> +		if (qp_ctx->reorder_buf) {
> +			rte_reorder_free(qp_ctx->reorder_buf);
> +			qp_ctx->reorder_buf = NULL;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +/** Start device */
> +static int
> +scheduler_pmd_start(struct rte_cryptodev *dev)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	uint32_t i;
> +	int ret;
> +
> +	if (dev->data->dev_started)
> +		return 0;
> +
> +	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
> +		ret = update_reorder_buff(dev, i);
> +		if (ret < 0) {
> +			CS_LOG_ERR("Failed to update reorder buffer");
> +			return ret;
> +		}
> +	}
> +
> +	if (sched_ctx->mode == CDEV_SCHED_MODE_NOT_SET) {
> +		CS_LOG_ERR("Scheduler mode is not set");
> +		return -1;
> +	}
> +
> +	if (!sched_ctx->nb_slaves) {
> +		CS_LOG_ERR("No slave in the scheduler");
> +		return -1;
> +	}
> +
> +	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.slave_attach, -ENOTSUP);
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +
> +		if ((*sched_ctx->ops.slave_attach)(dev, slave_dev_id) < 0) {
> +			CS_LOG_ERR("Failed to attach slave");
> +			return -ENOTSUP;
> +		}
> +	}
> +
> +	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.scheduler_start, -ENOTSUP);
> +
> +	if ((*sched_ctx->ops.scheduler_start)(dev) < 0) {
> +		CS_LOG_ERR("Scheduler start failed");
> +		return -1;
> +	}
> +
> +	/* start all slaves */
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +		struct rte_cryptodev *slave_dev =
> +				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +
> +		ret = (*slave_dev->dev_ops->dev_start)(slave_dev);
> +		if (ret < 0) {
> +			CS_LOG_ERR("Failed to start slave dev %u",
> +					slave_dev_id);
> +			return ret;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +/** Stop device */
> +static void
> +scheduler_pmd_stop(struct rte_cryptodev *dev)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	uint32_t i;
> +
> +	if (!dev->data->dev_started)
> +		return;
> +
> +	/* stop all slaves first */
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +		struct rte_cryptodev *slave_dev =
> +				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +
> +		(*slave_dev->dev_ops->dev_stop)(slave_dev);
> +	}
> +
> +	if (*sched_ctx->ops.scheduler_stop)
> +		(*sched_ctx->ops.scheduler_stop)(dev);
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +
> +		if (*sched_ctx->ops.slave_detach)
> +			(*sched_ctx->ops.slave_detach)(dev, slave_dev_id);
> +	}
> +}
> +
> +/** Close device */
> +static int
> +scheduler_pmd_close(struct rte_cryptodev *dev)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	uint32_t i;
> +	int ret;
> +
> +	/* the dev should be stopped before being closed */
> +	if (dev->data->dev_started)
> +		return -EBUSY;
> +
> +	/* close all slaves first */
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +		struct rte_cryptodev *slave_dev =
> +				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +
> +		ret = (*slave_dev->dev_ops->dev_close)(slave_dev);
> +		if (ret < 0)
> +			return ret;
> +	}
> +
> +	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
> +		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
> +
> +		if (qp_ctx->reorder_buf) {
> +			rte_reorder_free(qp_ctx->reorder_buf);
> +			qp_ctx->reorder_buf = NULL;
> +		}
> +
> +		if (qp_ctx->private_qp_ctx) {
> +			rte_free(qp_ctx->private_qp_ctx);
> +			qp_ctx->private_qp_ctx = NULL;
> +		}
> +	}
> +
> +	if (sched_ctx->private_ctx)
> +		rte_free(sched_ctx->private_ctx);
> +
> +	if (sched_ctx->capabilities)
> +		rte_free(sched_ctx->capabilities);
> +
> +	return 0;
> +}
> +
> +/** Get device statistics */
> +static void
> +scheduler_pmd_stats_get(struct rte_cryptodev *dev,
> +	struct rte_cryptodev_stats *stats)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	uint32_t i;
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +		struct rte_cryptodev *slave_dev =
> +				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +		struct rte_cryptodev_stats slave_stats = {0};
> +
> +		(*slave_dev->dev_ops->stats_get)(slave_dev, &slave_stats);
> +
> +		stats->enqueued_count += slave_stats.enqueued_count;
> +		stats->dequeued_count += slave_stats.dequeued_count;
> +
> +		stats->enqueue_err_count += slave_stats.enqueue_err_count;
> +		stats->dequeue_err_count += slave_stats.dequeue_err_count;
> +	}
> +}
> +
> +/** Reset device statistics */
> +static void
> +scheduler_pmd_stats_reset(struct rte_cryptodev *dev)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	uint32_t i;
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +		struct rte_cryptodev *slave_dev =
> +				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +
> +		(*slave_dev->dev_ops->stats_reset)(slave_dev);
> +	}
> +}
> +
> +/** Get device info */
> +static void
> +scheduler_pmd_info_get(struct rte_cryptodev *dev,
> +		struct rte_cryptodev_info *dev_info)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	uint32_t max_nb_sessions = sched_ctx->nb_slaves ? UINT32_MAX : 0;
> +	uint32_t i;
> +
> +	if (!dev_info)
> +		return;
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +		struct rte_cryptodev_info slave_info;
> +
> +		rte_cryptodev_info_get(slave_dev_id, &slave_info);
> +		max_nb_sessions = slave_info.sym.max_nb_sessions <
> +				max_nb_sessions ?
> +				slave_info.sym.max_nb_sessions :
> +				max_nb_sessions;
> +	}
> +
> +	dev_info->dev_type = dev->dev_type;
> +	dev_info->feature_flags = dev->feature_flags;
> +	dev_info->capabilities = sched_ctx->capabilities;
> +	dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
> +	dev_info->sym.max_nb_sessions = max_nb_sessions;
> +}
> +
> +/** Release queue pair */
> +static int
> +scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
> +{
> +	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
> +
> +	if (!qp_ctx)
> +		return 0;
> +
> +	if (qp_ctx->reorder_buf)
> +		rte_reorder_free(qp_ctx->reorder_buf);
> +	if (qp_ctx->private_qp_ctx)
> +		rte_free(qp_ctx->private_qp_ctx);
> +
> +	rte_free(qp_ctx);
> +	dev->data->queue_pairs[qp_id] = NULL;
> +
> +	return 0;
> +}
> +
> +/** Setup a queue pair */
> +static int
> +scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> +	__rte_unused const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +	struct scheduler_qp_ctx *qp_ctx;
> +	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
> +
> +	if (snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN,
> +			"CRYTO_SCHE PMD %u QP %u",
> +			dev->data->dev_id, qp_id) < 0) {
> +		CS_LOG_ERR("Failed to create unique queue pair name");
> +		return -EFAULT;
> +	}
> +
> +	/* Free memory prior to re-allocation if needed. */
> +	if (dev->data->queue_pairs[qp_id] != NULL)
> +		scheduler_pmd_qp_release(dev, qp_id);
> +
> +	/* Allocate the queue pair data structure. */
> +	qp_ctx = rte_zmalloc_socket(name, sizeof(*qp_ctx), RTE_CACHE_LINE_SIZE,
> +			socket_id);
> +	if (qp_ctx == NULL)
> +		return -ENOMEM;
> +
> +	dev->data->queue_pairs[qp_id] = qp_ctx;
> +
> +	if (*sched_ctx->ops.config_queue_pair) {
> +		if ((*sched_ctx->ops.config_queue_pair)(dev, qp_id) < 0) {
> +			CS_LOG_ERR("Unable to configure queue pair");
> +			return -1;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +/** Start queue pair */
> +static int
> +scheduler_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint16_t queue_pair_id)
> +{
> +	return -ENOTSUP;
> +}
> +
> +/** Stop queue pair */
> +static int
> +scheduler_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint16_t queue_pair_id)
> +{
> +	return -ENOTSUP;
> +}
> +
> +/** Return the number of allocated queue pairs */
> +static uint32_t
> +scheduler_pmd_qp_count(struct rte_cryptodev *dev)
> +{
> +	return dev->data->nb_queue_pairs;
> +}
> +
> +static uint32_t
> +scheduler_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
> +{
> +	return sizeof(struct scheduler_session);
> +}
> +
> +static int
> +config_slave_sess(struct scheduler_ctx *sched_ctx,
> +		struct rte_crypto_sym_xform *xform,
> +		struct scheduler_session *sess,
> +		uint32_t create)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +		struct scheduler_slave *slave = &sched_ctx->slaves[i];
> +		struct rte_cryptodev *dev = &rte_cryptodev_globals->
> +				devs[slave->dev_id];
> +
> +		if (sess->sessions[i]) {
> +			if (create)
> +				continue;
> +			/* !create */
> +			(*dev->dev_ops->session_clear)(dev,
> +					(void *)sess->sessions[i]);
> +			sess->sessions[i] = NULL;
> +		} else {
> +			if (!create)
> +				continue;
> +			/* create */
> +			sess->sessions[i] =
> +					rte_cryptodev_sym_session_create(
> +							slave->dev_id, xform);
> +			if (!sess->sessions[i]) {
> +				config_slave_sess(sched_ctx, NULL, sess, 0);
> +				return -1;
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +/** Clear the memory of session so it doesn't leave key material behind */
> +static void
> +scheduler_pmd_session_clear(struct rte_cryptodev *dev,
> +	void *sess)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +
> +	config_slave_sess(sched_ctx, NULL, sess, 0);
> +
> +	memset(sess, 0, sizeof(struct scheduler_session));
> +}
> +
> +static void *
> +scheduler_pmd_session_configure(struct rte_cryptodev *dev,
> +	struct rte_crypto_sym_xform *xform, void *sess)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +
> +	if (config_slave_sess(sched_ctx, xform, sess, 1) < 0) {
> +		CS_LOG_ERR("unabled to config sym session");
> +		return NULL;
> +	}
> +
> +	return sess;
> +}
> +
> +struct rte_cryptodev_ops scheduler_pmd_ops = {
> +		.dev_configure		= scheduler_pmd_config,
> +		.dev_start		= scheduler_pmd_start,
> +		.dev_stop		= scheduler_pmd_stop,
> +		.dev_close		= scheduler_pmd_close,
> +
> +		.stats_get		= scheduler_pmd_stats_get,
> +		.stats_reset		= scheduler_pmd_stats_reset,
> +
> +		.dev_infos_get		= scheduler_pmd_info_get,
> +
> +		.queue_pair_setup	= scheduler_pmd_qp_setup,
> +		.queue_pair_release	= scheduler_pmd_qp_release,
> +		.queue_pair_start	= scheduler_pmd_qp_start,
> +		.queue_pair_stop	= scheduler_pmd_qp_stop,
> +		.queue_pair_count	= scheduler_pmd_qp_count,
> +
> +		.session_get_size	= scheduler_pmd_session_get_size,
> +		.session_configure	= scheduler_pmd_session_configure,
> +		.session_clear		= scheduler_pmd_session_clear,
> +};
> +
> +struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
> diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
> new file mode 100644
> index 0000000..ac4690e
> --- /dev/null
> +++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
> @@ -0,0 +1,115 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _SCHEDULER_PMD_PRIVATE_H
> +#define _SCHEDULER_PMD_PRIVATE_H
> +
> +#include <rte_hash.h>
> +#include <rte_reorder.h>
> +#include <rte_cryptodev_scheduler.h>
> +
> +/** Maximum number of slave devices per scheduler device */
> +#ifndef MAX_SLAVES_NUM
> +#define MAX_SLAVES_NUM				(8)
> +#endif
> +
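> +/** Reorder buffer entries reserved per attached slave; each queue
> + * pair's reorder buffer holds nb_slaves * PER_SLAVE_BUFF_SIZE entries
> + * (see update_reorder_buff() in scheduler_pmd_ops.c).
> + */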
> +#define PER_SLAVE_BUFF_SIZE			(256)
> +
> +#define CS_LOG_ERR(fmt, args...)					\
> +	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",		\
> +		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
> +		__func__, __LINE__, ## args)
> +
> +#ifdef RTE_LIBRTE_CRYPTO_SCHEDULER_DEBUG
> +#define CS_LOG_INFO(fmt, args...)					\
> +	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
> +		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
> +		__func__, __LINE__, ## args)
> +
> +#define CS_LOG_DBG(fmt, args...)					\
> +	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
> +		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
> +		__func__, __LINE__, ## args)
> +#else
> +#define CS_LOG_INFO(fmt, args...)
> +#define CS_LOG_DBG(fmt, args...)
> +#endif
> +
> +struct scheduler_slave {
> +	uint8_t dev_id;
> +	uint16_t qp_id;
> +	uint32_t nb_inflight_cops;
> +
> +	enum rte_cryptodev_type dev_type;
> +};
> +
> +struct scheduler_ctx {
> +	void *private_ctx;
> +	/**< private scheduler context pointer */
> +
> +	struct rte_cryptodev_capabilities *capabilities;
> +	uint32_t nb_capabilities;
> +
> +	uint32_t max_nb_queue_pairs;
> +
> +	struct scheduler_slave slaves[MAX_SLAVES_NUM];
> +	uint32_t nb_slaves;
> +
> +	enum rte_cryptodev_scheduler_mode mode;
> +
> +	struct rte_cryptodev_scheduler_ops ops;
> +
> +	uint8_t reordering_enabled;
> +
> +	char name[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
> +	char description[RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN];
> +} __rte_cache_aligned;
> +
> +struct scheduler_qp_ctx {
> +	void *private_qp_ctx;
> +
> +	rte_cryptodev_scheduler_burst_enqueue_t schedule_enqueue;
> +	rte_cryptodev_scheduler_burst_dequeue_t schedule_dequeue;
> +
> +	struct rte_reorder_buffer *reorder_buf;
> +	uint32_t seqn;
> +} __rte_cache_aligned;
> +
> +struct scheduler_session {
> +	struct rte_cryptodev_sym_session *sessions[MAX_SLAVES_NUM];
> +};
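> +/**< One slave session per attached slave, indexed to match
> + * scheduler_ctx.slaves[]; created and cleared by config_slave_sess()
> + * in scheduler_pmd_ops.c.
> + */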
> +
> +/** device specific operations function pointer structure */
> +extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
> +
> +#endif /* _SCHEDULER_PMD_PRIVATE_H */
> diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
> new file mode 100644
> index 0000000..c5ff6f5
> --- /dev/null
> +++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
> @@ -0,0 +1,417 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <rte_cryptodev.h>
> +#include <rte_malloc.h>
> +#include <rte_cryptodev_scheduler_operations.h>
> +
> +#include "scheduler_pmd_private.h"
> +
> +struct roundrobin_scheduler_ctx {
> +};
> +
> +struct rr_scheduler_qp_ctx {
> +	struct scheduler_slave slaves[MAX_SLAVES_NUM];
> +	uint32_t nb_slaves;
> +
> +	uint32_t last_enq_slave_idx;
> +	uint32_t last_deq_slave_idx;
> +};
> +
> +static uint16_t
> +schedule_enqueue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
> +{
> +	struct rr_scheduler_qp_ctx *rr_qp_ctx =
> +			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
> +	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
> +	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
> +	uint16_t i, processed_ops;
> +	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
> +
> +	if (unlikely(nb_ops == 0))
> +		return 0;
> +
> +	for (i = 0; i < nb_ops && i < 4; i++)
> +		rte_prefetch0(ops[i]->sym->session);
> +
> +	for (i = 0; i < nb_ops - 8; i += 4) {
> +		sess0 = (struct scheduler_session *)
> +				ops[i]->sym->session->_private;
> +		sess1 = (struct scheduler_session *)
> +				ops[i+1]->sym->session->_private;
> +		sess2 = (struct scheduler_session *)
> +				ops[i+2]->sym->session->_private;
> +		sess3 = (struct scheduler_session *)
> +				ops[i+3]->sym->session->_private;
> +
> +		ops[i]->sym->session = sess0->sessions[slave_idx];
> +		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
> +		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
> +		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
> +
> +		rte_prefetch0(ops[i + 4]->sym->session);
> +		rte_prefetch0(ops[i + 5]->sym->session);
> +		rte_prefetch0(ops[i + 6]->sym->session);
> +		rte_prefetch0(ops[i + 7]->sym->session);
> +	}
> +
> +	for (; i < nb_ops; i++) {
> +		sess0 = (struct scheduler_session *)
> +				ops[i]->sym->session->_private;
> +		ops[i]->sym->session = sess0->sessions[slave_idx];
> +	}
> +
> +	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
> +			slave->qp_id, ops, nb_ops);
> +
> +	slave->nb_inflight_cops += processed_ops;
> +
> +	rr_qp_ctx->last_enq_slave_idx += 1;
> +	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
> +		rr_qp_ctx->last_enq_slave_idx = 0;
> +
> +	return processed_ops;
> +}
> +
> +static uint16_t
> +schedule_enqueue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
> +		uint16_t nb_ops)
> +{
> +	struct scheduler_qp_ctx *gen_qp_ctx = qp_ctx;
> +	struct rr_scheduler_qp_ctx *rr_qp_ctx =
> +			gen_qp_ctx->private_qp_ctx;
> +	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
> +	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
> +	uint16_t i, processed_ops;
> +	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
> +
> +	if (unlikely(nb_ops == 0))
> +		return 0;
> +
> +	for (i = 0; i < nb_ops && i < 4; i++) {
> +		rte_prefetch0(ops[i]->sym->session);
> +		rte_prefetch0(ops[i]->sym->m_src);
> +	}
> +
> +	for (i = 0; i < nb_ops - 8; i += 4) {
> +		sess0 = (struct scheduler_session *)
> +				ops[i]->sym->session->_private;
> +		sess1 = (struct scheduler_session *)
> +				ops[i+1]->sym->session->_private;
> +		sess2 = (struct scheduler_session *)
> +				ops[i+2]->sym->session->_private;
> +		sess3 = (struct scheduler_session *)
> +				ops[i+3]->sym->session->_private;
> +
> +		ops[i]->sym->session = sess0->sessions[slave_idx];
> +		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
> +		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
> +		ops[i + 1]->sym->m_src->seqn = gen_qp_ctx->seqn++;
> +		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
> +		ops[i + 2]->sym->m_src->seqn = gen_qp_ctx->seqn++;
> +		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
> +		ops[i + 3]->sym->m_src->seqn = gen_qp_ctx->seqn++;
> +
> +		rte_prefetch0(ops[i + 4]->sym->session);
> +		rte_prefetch0(ops[i + 4]->sym->m_src);
> +		rte_prefetch0(ops[i + 5]->sym->session);
> +		rte_prefetch0(ops[i + 5]->sym->m_src);
> +		rte_prefetch0(ops[i + 6]->sym->session);
> +		rte_prefetch0(ops[i + 6]->sym->m_src);
> +		rte_prefetch0(ops[i + 7]->sym->session);
> +		rte_prefetch0(ops[i + 7]->sym->m_src);
> +	}
> +
> +	for (; i < nb_ops; i++) {
> +		sess0 = (struct scheduler_session *)
> +				ops[i]->sym->session->_private;
> +		ops[i]->sym->session = sess0->sessions[slave_idx];
> +		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
> +	}
> +
> +	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
> +			slave->qp_id, ops, nb_ops);
> +
> +	slave->nb_inflight_cops += processed_ops;
> +
> +	rr_qp_ctx->last_enq_slave_idx += 1;
> +	if (unlikely(rr_qp_ctx->last_enq_slave_idx >= rr_qp_ctx->nb_slaves))
> +		rr_qp_ctx->last_enq_slave_idx = 0;
> +
> +	return processed_ops;
> +}
> +
> +static uint16_t
> +schedule_dequeue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
> +{
> +	struct rr_scheduler_qp_ctx *rr_qp_ctx =
> +			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
> +	struct scheduler_slave *slave;
> +	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
> +	uint16_t nb_deq_ops;
> +
> +	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
> +		do {
> +			last_slave_idx += 1;
> +
> +			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
> +				last_slave_idx = 0;
> +			/* looped back, means no inflight cops in the queue */
> +			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
> +				return 0;
> +		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
> +				== 0);
> +	}
> +
> +	slave = &rr_qp_ctx->slaves[last_slave_idx];
> +
> +	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
> +			slave->qp_id, ops, nb_ops);
> +
> +	last_slave_idx += 1;
> +	if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
> +		last_slave_idx = 0;
> +
> +	rr_qp_ctx->last_deq_slave_idx = last_slave_idx;
> +
> +	slave->nb_inflight_cops -= nb_deq_ops;
> +
> +	return nb_deq_ops;
> +}
> +
> +static uint16_t
> +schedule_dequeue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
> +		uint16_t nb_ops)
> +{
> +	struct scheduler_qp_ctx *gen_qp_ctx = (struct scheduler_qp_ctx *)qp_ctx;
> +	struct rr_scheduler_qp_ctx *rr_qp_ctx = (gen_qp_ctx->private_qp_ctx);
> +	struct scheduler_slave *slave;
> +	struct rte_reorder_buffer *reorder_buff = gen_qp_ctx->reorder_buf;
> +	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
> +	uint16_t nb_deq_ops, nb_drained_mbufs;
> +	const uint16_t nb_op_ops = nb_ops;
> +	struct rte_crypto_op *op_ops[nb_op_ops];
> +	struct rte_mbuf *reorder_mbufs[nb_op_ops];
> +	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
> +	uint16_t i;
> +
> +	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
> +		do {
> +			last_slave_idx += 1;
> +
> +			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
> +				last_slave_idx = 0;
> +			/* looped back, means no inflight cops in the queue */
> +			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
> +				return 0;
> +		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
> +				== 0);
> +	}
> +
> +	slave = &rr_qp_ctx->slaves[last_slave_idx];
> +
> +	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
> +			slave->qp_id, op_ops, nb_ops);
> +
> +	rr_qp_ctx->last_deq_slave_idx += 1;
> +	if (unlikely(rr_qp_ctx->last_deq_slave_idx >= rr_qp_ctx->nb_slaves))
> +		rr_qp_ctx->last_deq_slave_idx = 0;
> +
> +	slave->nb_inflight_cops -= nb_deq_ops;
> +
> +	for (i = 0; i < nb_deq_ops && i < 4; i++)
> +		rte_prefetch0(op_ops[i]->sym->m_src);
> +
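> +	/*
> +	 * Stash each dequeued op pointer at the start of its source mbuf's
> +	 * data buffer, so the op can be recovered once the mbuf (tagged
> +	 * with a sequence number at enqueue time) is drained from the
> +	 * reorder buffer below.
> +	 */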
> +	for (i = 0; i < nb_deq_ops - 8; i += 4) {
> +		mbuf0 = op_ops[i]->sym->m_src;
> +		mbuf1 = op_ops[i + 1]->sym->m_src;
> +		mbuf2 = op_ops[i + 2]->sym->m_src;
> +		mbuf3 = op_ops[i + 3]->sym->m_src;
> +
> +		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
> +		rte_memcpy(mbuf1->buf_addr, &op_ops[i+1], sizeof(op_ops[i+1]));
> +		rte_memcpy(mbuf2->buf_addr, &op_ops[i+2], sizeof(op_ops[i+2]));
> +		rte_memcpy(mbuf3->buf_addr, &op_ops[i+3], sizeof(op_ops[i+3]));
> +
> +		rte_reorder_insert(reorder_buff, mbuf0);
> +		rte_reorder_insert(reorder_buff, mbuf1);
> +		rte_reorder_insert(reorder_buff, mbuf2);
> +		rte_reorder_insert(reorder_buff, mbuf3);
> +
> +		rte_prefetch0(op_ops[i + 4]->sym->m_src);
> +		rte_prefetch0(op_ops[i + 5]->sym->m_src);
> +		rte_prefetch0(op_ops[i + 6]->sym->m_src);
> +		rte_prefetch0(op_ops[i + 7]->sym->m_src);
> +	}
> +
> +	for (; i < nb_deq_ops; i++) {
> +		mbuf0 = op_ops[i]->sym->m_src;
> +		rte_memcpy(mbuf0->buf_addr, &op_ops[i], sizeof(op_ops[i]));
> +		rte_reorder_insert(reorder_buff, mbuf0);
> +	}
> +
> +	nb_drained_mbufs = rte_reorder_drain(reorder_buff, reorder_mbufs,
> +			nb_ops);
> +	for (i = 0; i < nb_drained_mbufs && i < 4; i++)
> +		rte_prefetch0(reorder_mbufs[i]);
> +
> +	for (i = 0; i < nb_drained_mbufs - 8; i += 4) {
> +		ops[i] = *(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr;
> +		ops[i + 1] = *(struct rte_crypto_op **)
> +			reorder_mbufs[i + 1]->buf_addr;
> +		ops[i + 2] = *(struct rte_crypto_op **)
> +			reorder_mbufs[i + 2]->buf_addr;
> +		ops[i + 3] = *(struct rte_crypto_op **)
> +			reorder_mbufs[i + 3]->buf_addr;
> +
> +		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
> +		*(struct rte_crypto_op **)reorder_mbufs[i + 1]->buf_addr = NULL;
> +		*(struct rte_crypto_op **)reorder_mbufs[i + 2]->buf_addr = NULL;
> +		*(struct rte_crypto_op **)reorder_mbufs[i + 3]->buf_addr = NULL;
> +
> +		rte_prefetch0(reorder_mbufs[i + 4]);
> +		rte_prefetch0(reorder_mbufs[i + 5]);
> +		rte_prefetch0(reorder_mbufs[i + 6]);
> +		rte_prefetch0(reorder_mbufs[i + 7]);
> +	}
> +
> +	for (; i < nb_drained_mbufs; i++) {
> +		ops[i] = *(struct rte_crypto_op **)
> +			reorder_mbufs[i]->buf_addr;
> +		*(struct rte_crypto_op **)reorder_mbufs[i]->buf_addr = NULL;
> +	}
> +
> +	return nb_drained_mbufs;
> +}
> +
> +static int
> +slave_attach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t slave_id)
> +{
> +	return 0;
> +}
> +
> +static int
> +slave_detach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t slave_id)
> +{
> +	return 0;
> +}
> +
> +static int
> +scheduler_start(struct rte_cryptodev *dev)
> +{
> +	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +
> +	uint16_t i;
> +
> +	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
> +		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
> +		struct rr_scheduler_qp_ctx *rr_qp_ctx =
> +				qp_ctx->private_qp_ctx;
> +		uint32_t j;
> +
> +		memset(rr_qp_ctx->slaves, 0, MAX_SLAVES_NUM *
> +				sizeof(struct scheduler_slave));
> +		for (j = 0; j < sched_ctx->nb_slaves; j++) {
> +			rr_qp_ctx->slaves[j].dev_id =
> +					sched_ctx->slaves[j].dev_id;
> +			rr_qp_ctx->slaves[j].qp_id = i;
> +		}
> +
> +		rr_qp_ctx->nb_slaves = sched_ctx->nb_slaves;
> +
> +		rr_qp_ctx->last_enq_slave_idx = 0;
> +		rr_qp_ctx->last_deq_slave_idx = 0;
> +
> +		if (sched_ctx->reordering_enabled) {
> +			qp_ctx->schedule_enqueue = &schedule_enqueue_ordering;
> +			qp_ctx->schedule_dequeue = &schedule_dequeue_ordering;
> +		} else {
> +			qp_ctx->schedule_enqueue = &schedule_enqueue;
> +			qp_ctx->schedule_dequeue = &schedule_dequeue;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +scheduler_stop(__rte_unused struct rte_cryptodev *dev)
> +{
> +	return 0;
> +}
> +
> +static int
> +scheduler_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)
> +{
> +	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
> +	struct rr_scheduler_qp_ctx *rr_qp_ctx;
> +
> +	rr_qp_ctx = rte_zmalloc_socket(NULL, sizeof(*rr_qp_ctx), 0,
> +			rte_socket_id());
> +	if (!rr_qp_ctx) {
> +		CS_LOG_ERR("failed allocate memory for private queue pair");
> +		return -ENOMEM;
> +	}
> +
> +	qp_ctx->private_qp_ctx = (void *)rr_qp_ctx;
> +
> +	return 0;
> +}
> +
> +static int
> +scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)
> +{
> +	return 0;
> +}
> +
> +static struct rte_cryptodev_scheduler_ops scheduler_rr_ops = {
> +	slave_attach,
> +	slave_detach,
> +	scheduler_start,
> +	scheduler_stop,
> +	scheduler_config_qp,
> +	scheduler_create_private_ctx
> +};
> +
> +static struct rte_cryptodev_scheduler scheduler = {
> +		.name = "roundrobin-scheduler",
> +		.description = "scheduler which will round robin bursts across "
> +				"slave crypto devices",
> +		.ops = &scheduler_rr_ops
> +};
> +
> +struct rte_cryptodev_scheduler *roundrobin_scheduler = &scheduler;
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index f4e66e6..379b8e5 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -66,6 +66,7 @@ extern "C" {
>  /**< KASUMI PMD device name */
>  #define CRYPTODEV_NAME_ZUC_PMD		crypto_zuc
>  /**< KASUMI PMD device name */
> +#define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
>
>  /** Crypto device type */
>  enum rte_cryptodev_type {
> @@ -77,6 +78,9 @@ enum rte_cryptodev_type {
>  	RTE_CRYPTODEV_KASUMI_PMD,	/**< KASUMI PMD */
>  	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
>  	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
> +	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
> +
> +	RTE_CRYPTODEV_TYPE_COUNT
>  };
>
>  extern const char **rte_cyptodev_names;
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index f75f0e2..ee34688 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -70,7 +70,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT)           += -lrte_port
>
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PDUMP)          += -lrte_pdump
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)    += -lrte_distributor
> -_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_METER)          += -lrte_meter
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED)          += -lrte_sched
> @@ -98,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE)        += -lrte_cmdline
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
>
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lrte_pmd_xenvirt -lxenstore
> @@ -145,6 +145,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -lrte_pmd_kasumi
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -L$(LIBSSO_KASUMI_PATH)/build -lsso_kasumi
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER)  += -lrte_pmd_crypto_scheduler
>  endif # CONFIG_RTE_LIBRTE_CRYPTODEV
>
>  endif # !CONFIG_RTE_BUILD_SHARED_LIBS
>

Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v5] crypto/scheduler: add driver for scheduler crypto pmd
  2017-01-17 14:09       ` Declan Doherty
@ 2017-01-17 20:21         ` Thomas Monjalon
  0 siblings, 0 replies; 42+ messages in thread
From: Thomas Monjalon @ 2017-01-17 20:21 UTC (permalink / raw)
  To: Declan Doherty; +Cc: dev, Fan Zhang, pablo.de.lara.guarch

2017-01-17 14:09, Declan Doherty:
> On 17/01/17 13:19, Fan Zhang wrote:
> > This patch provides the initial implementation of the scheduler poll mode
> > driver using DPDK cryptodev framework.
> >
> > Scheduler PMD is used to schedule and enqueue the crypto ops to the
> > hardware and/or software crypto devices attached to it (slaves). The
> > dequeue operation from the slave(s), and the possible dequeued crypto op
> > reordering, are then carried out by the scheduler.
> >
> > As the initial version, the scheduler PMD currently supports only the
> > Round-robin mode, which distributes the enqueued burst of crypto ops
> > among its slaves in a round-robin manner. This mode may help to fill
> > the throughput gap between the physical core and the existing cryptodevs
> > to increase the overall performance. Moreover, the scheduler PMD
> > provides APIs for users to create their own schedulers.
> >
> > Build instructions:
> > To build DPDK with CRYPTO_SCHEDULER_PMD the user is required to set
> > CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base
> >
> > Notice:
> > - Scheduler PMD shares same EAL commandline options as other cryptodevs.
> >   However, apart from socket_id, the rest of cryptodev options are
> >   ignored. The scheduler PMD's max_nb_queue_pairs and max_nb_sessions
> >   options are set as the minimum values of the attached slaves'. For
> >   example, if a scheduler cryptodev has 2 slaves attached, with
> >   max_nb_queue_pairs of 2 and 8 respectively, the scheduler cryptodev's
> >   max_nb_queue_pairs will be automatically updated to 2.
> >
> > - In addition, an extra option "slave" is added. The user can attach one
> >   or more slave cryptodevs initially by passing their names with this
> >   option. Here is an example:
> >
> >   ... --vdev "crypto_aesni_mb_pmd,name=aesni_mb_1" --vdev "crypto_aesni_
> >   mb_pmd,name=aesni_mb_2" --vdev "crypto_scheduler_pmd,slave=aesni_mb_1,
> >   slave=aesni_mb_2" ...
> >
> >   Remember the software cryptodevs to be attached shall be declared before
> >   the scheduler PMD, otherwise the scheduler will fail to locate the
> >   slave(s) and report an error.
> >
> > - The scheduler cryptodev cannot be started unless the scheduling mode
> >   is set and at least one slave is attached. Also, to reconfigure the
> >   scheduler at run time, e.g. to attach/detach slave(s), change the
> >   scheduling mode, or enable/disable crypto op ordering, one should stop
> >   the scheduler first; otherwise an error will be returned.
> >
> > Changes in v5:
> > Fixed EOF whitespace warning.
> > Updated Copyright.
> >
> > Changes in v4:
> > Fixed a few bugs.
> > Added slave EAL commandline option support.
> >
> > Changes in v3:
> > Fixed config/common_base.
> >
> > Changes in v2:
> > New approaches in API to suit future scheduling modes.
> >
> > Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> > Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> > ---
> >  config/common_base                                 |   6 +
> >  drivers/crypto/Makefile                            |   1 +
> >  drivers/crypto/scheduler/Makefile                  |  66 +++
> >  drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 460 +++++++++++++++++++
> >  drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 167 +++++++
> >  .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++
> >  .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 +
> >  drivers/crypto/scheduler/scheduler_pmd.c           | 360 +++++++++++++++
> >  drivers/crypto/scheduler/scheduler_pmd_ops.c       | 489 +++++++++++++++++++++
> >  drivers/crypto/scheduler/scheduler_pmd_private.h   | 115 +++++
> >  drivers/crypto/scheduler/scheduler_roundrobin.c    | 417 ++++++++++++++++++
> >  lib/librte_cryptodev/rte_cryptodev.h               |   4 +
> >  mk/rte.app.mk                                      |   3 +-
> >  13 files changed, 2170 insertions(+), 1 deletion(-)
> >  create mode 100644 drivers/crypto/scheduler/Makefile
> >  create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> >  create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
> >  create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
> >  create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
> >  create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c
> >  create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c
> >  create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h
> >  create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c
[...]
> 
> Acked-by: Declan Doherty <declan.doherty@intel.com>

NACK
I could argue it is too big for a single patch,
but it's even worse when you ack without stripping the long patch.
My mouse is out of order after this long scroll looking for a comment.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 00/11] crypto/scheduler: add driver for scheduler crypto pmd
  2017-01-17 13:19     ` [dpdk-dev] [PATCH v5] crypto/scheduler: " Fan Zhang
  2017-01-17 14:09       ` Declan Doherty
@ 2017-01-24 16:06       ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 01/11] cryptodev: add scheduler PMD name and type Fan Zhang
                           ` (11 more replies)
  1 sibling, 12 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

This patch provides the initial implementation of the scheduler poll mode
driver using DPDK cryptodev framework.

Scheduler PMD is used to schedule and enqueue the crypto ops to the
hardware and/or software crypto devices attached to it (slaves). The
dequeue operation from the slave(s), and the possible dequeued crypto op
reordering, are then carried out by the scheduler.

As the initial version, the scheduler PMD currently supports only the
Round-robin mode, which distributes the enqueued burst of crypto ops
among its slaves in a round-robin manner. This mode may help to fill
the throughput gap between the physical core and the existing cryptodevs
to increase the overall performance. Moreover, the scheduler PMD
provides APIs for users to create their own schedulers.

Build instructions:
To build DPDK with CRYPTO_SCHEDULER_PMD the user is required to set
CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base

Notice:
- Scheduler PMD shares same EAL commandline options as other cryptodevs.
  However, apart from socket_id, the rest of cryptodev options are
  ignored. The scheduler PMD's max_nb_queue_pairs and max_nb_sessions
  options are set as the minimum values of the attached slaves'. For
  example, if a scheduler cryptodev has 2 slaves attached, with
  max_nb_queue_pairs of 2 and 8 respectively, the scheduler cryptodev's
  max_nb_queue_pairs will be automatically updated to 2.

- In addition, an extra option "slave" is added. The user can attach one
  or more slave cryptodevs initially by passing their names with this
  option. Here is an example:

  ... --vdev "crypto_aesni_mb_pmd,name=aesni_mb_1" --vdev "crypto_aesni_
  mb_pmd,name=aesni_mb_2" --vdev "crypto_scheduler_pmd,slave=aesni_mb_1,
  slave=aesni_mb_2" ...

  Remember the software cryptodevs to be attached shall be declared before
  the scheduler PMD, otherwise the scheduler will fail to locate the
  slave(s) and report an error.

- The scheduler cryptodev cannot be started unless the scheduling mode
  is set and at least one slave is attached. Also, to reconfigure the
  scheduler at run time, e.g. to attach/detach slave(s), change the
  scheduling mode, or enable/disable crypto op ordering, one should stop
  the scheduler first; otherwise an error will be returned.

- Enabling crypto op reordering will cause the userdata field of each
  mbuf to be overwritten.
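
  As an illustration only (not part of the patch set itself): a minimal
  application-side sketch, assuming the scheduler and one slave were
  created with the hypothetical vdev names "sched0" and "aesni_mb_1",
  could look like this (queue pair setup omitted for brevity):

	int sched_id = rte_cryptodev_get_dev_id("sched0");
	int slave_id = rte_cryptodev_get_dev_id("aesni_mb_1");
	struct rte_cryptodev_config conf = {
		.socket_id = rte_socket_id(),
		.nb_queue_pairs = 1,
		.session_mp = { .nb_objs = 2048, .cache_size = 64 },
	};

	if (sched_id < 0 || slave_id < 0)
		rte_exit(EXIT_FAILURE, "cannot locate cryptodevs\n");

	/* slaves may also be attached at run time, while the scheduler
	 * is stopped */
	if (rte_cryptodev_scheduler_slave_attach(sched_id, slave_id) < 0)
		rte_exit(EXIT_FAILURE, "cannot attach slave\n");

	/* a scheduling mode must be selected (via the mode API added
	 * later in this series) before the device can be started */
	if (rte_cryptodev_configure(sched_id, &conf) < 0 ||
			rte_cryptodev_start(sched_id) < 0)
		rte_exit(EXIT_FAILURE, "cannot start scheduler\n");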

Changes in v6:
Split into multiple patches.
Added documentation.
Added unit test.

Changes in v5:
Fixed EOF whitespace warning.
Updated Copyright.

Changes in v4:
Fixed a few bugs.
Added slave EAL commandline option support.

Changes in v3:
Fixed config/common_base.

Changes in v2:
New approaches in API to suit future scheduling modes.

Fan Zhang (11):
  cryptodev: add scheduler PMD name and type
  crypto/scheduler: add APIs for scheduler
  crypto/scheduler: add internal structure declarations
  crypto/scheduler: add scheduler API implementations
  crypto/scheduler: add round-robin scheduling mode
  crypto/scheduler: register scheduler vdev driver
  crypto/scheduler: register operation function pointer table
  crypto/scheduler: add scheduler PMD to DPDK compile system
  crypto/scheduler: add scheduler PMD config options
  app/test: add unit test for cryptodev scheduler PMD
  crypto/scheduler: add documentation

 app/test/test_cryptodev.c                          | 241 +++++++++-
 app/test/test_cryptodev_aes_test_vectors.h         | 101 +++--
 app/test/test_cryptodev_blockcipher.c              |   6 +-
 app/test/test_cryptodev_blockcipher.h              |   3 +-
 app/test/test_cryptodev_hash_test_vectors.h        |  38 +-
 config/common_base                                 |   8 +-
 doc/guides/cryptodevs/img/scheduler-overview.svg   | 277 ++++++++++++
 doc/guides/cryptodevs/index.rst                    |   3 +-
 doc/guides/cryptodevs/scheduler.rst                | 128 ++++++
 drivers/crypto/Makefile                            |   3 +-
 drivers/crypto/scheduler/Makefile                  |  66 +++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 471 ++++++++++++++++++++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 165 +++++++
 .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++
 .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 +
 drivers/crypto/scheduler/scheduler_pmd.c           | 361 +++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_ops.c       | 490 +++++++++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_private.h   | 115 +++++
 drivers/crypto/scheduler/scheduler_roundrobin.c    | 435 ++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |   3 +
 mk/rte.app.mk                                      |   6 +-
 21 files changed, 2948 insertions(+), 55 deletions(-)
 create mode 100644 doc/guides/cryptodevs/img/scheduler-overview.svg
 create mode 100644 doc/guides/cryptodevs/scheduler.rst
 create mode 100644 drivers/crypto/scheduler/Makefile
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
 create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h
 create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c

-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 01/11] cryptodev: add scheduler PMD name and type
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 02/11] crypto/scheduler: add APIs for scheduler Fan Zhang
                           ` (10 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

This patch adds the cryptodev scheduler PMD name and type identifier to
librte_cryptodev.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index f284668..618f302 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -68,6 +68,8 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ARMV8_PMD	crypto_armv8
 /**< ARMv8 Crypto PMD device name */
+#define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
+/**< Scheduler Crypto PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -80,6 +82,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
+	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 02/11] crypto/scheduler: add APIs for scheduler
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 01/11] cryptodev: add scheduler PMD name and type Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 03/11] crypto/scheduler: add internal structure declarations Fan Zhang
                           ` (9 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds APIs and function prototypes for the scheduler PMD to perform extra
operations beyond the standard cryptodev APIs.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 162 +++++++++++++++++++++
 .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++++++++
 .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 ++
 3 files changed, 245 insertions(+)
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
 create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map

diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
new file mode 100644
index 0000000..b18fc48
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
@@ -0,0 +1,162 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_H
+#define _RTE_CRYPTO_SCHEDULER_H
+
+#include <rte_cryptodev_scheduler_operations.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Crypto scheduler PMD operation modes
+ */
+enum rte_cryptodev_scheduler_mode {
+	CDEV_SCHED_MODE_NOT_SET = 0,
+	CDEV_SCHED_MODE_USERDEFINED,
+
+	CDEV_SCHED_MODE_COUNT /* number of modes */
+};
+
+#define RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN	(64)
+#define RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN	(256)
+
+struct rte_cryptodev_scheduler;
+
+/**
+ * Load a user defined scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *		scheduler	Pointer to the user defined scheduler
+ *
+ * @return
+ *	0 if loading was successful, negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler);
+
+/**
+ * Attach a pre-configured crypto device to the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *		slave_id	crypto device ID to be attached
+ *
+ * @return
+ *	0 if attaching was successful, negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Detach an attached crypto device from the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *		slave_id	crypto device ID to be detached
+ *
+ * @return
+ *	0 if detaching was successful, negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Set the scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *		mode		The scheduling mode
+ *
+ * @return
+ *	0 if the mode was set successfully, negative integer otherwise.
+ */
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode);
+
+/**
+ * Get the current scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	The current scheduling mode
+ */
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id);
+
+/**
+ * Set the crypto ops reordering feature on/off
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *		enable_reorder	set the crypto op reordering feature
+ *				0: disable reordering
+ *				1: enable reordering
+ *
+ * @return
+ *	0 if setting was successful, negative integer otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder);
+
+/**
+ * Get the current crypto ops reordering feature
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	0 if reordering is disabled
+ *	1 if reordering is enabled
+ *	negative integer on error.
+ */
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_enqueue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_dequeue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+struct rte_cryptodev_scheduler {
+	const char *name;
+	const char *description;
+	enum rte_cryptodev_scheduler_mode mode;
+
+	struct rte_cryptodev_scheduler_ops *ops;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_H */
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
new file mode 100644
index 0000000..93cf123
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
@@ -0,0 +1,71 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+#define _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+
+#include <rte_cryptodev.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int (*rte_cryptodev_scheduler_slave_attach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+typedef int (*rte_cryptodev_scheduler_slave_detach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+
+typedef int (*rte_cryptodev_scheduler_start_t)(struct rte_cryptodev *dev);
+typedef int (*rte_cryptodev_scheduler_stop_t)(struct rte_cryptodev *dev);
+
+typedef int (*rte_cryptodev_scheduler_config_queue_pair)(
+		struct rte_cryptodev *dev, uint16_t qp_id);
+
+typedef int (*rte_cryptodev_scheduler_create_private_ctx)(
+		struct rte_cryptodev *dev);
+
+struct rte_cryptodev_scheduler_ops {
+	rte_cryptodev_scheduler_slave_attach_t slave_attach;
+	rte_cryptodev_scheduler_slave_detach_t slave_detach;
+
+	rte_cryptodev_scheduler_start_t scheduler_start;
+	rte_cryptodev_scheduler_stop_t scheduler_stop;
+
+	rte_cryptodev_scheduler_config_queue_pair config_queue_pair;
+
+	rte_cryptodev_scheduler_create_private_ctx create_private_ctx;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_OPERATIONS_H */
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
new file mode 100644
index 0000000..a485b43
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -0,0 +1,12 @@
+DPDK_17.02 {
+	global:
+
+	rte_cryptodev_scheduler_load_user_scheduler;
+	rte_cryptodev_scheduler_slave_attach;
+	rte_cryptodev_scheduler_slave_detach;
+	rte_crpytodev_scheduler_mode_set;
+	rte_crpytodev_scheduler_mode_get;
+	rte_cryptodev_scheduler_ordering_set;
+	rte_cryptodev_scheduler_ordering_get;
+
+};
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 03/11] crypto/scheduler: add internal structure declarations
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 01/11] cryptodev: add scheduler PMD name and type Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 02/11] crypto/scheduler: add APIs for scheduler Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 04/11] crypto/scheduler: add scheduler API implementations Fan Zhang
                           ` (8 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds a number of internal structures for the cryptodev scheduler PMD. The
structures include the scheduler context, slave, queue pair context,
and session.
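
To show how these structures cooperate (an illustrative sketch, not code
from this patch): a scheduler-level session keeps one slave-level session
per attached slave, and the scheduling path swaps an op's session pointer
to the chosen slave's session before enqueueing, mirroring the pointer swap
the round-robin mode performs later in this series:

    #include <rte_crypto.h>
    #include "scheduler_pmd_private.h"

    /* hypothetical helper: redirect one op to the session created on the
     * slave selected for this burst */
    static inline void
    redirect_op_session(struct rte_crypto_op *op, uint32_t slave_idx)
    {
        struct scheduler_session *sess = (struct scheduler_session *)
                op->sym->session->_private;

        op->sym->session = sess->sessions[slave_idx];
    }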

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/scheduler/scheduler_pmd_private.h | 115 +++++++++++++++++++++++
 1 file changed, 115 insertions(+)
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h

diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
new file mode 100644
index 0000000..ac4690e
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -0,0 +1,115 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _SCHEDULER_PMD_PRIVATE_H
+#define _SCHEDULER_PMD_PRIVATE_H
+
+#include <rte_hash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+/** Maximum number of slave devices bonded to one scheduler device */
+#ifndef MAX_SLAVES_NUM
+#define MAX_SLAVES_NUM				(8)
+#endif
+
+#define PER_SLAVE_BUFF_SIZE			(256)
+
+#define CS_LOG_ERR(fmt, args...)					\
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",		\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTO_SCHEDULER_DEBUG
+#define CS_LOG_INFO(fmt, args...)					\
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#define CS_LOG_DBG(fmt, args...)					\
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+#else
+#define CS_LOG_INFO(fmt, args...)
+#define CS_LOG_DBG(fmt, args...)
+#endif
+
+struct scheduler_slave {
+	uint8_t dev_id;
+	uint16_t qp_id;
+	uint32_t nb_inflight_cops;
+
+	enum rte_cryptodev_type dev_type;
+};
+
+struct scheduler_ctx {
+	void *private_ctx;
+	/**< private scheduler context pointer */
+
+	struct rte_cryptodev_capabilities *capabilities;
+	uint32_t nb_capabilities;
+
+	uint32_t max_nb_queue_pairs;
+
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	uint32_t nb_slaves;
+
+	enum rte_cryptodev_scheduler_mode mode;
+
+	struct rte_cryptodev_scheduler_ops ops;
+
+	uint8_t reordering_enabled;
+
+	char name[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN];
+} __rte_cache_aligned;
+
+struct scheduler_qp_ctx {
+	void *private_qp_ctx;
+
+	rte_cryptodev_scheduler_burst_enqueue_t schedule_enqueue;
+	rte_cryptodev_scheduler_burst_dequeue_t schedule_dequeue;
+
+	struct rte_reorder_buffer *reorder_buf;
+	uint32_t seqn;
+} __rte_cache_aligned;
+
+struct scheduler_session {
+	struct rte_cryptodev_sym_session *sessions[MAX_SLAVES_NUM];
+};
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
+
+#endif /* _SCHEDULER_PMD_PRIVATE_H */
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 04/11] crypto/scheduler: add scheduler API implementations
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
                           ` (2 preceding siblings ...)
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 03/11] crypto/scheduler: add internal structure declarations Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 05/11] crypto/scheduler: add round-robin scheduling mode Fan Zhang
                           ` (7 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds the implementations of the APIs for the scheduler cryptodev PMD.
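
For context, rte_cryptodev_scheduler_load_user_scheduler() implemented here
is what a user-defined scheduler would call to register itself. A sketch
under stated assumptions (the my_* callbacks are hypothetical and would
follow the typedefs in rte_cryptodev_scheduler_operations.h):

    #include <rte_cryptodev_scheduler.h>

    /* hypothetical ops table; each callback implemented elsewhere */
    static struct rte_cryptodev_scheduler_ops my_sched_ops = {
        .slave_attach = my_slave_attach,
        .slave_detach = my_slave_detach,
        .scheduler_start = my_start,
        .scheduler_stop = my_stop,
        .config_queue_pair = my_config_qp,
        .create_private_ctx = my_create_ctx,
    };

    static struct rte_cryptodev_scheduler my_scheduler = {
        .name = "my-scheduler",
        .description = "example user-defined scheduler",
        .mode = CDEV_SCHED_MODE_USERDEFINED,
        .ops = &my_sched_ops,
    };

    /* to be called before the scheduler device is started */
    static int
    register_my_scheduler(uint8_t scheduler_dev_id)
    {
        return rte_cryptodev_scheduler_load_user_scheduler(
                scheduler_dev_id, &my_scheduler);
    }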

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 464 +++++++++++++++++++++
 1 file changed, 464 insertions(+)
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c

diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
new file mode 100644
index 0000000..ae6f032
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -0,0 +1,464 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_reorder.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_cryptodev_scheduler.h>
+#include <rte_malloc.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Update the scheduler PMD's capabilities with an attaching device's
+ *  capabilities.
+ *  After every attach or detach, the scheduler's capabilities must be
+ *  the common capability set of all its slaves.
+ **/
+static uint32_t
+sync_caps(struct rte_cryptodev_capabilities *caps,
+		uint32_t nb_caps,
+		const struct rte_cryptodev_capabilities *slave_caps)
+{
+	uint32_t sync_nb_caps = nb_caps, nb_slave_caps = 0;
+	uint32_t i;
+
+	while (slave_caps[nb_slave_caps].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
+		nb_slave_caps++;
+
+	if (nb_caps == 0) {
+		rte_memcpy(caps, slave_caps, sizeof(*caps) * nb_slave_caps);
+		return nb_slave_caps;
+	}
+
+	for (i = 0; i < sync_nb_caps; i++) {
+		struct rte_cryptodev_capabilities *cap = &caps[i];
+		uint32_t j;
+
+		for (j = 0; j < nb_slave_caps; j++) {
+			const struct rte_cryptodev_capabilities *s_cap =
+					&slave_caps[j];
+
+			if (s_cap->op != cap->op || s_cap->sym.xform_type !=
+					cap->sym.xform_type)
+				continue;
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_AUTH) {
+				if (s_cap->sym.auth.algo !=
+						cap->sym.auth.algo)
+					continue;
+
+				cap->sym.auth.digest_size.min =
+					s_cap->sym.auth.digest_size.min <
+					cap->sym.auth.digest_size.min ?
+					s_cap->sym.auth.digest_size.min :
+					cap->sym.auth.digest_size.min;
+				cap->sym.auth.digest_size.max =
+					s_cap->sym.auth.digest_size.max <
+					cap->sym.auth.digest_size.max ?
+					s_cap->sym.auth.digest_size.max :
+					cap->sym.auth.digest_size.max;
+
+			}
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				if (s_cap->sym.cipher.algo !=
+						cap->sym.cipher.algo)
+					continue;
+
+			/* no common cap found */
+			break;
+		}
+
+		if (j < nb_slave_caps)
+			continue;
+
+		/* remove an uncommon cap from the array */
+		for (j = i; j < sync_nb_caps - 1; j++)
+			rte_memcpy(&caps[j], &caps[j+1], sizeof(*cap));
+
+		memset(&caps[sync_nb_caps - 1], 0, sizeof(*cap));
+		sync_nb_caps--;
+	}
+
+	return sync_nb_caps;
+}
+
+static int
+update_scheduler_capability(struct scheduler_ctx *sched_ctx)
+{
+	struct rte_cryptodev_capabilities tmp_caps[256] = { {0} };
+	uint32_t nb_caps = 0, i;
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		nb_caps = sync_caps(tmp_caps, nb_caps, dev_info.capabilities);
+		if (nb_caps == 0)
+			return -1;
+	}
+
+	sched_ctx->capabilities = rte_zmalloc_socket(NULL,
+			sizeof(struct rte_cryptodev_capabilities) *
+			(nb_caps + 1), 0, SOCKET_ID_ANY);
+	if (!sched_ctx->capabilities)
+		return -ENOMEM;
+
+	rte_memcpy(sched_ctx->capabilities, tmp_caps,
+			sizeof(struct rte_cryptodev_capabilities) * nb_caps);
+
+	return 0;
+}
+
+static void
+update_scheduler_feature_flag(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	dev->feature_flags = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		dev->feature_flags |= dev_info.feature_flags;
+	}
+}
+
+static void
+update_max_nb_qp(struct scheduler_ctx *sched_ctx)
+{
+	uint32_t i;
+	uint32_t max_nb_qp;
+
+	if (!sched_ctx->nb_slaves)
+		return;
+
+	max_nb_qp = sched_ctx->nb_slaves ? UINT32_MAX : 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+		max_nb_qp = dev_info.max_nb_queue_pairs < max_nb_qp ?
+				dev_info.max_nb_queue_pairs : max_nb_qp;
+	}
+
+	sched_ctx->max_nb_queue_pairs = max_nb_qp;
+}
+
+/** Attach a device to the scheduler. */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	struct scheduler_slave *slave;
+	struct rte_cryptodev_info dev_info;
+	uint32_t i;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+	if (sched_ctx->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves attached");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++)
+		if (sched_ctx->slaves[i].dev_id == slave_id) {
+			CS_LOG_ERR("Slave already added");
+			return -ENOTSUP;
+		}
+
+	slave = &sched_ctx->slaves[sched_ctx->nb_slaves];
+
+	rte_cryptodev_info_get(slave_id, &dev_info);
+
+	slave->dev_id = slave_id;
+	slave->dev_type = dev_info.dev_type;
+	sched_ctx->nb_slaves++;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		slave->dev_id = 0;
+		slave->dev_type = 0;
+		sched_ctx->nb_slaves--;
+
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	uint32_t i, slave_pos;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	for (slave_pos = 0; slave_pos < sched_ctx->nb_slaves; slave_pos++)
+		if (sched_ctx->slaves[slave_pos].dev_id == slave_id)
+			break;
+	if (slave_pos == sched_ctx->nb_slaves) {
+		CS_LOG_ERR("Cannot find slave");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->ops.slave_detach(dev, slave_id) < 0) {
+		CS_LOG_ERR("Failed to detach slave");
+		return -ENOTSUP;
+	}
+
+	for (i = slave_pos; i < sched_ctx->nb_slaves - 1; i++) {
+		memcpy(&sched_ctx->slaves[i], &sched_ctx->slaves[i+1],
+				sizeof(struct scheduler_slave));
+	}
+	memset(&sched_ctx->slaves[sched_ctx->nb_slaves - 1], 0,
+			sizeof(struct scheduler_slave));
+	sched_ctx->nb_slaves--;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (mode == sched_ctx->mode)
+		return 0;
+
+	switch (mode) {
+	default:
+		CS_LOG_ERR("Not yet supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return sched_ctx->mode;
+}
+
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	sched_ctx->reordering_enabled = enable_reorder;
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return (int)sched_ctx->reordering_enabled;
+}
+
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler) {
+
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	strncpy(sched_ctx->name, scheduler->name,
+			RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
+	strncpy(sched_ctx->description, scheduler->description,
+			RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
+
+	/* load scheduler instance operations functions */
+	sched_ctx->ops.config_queue_pair = scheduler->ops->config_queue_pair;
+	sched_ctx->ops.create_private_ctx = scheduler->ops->create_private_ctx;
+	sched_ctx->ops.scheduler_start = scheduler->ops->scheduler_start;
+	sched_ctx->ops.scheduler_stop = scheduler->ops->scheduler_stop;
+	sched_ctx->ops.slave_attach = scheduler->ops->slave_attach;
+	sched_ctx->ops.slave_detach = scheduler->ops->slave_detach;
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	if (sched_ctx->ops.create_private_ctx) {
+		int ret = (*sched_ctx->ops.create_private_ctx)(dev);
+
+		if (ret < 0) {
+			CS_LOG_ERR("Unable to create scheduler private "
+					"context");
+			return ret;
+		}
+	}
+
+	sched_ctx->mode = scheduler->mode;
+
+	return 0;
+}
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 05/11] crypto/scheduler: add round-robin scheduling mode
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
                           ` (3 preceding siblings ...)
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 04/11] crypto/scheduler: add scheduler API implementations Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 06/11] crypto/scheduler: register scheduler vdev driver Fan Zhang
                           ` (6 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Implements the round-robin scheduling mode and registers it in the
cryptodev scheduler ops structure. This mode enqueues each burst of
operations to one of its slaves, then rotates to the next slave for the
following burst. The same procedure applies to dequeueing operations.
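
For reference, an application would select this mode with the mode API from
the earlier patch; a minimal sketch (scheduler_dev_id is assumed to be a
scheduler device with slaves already attached, and the function name keeps
the "crpytodev" spelling of the API as declared):

    #include <rte_cryptodev_scheduler.h>

    static int
    enable_round_robin(uint8_t scheduler_dev_id)
    {
        /* must be called before the device is started */
        return rte_crpytodev_scheduler_mode_set(scheduler_dev_id,
                CDEV_SCHED_MODE_ROUNDROBIN);
    }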

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c |   7 +
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h |   3 +
 drivers/crypto/scheduler/scheduler_roundrobin.c    | 435 +++++++++++++++++++++
 3 files changed, 445 insertions(+)
 create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c

diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
index ae6f032..e0ca029 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -329,6 +329,13 @@ rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
 		return 0;
 
 	switch (mode) {
+	case CDEV_SCHED_MODE_ROUNDROBIN:
+		if (rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
+				roundrobin_scheduler) < 0) {
+			CS_LOG_ERR("Failed to load scheduler");
+			return -1;
+		}
+		break;
 	default:
 		CS_LOG_ERR("Not yet supported");
 		return -ENOTSUP;
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
index b18fc48..7ef44e7 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
@@ -46,6 +46,7 @@ extern "C" {
 enum rte_cryptodev_scheduler_mode {
 	CDEV_SCHED_MODE_NOT_SET = 0,
 	CDEV_SCHED_MODE_USERDEFINED,
+	CDEV_SCHED_MODE_ROUNDROBIN,
 
 	CDEV_SCHED_MODE_COUNT /* number of modes */
 };
@@ -156,6 +157,8 @@ struct rte_cryptodev_scheduler {
 	struct rte_cryptodev_scheduler_ops *ops;
 };
 
+extern struct rte_cryptodev_scheduler *roundrobin_scheduler;
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
new file mode 100644
index 0000000..1f2e709
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -0,0 +1,435 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+
+#include "rte_cryptodev_scheduler_operations.h"
+#include "scheduler_pmd_private.h"
+
+struct rr_scheduler_qp_ctx {
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	uint32_t nb_slaves;
+
+	uint32_t last_enq_slave_idx;
+	uint32_t last_deq_slave_idx;
+};
+
+static uint16_t
+schedule_enqueue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct rte_cryptodev_sym_session *sessions[nb_ops];
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++)
+		rte_prefetch0(ops[i]->sym->session);
+
+	for (i = 0; (i < (nb_ops - 8)) && (nb_ops > 8); i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		sessions[i] = ops[i]->sym->session;
+		sessions[i + 1] = ops[i + 1]->sym->session;
+		sessions[i + 2] = ops[i + 2]->sym->session;
+		sessions[i + 3] = ops[i + 3]->sym->session;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->session);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	rr_qp_ctx->last_enq_slave_idx %= rr_qp_ctx->nb_slaves;
+
+	/* restore the original sessions if the enqueue failed */
+	if (unlikely(processed_ops < nb_ops)) {
+		for (i = processed_ops; i < nb_ops; i++)
+			ops[i]->sym->session = sessions[i];
+	}
+
+	return processed_ops;
+}
+
+static uint16_t
+schedule_enqueue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			gen_qp_ctx->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct rte_cryptodev_sym_session *sessions[nb_ops];
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++) {
+		rte_prefetch0(ops[i]->sym->session);
+		rte_prefetch0(ops[i]->sym->m_src);
+	}
+
+	for (i = 0; (i < (nb_ops - 8)) && (nb_ops > 8); i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		sessions[i] = ops[i]->sym->session;
+		sessions[i + 1] = ops[i + 1]->sym->session;
+		sessions[i + 2] = ops[i + 2]->sym->session;
+		sessions[i + 3] = ops[i + 3]->sym->session;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 1]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 2]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+		ops[i + 3]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 4]->sym->m_src);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->m_src);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->m_src);
+		rte_prefetch0(ops[i + 7]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	rr_qp_ctx->last_enq_slave_idx %= rr_qp_ctx->nb_slaves;
+
+	/* restore the original sessions if the enqueue failed */
+	if (unlikely(processed_ops < nb_ops)) {
+		for (i = processed_ops; i < nb_ops; i++)
+			ops[i]->sym->session = sessions[i];
+	}
+
+	return processed_ops;
+}
+
+
+static uint16_t
+schedule_dequeue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	struct scheduler_slave *slave;
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t nb_deq_ops;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	last_slave_idx += 1;
+	last_slave_idx %= rr_qp_ctx->nb_slaves;
+
+	rr_qp_ctx->last_deq_slave_idx = last_slave_idx;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	return nb_deq_ops;
+}
+
+static uint16_t
+schedule_dequeue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = (struct scheduler_qp_ctx *)qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx = (gen_qp_ctx->private_qp_ctx);
+	struct scheduler_slave *slave;
+	struct rte_reorder_buffer *reorder_buff = gen_qp_ctx->reorder_buf;
+	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	uint16_t nb_deq_ops, nb_drained_mbufs;
+	const uint16_t nb_op_ops = nb_ops;
+	struct rte_crypto_op *op_ops[nb_op_ops];
+	struct rte_mbuf *reorder_mbufs[nb_op_ops];
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t i;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, op_ops, nb_ops);
+
+	rr_qp_ctx->last_deq_slave_idx += 1;
+	rr_qp_ctx->last_deq_slave_idx %= rr_qp_ctx->nb_slaves;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	for (i = 0; i < nb_deq_ops && i < 4; i++)
+		rte_prefetch0(op_ops[i]->sym->m_src);
+
+	for (i = 0; (i < (nb_deq_ops - 8)) && (nb_deq_ops > 8); i += 4) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		mbuf1 = op_ops[i + 1]->sym->m_src;
+		mbuf2 = op_ops[i + 2]->sym->m_src;
+		mbuf3 = op_ops[i + 3]->sym->m_src;
+
+		mbuf0->userdata = op_ops[i];
+		mbuf1->userdata = op_ops[i + 1];
+		mbuf2->userdata = op_ops[i + 2];
+		mbuf3->userdata = op_ops[i + 3];
+
+		rte_reorder_insert(reorder_buff, mbuf0);
+		rte_reorder_insert(reorder_buff, mbuf1);
+		rte_reorder_insert(reorder_buff, mbuf2);
+		rte_reorder_insert(reorder_buff, mbuf3);
+
+		rte_prefetch0(op_ops[i + 4]->sym->m_src);
+		rte_prefetch0(op_ops[i + 5]->sym->m_src);
+		rte_prefetch0(op_ops[i + 6]->sym->m_src);
+		rte_prefetch0(op_ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_deq_ops; i++) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		mbuf0->userdata = op_ops[i];
+		rte_reorder_insert(reorder_buff, mbuf0);
+	}
+
+	nb_drained_mbufs = rte_reorder_drain(reorder_buff, reorder_mbufs,
+			nb_ops);
+	for (i = 0; i < nb_drained_mbufs && i < 4; i++)
+		rte_prefetch0(reorder_mbufs[i]);
+
+	for (i = 0; (i < (nb_drained_mbufs - 8)) && (nb_drained_mbufs > 8);
+			i += 4) {
+		ops[i] = (struct rte_crypto_op *)reorder_mbufs[i]->userdata;
+		ops[i + 1] = (struct rte_crypto_op *)
+			reorder_mbufs[i + 1]->userdata;
+		ops[i + 2] = (struct rte_crypto_op *)
+			reorder_mbufs[i + 2]->userdata;
+		ops[i + 3] = (struct rte_crypto_op *)
+			reorder_mbufs[i + 3]->userdata;
+
+		reorder_mbufs[i]->userdata = NULL;
+		reorder_mbufs[i + 1]->userdata = NULL;
+		reorder_mbufs[i + 2]->userdata = NULL;
+		reorder_mbufs[i + 3]->userdata = NULL;
+
+		rte_prefetch0(reorder_mbufs[i + 4]);
+		rte_prefetch0(reorder_mbufs[i + 5]);
+		rte_prefetch0(reorder_mbufs[i + 6]);
+		rte_prefetch0(reorder_mbufs[i + 7]);
+	}
+
+	for (; i < nb_drained_mbufs; i++) {
+		ops[i] = (struct rte_crypto_op *)
+			reorder_mbufs[i]->userdata;
+		reorder_mbufs[i]->userdata = NULL;
+	}
+
+	return nb_drained_mbufs;
+}
+
+static int
+slave_attach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+slave_detach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+scheduler_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+		struct rr_scheduler_qp_ctx *rr_qp_ctx =
+				qp_ctx->private_qp_ctx;
+		uint32_t j;
+		uint16_t qp_id = rr_qp_ctx->slaves[0].qp_id;
+
+		memset(rr_qp_ctx->slaves, 0, MAX_SLAVES_NUM *
+				sizeof(struct scheduler_slave));
+		for (j = 0; j < sched_ctx->nb_slaves; j++) {
+			rr_qp_ctx->slaves[j].dev_id =
+					sched_ctx->slaves[j].dev_id;
+			rr_qp_ctx->slaves[j].qp_id = qp_id;
+		}
+
+		rr_qp_ctx->nb_slaves = sched_ctx->nb_slaves;
+
+		rr_qp_ctx->last_enq_slave_idx = 0;
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+		if (sched_ctx->reordering_enabled) {
+			qp_ctx->schedule_enqueue = &schedule_enqueue_ordering;
+			qp_ctx->schedule_dequeue = &schedule_dequeue_ordering;
+		} else {
+			qp_ctx->schedule_enqueue = &schedule_enqueue;
+			qp_ctx->schedule_dequeue = &schedule_dequeue;
+		}
+	}
+
+	return 0;
+}
+
+static int
+scheduler_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+static int
+scheduler_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+	struct rr_scheduler_qp_ctx *rr_qp_ctx;
+
+	rr_qp_ctx = rte_zmalloc_socket(NULL, sizeof(*rr_qp_ctx), 0,
+			rte_socket_id());
+	if (!rr_qp_ctx) {
+		CS_LOG_ERR("failed to allocate memory for private queue pair");
+		return -ENOMEM;
+	}
+
+	qp_ctx->private_qp_ctx = (void *)rr_qp_ctx;
+
+	return 0;
+}
+
+static int
+scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+struct rte_cryptodev_scheduler_ops ops = {
+	slave_attach,
+	slave_detach,
+	scheduler_start,
+	scheduler_stop,
+	scheduler_config_qp,
+	scheduler_create_private_ctx
+};
+
+struct rte_cryptodev_scheduler scheduler = {
+		.name = "roundrobin-scheduler",
+		.description = "scheduler which will round robin burst across "
+				"slave crypto devices",
+		.mode = CDEV_SCHED_MODE_ROUNDROBIN,
+		.ops = &ops
+};
+
+struct rte_cryptodev_scheduler *roundrobin_scheduler = &scheduler;
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 06/11] crypto/scheduler: register scheduler vdev driver
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
                           ` (4 preceding siblings ...)
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 05/11] crypto/scheduler: add round-robin scheduling mode Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 07/11] crypto/scheduler: register operation function pointer table Fan Zhang
                           ` (5 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds the crypto scheduler PMD's probe and remove functions and the device's
enqueue and dequeue burst functions, and finally registers the cryptodev
scheduler PMD.
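
Once registered, the scheduler can be instantiated from the EAL command
line together with its slaves; an illustrative fragment (device and slave
names are made up, and the slave vdevs must appear earlier on the command
line so they can be found by name at probe time):

    ... --vdev "crypto_scheduler,slave=my_slave_0,slave=my_slave_1,\
socket_id=0,max_nb_sessions=1024" ...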

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/scheduler/scheduler_pmd.c | 361 +++++++++++++++++++++++++++++++
 1 file changed, 361 insertions(+)
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c

diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
new file mode 100644
index 0000000..62418d0
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -0,0 +1,361 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_vdev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+#include "scheduler_pmd_private.h"
+
+struct scheduler_init_params {
+	struct rte_crypto_vdev_init_params def_p;
+	uint32_t nb_slaves;
+	uint8_t slaves[MAX_SLAVES_NUM];
+};
+
+#define RTE_CRYPTODEV_VDEV_NAME				("name")
+#define RTE_CRYPTODEV_VDEV_SLAVE			("slave")
+#define RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG	("max_nb_queue_pairs")
+#define RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG	("max_nb_sessions")
+#define RTE_CRYPTODEV_VDEV_SOCKET_ID		("socket_id")
+
+const char *scheduler_valid_params[] = {
+	RTE_CRYPTODEV_VDEV_NAME,
+	RTE_CRYPTODEV_VDEV_SLAVE,
+	RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
+	RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
+	RTE_CRYPTODEV_VDEV_SOCKET_ID
+};
+
+static uint16_t
+scheduler_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_enqueue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static uint16_t
+scheduler_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_dequeue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static int
+attach_init_slaves(uint8_t scheduler_id,
+		const uint8_t *slaves, const uint8_t nb_slaves)
+{
+	uint8_t i;
+
+	for (i = 0; i < nb_slaves; i++) {
+		struct rte_cryptodev *dev =
+				rte_cryptodev_pmd_get_dev(slaves[i]);
+		int status = rte_cryptodev_scheduler_slave_attach(
+				scheduler_id, slaves[i]);
+
+		if (status < 0 || !dev) {
+			CS_LOG_ERR("Failed to attach slave cryptodev "
+					"%u.\n", slaves[i]);
+			return status;
+		}
+
+		RTE_LOG(INFO, PMD, "  Attached slave cryptodev %s\n",
+				dev->data->name);
+	}
+
+	return 0;
+}
+
+static int
+cryptodev_scheduler_create(const char *name,
+	struct scheduler_init_params *init_params)
+{
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (init_params->def_p.name[0] == '\0') {
+		int ret = rte_cryptodev_pmd_create_dev_name(
+				init_params->def_p.name,
+				RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+
+		if (ret < 0) {
+			CS_LOG_ERR("failed to create unique name");
+			return ret;
+		}
+	}
+
+	snprintf(crypto_dev_name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s",
+			init_params->def_p.name);
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct scheduler_ctx),
+			init_params->def_p.socket_id);
+	if (dev == NULL) {
+		CS_LOG_ERR("driver %s: failed to create cryptodev vdev",
+			name);
+		return -EFAULT;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+	dev->dev_ops = rte_crypto_scheduler_pmd_ops;
+
+	dev->enqueue_burst = scheduler_enqueue_burst;
+	dev->dequeue_burst = scheduler_dequeue_burst;
+
+	sched_ctx = dev->data->dev_private;
+	sched_ctx->max_nb_queue_pairs =
+			init_params->def_p.max_nb_queue_pairs;
+
+	return attach_init_slaves(dev->data->dev_id, init_params->slaves,
+			init_params->nb_slaves);
+}
+
+static int
+cryptodev_scheduler_remove(const char *name)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	dev = rte_cryptodev_pmd_get_named_dev(name);
+	if (dev == NULL)
+		return -EINVAL;
+
+	sched_ctx = dev->data->dev_private;
+
+	if (sched_ctx->nb_slaves) {
+		uint32_t i;
+
+		for (i = 0; i < sched_ctx->nb_slaves; i++)
+			rte_cryptodev_scheduler_slave_detach(dev->data->dev_id,
+					sched_ctx->slaves[i].dev_id);
+	}
+
+	RTE_LOG(INFO, PMD, "Closing Crypto Scheduler device %s on numa "
+		"socket %u\n", name, rte_socket_id());
+
+	return 0;
+}
+
+static uint8_t
+number_of_sockets(void)
+{
+	int sockets = 0;
+	int i;
+	const struct rte_memseg *ms = rte_eal_get_physmem_layout();
+
+	for (i = 0; ((i < RTE_MAX_MEMSEG) && (ms[i].addr != NULL)); i++) {
+		if (sockets < ms[i].socket_id)
+			sockets = ms[i].socket_id;
+	}
+
+	/* Number of sockets = maximum socket_id + 1 */
+	return ++sockets;
+}
+
+/** Parse integer from integer argument */
+static int
+parse_integer_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	int *i = (int *) extra_args;
+
+	*i = atoi(value);
+	if (*i < 0) {
+		CS_LOG_ERR("Argument has to be non-negative.\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse name */
+static int
+parse_name_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	struct rte_crypto_vdev_init_params *params = extra_args;
+
+	if (strlen(value) >= RTE_CRYPTODEV_NAME_MAX_LEN - 1) {
+		CS_LOG_ERR("Invalid name %s, should be less than "
+				"%u bytes.\n", value,
+				RTE_CRYPTODEV_NAME_MAX_LEN - 1);
+		return -1;
+	}
+
+	strncpy(params->name, value, RTE_CRYPTODEV_NAME_MAX_LEN);
+
+	return 0;
+}
+
+/** Parse slave */
+static int
+parse_slave_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	struct scheduler_init_params *param = extra_args;
+	struct rte_cryptodev *dev =
+			rte_cryptodev_pmd_get_named_dev(value);
+
+	if (!dev) {
+		RTE_LOG(ERR, PMD, "Invalid slave name %s.\n", value);
+		return -1;
+	}
+
+	if (param->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves.\n");
+		return -1;
+	}
+
+	param->slaves[param->nb_slaves] = dev->data->dev_id;
+	param->nb_slaves++;
+
+	return 0;
+}
+
+static int
+scheduler_parse_init_params(struct scheduler_init_params *params,
+		const char *input_args)
+{
+	struct rte_kvargs *kvlist = NULL;
+	int ret = 0;
+
+	if (params == NULL)
+		return -EINVAL;
+
+	if (input_args) {
+		kvlist = rte_kvargs_parse(input_args,
+				scheduler_valid_params);
+		if (kvlist == NULL)
+			return -1;
+
+		ret = rte_kvargs_process(kvlist,
+				RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
+				&parse_integer_arg,
+				&params->def_p.max_nb_queue_pairs);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist,
+				RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
+				&parse_integer_arg,
+				&params->def_p.max_nb_sessions);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SOCKET_ID,
+				&parse_integer_arg,
+				&params->def_p.socket_id);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_NAME,
+				&parse_name_arg,
+				&params->def_p);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SLAVE,
+				&parse_slave_arg, params);
+		if (ret < 0)
+			goto free_kvlist;
+
+		if (params->def_p.socket_id >= number_of_sockets()) {
+			CDEV_LOG_ERR("Invalid socket id specified to create "
+				"the virtual crypto device on");
+			ret = -EINVAL;
+			goto free_kvlist;
+		}
+	}
+
+free_kvlist:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static int
+cryptodev_scheduler_probe(const char *name, const char *input_args)
+{
+	struct scheduler_init_params init_params = {
+		.def_p = {
+			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+			rte_socket_id(),
+			""
+		},
+		.nb_slaves = 0,
+		.slaves = {0}
+	};
+
+	scheduler_parse_init_params(&init_params, input_args);
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.def_p.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.def_p.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.def_p.max_nb_sessions);
+	if (init_params.def_p.name[0] != '\0')
+		RTE_LOG(INFO, PMD, "  User defined name = %s\n",
+			init_params.def_p.name);
+
+	return cryptodev_scheduler_create(name, &init_params);
+}
+
+static struct rte_vdev_driver cryptodev_scheduler_pmd_drv = {
+	.probe = cryptodev_scheduler_probe,
+	.remove = cryptodev_scheduler_remove
+};
+
+RTE_PMD_REGISTER_VDEV(CRYPTODEV_NAME_SCHEDULER_PMD,
+	cryptodev_scheduler_pmd_drv);
+RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
+	"max_nb_queue_pairs=<int> "
+	"max_nb_sessions=<int> "
+	"socket_id=<int> "
+	"slave=<name>");
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 07/11] crypto/scheduler: register operation function pointer table
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
                           ` (5 preceding siblings ...)
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 06/11] crypto/scheduler: register scheduler vdev driver Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 08/11] crypto/scheduler: add scheduler PMD to DPDK compile system Fan Zhang
                           ` (4 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Implements all the standard operations required by cryptodev and registers
them in the cryptodev operation function pointer table.
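
With the ops table in place, the scheduler is driven through the standard
cryptodev API like any other PMD; a minimal sketch, assuming the cryptodev
API of this release (mempool sizes and descriptor count are illustrative):

    #include <rte_cryptodev.h>

    static int
    configure_and_start(uint8_t scheduler_dev_id)
    {
        struct rte_cryptodev_config config = {
            .socket_id = 0,
            .nb_queue_pairs = 1,
            .session_mp = { .nb_objs = 2048, .cache_size = 64 },
        };
        struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };
        int ret;

        /* scheduler_pmd_config() below fans this out to every slave */
        ret = rte_cryptodev_configure(scheduler_dev_id, &config);
        if (ret < 0)
            return ret;

        ret = rte_cryptodev_queue_pair_setup(scheduler_dev_id, 0,
                &qp_conf, config.socket_id);
        if (ret < 0)
            return ret;

        return rte_cryptodev_start(scheduler_dev_id);
    }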

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/scheduler/scheduler_pmd_ops.c | 490 +++++++++++++++++++++++++++
 1 file changed, 490 insertions(+)
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c

diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
new file mode 100644
index 0000000..56624c7
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -0,0 +1,490 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_config.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_reorder.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Configure device */
+static int
+scheduler_pmd_config(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_configure)(slave_dev);
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
+static int
+update_reorder_buff(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (sched_ctx->reordering_enabled) {
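+		/* size the reorder buffer in proportion to the slave count */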
+		char reorder_buff_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+		uint32_t buff_size = sched_ctx->nb_slaves * PER_SLAVE_BUFF_SIZE;
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (!buff_size)
+			return 0;
+
+		if (snprintf(reorder_buff_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"%s_rb_%u_%u", RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
+			dev->data->dev_id, qp_id) < 0) {
+			CS_LOG_ERR("failed to create unique reorder buffer "
+					"name");
+			return -ENOMEM;
+		}
+
+		qp_ctx->reorder_buf = rte_reorder_create(reorder_buff_name,
+				rte_socket_id(), buff_size);
+		if (!qp_ctx->reorder_buf) {
+			CS_LOG_ERR("failed to create reorder buffer");
+			return -ENOMEM;
+		}
+	} else {
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+	}
+
+	return 0;
+}
+
+/** Start device */
+static int
+scheduler_pmd_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	if (dev->data->dev_started)
+		return 0;
+
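+	/* (re)create each queue pair's reorder buffer for the current slaves */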
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = update_reorder_buff(dev, i);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to update reorder buffer");
+			return ret;
+		}
+	}
+
+	if (sched_ctx->mode == CDEV_SCHED_MODE_NOT_SET) {
+		CS_LOG_ERR("Scheduler mode is not set");
+		return -1;
+	}
+
+	if (!sched_ctx->nb_slaves) {
+		CS_LOG_ERR("No slave in the scheduler");
+		return -1;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.slave_attach, -ENOTSUP);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if ((*sched_ctx->ops.slave_attach)(dev, slave_dev_id) < 0) {
+			CS_LOG_ERR("Failed to attach slave");
+			return -ENOTSUP;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.scheduler_start, -ENOTSUP);
+
+	if ((*sched_ctx->ops.scheduler_start)(dev) < 0) {
+		CS_LOG_ERR("Scheduler start failed");
+		return -1;
+	}
+
+	/* start all slaves */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_start)(slave_dev);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to start slave dev %u",
+					slave_dev_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/** Stop device */
+static void
+scheduler_pmd_stop(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	if (!dev->data->dev_started)
+		return;
+
+	/* stop all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->dev_stop)(slave_dev);
+	}
+
+	if (*sched_ctx->ops.scheduler_stop)
+		(*sched_ctx->ops.scheduler_stop)(dev);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if (*sched_ctx->ops.slave_detach)
+			(*sched_ctx->ops.slave_detach)(dev, slave_dev_id);
+	}
+}
+
+/** Close device */
+static int
+scheduler_pmd_close(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	/* the dev should be stopped before being closed */
+	if (dev->data->dev_started)
+		return -EBUSY;
+
+	/* close all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_close)(slave_dev);
+		if (ret < 0)
+			return ret;
+	}
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (qp_ctx->private_qp_ctx) {
+			rte_free(qp_ctx->private_qp_ctx);
+			qp_ctx->private_qp_ctx = NULL;
+		}
+	}
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	return 0;
+}
+
+/** Get device statistics */
+static void
+scheduler_pmd_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+		struct rte_cryptodev_stats slave_stats = {0};
+
+		(*slave_dev->dev_ops->stats_get)(slave_dev, &slave_stats);
+
+		stats->enqueued_count += slave_stats.enqueued_count;
+		stats->dequeued_count += slave_stats.dequeued_count;
+
+		stats->enqueue_err_count += slave_stats.enqueue_err_count;
+		stats->dequeue_err_count += slave_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+scheduler_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->stats_reset)(slave_dev);
+	}
+}
+
+/** Get device info */
+static void
+scheduler_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t max_nb_sessions = sched_ctx->nb_slaves ?
+			UINT32_MAX : RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS;
+	uint32_t i;
+
+	if (!dev_info)
+		return;
+
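+	/* the scheduler reports the smallest session limit among its slaves */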
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev_info slave_info;
+
+		rte_cryptodev_info_get(slave_dev_id, &slave_info);
+		max_nb_sessions = slave_info.sym.max_nb_sessions <
+				max_nb_sessions ?
+				slave_info.sym.max_nb_sessions :
+				max_nb_sessions;
+	}
+
+	dev_info->dev_type = dev->dev_type;
+	dev_info->feature_flags = dev->feature_flags;
+	dev_info->capabilities = sched_ctx->capabilities;
+	dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
+	dev_info->sym.max_nb_sessions = max_nb_sessions;
+}
+
+/** Release queue pair */
+static int
+scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (!qp_ctx)
+		return 0;
+
+	if (qp_ctx->reorder_buf)
+		rte_reorder_free(qp_ctx->reorder_buf);
+	if (qp_ctx->private_qp_ctx)
+		rte_free(qp_ctx->private_qp_ctx);
+
+	rte_free(qp_ctx);
+	dev->data->queue_pairs[qp_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	__rte_unused const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx;
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	if (snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"CRYTO_SCHE PMD %u QP %u",
+			dev->data->dev_id, qp_id) < 0) {
+		CS_LOG_ERR("Failed to create unique queue pair name");
+		return -EFAULT;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		scheduler_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp_ctx = rte_zmalloc_socket(name, sizeof(*qp_ctx), RTE_CACHE_LINE_SIZE,
+			socket_id);
+	if (qp_ctx == NULL)
+		return -ENOMEM;
+
+	dev->data->queue_pairs[qp_id] = qp_ctx;
+
+	if (*sched_ctx->ops.config_queue_pair) {
+		if ((*sched_ctx->ops.config_queue_pair)(dev, qp_id) < 0) {
+			CS_LOG_ERR("Unable to configure queue pair");
+			rte_free(qp_ctx);
+			dev->data->queue_pairs[qp_id] = NULL;
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/** Start queue pair */
+static int
+scheduler_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+scheduler_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+scheduler_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+static uint32_t
+scheduler_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct scheduler_session);
+}
+
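+/** Creates (create != 0) or clears (create == 0) the session on every
+ *  attached slave; a failed creation rolls back the sessions already
+ *  created on the other slaves.
+ */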
+static int
+config_slave_sess(struct scheduler_ctx *sched_ctx,
+		struct rte_crypto_sym_xform *xform,
+		struct scheduler_session *sess,
+		uint32_t create)
+{
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct scheduler_slave *slave = &sched_ctx->slaves[i];
+		struct rte_cryptodev *dev =
+				rte_cryptodev_pmd_get_dev(slave->dev_id);
+
+		if (sess->sessions[i]) {
+			if (create)
+				continue;
+			/* !create */
+			(*dev->dev_ops->session_clear)(dev,
+					(void *)sess->sessions[i]);
+			sess->sessions[i] = NULL;
+		} else {
+			if (!create)
+				continue;
+			/* create */
+			sess->sessions[i] =
+					rte_cryptodev_sym_session_create(
+							slave->dev_id, xform);
+			if (!sess->sessions[i]) {
+				config_slave_sess(sched_ctx, NULL, sess, 0);
+				return -1;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+scheduler_pmd_session_clear(struct rte_cryptodev *dev,
+	void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	config_slave_sess(sched_ctx, NULL, sess, 0);
+
+	memset(sess, 0, sizeof(struct scheduler_session));
+}
+
+static void *
+scheduler_pmd_session_configure(struct rte_cryptodev *dev,
+	struct rte_crypto_sym_xform *xform, void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	if (config_slave_sess(sched_ctx, xform, sess, 1) < 0) {
+		CS_LOG_ERR("unabled to config sym session");
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_ops scheduler_pmd_ops = {
+		.dev_configure		= scheduler_pmd_config,
+		.dev_start		= scheduler_pmd_start,
+		.dev_stop		= scheduler_pmd_stop,
+		.dev_close		= scheduler_pmd_close,
+
+		.stats_get		= scheduler_pmd_stats_get,
+		.stats_reset		= scheduler_pmd_stats_reset,
+
+		.dev_infos_get		= scheduler_pmd_info_get,
+
+		.queue_pair_setup	= scheduler_pmd_qp_setup,
+		.queue_pair_release	= scheduler_pmd_qp_release,
+		.queue_pair_start	= scheduler_pmd_qp_start,
+		.queue_pair_stop	= scheduler_pmd_qp_stop,
+		.queue_pair_count	= scheduler_pmd_qp_count,
+
+		.session_get_size	= scheduler_pmd_session_get_size,
+		.session_configure	= scheduler_pmd_session_configure,
+		.session_clear		= scheduler_pmd_session_clear,
+};
+
+struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 08/11] crypto/scheduler: add scheduler PMD to DPDK compile system
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
                           ` (6 preceding siblings ...)
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 07/11] crypto/scheduler: register operation function pointer table Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 09/11] crypto/scheduler: add scheduler PMD config options Fan Zhang
                           ` (3 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds a Makefile for the scheduler cryptodev PMD, and updates the
existing Makefiles. Unlike other cryptodev PMDs, the scheduler PMD
is required to be built as a shared library.
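
The PMD is compiled only when CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER is
enabled; that option is introduced in config/common_base by a later
patch in this series.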

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/Makefile           |  3 +-
 drivers/crypto/scheduler/Makefile | 66 +++++++++++++++++++++++++++++++++++++++
 mk/rte.app.mk                     |  6 +++-
 3 files changed, 73 insertions(+), 2 deletions(-)
 create mode 100644 drivers/crypto/scheduler/Makefile

diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 77b02cf..a5a246b 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += armv8
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_OPENSSL) += openssl
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
diff --git a/drivers/crypto/scheduler/Makefile b/drivers/crypto/scheduler/Makefile
new file mode 100644
index 0000000..0cce6f2
--- /dev/null
+++ b/drivers/crypto/scheduler/Makefile
@@ -0,0 +1,66 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_crypto_scheduler.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_crypto_scheduler_version.map
+
+#
+# Export include files
+#
+SYMLINK-y-include += rte_cryptodev_scheduler_operations.h
+SYMLINK-y-include += rte_cryptodev_scheduler.h
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += rte_cryptodev_scheduler.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_roundrobin.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_kvargs
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_reorder
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index a5daa84..0d0a970 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -70,7 +70,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT)           += -lrte_port
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PDUMP)          += -lrte_pdump
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)    += -lrte_distributor
-_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_METER)          += -lrte_meter
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED)          += -lrte_sched
@@ -99,10 +98,15 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE)        += -lrte_cmdline
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
+_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lrte_pmd_xenvirt -lxenstore
 
+ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += -lrte_pmd_crypto_scheduler
+endif
+
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 09/11] crypto/scheduler: add scheduler PMD config options
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
                           ` (7 preceding siblings ...)
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 08/11] crypto/scheduler: add scheduler PMD to DPDK compile system Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 10/11] app/test: add unit test for cryptodev scheduler PMD Fan Zhang
                           ` (2 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds scheduler PMD enable and debug flags to config/common_base.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 config/common_base | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/config/common_base b/config/common_base
index b9fb8e2..cd4a0f3 100644
--- a/config/common_base
+++ b/config/common_base
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -434,6 +434,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC=n
 CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
 
 #
+# Compile PMD for Crypto Scheduler device
+#
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=n
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
+
+#
 # Compile PMD for NULL Crypto device
 #
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 10/11] app/test: add unit test for cryptodev scheduler PMD
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
                           ` (8 preceding siblings ...)
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 09/11] crypto/scheduler: add scheduler PMD config options Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 11/11] crypto/scheduler: add documentation Fan Zhang
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

As with other cryptodev PMDs, the scheduler PMD needs unit test
coverage. The test attaches two AESNI-MB cryptodev PMDs as slaves,
sets the scheduling mode to round-robin, and runs almost all AESNI-MB
test items (except the sessionless tests). Finally, the slaves are
detached.
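
Once DPDK is built with both the scheduler and AESNI-MB PMDs enabled,
the suite can be run from the test application prompt via the
cryptodev_scheduler_autotest command registered at the end of this
patch.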

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 app/test/test_cryptodev.c                   | 241 +++++++++++++++++++++++++++-
 app/test/test_cryptodev_aes_test_vectors.h  | 101 ++++++++----
 app/test/test_cryptodev_blockcipher.c       |   6 +-
 app/test/test_cryptodev_blockcipher.h       |   3 +-
 app/test/test_cryptodev_hash_test_vectors.h |  38 +++--
 5 files changed, 338 insertions(+), 51 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 0f0cf4d..bf44928 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2015-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -40,6 +40,11 @@
 #include <rte_cryptodev.h>
 #include <rte_cryptodev_pmd.h>
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+#include <rte_cryptodev_scheduler.h>
+#include <rte_cryptodev_scheduler_operations.h>
+#endif
+
 #include "test.h"
 #include "test_cryptodev.h"
 
@@ -159,7 +164,7 @@ testsuite_setup(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct rte_cryptodev_info info;
-	unsigned i, nb_devs, dev_id;
+	uint32_t i = 0, nb_devs, dev_id;
 	int ret;
 	uint16_t qp_id;
 
@@ -370,6 +375,29 @@ testsuite_setup(void)
 		}
 	}
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_SCHEDULER_PMD) {
+
+#ifndef RTE_LIBRTE_PMD_AESNI_MB
+		RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
+			" enabled in config file to run this testsuite.\n");
+		return TEST_FAILED;
+#endif
+		nb_devs = rte_cryptodev_count_devtype(
+				RTE_CRYPTODEV_SCHEDULER_PMD);
+		if (nb_devs < 1) {
+			ret = rte_eal_vdev_init(
+				RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
+				NULL);
+
+			TEST_ASSERT(ret == 0,
+				"Failed to create instance %u of"
+				" pmd : %s",
+				i, RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+		}
+	}
+#endif /* RTE_LIBRTE_PMD_CRYPTO_SCHEDULER */
+
 #ifndef RTE_LIBRTE_PMD_QAT
 	if (gbl_cryptodev_type == RTE_CRYPTODEV_QAT_SYM_PMD) {
 		RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_QAT must be enabled "
@@ -1535,6 +1563,58 @@ test_AES_chain_mb_all(void)
 	return TEST_SUCCESS;
 }
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+
+static int
+test_AES_cipheronly_scheduler_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_SCHEDULER_PMD,
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_chain_scheduler_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_SCHEDULER_PMD,
+		BLKCIPHER_AES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_authonly_scheduler_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_SCHEDULER_PMD,
+		BLKCIPHER_AUTHONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+#endif /* RTE_LIBRTE_PMD_CRYPTO_SCHEDULER */
+
 static int
 test_AES_chain_openssl_all(void)
 {
@@ -7292,6 +7372,150 @@ auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt(void)
 			&aes128cbc_hmac_sha1_test_vector);
 }
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+
+/* global AESNI slave IDs for the scheduler test */
+uint8_t aesni_ids[2];
+
+static int
+test_scheduler_attach_slave_op(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint8_t sched_id = ts_params->valid_devs[0];
+	uint32_t nb_devs, qp_id, i, nb_devs_attached = 0;
+	int ret;
+	struct rte_cryptodev_config config = {
+			.nb_queue_pairs = 8,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 256
+			}
+	};
+	struct rte_cryptodev_qp_conf qp_conf = {2048};
+
+	/* create 2 AESNI_MB if necessary */
+	nb_devs = rte_cryptodev_count_devtype(
+			RTE_CRYPTODEV_AESNI_MB_PMD);
+	if (nb_devs < 2) {
+		for (i = nb_devs; i < 2; i++) {
+			ret = rte_eal_vdev_init(
+				RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL);
+
+			TEST_ASSERT(ret == 0,
+				"Failed to create instance %u of"
+				" pmd : %s",
+				i, RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+		}
+	}
+
+	/* attach 2 AESNI_MB cdevs */
+	for (i = 0; i < rte_cryptodev_count() && nb_devs_attached < 2;
+			i++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type != RTE_CRYPTODEV_AESNI_MB_PMD)
+			continue;
+
+		ret = rte_cryptodev_configure(i, &config);
+		TEST_ASSERT(ret == 0,
+			"Failed to configure device %u of pmd : %s", i,
+			RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+
+		for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				i, qp_id, &qp_conf,
+				rte_cryptodev_socket_id(i)),
+				"Failed to setup queue pair %u on "
+				"cryptodev %u", qp_id, i);
+		}
+
+		ret = rte_cryptodev_scheduler_slave_attach(sched_id,
+				(uint8_t)i);
+
+		TEST_ASSERT(ret == 0,
+			"Failed to attach device %u of pmd : %s", i,
+			RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+
+		aesni_ids[nb_devs_attached] = (uint8_t)i;
+
+		nb_devs_attached++;
+	}
+
+	return 0;
+}
+
+static int
+test_scheduler_detach_slave_op(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint8_t sched_id = ts_params->valid_devs[0];
+	uint32_t i;
+	int ret;
+
+	for (i = 0; i < 2; i++) {
+		ret = rte_cryptodev_scheduler_slave_detach(sched_id,
+				aesni_ids[i]);
+		TEST_ASSERT(ret == 0,
+			"Failed to detach device %u", aesni_ids[i]);
+	}
+
+	return 0;
+}
+
+static int
+test_scheduler_mode_op(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint8_t sched_id = ts_params->valid_devs[0];
+	struct rte_cryptodev_scheduler_ops op = {0};
+	struct rte_cryptodev_scheduler dummy_scheduler = {
+		.description = "dummy scheduler to test mode",
+		.name = "dummy scheduler",
+		.mode = CDEV_SCHED_MODE_USERDEFINED,
+		.ops = &op
+	};
+	int ret;
+
+	/* set user defined mode */
+	ret = rte_cryptodev_scheduler_load_user_scheduler(sched_id,
+			&dummy_scheduler);
+	TEST_ASSERT(ret == 0,
+		"Failed to set cdev %u to user defined mode", sched_id);
+
+	/* set round robin mode */
+	ret = rte_crpytodev_scheduler_mode_set(sched_id,
+			CDEV_SCHED_MODE_ROUNDROBIN);
+	TEST_ASSERT(ret == 0,
+		"Failed to set cdev %u to round-robin mode", sched_id);
+	TEST_ASSERT(rte_crpytodev_scheduler_mode_get(sched_id) ==
+			CDEV_SCHED_MODE_ROUNDROBIN, "Scheduling mode "
+					"does not match");
+
+	return 0;
+}
+
+static struct unit_test_suite cryptodev_scheduler_testsuite  = {
+	.suite_name = "Crypto Device Scheduler Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL, test_scheduler_attach_slave_op),
+		TEST_CASE_ST(NULL, NULL, test_scheduler_mode_op),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_chain_scheduler_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_cipheronly_scheduler_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_authonly_scheduler_all),
+		TEST_CASE_ST(NULL, NULL, test_scheduler_detach_slave_op),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+#endif /* RTE_LIBRTE_PMD_CRYPTO_SCHEDULER */
+
 static struct unit_test_suite cryptodev_qat_testsuite  = {
 	.suite_name = "Crypto QAT Unit Test Suite",
 	.setup = testsuite_setup,
@@ -7973,6 +8197,19 @@ test_cryptodev_armv8(void)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+
+static int
+test_cryptodev_scheduler(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+	return unit_test_suite_runner(&cryptodev_scheduler_testsuite);
+}
+
+REGISTER_TEST_COMMAND(cryptodev_scheduler_autotest, test_cryptodev_scheduler);
+
+#endif
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h
index f0f37ed..f3fbef1 100644
--- a/app/test/test_cryptodev_aes_test_vectors.h
+++ b/app/test/test_cryptodev_aes_test_vectors.h
@@ -1,7 +1,7 @@
 /*
  *   BSD LICENSE
  *
- *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -924,7 +924,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CTR HMAC-SHA1 Decryption Digest "
@@ -933,21 +934,24 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR XCBC Encryption Digest",
 		.test_data = &aes_test_data_2,
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR XCBC Decryption Digest Verify",
 		.test_data = &aes_test_data_2,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
@@ -957,7 +961,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
 			BLOCKCIPHER_TEST_FEATURE_OOP,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CTR HMAC-SHA1 Encryption Digest",
@@ -965,7 +970,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CTR HMAC-SHA1 Decryption Digest "
@@ -974,7 +980,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest",
@@ -983,7 +990,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8 |
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -1001,7 +1009,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 			BLOCKCIPHER_TEST_FEATURE_OOP,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -1011,7 +1020,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8 |
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -1027,7 +1037,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8 |
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA256 Encryption Digest "
@@ -1044,7 +1055,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8 |
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA256 Decryption Digest "
@@ -1059,7 +1071,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
@@ -1088,7 +1101,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
@@ -1099,21 +1113,24 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 			BLOCKCIPHER_TEST_FEATURE_OOP,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC XCBC Encryption Digest",
 		.test_data = &aes_test_data_7,
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC XCBC Decryption Digest Verify",
 		.test_data = &aes_test_data_7,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -1141,7 +1158,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA224 Decryption Digest "
@@ -1150,7 +1168,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA384 Encryption Digest",
@@ -1158,7 +1177,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA384 Decryption Digest "
@@ -1167,7 +1187,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -1197,7 +1218,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC Decryption",
@@ -1205,7 +1227,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CBC Encryption",
@@ -1213,7 +1236,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CBC Encryption Scater gather",
@@ -1229,7 +1253,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CBC Encryption",
@@ -1237,7 +1262,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CBC Decryption",
@@ -1245,7 +1271,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CTR Encryption",
@@ -1253,7 +1280,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CTR Decryption",
@@ -1261,7 +1289,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR Encryption",
@@ -1269,7 +1298,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR Decryption",
@@ -1277,7 +1307,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CTR Encryption",
@@ -1285,7 +1316,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CTR Decryption",
@@ -1293,7 +1325,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 };
 
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index a48540c..da87368 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2015-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -106,6 +106,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
 		digest_len = tdata->digest.len;
 		break;
 	case RTE_CRYPTODEV_AESNI_MB_PMD:
+	case RTE_CRYPTODEV_SCHEDULER_PMD:
 		digest_len = tdata->digest.truncated_len;
 		break;
 	default:
@@ -649,6 +650,9 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
 	case RTE_CRYPTODEV_ARMV8_PMD:
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8;
 		break;
+	case RTE_CRYPTODEV_SCHEDULER_PMD:
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
+		break;
 	default:
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 		break;
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index 91e9858..053aaa1 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -51,6 +51,7 @@
 #define BLOCKCIPHER_TEST_TARGET_PMD_QAT			0x0002 /* QAT flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL	0x0004 /* SW OPENSSL flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_ARMV8	0x0008 /* ARMv8 flag */
+#define BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER	0x0010 /* Scheduler */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)
diff --git a/app/test/test_cryptodev_hash_test_vectors.h b/app/test/test_cryptodev_hash_test_vectors.h
index a8f9da0..3214f9a 100644
--- a/app/test/test_cryptodev_hash_test_vectors.h
+++ b/app/test/test_cryptodev_hash_test_vectors.h
@@ -1,7 +1,7 @@
 /*
  *   BSD LICENSE
  *
- *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -365,14 +365,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_md5_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-MD5 Digest Verify",
 		.test_data = &hmac_md5_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA1 Digest",
@@ -391,14 +393,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha1_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA1 Digest Verify",
 		.test_data = &hmac_sha1_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA224 Digest",
@@ -417,14 +421,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha224_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA224 Digest Verify",
 		.test_data = &hmac_sha224_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA256 Digest",
@@ -443,14 +449,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha256_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA256 Digest Verify",
 		.test_data = &hmac_sha256_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA384 Digest",
@@ -469,14 +477,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha384_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA384 Digest Verify",
 		.test_data = &hmac_sha384_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA512 Digest",
@@ -495,14 +505,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha512_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA512 Digest Verify",
 		.test_data = &hmac_sha512_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 };
 
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v6 11/11] crypto/scheduler: add documentation
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
                           ` (9 preceding siblings ...)
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 10/11] app/test: add unit test for cryptodev scheduler PMD Fan Zhang
@ 2017-01-24 16:06         ` Fan Zhang
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds documentation covering the cryptodev scheduler PMD: overview,
limitations, build instructions, scheduling modes, etc.
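
For reference, the control path the guide documents reduces to a few
API calls; a minimal sketch, assuming a scheduler device and two slave
cryptodevs already exist (the device ids are illustrative, and the
mode-set function name is spelled as declared in this series):

	uint8_t sched_id = 0;
	uint8_t slave_ids[2] = {1, 2};

	/* attach both slaves, then pick the round-robin mode */
	rte_cryptodev_scheduler_slave_attach(sched_id, slave_ids[0]);
	rte_cryptodev_scheduler_slave_attach(sched_id, slave_ids[1]);
	rte_crpytodev_scheduler_mode_set(sched_id,
			CDEV_SCHED_MODE_ROUNDROBIN);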

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 doc/guides/cryptodevs/img/scheduler-overview.svg | 277 +++++++++++++++++++++++
 doc/guides/cryptodevs/index.rst                  |   3 +-
 doc/guides/cryptodevs/scheduler.rst              | 128 +++++++++++
 3 files changed, 407 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/cryptodevs/img/scheduler-overview.svg
 create mode 100644 doc/guides/cryptodevs/scheduler.rst

diff --git a/doc/guides/cryptodevs/img/scheduler-overview.svg b/doc/guides/cryptodevs/img/scheduler-overview.svg
new file mode 100644
index 0000000..82bb775
--- /dev/null
+++ b/doc/guides/cryptodevs/img/scheduler-overview.svg
@@ -0,0 +1,277 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export scheduler-fan.svg Page-1 -->
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+		xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="6.81229in" height="3.40992in"
+		viewBox="0 0 490.485 245.514" xml:space="preserve" color-interpolation-filters="sRGB" class="st10">
+	<v:documentProperties v:langID="1033" v:metric="true" v:viewMarkup="false"/>
+
+	<style type="text/css">
+	<![CDATA[
+		.st1 {visibility:visible}
+		.st2 {fill:#fec000;fill-opacity:0.25;filter:url(#filter_2);stroke:#fec000;stroke-opacity:0.25}
+		.st3 {fill:#cc3399;stroke:#ff8c00;stroke-width:3}
+		.st4 {fill:#ffffff;font-family:Calibri;font-size:1.33333em}
+		.st5 {fill:#ff9900;stroke:#ff8c00;stroke-width:3}
+		.st6 {fill:#ffffff;font-family:Calibri;font-size:1.33333em;font-weight:bold}
+		.st7 {fill:#ffc000;stroke:#ffffff;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.5}
+		.st8 {marker-end:url(#mrkr4-40);stroke:#ff0000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.5}
+		.st9 {fill:#ff0000;fill-opacity:1;stroke:#ff0000;stroke-opacity:1;stroke-width:0.37313432835821}
+		.st10 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+	]]>
+	</style>
+
+	<defs id="Markers">
+		<g id="lend4">
+			<path d="M 2 1 L 0 0 L 2 -1 L 2 1 " style="stroke:none"/>
+		</g>
+		<marker id="mrkr4-40" class="st9" v:arrowType="4" v:arrowSize="2" v:setback="5.36" refX="-5.36" orient="auto"
+				markerUnits="strokeWidth" overflow="visible">
+			<use xlink:href="#lend4" transform="scale(-2.68,-2.68) "/>
+		</marker>
+	</defs>
+	<defs id="Filters">
+		<filter id="filter_2">
+			<feGaussianBlur stdDeviation="2"/>
+		</filter>
+	</defs>
+	<g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+		<title>Page-1</title>
+		<v:pageProperties v:drawingScale="0.0393701" v:pageScale="0.0393701" v:drawingUnits="24" v:shadowOffsetX="8.50394"
+				v:shadowOffsetY="-8.50394"/>
+		<v:layer v:name="Connector" v:index="0"/>
+		<g id="shape31-1" v:mID="31" v:groupContext="shape" transform="translate(4.15435,-179.702)">
+			<title>Rounded Rectangle.55</title>
+			<desc>User Application</desc>
+			<v:userDefs>
+				<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+				<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+			</v:userDefs>
+			<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+			<v:textRect cx="240.743" cy="214.108" width="481.49" height="62.8119"/>
+			<g id="shadow31-2" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+					transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+				<path d="M11.05 245.51 L470.43 245.51 A11.0507 11.0507 -180 0 0 481.49 234.46 L481.49 193.75 A11.0507 11.0507 -180
+							 0 0 470.43 182.7 L11.05 182.7 A11.0507 11.0507 -180 0 0 -0 193.75 L0 234.46 A11.0507 11.0507 -180 0
+							 0 11.05 245.51 Z" class="st2"/>
+			</g>
+			<path d="M11.05 245.51 L470.43 245.51 A11.0507 11.0507 -180 0 0 481.49 234.46 L481.49 193.75 A11.0507 11.0507 -180 0
+						 0 470.43 182.7 L11.05 182.7 A11.0507 11.0507 -180 0 0 -0 193.75 L0 234.46 A11.0507 11.0507 -180 0 0 11.05
+						 245.51 Z" class="st3"/>
+			<text x="187.04" y="218.91" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>User Application</text>		</g>
+		<g id="shape135-7" v:mID="135" v:groupContext="shape" transform="translate(4.15435,-6.4728)">
+			<title>Rounded Rectangle.135</title>
+			<desc>Cryptodev</desc>
+			<v:userDefs>
+				<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+			</v:userDefs>
+			<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+			<v:textRect cx="72.0307" cy="230.549" width="144.07" height="29.9308"/>
+			<g id="shadow135-8" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+					transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+				<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180
+							 0 0 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0
+							 0 3.31 245.51 Z" class="st2"/>
+			</g>
+			<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180 0 0
+						 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0 0 3.31 245.51
+						 Z" class="st5"/>
+			<text x="38.46" y="235.35" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Cryptodev</text>		</g>
+		<g id="shape136-13" v:mID="136" v:groupContext="shape" transform="translate(172.866,-6.4728)">
+			<title>Rounded Rectangle.136</title>
+			<desc>Cryptodev</desc>
+			<v:userDefs>
+				<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+			</v:userDefs>
+			<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+			<v:textRect cx="72.0307" cy="230.549" width="144.07" height="29.9308"/>
+			<g id="shadow136-14" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+					transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+				<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180
+							 0 0 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0
+							 0 3.31 245.51 Z" class="st2"/>
+			</g>
+			<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180 0 0
+						 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0 0 3.31 245.51
+						 Z" class="st5"/>
+			<text x="38.46" y="235.35" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Cryptodev</text>		</g>
+		<g id="shape137-19" v:mID="137" v:groupContext="shape" transform="translate(341.578,-6.4728)">
+			<title>Rounded Rectangle.137</title>
+			<desc>Cryptodev</desc>
+			<v:userDefs>
+				<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+			</v:userDefs>
+			<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+			<v:textRect cx="72.0307" cy="230.549" width="144.07" height="29.9308"/>
+			<g id="shadow137-20" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+					transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+				<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180
+							 0 0 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0
+							 0 3.31 245.51 Z" class="st2"/>
+			</g>
+			<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180 0 0
+						 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0 0 3.31 245.51
+						 Z" class="st5"/>
+			<text x="38.46" y="235.35" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Cryptodev</text>		</g>
+		<g id="group139-25" transform="translate(4.15435,-66.8734)" v:mID="139" v:groupContext="group">
+			<title>Sheet.139</title>
+			<g id="shape33-26" v:mID="33" v:groupContext="shape">
+				<title>Rounded Rectangle.40</title>
+				<desc>Cryptodev Scheduler</desc>
+				<v:userDefs>
+					<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+					<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+					<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+					<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+					<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+					<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+					<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+					<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+				</v:userDefs>
+				<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197" v:verticalAlign="0"/>
+				<v:textRect cx="240.743" cy="204.056" width="481.49" height="82.916"/>
+				<g id="shadow33-27" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+						transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+					<path d="M11.05 245.51 L470.43 245.51 A11.0507 11.0507 -180 0 0 481.49 234.46 L481.49 173.65 A11.0507 11.0507
+								 -180 0 0 470.43 162.6 L11.05 162.6 A11.0507 11.0507 -180 0 0 0 173.65 L0 234.46 A11.0507 11.0507
+								 -180 0 0 11.05 245.51 Z" class="st2"/>
+				</g>
+				<path d="M11.05 245.51 L470.43 245.51 A11.0507 11.0507 -180 0 0 481.49 234.46 L481.49 173.65 A11.0507 11.0507 -180
+							 0 0 470.43 162.6 L11.05 162.6 A11.0507 11.0507 -180 0 0 0 173.65 L0 234.46 A11.0507 11.0507 -180 0 0
+							 11.05 245.51 Z" class="st5"/>
+				<text x="171.72" y="181" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Cryptodev Scheduler</text>			</g>
+			<g id="shape138-32" v:mID="138" v:groupContext="shape" transform="translate(24.6009,-12.5889)">
+				<title>Rounded Rectangle.138</title>
+				<desc>Crypto Op Distribution Mechanism</desc>
+				<v:userDefs>
+					<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+					<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+					<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+					<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.13780016666367):1"/>
+					<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.13780016666367):1"/>
+					<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.13780016666367):1"/>
+					<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.13780016666367):1"/>
+					<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+				</v:userDefs>
+				<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+				<v:textRect cx="216.142" cy="230.549" width="432.29" height="29.9308"/>
+				<path d="M9.92 245.51 L422.36 245.51 A9.92145 9.92145 -180 0 0 432.28 235.59 L432.28 225.51 A9.92145 9.92145 -180
+							 0 0 422.36 215.58 L9.92 215.58 A9.92145 9.92145 -180 0 0 0 225.51 L0 235.59 A9.92145 9.92145 -180 0
+							 0 9.92 245.51 Z" class="st7"/>
+				<text x="103.11" y="235.35" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Crypto Op Distribution Mechanism</text>			</g>
+		</g>
+		<g id="shape140-35" v:mID="140" v:groupContext="shape" v:layerMember="0" transform="translate(234.378,-149.789)">
+			<title>Dynamic connector.229</title>
+			<path d="M7.09 245.51 L7.09 223.64" class="st8"/>
+		</g>
+		<g id="shape141-41" v:mID="141" v:groupContext="shape" v:layerMember="0" transform="translate(248.551,-179.702)">
+			<title>Dynamic connector.141</title>
+			<path d="M7.09 245.51 L7.09 267.39" class="st8"/>
+		</g>
+		<g id="shape142-46" v:mID="142" v:groupContext="shape" v:layerMember="0" transform="translate(71.3856,-35.6203)">
+			<title>Dynamic connector.142</title>
+			<path d="M7.09 245.51 L7.09 223.64" class="st8"/>
+		</g>
+		<g id="shape143-51" v:mID="143" v:groupContext="shape" v:layerMember="0" transform="translate(85.5588,-65.5333)">
+			<title>Dynamic connector.143</title>
+			<path d="M7.09 245.51 L7.09 267.39" class="st8"/>
+		</g>
+		<g id="shape144-56" v:mID="144" v:groupContext="shape" v:layerMember="0" transform="translate(234.378,-35.6203)">
+			<title>Dynamic connector.144</title>
+			<path d="M7.09 245.51 L7.09 223.64" class="st8"/>
+		</g>
+		<g id="shape145-61" v:mID="145" v:groupContext="shape" v:layerMember="0" transform="translate(248.551,-65.5333)">
+			<title>Dynamic connector.145</title>
+			<path d="M7.09 245.51 L7.09 267.39" class="st8"/>
+		</g>
+		<g id="shape146-66" v:mID="146" v:groupContext="shape" v:layerMember="0" transform="translate(397.37,-34.837)">
+			<title>Dynamic connector.146</title>
+			<path d="M7.09 245.51 L7.09 223.64" class="st8"/>
+		</g>
+		<g id="shape147-71" v:mID="147" v:groupContext="shape" v:layerMember="0" transform="translate(411.543,-64.75)">
+			<title>Dynamic connector.147</title>
+			<path d="M7.09 245.51 L7.09 267.39" class="st8"/>
+		</g>
+	</g>
+</svg>
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 06c3f6e..0b50600 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -1,5 +1,5 @@
 ..  BSD LICENSE
-    Copyright(c) 2015 - 2016 Intel Corporation. All rights reserved.
+    Copyright(c) 2015 - 2017 Intel Corporation. All rights reserved.
 
     Redistribution and use in source and binary forms, with or without
     modification, are permitted provided that the following conditions
@@ -42,6 +42,7 @@ Crypto Device Drivers
     kasumi
     openssl
     null
+    scheduler
     snow3g
     qat
     zuc
diff --git a/doc/guides/cryptodevs/scheduler.rst b/doc/guides/cryptodevs/scheduler.rst
new file mode 100644
index 0000000..70fb62e
--- /dev/null
+++ b/doc/guides/cryptodevs/scheduler.rst
@@ -0,0 +1,128 @@
+..  BSD LICENSE
+    Copyright(c) 2017 Intel Corporation. All rights reserved.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Cryptodev Scheduler Poll Mode Driver Library
+============================================
+
+The scheduler PMD is a software crypto PMD that attaches hardware and/or
+software cryptodevs as slaves and distributes ingress crypto ops among
+them in a configurable manner.
+
+.. figure:: img/scheduler-overview.*
+
+   Cryptodev Scheduler Overview
+
+
+The Cryptodev Scheduler PMD library (**librte_pmd_crypto_scheduler**) acts as
+a software crypto PMD and shares the same API provided by librte_cryptodev.
+The PMD supports attaching multiple crypto PMDs, software or hardware, as
+slaves, and distributes the crypto workload to them according to a
+configured behavior. The behaviors are categorized as different "modes".
+Basically, a scheduling mode defines certain actions for scheduling crypto
+ops to its slaves.
+
+The librte_pmd_crypto_scheduler library exports a C API for attaching and
+detaching slaves, setting and getting the scheduling mode, and enabling and
+disabling crypto op reordering.
+
+Limitations
+-----------
+
+* Sessionless crypto operation is not supported.
+* Out-of-place (OOP) crypto operation is not supported when the crypto op
+  reordering feature is enabled.
+
+
+Installation
+------------
+
+To build DPDK with the CRYPTO_SCHEDULER_PMD the user is required to set
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base, and
+recompile DPDK.
+
+
+Initialization
+--------------
+
+To use the PMD in an application, the user must do one of the following:
+
+* Call rte_eal_vdev_init("crypto_scheduler") within the application.
+
+* Use --vdev="crypto_scheduler" in the EAL options, which will call
+  rte_eal_vdev_init() internally.
+
+
+The following parameters (all optional) can be provided in the previous
+two calls:
+
+* socket_id: Specify the socket where the memory for the device is going
+  to be allocated (by default, socket_id will be the socket where the core
+  that is creating the PMD is running on).
+
+* max_nb_sessions: Specify the maximum number of sessions that can be
+  created. This value may be overwritten internally if too many devices
+  are attached.
+
+* slave: If a cryptodev has been initialized with a specific name, it can
+  be attached to the scheduler by simply passing that name with this
+  parameter. Multiple cryptodevs can be attached initially by presenting
+  this parameter multiple times.
+
+Example:
+
+.. code-block:: console
+
+    ... --vdev "crypto_aesni_mb_pmd,name=aesni_mb_1" --vdev "crypto_aesni_mb_pmd,name=aesni_mb_2" --vdev "crypto_scheduler_pmd,slave=aesni_mb_1,slave=aesni_mb_2" ...
+
+.. note::
+
+    * The scheduler cryptodev cannot be started unless the scheduling mode
+      is set and at least one slave is attached. Also, to configure the
+      scheduler at run time, e.g. to attach/detach slave(s), change the
+      scheduling mode, or enable/disable crypto op reordering, one should
+      stop the scheduler first, otherwise an error will be returned.
+
+    * The crypto op reordering feature uses the userdata field of every
+      mbuf it processes to store temporary data. By the end of processing,
+      the field is set to NULL; any previously stored value in this field
+      will be lost.
+
+
+Cryptodev Scheduler Modes Overview
+----------------------------------
+
+Currently the Crypto Scheduler PMD library supports the following modes of
+operation:
+
+*   **CDEV_SCHED_MODE_ROUNDROBIN:**
+
+    Round-robin mode, which distributes the enqueued burst of crypto ops
+    among its slaves in a round-robin manner. This mode may help to fill
+    the throughput gap between the physical core and the existing cryptodevs
+    to increase the overall performance.
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd
  2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
                           ` (10 preceding siblings ...)
  2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 11/11] crypto/scheduler: add documentation Fan Zhang
@ 2017-01-24 16:23         ` Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 01/11] cryptodev: add scheduler PMD name and type Fan Zhang
                             ` (11 more replies)
  11 siblings, 12 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

This patch provides the initial implementation of the scheduler poll mode
driver using DPDK cryptodev framework.

Scheduler PMD is used to schedule and enqueue the crypto ops to the
hardware and/or software crypto devices attached to it (slaves). The
dequeue operation from the slave(s), and the possible dequeued crypto op
reordering, are then carried out by the scheduler.

As the initial version, the scheduler PMD currently supports only the
Round-robin mode, which distributes the enqueued burst of crypto ops
among its slaves in a round-robin manner. This mode may help to fill
the throughput gap between the physical core and the existing cryptodevs
to increase the overall performance. Moreover, the scheduler PMD provides
APIs for the user to create his/her own scheduler.

Build instructions:
To build DPDK with the CRYPTO_SCHEDULER_PMD the user is required to set
CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base

Notice:
- Scheduler PMD shares the same EAL commandline options as other cryptodevs.
  However, apart from socket_id, the rest of the cryptodev options are
  ignored. The scheduler PMD's max_nb_queue_pairs and max_nb_sessions
  options are set to the minimum values among the attached slaves'. For
  example, if a scheduler cryptodev has 2 cryptodevs attached with
  max_nb_queue_pairs of 2 and 8, respectively, its max_nb_queue_pairs
  will be automatically updated to 2.

- In addition, an extra option "slave" is added. The user can attach one
  or more slave cryptodevs initially by passing their names with this
  option. Here is an example:

  ... --vdev "crypto_aesni_mb_pmd,name=aesni_mb_1" --vdev "crypto_aesni_
  mb_pmd,name=aesni_mb_2" --vdev "crypto_scheduler_pmd,slave=aesni_mb_1,
  slave=aesni_mb_2" ...

  Remember that the software cryptodevs to be attached shall be declared
  before the scheduler PMD, otherwise the scheduler will fail to locate
  the slave(s) and report an error.

- The scheduler cryptodev cannot be started unless the scheduling mode
  is set and at least one slave is attached. Also, to configure the
  scheduler at run time, e.g. to attach/detach slave(s), change the
  scheduling mode, or enable/disable crypto op ordering, one should stop
  the scheduler first, otherwise an error will be returned.

- Enabling crypto op reordering will overwrite the userdata field of
  each mbuf.
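
  A minimal sketch of the reordering toggle (the application-side
  function name is illustrative):

    #include <rte_cryptodev_scheduler.h>

    static int
    enable_reordering(uint8_t scheduler_dev_id)
    {
            /* must be called while the scheduler device is stopped;
             * once enabled, the PMD uses each processed mbuf's userdata
             * field internally, so the app must not keep state there */
            return rte_cryptodev_scheduler_ordering_set(scheduler_dev_id, 1);
    }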

Changes in v7:
Added missed sign-off

Changes in v6:
Split into multiple patches.
Added documentation.
Added unit test.

Changes in v5:
Fixed EOF whitespace warning.
Updated Copyright.

Changes in v4:
Fixed a few bugs.
Added slave EAL commandline option support.

Changes in v3:
Fixed config/common_base.

Changes in v2:
New approaches in API to suit future scheduling modes.

Fan Zhang (11):
  cryptodev: add scheduler PMD name and type
  crypto/scheduler: add APIs for scheduler
  crypto/scheduler: add internal structure declarations
  crypto/scheduler: add scheduler API implementations
  crypto/scheduler: add round-robin scheduling mode
  crypto/scheduler: register scheduler vdev driver
  crypto/scheduler: register operation function pointer table
  crypto/scheduler: add scheduler PMD to DPDK compile system
  crypto/scheduler: add scheduler PMD config options
  app/test: add unit test for cryptodev scheduler PMD
  crypto/scheduler: add documentation

 app/test/test_cryptodev.c                          | 241 +++++++++-
 app/test/test_cryptodev_aes_test_vectors.h         | 101 +++--
 app/test/test_cryptodev_blockcipher.c              |   6 +-
 app/test/test_cryptodev_blockcipher.h              |   3 +-
 app/test/test_cryptodev_hash_test_vectors.h        |  38 +-
 config/common_base                                 |   8 +-
 doc/guides/cryptodevs/img/scheduler-overview.svg   | 277 ++++++++++++
 doc/guides/cryptodevs/index.rst                    |   3 +-
 doc/guides/cryptodevs/scheduler.rst                | 128 ++++++
 drivers/crypto/Makefile                            |   3 +-
 drivers/crypto/scheduler/Makefile                  |  66 +++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 471 ++++++++++++++++++++
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 165 +++++++
 .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++
 .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 +
 drivers/crypto/scheduler/scheduler_pmd.c           | 361 +++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_ops.c       | 490 +++++++++++++++++++++
 drivers/crypto/scheduler/scheduler_pmd_private.h   | 115 +++++
 drivers/crypto/scheduler/scheduler_roundrobin.c    | 435 ++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h               |   3 +
 mk/rte.app.mk                                      |   6 +-
 21 files changed, 2948 insertions(+), 55 deletions(-)
 create mode 100644 doc/guides/cryptodevs/img/scheduler-overview.svg
 create mode 100644 doc/guides/cryptodevs/scheduler.rst
 create mode 100644 drivers/crypto/scheduler/Makefile
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
 create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h
 create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c

-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v7 01/11] cryptodev: add scheduler PMD name and type
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
@ 2017-01-24 16:23           ` Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 02/11] crypto/scheduler: add APIs for scheduler Fan Zhang
                             ` (10 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

This patch adds the cryptodev scheduler PMD name and type identifier to
librte_cryptodev.
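
For context, a hedged sketch of creating this PMD by name from
application code; the vdev name follows the CRYPTODEV_NAME_SCHEDULER_PMD
macro added here, and the devargs value is illustrative (it assumes a
cryptodev named aesni_mb_1 was created first):

    #include <rte_dev.h>

    static int
    create_scheduler_vdev(void)
    {
            /* "aesni_mb_1" is an illustrative, previously created slave */
            return rte_eal_vdev_init("crypto_scheduler", "slave=aesni_mb_1");
    }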

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index f284668..618f302 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -68,6 +68,8 @@ extern "C" {
 /**< KASUMI PMD device name */
 #define CRYPTODEV_NAME_ARMV8_PMD	crypto_armv8
 /**< ARMv8 Crypto PMD device name */
+#define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
+/**< Scheduler Crypto PMD device name */
 
 /** Crypto device type */
 enum rte_cryptodev_type {
@@ -80,6 +82,7 @@ enum rte_cryptodev_type {
 	RTE_CRYPTODEV_ZUC_PMD,		/**< ZUC PMD */
 	RTE_CRYPTODEV_OPENSSL_PMD,    /**<  OpenSSL PMD */
 	RTE_CRYPTODEV_ARMV8_PMD,	/**< ARMv8 crypto PMD */
+	RTE_CRYPTODEV_SCHEDULER_PMD,	/**< Crypto Scheduler PMD */
 };
 
 extern const char **rte_cyptodev_names;
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v7 02/11] crypto/scheduler: add APIs for scheduler
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 01/11] cryptodev: add scheduler PMD name and type Fan Zhang
@ 2017-01-24 16:23           ` Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 03/11] crypto/scheduler: add internal structure declarations Fan Zhang
                             ` (9 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds APIs and function prototypes for the scheduler PMD to perform extra
operations other than standard cryptodev APIs.
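
For illustration, a minimal control-path sketch chaining these APIs
(device IDs are placeholders; CDEV_SCHED_MODE_ROUNDROBIN is only added
later in this series, and the "crpytodev" spelling below is the actual
exported symbol name):

    #include <rte_cryptodev_scheduler.h>

    static int
    setup_scheduler(uint8_t scheduler_id, uint8_t slave_id)
    {
            int ret;

            /* all of this must happen while the scheduler is stopped */
            ret = rte_cryptodev_scheduler_slave_attach(scheduler_id,
                            slave_id);
            if (ret < 0)
                    return ret;

            ret = rte_crpytodev_scheduler_mode_set(scheduler_id,
                            CDEV_SCHED_MODE_ROUNDROBIN);
            if (ret < 0)
                    return ret;

            /* optional: enable crypto op reordering */
            return rte_cryptodev_scheduler_ordering_set(scheduler_id, 1);
    }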

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h | 162 +++++++++++++++++++++
 .../scheduler/rte_cryptodev_scheduler_operations.h |  71 +++++++++
 .../scheduler/rte_pmd_crypto_scheduler_version.map |  12 ++
 3 files changed, 245 insertions(+)
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.h
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
 create mode 100644 drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map

diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
new file mode 100644
index 0000000..b18fc48
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
@@ -0,0 +1,162 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_H
+#define _RTE_CRYPTO_SCHEDULER_H
+
+#include <rte_cryptodev_scheduler_operations.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Crypto scheduler PMD operation modes
+ */
+enum rte_cryptodev_scheduler_mode {
+	CDEV_SCHED_MODE_NOT_SET = 0,
+	CDEV_SCHED_MODE_USERDEFINED,
+
+	CDEV_SCHED_MODE_COUNT /* number of modes */
+};
+
+#define RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN	(64)
+#define RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN	(256)
+
+struct rte_cryptodev_scheduler;
+
+/**
+ * Load a user defined scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	scheduler	Pointer to the user defined scheduler
+ *
+ * @return
+ *	0 if loading successful, negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler);
+
+/**
+ * Attach a pre-configured crypto device to the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	crypto device ID to be attached
+ *
+ * @return
+ *	0 if attaching successful, negative int if otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Detach an attached crypto device from the scheduler
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	slave_id	crypto device ID to be detached
+ *
+ * @return
+ *	0 if detaching successful, negative int if otherwise.
+ */
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id);
+
+/**
+ * Set the scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	mode		The scheduling mode
+ *
+ * @return
+ *	0 if setting successful, negative integer if otherwise.
+ */
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode);
+
+/**
+ * Get the current scheduling mode
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	The current scheduling mode
+ */
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id);
+
+/**
+ * Set the crypto ops reordering feature on/off
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ * @param	enable_reorder	set the crypto op reordering feature
+ *				0: disable reordering
+ *				1: enable reordering
+ *
+ * @return
+ *	0 if setting successful, negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder);
+
+/**
+ * Get the current crypto ops reordering feature
+ *
+ * @param	scheduler_id	The target scheduler device ID
+ *
+ * @return
+ *	0 if reordering is disabled
+ *	1 if reordering is enabled
+ *	negative integer if otherwise.
+ */
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_enqueue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+typedef uint16_t (*rte_cryptodev_scheduler_burst_dequeue_t)(void *qp_ctx,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+
+struct rte_cryptodev_scheduler {
+	const char *name;
+	const char *description;
+	enum rte_cryptodev_scheduler_mode mode;
+
+	struct rte_cryptodev_scheduler_ops *ops;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_H */
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
new file mode 100644
index 0000000..93cf123
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
@@ -0,0 +1,71 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+#define _RTE_CRYPTO_SCHEDULER_OPERATIONS_H
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int (*rte_cryptodev_scheduler_slave_attach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+typedef int (*rte_cryptodev_scheduler_slave_detach_t)(
+		struct rte_cryptodev *dev, uint8_t slave_id);
+
+typedef int (*rte_cryptodev_scheduler_start_t)(struct rte_cryptodev *dev);
+typedef int (*rte_cryptodev_scheduler_stop_t)(struct rte_cryptodev *dev);
+
+typedef int (*rte_cryptodev_scheduler_config_queue_pair)(
+		struct rte_cryptodev *dev, uint16_t qp_id);
+
+typedef int (*rte_cryptodev_scheduler_create_private_ctx)(
+		struct rte_cryptodev *dev);
+
+struct rte_cryptodev_scheduler_ops {
+	rte_cryptodev_scheduler_slave_attach_t slave_attach;
+	rte_cryptodev_scheduler_slave_detach_t slave_detach;
+
+	rte_cryptodev_scheduler_start_t scheduler_start;
+	rte_cryptodev_scheduler_stop_t scheduler_stop;
+
+	rte_cryptodev_scheduler_config_queue_pair config_queue_pair;
+
+	rte_cryptodev_scheduler_create_private_ctx create_private_ctx;
+};
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_CRYPTO_SCHEDULER_OPERATIONS_H */
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
new file mode 100644
index 0000000..a485b43
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -0,0 +1,12 @@
+DPDK_17.02 {
+	global:
+
+	rte_cryptodev_scheduler_load_user_scheduler;
+	rte_cryptodev_scheduler_slave_attach;
+	rte_cryptodev_scheduler_slave_detach;
+	rte_crpytodev_scheduler_mode_set;
+	rte_crpytodev_scheduler_mode_get;
+	rte_cryptodev_scheduler_ordering_set;
+	rte_cryptodev_scheduler_ordering_get;
+
+};
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v7 03/11] crypto/scheduler: add internal structure declarations
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 01/11] cryptodev: add scheduler PMD name and type Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 02/11] crypto/scheduler: add APIs for scheduler Fan Zhang
@ 2017-01-24 16:23           ` Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 04/11] crypto/scheduler: add scheduler API implementations Fan Zhang
                             ` (8 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds a number of internal structures for the cryptodev scheduler PMD. The
structures include the scheduler context, slave, queue pair context,
and session.
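
To see how these pieces connect, here is a hedged sketch (not code from
this patch) of how the PMD's enqueue path might delegate through the
per-queue-pair callback that the active scheduling mode installs:

    #include <rte_crypto.h>
    #include "scheduler_pmd_private.h"

    static uint16_t
    scheduler_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
                    uint16_t nb_ops)
    {
            struct scheduler_qp_ctx *qp_ctx = queue_pair;

            /* schedule_enqueue is set by the mode when the device is
             * started; the real wiring lands in later patches */
            return (*qp_ctx->schedule_enqueue)(qp_ctx, ops, nb_ops);
    }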

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 drivers/crypto/scheduler/scheduler_pmd_private.h | 115 +++++++++++++++++++++++
 1 file changed, 115 insertions(+)
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_private.h

diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
new file mode 100644
index 0000000..ac4690e
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -0,0 +1,115 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _SCHEDULER_PMD_PRIVATE_H
+#define _SCHEDULER_PMD_PRIVATE_H
+
+#include <rte_hash.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+/**< Maximum number of slave devices per scheduler device */
+#ifndef MAX_SLAVES_NUM
+#define MAX_SLAVES_NUM				(8)
+#endif
+
+#define PER_SLAVE_BUFF_SIZE			(256)
+
+#define CS_LOG_ERR(fmt, args...)					\
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",		\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_CRYPTO_SCHEDULER_DEBUG
+#define CS_LOG_INFO(fmt, args...)					\
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+
+#define CS_LOG_DBG(fmt, args...)					\
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",	\
+		RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),			\
+		__func__, __LINE__, ## args)
+#else
+#define CS_LOG_INFO(fmt, args...)
+#define CS_LOG_DBG(fmt, args...)
+#endif
+
+struct scheduler_slave {
+	uint8_t dev_id;
+	uint16_t qp_id;
+	uint32_t nb_inflight_cops;
+
+	enum rte_cryptodev_type dev_type;
+};
+
+struct scheduler_ctx {
+	void *private_ctx;
+	/**< private scheduler context pointer */
+
+	struct rte_cryptodev_capabilities *capabilities;
+	uint32_t nb_capabilities;
+
+	uint32_t max_nb_queue_pairs;
+
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	uint32_t nb_slaves;
+
+	enum rte_cryptodev_scheduler_mode mode;
+
+	struct rte_cryptodev_scheduler_ops ops;
+
+	uint8_t reordering_enabled;
+
+	char name[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
+	char description[RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN];
+} __rte_cache_aligned;
+
+struct scheduler_qp_ctx {
+	void *private_qp_ctx;
+
+	rte_cryptodev_scheduler_burst_enqueue_t schedule_enqueue;
+	rte_cryptodev_scheduler_burst_dequeue_t schedule_dequeue;
+
+	struct rte_reorder_buffer *reorder_buf;
+	uint32_t seqn;
+} __rte_cache_aligned;
+
+struct scheduler_session {
+	struct rte_cryptodev_sym_session *sessions[MAX_SLAVES_NUM];
+};
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
+
+#endif /* _SCHEDULER_PMD_PRIVATE_H */
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v7 04/11] crypto/scheduler: add scheduler API implementations
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
                             ` (2 preceding siblings ...)
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 03/11] crypto/scheduler: add internal structure declarations Fan Zhang
@ 2017-01-24 16:23           ` Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 05/11] crypto/scheduler: add round-robin scheduling mode Fan Zhang
                             ` (7 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds the implementations of the APIs for scheduler cryptodev PMD.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 464 +++++++++++++++++++++
 1 file changed, 464 insertions(+)
 create mode 100644 drivers/crypto/scheduler/rte_cryptodev_scheduler.c

diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
new file mode 100644
index 0000000..14f0983
--- /dev/null
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -0,0 +1,464 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_reorder.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_cryptodev_scheduler.h>
+#include <rte_malloc.h>
+
+#include "scheduler_pmd_private.h"
+
+/** update the scheduler PMD's capabilities with the attached device's
+ *  capabilities.
+ *  For each device to be attached, the scheduler's capabilities should be
+ *  the common capability set of all slaves
+ **/
+static uint32_t
+sync_caps(struct rte_cryptodev_capabilities *caps,
+		uint32_t nb_caps,
+		const struct rte_cryptodev_capabilities *slave_caps)
+{
+	uint32_t sync_nb_caps = nb_caps, nb_slave_caps = 0;
+	uint32_t i;
+
+	while (slave_caps[nb_slave_caps].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
+		nb_slave_caps++;
+
+	if (nb_caps == 0) {
+		rte_memcpy(caps, slave_caps, sizeof(*caps) * nb_slave_caps);
+		return nb_slave_caps;
+	}
+
+	for (i = 0; i < sync_nb_caps; i++) {
+		struct rte_cryptodev_capabilities *cap = &caps[i];
+		uint32_t j;
+
+		for (j = 0; j < nb_slave_caps; j++) {
+			const struct rte_cryptodev_capabilities *s_cap =
+					&slave_caps[j];
+
+			if (s_cap->op != cap->op || s_cap->sym.xform_type !=
+					cap->sym.xform_type)
+				continue;
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_AUTH) {
+				if (s_cap->sym.auth.algo !=
+						cap->sym.auth.algo)
+					continue;
+
+				cap->sym.auth.digest_size.min =
+					s_cap->sym.auth.digest_size.min <
+					cap->sym.auth.digest_size.min ?
+					s_cap->sym.auth.digest_size.min :
+					cap->sym.auth.digest_size.min;
+				cap->sym.auth.digest_size.max =
+					s_cap->sym.auth.digest_size.max <
+					cap->sym.auth.digest_size.max ?
+					s_cap->sym.auth.digest_size.max :
+					cap->sym.auth.digest_size.max;
+
+			}
+
+			if (s_cap->sym.xform_type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				if (s_cap->sym.cipher.algo !=
+						cap->sym.cipher.algo)
+					continue;
+
+			/* no common cap found */
+			break;
+		}
+
+		if (j < nb_slave_caps)
+			continue;
+
+		/* remove an uncommon cap from the array */
+		for (j = i; j < sync_nb_caps - 1; j++)
+			rte_memcpy(&caps[j], &caps[j+1], sizeof(*cap));
+
+		memset(&caps[sync_nb_caps - 1], 0, sizeof(*cap));
+		sync_nb_caps--;
+	}
+
+	return sync_nb_caps;
+}
+
+static int
+update_scheduler_capability(struct scheduler_ctx *sched_ctx)
+{
+	struct rte_cryptodev_capabilities tmp_caps[256] = { {0} };
+	uint32_t nb_caps = 0, i;
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		nb_caps = sync_caps(tmp_caps, nb_caps, dev_info.capabilities);
+		if (nb_caps == 0)
+			return -1;
+	}
+
+	sched_ctx->capabilities = rte_zmalloc_socket(NULL,
+			sizeof(struct rte_cryptodev_capabilities) *
+			(nb_caps + 1), 0, SOCKET_ID_ANY);
+	if (!sched_ctx->capabilities)
+		return -ENOMEM;
+
+	rte_memcpy(sched_ctx->capabilities, tmp_caps,
+			sizeof(struct rte_cryptodev_capabilities) * nb_caps);
+
+	return 0;
+}
+
+static void
+update_scheduler_feature_flag(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	dev->feature_flags = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+
+		dev->feature_flags |= dev_info.feature_flags;
+	}
+}
+
+static void
+update_max_nb_qp(struct scheduler_ctx *sched_ctx)
+{
+	uint32_t i;
+	uint32_t max_nb_qp;
+
+	if (!sched_ctx->nb_slaves)
+		return;
+
+	max_nb_qp = sched_ctx->nb_slaves ? UINT32_MAX : 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct rte_cryptodev_info dev_info;
+
+		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id, &dev_info);
+		max_nb_qp = dev_info.max_nb_queue_pairs < max_nb_qp ?
+				dev_info.max_nb_queue_pairs : max_nb_qp;
+	}
+
+	sched_ctx->max_nb_queue_pairs = max_nb_qp;
+}
+
+/** Attach a device to the scheduler. */
+int
+rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	struct scheduler_slave *slave;
+	struct rte_cryptodev_info dev_info;
+	uint32_t i;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+	if (sched_ctx->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves attached");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++)
+		if (sched_ctx->slaves[i].dev_id == slave_id) {
+			CS_LOG_ERR("Slave already added");
+			return -ENOTSUP;
+		}
+
+	slave = &sched_ctx->slaves[sched_ctx->nb_slaves];
+
+	rte_cryptodev_info_get(slave_id, &dev_info);
+
+	slave->dev_id = slave_id;
+	slave->dev_type = dev_info.dev_type;
+	sched_ctx->nb_slaves++;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		slave->dev_id = 0;
+		slave->dev_type = 0;
+		sched_ctx->nb_slaves--;
+
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+	uint32_t i, slave_pos;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	for (slave_pos = 0; slave_pos < sched_ctx->nb_slaves; slave_pos++)
+		if (sched_ctx->slaves[slave_pos].dev_id == slave_id)
+			break;
+	if (slave_pos == sched_ctx->nb_slaves) {
+		CS_LOG_ERR("Cannot find slave");
+		return -ENOTSUP;
+	}
+
+	if (sched_ctx->ops.slave_detach(dev, slave_id) < 0) {
+		CS_LOG_ERR("Failed to detach slave");
+		return -ENOTSUP;
+	}
+
+	for (i = slave_pos; i < sched_ctx->nb_slaves - 1; i++) {
+		memcpy(&sched_ctx->slaves[i], &sched_ctx->slaves[i+1],
+				sizeof(struct scheduler_slave));
+	}
+	memset(&sched_ctx->slaves[sched_ctx->nb_slaves - 1], 0,
+			sizeof(struct scheduler_slave));
+	sched_ctx->nb_slaves--;
+
+	if (update_scheduler_capability(sched_ctx) < 0) {
+		CS_LOG_ERR("capabilities update failed");
+		return -ENOTSUP;
+	}
+
+	update_scheduler_feature_flag(dev);
+
+	update_max_nb_qp(sched_ctx);
+
+	return 0;
+}
+
+int
+rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
+		enum rte_cryptodev_scheduler_mode mode)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	if (mode == sched_ctx->mode)
+		return 0;
+
+	switch (mode) {
+	default:
+		CS_LOG_ERR("Not yet supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+enum rte_cryptodev_scheduler_mode
+rte_crpytodev_scheduler_mode_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return sched_ctx->mode;
+}
+
+int
+rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
+		uint32_t enable_reorder)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	sched_ctx->reordering_enabled = enable_reorder;
+
+	return 0;
+}
+
+int
+rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
+{
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	return (int)sched_ctx->reordering_enabled;
+}
+
+int
+rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
+		struct rte_cryptodev_scheduler *scheduler) {
+
+	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
+	struct scheduler_ctx *sched_ctx;
+
+	if (!dev) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+		CS_LOG_ERR("Operation not supported");
+		return -ENOTSUP;
+	}
+
+	if (dev->data->dev_started) {
+		CS_LOG_ERR("Illegal operation");
+		return -EBUSY;
+	}
+
+	sched_ctx = dev->data->dev_private;
+
+	strncpy(sched_ctx->name, scheduler->name,
+			RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN);
+	strncpy(sched_ctx->description, scheduler->description,
+			RTE_CRYPTODEV_SCHEDULER_DESC_MAX_LEN);
+
+	/* load scheduler instance operations functions */
+	sched_ctx->ops.config_queue_pair = scheduler->ops->config_queue_pair;
+	sched_ctx->ops.create_private_ctx = scheduler->ops->create_private_ctx;
+	sched_ctx->ops.scheduler_start = scheduler->ops->scheduler_start;
+	sched_ctx->ops.scheduler_stop = scheduler->ops->scheduler_stop;
+	sched_ctx->ops.slave_attach = scheduler->ops->slave_attach;
+	sched_ctx->ops.slave_detach = scheduler->ops->slave_detach;
+
+	if (sched_ctx->private_ctx) {
+		rte_free(sched_ctx->private_ctx);
+		/* avoid a dangling pointer if no new context is created */
+		sched_ctx->private_ctx = NULL;
+	}
+
+	if (sched_ctx->ops.create_private_ctx) {
+		int ret = (*sched_ctx->ops.create_private_ctx)(dev);
+
+		if (ret < 0) {
+			CS_LOG_ERR("Unable to create scheduler private "
+					"context");
+			return ret;
+		}
+	}
+
+	sched_ctx->mode = scheduler->mode;
+
+	return 0;
+}
-- 
2.7.4

* [dpdk-dev] [PATCH v7 05/11] crypto/scheduler: add round-robin scheduling mode
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
                             ` (3 preceding siblings ...)
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 04/11] crypto/scheduler: add scheduler API implementations Fan Zhang
@ 2017-01-24 16:23           ` Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 06/11] crypto/scheduler: register scheduler vdev driver Fan Zhang
                             ` (6 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Implements the round-robin scheduling mode and registers it in the
cryptodev scheduler ops structure. This mode enqueues each burst of
operations to one of its slaves, then moves to the next slave for the
following burst. The same procedure is applied to dequeue operations.
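
Below is a minimal usage sketch for context; the wrapper function and
device ID are illustrative, while the API call and the enum value are
the ones added by this patch set:

	#include <rte_cryptodev_scheduler.h>

	/* scheduler_id is assumed to identify a scheduler PMD instance
	 * that already has one or more slaves attached
	 */
	static int
	select_round_robin(uint8_t scheduler_id)
	{
		return rte_crpytodev_scheduler_mode_set(scheduler_id,
				CDEV_SCHED_MODE_ROUNDROBIN);
	}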

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
 drivers/crypto/scheduler/rte_cryptodev_scheduler.c |   7 +
 drivers/crypto/scheduler/rte_cryptodev_scheduler.h |   3 +
 drivers/crypto/scheduler/scheduler_roundrobin.c    | 435 +++++++++++++++++++++
 3 files changed, 445 insertions(+)
 create mode 100644 drivers/crypto/scheduler/scheduler_roundrobin.c

diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
index 14f0983..11e8143 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -329,6 +329,13 @@ rte_crpytodev_scheduler_mode_set(uint8_t scheduler_id,
 		return 0;
 
 	switch (mode) {
+	case CDEV_SCHED_MODE_ROUNDROBIN:
+		if (rte_cryptodev_scheduler_load_user_scheduler(scheduler_id,
+				roundrobin_scheduler) < 0) {
+			CS_LOG_ERR("Failed to load scheduler");
+			return -1;
+		}
+		break;
 	default:
 		CS_LOG_ERR("Not yet supported");
 		return -ENOTSUP;
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
index b18fc48..7ef44e7 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
@@ -46,6 +46,7 @@ extern "C" {
 enum rte_cryptodev_scheduler_mode {
 	CDEV_SCHED_MODE_NOT_SET = 0,
 	CDEV_SCHED_MODE_USERDEFINED,
+	CDEV_SCHED_MODE_ROUNDROBIN,
 
 	CDEV_SCHED_MODE_COUNT /* number of modes */
 };
@@ -156,6 +157,8 @@ struct rte_cryptodev_scheduler {
 	struct rte_cryptodev_scheduler_ops *ops;
 };
 
+extern struct rte_cryptodev_scheduler *roundrobin_scheduler;
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
new file mode 100644
index 0000000..1f2e709
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -0,0 +1,435 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+
+#include "rte_cryptodev_scheduler_operations.h"
+#include "scheduler_pmd_private.h"
+
+struct rr_scheduler_qp_ctx {
+	struct scheduler_slave slaves[MAX_SLAVES_NUM];
+	uint32_t nb_slaves;
+
+	uint32_t last_enq_slave_idx;
+	uint32_t last_deq_slave_idx;
+};
+
+static uint16_t
+schedule_enqueue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct rte_cryptodev_sym_session *sessions[nb_ops];
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++)
+		rte_prefetch0(ops[i]->sym->session);
+
+	for (i = 0; (i < (nb_ops - 8)) && (nb_ops > 8); i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		sessions[i] = ops[i]->sym->session;
+		sessions[i + 1] = ops[i + 1]->sym->session;
+		sessions[i + 2] = ops[i + 2]->sym->session;
+		sessions[i + 3] = ops[i + 3]->sym->session;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->session);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		/* save the original session for the recovery path below */
+		sessions[i] = ops[i]->sym->session;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	rr_qp_ctx->last_enq_slave_idx %= rr_qp_ctx->nb_slaves;
+
+	/* recover session if enqueue is failed */
+	if (unlikely(processed_ops < nb_ops)) {
+		for (i = processed_ops; i < nb_ops; i++)
+			ops[i]->sym->session = sessions[i];
+	}
+
+	return processed_ops;
+}
+
+static uint16_t
+schedule_enqueue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			gen_qp_ctx->private_qp_ctx;
+	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
+	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
+	uint16_t i, processed_ops;
+	struct rte_cryptodev_sym_session *sessions[nb_ops];
+	struct scheduler_session *sess0, *sess1, *sess2, *sess3;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	for (i = 0; i < nb_ops && i < 4; i++) {
+		rte_prefetch0(ops[i]->sym->session);
+		rte_prefetch0(ops[i]->sym->m_src);
+	}
+
+	for (i = 0; (i < (nb_ops - 8)) && (nb_ops > 8); i += 4) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		sess1 = (struct scheduler_session *)
+				ops[i+1]->sym->session->_private;
+		sess2 = (struct scheduler_session *)
+				ops[i+2]->sym->session->_private;
+		sess3 = (struct scheduler_session *)
+				ops[i+3]->sym->session->_private;
+
+		sessions[i] = ops[i]->sym->session;
+		sessions[i + 1] = ops[i + 1]->sym->session;
+		sessions[i + 2] = ops[i + 2]->sym->session;
+		sessions[i + 3] = ops[i + 3]->sym->session;
+
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 1]->sym->session = sess1->sessions[slave_idx];
+		ops[i + 1]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 2]->sym->session = sess2->sessions[slave_idx];
+		ops[i + 2]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+		ops[i + 3]->sym->session = sess3->sessions[slave_idx];
+		ops[i + 3]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+
+		rte_prefetch0(ops[i + 4]->sym->session);
+		rte_prefetch0(ops[i + 4]->sym->m_src);
+		rte_prefetch0(ops[i + 5]->sym->session);
+		rte_prefetch0(ops[i + 5]->sym->m_src);
+		rte_prefetch0(ops[i + 6]->sym->session);
+		rte_prefetch0(ops[i + 6]->sym->m_src);
+		rte_prefetch0(ops[i + 7]->sym->session);
+		rte_prefetch0(ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_ops; i++) {
+		sess0 = (struct scheduler_session *)
+				ops[i]->sym->session->_private;
+		/* save the original session for the recovery path below */
+		sessions[i] = ops[i]->sym->session;
+		ops[i]->sym->session = sess0->sessions[slave_idx];
+		ops[i]->sym->m_src->seqn = gen_qp_ctx->seqn++;
+	}
+
+	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	slave->nb_inflight_cops += processed_ops;
+
+	rr_qp_ctx->last_enq_slave_idx += 1;
+	rr_qp_ctx->last_enq_slave_idx %= rr_qp_ctx->nb_slaves;
+
+	/* recover session if enqueue is failed */
+	if (unlikely(processed_ops < nb_ops)) {
+		for (i = processed_ops; i < nb_ops; i++)
+			ops[i]->sym->session = sessions[i];
+	}
+
+	return processed_ops;
+}
+
+
+static uint16_t
+schedule_dequeue(void *qp_ctx, struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rr_scheduler_qp_ctx *rr_qp_ctx =
+			((struct scheduler_qp_ctx *)qp_ctx)->private_qp_ctx;
+	struct scheduler_slave *slave;
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t nb_deq_ops;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, ops, nb_ops);
+
+	last_slave_idx += 1;
+	last_slave_idx %= rr_qp_ctx->nb_slaves;
+
+	rr_qp_ctx->last_deq_slave_idx = last_slave_idx;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	return nb_deq_ops;
+}
+
+static uint16_t
+schedule_dequeue_ordering(void *qp_ctx, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *gen_qp_ctx = (struct scheduler_qp_ctx *)qp_ctx;
+	struct rr_scheduler_qp_ctx *rr_qp_ctx = (gen_qp_ctx->private_qp_ctx);
+	struct scheduler_slave *slave;
+	struct rte_reorder_buffer *reorder_buff = gen_qp_ctx->reorder_buf;
+	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	uint16_t nb_deq_ops, nb_drained_mbufs;
+	const uint16_t nb_op_ops = nb_ops;
+	struct rte_crypto_op *op_ops[nb_op_ops];
+	struct rte_mbuf *reorder_mbufs[nb_op_ops];
+	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
+	uint16_t i;
+
+	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
+		do {
+			last_slave_idx += 1;
+
+			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
+				last_slave_idx = 0;
+			/* looped back, means no inflight cops in the queue */
+			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
+				return 0;
+		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
+				== 0);
+	}
+
+	slave = &rr_qp_ctx->slaves[last_slave_idx];
+
+	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
+			slave->qp_id, op_ops, nb_ops);
+
+	rr_qp_ctx->last_deq_slave_idx += 1;
+	rr_qp_ctx->last_deq_slave_idx %= rr_qp_ctx->nb_slaves;
+
+	slave->nb_inflight_cops -= nb_deq_ops;
+
+	for (i = 0; i < nb_deq_ops && i < 4; i++)
+		rte_prefetch0(op_ops[i]->sym->m_src);
+
+	for (i = 0; (i < (nb_deq_ops - 8)) && (nb_deq_ops > 8); i += 4) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		mbuf1 = op_ops[i + 1]->sym->m_src;
+		mbuf2 = op_ops[i + 2]->sym->m_src;
+		mbuf3 = op_ops[i + 3]->sym->m_src;
+
+		mbuf0->userdata = op_ops[i];
+		mbuf1->userdata = op_ops[i + 1];
+		mbuf2->userdata = op_ops[i + 2];
+		mbuf3->userdata = op_ops[i + 3];
+
+		rte_reorder_insert(reorder_buff, mbuf0);
+		rte_reorder_insert(reorder_buff, mbuf1);
+		rte_reorder_insert(reorder_buff, mbuf2);
+		rte_reorder_insert(reorder_buff, mbuf3);
+
+		rte_prefetch0(op_ops[i + 4]->sym->m_src);
+		rte_prefetch0(op_ops[i + 5]->sym->m_src);
+		rte_prefetch0(op_ops[i + 6]->sym->m_src);
+		rte_prefetch0(op_ops[i + 7]->sym->m_src);
+	}
+
+	for (; i < nb_deq_ops; i++) {
+		mbuf0 = op_ops[i]->sym->m_src;
+		mbuf0->userdata = op_ops[i];
+		rte_reorder_insert(reorder_buff, mbuf0);
+	}
+
+	nb_drained_mbufs = rte_reorder_drain(reorder_buff, reorder_mbufs,
+			nb_ops);
+	for (i = 0; i < nb_drained_mbufs && i < 4; i++)
+		rte_prefetch0(reorder_mbufs[i]);
+
+	for (i = 0; (i < (nb_drained_mbufs - 8)) && (nb_drained_mbufs > 8);
+			i += 4) {
+		/* userdata holds the op pointer itself, so recover it
+		 * with a plain cast rather than a dereference
+		 */
+		ops[i] = (struct rte_crypto_op *)reorder_mbufs[i]->userdata;
+		ops[i + 1] = (struct rte_crypto_op *)
+			reorder_mbufs[i + 1]->userdata;
+		ops[i + 2] = (struct rte_crypto_op *)
+			reorder_mbufs[i + 2]->userdata;
+		ops[i + 3] = (struct rte_crypto_op *)
+			reorder_mbufs[i + 3]->userdata;
+
+		reorder_mbufs[i]->userdata = NULL;
+		reorder_mbufs[i + 1]->userdata = NULL;
+		reorder_mbufs[i + 2]->userdata = NULL;
+		reorder_mbufs[i + 3]->userdata = NULL;
+
+		rte_prefetch0(reorder_mbufs[i + 4]);
+		rte_prefetch0(reorder_mbufs[i + 5]);
+		rte_prefetch0(reorder_mbufs[i + 6]);
+		rte_prefetch0(reorder_mbufs[i + 7]);
+	}
+
+	for (; i < nb_drained_mbufs; i++) {
+		ops[i] = (struct rte_crypto_op *)
+			reorder_mbufs[i]->userdata;
+		reorder_mbufs[i]->userdata = NULL;
+	}
+
+	return nb_drained_mbufs;
+}
+
+static int
+slave_attach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+slave_detach(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint8_t slave_id)
+{
+	return 0;
+}
+
+static int
+scheduler_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+		struct rr_scheduler_qp_ctx *rr_qp_ctx =
+				qp_ctx->private_qp_ctx;
+		uint32_t j;
+
+		memset(rr_qp_ctx->slaves, 0, MAX_SLAVES_NUM *
+				sizeof(struct scheduler_slave));
+		for (j = 0; j < sched_ctx->nb_slaves; j++) {
+			rr_qp_ctx->slaves[j].dev_id =
+					sched_ctx->slaves[j].dev_id;
+			/* scheduler qp i maps to the same qp id on
+			 * every slave
+			 */
+			rr_qp_ctx->slaves[j].qp_id = i;
+		}
+
+		rr_qp_ctx->nb_slaves = sched_ctx->nb_slaves;
+
+		rr_qp_ctx->last_enq_slave_idx = 0;
+		rr_qp_ctx->last_deq_slave_idx = 0;
+
+		if (sched_ctx->reordering_enabled) {
+			qp_ctx->schedule_enqueue = &schedule_enqueue_ordering;
+			qp_ctx->schedule_dequeue = &schedule_dequeue_ordering;
+		} else {
+			qp_ctx->schedule_enqueue = &schedule_enqueue;
+			qp_ctx->schedule_dequeue = &schedule_dequeue;
+		}
+	}
+
+	return 0;
+}
+
+static int
+scheduler_stop(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+static int
+scheduler_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+	struct rr_scheduler_qp_ctx *rr_qp_ctx;
+
+	rr_qp_ctx = rte_zmalloc_socket(NULL, sizeof(*rr_qp_ctx), 0,
+			rte_socket_id());
+	if (!rr_qp_ctx) {
+		CS_LOG_ERR("failed allocate memory for private queue pair");
+		return -ENOMEM;
+	}
+
+	qp_ctx->private_qp_ctx = (void *)rr_qp_ctx;
+
+	return 0;
+}
+
+static int
+scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+static struct rte_cryptodev_scheduler_ops ops = {
+	slave_attach,
+	slave_detach,
+	scheduler_start,
+	scheduler_stop,
+	scheduler_config_qp,
+	scheduler_create_private_ctx
+};
+
+static struct rte_cryptodev_scheduler scheduler = {
+		.name = "roundrobin-scheduler",
+		.description = "scheduler which will round robin burst across "
+				"slave crypto devices",
+		.mode = CDEV_SCHED_MODE_ROUNDROBIN,
+		.ops = &ops
+};
+
+struct rte_cryptodev_scheduler *roundrobin_scheduler = &scheduler;
-- 
2.7.4

* [dpdk-dev] [PATCH v7 06/11] crypto/scheduler: register scheduler vdev driver
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
                             ` (4 preceding siblings ...)
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 05/11] crypto/scheduler: add round-robin scheduling mode Fan Zhang
@ 2017-01-24 16:23           ` Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 07/11] crypto/scheduler: register operation function pointer table Fan Zhang
                             ` (5 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds the crypto scheduler PMD's probe and remove functions and the
device's enqueue and dequeue burst functions. Finally, the cryptodev
scheduler PMD is registered.
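
For context, a minimal sketch of creating the device at runtime rather
than on the EAL command line; the slave vdev name and argument string
are illustrative, while the kvargs keys are the ones registered at the
bottom of this patch:

	/* create a scheduler vdev and attach one existing slave vdev;
	 * "cryptodev_aesni_mb_pmd" is a hypothetical slave name
	 */
	int ret = rte_eal_vdev_init(
			RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
			"slave=cryptodev_aesni_mb_pmd,socket_id=0");

	if (ret < 0)
		RTE_LOG(ERR, USER1, "failed to create scheduler vdev\n");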

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/scheduler/scheduler_pmd.c | 361 +++++++++++++++++++++++++++++++
 1 file changed, 361 insertions(+)
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd.c

diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
new file mode 100644
index 0000000..62418d0
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -0,0 +1,361 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_vdev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+#include <rte_reorder.h>
+#include <rte_cryptodev_scheduler.h>
+
+#include "scheduler_pmd_private.h"
+
+struct scheduler_init_params {
+	struct rte_crypto_vdev_init_params def_p;
+	uint32_t nb_slaves;
+	uint8_t slaves[MAX_SLAVES_NUM];
+};
+
+#define RTE_CRYPTODEV_VDEV_NAME				("name")
+#define RTE_CRYPTODEV_VDEV_SLAVE			("slave")
+#define RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG	("max_nb_queue_pairs")
+#define RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG	("max_nb_sessions")
+#define RTE_CRYPTODEV_VDEV_SOCKET_ID		("socket_id")
+
+const char *scheduler_valid_params[] = {
+	RTE_CRYPTODEV_VDEV_NAME,
+	RTE_CRYPTODEV_VDEV_SLAVE,
+	RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
+	RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
+	RTE_CRYPTODEV_VDEV_SOCKET_ID
+};
+
+static uint16_t
+scheduler_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_enqueue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static uint16_t
+scheduler_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct scheduler_qp_ctx *qp_ctx = queue_pair;
+	uint16_t processed_ops;
+
+	processed_ops = (*qp_ctx->schedule_dequeue)(qp_ctx, ops,
+			nb_ops);
+
+	return processed_ops;
+}
+
+static int
+attach_init_slaves(uint8_t scheduler_id,
+		const uint8_t *slaves, const uint8_t nb_slaves)
+{
+	uint8_t i;
+
+	for (i = 0; i < nb_slaves; i++) {
+		struct rte_cryptodev *dev =
+				rte_cryptodev_pmd_get_dev(slaves[i]);
+		int status = rte_cryptodev_scheduler_slave_attach(
+				scheduler_id, slaves[i]);
+
+		if (status < 0 || !dev) {
+			CS_LOG_ERR("Failed to attach slave cryptodev "
+					"%u.\n", slaves[i]);
+			return status;
+		}
+
+		RTE_LOG(INFO, PMD, "  Attached slave cryptodev %s\n",
+				dev->data->name);
+	}
+
+	return 0;
+}
+
+static int
+cryptodev_scheduler_create(const char *name,
+	struct scheduler_init_params *init_params)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (init_params->def_p.name[0] == '\0') {
+		int ret = rte_cryptodev_pmd_create_dev_name(
+				init_params->def_p.name,
+				RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+
+		if (ret < 0) {
+			CS_LOG_ERR("failed to create unique name");
+			return ret;
+		}
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(init_params->def_p.name,
+			sizeof(struct scheduler_ctx),
+			init_params->def_p.socket_id);
+	if (dev == NULL) {
+		CS_LOG_ERR("driver %s: failed to create cryptodev vdev",
+			name);
+		return -EFAULT;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+	dev->dev_ops = rte_crypto_scheduler_pmd_ops;
+
+	dev->enqueue_burst = scheduler_enqueue_burst;
+	dev->dequeue_burst = scheduler_dequeue_burst;
+
+	sched_ctx = dev->data->dev_private;
+	sched_ctx->max_nb_queue_pairs =
+			init_params->def_p.max_nb_queue_pairs;
+
+	return attach_init_slaves(dev->data->dev_id, init_params->slaves,
+			init_params->nb_slaves);
+}
+
+static int
+cryptodev_scheduler_remove(const char *name)
+{
+	struct rte_cryptodev *dev;
+	struct scheduler_ctx *sched_ctx;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	dev = rte_cryptodev_pmd_get_named_dev(name);
+	if (dev == NULL)
+		return -EINVAL;
+
+	sched_ctx = dev->data->dev_private;
+
+	if (sched_ctx->nb_slaves) {
+		uint32_t i;
+
+		for (i = 0; i < sched_ctx->nb_slaves; i++)
+			rte_cryptodev_scheduler_slave_detach(dev->data->dev_id,
+					sched_ctx->slaves[i].dev_id);
+	}
+
+	RTE_LOG(INFO, PMD, "Closing Crypto Scheduler device %s on numa "
+		"socket %u\n", name, rte_socket_id());
+
+	return 0;
+}
+
+static uint8_t
+number_of_sockets(void)
+{
+	int sockets = 0;
+	int i;
+	const struct rte_memseg *ms = rte_eal_get_physmem_layout();
+
+	for (i = 0; ((i < RTE_MAX_MEMSEG) && (ms[i].addr != NULL)); i++) {
+		if (sockets < ms[i].socket_id)
+			sockets = ms[i].socket_id;
+	}
+
+	/* Number of sockets = maximum socket_id + 1 */
+	return ++sockets;
+}
+
+/** Parse integer from integer argument */
+static int
+parse_integer_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	int *i = (int *) extra_args;
+
+	*i = atoi(value);
+	if (*i < 0) {
+		CS_LOG_ERR("Argument has to be positive.\n");
+		return -1;
+	}
+
+	return 0;
+}
+
+/** Parse name */
+static int
+parse_name_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	struct rte_crypto_vdev_init_params *params = extra_args;
+
+	if (strlen(value) >= RTE_CRYPTODEV_NAME_MAX_LEN - 1) {
+		CS_LOG_ERR("Invalid name %s, should be less than "
+				"%u bytes.\n", value,
+				RTE_CRYPTODEV_NAME_MAX_LEN - 1);
+		return -1;
+	}
+
+	strncpy(params->name, value, RTE_CRYPTODEV_NAME_MAX_LEN);
+
+	return 0;
+}
+
+/** Parse slave */
+static int
+parse_slave_arg(const char *key __rte_unused,
+		const char *value, void *extra_args)
+{
+	struct scheduler_init_params *param = extra_args;
+	struct rte_cryptodev *dev =
+			rte_cryptodev_pmd_get_named_dev(value);
+
+	if (!dev) {
+		RTE_LOG(ERR, PMD, "Invalid slave name %s.\n", value);
+		return -1;
+	}
+
+	if (param->nb_slaves >= MAX_SLAVES_NUM) {
+		CS_LOG_ERR("Too many slaves.\n");
+		return -1;
+	}
+
+	param->slaves[param->nb_slaves] = dev->data->dev_id;
+	param->nb_slaves++;
+
+	return 0;
+}
+
+static int
+scheduler_parse_init_params(struct scheduler_init_params *params,
+		const char *input_args)
+{
+	struct rte_kvargs *kvlist = NULL;
+	int ret = 0;
+
+	if (params == NULL)
+		return -EINVAL;
+
+	if (input_args) {
+		kvlist = rte_kvargs_parse(input_args,
+				scheduler_valid_params);
+		if (kvlist == NULL)
+			return -1;
+
+		ret = rte_kvargs_process(kvlist,
+				RTE_CRYPTODEV_VDEV_MAX_NB_QP_ARG,
+				&parse_integer_arg,
+				&params->def_p.max_nb_queue_pairs);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist,
+				RTE_CRYPTODEV_VDEV_MAX_NB_SESS_ARG,
+				&parse_integer_arg,
+				&params->def_p.max_nb_sessions);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SOCKET_ID,
+				&parse_integer_arg,
+				&params->def_p.socket_id);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_NAME,
+				&parse_name_arg,
+				&params->def_p);
+		if (ret < 0)
+			goto free_kvlist;
+
+		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SLAVE,
+				&parse_slave_arg, params);
+		if (ret < 0)
+			goto free_kvlist;
+
+		if (params->def_p.socket_id >= number_of_sockets()) {
+			CDEV_LOG_ERR("Invalid socket id specified to create "
+				"the virtual crypto device on");
+			ret = -EINVAL;
+			goto free_kvlist;
+		}
+	}
+
+free_kvlist:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static int
+cryptodev_scheduler_probe(const char *name, const char *input_args)
+{
+	struct scheduler_init_params init_params = {
+		.def_p = {
+			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+			RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+			rte_socket_id(),
+			""
+		},
+		.nb_slaves = 0,
+		.slaves = {0}
+	};
+
+	scheduler_parse_init_params(&init_params, input_args);
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.def_p.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.def_p.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.def_p.max_nb_sessions);
+	if (init_params.def_p.name[0] != '\0')
+		RTE_LOG(INFO, PMD, "  User defined name = %s\n",
+			init_params.def_p.name);
+
+	return cryptodev_scheduler_create(name, &init_params);
+}
+
+static struct rte_vdev_driver cryptodev_scheduler_pmd_drv = {
+	.probe = cryptodev_scheduler_probe,
+	.remove = cryptodev_scheduler_remove
+};
+
+RTE_PMD_REGISTER_VDEV(CRYPTODEV_NAME_SCHEDULER_PMD,
+	cryptodev_scheduler_pmd_drv);
+RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
+	"max_nb_queue_pairs=<int> "
+	"max_nb_sessions=<int> "
+	"socket_id=<int> "
+	"slave=<name>");
-- 
2.7.4

* [dpdk-dev] [PATCH v7 07/11] crypto/scheduler: register operation function pointer table
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
                             ` (5 preceding siblings ...)
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 06/11] crypto/scheduler: register scheduler vdev driver Fan Zhang
@ 2017-01-24 16:23           ` Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 08/11] crypto/scheduler: add scheduler PMD to DPDK compile system Fan Zhang
                             ` (4 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Implements all standard operations required by a cryptodev and
registers them in the cryptodev operation function pointer table.
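
These callbacks are not invoked directly; the cryptodev framework
dispatches to them through the public API. A minimal sketch of the
calls that exercise them (the device ID, queue depth and session pool
sizing are illustrative):

	uint8_t sched_dev_id = 0;	/* assumed scheduler device ID */
	struct rte_cryptodev_config config = {
		.nb_queue_pairs = 1,
		.socket_id = SOCKET_ID_ANY,
		.session_mp = { .nb_objs = 2048, .cache_size = 256 }
	};
	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };

	/* dev_configure -> scheduler_pmd_config */
	rte_cryptodev_configure(sched_dev_id, &config);
	/* queue_pair_setup -> scheduler_pmd_qp_setup */
	rte_cryptodev_queue_pair_setup(sched_dev_id, 0, &qp_conf,
			rte_cryptodev_socket_id(sched_dev_id));
	/* dev_start -> scheduler_pmd_start */
	rte_cryptodev_start(sched_dev_id);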

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/scheduler/scheduler_pmd_ops.c | 490 +++++++++++++++++++++++++++
 1 file changed, 490 insertions(+)
 create mode 100644 drivers/crypto/scheduler/scheduler_pmd_ops.c

diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
new file mode 100644
index 0000000..56624c7
--- /dev/null
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -0,0 +1,490 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+
+#include <rte_config.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_dev.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_reorder.h>
+
+#include "scheduler_pmd_private.h"
+
+/** Configure device */
+static int
+scheduler_pmd_config(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret = 0;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_configure)(slave_dev);
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
+static int
+update_reorder_buff(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (sched_ctx->reordering_enabled) {
+		char reorder_buff_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+		uint32_t buff_size = sched_ctx->nb_slaves * PER_SLAVE_BUFF_SIZE;
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (!buff_size)
+			return 0;
+
+		if (snprintf(reorder_buff_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"%s_rb_%u_%u", RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
+			dev->data->dev_id, qp_id) < 0) {
+			CS_LOG_ERR("failed to create unique reorder buffer "
+					"name");
+			return -ENOMEM;
+		}
+
+		qp_ctx->reorder_buf = rte_reorder_create(reorder_buff_name,
+				rte_socket_id(), buff_size);
+		if (!qp_ctx->reorder_buf) {
+			CS_LOG_ERR("failed to create reorder buffer");
+			return -ENOMEM;
+		}
+	} else {
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+	}
+
+	return 0;
+}
+
+/** Start device */
+static int
+scheduler_pmd_start(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	if (dev->data->dev_started)
+		return 0;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = update_reorder_buff(dev, i);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to update reorder buffer");
+			return ret;
+		}
+	}
+
+	if (sched_ctx->mode == CDEV_SCHED_MODE_NOT_SET) {
+		CS_LOG_ERR("Scheduler mode is not set");
+		return -1;
+	}
+
+	if (!sched_ctx->nb_slaves) {
+		CS_LOG_ERR("No slave in the scheduler");
+		return -1;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.slave_attach, -ENOTSUP);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if ((*sched_ctx->ops.slave_attach)(dev, slave_dev_id) < 0) {
+			CS_LOG_ERR("Failed to attach slave");
+			return -ENOTSUP;
+		}
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.scheduler_start, -ENOTSUP);
+
+	if ((*sched_ctx->ops.scheduler_start)(dev) < 0) {
+		CS_LOG_ERR("Scheduler start failed");
+		return -1;
+	}
+
+	/* start all slaves */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_start)(slave_dev);
+		if (ret < 0) {
+			CS_LOG_ERR("Failed to start slave dev %u",
+					slave_dev_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/** Stop device */
+static void
+scheduler_pmd_stop(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	if (!dev->data->dev_started)
+		return;
+
+	/* stop all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->dev_stop)(slave_dev);
+	}
+
+	if (*sched_ctx->ops.scheduler_stop)
+		(*sched_ctx->ops.scheduler_stop)(dev);
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+		if (*sched_ctx->ops.slave_detach)
+			(*sched_ctx->ops.slave_detach)(dev, slave_dev_id);
+	}
+}
+
+/** Close device */
+static int
+scheduler_pmd_close(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+	int ret;
+
+	/* the dev should be stopped before being closed */
+	if (dev->data->dev_started)
+		return -EBUSY;
+
+	/* close all slaves first */
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		ret = (*slave_dev->dev_ops->dev_close)(slave_dev);
+		if (ret < 0)
+			return ret;
+	}
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
+
+		if (qp_ctx->reorder_buf) {
+			rte_reorder_free(qp_ctx->reorder_buf);
+			qp_ctx->reorder_buf = NULL;
+		}
+
+		if (qp_ctx->private_qp_ctx) {
+			rte_free(qp_ctx->private_qp_ctx);
+			qp_ctx->private_qp_ctx = NULL;
+		}
+	}
+
+	if (sched_ctx->private_ctx)
+		rte_free(sched_ctx->private_ctx);
+
+	if (sched_ctx->capabilities)
+		rte_free(sched_ctx->capabilities);
+
+	return 0;
+}
+
+/** Get device statistics */
+static void
+scheduler_pmd_stats_get(struct rte_cryptodev *dev,
+	struct rte_cryptodev_stats *stats)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+		struct rte_cryptodev_stats slave_stats = {0};
+
+		(*slave_dev->dev_ops->stats_get)(slave_dev, &slave_stats);
+
+		stats->enqueued_count += slave_stats.enqueued_count;
+		stats->dequeued_count += slave_stats.dequeued_count;
+
+		stats->enqueue_err_count += slave_stats.enqueue_err_count;
+		stats->dequeue_err_count += slave_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+scheduler_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev *slave_dev =
+				rte_cryptodev_pmd_get_dev(slave_dev_id);
+
+		(*slave_dev->dev_ops->stats_reset)(slave_dev);
+	}
+}
+
+/** Get device info */
+static void
+scheduler_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	uint32_t max_nb_sessions = sched_ctx->nb_slaves ?
+			UINT32_MAX : RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS;
+	uint32_t i;
+
+	if (!dev_info)
+		return;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+		struct rte_cryptodev_info slave_info;
+
+		rte_cryptodev_info_get(slave_dev_id, &slave_info);
+		max_nb_sessions = slave_info.sym.max_nb_sessions <
+				max_nb_sessions ?
+				slave_info.sym.max_nb_sessions :
+				max_nb_sessions;
+	}
+
+	dev_info->dev_type = dev->dev_type;
+	dev_info->feature_flags = dev->feature_flags;
+	dev_info->capabilities = sched_ctx->capabilities;
+	dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
+	dev_info->sym.max_nb_sessions = max_nb_sessions;
+}
+
+/** Release queue pair */
+static int
+scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];
+
+	if (!qp_ctx)
+		return 0;
+
+	if (qp_ctx->reorder_buf)
+		rte_reorder_free(qp_ctx->reorder_buf);
+	if (qp_ctx->private_qp_ctx)
+		rte_free(qp_ctx->private_qp_ctx);
+
+	rte_free(qp_ctx);
+	dev->data->queue_pairs[qp_id] = NULL;
+
+	return 0;
+}
+
+/** Setup a queue pair */
+static int
+scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	__rte_unused const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+	struct scheduler_qp_ctx *qp_ctx;
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	if (snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"CRYTO_SCHE PMD %u QP %u",
+			dev->data->dev_id, qp_id) < 0) {
+		CS_LOG_ERR("Failed to create unique queue pair name");
+		return -EFAULT;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		scheduler_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp_ctx = rte_zmalloc_socket(name, sizeof(*qp_ctx), RTE_CACHE_LINE_SIZE,
+			socket_id);
+	if (qp_ctx == NULL)
+		return -ENOMEM;
+
+	dev->data->queue_pairs[qp_id] = qp_ctx;
+
+	if (*sched_ctx->ops.config_queue_pair) {
+		if ((*sched_ctx->ops.config_queue_pair)(dev, qp_id) < 0) {
+			CS_LOG_ERR("Unable to configure queue pair");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/** Start queue pair */
+static int
+scheduler_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+scheduler_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+scheduler_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+static uint32_t
+scheduler_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct scheduler_session);
+}
+
+static int
+config_slave_sess(struct scheduler_ctx *sched_ctx,
+		struct rte_crypto_sym_xform *xform,
+		struct scheduler_session *sess,
+		uint32_t create)
+{
+	uint32_t i;
+
+	for (i = 0; i < sched_ctx->nb_slaves; i++) {
+		struct scheduler_slave *slave = &sched_ctx->slaves[i];
+		struct rte_cryptodev *dev =
+				rte_cryptodev_pmd_get_dev(slave->dev_id);
+
+		if (sess->sessions[i]) {
+			if (create)
+				continue;
+			/* !create */
+			(*dev->dev_ops->session_clear)(dev,
+					(void *)sess->sessions[i]);
+			sess->sessions[i] = NULL;
+		} else {
+			if (!create)
+				continue;
+			/* create */
+			sess->sessions[i] =
+					rte_cryptodev_sym_session_create(
+							slave->dev_id, xform);
+			if (!sess->sessions[i]) {
+				config_slave_sess(sched_ctx, NULL, sess, 0);
+				return -1;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+scheduler_pmd_session_clear(struct rte_cryptodev *dev,
+	void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	config_slave_sess(sched_ctx, NULL, sess, 0);
+
+	memset(sess, 0, sizeof(struct scheduler_session));
+}
+
+static void *
+scheduler_pmd_session_configure(struct rte_cryptodev *dev,
+	struct rte_crypto_sym_xform *xform, void *sess)
+{
+	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+	if (config_slave_sess(sched_ctx, xform, sess, 1) < 0) {
+		CS_LOG_ERR("unabled to config sym session");
+		return NULL;
+	}
+
+	return sess;
+}
+
+struct rte_cryptodev_ops scheduler_pmd_ops = {
+		.dev_configure		= scheduler_pmd_config,
+		.dev_start		= scheduler_pmd_start,
+		.dev_stop		= scheduler_pmd_stop,
+		.dev_close		= scheduler_pmd_close,
+
+		.stats_get		= scheduler_pmd_stats_get,
+		.stats_reset		= scheduler_pmd_stats_reset,
+
+		.dev_infos_get		= scheduler_pmd_info_get,
+
+		.queue_pair_setup	= scheduler_pmd_qp_setup,
+		.queue_pair_release	= scheduler_pmd_qp_release,
+		.queue_pair_start	= scheduler_pmd_qp_start,
+		.queue_pair_stop	= scheduler_pmd_qp_stop,
+		.queue_pair_count	= scheduler_pmd_qp_count,
+
+		.session_get_size	= scheduler_pmd_session_get_size,
+		.session_configure	= scheduler_pmd_session_configure,
+		.session_clear		= scheduler_pmd_session_clear,
+};
+
+struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
-- 
2.7.4

* [dpdk-dev] [PATCH v7 08/11] crypto/scheduler: add scheduler PMD to DPDK compile system
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
                             ` (6 preceding siblings ...)
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 07/11] crypto/scheduler: register operation function pointer table Fan Zhang
@ 2017-01-24 16:23           ` Fan Zhang
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 09/11] crypto/scheduler: add scheduler PMD config options Fan Zhang
                             ` (3 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds the Makefile for the scheduler cryptodev PMD and updates the
existing Makefiles. Unlike other cryptodev PMDs, the scheduler PMD is
required to be built as a shared library.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/crypto/Makefile           |  3 +-
 drivers/crypto/scheduler/Makefile | 66 +++++++++++++++++++++++++++++++++++++++
 mk/rte.app.mk                     |  6 +++-
 3 files changed, 73 insertions(+), 2 deletions(-)
 create mode 100644 drivers/crypto/scheduler/Makefile

diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 77b02cf..a5a246b 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -36,6 +36,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO) += armv8
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_OPENSSL) += openssl
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_ZUC) += zuc
diff --git a/drivers/crypto/scheduler/Makefile b/drivers/crypto/scheduler/Makefile
new file mode 100644
index 0000000..0cce6f2
--- /dev/null
+++ b/drivers/crypto/scheduler/Makefile
@@ -0,0 +1,66 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_crypto_scheduler.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_crypto_scheduler_version.map
+
+#
+# Export include files
+#
+SYMLINK-y-include += rte_cryptodev_scheduler_operations.h
+SYMLINK-y-include += rte_cryptodev_scheduler.h
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_pmd_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += rte_cryptodev_scheduler.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += scheduler_roundrobin.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_cryptodev
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_kvargs
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += lib/librte_reorder
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index a5daa84..0d0a970 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -70,7 +70,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT)           += -lrte_port
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PDUMP)          += -lrte_pdump
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)    += -lrte_distributor
-_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag
 _LDLIBS-$(CONFIG_RTE_LIBRTE_METER)          += -lrte_meter
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED)          += -lrte_sched
@@ -99,10 +98,15 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE)        += -lrte_cmdline
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
+_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER)        += -lrte_reorder
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lrte_pmd_xenvirt -lxenstore
 
+ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER) += -lrte_pmd_crypto_scheduler
+endif
+
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
-- 
2.7.4

* [dpdk-dev] [PATCH v7 09/11] crypto/scheduler: add scheduler PMD config options
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
                             ` (7 preceding siblings ...)
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 08/11] crypto/scheduler: add scheduler PMD to DPDK compile system Fan Zhang
@ 2017-01-24 16:23           ` Fan Zhang
  2017-01-24 16:24           ` [dpdk-dev] [PATCH v7 10/11] app/test: add unit test for cryptodev scheduler PMD Fan Zhang
                             ` (2 subsequent siblings)
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:23 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds scheduler PMD enable and debug flags to config/common_base.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 config/common_base | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/config/common_base b/config/common_base
index b9fb8e2..cd4a0f3 100644
--- a/config/common_base
+++ b/config/common_base
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+#   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
 #   All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
@@ -434,6 +434,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC=n
 CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
 
 #
+# Compile PMD for Crypto Scheduler device
+#
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=n
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER_DEBUG=n
+
+#
 # Compile PMD for NULL Crypto device
 #
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
-- 
2.7.4

* [dpdk-dev] [PATCH v7 10/11] app/test: add unit test for cryptodev scheduler PMD
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
                             ` (8 preceding siblings ...)
  2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 09/11] crypto/scheduler: add scheduler PMD config options Fan Zhang
@ 2017-01-24 16:24           ` Fan Zhang
  2017-01-24 16:24           ` [dpdk-dev] [PATCH v7 11/11] crypto/scheduler: add documentation Fan Zhang
  2017-01-24 16:29           ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd De Lara Guarch, Pablo
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:24 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

As with the other cryptodev PMDs, it is necessary to unit test the
scheduler PMD. Currently the test attaches two AESNI-MB cryptodev PMDs
as slaves, sets the scheduling mode to round-robin, and runs almost
all of the AESNI-MB test items (except for the sessionless tests). In
the end, the slaves are detached.
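
Condensed, the scheduler-specific flow the test exercises looks like
this (sched_id and aesni_ids mirror the variables in the test code
below; error checking is elided):

	/* attach the two AESNI-MB slaves to the scheduler */
	rte_cryptodev_scheduler_slave_attach(sched_id, aesni_ids[0]);
	rte_cryptodev_scheduler_slave_attach(sched_id, aesni_ids[1]);

	/* select round-robin scheduling before starting the device */
	rte_crpytodev_scheduler_mode_set(sched_id,
			CDEV_SCHED_MODE_ROUNDROBIN);

	/* ... run the blockcipher test cases ... */

	/* detach the slaves once the tests have finished */
	rte_cryptodev_scheduler_slave_detach(sched_id, aesni_ids[0]);
	rte_cryptodev_scheduler_slave_detach(sched_id, aesni_ids[1]);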

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 app/test/test_cryptodev.c                   | 241 +++++++++++++++++++++++++++-
 app/test/test_cryptodev_aes_test_vectors.h  | 101 ++++++++----
 app/test/test_cryptodev_blockcipher.c       |   6 +-
 app/test/test_cryptodev_blockcipher.h       |   3 +-
 app/test/test_cryptodev_hash_test_vectors.h |  38 +++--
 5 files changed, 338 insertions(+), 51 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 0f0cf4d..357a92e 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2015-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -40,6 +40,11 @@
 #include <rte_cryptodev.h>
 #include <rte_cryptodev_pmd.h>
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+#include <rte_cryptodev_scheduler.h>
+#include <rte_cryptodev_scheduler_operations.h>
+#endif
+
 #include "test.h"
 #include "test_cryptodev.h"
 
@@ -159,7 +164,7 @@ testsuite_setup(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct rte_cryptodev_info info;
-	unsigned i, nb_devs, dev_id;
+	uint32_t i = 0, nb_devs, dev_id;
 	int ret;
 	uint16_t qp_id;
 
@@ -370,6 +375,29 @@ testsuite_setup(void)
 		}
 	}
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+	if (gbl_cryptodev_type == RTE_CRYPTODEV_SCHEDULER_PMD) {
+
+#ifndef RTE_LIBRTE_PMD_AESNI_MB
+		RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
+			" enabled in config file to run this testsuite.\n");
+		return TEST_FAILED;
+#endif
+		nb_devs = rte_cryptodev_count_devtype(
+				RTE_CRYPTODEV_SCHEDULER_PMD);
+		if (nb_devs < 1) {
+			ret = rte_eal_vdev_init(
+				RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
+				NULL);
+
+			TEST_ASSERT(ret == 0,
+				"Failed to create instance %u of"
+				" pmd : %s",
+				i, RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+		}
+	}
+#endif /* RTE_LIBRTE_PMD_CRYPTO_SCHEDULER */
+
 #ifndef RTE_LIBRTE_PMD_QAT
 	if (gbl_cryptodev_type == RTE_CRYPTODEV_QAT_SYM_PMD) {
 		RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_QAT must be enabled "
@@ -1535,6 +1563,58 @@ test_AES_chain_mb_all(void)
 	return TEST_SUCCESS;
 }
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+
+static int
+test_AES_cipheronly_scheduler_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_SCHEDULER_PMD,
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_AES_chain_scheduler_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_SCHEDULER_PMD,
+		BLKCIPHER_AES_CHAIN_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_authonly_scheduler_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool, ts_params->valid_devs[0],
+		RTE_CRYPTODEV_SCHEDULER_PMD,
+		BLKCIPHER_AUTHONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+#endif /* RTE_LIBRTE_PMD_CRYPTO_SCHEDULER */
+
 static int
 test_AES_chain_openssl_all(void)
 {
@@ -7292,6 +7372,150 @@ auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt(void)
 			&aes128cbc_hmac_sha1_test_vector);
 }
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+
+/* global AESNI slave IDs for the scheduler test */
+uint8_t aesni_ids[2];
+
+static int
+test_scheduler_attach_slave_op(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint8_t sched_id = ts_params->valid_devs[0];
+	uint32_t nb_devs, qp_id, i, nb_devs_attached = 0;
+	int ret;
+	struct rte_cryptodev_config config = {
+			.nb_queue_pairs = 8,
+			.socket_id = SOCKET_ID_ANY,
+			.session_mp = {
+				.nb_objs = 2048,
+				.cache_size = 256
+			}
+	};
+	struct rte_cryptodev_qp_conf qp_conf = {2048};
+
+	/* create 2 AESNI_MB if necessary */
+	nb_devs = rte_cryptodev_count_devtype(
+			RTE_CRYPTODEV_AESNI_MB_PMD);
+	if (nb_devs < 2) {
+		for (i = nb_devs; i < 2; i++) {
+			ret = rte_eal_vdev_init(
+				RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL);
+
+			TEST_ASSERT(ret == 0,
+				"Failed to create instance %u of"
+				" pmd : %s",
+				i, RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+		}
+	}
+
+	/* attach 2 AESNI_MB cdevs */
+	for (i = 0; i < rte_cryptodev_count() && nb_devs_attached < 2;
+			i++) {
+		struct rte_cryptodev_info info;
+
+		rte_cryptodev_info_get(i, &info);
+		if (info.dev_type != RTE_CRYPTODEV_AESNI_MB_PMD)
+			continue;
+
+		ret = rte_cryptodev_configure(i, &config);
+		TEST_ASSERT(ret == 0,
+			"Failed to configure device %u of pmd : %s", i,
+			RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+
+		for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
+			TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+				i, qp_id, &qp_conf,
+				rte_cryptodev_socket_id(i)),
+				"Failed to setup queue pair %u on "
+				"cryptodev %u", qp_id, i);
+		}
+
+		ret = rte_cryptodev_scheduler_slave_attach(sched_id,
+				(uint8_t)i);
+
+		TEST_ASSERT(ret == 0,
+			"Failed to attach device %u of pmd : %s", i,
+			RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+
+		aesni_ids[nb_devs_attached] = (uint8_t)i;
+
+		nb_devs_attached++;
+	}
+
+	return 0;
+}
+
+static int
+test_scheduler_detach_slave_op(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint8_t sched_id = ts_params->valid_devs[0];
+	uint32_t i;
+	int ret;
+
+	for (i = 0; i < 2; i++) {
+		ret = rte_cryptodev_scheduler_slave_detach(sched_id,
+				aesni_ids[i]);
+		TEST_ASSERT(ret == 0,
+			"Failed to detach device %u", aesni_ids[i]);
+	}
+
+	return 0;
+}
+
+static int
+test_scheduler_mode_op(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	uint8_t sched_id = ts_params->valid_devs[0];
+	struct rte_cryptodev_scheduler_ops op = {0};
+	struct rte_cryptodev_scheduler dummy_scheduler = {
+		.description = "dummy scheduler to test mode",
+		.name = "dummy scheduler",
+		.mode = CDEV_SCHED_MODE_USERDEFINED,
+		.ops = &op
+	};
+	int ret;
+
+	/* set user defined mode */
+	ret = rte_cryptodev_scheduler_load_user_scheduler(sched_id,
+			&dummy_scheduler);
+	TEST_ASSERT(ret == 0,
+		"Failed to set cdev %u to user defined mode", sched_id);
+
+	/* set round robin mode */
+	ret = rte_crpytodev_scheduler_mode_set(sched_id,
+			CDEV_SCHED_MODE_ROUNDROBIN);
+	TEST_ASSERT(ret == 0,
+		"Failed to set cdev %u to round-robin mode", sched_id);
+	TEST_ASSERT(rte_crpytodev_scheduler_mode_get(sched_id) ==
+			CDEV_SCHED_MODE_ROUNDROBIN, "Scheduling Mode "
+					"does not match");
+
+	return 0;
+}
+
+static struct unit_test_suite cryptodev_scheduler_testsuite  = {
+	.suite_name = "Crypto Device Scheduler Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL, test_scheduler_attach_slave_op),
+		TEST_CASE_ST(NULL, NULL, test_scheduler_mode_op),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_chain_scheduler_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_cipheronly_scheduler_all),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_authonly_scheduler_all),
+		TEST_CASE_ST(NULL, NULL, test_scheduler_detach_slave_op),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+#endif /* RTE_LIBRTE_PMD_CRYPTO_SCHEDULER */
+
 static struct unit_test_suite cryptodev_qat_testsuite  = {
 	.suite_name = "Crypto QAT Unit Test Suite",
 	.setup = testsuite_setup,
@@ -7973,6 +8197,19 @@ test_cryptodev_armv8(void)
 	return unit_test_suite_runner(&cryptodev_armv8_testsuite);
 }
 
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+
+static int
+test_cryptodev_scheduler(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_cryptodev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+	return unit_test_suite_runner(&cryptodev_scheduler_testsuite);
+}
+
+REGISTER_TEST_COMMAND(cryptodev_scheduler_autotest, test_cryptodev_scheduler);
+
+#endif
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_openssl_autotest, test_cryptodev_openssl);
diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h
index f0f37ed..f3fbef1 100644
--- a/app/test/test_cryptodev_aes_test_vectors.h
+++ b/app/test/test_cryptodev_aes_test_vectors.h
@@ -1,7 +1,7 @@
 /*
  *   BSD LICENSE
  *
- *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -924,7 +924,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CTR HMAC-SHA1 Decryption Digest "
@@ -933,21 +934,24 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR XCBC Encryption Digest",
 		.test_data = &aes_test_data_2,
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR XCBC Decryption Digest Verify",
 		.test_data = &aes_test_data_2,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
@@ -957,7 +961,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
 			BLOCKCIPHER_TEST_FEATURE_OOP,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CTR HMAC-SHA1 Encryption Digest",
@@ -965,7 +970,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CTR HMAC-SHA1 Decryption Digest "
@@ -974,7 +980,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest",
@@ -983,7 +990,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8 |
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -1001,7 +1009,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 			BLOCKCIPHER_TEST_FEATURE_OOP,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -1011,7 +1020,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8 |
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -1027,7 +1037,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8 |
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA256 Encryption Digest "
@@ -1044,7 +1055,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8 |
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA256 Decryption Digest "
@@ -1059,7 +1071,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
@@ -1088,7 +1101,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
@@ -1099,21 +1113,24 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 			BLOCKCIPHER_TEST_FEATURE_OOP,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC XCBC Encryption Digest",
 		.test_data = &aes_test_data_7,
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC XCBC Decryption Digest Verify",
 		.test_data = &aes_test_data_7,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -1141,7 +1158,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA224 Decryption Digest "
@@ -1150,7 +1168,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA384 Encryption Digest",
@@ -1158,7 +1177,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA384 Decryption Digest "
@@ -1167,7 +1187,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_QAT
+			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -1197,7 +1218,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CBC Decryption",
@@ -1205,7 +1227,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CBC Encryption",
@@ -1213,7 +1236,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CBC Encryption Scater gather",
@@ -1229,7 +1253,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CBC Encryption",
@@ -1237,7 +1262,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CBC Decryption",
@@ -1245,7 +1271,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CTR Encryption",
@@ -1253,7 +1280,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-128-CTR Decryption",
@@ -1261,7 +1289,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR Encryption",
@@ -1269,7 +1298,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-192-CTR Decryption",
@@ -1277,7 +1307,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CTR Encryption",
@@ -1285,7 +1316,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "AES-256-CTR Decryption",
@@ -1293,7 +1325,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
 		.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 };
 
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index a48540c..da87368 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2015-2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2015-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -106,6 +106,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
 		digest_len = tdata->digest.len;
 		break;
 	case RTE_CRYPTODEV_AESNI_MB_PMD:
+	case RTE_CRYPTODEV_SCHEDULER_PMD:
 		digest_len = tdata->digest.truncated_len;
 		break;
 	default:
@@ -649,6 +650,9 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
 	case RTE_CRYPTODEV_ARMV8_PMD:
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8;
 		break;
+	case RTE_CRYPTODEV_SCHEDULER_PMD:
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
+		break;
 	default:
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 		break;
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index 91e9858..053aaa1 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -51,6 +51,7 @@
 #define BLOCKCIPHER_TEST_TARGET_PMD_QAT			0x0002 /* QAT flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL	0x0004 /* SW OPENSSL flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_ARMV8	0x0008 /* ARMv8 flag */
+#define BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER	0x0010 /* Scheduler */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)
diff --git a/app/test/test_cryptodev_hash_test_vectors.h b/app/test/test_cryptodev_hash_test_vectors.h
index a8f9da0..3214f9a 100644
--- a/app/test/test_cryptodev_hash_test_vectors.h
+++ b/app/test/test_cryptodev_hash_test_vectors.h
@@ -1,7 +1,7 @@
 /*
  *   BSD LICENSE
  *
- *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -365,14 +365,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_md5_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-MD5 Digest Verify",
 		.test_data = &hmac_md5_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA1 Digest",
@@ -391,14 +393,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha1_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA1 Digest Verify",
 		.test_data = &hmac_sha1_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA224 Digest",
@@ -417,14 +421,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha224_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA224 Digest Verify",
 		.test_data = &hmac_sha224_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA256 Digest",
@@ -443,14 +449,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha256_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA256 Digest Verify",
 		.test_data = &hmac_sha256_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA384 Digest",
@@ -469,14 +477,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha384_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA384 Digest Verify",
 		.test_data = &hmac_sha384_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "SHA512 Digest",
@@ -495,14 +505,16 @@ static const struct blockcipher_test_case hash_test_cases[] = {
 		.test_data = &hmac_sha512_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 	{
 		.test_descr = "HMAC-SHA512 Digest Verify",
 		.test_data = &hmac_sha512_test_vector,
 		.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
-			BLOCKCIPHER_TEST_TARGET_PMD_MB
+			BLOCKCIPHER_TEST_TARGET_PMD_MB |
+			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER
 	},
 };
 
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [dpdk-dev] [PATCH v7 11/11] crypto/scheduler: add documentation
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
                             ` (9 preceding siblings ...)
  2017-01-24 16:24           ` [dpdk-dev] [PATCH v7 10/11] app/test: add unit test for cryptodev scheduler PMD Fan Zhang
@ 2017-01-24 16:24           ` Fan Zhang
  2017-01-24 16:29           ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd De Lara Guarch, Pablo
  11 siblings, 0 replies; 42+ messages in thread
From: Fan Zhang @ 2017-01-24 16:24 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, pablo.de.lara.guarch

Adds documentation for the cryptodev scheduler PMD: overview,
limitations, build instructions, scheduling modes, etc.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 doc/guides/cryptodevs/img/scheduler-overview.svg | 277 +++++++++++++++++++++++
 doc/guides/cryptodevs/index.rst                  |   3 +-
 doc/guides/cryptodevs/scheduler.rst              | 128 +++++++++++
 3 files changed, 407 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/cryptodevs/img/scheduler-overview.svg
 create mode 100644 doc/guides/cryptodevs/scheduler.rst

diff --git a/doc/guides/cryptodevs/img/scheduler-overview.svg b/doc/guides/cryptodevs/img/scheduler-overview.svg
new file mode 100644
index 0000000..82bb775
--- /dev/null
+++ b/doc/guides/cryptodevs/img/scheduler-overview.svg
@@ -0,0 +1,277 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export scheduler-fan.svg Page-1 -->
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+		xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="6.81229in" height="3.40992in"
+		viewBox="0 0 490.485 245.514" xml:space="preserve" color-interpolation-filters="sRGB" class="st10">
+	<v:documentProperties v:langID="1033" v:metric="true" v:viewMarkup="false"/>
+
+	<style type="text/css">
+	<![CDATA[
+		.st1 {visibility:visible}
+		.st2 {fill:#fec000;fill-opacity:0.25;filter:url(#filter_2);stroke:#fec000;stroke-opacity:0.25}
+		.st3 {fill:#cc3399;stroke:#ff8c00;stroke-width:3}
+		.st4 {fill:#ffffff;font-family:Calibri;font-size:1.33333em}
+		.st5 {fill:#ff9900;stroke:#ff8c00;stroke-width:3}
+		.st6 {fill:#ffffff;font-family:Calibri;font-size:1.33333em;font-weight:bold}
+		.st7 {fill:#ffc000;stroke:#ffffff;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.5}
+		.st8 {marker-end:url(#mrkr4-40);stroke:#ff0000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.5}
+		.st9 {fill:#ff0000;fill-opacity:1;stroke:#ff0000;stroke-opacity:1;stroke-width:0.37313432835821}
+		.st10 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+	]]>
+	</style>
+
+	<defs id="Markers">
+		<g id="lend4">
+			<path d="M 2 1 L 0 0 L 2 -1 L 2 1 " style="stroke:none"/>
+		</g>
+		<marker id="mrkr4-40" class="st9" v:arrowType="4" v:arrowSize="2" v:setback="5.36" refX="-5.36" orient="auto"
+				markerUnits="strokeWidth" overflow="visible">
+			<use xlink:href="#lend4" transform="scale(-2.68,-2.68) "/>
+		</marker>
+	</defs>
+	<defs id="Filters">
+		<filter id="filter_2">
+			<feGaussianBlur stdDeviation="2"/>
+		</filter>
+	</defs>
+	<g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+		<title>Page-1</title>
+		<v:pageProperties v:drawingScale="0.0393701" v:pageScale="0.0393701" v:drawingUnits="24" v:shadowOffsetX="8.50394"
+				v:shadowOffsetY="-8.50394"/>
+		<v:layer v:name="Connector" v:index="0"/>
+		<g id="shape31-1" v:mID="31" v:groupContext="shape" transform="translate(4.15435,-179.702)">
+			<title>Rounded Rectangle.55</title>
+			<desc>User Application</desc>
+			<v:userDefs>
+				<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+				<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+			</v:userDefs>
+			<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+			<v:textRect cx="240.743" cy="214.108" width="481.49" height="62.8119"/>
+			<g id="shadow31-2" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+					transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+				<path d="M11.05 245.51 L470.43 245.51 A11.0507 11.0507 -180 0 0 481.49 234.46 L481.49 193.75 A11.0507 11.0507 -180
+							 0 0 470.43 182.7 L11.05 182.7 A11.0507 11.0507 -180 0 0 -0 193.75 L0 234.46 A11.0507 11.0507 -180 0
+							 0 11.05 245.51 Z" class="st2"/>
+			</g>
+			<path d="M11.05 245.51 L470.43 245.51 A11.0507 11.0507 -180 0 0 481.49 234.46 L481.49 193.75 A11.0507 11.0507 -180 0
+						 0 470.43 182.7 L11.05 182.7 A11.0507 11.0507 -180 0 0 -0 193.75 L0 234.46 A11.0507 11.0507 -180 0 0 11.05
+						 245.51 Z" class="st3"/>
+			<text x="187.04" y="218.91" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>User Application</text>		</g>
+		<g id="shape135-7" v:mID="135" v:groupContext="shape" transform="translate(4.15435,-6.4728)">
+			<title>Rounded Rectangle.135</title>
+			<desc>Cryptodev</desc>
+			<v:userDefs>
+				<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+			</v:userDefs>
+			<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+			<v:textRect cx="72.0307" cy="230.549" width="144.07" height="29.9308"/>
+			<g id="shadow135-8" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+					transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+				<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180
+							 0 0 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0
+							 0 3.31 245.51 Z" class="st2"/>
+			</g>
+			<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180 0 0
+						 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0 0 3.31 245.51
+						 Z" class="st5"/>
+			<text x="38.46" y="235.35" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Cryptodev</text>		</g>
+		<g id="shape136-13" v:mID="136" v:groupContext="shape" transform="translate(172.866,-6.4728)">
+			<title>Rounded Rectangle.136</title>
+			<desc>Cryptodev</desc>
+			<v:userDefs>
+				<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+			</v:userDefs>
+			<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+			<v:textRect cx="72.0307" cy="230.549" width="144.07" height="29.9308"/>
+			<g id="shadow136-14" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+					transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+				<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180
+							 0 0 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0
+							 0 3.31 245.51 Z" class="st2"/>
+			</g>
+			<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180 0 0
+						 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0 0 3.31 245.51
+						 Z" class="st5"/>
+			<text x="38.46" y="235.35" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Cryptodev</text>		</g>
+		<g id="shape137-19" v:mID="137" v:groupContext="shape" transform="translate(341.578,-6.4728)">
+			<title>Rounded Rectangle.137</title>
+			<desc>Cryptodev</desc>
+			<v:userDefs>
+				<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+				<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+				<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+				<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.045922865409173):1"/>
+				<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+			</v:userDefs>
+			<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+			<v:textRect cx="72.0307" cy="230.549" width="144.07" height="29.9308"/>
+			<g id="shadow137-20" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+					transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+				<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180
+							 0 0 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0
+							 0 3.31 245.51 Z" class="st2"/>
+			</g>
+			<path d="M3.31 245.51 L140.76 245.51 A3.30639 3.30639 -180 0 0 144.06 242.21 L144.06 218.89 A3.30639 3.30639 -180 0 0
+						 140.76 215.58 L3.31 215.58 A3.30639 3.30639 -180 0 0 0 218.89 L0 242.21 A3.30639 3.30639 -180 0 0 3.31 245.51
+						 Z" class="st5"/>
+			<text x="38.46" y="235.35" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Cryptodev</text>		</g>
+		<g id="group139-25" transform="translate(4.15435,-66.8734)" v:mID="139" v:groupContext="group">
+			<title>Sheet.139</title>
+			<g id="shape33-26" v:mID="33" v:groupContext="shape">
+				<title>Rounded Rectangle.40</title>
+				<desc>Cryptodev Scheduler</desc>
+				<v:userDefs>
+					<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+					<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+					<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+					<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+					<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+					<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+					<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15348434426561):1"/>
+					<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+				</v:userDefs>
+				<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197" v:verticalAlign="0"/>
+				<v:textRect cx="240.743" cy="204.056" width="481.49" height="82.916"/>
+				<g id="shadow33-27" v:groupContext="shadow" v:shadowOffsetX="0.3456" v:shadowOffsetY="-1.9728" v:shadowType="1"
+						transform="matrix(1,0,0,1,0.3456,1.9728)" class="st1">
+					<path d="M11.05 245.51 L470.43 245.51 A11.0507 11.0507 -180 0 0 481.49 234.46 L481.49 173.65 A11.0507 11.0507
+								 -180 0 0 470.43 162.6 L11.05 162.6 A11.0507 11.0507 -180 0 0 0 173.65 L0 234.46 A11.0507 11.0507
+								 -180 0 0 11.05 245.51 Z" class="st2"/>
+				</g>
+				<path d="M11.05 245.51 L470.43 245.51 A11.0507 11.0507 -180 0 0 481.49 234.46 L481.49 173.65 A11.0507 11.0507 -180
+							 0 0 470.43 162.6 L11.05 162.6 A11.0507 11.0507 -180 0 0 0 173.65 L0 234.46 A11.0507 11.0507 -180 0 0
+							 11.05 245.51 Z" class="st5"/>
+				<text x="171.72" y="181" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Cryptodev Scheduler</text>			</g>
+			<g id="shape138-32" v:mID="138" v:groupContext="shape" transform="translate(24.6009,-12.5889)">
+				<title>Rounded Rectangle.138</title>
+				<desc>Crypto Op Distribution Mechanism</desc>
+				<v:userDefs>
+					<v:ud v:nameU="CTypeTopLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeTopRightSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeBotLeftSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CTypeBotRightSnip" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="CornerLockHoriz" v:prompt="" v:val="VT0(1):5"/>
+					<v:ud v:nameU="CornerLockVert" v:prompt="" v:val="VT0(1):5"/>
+					<v:ud v:nameU="CornerLockDiag" v:prompt="" v:val="VT0(0):5"/>
+					<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.15748031496063):24"/>
+					<v:ud v:nameU="visVersion" v:prompt="" v:val="VT0(15):26"/>
+					<v:ud v:nameU="TopLeftOffset" v:prompt="" v:val="VT0(0.13780016666367):1"/>
+					<v:ud v:nameU="TopRightOffset" v:prompt="" v:val="VT0(0.13780016666367):1"/>
+					<v:ud v:nameU="BotLeftOffset" v:prompt="" v:val="VT0(0.13780016666367):1"/>
+					<v:ud v:nameU="BotRightOffset" v:prompt="" v:val="VT0(0.13780016666367):1"/>
+					<v:ud v:nameU="msvThemeColors" v:val="VT0(254):26"/>
+				</v:userDefs>
+				<v:textBlock v:margins="rect(4,4,4,4)" v:tabSpace="42.5197"/>
+				<v:textRect cx="216.142" cy="230.549" width="432.29" height="29.9308"/>
+				<path d="M9.92 245.51 L422.36 245.51 A9.92145 9.92145 -180 0 0 432.28 235.59 L432.28 225.51 A9.92145 9.92145 -180
+							 0 0 422.36 215.58 L9.92 215.58 A9.92145 9.92145 -180 0 0 0 225.51 L0 235.59 A9.92145 9.92145 -180 0
+							 0 9.92 245.51 Z" class="st7"/>
+				<text x="103.11" y="235.35" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Crypto Op Distribution Mechanism</text>			</g>
+		</g>
+		<g id="shape140-35" v:mID="140" v:groupContext="shape" v:layerMember="0" transform="translate(234.378,-149.789)">
+			<title>Dynamic connector.229</title>
+			<path d="M7.09 245.51 L7.09 223.64" class="st8"/>
+		</g>
+		<g id="shape141-41" v:mID="141" v:groupContext="shape" v:layerMember="0" transform="translate(248.551,-179.702)">
+			<title>Dynamic connector.141</title>
+			<path d="M7.09 245.51 L7.09 267.39" class="st8"/>
+		</g>
+		<g id="shape142-46" v:mID="142" v:groupContext="shape" v:layerMember="0" transform="translate(71.3856,-35.6203)">
+			<title>Dynamic connector.142</title>
+			<path d="M7.09 245.51 L7.09 223.64" class="st8"/>
+		</g>
+		<g id="shape143-51" v:mID="143" v:groupContext="shape" v:layerMember="0" transform="translate(85.5588,-65.5333)">
+			<title>Dynamic connector.143</title>
+			<path d="M7.09 245.51 L7.09 267.39" class="st8"/>
+		</g>
+		<g id="shape144-56" v:mID="144" v:groupContext="shape" v:layerMember="0" transform="translate(234.378,-35.6203)">
+			<title>Dynamic connector.144</title>
+			<path d="M7.09 245.51 L7.09 223.64" class="st8"/>
+		</g>
+		<g id="shape145-61" v:mID="145" v:groupContext="shape" v:layerMember="0" transform="translate(248.551,-65.5333)">
+			<title>Dynamic connector.145</title>
+			<path d="M7.09 245.51 L7.09 267.39" class="st8"/>
+		</g>
+		<g id="shape146-66" v:mID="146" v:groupContext="shape" v:layerMember="0" transform="translate(397.37,-34.837)">
+			<title>Dynamic connector.146</title>
+			<path d="M7.09 245.51 L7.09 223.64" class="st8"/>
+		</g>
+		<g id="shape147-71" v:mID="147" v:groupContext="shape" v:layerMember="0" transform="translate(411.543,-64.75)">
+			<title>Dynamic connector.147</title>
+			<path d="M7.09 245.51 L7.09 267.39" class="st8"/>
+		</g>
+	</g>
+</svg>
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 06c3f6e..0b50600 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -1,5 +1,5 @@
 ..  BSD LICENSE
-    Copyright(c) 2015 - 2016 Intel Corporation. All rights reserved.
+    Copyright(c) 2015 - 2017 Intel Corporation. All rights reserved.
 
     Redistribution and use in source and binary forms, with or without
     modification, are permitted provided that the following conditions
@@ -42,6 +42,7 @@ Crypto Device Drivers
     kasumi
     openssl
     null
+    scheduler
     snow3g
     qat
     zuc
diff --git a/doc/guides/cryptodevs/scheduler.rst b/doc/guides/cryptodevs/scheduler.rst
new file mode 100644
index 0000000..70fb62e
--- /dev/null
+++ b/doc/guides/cryptodevs/scheduler.rst
@@ -0,0 +1,128 @@
+..  BSD LICENSE
+    Copyright(c) 2017 Intel Corporation. All rights reserved.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Cryptodev Scheduler Poll Mode Driver Library
+============================================
+
+The Scheduler PMD is a software crypto PMD to which hardware and/or
+software cryptodevs can be attached, and which distributes ingress
+crypto ops among them according to a configured scheduling mode.
+
+.. figure:: img/scheduler-overview.*
+
+   Cryptodev Scheduler Overview
+
+
+The Cryptodev Scheduler PMD library (**librte_pmd_crypto_scheduler**) acts as
+a software crypto PMD and shares the same API provided by librte_cryptodev.
+The PMD supports attaching multiple crypto PMDs, software or hardware, as
+slaves, and distributes the crypto workload to them according to a defined
+behavior. The behaviors are categorized as different "modes"; a scheduling
+mode defines how crypto ops are scheduled to the slaves.
+
+The librte_pmd_crypto_scheduler library exports a C API for attaching and
+detaching slaves, setting and getting the scheduling mode, and enabling
+and disabling crypto op reordering.
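+
+As a minimal sketch, attaching a slave and selecting the round-robin mode
+from application code (the device ids below are illustrative) could look
+like:
+
+.. code-block:: c
+
+    uint8_t sched_id = 0; /* scheduler cryptodev id, assumed */
+    uint8_t slave_id = 1; /* an already initialized cryptodev, assumed */
+
+    /* attach the slave cryptodev to the scheduler */
+    rte_cryptodev_scheduler_slave_attach(sched_id, slave_id);
+
+    /* select the round-robin scheduling mode */
+    rte_crpytodev_scheduler_mode_set(sched_id, CDEV_SCHED_MODE_ROUNDROBIN);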
+
+Limitations
+-----------
+
+* Sessionless crypto operation is not supported
+* OOP crypto operation is not supported when the crypto op reordering feature
+  is enabled.
+
+
+Installation
+------------
+
+To build DPDK with the CRYPTO_SCHEDULER_PMD, the user is required to set
+CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER=y in config/common_base and
+recompile DPDK.
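+
+A minimal sketch of such a build (assuming the option defaults to "n";
+the target name is illustrative):
+
+.. code-block:: console
+
+    sed -i 's/PMD_CRYPTO_SCHEDULER=n/PMD_CRYPTO_SCHEDULER=y/' config/common_base
+    make install T=x86_64-native-linuxapp-gcc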
+
+
+Initialization
+--------------
+
+To use the PMD in an application, the user must do one of the following:
+
+* Call rte_eal_vdev_init("crypto_scheduler_pmd") within the application.
+
+* Use --vdev="crypto_scheduler_pmd" in the EAL options, which will call
+  rte_eal_vdev_init() internally.
+
+
+The following parameters (all optional) can be provided in the previous
+two calls:
+
+* socket_id: Specify the socket where the memory for the device is going
+  to be allocated (by default, socket_id will be the socket of the core
+  that is creating the PMD).
+
+* max_nb_sessions: Specify the maximum number of sessions that can be
+  created. This value may be overwritten internally if too many devices
+  are attached.
+
+* slave: If a cryptodev has been initialized with a specific name, it can
+  be attached to the scheduler by passing that name to this parameter.
+  Multiple cryptodevs can be attached at initialization time by specifying
+  this parameter multiple times.
+
+Example:
+
+.. code-block:: console
+
+    ... --vdev "crypto_aesni_mb_pmd,name=aesni_mb_1" --vdev "crypto_aesni_mb_pmd,name=aesni_mb_2" --vdev "crypto_scheduler_pmd,slave=aesni_mb_1,slave=aesni_mb_2" ...
+
+.. note::
+
+    * The scheduler cryptodev cannot be started unless the scheduling mode
+      is set and at least one slave is attached. Also, to reconfigure the
+      scheduler at run time (e.g. attach/detach slave(s), change the
+      scheduling mode, or enable/disable crypto op reordering), one should
+      stop the scheduler first, otherwise an error will be returned; see
+      the sketch after this note.
+
+    * The crypto op reordering feature uses the userdata field of every
+      processed mbuf to store temporary data. By the end of processing,
+      this field is set to NULL; any value previously stored there will
+      be lost.
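+
+For instance, detaching a slave at run time would follow a stop,
+reconfigure, and restart sequence along these lines (a sketch only;
+error checking omitted, device ids illustrative):
+
+.. code-block:: c
+
+    rte_cryptodev_stop(sched_id);
+    rte_cryptodev_scheduler_slave_detach(sched_id, slave_id);
+    rte_cryptodev_start(sched_id);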
+
+
+Cryptodev Scheduler Modes Overview
+----------------------------------
+
+Currently the Crypto Scheduler PMD library supports the following modes of
+operation:
+
+*   **CDEV_SCHED_MODE_ROUNDROBIN:**
+
+    Round-robin mode, which distributes the enqueued burst of crypto ops
+    among its slaves in a round-robin manner. This mode may help to fill
+    the throughput gap between the physical core and the existing cryptodevs
+    to increase the overall performance.
-- 
2.7.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd
  2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
                             ` (10 preceding siblings ...)
  2017-01-24 16:24           ` [dpdk-dev] [PATCH v7 11/11] crypto/scheduler: add documentation Fan Zhang
@ 2017-01-24 16:29           ` De Lara Guarch, Pablo
  2017-01-25 14:03             ` De Lara Guarch, Pablo
  11 siblings, 1 reply; 42+ messages in thread
From: De Lara Guarch, Pablo @ 2017-01-24 16:29 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev; +Cc: Doherty, Declan



> -----Original Message-----
> From: Zhang, Roy Fan
> Sent: Tuesday, January 24, 2017 4:24 PM
> To: dev@dpdk.org
> Cc: Doherty, Declan; De Lara Guarch, Pablo
> Subject: [PATCH v7 00/11] crypto/scheduler: add driver for scheduler
> crypto pmd
> 
...

> 
> Fan Zhang (11):

Series-acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd
  2017-01-24 16:29           ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd De Lara Guarch, Pablo
@ 2017-01-25 14:03             ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 42+ messages in thread
From: De Lara Guarch, Pablo @ 2017-01-25 14:03 UTC (permalink / raw)
  To: De Lara Guarch, Pablo, Zhang, Roy Fan, dev; +Cc: Doherty, Declan



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of De Lara Guarch,
> Pablo
> Sent: Tuesday, January 24, 2017 4:30 PM
> To: Zhang, Roy Fan; dev@dpdk.org
> Cc: Doherty, Declan
> Subject: Re: [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for
> scheduler crypto pmd
> 
> 
> 
> > -----Original Message-----
> > From: Zhang, Roy Fan
> > Sent: Tuesday, January 24, 2017 4:24 PM
> > To: dev@dpdk.org
> > Cc: Doherty, Declan; De Lara Guarch, Pablo
> > Subject: [PATCH v7 00/11] crypto/scheduler: add driver for scheduler
> > crypto pmd
> >
> ...
> 
> >
> > Fan Zhang (11):
> 
> Series-acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>

Applied to dpdk-next-crypto.
Thanks,

Pablo

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread

Thread overview: 42+ messages
2016-12-02 14:15 [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd Fan Zhang
2016-12-02 14:31 ` Thomas Monjalon
2016-12-02 14:57   ` Bruce Richardson
2016-12-02 16:22     ` Declan Doherty
2016-12-05 15:12       ` Neil Horman
2016-12-07 12:42         ` Declan Doherty
2016-12-07 14:16           ` Neil Horman
2016-12-07 14:46             ` Richardson, Bruce
2016-12-07 16:04               ` Declan Doherty
2016-12-08 14:57                 ` Neil Horman
2017-01-03 17:08 ` [dpdk-dev] [PATCH v2] " Fan Zhang
2017-01-03 17:16 ` [dpdk-dev] [PATCH v3] " Fan Zhang
2017-01-17 10:57   ` [dpdk-dev] [PATCH v4] " Fan Zhang
2017-01-17 13:19     ` [dpdk-dev] [PATCH v5] crypto/scheduler: " Fan Zhang
2017-01-17 14:09       ` Declan Doherty
2017-01-17 20:21         ` Thomas Monjalon
2017-01-24 16:06       ` [dpdk-dev] [PATCH v6 00/11] " Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 01/11] cryptodev: add scheduler PMD name and type Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 02/11] crypto/scheduler: add APIs for scheduler Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 03/11] crypto/scheduler: add internal structure declarations Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 04/11] crypto/scheduler: add scheduler API implementations Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 05/11] crypto/scheduler: add round-robin scheduling mode Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 06/11] crypto/scheduler: register scheduler vdev driver Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 07/11] crypto/scheduler: register operation function pointer table Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 08/11] crypto/scheduler: add scheduler PMD to DPDK compile system Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 09/11] crypto/scheduler: add scheduler PMD config options Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 10/11] app/test: add unit test for cryptodev scheduler PMD Fan Zhang
2017-01-24 16:06         ` [dpdk-dev] [PATCH v6 11/11] crypto/scheduler: add documentation Fan Zhang
2017-01-24 16:23         ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd Fan Zhang
2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 01/11] cryptodev: add scheduler PMD name and type Fan Zhang
2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 02/11] crypto/scheduler: add APIs for scheduler Fan Zhang
2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 03/11] crypto/scheduler: add internal structure declarations Fan Zhang
2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 04/11] crypto/scheduler: add scheduler API implementations Fan Zhang
2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 05/11] crypto/scheduler: add round-robin scheduling mode Fan Zhang
2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 06/11] crypto/scheduler: register scheduler vdev driver Fan Zhang
2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 07/11] crypto/scheduler: register operation function pointer table Fan Zhang
2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 08/11] crypto/scheduler: add scheduler PMD to DPDK compile system Fan Zhang
2017-01-24 16:23           ` [dpdk-dev] [PATCH v7 09/11] crypto/scheduler: add scheduler PMD config options Fan Zhang
2017-01-24 16:24           ` [dpdk-dev] [PATCH v7 10/11] app/test: add unit test for cryptodev scheduler PMD Fan Zhang
2017-01-24 16:24           ` [dpdk-dev] [PATCH v7 11/11] crypto/scheduler: add documentation Fan Zhang
2017-01-24 16:29           ` [dpdk-dev] [PATCH v7 00/11] crypto/scheduler: add driver for scheduler crypto pmd De Lara Guarch, Pablo
2017-01-25 14:03             ` De Lara Guarch, Pablo
