DPDK patches and discussions
From: Amr Mokhtar <amr.mokhtar@intel.com>
To: dev@dpdk.org
Cc: thomas@monjalon.net, anatoly.burakov@intel.com,
	pablo.de.lara.guarch@intel.com, niall.power@intel.com,
	chris.macnamara@intel.com, Amr Mokhtar <amr.mokhtar@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/5] bbdev: librte_bbdev library
Date: Wed, 18 Oct 2017 03:14:41 +0100	[thread overview]
Message-ID: <1508292886-31405-1-git-send-email-amr.mokhtar@intel.com> (raw)

- BBDEV library files
- BBDEV is tagged as EXPERIMENTAL
- Makefiles and configuration macro definitions
- The bbdev framework and the 'null' driver are enabled by default
- The bbdev test framework is enabled by default
- Release notes for the initial version and MAINTAINERS update
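
The library keeps all devices in a fixed-size global table and hands out the
lowest unused id on allocation (see find_free_dev_id()/rte_bbdev_allocate() in
rte_bbdev.c below). A minimal self-contained sketch of that bookkeeping, with
simplified, hypothetical type and macro names in place of the real DPDK ones:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_DEVS 128	/* mirrors CONFIG_RTE_BBDEV_MAX_DEVS */

enum dev_state { DEV_UNUSED = 0, DEV_INITIALIZED };

struct dev {
	enum dev_state state;
	char name[64];	/* mirrors CONFIG_RTE_BBDEV_NAME_MAX_LEN */
};

/* Global device table; zero-initialized, so every slot starts UNUSED. */
static struct dev devs[MAX_DEVS];

/* Find the lowest device id with no attached device;
 * returns MAX_DEVS when the table is full. */
static uint16_t
find_free_dev_id(void)
{
	uint16_t i;
	for (i = 0; i < MAX_DEVS; i++)
		if (devs[i].state == DEV_UNUSED)
			return i;
	return MAX_DEVS;
}

/* Claim a slot for a named device, as rte_bbdev_allocate() does. */
static struct dev *
dev_allocate(const char *name)
{
	uint16_t id = find_free_dev_id();
	if (id == MAX_DEVS)
		return NULL;	/* table full */
	devs[id].state = DEV_INITIALIZED;
	snprintf(devs[id].name, sizeof(devs[id].name), "%s", name);
	return &devs[id];
}
```

The real implementation additionally backs per-device data with a shared
memzone so secondary processes can look it up by name.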

Signed-off-by: Amr Mokhtar <amr.mokhtar@intel.com>
---
 MAINTAINERS                            |   10 +
 config/common_base                     |   23 +
 doc/guides/rel_notes/release_17_11.rst |   10 +
 lib/Makefile                           |    3 +
 lib/librte_bbdev/Makefile              |   56 ++
 lib/librte_bbdev/rte_bbdev.c           | 1095 ++++++++++++++++++++++++++++++++
 lib/librte_bbdev/rte_bbdev.h           |  741 +++++++++++++++++++++
 lib/librte_bbdev/rte_bbdev_op.h        |  514 +++++++++++++++
 lib/librte_bbdev/rte_bbdev_pci.h       |  288 +++++++++
 lib/librte_bbdev/rte_bbdev_pmd.h       |  223 +++++++
 lib/librte_bbdev/rte_bbdev_vdev.h      |  102 +++
 lib/librte_bbdev/rte_bbdev_version.map |   37 ++
 mk/rte.app.mk                          |   13 +
 13 files changed, 3115 insertions(+)
 create mode 100644 lib/librte_bbdev/Makefile
 create mode 100644 lib/librte_bbdev/rte_bbdev.c
 create mode 100644 lib/librte_bbdev/rte_bbdev.h
 create mode 100644 lib/librte_bbdev/rte_bbdev_op.h
 create mode 100644 lib/librte_bbdev/rte_bbdev_pci.h
 create mode 100644 lib/librte_bbdev/rte_bbdev_pmd.h
 create mode 100644 lib/librte_bbdev/rte_bbdev_vdev.h
 create mode 100644 lib/librte_bbdev/rte_bbdev_version.map
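
Queue configuration in rte_bbdev_queue_configure() accepts a conf only if the
queue size is a power of two within the driver-reported limit and the priority
does not exceed the driver's maximum. A self-contained sketch of those checks,
using simplified stand-in structs (hypothetical names, not the library's API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the driver-reported limits and the application-supplied
 * queue configuration (hypothetical, simplified from rte_bbdev.h). */
struct drv_limits { uint32_t queue_size_lim; uint8_t max_queue_priority; };
struct queue_conf { uint32_t queue_size; uint8_t priority; };

/* Equivalent of rte_is_power_of_2(). */
static bool
is_power_of_2(uint32_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Returns 0 if conf is acceptable, -1 otherwise, following the checks
 * performed by rte_bbdev_queue_configure(). */
static int
validate_queue_conf(const struct queue_conf *conf,
		const struct drv_limits *lim)
{
	if (conf->queue_size > lim->queue_size_lim)
		return -1;	/* exceeds driver limit */
	if (!is_power_of_2(conf->queue_size))
		return -1;	/* ring sizes must be powers of two */
	if (conf->priority > lim->max_queue_priority)
		return -1;	/* priority out of range */
	return 0;
}
```

In the library itself a failed check returns -EINVAL and logs the offending
value; a NULL conf falls back to the driver's default_queue_conf instead.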

diff --git a/MAINTAINERS b/MAINTAINERS
index 2a58378..df63f3f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -275,6 +275,16 @@ T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/librte_eventdev/*eth_rx_adapter*
 F: test/test/test_event_eth_rx_adapter.c
 
+BBDEV API - EXPERIMENTAL
+M: Amr Mokhtar <amr.mokhtar@intel.com>
+F: lib/librte_bbdev/
+F: drivers/bbdev/
+F: app/test-bbdev/
+F: examples/bbdev_app/
+F: doc/guides/bbdevs/
+F: doc/guides/prog_guide/bbdev.rst
+F: doc/guides/sample_app_ug/bbdev_app.rst
+F: doc/guides/tools/testbbdev.rst
 
 Networking Drivers
 ------------------
diff --git a/config/common_base b/config/common_base
index d9471e8..7e75701 100644
--- a/config/common_base
+++ b/config/common_base
@@ -573,6 +573,24 @@ CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV_DEBUG=n
 CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y
 CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF_DEBUG=n
 
+# Compile the generic wireless baseband device library
+# EXPERIMENTAL: API may change without prior notice
+#
+CONFIG_RTE_LIBRTE_BBDEV=y
+CONFIG_RTE_LIBRTE_BBDEV_DEBUG=n
+CONFIG_RTE_BBDEV_MAX_DEVS=128
+CONFIG_RTE_BBDEV_NAME_MAX_LEN=64
+
+#
+# Compile PMD for NULL bbdev device
+#
+CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL=y
+
+#
+# Compile PMD for turbo software bbdev device
+#
+CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=n
+
 #
 # Compile librte_ring
 #
@@ -780,6 +798,11 @@ CONFIG_RTE_APP_TEST=y
 CONFIG_RTE_APP_TEST_RESOURCE_TAR=n
 
 #
+# Compile the bbdev test application
+#
+CONFIG_RTE_TEST_BBDEV=y
+
+#
 # Compile the PMD test application
 #
 CONFIG_RTE_TEST_PMD=y
diff --git a/doc/guides/rel_notes/release_17_11.rst b/doc/guides/rel_notes/release_17_11.rst
index 8db35f5..ab1c16b 100644
--- a/doc/guides/rel_notes/release_17_11.rst
+++ b/doc/guides/rel_notes/release_17_11.rst
@@ -41,6 +41,16 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added Wireless Baseband Device (bbdev).**
+
+  The Wireless Baseband library provides a common programming framework that
+  abstracts HW accelerators based on FPGAs and/or fixed-function accelerators
+  for 3GPP Physical Layer processing. It also decouples the application from
+  the compute-intensive wireless functions by presenting their optimized
+  software libraries as virtual bbdev devices.
+
+  The current release supports the Turbo Code FEC function only.
+
 * **Extended port_id range from uint8_t to uint16_t.**
 
   Increased port_id range from 8 bits to 16 bits in order to support more than
diff --git a/lib/Makefile b/lib/Makefile
index 86d475f..3641eb5 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -52,6 +52,9 @@ DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
 DEPDIRS-librte_cryptodev += librte_kvargs
 DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
 DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash
+DIRS-$(CONFIG_RTE_LIBRTE_BBDEV) += librte_bbdev
+DEPDIRS-librte_bbdev := librte_eal librte_mempool librte_ring librte_mbuf
+DEPDIRS-librte_bbdev += librte_kvargs
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DEPDIRS-librte_vhost := librte_eal librte_mempool librte_mbuf librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
diff --git a/lib/librte_bbdev/Makefile b/lib/librte_bbdev/Makefile
new file mode 100644
index 0000000..60c47e2
--- /dev/null
+++ b/lib/librte_bbdev/Makefile
@@ -0,0 +1,56 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_bbdev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_bbdev.c
+
+# export include files
+SYMLINK-y-include += rte_bbdev_op.h
+SYMLINK-y-include += rte_bbdev.h
+SYMLINK-y-include += rte_bbdev_pmd.h
+SYMLINK-y-include += rte_bbdev_pci.h
+SYMLINK-y-include += rte_bbdev_vdev.h
+
+# versioning export map
+EXPORT_MAP := rte_bbdev_version.map
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_bbdev/rte_bbdev.c b/lib/librte_bbdev/rte_bbdev.c
new file mode 100644
index 0000000..16f2544
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev.c
@@ -0,0 +1,1095 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_lcore.h>
+#include <rte_dev.h>
+#include <rte_spinlock.h>
+#include <rte_tailq.h>
+#include <rte_interrupts.h>
+
+#include "rte_bbdev_op.h"
+#include "rte_bbdev.h"
+#include "rte_bbdev_pmd.h"
+
+#define DEV_NAME "BBDEV"
+
+
+/* Helper macro to check dev_id is valid */
+#define VALID_DEV_OR_RET_ERR(dev, dev_id) do { \
+	if (dev == NULL) { \
+		rte_bbdev_log(ERR, "device %u is invalid", dev_id); \
+		return -ENODEV; \
+	} \
+} while (0)
+
+/* Helper macro to check dev_ops is valid */
+#define VALID_DEV_OPS_OR_RET_ERR(dev, dev_id) do { \
+	if (dev->dev_ops == NULL) { \
+		rte_bbdev_log(ERR, "NULL dev_ops structure in device %u", \
+				dev_id); \
+		return -ENODEV; \
+	} \
+} while (0)
+
+/* Helper macro to check that driver implements required function pointer */
+#define VALID_FUNC_OR_RET_ERR(func, dev_id) do { \
+	if (func == NULL) { \
+		rte_bbdev_log(ERR, "device %u does not support %s", \
+				dev_id, #func); \
+		return -ENOTSUP; \
+	} \
+} while (0)
+
+/* Helper macro to check that queue is valid */
+#define VALID_QUEUE_OR_RET_ERR(queue_id, dev) do { \
+	if (queue_id >= dev->data->num_queues) { \
+		rte_bbdev_log(ERR, "Invalid queue_id %u for device %u", \
+				queue_id, dev->data->dev_id); \
+		return -ERANGE; \
+	} \
+} while (0)
+
+/* List of callback functions registered by an application */
+struct rte_bbdev_callback {
+	TAILQ_ENTRY(rte_bbdev_callback) next;  /* Callbacks list */
+	rte_bbdev_cb_fn cb_fn;  /* Callback address */
+	void *cb_arg;  /* Parameter for callback */
+	void *ret_param;  /* Return parameter */
+	enum rte_bbdev_event_type event; /* Interrupt event type */
+	uint32_t active; /* Callback is executing */
+};
+
+/* spinlock for bbdev device callbacks */
+static rte_spinlock_t rte_bbdev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+/* Global array of all devices. This is not static because it's used by the
+ * inline enqueue and dequeue functions
+ */
+struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
+
+/* Global array with rte_bbdev_data structures */
+static struct rte_bbdev_data *rte_bbdev_data;
+
+/* Memzone name for global bbdev data pool */
+static const char *MZ_RTE_BBDEV_DATA = "rte_bbdev_data";
+
+/* Number of currently valid devices */
+static uint16_t num_devs;
+
+/* Return pointer to device structure, with validity check */
+static struct rte_bbdev *
+get_dev(uint16_t dev_id)
+{
+	if (rte_bbdev_is_valid(dev_id))
+		return &rte_bbdev_devices[dev_id];
+	return NULL;
+}
+
+/* Allocate global data array */
+static void
+rte_bbdev_data_alloc(void)
+{
+	const unsigned int flags = 0;
+	const struct rte_memzone *mz;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(MZ_RTE_BBDEV_DATA,
+				RTE_BBDEV_MAX_DEVS * sizeof(*rte_bbdev_data),
+				rte_socket_id(), flags);
+	} else
+		mz = rte_memzone_lookup(MZ_RTE_BBDEV_DATA);
+	if (mz == NULL)
+		rte_panic("Cannot allocate memzone for bbdev port data\n");
+
+	rte_bbdev_data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(rte_bbdev_data, 0,
+				RTE_BBDEV_MAX_DEVS * sizeof(*rte_bbdev_data));
+}
+
+/* Find lowest device id with no attached device */
+static uint16_t
+find_free_dev_id(void)
+{
+	uint16_t i;
+	for (i = 0; i < RTE_BBDEV_MAX_DEVS; i++) {
+		if (rte_bbdev_devices[i].state == RTE_BBDEV_UNUSED)
+			return i;
+	}
+	return RTE_BBDEV_MAX_DEVS;
+}
+
+struct rte_bbdev *
+rte_bbdev_allocate(const char *name)
+{
+	int ret;
+	struct rte_bbdev *bbdev;
+	uint16_t dev_id;
+
+	if (name == NULL) {
+		rte_bbdev_log(ERR, "Invalid null device name");
+		return NULL;
+	}
+
+	if (rte_bbdev_get_named_dev(name) != NULL) {
+		rte_bbdev_log(ERR, "Device \"%s\" is already allocated", name);
+		return NULL;
+	}
+
+	dev_id = find_free_dev_id();
+	if (dev_id == RTE_BBDEV_MAX_DEVS) {
+		rte_bbdev_log(ERR, "Reached maximum number of devices");
+		return NULL;
+	}
+
+	bbdev = &rte_bbdev_devices[dev_id];
+
+	if (rte_bbdev_data == NULL)
+		rte_bbdev_data_alloc();
+
+	bbdev->data = &rte_bbdev_data[dev_id];
+	memset(bbdev->data, 0, sizeof(*bbdev->data));
+
+	bbdev->data->dev_id = dev_id;
+	bbdev->state = RTE_BBDEV_INITALIZED;
+
+	ret = snprintf(bbdev->data->name, RTE_BBDEV_NAME_MAX_LEN, "%s", name);
+	if ((ret < 0) || (ret >= RTE_BBDEV_NAME_MAX_LEN)) {
+		rte_bbdev_log(ERR, "Copying device name \"%s\" failed", name);
+		return NULL;
+	}
+
+	/* init user callbacks */
+	TAILQ_INIT(&(bbdev->list_cbs));
+
+	num_devs++;
+
+	rte_bbdev_log_debug("Initialised device %s (id = %u). Num devices = %u",
+			name, dev_id, num_devs);
+
+	return bbdev;
+}
+
+int
+rte_bbdev_release(struct rte_bbdev *bbdev)
+{
+	uint16_t dev_id;
+	struct rte_bbdev_callback *cb, *next;
+
+	if (bbdev == NULL) {
+		rte_bbdev_log(ERR, "NULL bbdev");
+		return -ENODEV;
+	}
+	dev_id = bbdev->data->dev_id;
+
+	/* free all callbacks from the device's list */
+	for (cb = TAILQ_FIRST(&bbdev->list_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+		TAILQ_REMOVE(&(bbdev->list_cbs), cb, next);
+		rte_free(cb);
+	}
+
+	memset(bbdev, 0, sizeof(*bbdev));
+	num_devs--;
+	bbdev->state = RTE_BBDEV_UNUSED;
+
+	rte_bbdev_log_debug(
+			"Un-initialised device id = %u. Num devices = %u",
+			dev_id, num_devs);
+	return 0;
+}
+
+struct rte_bbdev *
+rte_bbdev_get_named_dev(const char *name)
+{
+	unsigned int i;
+
+	if (name == NULL) {
+		rte_bbdev_log(ERR, "NULL driver name");
+		return NULL;
+	}
+
+	for (i = 0; i < RTE_BBDEV_MAX_DEVS; i++) {
+		struct rte_bbdev *dev = get_dev(i);
+		if (dev && (strncmp(dev->data->name,
+				name, RTE_BBDEV_NAME_MAX_LEN) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+uint16_t
+rte_bbdev_count(void)
+{
+	return num_devs;
+}
+
+bool
+rte_bbdev_is_valid(uint16_t dev_id)
+{
+	if ((dev_id < RTE_BBDEV_MAX_DEVS) &&
+			rte_bbdev_devices[dev_id].state == RTE_BBDEV_INITALIZED)
+		return true;
+	return false;
+}
+
+uint16_t
+rte_bbdev_find_next(uint16_t dev_id)
+{
+	dev_id++;
+	for (; dev_id < RTE_BBDEV_MAX_DEVS; dev_id++)
+		if (rte_bbdev_is_valid(dev_id))
+			break;
+	return dev_id;
+}
+
+int
+rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues, int socket_id)
+{
+	unsigned int i;
+	int ret;
+	struct rte_bbdev_driver_info dev_info;
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	if (dev->data->started) {
+		rte_bbdev_log(ERR,
+				"Device %u cannot be configured when started",
+				dev_id);
+		return -EBUSY;
+	}
+
+	/* Get device driver information to get max number of queues */
+	VALID_FUNC_OR_RET_ERR(dev->dev_ops->info_get, dev_id);
+	memset(&dev_info, 0, sizeof(dev_info));
+	dev->dev_ops->info_get(dev, &dev_info);
+
+	if ((num_queues == 0) || (num_queues > dev_info.max_num_queues)) {
+		rte_bbdev_log(ERR,
+				"Device %u supports 0 < N <= %u queues, not %u",
+				dev_id, dev_info.max_num_queues, num_queues);
+		return -EINVAL;
+	}
+
+	/* If re-configuration, get driver to free existing internal memory */
+	if (dev->data->queues != NULL) {
+		VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_release, dev_id);
+		for (i = 0; i < dev->data->num_queues; i++) {
+			ret = dev->dev_ops->queue_release(dev, i);
+			if (ret < 0) {
+				rte_bbdev_log(ERR,
+						"Device %u queue %u release failed",
+						dev_id, i);
+				return ret;
+			}
+		}
+		/* Call optional device close */
+		if (dev->dev_ops->close) {
+			ret = dev->dev_ops->close(dev);
+			if (ret < 0) {
+				rte_bbdev_log(ERR,
+						"Device %u couldn't be closed",
+						dev_id);
+				return ret;
+			}
+		}
+		rte_free(dev->data->queues);
+	}
+
+	/* Allocate queue pointers */
+	dev->data->queues = rte_calloc_socket(DEV_NAME, num_queues,
+			sizeof(dev->data->queues[0]), RTE_CACHE_LINE_SIZE,
+				dev->data->socket_id);
+	if (dev->data->queues == NULL) {
+		rte_bbdev_log(ERR,
+				"calloc of %u queues for device %u on socket %i failed",
+				num_queues, dev_id, dev->data->socket_id);
+		return -ENOMEM;
+	}
+
+	dev->data->num_queues = num_queues;
+
+	/* Call optional device configuration */
+	if (dev->dev_ops->setup_queues) {
+		ret = dev->dev_ops->setup_queues(dev, num_queues, socket_id);
+		if (ret < 0) {
+			rte_bbdev_log(ERR,
+					"Device %u memory configuration failed",
+					dev_id);
+			goto error;
+		}
+	}
+
+	rte_bbdev_log_debug("Device %u set up with %u queues", dev_id,
+			num_queues);
+	return 0;
+
+error:
+	dev->data->num_queues = 0;
+	rte_free(dev->data->queues);
+	dev->data->queues = NULL;
+	return ret;
+}
+
+int
+rte_bbdev_intr_enable(uint16_t dev_id)
+{
+	int ret;
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	if (dev->data->started) {
+		rte_bbdev_log(ERR,
+				"Device %u cannot be configured when started",
+				dev_id);
+		return -EBUSY;
+	}
+
+	if (dev->dev_ops->intr_enable) {
+		ret = dev->dev_ops->intr_enable(dev);
+		if (ret < 0) {
+			rte_bbdev_log(ERR,
+					"Device %u interrupts configuration failed",
+					dev_id);
+			return ret;
+		}
+		rte_bbdev_log_debug("Enabled interrupts for dev %u", dev_id);
+		return 0;
+	}
+
+	rte_bbdev_log(ERR, "Device %u doesn't support interrupts", dev_id);
+	return -ENOTSUP;
+}
+
+int
+rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
+		const struct rte_bbdev_queue_conf *conf)
+{
+	int ret = 0;
+	struct rte_bbdev_driver_info dev_info;
+	struct rte_bbdev *dev = get_dev(dev_id);
+	const struct rte_bbdev_op_cap *p;
+	struct rte_bbdev_queue_conf *stored_conf;
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+	if (dev->data->queues[queue_id].started || dev->data->started) {
+		rte_bbdev_log(ERR,
+				"Queue %u of device %u cannot be configured when started",
+				queue_id, dev_id);
+		return -EBUSY;
+	}
+
+	VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_release, dev_id);
+	VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_setup, dev_id);
+
+	/* Get device driver information to verify config is valid */
+	VALID_FUNC_OR_RET_ERR(dev->dev_ops->info_get, dev_id);
+	memset(&dev_info, 0, sizeof(dev_info));
+	dev->dev_ops->info_get(dev, &dev_info);
+
+	/* Check configuration is valid */
+	if (conf != NULL) {
+		if ((conf->op_type == RTE_BBDEV_OP_NONE) &&
+				(dev_info.capabilities[0].type ==
+				RTE_BBDEV_OP_NONE)) {
+			ret = 1;
+		} else
+			for (p = dev_info.capabilities;
+					p->type != RTE_BBDEV_OP_NONE; p++) {
+				if (conf->op_type == p->type) {
+					ret = 1;
+					break;
+				}
+			}
+		if (ret == 0) {
+			rte_bbdev_log(ERR, "Invalid operation type");
+			return -EINVAL;
+		}
+		if (conf->queue_size > dev_info.queue_size_lim) {
+			rte_bbdev_log(ERR,
+					"Size (%u) of queue %u of device %u must be: <= %u",
+					conf->queue_size, queue_id, dev_id,
+					dev_info.queue_size_lim);
+			return -EINVAL;
+		}
+		if (!rte_is_power_of_2(conf->queue_size)) {
+			rte_bbdev_log(ERR,
+					"Size (%u) of queue %u of device %u must be a power of 2",
+					conf->queue_size, queue_id, dev_id);
+			return -EINVAL;
+		}
+		if (conf->priority > dev_info.max_queue_priority) {
+			rte_bbdev_log(ERR,
+					"Priority (%u) of queue %u of device %u must be <= %u",
+					conf->priority, queue_id, dev_id,
+					dev_info.max_queue_priority);
+			return -EINVAL;
+		}
+	}
+
+	/* Release existing queue (in case of queue reconfiguration) */
+	if (dev->data->queues[queue_id].queue_private != NULL) {
+		ret = dev->dev_ops->queue_release(dev, queue_id);
+		if (ret < 0) {
+			rte_bbdev_log(ERR, "Device %u queue %u release failed",
+					dev_id, queue_id);
+			return ret;
+		}
+	}
+
+	/* Get driver to setup the queue */
+	ret = dev->dev_ops->queue_setup(dev, queue_id, (conf != NULL) ?
+			conf : &dev_info.default_queue_conf);
+	if (ret < 0) {
+		rte_bbdev_log(ERR,
+				"Device %u queue %u setup failed", dev_id,
+				queue_id);
+		return ret;
+	}
+
+	/* Store configuration */
+	stored_conf = &dev->data->queues[queue_id].conf;
+	memcpy(stored_conf,
+			(conf != NULL) ? conf : &dev_info.default_queue_conf,
+			sizeof(*stored_conf));
+
+	rte_bbdev_log_debug("Configured dev%uq%u (size=%u, type=%s, prio=%u)",
+			dev_id, queue_id, stored_conf->queue_size,
+			rte_bbdev_op_type_str(stored_conf->op_type),
+			stored_conf->priority);
+
+	return 0;
+}
+
+int
+rte_bbdev_start(uint16_t dev_id)
+{
+	int i;
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	if (dev->data->started) {
+		rte_bbdev_log_debug("Device %u is already started", dev_id);
+		return 0;
+	}
+
+	if (dev->dev_ops->start) {
+		int ret = dev->dev_ops->start(dev);
+		if (ret < 0) {
+			rte_bbdev_log(ERR, "Device %u start failed", dev_id);
+			return ret;
+		}
+	}
+
+	/* Store new state */
+	for (i = 0; i < dev->data->num_queues; i++)
+		if (!dev->data->queues[i].conf.deferred_start)
+			dev->data->queues[i].started = true;
+	dev->data->started = true;
+
+	rte_bbdev_log_debug("Started device %u", dev_id);
+	return 0;
+}
+
+int
+rte_bbdev_stop(uint16_t dev_id)
+{
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	if (!dev->data->started) {
+		rte_bbdev_log_debug("Device %u is already stopped", dev_id);
+		return 0;
+	}
+
+	if (dev->dev_ops->stop)
+		dev->dev_ops->stop(dev);
+	dev->data->started = false;
+
+	rte_bbdev_log_debug("Stopped device %u", dev_id);
+	return 0;
+}
+
+int
+rte_bbdev_close(uint16_t dev_id)
+{
+	int ret;
+	uint16_t i;
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	if (dev->data->started) {
+		ret = rte_bbdev_stop(dev_id);
+		if (ret < 0) {
+			rte_bbdev_log(ERR, "Device %u stop failed", dev_id);
+			return ret;
+		}
+	}
+
+	/* Free memory used by queues */
+	for (i = 0; i < dev->data->num_queues; i++) {
+		ret = dev->dev_ops->queue_release(dev, i);
+		if (ret < 0) {
+			rte_bbdev_log(ERR, "Device %u queue %u release failed",
+					dev_id, i);
+			return ret;
+		}
+	}
+	rte_free(dev->data->queues);
+
+	if (dev->dev_ops->close) {
+		ret = dev->dev_ops->close(dev);
+		if (ret < 0) {
+			rte_bbdev_log(ERR, "Device %u close failed", dev_id);
+			return ret;
+		}
+	}
+
+	/* Clear configuration */
+	dev->data->queues = NULL;
+	dev->data->num_queues = 0;
+
+	rte_bbdev_log_debug("Closed device %u", dev_id);
+	return 0;
+}
+
+int
+rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id)
+{
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+	if (dev->data->queues[queue_id].started) {
+		rte_bbdev_log_debug("Queue %u of device %u already started",
+				queue_id, dev_id);
+		return 0;
+	}
+
+	if (dev->dev_ops->queue_start) {
+		int ret = dev->dev_ops->queue_start(dev, queue_id);
+		if (ret < 0) {
+			rte_bbdev_log(ERR, "Device %u queue %u start failed",
+					dev_id, queue_id);
+			return ret;
+		}
+	}
+	dev->data->queues[queue_id].started = true;
+
+	rte_bbdev_log_debug("Started queue %u of device %u", queue_id, dev_id);
+	return 0;
+}
+
+int
+rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id)
+{
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+	if (!dev->data->queues[queue_id].started) {
+		rte_bbdev_log_debug("Queue %u of device %u already stopped",
+				queue_id, dev_id);
+		return 0;
+	}
+
+	if (dev->dev_ops->queue_stop) {
+		int ret = dev->dev_ops->queue_stop(dev, queue_id);
+		if (ret < 0) {
+			rte_bbdev_log(ERR, "Device %u queue %u stop failed",
+					dev_id, queue_id);
+			return ret;
+		}
+	}
+	dev->data->queues[queue_id].started = false;
+
+	rte_bbdev_log_debug("Stopped queue %u of device %u", queue_id, dev_id);
+	return 0;
+}
+
+/* Get device statistics */
+static void
+get_stats_from_queues(struct rte_bbdev *dev, struct rte_bbdev_stats *stats)
+{
+	unsigned int q_id;
+	for (q_id = 0; q_id < dev->data->num_queues; q_id++) {
+		struct rte_bbdev_stats *q_stats =
+				&dev->data->queues[q_id].queue_stats;
+
+		stats->enqueued_count += q_stats->enqueued_count;
+		stats->dequeued_count += q_stats->dequeued_count;
+		stats->enqueue_err_count += q_stats->enqueue_err_count;
+		stats->dequeue_err_count += q_stats->dequeue_err_count;
+	}
+	rte_bbdev_log_debug("Got stats on %u", dev->data->dev_id);
+}
+
+static void
+reset_stats_in_queues(struct rte_bbdev *dev)
+{
+	unsigned int q_id;
+	for (q_id = 0; q_id < dev->data->num_queues; q_id++) {
+		struct rte_bbdev_stats *q_stats =
+				&dev->data->queues[q_id].queue_stats;
+
+		memset(q_stats, 0, sizeof(*q_stats));
+	}
+	rte_bbdev_log_debug("Reset stats on %u", dev->data->dev_id);
+}
+
+int
+rte_bbdev_stats_get(uint16_t dev_id, struct rte_bbdev_stats *stats)
+{
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	if (stats == NULL) {
+		rte_bbdev_log(ERR, "NULL stats structure");
+		return -EINVAL;
+	}
+
+	memset(stats, 0, sizeof(*stats));
+	if (dev->dev_ops->stats_get != NULL)
+		dev->dev_ops->stats_get(dev, stats);
+	else
+		get_stats_from_queues(dev, stats);
+
+	rte_bbdev_log_debug("Retrieved stats of device %u", dev_id);
+	return 0;
+}
+
+int
+rte_bbdev_stats_reset(uint16_t dev_id)
+{
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+	if (dev->dev_ops->stats_reset != NULL)
+		dev->dev_ops->stats_reset(dev);
+	else
+		reset_stats_in_queues(dev);
+
+	rte_bbdev_log_debug("Reset stats of device %u", dev_id);
+	return 0;
+}
+
+int
+rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info)
+{
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_FUNC_OR_RET_ERR(dev->dev_ops->info_get, dev_id);
+
+	if (dev_info == NULL) {
+		rte_bbdev_log(ERR, "NULL dev info structure");
+		return -EINVAL;
+	}
+
+	/* Copy data maintained by device interface layer */
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->dev_name = dev->data->name;
+	dev_info->num_queues = dev->data->num_queues;
+	dev_info->bus = rte_bus_find_by_device(dev->device);
+	dev_info->socket_id = dev->data->socket_id;
+	dev_info->started = dev->data->started;
+
+	/* Copy data maintained by device driver layer */
+	dev->dev_ops->info_get(dev, &dev_info->drv);
+
+	rte_bbdev_log_debug("Retrieved info of device %u", dev_id);
+	return 0;
+}
+
+int
+rte_bbdev_queue_info_get(uint16_t dev_id, uint16_t queue_id,
+		struct rte_bbdev_queue_info *dev_info)
+{
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+	if (dev_info == NULL) {
+		rte_bbdev_log(ERR, "NULL queue info structure");
+		return -EINVAL;
+	}
+
+	/* Copy data to output */
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->conf = dev->data->queues[queue_id].conf;
+	dev_info->started = dev->data->queues[queue_id].started;
+
+	rte_bbdev_log_debug("Retrieved info of queue %u of device %u",
+			queue_id, dev_id);
+	return 0;
+}
+
+/* Calculate size needed to store bbdev_op, depending on type */
+static unsigned int
+get_bbdev_op_size(enum rte_bbdev_op_type type)
+{
+	unsigned int result = 0;
+	switch (type) {
+	case RTE_BBDEV_OP_NONE:
+		result = RTE_MAX(sizeof(struct rte_bbdev_dec_op),
+				sizeof(struct rte_bbdev_enc_op));
+		break;
+	case RTE_BBDEV_OP_TURBO_DEC:
+		result = sizeof(struct rte_bbdev_dec_op);
+		break;
+	case RTE_BBDEV_OP_TURBO_ENC:
+		result = sizeof(struct rte_bbdev_enc_op);
+		break;
+	default:
+		break;
+	}
+
+	return result;
+}
+
+/* Initialise a bbdev_op structure */
+static void
+bbdev_op_init(struct rte_mempool *mempool, void *arg, void *element,
+		__rte_unused unsigned int n)
+{
+	enum rte_bbdev_op_type type = *(enum rte_bbdev_op_type *)arg;
+
+	if (type == RTE_BBDEV_OP_TURBO_DEC) {
+		struct rte_bbdev_dec_op *op = element;
+		memset(op, 0, mempool->elt_size);
+		op->mempool = mempool;
+	} else if (type == RTE_BBDEV_OP_TURBO_ENC) {
+		struct rte_bbdev_enc_op *op = element;
+		memset(op, 0, mempool->elt_size);
+		op->mempool = mempool;
+	}
+}
+
+struct rte_mempool *
+rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
+		unsigned int num_elements, unsigned int cache_size,
+		int socket_id)
+{
+	struct rte_bbdev_op_pool_private *priv;
+	struct rte_mempool *mp;
+
+	if (name == NULL) {
+		rte_bbdev_log(ERR, "NULL name for op pool");
+		return NULL;
+	}
+
+	if (type >= RTE_BBDEV_OP_TYPE_COUNT) {
+		rte_bbdev_log(ERR,
+				"Invalid op type (%u), should be less than %u",
+				type, RTE_BBDEV_OP_TYPE_COUNT);
+		return NULL;
+	}
+
+	mp = rte_mempool_create(name, num_elements, get_bbdev_op_size(type),
+			cache_size, sizeof(struct rte_bbdev_op_pool_private),
+			NULL, NULL, bbdev_op_init, &type, socket_id, 0);
+	if (mp == NULL) {
+		rte_bbdev_log(ERR,
+				"Failed to create op pool %s (num ops=%u, op size=%u) with error: %s",
+				name, num_elements, get_bbdev_op_size(type),
+				rte_strerror(rte_errno));
+		return NULL;
+	}
+
+	rte_bbdev_log_debug(
+			"Op pool %s created for %u ops (type=%s, cache=%u, socket=%u, size=%u)",
+			name, num_elements, rte_bbdev_op_type_str(type),
+			cache_size, socket_id, get_bbdev_op_size(type));
+
+	priv = (struct rte_bbdev_op_pool_private *)rte_mempool_get_priv(mp);
+	priv->type = type;
+
+	return mp;
+}
+
+int
+rte_bbdev_callback_register(uint16_t dev_id, enum rte_bbdev_event_type event,
+		rte_bbdev_cb_fn cb_fn, void *cb_arg)
+{
+	struct rte_bbdev_callback *user_cb;
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	if (event >= RTE_BBDEV_EVENT_MAX) {
+		rte_bbdev_log(ERR,
+				"Invalid event type (%u), should be less than %u",
+				event, RTE_BBDEV_EVENT_MAX);
+		return -EINVAL;
+	}
+
+	if (cb_fn == NULL) {
+		rte_bbdev_log(ERR, "NULL callback function");
+		return -EINVAL;
+	}
+
+	rte_spinlock_lock(&rte_bbdev_cb_lock);
+
+	TAILQ_FOREACH(user_cb, &(dev->list_cbs), next) {
+		if (user_cb->cb_fn == cb_fn &&
+				user_cb->cb_arg == cb_arg &&
+				user_cb->event == event)
+			break;
+	}
+
+	/* create a new callback. */
+	if (user_cb == NULL) {
+		user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+				sizeof(struct rte_bbdev_callback), 0);
+		if (user_cb != NULL) {
+			user_cb->cb_fn = cb_fn;
+			user_cb->cb_arg = cb_arg;
+			user_cb->event = event;
+			TAILQ_INSERT_TAIL(&(dev->list_cbs), user_cb, next);
+		}
+	}
+
+	rte_spinlock_unlock(&rte_bbdev_cb_lock);
+	return (user_cb == NULL) ? -ENOMEM : 0;
+}
+
+int
+rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
+		rte_bbdev_cb_fn cb_fn, void *cb_arg)
+{
+	int ret = 0;
+	struct rte_bbdev_callback *cb, *next;
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+	if (event >= RTE_BBDEV_EVENT_MAX) {
+		rte_bbdev_log(ERR,
+				"Invalid event type (%u), should be less than %u",
+				event, RTE_BBDEV_EVENT_MAX);
+		return -EINVAL;
+	}
+
+	if (cb_fn == NULL) {
+		rte_bbdev_log(ERR,
+				"NULL callback function cannot be unregistered");
+		return -EINVAL;
+	}
+
+	dev = &rte_bbdev_devices[dev_id];
+	rte_spinlock_lock(&rte_bbdev_cb_lock);
+
+	for (cb = TAILQ_FIRST(&dev->list_cbs); cb != NULL; cb = next) {
+
+		next = TAILQ_NEXT(cb, next);
+
+		if (cb->cb_fn != cb_fn || cb->event != event ||
+				(cb_arg != (void *)-1 && cb->cb_arg != cb_arg))
+			continue;
+
+		/* If this callback is not executing right now, remove it. */
+		if (cb->active == 0) {
+			TAILQ_REMOVE(&(dev->list_cbs), cb, next);
+			rte_free(cb);
+		} else
+			ret = -EAGAIN;
+	}
+
+	rte_spinlock_unlock(&rte_bbdev_cb_lock);
+	return ret;
+}
+
+void
+rte_bbdev_pmd_callback_process(struct rte_bbdev *dev,
+	enum rte_bbdev_event_type event, void *ret_param)
+{
+	struct rte_bbdev_callback *cb_lst;
+	struct rte_bbdev_callback dev_cb;
+
+	if (dev == NULL) {
+		rte_bbdev_log(ERR, "NULL device");
+		return;
+	}
+
+	if (dev->data == NULL) {
+		rte_bbdev_log(ERR, "NULL data structure");
+		return;
+	}
+
+	if (event >= RTE_BBDEV_EVENT_MAX) {
+		rte_bbdev_log(ERR,
+				"Invalid event type (%u), should be less than %u",
+				event, RTE_BBDEV_EVENT_MAX);
+		return;
+	}
+
+	rte_spinlock_lock(&rte_bbdev_cb_lock);
+	TAILQ_FOREACH(cb_lst, &(dev->list_cbs), next) {
+		if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+			continue;
+		dev_cb = *cb_lst;
+		cb_lst->active = 1;
+		if (ret_param != NULL)
+			dev_cb.ret_param = ret_param;
+
+		rte_spinlock_unlock(&rte_bbdev_cb_lock);
+		dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+				dev_cb.cb_arg, dev_cb.ret_param);
+		rte_spinlock_lock(&rte_bbdev_cb_lock);
+		cb_lst->active = 0;
+	}
+	rte_spinlock_unlock(&rte_bbdev_cb_lock);
+}
+
+int
+rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id)
+{
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+	VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+	VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_intr_enable, dev_id);
+	return dev->dev_ops->queue_intr_enable(dev, queue_id);
+}
+
+int
+rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id)
+{
+	struct rte_bbdev *dev = get_dev(dev_id);
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+	VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+	VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+	VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_intr_disable, dev_id);
+	return dev->dev_ops->queue_intr_disable(dev, queue_id);
+}
+
+int
+rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
+		void *data)
+{
+	uint32_t vec;
+	struct rte_bbdev *dev = get_dev(dev_id);
+	struct rte_intr_handle *intr_handle;
+	int ret;
+
+	VALID_DEV_OR_RET_ERR(dev, dev_id);
+	VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+	intr_handle = dev->intr_handle;
+	if (!intr_handle || !intr_handle->intr_vec) {
+		rte_bbdev_log(ERR, "Device %u intr handle unset", dev_id);
+		return -ENOTSUP;
+	}
+
+	if (queue_id >= RTE_MAX_RXTX_INTR_VEC_ID) {
+		rte_bbdev_log(ERR, "Device %u queue_id %u is too big",
+				dev_id, queue_id);
+		return -ENOTSUP;
+	}
+
+	vec = intr_handle->intr_vec[queue_id];
+	ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
+	if (ret && (ret != -EEXIST)) {
+		rte_bbdev_log(ERR,
+				"dev %u q %u int ctl error op %d epfd %d vec %u",
+				dev_id, queue_id, op, epfd, vec);
+		return ret;
+	}
+
+	return 0;
+}
+
+
+const char *
+rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type)
+{
+	static const char * const op_types[] = {
+		"RTE_BBDEV_OP_NONE",
+		"RTE_BBDEV_OP_TURBO_DEC",
+		"RTE_BBDEV_OP_TURBO_ENC",
+	};
+
+	if (op_type < RTE_BBDEV_OP_TYPE_COUNT)
+		return op_types[op_type];
+
+	rte_bbdev_log(ERR, "Invalid operation type");
+	return "";
+}
+
+
+int bbdev_logtype;
+
+RTE_INIT(rte_bbdev_init_log);
+static void
+rte_bbdev_init_log(void)
+{
+	bbdev_logtype = rte_log_register("lib.bbdev");
+	if (bbdev_logtype >= 0)
+		rte_log_set_level(bbdev_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_bbdev/rte_bbdev.h b/lib/librte_bbdev/rte_bbdev.h
new file mode 100644
index 0000000..9e6d283
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev.h
@@ -0,0 +1,741 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BBDEV_H_
+#define _RTE_BBDEV_H_
+
+/**
+ * @file rte_bbdev.h
+ *
+ * Wireless base band device application APIs.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API allows an application to discover, configure and use a device to
+ * process operations. An asynchronous API (enqueue, followed by later dequeue)
+ * is used for processing operations.
+ *
+ * The functions in this API are not thread-safe when called on the same
+ * target object (a device, or a queue on a device), with the exception that
+ * one thread can enqueue operations to a queue while another thread dequeues
+ * from the same queue.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_bus.h>
+#include <rte_cpuflags.h>
+#include <rte_memory.h>
+
+#include "rte_bbdev_op.h"
+
+#ifndef RTE_BBDEV_MAX_DEVS
+#define RTE_BBDEV_MAX_DEVS 128  /**< Max number of devices */
+#endif
+
+/** Flags indicating the current state of a bbdev device */
+enum rte_bbdev_state {
+	RTE_BBDEV_UNUSED,
+	RTE_BBDEV_INITALIZED
+};
+
+/**
+ * Get the total number of devices that have been successfully initialised.
+ *
+ * @return
+ *   The total number of usable devices.
+ */
+uint16_t
+rte_bbdev_count(void);
+
+/**
+ * Check if a device is valid.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @return
+ *   true if device ID is valid and device is attached, false otherwise.
+ */
+bool
+rte_bbdev_is_valid(uint16_t dev_id);
+
+/**
+ * Get the next enabled device.
+ *
+ * @param dev_id
+ *   The current device
+ *
+ * @return
+ *   - The next device, or
+ *   - RTE_BBDEV_MAX_DEVS if none found
+ */
+uint16_t
+rte_bbdev_find_next(uint16_t dev_id);
+
+/** Iterate through all enabled devices */
+#define RTE_BBDEV_FOREACH(i) for (i = rte_bbdev_find_next(-1); \
+		i < RTE_BBDEV_MAX_DEVS; \
+		i = rte_bbdev_find_next(i))
+
+/**
+ * Set up device queues.
+ * This function must be called on a device before configuring its queues and
+ * starting it. It can also be called when a device is in the stopped state.
+ * If any device queues have been configured, their configuration will be
+ * cleared by a call to this function.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param num_queues
+ *   Number of queues to configure on device.
+ * @param socket_id
+ *   ID of a socket which will be used to allocate memory.
+ *
+ * @return
+ *   - 0 on success
+ *   - -ENODEV if dev_id is invalid or the device is corrupted
+ *   - -EINVAL if num_queues is invalid, 0 or greater than maximum
+ *   - -EBUSY if the identified device has already started
+ *   - -ENOMEM if unable to allocate memory
+ */
+int
+rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues, int socket_id);
+
+/**
+ * Enable interrupts.
+ * This function may be called before starting the device to enable the
+ * interrupts if they are available.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @return
+ *   - 0 on success
+ *   - -ENODEV if dev_id is invalid or the device is corrupted
+ *   - -EBUSY if the identified device has already started
+ *   - -ENOTSUP if the interrupts are not supported by the device
+ */
+int
+rte_bbdev_intr_enable(uint16_t dev_id);
+
+/** Device queue configuration structure */
+struct rte_bbdev_queue_conf {
+	int socket;  /**< NUMA socket used for memory allocation */
+	uint32_t queue_size;  /**< Size of queue */
+	uint8_t priority;  /**< Queue priority */
+	bool deferred_start; /**< Do not start queue when device is started. */
+	enum rte_bbdev_op_type op_type; /**< Operation type */
+};
+
+/**
+ * Configure a queue on a device.
+ * This function can be called after device configuration, and before starting.
+ * It can also be called when the device or the queue is in the stopped state.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the queue.
+ * @param conf
+ *   The queue configuration. If NULL, a default configuration will be used.
+ *
+ * @return
+ *   - 0 on success
+ *   - -EINVAL if the identified queue size or priority are invalid
+ *   - -EBUSY if the identified queue or its device have already started
+ */
+int
+rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
+		const struct rte_bbdev_queue_conf *conf);
+
+/**
+ * Start a device.
+ * This is the last step needed before operations can be enqueued.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @return
+ *   - 0 on success
+ *   - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_start(uint16_t dev_id);
+
+/**
+ * Stop a device.
+ * The device can be reconfigured, and restarted after being stopped.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @return
+ *   - 0 on success
+ */
+int
+rte_bbdev_stop(uint16_t dev_id);
+
+/**
+ * Close a device.
+ * The device cannot be restarted without reconfiguration!
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @return
+ *   - 0 on success
+ */
+int
+rte_bbdev_close(uint16_t dev_id);
+
+/**
+ * Start a specified queue on a device.
+ * This is only needed if the queue has been stopped, or if the deferred_start
+ * flag has been set when configuring the queue.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the queue.
+ *
+ * @return
+ *   - 0 on success
+ *   - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
+
+/**
+ * Stop a specified queue on a device, to allow reconfiguration.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the queue.
+ *
+ * @return
+ *   - 0 on success
+ *   - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id);
+
+/** Device statistics. */
+struct rte_bbdev_stats {
+	uint64_t enqueued_count;  /**< Count of all operations enqueued */
+	uint64_t dequeued_count;  /**< Count of all operations dequeued */
+	/** Total error count on operations enqueued */
+	uint64_t enqueue_err_count;
+	/** Total error count on operations dequeued */
+	uint64_t dequeue_err_count;
+};
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param stats
+ *   Pointer to structure to where statistics will be copied. On error, this
+ *   location may or may not have been modified.
+ *
+ * @return
+ *   - 0 on success
+ *   - -EINVAL if an invalid parameter pointer is provided
+ */
+int
+rte_bbdev_stats_get(uint16_t dev_id, struct rte_bbdev_stats *stats);
+
+/**
+ * Reset the statistics of a device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   - 0 on success
+ */
+int
+rte_bbdev_stats_reset(uint16_t dev_id);
+
+/** Device information supplied by the device's driver */
+struct rte_bbdev_driver_info {
+	/** Driver name */
+	const char *driver_name;
+
+	/** Maximum number of queues supported by the device */
+	unsigned int max_num_queues;
+	/** Queue size limit (queue size must also be a power of 2) */
+	uint32_t queue_size_lim;
+	/** Set if device offloads operations to hardware */
+	bool hardware_accelerated;
+	/** Max value supported by queue priority */
+	uint8_t max_queue_priority;
+	/** Set if device supports per-queue interrupts */
+	bool queue_intr_supported;
+	/** Minimum alignment of buffers, in bytes */
+	uint16_t min_alignment;
+	/** Default queue configuration used if none is supplied  */
+	struct rte_bbdev_queue_conf default_queue_conf;
+	/** Device operation capabilities */
+	const struct rte_bbdev_op_cap *capabilities;
+	/** Device cpu_flag requirements */
+	const enum rte_cpu_flag_t *cpu_flag_reqs;
+};
+
+/** Macro used at end of bbdev PMD list */
+#define RTE_BBDEV_END_OF_CAPABILITIES_LIST() \
+	{ RTE_BBDEV_OP_NONE }
+
+/** Device information structure used by an application to discover a device's
+ * capabilities and current configuration
+ */
+struct rte_bbdev_info {
+	int socket_id;  /**< NUMA socket that device is on */
+	const char *dev_name;  /**< Unique device name */
+	const struct rte_bus *bus;  /**< Bus information */
+	uint16_t num_queues;  /**< Number of queues currently configured */
+	bool started;  /**< Set if device is currently started */
+	struct rte_bbdev_driver_info drv;  /**< Info from device driver */
+};
+
+/**
+ * Retrieve information about a device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param dev_info
+ *   Pointer to structure to where information will be copied. On error, this
+ *   location may or may not have been modified.
+ *
+ * @return
+ *   - 0 on success
+ *   - -EINVAL if an invalid parameter pointer is provided
+ */
+int
+rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info);
+
+/** Queue information */
+struct rte_bbdev_queue_info {
+	/** Current device configuration */
+	struct rte_bbdev_queue_conf conf;
+	/** Set if queue is currently started */
+	bool started;
+};
+
+/**
+ * Retrieve information about a specific queue on a device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the queue.
+ * @param dev_info
+ *   Pointer to structure to where information will be copied. On error, this
+ *   location may or may not have been modified.
+ *
+ * @return
+ *   - 0 on success
+ *   - -EINVAL if an invalid parameter pointer is provided
+ */
+int
+rte_bbdev_queue_info_get(uint16_t dev_id, uint16_t queue_id,
+		struct rte_bbdev_queue_info *dev_info);
+
+/** @internal The data structure associated with each queue of a device. */
+struct rte_bbdev_queue_data {
+	void *queue_private;  /**< Driver-specific per-queue data */
+	struct rte_bbdev_queue_conf conf;  /**< Current configuration */
+	struct rte_bbdev_stats queue_stats;  /**< Queue statistics */
+	bool started;  /**< Queue state */
+};
+
+/** @internal Enqueue encode operations for processing on queue of a device. */
+typedef uint16_t (*rte_bbdev_enqueue_enc_ops_t)(
+		struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops,
+		uint16_t num);
+
+/** @internal Enqueue decode operations for processing on queue of a device. */
+typedef uint16_t (*rte_bbdev_enqueue_dec_ops_t)(
+		struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops,
+		uint16_t num);
+
+/** @internal Dequeue encode operations from a queue of a device. */
+typedef uint16_t (*rte_bbdev_dequeue_enc_ops_t)(
+		struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t num);
+
+/** @internal Dequeue decode operations from a queue of a device. */
+typedef uint16_t (*rte_bbdev_dequeue_dec_ops_t)(
+		struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t num);
+
+#ifndef RTE_BBDEV_NAME_MAX_LEN
+#define RTE_BBDEV_NAME_MAX_LEN  64  /**< Max length of device name */
+#endif
+
+/**
+ * @internal The data associated with a device, with no function pointers.
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration. Drivers can access
+ * these fields, but should never write to them!
+ */
+struct rte_bbdev_data {
+	char name[RTE_BBDEV_NAME_MAX_LEN]; /**< Unique identifier name */
+	void *dev_private;  /**< Driver-specific private data */
+	uint16_t num_queues;  /**< Number of currently configured queues */
+	struct rte_bbdev_queue_data *queues;  /**< Queue structures */
+	uint16_t dev_id;  /**< Device ID */
+	int socket_id;  /**< NUMA socket that device is on */
+	bool started;  /**< Device run-time state */
+};
+
+/* Forward declarations */
+struct rte_bbdev_ops;
+struct rte_bbdev_callback;
+struct rte_intr_handle;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+
+/**
+ * @internal The data structure associated with a device. Drivers can access
+ * these fields, but should only write to the *_ops fields.
+ */
+struct __rte_cache_aligned rte_bbdev {
+	/** Enqueue encode function */
+	rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
+	/** Enqueue decode function */
+	rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
+	/** Dequeue encode function */
+	rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
+	/** Dequeue decode function */
+	rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
+	const struct rte_bbdev_ops *dev_ops;  /**< Functions exported by PMD */
+	struct rte_bbdev_data *data;  /**< Pointer to device data */
+	enum rte_bbdev_state state;  /**< If device is currently used or not */
+	struct rte_device *device; /**< Backing device */
+	/** User application callback for interrupts if present */
+	struct rte_bbdev_cb_list list_cbs;
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+};
+
+/** @internal array of all devices */
+extern struct rte_bbdev rte_bbdev_devices[];
+
+/**
+ * Enqueue a burst of processed encode operations to a queue of the device.
+ * This function only enqueues as many operations as currently possible and
+ * does not block until @p num_ops entries in the queue are available.
+ * This function does not provide any error notification to avoid the
+ * corresponding overhead.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the queue.
+ * @param ops
+ *   Pointer array containing operations to be enqueued. Must have at least
+ *   @p num_ops entries.
+ * @param num_ops
+ *   The maximum number of operations to enqueue.
+ *
+ * @return
+ *   The number of operations actually enqueued (this is the number of processed
+ *   entries in the @p ops array).
+ */
+static inline uint16_t
+rte_bbdev_enqueue_enc_ops(uint16_t dev_id, uint16_t queue_id,
+		struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+{
+	struct rte_bbdev *dev = &rte_bbdev_devices[dev_id];
+	struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id];
+	uint16_t n = dev->enqueue_enc_ops(q_data, ops, num_ops);
+
+	rte_bbdev_log_verbose("%u encode ops enqueued to dev%u,q%u",
+			n, dev_id, queue_id);
+
+	return n;
+}
+
+/**
+ * Enqueue a burst of processed decode operations to a queue of the device.
+ * This function only enqueues as many operations as currently possible and
+ * does not block until @p num_ops entries in the queue are available.
+ * This function does not provide any error notification to avoid the
+ * corresponding overhead.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the queue.
+ * @param ops
+ *   Pointer array containing operations to be enqueued. Must have at least
+ *   @p num_ops entries.
+ * @param num_ops
+ *   The maximum number of operations to enqueue.
+ *
+ * @return
+ *   The number of operations actually enqueued (this is the number of processed
+ *   entries in the @p ops array).
+ */
+static inline uint16_t
+rte_bbdev_enqueue_dec_ops(uint16_t dev_id, uint16_t queue_id,
+		struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+{
+	struct rte_bbdev *dev = &rte_bbdev_devices[dev_id];
+	struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id];
+	uint16_t n = dev->enqueue_dec_ops(q_data, ops, num_ops);
+
+	rte_bbdev_log_verbose("%u decode ops enqueued to dev%u,q%u",
+			n, dev_id, queue_id);
+
+	return n;
+}
+
+/**
+ * Dequeue a burst of processed encode operations from a queue of the device.
+ * This function returns only the currently available contents of the queue,
+ * and does not block until @p num_ops operations are available.
+ * This function does not provide any error notification to avoid the
+ * corresponding overhead.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the queue.
+ * @param ops
+ *   Pointer array where operations will be dequeued to. Must have at least
+ *   @p num_ops entries.
+ * @param num_ops
+ *   The maximum number of operations to dequeue.
+ *
+ * @return
+ *   The number of operations actually dequeued (this is the number of entries
+ *   copied into the @p ops array).
+ */
+static inline uint16_t
+rte_bbdev_dequeue_enc_ops(uint16_t dev_id, uint16_t queue_id,
+		struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+{
+	struct rte_bbdev *dev = &rte_bbdev_devices[dev_id];
+	struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id];
+	uint16_t n = dev->dequeue_enc_ops(q_data, ops, num_ops);
+
+	rte_bbdev_log_verbose("%u encode ops dequeued from dev%u,q%u",
+			n, dev_id, queue_id);
+
+	return n;
+}
+
+/**
+ * Dequeue a burst of processed decode operations from a queue of the device.
+ * This function returns only the currently available contents of the queue,
+ * and does not block until @p num_ops operations are available.
+ * This function does not provide any error notification to avoid the
+ * corresponding overhead.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the queue.
+ * @param ops
+ *   Pointer array where operations will be dequeued to. Must have at least
+ *   @p num_ops entries.
+ * @param num_ops
+ *   The maximum number of operations to dequeue.
+ *
+ * @return
+ *   The number of operations actually dequeued (this is the number of entries
+ *   copied into the @p ops array).
+ */
+static inline uint16_t
+rte_bbdev_dequeue_dec_ops(uint16_t dev_id, uint16_t queue_id,
+		struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+{
+	struct rte_bbdev *dev = &rte_bbdev_devices[dev_id];
+	struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id];
+	uint16_t n = dev->dequeue_dec_ops(q_data, ops, num_ops);
+
+	rte_bbdev_log_verbose("%u decode ops dequeued from dev%u,q%u",
+			n, dev_id, queue_id);
+
+	return n;
+}
+
+/** Definitions of device event types */
+enum rte_bbdev_event_type {
+	RTE_BBDEV_EVENT_UNKNOWN,  /**< unknown event type */
+	RTE_BBDEV_EVENT_ERROR,  /**< error interrupt event */
+	RTE_BBDEV_EVENT_DEQUEUE,  /**< dequeue event */
+	RTE_BBDEV_EVENT_MAX  /**< max value of this enum */
+};
+
+/**
+ * Typedef for a callback function registered by an application for
+ * notification of device events
+ *
+ * @param dev_id
+ *   Device identifier
+ * @param event
+ *   Device event to register for notification of.
+ * @param cb_arg
+ *   User specified parameter to be passed to user's callback function.
+ * @param ret_param
+ *   To pass data back to user application.
+ */
+typedef void (*rte_bbdev_cb_fn)(uint16_t dev_id,
+		enum rte_bbdev_event_type event, void *cb_arg,
+		void *ret_param);
+
+/**
+ * Register a callback function for specific device id. Multiple callbacks can
+ * be added and will be called in the order they are added when an event is
+ * triggered. Callbacks are called in a separate thread created by the DPDK EAL.
+ *
+ * @param dev_id
+ *   Device id.
+ * @param event
+ *   The event that the callback will be registered for.
+ * @param cb_fn
+ *   User supplied callback function to be called.
+ * @param cb_arg
+ *   Pointer to parameter that will be passed to the callback.
+ *
+ * @return
+ *   Zero on success, negative value on failure.
+ */
+int
+rte_bbdev_callback_register(uint16_t dev_id, enum rte_bbdev_event_type event,
+		rte_bbdev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param dev_id
+ *   The device identifier.
+ * @param event
+ *   The event that the callback will be unregistered for.
+ * @param cb_fn
+ *   User supplied callback function to be unregistered.
+ * @param cb_arg
+ *   Pointer to the parameter supplied when registering the callback.
+ *   (void *)-1 means to remove all registered callbacks with the specified
+ *   function address.
+ *
+ * @return
+ *   - 0 on success
+ *   - -EINVAL if an invalid parameter pointer is provided
+ *   - -EAGAIN if a matching callback is currently executing and could not
+ *     be removed
+ */
+int
+rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
+		rte_bbdev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Enable a one-shot interrupt on the next operation enqueued to a particular
+ * queue. The interrupt will be triggered when the operation is ready to be
+ * dequeued. To handle the interrupt, an epoll file descriptor must be
+ * registered using rte_bbdev_queue_intr_ctl(), and then an application
+ * thread/lcore can wait for the interrupt using rte_epoll_wait().
+ *
+ * @param dev_id
+ *   The device identifier.
+ * @param queue_id
+ *   The index of the queue.
+ *
+ * @return
+ *   - 0 on success
+ *   - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
+
+/**
+ * Disable a one-shot interrupt on the next operation enqueued to a particular
+ * queue (if it has been enabled).
+ *
+ * @param dev_id
+ *   The device identifier.
+ * @param queue_id
+ *   The index of the queue.
+ *
+ * @return
+ *   - 0 on success
+ *   - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
+
+/**
+ * Control interface for per-queue interrupts.
+ *
+ * @param dev_id
+ *   The device identifier.
+ * @param queue_id
+ *   The index of the queue.
+ * @param epfd
+ *   Epoll file descriptor that will be associated with the interrupt source.
+ *   If the special value RTE_EPOLL_PER_THREAD is provided, a per-thread epoll
+ *   file descriptor created by the EAL is used (RTE_EPOLL_PER_THREAD can also
+ *   be used when calling rte_epoll_wait()).
+ * @param op
+ *   The operation to be performed for the vector: RTE_INTR_EVENT_ADD or
+ *   RTE_INTR_EVENT_DEL.
+ * @param data
+ *   User context, that will be returned in the epdata.data field of the
+ *   rte_epoll_event structure filled in by rte_epoll_wait().
+ *
+ * @return
+ *   - 0 on success
+ *   - -ENOTSUP if interrupts are not supported by the identified device
+ *   - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
+		void *data);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BBDEV_H_ */
diff --git a/lib/librte_bbdev/rte_bbdev_op.h b/lib/librte_bbdev/rte_bbdev_op.h
new file mode 100644
index 0000000..99ad899
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev_op.h
@@ -0,0 +1,514 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BBDEV_OP_H_
+#define _RTE_BBDEV_OP_H_
+
+/**
+ * @file rte_bbdev_op.h
+ *
+ * Defines wireless base band layer 1 operations and capabilities
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+#define RTE_BBDEV_MAX_CODE_BLOCKS 64
+
+extern int bbdev_logtype;
+
+/**
+ * Helper macro for logging
+ *
+ * @param level
+ *   Log level: EMERG, ALERT, CRIT, ERR, WARNING, NOTICE, INFO, or DEBUG
+ * @param fmt
+ *   The format string, as in printf(3).
+ * @param ...
+ *   The variable arguments required by the format string.
+ *
+ * @return
+ *   - 0 on success
+ *   - Negative on error
+ */
+#define rte_bbdev_log(level, fmt, ...) \
+	rte_log(RTE_LOG_ ## level, bbdev_logtype, fmt "\n", ##__VA_ARGS__)
+
+/**
+ * Helper macro for debug logging with extra source info
+ *
+ * @param fmt
+ *   The format string, as in printf(3).
+ * @param ...
+ *   The variable arguments required by the format string.
+ *
+ * @return
+ *   - 0 on success
+ *   - Negative on error
+ */
+#define rte_bbdev_log_debug(fmt, ...) \
+	rte_bbdev_log(DEBUG, RTE_STR(__LINE__) ":%s() " fmt, __func__, \
+		##__VA_ARGS__)
+
+/**
+ * Helper macro for extra conditional logging from datapath
+ *
+ * @param fmt
+ *   The format string, as in printf(3).
+ * @param ...
+ *   The variable arguments required by the format string.
+ */
+#define rte_bbdev_log_verbose(fmt, ...) \
+	(void)((RTE_LOG_DEBUG <= RTE_LOG_DP_LEVEL) ? \
+	rte_log(RTE_LOG_DEBUG, \
+		bbdev_logtype, ": " fmt "\n", ##__VA_ARGS__) : 0)
+
+/** Flags for turbo decoder operation and capability structure */
+enum rte_bbdev_op_td_flag_bitmasks {
+	/** If sub block de-interleaving is to be performed. */
+	RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE = (1ULL << 0),
+	/** To use CRC Type 24B (otherwise use CRC Type 24A). */
+	RTE_BBDEV_TURBO_CRC_TYPE_24B = (1ULL << 1),
+	/** If turbo equalization is to be performed. */
+	RTE_BBDEV_TURBO_EQUALIZER = (1ULL << 2),
+	/** If set, saturate soft output to +/-127 */
+	RTE_BBDEV_TURBO_SOFT_OUT_SATURATE = (1ULL << 3),
+	/**
+	 * Set to 1 to start iteration from even, else odd; one iteration =
+	 * max_iteration + 0.5
+	 */
+	RTE_BBDEV_TURBO_HALF_ITERATION_EVEN = (1ULL << 4),
+	/**
+	 * If 0, TD stops after CRC matches; else if 1, runs to end of next
+	 * odd iteration after CRC matches
+	 */
+	RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH = (1ULL << 5),
+	/** Set if the soft output is required */
+	RTE_BBDEV_TURBO_SOFT_OUTPUT = (1ULL << 6),
+	/** Set to enable early termination mode */
+	RTE_BBDEV_TURBO_EARLY_TERMINATION = (1ULL << 7),
+	/**
+	 * Set if the input is raw data (E bytes, no NULL bytes). If not set,
+	 * the input is a full circular buffer with data (Kw bytes) as described
+	 * in 3GPP TS 36.212, section 5.1.4.1.2.
+	 */
+	RTE_BBDEV_TURBO_RAW_INPUT_DATA = (1ULL << 8),
+	/** Set if a device supports decoder dequeue interrupts */
+	RTE_BBDEV_TURBO_DEC_INTERRUPTS = (1ULL << 9),
+};
+
+/** Flags for turbo encoder operation and capability structure */
+enum rte_bbdev_op_te_flag_bitmasks {
+	/** Ignore rv_index and set K0 = 0 */
+	RTE_BBDEV_TURBO_RV_INDEX_BYPASS = (1ULL << 0),
+	/** If rate matching is to be performed */
+	RTE_BBDEV_TURBO_RATE_MATCH = (1ULL << 1),
+	/** This bit must be set to enable CRC-24B generation */
+	RTE_BBDEV_TURBO_CRC_24B_ATTACH = (1ULL << 2),
+	/** This bit must be set to enable CRC-24A generation */
+	RTE_BBDEV_TURBO_CRC_24A_ATTACH = (1ULL << 3),
+	/** Set if a device supports encoder dequeue interrupts */
+	RTE_BBDEV_TURBO_ENC_INTERRUPTS = (1ULL << 4)
+};
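A quick illustration of how an application is expected to use the two flag enums above: every flag it sets in an operation's op_flags must also be present in the device's advertised capability_flags. The following standalone sketch (not part of the patch; macro and function names are illustrative, with the same bit positions as the decoder enum) shows the check:

```c
#include <stdint.h>

/* Illustrative flag values mirroring rte_bbdev_op_td_flag_bitmasks above. */
#define TD_SUBBLOCK_DEINTERLEAVE (1U << 0)
#define TD_CRC_TYPE_24B          (1U << 1)
#define TD_SOFT_OUTPUT           (1U << 6)

/* An operation is supported when every flag it requests is contained in the
 * capability mask the device reports (rte_bbdev_op_cap_turbo_dec). */
static int
td_flags_supported(uint32_t capability_flags, uint32_t op_flags)
{
	return (op_flags & ~capability_flags) == 0;
}
```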
+
+/** Data input and output buffer for BBDEV operations */
+struct rte_bbdev_op_data {
+	struct rte_mbuf *data;
+	/**< First mbuf segment with input/output data. Each segment represents
+	 * one Code Block. For the input operation the mbuf needs to contain
+	 * all Code Blocks. For the output operation the mbuf should consist of
+	 * only one segment and the driver will take care of allocating and
+	 * chaining additional segments for the consecutive Code Blocks if
+	 * needed.
+	 */
+	uint32_t offset;
+	/**< The starting point for the Turbo input/output, in bytes, from the
+	 * start of the first segment's data buffer. It must be smaller than the
+	 * first segment's data_len!
+	 */
+	uint32_t length;
+	/**< Length of Transport Block - number of bytes for Turbo Encode/Decode
+	 * operation for input; length of the output for output operation.
+	 */
+};
+
+struct rte_bbdev_op_dec_cb_params {
+	uint16_t k; /**< size of the input code block in bits (40 - 6144) */
+	uint32_t e; /**< length in bits of the rate match output (17 bits) */
+};
+
+struct rte_bbdev_op_dec_tb_params {
+	/** Size of the input code block in bits (40 - 6144). Used when
+	 * code block index r < c_neg.
+	 */
+	uint16_t k_neg;
+	/** Size of the input code block in bits (40 - 6144). Used when
+	 * code block index r >= c_neg.
+	 */
+	uint16_t k_pos;
+	uint8_t c_neg; /**< number of code blocks using k_neg (0 - 63) */
+	uint8_t c; /**< total number of code blocks (1 - 64) */
+	uint8_t cab; /**< number of code blocks using ea before switching to eb */
+	/** Length in bits of the rate match output (17 bits). Used when
+	 * code block index r < cab.
+	 */
+	uint32_t ea;
+	/** Length in bits of the rate match output (17 bits). Used when
+	 * code block index r >= cab.
+	 */
+	uint32_t eb;
+	uint8_t cb_idx; /**< Code block index */
+};
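For illustration, the per-code-block selection rule described in the field comments above (k_neg/ea while the code block index is below c_neg/cab, k_pos/eb afterwards) can be sketched as a pair of standalone helpers. This is not part of the patch; the struct and function names are illustrative only:

```c
#include <stdint.h>

/* Minimal mirror of the transport block parameters above (illustrative). */
struct tb_params {
	uint16_t k_neg, k_pos;
	uint8_t c_neg, c, cab;
	uint32_t ea, eb;
};

/* Per the field descriptions: code block r uses k_neg while r < c_neg,
 * and k_pos from then on. */
static uint16_t cb_k(const struct tb_params *p, uint8_t r)
{
	return (r < p->c_neg) ? p->k_neg : p->k_pos;
}

/* Likewise, code block r uses rate match output length ea while r < cab,
 * and eb from then on. */
static uint32_t cb_e(const struct tb_params *p, uint8_t r)
{
	return (r < p->cab) ? p->ea : p->eb;
}
```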
+
+/** Operation structure for the Turbo Decoder */
+struct rte_bbdev_op_turbo_dec {
+	struct rte_bbdev_op_data input; /**< input src data */
+	struct rte_bbdev_op_data hard_output; /**< hard output buffer */
+	struct rte_bbdev_op_data soft_output; /**< soft output buffer */
+
+	uint32_t op_flags;  /**< Flags from rte_bbdev_op_td_flag_bitmasks */
+	uint8_t rv_index;  /**< Rv index for rate matching (0 - 3) */
+	uint8_t iter_min:4;  /**< min number of iterations */
+	uint8_t iter_max:4;  /**< max number of iterations */
+	uint8_t iter_count;  /**< Actual num. of iterations performed */
+	/** 5 bit extrinsic scale (scale factor on extrinsic info) */
+	uint8_t ext_scale;
+	/** Number of MAP engines, must be power of 2 (or 0 to auto-select) */
+	uint8_t num_maps;
+	uint8_t code_block_mode; /**< 0 - transport block, 1 - code block */
+	union {
+		/** Struct which stores Code Block specific parameters */
+		struct rte_bbdev_op_dec_cb_params cb_params;
+		/** Struct which stores Transport Block specific parameters */
+		struct rte_bbdev_op_dec_tb_params tb_params;
+	};
+};
+
+struct rte_bbdev_op_enc_cb_params {
+	uint16_t k; /**< size of the input code block in bits (40 - 6144) */
+	uint32_t e; /**< length in bits of the rate match output (17 bits) */
+	uint16_t ncb; /**< Ncb parameter for rate matching, range [k:3(k+4)] */
+};
+
+struct rte_bbdev_op_enc_tb_params {
+	/** Size of the input code block in bits (40 - 6144). Used when
+	 * code block index r < c_neg.
+	 */
+	uint16_t k_neg;
+	/** Size of the input code block in bits (40 - 6144). Used when
+	 * code block index r >= c_neg.
+	 */
+	uint16_t k_pos;
+	uint8_t c_neg; /**< number of code blocks using k_neg (0 - 63) */
+	uint8_t c; /**< total number of code blocks (1 - 64) */
+	uint8_t cab; /**< number of code blocks using ea before switching to eb */
+	/** Length in bits of the rate match output (17 bits). Used when
+	 * code block index r < cab.
+	 */
+	uint32_t ea;
+	/** Length in bits of the rate match output (17 bits). Used when
+	 * code block index r >= cab.
+	 */
+	uint32_t eb;
+	/** Ncb parameter for rate matching, range [k : 3(k+4)]. Used when
+	 * code block index r < c_neg.
+	 */
+	uint16_t ncb_neg;
+	/** Ncb parameter for rate matching, range [k : 3(k+4)]. Used when
+	 * code block index r >= c_neg.
+	 */
+	uint16_t ncb_pos;
+	uint8_t cb_idx; /**< Code block index */
+};
+
+/** Operation structure for the Turbo Encoder */
+struct rte_bbdev_op_turbo_enc {
+	struct rte_bbdev_op_data input; /**< input src data */
+	struct rte_bbdev_op_data output; /**< output buffer */
+
+	uint32_t op_flags;  /**< Flags from rte_bbdev_op_te_flag_bitmasks */
+	int32_t n_soft;  /**< total number of soft bits according to UE cat. */
+	int32_t k_mimo;  /**< MIMO type */
+	int32_t mdl_harq;  /**< the maximum number of DL HARQ processes */
+	/** Total number of bits available for transmission of one TB */
+	int32_t g;
+	int32_t nl;  /**< number of layers */
+	int32_t qm;  /**< modulation type */
+	uint8_t rv_index;  /**< Rv index for rate matching (0 - 3) */
+	uint8_t code_block_mode; /**< 0 - transport block, 1 - code block */
+	union {
+		/** Struct which stores Code Block specific parameters */
+		struct rte_bbdev_op_enc_cb_params cb_params;
+		/** Struct which stores Transport Block specific parameters */
+		struct rte_bbdev_op_enc_tb_params tb_params;
+	};
+};
+
+/** List of the capabilities for the Turbo Decoder */
+struct rte_bbdev_op_cap_turbo_dec {
+	/** Flags from rte_bbdev_op_td_flag_bitmasks */
+	uint32_t capability_flags;
+	uint8_t num_buffers_src;  /**< Num input code block buffers */
+	/** Num hard output code block buffers */
+	uint8_t num_buffers_hard_out;
+	/** Num soft output code block buffers if supported by the driver */
+	uint8_t num_buffers_soft_out;
+};
+
+/** List of the capabilities for the Turbo Encoder */
+struct rte_bbdev_op_cap_turbo_enc {
+	/** Flags from rte_bbdev_op_te_flag_bitmasks */
+	uint32_t capability_flags;
+	uint8_t num_buffers_src;  /**< Num input code block buffers */
+	uint8_t num_buffers_dst;  /**< Num output code block buffers */
+};
+
+/** Different operation types supported by the device */
+enum rte_bbdev_op_type {
+	RTE_BBDEV_OP_NONE,  /**< Dummy operation that does nothing */
+	RTE_BBDEV_OP_TURBO_DEC,  /**< Turbo decode */
+	RTE_BBDEV_OP_TURBO_ENC,  /**< Turbo encode */
+	RTE_BBDEV_OP_TYPE_COUNT,  /**< Count of different op types */
+};
+
+/** Bit indexes of possible errors reported through status field */
+enum {
+	RTE_BBDEV_DRV_ERROR,
+	RTE_BBDEV_DATA_ERROR,
+	RTE_BBDEV_CRC_ERROR,
+};
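Since the anonymous enum above defines bit *indexes* rather than masks, several error classes can be reported in one status word. A standalone sketch of the intended test (illustrative names, not part of the patch):

```c
/* Illustrative: same bit indexes as the anonymous status enum above. */
enum { DRV_ERROR, DATA_ERROR, CRC_ERROR };

/* The status field carries one bit per error class; each index is tested
 * by shifting it into a mask. */
static int status_has(int status, int err_bit)
{
	return (status & (1 << err_bit)) != 0;
}
```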
+
+/** Structure specifying a single encode operation */
+struct rte_bbdev_enc_op {
+	int status;  /**< Status of operation that was performed */
+	struct rte_mempool *mempool;  /**< Mempool which op instance is in */
+	void *opaque_data;  /**< Opaque pointer for user data */
+	/** Contains encoder specific parameters */
+	struct rte_bbdev_op_turbo_enc turbo_enc;
+};
+
+/** Structure specifying a single decode operation */
+struct rte_bbdev_dec_op {
+	int status;  /**< Status of operation that was performed */
+	struct rte_mempool *mempool;  /**< Mempool which op instance is in */
+	void *opaque_data;  /**< Opaque pointer for user data */
+	/** Contains decoder specific parameters */
+	struct rte_bbdev_op_turbo_dec turbo_dec;
+};
+
+/** Operation capabilities supported by a device */
+struct rte_bbdev_op_cap {
+	enum rte_bbdev_op_type type;  /**< Type of operation */
+	union {
+		struct rte_bbdev_op_cap_turbo_dec turbo_dec;
+		struct rte_bbdev_op_cap_turbo_enc turbo_enc;
+	} cap;  /**< Operation-type specific capabilities */
+};
+
+/** @internal Private data structure stored with operation pool. */
+struct rte_bbdev_op_pool_private {
+	enum rte_bbdev_op_type type;  /**< Type of operations in a pool */
+};
+
+/**
+ * Converts queue operation type from enum to string
+ *
+ * @param op_type
+ *   Operation type as enum
+ *
+ * @returns
+ *   Operation type as string
+ *
+ */
+const char *
+rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type);
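The header only declares rte_bbdev_op_type_str(); a plausible shape for such an enum-to-string helper (a standalone sketch with local names, not the library's actual implementation) is a static name table guarded by the _COUNT sentinel:

```c
#include <stddef.h>
#include <string.h>

/* Local mirror of rte_bbdev_op_type (illustrative). */
enum op_type { OP_NONE, OP_TURBO_DEC, OP_TURBO_ENC, OP_TYPE_COUNT };

/* Return a static string for known types, NULL for anything out of range. */
static const char *
op_type_str(enum op_type t)
{
	static const char * const names[] = {
		"RTE_BBDEV_OP_NONE",
		"RTE_BBDEV_OP_TURBO_DEC",
		"RTE_BBDEV_OP_TURBO_ENC",
	};

	if (t >= OP_TYPE_COUNT)
		return NULL;
	return names[t];
}
```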
+
+/**
+ * Creates a bbdev operation mempool
+ *
+ * @param name
+ *   Pool name.
+ * @param type
+ *   Operation type, use RTE_BBDEV_OP_NONE for a pool which supports all
+ *   operation types.
+ * @param num_elements
+ *   Number of elements in the pool.
+ * @param cache_size
+ *   Number of elements to cache on an lcore, see rte_mempool_create() for
+ *   further details about cache size.
+ * @param socket_id
+ *   Socket to allocate memory on.
+ *
+ * @return
+ *   - Pointer to a mempool on success,
+ *   - NULL pointer on failure.
+ */
+struct rte_mempool *
+rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
+		unsigned int num_elements, unsigned int cache_size,
+		int socket_id);
+
+/**
+ * Bulk allocate encode operations from a mempool with parameter defaults reset.
+ *
+ * @param mempool
+ *   Operation mempool, created by rte_bbdev_op_pool_create().
+ * @param ops
+ *   Output array to place allocated operations
+ * @param num_ops
+ *   Number of operations to allocate
+ *
+ * @returns
+ *   - 0 on success
+ *   - -EINVAL if an invalid mempool is provided
+ */
+static inline int
+rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,
+		struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+{
+	struct rte_bbdev_op_pool_private *priv;
+	int ret;
+
+	/* Check type */
+	priv = (struct rte_bbdev_op_pool_private *)
+			rte_mempool_get_priv(mempool);
+	if (unlikely(priv->type != RTE_BBDEV_OP_TURBO_ENC))
+		return -EINVAL;
+
+	/* Get elements */
+	ret = rte_mempool_get_bulk(mempool, (void **)ops, num_ops);
+	if (unlikely(ret < 0))
+		return ret;
+
+	rte_bbdev_log_verbose("%u encode ops allocated from %s\n",
+			num_ops, mempool->name);
+
+	return 0;
+}
+
+/**
+ * Bulk allocate decode operations from a mempool with parameter defaults reset.
+ *
+ * @param mempool
+ *   Operation mempool, created by rte_bbdev_op_pool_create().
+ * @param ops
+ *   Output array to place allocated operations
+ * @param num_ops
+ *   Number of operations to allocate
+ *
+ * @returns
+ *   - 0 on success
+ *   - -EINVAL if an invalid mempool is provided
+ */
+static inline int
+rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,
+		struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+{
+	struct rte_bbdev_op_pool_private *priv;
+	int ret;
+
+	/* Check type */
+	priv = (struct rte_bbdev_op_pool_private *)
+			rte_mempool_get_priv(mempool);
+	if (unlikely(priv->type != RTE_BBDEV_OP_TURBO_DEC))
+		return -EINVAL;
+
+	/* Get elements */
+	ret = rte_mempool_get_bulk(mempool, (void **)ops, num_ops);
+	if (unlikely(ret < 0))
+		return ret;
+
+	rte_bbdev_log_verbose("%u decode ops allocated from %s\n",
+			num_ops, mempool->name);
+
+	return 0;
+}
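The two alloc helpers above share one pattern: the pool's private area carries the op type it was created for, and a typed allocation is rejected before any elements are fetched. A self-contained mock of that type-guard (illustrative names; the real helpers use rte_mempool_get_priv() and rte_mempool_get_bulk()):

```c
#include <errno.h>
#include <stdint.h>

/* The pool remembers which op type it holds, as in
 * struct rte_bbdev_op_pool_private. */
enum pool_op_type { POOL_NONE, POOL_TURBO_DEC, POOL_TURBO_ENC };

struct mock_pool { enum pool_op_type type; };

/* Typed alloc helper: reject pools of the wrong type up front. */
static int
mock_enc_alloc_bulk(struct mock_pool *p, uint16_t num_ops)
{
	(void)num_ops;
	if (p->type != POOL_TURBO_ENC)
		return -EINVAL; /* wrong pool type, as in the real helper */
	/* ... a real implementation would now do the bulk get ... */
	return 0;
}
```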
+
+/**
+ * Free decode operation structures that were allocated by
+ * rte_bbdev_dec_op_alloc_bulk().
+ * All structures must belong to the same mempool.
+ *
+ * @param ops
+ *   Operation structures
+ * @param num_ops
+ *   Number of structures
+ */
+static inline void
+rte_bbdev_dec_op_free_bulk(struct rte_bbdev_dec_op **ops, unsigned int num_ops)
+{
+	if (num_ops > 0) {
+		rte_mempool_put_bulk(ops[0]->mempool, (void **)ops, num_ops);
+		rte_bbdev_log_verbose("%u decode ops freed to %s\n", num_ops,
+				ops[0]->mempool->name);
+	}
+}
+
+/**
+ * Free encode operation structures that were allocated by
+ * rte_bbdev_enc_op_alloc_bulk().
+ * All structures must belong to the same mempool.
+ *
+ * @param ops
+ *   Operation structures
+ * @param num_ops
+ *   Number of structures
+ */
+static inline void
+rte_bbdev_enc_op_free_bulk(struct rte_bbdev_enc_op **ops, unsigned int num_ops)
+{
+	if (num_ops > 0) {
+		rte_mempool_put_bulk(ops[0]->mempool, (void **)ops, num_ops);
+		rte_bbdev_log_verbose("%u encode ops freed to %s\n", num_ops,
+				ops[0]->mempool->name);
+	}
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BBDEV_OP_H_ */
diff --git a/lib/librte_bbdev/rte_bbdev_pci.h b/lib/librte_bbdev/rte_bbdev_pci.h
new file mode 100644
index 0000000..1a32132
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev_pci.h
@@ -0,0 +1,288 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BBDEV_PCI_H_
+#define _RTE_BBDEV_PCI_H_
+
+/**
+ * @file rte_bbdev_pci.h
+ *
+ * Wireless base band PCI-driver-facing APIs.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API provides the helper functions for device PCI drivers to register
+ * with the bbdev interface. User applications should not use this API.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <rte_log.h>
+#include <rte_pci.h>
+
+#include "rte_bbdev_pmd.h"
+
+/**
+ * @internal
+ * Initialisation function of a HW driver invoked for each matching HW device
+ * detected during the EAL initialisation phase, or when a new device is
+ * attached. The driver should initialise the device and its own software
+ * context.
+ *
+ * @param dev
+ *   This is a new device structure instance that is associated with the
+ *   matching device.
+ *   The driver *must* populate the following fields:
+ *    - dev_ops
+ *    - enqueue_ops
+ *    - dequeue_ops
+ *
+ * @return
+ *   - 0 on success
+ */
+typedef int (*rte_bbdev_init_t)(struct rte_bbdev *dev);
+
+/**
+ * @internal
+ * Finalization function of a HW driver invoked for each matching HW device
+ * detected during the closing phase, or when a device is detached.
+ *
+ * @param dev
+ *   The device structure instance that is associated with the matching device.
+ *
+ * @return
+ *   - 0 on success
+ */
+typedef int (*rte_bbdev_uninit_t)(struct rte_bbdev *dev);
+
+/**
+ * @internal
+ * Allocates a new bbdev slot for a PCI device and returns the pointer to that
+ * slot for the driver to use.
+ *
+ * @param dev
+ *   Pointer to the PCI device
+ *
+ * @param private_data_size
+ *   Size of private data structure
+ *
+ * @return
+ *   A pointer to a rte_bbdev or NULL if allocation failed.
+ */
+static inline struct rte_bbdev *
+rte_bbdev_pci_allocate(struct rte_pci_device *dev, size_t private_data_size)
+{
+	const char *name;
+	struct rte_bbdev *bbdev = NULL;
+
+	if (dev == NULL) {
+		rte_bbdev_log(ERR, "NULL PCI device");
+		return NULL;
+	}
+
+	name = dev->device.name;
+
+	/* Allocate memory to be used privately by drivers */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		bbdev = rte_bbdev_allocate(name);
+		if (!bbdev)
+			return NULL;
+
+		if (private_data_size) {
+			bbdev->data->dev_private = rte_zmalloc_socket(name,
+					private_data_size, RTE_CACHE_LINE_SIZE,
+					dev->device.numa_node);
+			if (bbdev->data->dev_private == NULL) {
+				rte_bbdev_log(CRIT,
+						"Allocation of %zu bytes for device \"%s\" failed",
+						private_data_size, name);
+				rte_bbdev_release(bbdev);
+				return NULL;
+			}
+		}
+	}
+
+	/* In a secondary process no bbdev is allocated above */
+	if (bbdev == NULL)
+		return NULL;
+
+	bbdev->data->socket_id = dev->device.numa_node;
+	return bbdev;
+}
+
+/**
+ * @internal
+ * Releases a bbdev slot allocated by rte_bbdev_pci_allocate(), freeing any
+ * driver-private data in the primary process.
+ */
+static inline void
+rte_bbdev_pci_release(struct rte_bbdev *bbdev)
+{
+	int ret;
+	uint16_t dev_id = bbdev->data->dev_id;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(bbdev->data->dev_private);
+
+	ret = rte_bbdev_release(bbdev);
+	if (ret)
+		rte_bbdev_log(ERR, "Device %i failed to uninit: %i", dev_id,
+				ret);
+
+	rte_bbdev_log_debug("Un-initialised HW device id = %u", dev_id);
+}
+
+/**
+ * @internal
+ * Wrapper for use by pci drivers as a .probe function to attach to a bbdev
+ * interface.
+ */
+static inline int
+rte_bbdev_pci_generic_probe(struct rte_pci_device *pci_dev,
+		size_t private_data_size,
+		rte_bbdev_init_t dev_init)
+{
+	struct rte_bbdev *bbdev = NULL;
+	char dev_name[RTE_BBDEV_NAME_MAX_LEN];
+	int ret;
+
+	rte_pci_device_name(&pci_dev->addr, dev_name, sizeof(dev_name));
+
+	bbdev = rte_bbdev_pci_allocate(pci_dev, private_data_size);
+	if (bbdev == NULL)
+		return -ENOMEM;
+
+	/* Fill HW specific part of device structure */
+	bbdev->device = &pci_dev->device;
+	bbdev->intr_handle = &pci_dev->intr_handle;
+
+	/* Invoke PMD device initialization function */
+	if (dev_init) {
+		ret = dev_init(bbdev);
+		if (ret < 0) {
+			rte_bbdev_log(ERR,
+					"Driver %s(vendor_id=0x%x device_id=0x%x) failed: %i",
+					pci_dev->driver->driver.name,
+					pci_dev->id.vendor_id,
+					pci_dev->id.device_id, ret);
+			rte_bbdev_pci_release(bbdev);
+			return -ENXIO;
+		}
+
+		if (bbdev->dev_ops == NULL) {
+			rte_bbdev_log(ERR, "NULL dev_ops structure in device %u",
+					bbdev->data->dev_id);
+			rte_bbdev_pci_release(bbdev);
+			return -ENODEV;
+		}
+		if (bbdev->enqueue_enc_ops == NULL) {
+			rte_bbdev_log(ERR,
+					"NULL enqueue_enc_ops structure in device %u",
+					bbdev->data->dev_id);
+			rte_bbdev_pci_release(bbdev);
+			return -ENODEV;
+		}
+		if (bbdev->enqueue_dec_ops == NULL) {
+			rte_bbdev_log(ERR,
+					"NULL enqueue_dec_ops structure in device %u",
+					bbdev->data->dev_id);
+			rte_bbdev_pci_release(bbdev);
+			return -ENODEV;
+		}
+		if (bbdev->dequeue_enc_ops == NULL) {
+			rte_bbdev_log(ERR,
+					"NULL dequeue_enc_ops structure in device %u",
+					bbdev->data->dev_id);
+			rte_bbdev_pci_release(bbdev);
+			return -ENODEV;
+		}
+		if (bbdev->dequeue_dec_ops == NULL) {
+			rte_bbdev_log(ERR,
+					"NULL dequeue_dec_ops structure in device %u",
+					bbdev->data->dev_id);
+			rte_bbdev_pci_release(bbdev);
+			return -ENODEV;
+		}
+	} else {
+		rte_bbdev_log(ERR,
+				"Device init function doesn't exist for driver %s",
+				pci_dev->driver->driver.name);
+		rte_bbdev_pci_release(bbdev);
+		return -ENODEV;
+	}
+
+	rte_bbdev_log_debug("Initialised HW device %s (id = %u)",
+			dev_name, bbdev->data->dev_id);
+	return 0;
+}
+
+/**
+ * @internal
+ * Wrapper for use by pci drivers as a .remove function to detach a bbdev
+ * interface.
+ */
+static inline int
+rte_bbdev_pci_generic_remove(struct rte_pci_device *pci_dev,
+		rte_bbdev_uninit_t dev_uninit)
+{
+	struct rte_bbdev *bbdev;
+	int ret;
+	uint8_t dev_id;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Find device */
+	bbdev = rte_bbdev_get_named_dev(pci_dev->device.name);
+	if (bbdev == NULL) {
+		rte_bbdev_log(CRIT,
+				"Couldn't find HW dev \"%s\" to uninitialise it",
+				pci_dev->device.name);
+		return -ENODEV;
+	}
+	dev_id = bbdev->data->dev_id;
+
+	/* Close device before uninit */
+	ret = rte_bbdev_close(dev_id);
+	if (ret < 0)
+		rte_bbdev_log(ERR,
+				"Device %i failed to close during uninit: %i",
+				dev_id, ret);
+
+	/* Invoke PMD device uninit function */
+	if (dev_uninit) {
+		ret = dev_uninit(bbdev);
+		if (ret)
+			rte_bbdev_log(ERR, "Device %i failed to uninit: %i",
+					dev_id, ret);
+	}
+
+	rte_bbdev_pci_release(bbdev);
+	return 0;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BBDEV_PCI_H_ */
diff --git a/lib/librte_bbdev/rte_bbdev_pmd.h b/lib/librte_bbdev/rte_bbdev_pmd.h
new file mode 100644
index 0000000..cf65de0
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev_pmd.h
@@ -0,0 +1,223 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BBDEV_PMD_H_
+#define _RTE_BBDEV_PMD_H_
+
+/**
+ * @file rte_bbdev_pmd.h
+ *
+ * Wireless base band driver-facing APIs.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API provides the mechanism for device drivers to register with the
+ * bbdev interface. User applications should not use this API.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <rte_log.h>
+
+#include "rte_bbdev.h"
+
+/** Suggested value for SW based devices */
+#define RTE_BBDEV_DEFAULT_MAX_NB_QUEUES RTE_MAX_LCORE
+
+/** Suggested value for SW based devices */
+#define RTE_BBDEV_QUEUE_SIZE_LIMIT 16384
+
+/**
+ * @internal
+ * Allocates a new slot for a bbdev and returns the pointer to that slot
+ * for the driver to use.
+ *
+ * @param name
+ *   Unique identifier name for each bbdev device
+ *
+ * @return
+ *   - Slot in the rte_bbdev array for a new device;
+ */
+struct rte_bbdev *
+rte_bbdev_allocate(const char *name);
+
+/**
+ * @internal
+ * Release the specified bbdev.
+ *
+ * @param bbdev
+ *   The *bbdev* pointer is the address of the *rte_bbdev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+int
+rte_bbdev_release(struct rte_bbdev *bbdev);
+
+/**
+ * Get the device structure for a named device.
+ *
+ * @param name
+ *   Name of the device
+ *
+ * @return
+ *   - The device structure pointer, or
+ *   - NULL otherwise
+ *
+ */
+struct rte_bbdev *
+rte_bbdev_get_named_dev(const char *name);
+
+/**
+ * Definitions of all functions exported by a driver through the generic
+ * structure of type *rte_bbdev_ops* supplied in the *rte_bbdev* structure
+ * associated with a device.
+ */
+
+/** @internal Function used to configure device memory. */
+typedef int (*rte_bbdev_setup_queues_t)(struct rte_bbdev *dev,
+		uint16_t num_queues, int socket_id);
+
+/** @internal Function used to configure interrupts for a device. */
+typedef int (*rte_bbdev_intr_enable_t)(struct rte_bbdev *dev);
+
+/** @internal Function to allocate and configure a device queue. */
+typedef int (*rte_bbdev_queue_setup_t)(struct rte_bbdev *dev,
+		uint16_t queue_id, const struct rte_bbdev_queue_conf *conf);
+
+/** @internal
+ * Function to release memory resources allocated for a device queue.
+ */
+typedef int (*rte_bbdev_queue_release_t)(struct rte_bbdev *dev,
+		uint16_t queue_id);
+
+/** @internal Function to start a configured device. */
+typedef int (*rte_bbdev_start_t)(struct rte_bbdev *dev);
+
+/** @internal Function to stop a device. */
+typedef void (*rte_bbdev_stop_t)(struct rte_bbdev *dev);
+
+/** @internal Function to close a device. */
+typedef int (*rte_bbdev_close_t)(struct rte_bbdev *dev);
+
+/** @internal Function to start a device queue. */
+typedef int (*rte_bbdev_queue_start_t)(struct rte_bbdev *dev,
+		uint16_t queue_id);
+
+/** @internal Function to stop a device queue. */
+typedef int (*rte_bbdev_queue_stop_t)(struct rte_bbdev *dev, uint16_t queue_id);
+
+/** @internal Function to read stats from a device. */
+typedef void (*rte_bbdev_stats_get_t)(struct rte_bbdev *dev,
+		struct rte_bbdev_stats *stats);
+
+/** @internal Function to reset stats on a device. */
+typedef void (*rte_bbdev_stats_reset_t)(struct rte_bbdev *dev);
+
+/** @internal Function to retrieve specific information of a device. */
+typedef void (*rte_bbdev_info_get_t)(struct rte_bbdev *dev,
+		struct rte_bbdev_driver_info *dev_info);
+
+/** @internal
+ * Function to enable interrupt for next op on a queue of a device.
+ */
+typedef int (*rte_bbdev_queue_intr_enable_t)(struct rte_bbdev *dev,
+				    uint16_t queue_id);
+
+/** @internal
+ * Function to disable interrupt for next op on a queue of a device.
+ */
+typedef int (*rte_bbdev_queue_intr_disable_t)(struct rte_bbdev *dev,
+				    uint16_t queue_id);
+
+/**
+ * Operations implemented by drivers. Fields marked as "Required" must be
+ * provided by a driver for a device to have basic functionality. "Optional"
+ * fields are for non-vital operations.
+ */
+struct rte_bbdev_ops {
+	/** Allocate and configure device memory. Optional. */
+	rte_bbdev_setup_queues_t setup_queues;
+	/** Configure interrupts. Optional. */
+	rte_bbdev_intr_enable_t intr_enable;
+	/** Start device. Optional. */
+	rte_bbdev_start_t start;
+	/** Stop device. Optional. */
+	rte_bbdev_stop_t stop;
+	/** Close device. Optional. */
+	rte_bbdev_close_t close;
+
+	/** Get device info. Required. */
+	rte_bbdev_info_get_t info_get;
+	/** Get device statistics. Optional. */
+	rte_bbdev_stats_get_t stats_get;
+	/** Reset device statistics. Optional. */
+	rte_bbdev_stats_reset_t stats_reset;
+
+	/** Set up a device queue. Required. */
+	rte_bbdev_queue_setup_t queue_setup;
+	/** Release a queue. Required. */
+	rte_bbdev_queue_release_t queue_release;
+	/** Start a queue. Optional. */
+	rte_bbdev_queue_start_t queue_start;
+	/** Stop a queue. Optional. */
+	rte_bbdev_queue_stop_t queue_stop;
+
+	/** Enable queue interrupt. Optional. */
+	rte_bbdev_queue_intr_enable_t queue_intr_enable;
+	/** Disable queue interrupt. Optional. */
+	rte_bbdev_queue_intr_disable_t queue_intr_disable;
+};
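The Required/Optional split in rte_bbdev_ops suggests a registration-time sanity check the framework could apply: only the required function pointers must be non-NULL. A standalone mock of that pattern (illustrative names, not part of the patch):

```c
#include <stddef.h>

/* Mock ops table: only the "required" entries must be populated. */
struct mock_ops {
	int (*info_get)(void);      /* required */
	int (*queue_setup)(void);   /* required */
	int (*queue_release)(void); /* required */
	int (*start)(void);         /* optional */
};

/* Check that every required entry is provided. */
static int ops_valid(const struct mock_ops *ops)
{
	return ops->info_get != NULL && ops->queue_setup != NULL &&
			ops->queue_release != NULL;
}

/* Trivial stand-in implementation used to populate the table. */
static int dummy(void) { return 0; }
```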
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device and event type.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param event
+ *   Event type.
+ * @param ret_param
+ *   To pass data back to user application.
+ */
+void
+rte_bbdev_pmd_callback_process(struct rte_bbdev *dev,
+	enum rte_bbdev_event_type event, void *ret_param);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BBDEV_PMD_H_ */
diff --git a/lib/librte_bbdev/rte_bbdev_vdev.h b/lib/librte_bbdev/rte_bbdev_vdev.h
new file mode 100644
index 0000000..fbaef2e
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev_vdev.h
@@ -0,0 +1,102 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BBDEV_VDEV_H_
+#define _RTE_BBDEV_VDEV_H_
+
+/**
+ * @file rte_bbdev_vdev.h
+ *
+ * Wireless base band virtual device driver-facing APIs.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API provides the helper functions for virtual device drivers to register
+ * with the bbdev interface. User applications should not use this API.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <rte_vdev.h>
+
+#include "rte_bbdev_pmd.h"
+
+/**
+ * @internal
+ * Allocates a new slot for a virtual bbdev and returns the pointer to that
+ * slot for the driver to use.
+ *
+ * @param dev
+ *   Pointer to the virtual device.
+ *
+ * @param private_data_size
+ *   Size of the driver's private data structure. If non-zero, the private
+ *   data is zero-allocated, cache-line aligned, on the vdev's NUMA node.
+ *
+ * @return
+ *   A pointer to the rte_bbdev, or NULL if allocation failed. On failure,
+ *   any device slot taken during the call is released again.
+ */
+static inline struct rte_bbdev *
+rte_bbdev_vdev_allocate(struct rte_vdev_device *dev, size_t private_data_size)
+{
+	struct rte_bbdev *bbdev;
+	const char *name = rte_vdev_device_name(dev);
+
+	bbdev = rte_bbdev_allocate(name);
+	if (!bbdev)
+		return NULL;
+
+	if (private_data_size) {
+		bbdev->data->dev_private = rte_zmalloc_socket(name,
+				private_data_size, RTE_CACHE_LINE_SIZE,
+				dev->device.numa_node);
+		if (!bbdev->data->dev_private) {
+			rte_bbdev_release(bbdev);
+			return NULL;
+		}
+	}
+
+	bbdev->data->socket_id = dev->device.numa_node;
+	bbdev->device = &dev->device;
+	bbdev->intr_handle = NULL;
+
+	return bbdev;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BBDEV_VDEV_H_ */
diff --git a/lib/librte_bbdev/rte_bbdev_version.map b/lib/librte_bbdev/rte_bbdev_version.map
new file mode 100644
index 0000000..316a275
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev_version.map
@@ -0,0 +1,37 @@
+EXPERIMENTAL {
+	global:
+
+	rte_bbdev_allocate;
+	rte_bbdev_callback_register;
+	rte_bbdev_callback_unregister;
+	rte_bbdev_close;
+	rte_bbdev_count;
+	rte_bbdev_dequeue_dec_ops;
+	rte_bbdev_dequeue_enc_ops;
+	rte_bbdev_devices;
+	rte_bbdev_enqueue_dec_ops;
+	rte_bbdev_enqueue_enc_ops;
+	rte_bbdev_find_next;
+	rte_bbdev_get_named_dev;
+	rte_bbdev_info_get;
+	rte_bbdev_intr_enable;
+	rte_bbdev_is_valid;
+	rte_bbdev_op_pool_create;
+	rte_bbdev_op_type_str;
+	rte_bbdev_pmd_callback_process;
+	rte_bbdev_queue_configure;
+	rte_bbdev_queue_info_get;
+	rte_bbdev_queue_intr_ctl;
+	rte_bbdev_queue_intr_disable;
+	rte_bbdev_queue_intr_enable;
+	rte_bbdev_queue_start;
+	rte_bbdev_queue_stop;
+	rte_bbdev_release;
+	rte_bbdev_setup_queues;
+	rte_bbdev_start;
+	rte_bbdev_stats_get;
+	rte_bbdev_stats_reset;
+	rte_bbdev_stop;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8192b98..e0f9d13 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -94,6 +94,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_BBDEV)          += -lrte_bbdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING)   += -lrte_mempool_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
@@ -156,6 +157,18 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_VHOST)      += -lrte_pmd_vhost
 endif # $(CONFIG_RTE_LIBRTE_VHOST)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD)    += -lrte_pmd_vmxnet3_uio
 
+ifeq ($(CONFIG_RTE_LIBRTE_BBDEV),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL)     += -lrte_pmd_bbdev_null
+
+# The TURBO SW PMD depends on the FLEXRAN SDK libraries
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -lrte_pmd_bbdev_turbo_sw
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -L$(FLEXRAN_SDK)/lib_common -lcommon
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -L$(FLEXRAN_SDK)/lib_crc -lcrc
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -L$(FLEXRAN_SDK)/lib_turbo -lturbo
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -L$(FLEXRAN_SDK)/lib_rate_matching -lrate_matching
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -lirc -limf -lstdc++ -lipps
+endif # CONFIG_RTE_LIBRTE_BBDEV
+
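For context, building with the turbo_sw PMD under the legacy make system would look roughly as follows. This is a sketch, not part of the patch: the FLEXRAN SDK install path is an assumption, and the exact config flag default may differ; the linkage above only takes effect when `CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW` is enabled and `FLEXRAN_SDK` points at a built SDK tree.

```shell
# Hypothetical path; FLEXRAN_SDK must provide lib_common, lib_crc,
# lib_turbo and lib_rate_matching as referenced in rte.app.mk above.
export FLEXRAN_SDK=/opt/flexran_sdk/build

make config T=x86_64-native-linuxapp-gcc
# Enable the software turbo PMD in the generated build config.
sed -i 's/CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=n/CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=y/' \
	build/.config
make
```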
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)    += -lrte_pmd_aesni_mb
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)    += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
-- 
2.7.4

Thread overview: 6+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2017-10-18  2:14 Amr Mokhtar [this message]
2017-10-18  2:14 ` [dpdk-dev] [PATCH v2 2/5] bbdev: PMD drivers (null/turbo_sw) Amr Mokhtar
2017-10-18  2:14 ` [dpdk-dev] [PATCH v2 3/5] bbdev: test applications Amr Mokhtar
2017-10-18  2:14 ` [dpdk-dev] [PATCH v2 4/5] bbdev: sample app Amr Mokhtar
2017-10-18  2:14 ` [dpdk-dev] [PATCH v2 5/5] bbdev: documentation Amr Mokhtar
2017-10-18  2:14 ` [dpdk-dev] [PATCH v2 0/5] Wireless Base Band Device (bbdev) Amr Mokhtar
