* [dpdk-dev] [PATCH v5 1/5] bbdev: introducing wireless base band device abstraction lib
@ 2018-01-11 19:23 Amr Mokhtar
From: Amr Mokhtar @ 2018-01-11 19:23 UTC
To: dev
Cc: thomas, ferruh.yigit, anatoly.burakov, pablo.de.lara.guarch,
niall.power, chris.macnamara, Amr Mokhtar
- wireless baseband device (bbdev) library files
- bbdev is tagged as EXPERIMENTAL
- Makefiles and configuration macros definition
- bbdev library is enabled by default
- release notes of the initial version
Signed-off-by: Amr Mokhtar <amr.mokhtar@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf | 1 +
doc/guides/prog_guide/bbdev.rst | 585 +++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/rel_notes/release_18_02.rst | 12 +
lib/Makefile | 2 +
lib/librte_bbdev/Makefile | 28 +
lib/librte_bbdev/rte_bbdev.c | 1117 ++++++++++++++++++++++++++++++++
lib/librte_bbdev/rte_bbdev.h | 715 ++++++++++++++++++++
lib/librte_bbdev/rte_bbdev_op.h | 638 ++++++++++++++++++
lib/librte_bbdev/rte_bbdev_pmd.h | 198 ++++++
lib/librte_bbdev/rte_bbdev_version.map | 37 ++
mk/rte.app.mk | 1 +
15 files changed, 3347 insertions(+)
create mode 100644 doc/guides/prog_guide/bbdev.rst
create mode 100644 lib/librte_bbdev/Makefile
create mode 100644 lib/librte_bbdev/rte_bbdev.c
create mode 100644 lib/librte_bbdev/rte_bbdev.h
create mode 100644 lib/librte_bbdev/rte_bbdev_op.h
create mode 100644 lib/librte_bbdev/rte_bbdev_pmd.h
create mode 100644 lib/librte_bbdev/rte_bbdev_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index b51c2d0..ad4166b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -271,6 +271,11 @@ F: lib/librte_cryptodev/
F: test/test/test_cryptodev*
F: examples/l2fwd-crypto/
+BBDEV API - EXPERIMENTAL
+M: Amr Mokhtar <amr.mokhtar@intel.com>
+F: lib/librte_bbdev/
+F: doc/guides/prog_guide/bbdev.rst
+
Security API - EXPERIMENTAL
M: Akhil Goyal <akhil.goyal@nxp.com>
M: Declan Doherty <declan.doherty@intel.com>
diff --git a/config/common_base b/config/common_base
index e74febe..2f5e4ab 100644
--- a/config/common_base
+++ b/config/common_base
@@ -593,6 +593,12 @@ CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV_DEBUG=n
CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y
CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF_DEBUG=n
+# Compile generic wireless base band device library
+# EXPERIMENTAL: API may change without prior notice
+#
+CONFIG_RTE_LIBRTE_BBDEV=y
+CONFIG_RTE_BBDEV_MAX_DEVS=128
+
#
# Compile librte_ring
#
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 3492702..8d7ff89 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -50,6 +50,7 @@ The public API headers are grouped by topics:
[bitrate] (@ref rte_bitrate.h),
[latency] (@ref rte_latencystats.h),
[devargs] (@ref rte_devargs.h),
+ [bbdev] (@ref rte_bbdev.h),
[PCI] (@ref rte_pci.h)
- **device specific**:
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index b2cbe94..241cae3 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -39,6 +39,7 @@ INPUT = doc/api/doxy-api-index.md \
lib/librte_eal/common/include \
lib/librte_eal/common/include/generic \
lib/librte_acl \
+ lib/librte_bbdev \
lib/librte_bitratestats \
lib/librte_cfgfile \
lib/librte_cmdline \
diff --git a/doc/guides/prog_guide/bbdev.rst b/doc/guides/prog_guide/bbdev.rst
new file mode 100644
index 0000000..d40c7f4
--- /dev/null
+++ b/doc/guides/prog_guide/bbdev.rst
@@ -0,0 +1,585 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2017 Intel Corporation
+
+Wireless Baseband Device Library
+================================
+
+The Wireless Baseband library provides a common programming framework that
+abstracts HW accelerators based on FPGA and/or Fixed Function Accelerators that
+assist with 3GPP Physical Layer processing. Furthermore, it decouples the
+application from the compute-intensive wireless functions by abstracting their
+optimized libraries to appear as virtual bbdev devices.
+
+The functional scope of the bbdev library comprises those functions related to
+3GPP Layer 1 signal processing (channel coding, modulation, ...).
+
+The framework currently only supports the Turbo Code FEC function.
+
+
+Design Principles
+-----------------
+
+The Wireless Baseband library follows the same ideology as DPDK's Ethernet
+Device and Crypto Device frameworks. Wireless Baseband provides a generic
+acceleration abstraction framework which supports both physical (hardware) and
+virtual (software) wireless acceleration functions.
+
+Device Management
+-----------------
+
+Device Creation
+~~~~~~~~~~~~~~~
+
+Physical bbdev devices are discovered during the PCI probe/enumeration phase
+of EAL, which is executed at DPDK initialization, based on their PCI device
+identifier. Each device has a unique PCI BDF (bus, device, function) address.
+
+Virtual devices can be created by two mechanisms, either using the EAL command
+line options or from within the application using an EAL API directly.
+
+From the command line using the --vdev EAL option
+
+.. code-block:: console
+
+ --vdev 'turbo_sw,max_nb_queues=8,socket_id=0'
+
+Or by using the ``rte_vdev_init`` API within the application code.
+
+.. code-block:: c
+
+ rte_vdev_init("turbo_sw", "max_nb_queues=2,socket_id=0")
+
+All virtual bbdev devices support the following initialization parameters:
+
+- ``max_nb_queues`` - maximum number of queues supported by the device.
+
+- ``socket_id`` - the socket on which to allocate the device resources.
+
+
+Device Identification
+~~~~~~~~~~~~~~~~~~~~~
+
+Each device, whether virtual or physical, is uniquely designated by two
+identifiers:
+
+- A unique device index used to designate the bbdev device in all functions
+ exported by the bbdev API.
+
+- A device name used to designate the bbdev device in console messages, for
+ administration or debugging purposes. For ease of use, the device name
+ includes the device index.
+
+
+Device Configuration
+~~~~~~~~~~~~~~~~~~~~
+
+From the application point of view, each instance of a bbdev device consists of
+one or more queues identified by queue IDs. While different devices may have
+different capabilities (e.g. support different operation types), all queues on
+a device support identical configuration possibilities. A queue is configured
+for only one type of operation, at initialization time.
+When an operation is enqueued to a specific queue ID, the result is dequeued
+from the same queue ID.
+
+Configuration of a device has two different levels: configuration that applies
+to the whole device, and configuration that applies to a single queue.
+
+Device configuration is applied with
+``rte_bbdev_setup_queues(dev_id,num_queues,socket_id)``
+and queue configuration is applied with
+``rte_bbdev_queue_configure(dev_id,queue_id,conf)``. Note that, although all
+queues on a device support the same capabilities, they can be configured differently
+and will then behave differently.
+Devices supporting interrupts can enable them by using
+``rte_bbdev_intr_enable(dev_id)``.
+
+The configuration of each bbdev device includes the following operations:
+
+- Allocation of resources, including hardware resources for a physical device.
+- Resetting the device into a well-known default state.
+- Initialization of statistics counters.
+
+The ``rte_bbdev_setup_queues`` API is used to set up queues for a bbdev device.
+
+.. code-block:: c
+
+ int rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues,
+ int socket_id);
+
+- ``num_queues`` argument identifies the total number of queues to set up for
+ this device.
+
+- ``socket_id`` specifies which socket will be used to allocate the memory.
+
+
+The ``rte_bbdev_intr_enable`` API is used to enable interrupts for a bbdev
+device, if supported by the driver. It should be called before starting the
+device.
+
+.. code-block:: c
+
+ int rte_bbdev_intr_enable(uint16_t dev_id);
+
+
+Queues Configuration
+~~~~~~~~~~~~~~~~~~~~
+
+Each queue of a bbdev device is individually configured through the
+``rte_bbdev_queue_configure()`` API.
+The resources of each queue may be allocated on a specified socket.
+
+.. code-block:: c
+
+ struct rte_bbdev_queue_conf {
+ int socket;
+ uint32_t queue_size;
+ uint8_t priority;
+ bool deferred_start;
+ enum rte_bbdev_op_type op_type;
+ };
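Because a queue is bound to a single operation type at configuration time, the application must route each operation to a queue configured for that type. The following standalone sketch illustrates that rule using a simplified mock mirror of the structure above (not the real ``librte_bbdev`` types):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mock mirror of rte_bbdev_queue_conf / rte_bbdev_op_type shown above;
 * standalone stand-ins, not the real librte_bbdev definitions. */
enum mock_op_type { MOCK_OP_TURBO_DEC, MOCK_OP_TURBO_ENC };

struct mock_queue_conf {
	int socket;
	uint32_t queue_size;
	uint8_t priority;
	bool deferred_start;
	enum mock_op_type op_type;	/* one op type per queue */
};

/* An example queue configured for encode operations only. */
static const struct mock_queue_conf enc_qconf = {
	.socket = 0,
	.queue_size = 128,
	.priority = 0,
	.deferred_start = false,
	.op_type = MOCK_OP_TURBO_ENC,
};

/* A queue only accepts operations of the type it was configured with. */
static bool queue_accepts(const struct mock_queue_conf *conf,
		enum mock_op_type op)
{
	return conf->op_type == op;
}
```

A decode operation offered to ``enc_qconf`` would have to go to a different queue configured with the decode op type.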
+
+Device & Queues Management
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After initialization, devices are in a stopped state, so they must be started
+by the application. When an application has finished using a device, it can
+close the device. Once closed, the device cannot be restarted.
+
+.. code-block:: c
+
+ int rte_bbdev_start(uint16_t dev_id)
+ int rte_bbdev_stop(uint16_t dev_id)
+ int rte_bbdev_close(uint16_t dev_id)
+ int rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id)
+ int rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id)
+
+
+By default, all queues are started when the device is started, but they can be
+stopped individually.
+
+.. code-block:: c
+
+ int rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id)
+ int rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id)
+
+
+Logical Cores, Memory and Queues Relationships
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The bbdev library, like the Poll Mode Driver library, supports NUMA, allowing
+a processor's logical cores and interfaces to utilize their local memory.
+Therefore, for baseband operations, the mbufs being operated on should be
+allocated from memory pools created in local memory. The buffers should, if
+possible, remain on the local processor to obtain the best performance results,
+and buffer descriptors should be populated with mbufs allocated from a mempool
+allocated from local memory.
+
+The run-to-completion model also performs better, especially in the case of
+virtual bbdev devices, if the baseband operation and data buffers are in local
+memory instead of a remote processor's memory. This is also true for the
+pipeline model, provided all logical cores used are located on the same
+processor.
+
+Multiple logical cores should never share the same queue for enqueuing
+operations or dequeuing operations on the same bbdev device since this would
+require global locks and hinder performance. It is however possible to use a
+different logical core to dequeue an operation on a queue pair from the logical
+core on which it was enqueued. This means that the baseband burst
+enqueue/dequeue APIs are a logical place to transition from one logical core to
+another in a packet processing pipeline.
+
+
+Device Operation Capabilities
+-----------------------------
+
+Capabilities (in terms of operations supported, max number of queues, etc.)
+identify what a bbdev device is capable of performing, which may differ from
+one device to another. For the full scope of the bbdev capabilities, see the
+definition of the structure in the *DPDK API Reference*.
+
+.. code-block:: c
+
+ struct rte_bbdev_op_cap;
+
+A device reports its capabilities when registering itself in the bbdev framework.
+With the aid of this capabilities mechanism, an application can query devices to
+discover which operations within the 3GPP physical layer they are capable of
+performing. Below is an example of the capabilities reported by a PMD that
+supports Turbo encoding and decoding operations.
+
+.. code-block:: c
+
+ static const struct rte_bbdev_op_cap bbdev_capabilities[] = {
+ {
+ .type = RTE_BBDEV_OP_TURBO_DEC,
+ .cap.turbo_dec = {
+ .capability_flags =
+ RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE |
+ RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN |
+ RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN |
+ RTE_BBDEV_TURBO_CRC_TYPE_24B,
+ .num_buffers_src = RTE_BBDEV_MAX_CODE_BLOCKS,
+ .num_buffers_hard_out =
+ RTE_BBDEV_MAX_CODE_BLOCKS,
+ .num_buffers_soft_out = 0,
+ }
+ },
+ {
+ .type = RTE_BBDEV_OP_TURBO_ENC,
+ .cap.turbo_enc = {
+ .capability_flags =
+ RTE_BBDEV_TURBO_CRC_24B_ATTACH |
+ RTE_BBDEV_TURBO_RATE_MATCH |
+ RTE_BBDEV_TURBO_RV_INDEX_BYPASS,
+ .num_buffers_src = RTE_BBDEV_MAX_CODE_BLOCKS,
+ .num_buffers_dst = RTE_BBDEV_MAX_CODE_BLOCKS,
+ }
+ },
+ RTE_BBDEV_END_OF_CAPABILITIES_LIST()
+ };
+
+Capabilities Discovery
+~~~~~~~~~~~~~~~~~~~~~~
+
+Discovering the features and capabilities of a bbdev device poll mode driver
+is achieved through the ``rte_bbdev_info_get()`` function.
+
+.. code-block:: c
+
+ int rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info)
+
+This allows the user to query a specific bbdev PMD and get all the device
+capabilities. The ``rte_bbdev_info`` structure provides two levels of
+information:
+
+- Device-relevant information, such as the name and the related ``rte_bus``.
+
+- Driver-specific information, as defined by the ``struct rte_bbdev_driver_info``
+ structure; this is where the capabilities reside, along with other specifics
+ such as maximum queue sizes and priority level.
+
+.. code-block:: c
+
+ struct rte_bbdev_info {
+ int socket_id;
+ const char *dev_name;
+ const struct rte_bus *bus;
+ uint16_t num_queues;
+ bool started;
+ struct rte_bbdev_driver_info drv;
+ };
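Since the capability array is terminated by ``RTE_BBDEV_END_OF_CAPABILITIES_LIST()``, an application typically walks the list to check whether a device supports a required operation type with the needed flags. The standalone sketch below mirrors that walk with simplified mock types (the real definitions live in ``rte_bbdev_op.h``):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for rte_bbdev_op_type / rte_bbdev_op_cap. */
enum mock_op_type { MOCK_OP_NONE = 0, MOCK_OP_TURBO_DEC, MOCK_OP_TURBO_ENC };

struct mock_op_cap {
	enum mock_op_type type;
	uint32_t capability_flags;
};

#define MOCK_TURBO_RATE_MATCH (1u << 0)
#define MOCK_TURBO_CRC_24B    (1u << 1)

/* Mirrors the bbdev_capabilities[] example above; the zeroed entry plays
 * the role of RTE_BBDEV_END_OF_CAPABILITIES_LIST(). */
static const struct mock_op_cap mock_caps[] = {
	{ MOCK_OP_TURBO_ENC, MOCK_TURBO_RATE_MATCH },
	{ MOCK_OP_TURBO_DEC, MOCK_TURBO_CRC_24B },
	{ MOCK_OP_NONE, 0 }
};

/* True if the list contains the op type with at least req_flags set. */
static bool device_supports(const struct mock_op_cap *caps,
		enum mock_op_type type, uint32_t req_flags)
{
	for (; caps->type != MOCK_OP_NONE; ++caps)
		if (caps->type == type &&
				(caps->capability_flags & req_flags) == req_flags)
			return true;
	return false;
}
```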
+
+Operation Processing
+--------------------
+
+Scheduling of baseband operations on DPDK's application data path is
+performed using a burst-oriented asynchronous API set. A queue on a bbdev
+device accepts a burst of baseband operations using the enqueue burst API. On
+physical bbdev devices the enqueue burst API will place the operations to be
+processed on the device's hardware input queue; for virtual devices the
+processing of the baseband operations is usually completed during the enqueue
+call to the bbdev device. The dequeue burst API will retrieve any processed
+operations available from the queue on the bbdev device; for physical devices
+this is usually directly from the device's processed queue, and for virtual
+devices from a ``rte_ring`` where processed operations are placed after being
+processed on the enqueue call.
+
+
+Enqueue / Dequeue Burst APIs
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The burst enqueue API uses a bbdev device identifier and a queue
+identifier to specify the bbdev device queue to schedule the processing on.
+The ``num_ops`` parameter is the number of operations to process which are
+supplied in the ``ops`` array of ``rte_bbdev_*_op`` structures.
+The enqueue function returns the number of operations it actually enqueued for
+processing, a return value equal to ``num_ops`` means that all packets have been
+enqueued.
+
+.. code-block:: c
+
+ uint16_t rte_bbdev_enqueue_enc_ops(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+
+ uint16_t rte_bbdev_enqueue_dec_ops(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+
+The dequeue API uses the same format as the enqueue API, but the ``num_ops``
+and ``ops`` parameters are now used to specify the maximum number of processed
+operations the user wishes to retrieve and the location in which to store them.
+The API call returns the actual number of processed operations returned; this
+can never be larger than ``num_ops``.
+
+.. code-block:: c
+
+ uint16_t rte_bbdev_dequeue_enc_ops(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+
+ uint16_t rte_bbdev_dequeue_dec_ops(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+
+Operation Representation
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+An encode bbdev operation is represented by ``rte_bbdev_enc_op`` structure,
+and by ``rte_bbdev_dec_op`` for decode. These structures act as metadata
+containers for all necessary information required for the bbdev operation to be
+processed on a particular bbdev device poll mode driver.
+
+.. code-block:: c
+
+ struct rte_bbdev_enc_op {
+ int status;
+ struct rte_mempool *mempool;
+ void *opaque_data;
+ struct rte_bbdev_op_turbo_enc turbo_enc;
+ };
+
+ struct rte_bbdev_dec_op {
+ int status;
+ struct rte_mempool *mempool;
+ void *opaque_data;
+ struct rte_bbdev_op_turbo_dec turbo_dec;
+ };
+
+The operation structure by itself defines the operation type. It includes the
+operation status and a reference to the operation-specific data, which can vary
+in size and content depending on the operation being provisioned. It also
+contains the source mempool for the operation, if it is allocated from a
+mempool.
+
+If bbdev operations are allocated from a bbdev operation mempool (see the next
+section), it is also possible to allocate private memory with the operation for
+application purposes.
+
+Application software is responsible for specifying all the operation specific
+fields in the ``rte_bbdev_*_op`` structure which are then used by the bbdev PMD
+to process the requested operation.
+
+
+Operation Management and Allocation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The bbdev library provides an API set for managing bbdev operations which
+utilizes the Mempool Library to allocate operation buffers. This ensures that
+bbdev operations are interleaved optimally across the channels and ranks for
+optimal processing.
+
+.. code-block:: c
+
+ struct rte_mempool *
+ rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
+ unsigned int num_elements, unsigned int cache_size,
+ int socket_id)
+
+``rte_bbdev_*_op_alloc_bulk()`` is used to allocate bbdev operations of a
+specific type from a given bbdev operation mempool.
+
+.. code-block:: c
+
+ int rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,
+ struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+
+ int rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,
+ struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+
+``rte_bbdev_*_op_free_bulk()`` is called by the application to return an
+operation to its allocating pool.
+
+.. code-block:: c
+
+ void rte_bbdev_dec_op_free_bulk(struct rte_bbdev_dec_op **ops,
+ unsigned int num_ops)
+ void rte_bbdev_enc_op_free_bulk(struct rte_bbdev_enc_op **ops,
+ unsigned int num_ops)
+
+BBDEV Operations
+~~~~~~~~~~~~~~~~
+
+The bbdev operation structure contains all the mutable data relating to
+performing Turbo code processing on a referenced mbuf data buffer. It is used
+for either encode or decode operations.
+
+Turbo Encode operation accepts one input and one output.
+
+Turbo Decode operation accepts one input and two outputs, called *hard-decision*
+and *soft-decision* outputs. *Soft-decision* output is optional.
+
+It is expected that the application provides input and output ``mbuf`` pointers
+allocated and ready to use. The baseband framework supports turbo coding on
+Code Blocks (CB) and Transport Blocks (TB).
+
+For the output buffer(s), the application needs only to provide an allocated and
+free mbuf (containing only one mbuf segment), so that bbdev can write the
+operation outcome.
+
+**Turbo Encode Op structure**
+
+.. code-block:: c
+
+ struct rte_bbdev_op_turbo_enc {
+ struct rte_bbdev_op_data input;
+ struct rte_bbdev_op_data output;
+
+ uint32_t op_flags;
+ uint8_t rv_index;
+ uint8_t code_block_mode;
+ union {
+ struct rte_bbdev_op_enc_cb_params cb_params;
+ struct rte_bbdev_op_enc_tb_params tb_params;
+ };
+ };
+
+
+**Turbo Decode Op structure**
+
+.. code-block:: c
+
+ struct rte_bbdev_op_turbo_dec {
+ struct rte_bbdev_op_data input;
+ struct rte_bbdev_op_data hard_output;
+ struct rte_bbdev_op_data soft_output;
+
+ uint32_t op_flags;
+ uint8_t rv_index;
+ uint8_t iter_min:4;
+ uint8_t iter_max:4;
+ uint8_t iter_count;
+ uint8_t ext_scale;
+ uint8_t num_maps;
+ uint8_t code_block_mode;
+ union {
+ struct rte_bbdev_op_dec_cb_params cb_params;
+ struct rte_bbdev_op_dec_tb_params tb_params;
+ };
+ };
+
+Input and output data buffers are identified by ``rte_bbdev_op_data`` structure.
+This structure has three elements:
+
+- ``data`` - This is the mbuf reference
+
+- ``offset`` - The starting point for the Turbo input/output, in bytes, from the
+ start of the data in the data buffer. It must be smaller than ``data_len`` of
+ the mbuf's first segment
+
+- ``length`` - The length, in bytes, of the buffer on which the Turbo operation
+ will or has been computed. For the input, the length is set by the application.
+ For the output(s), the length is computed by the bbdev PMD.
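The offset and length constraints above can be validated before enqueueing. A standalone sketch with minimal mock stand-ins for ``rte_mbuf`` and ``rte_bbdev_op_data`` (not the real DPDK types):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-ins: data_len is the first segment's byte count,
 * pkt_len the total bytes across all segments. */
struct mock_mbuf {
	uint16_t data_len;
	uint32_t pkt_len;
};

struct mock_op_data {
	struct mock_mbuf *data;	/* the mbuf reference */
	uint32_t offset;	/* start of Turbo input/output, in bytes */
	uint32_t length;	/* span the operation works on, in bytes */
};

/* Valid when the offset lands inside the first segment and the
 * requested span fits within the packet. */
static bool op_data_valid(const struct mock_op_data *d)
{
	return d->offset < d->data->data_len &&
		d->offset + d->length <= d->data->pkt_len;
}

/* Example: a 512-byte packet whose first segment holds 128 bytes. */
static struct mock_mbuf seg = { .data_len = 128, .pkt_len = 512 };
static struct mock_op_data in_ok = { &seg, 64, 448 };
static struct mock_op_data in_bad = { &seg, 200, 10 };	/* offset past seg */
```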
+
+Sample code
+-----------
+
+The baseband device sample application gives an introduction to how to use the
+bbdev framework, with sample code performing a loop-back operation with a
+baseband processor capable of transceiving data packets.
+
+The following sample C-like pseudo-code shows the basic steps to encode several
+buffers using the (**turbo_sw**) bbdev PMD.
+
+.. code-block:: c
+
+ /* EAL Init */
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+
+ /* Get number of available bbdev devices */
+ nb_bbdevs = rte_bbdev_count();
+ if (nb_bbdevs == 0)
+ rte_exit(EXIT_FAILURE, "No bbdevs detected!\n");
+
+ /* Create bbdev op pools */
+ bbdev_op_pool[RTE_BBDEV_OP_TURBO_ENC] =
+ rte_bbdev_op_pool_create("bbdev_op_pool_enc",
+ RTE_BBDEV_OP_TURBO_ENC, NB_MBUF, 128, rte_socket_id());
+
+ /* Get information for this device */
+ rte_bbdev_info_get(dev_id, &info);
+
+ /* Setup BBDEV device queues */
+ ret = rte_bbdev_setup_queues(dev_id, qs_nb, info.socket_id);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "ERROR(%d): BBDEV %u not configured properly\n",
+ ret, dev_id);
+
+ /* setup device queues */
+ qconf.socket = info.socket_id;
+ qconf.queue_size = info.drv.queue_size_lim;
+ qconf.op_type = RTE_BBDEV_OP_TURBO_ENC;
+
+ for (q_id = 0; q_id < qs_nb; q_id++) {
+ /* Configure all queues belonging to this bbdev device */
+ ret = rte_bbdev_queue_configure(dev_id, q_id, &qconf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "ERROR(%d): BBDEV %u queue %u not configured properly\n",
+ ret, dev_id, q_id);
+ }
+
+ /* Start bbdev device */
+ ret = rte_bbdev_start(dev_id);
+
+ /* Create the mbuf mempool for pkts */
+ mbuf_pool = rte_pktmbuf_pool_create("bbdev_mbuf_pool",
+ NB_MBUF, MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+ if (mbuf_pool == NULL)
+ rte_exit(EXIT_FAILURE,
+ "Unable to create 'bbdev_mbuf_pool' pool\n");
+
+ while (!global_exit_flag) {
+
+ /* Allocate burst of op structures in preparation for enqueue */
+ if (rte_bbdev_enc_op_alloc_bulk(bbdev_op_pool[RTE_BBDEV_OP_TURBO_ENC],
+ ops_burst, op_num) != 0)
+ continue;
+
+ /* Allocate input mbuf pkts */
+ ret = rte_pktmbuf_alloc_bulk(mbuf_pool, input_pkts_burst, MAX_PKT_BURST);
+ if (ret < 0)
+ continue;
+
+ /* Allocate output mbuf pkts */
+ ret = rte_pktmbuf_alloc_bulk(mbuf_pool, output_pkts_burst, MAX_PKT_BURST);
+ if (ret < 0)
+ continue;
+
+ for (j = 0; j < op_num; j++) {
+ /* Append the size of the ethernet header */
+ rte_pktmbuf_append(input_pkts_burst[j],
+ sizeof(struct ether_hdr));
+
+ /* set op */
+
+ ops_burst[j]->turbo_enc.input.offset =
+ sizeof(struct ether_hdr);
+
+ ops_burst[j]->turbo_enc.input.length =
+ rte_pktmbuf_pkt_len(input_pkts_burst[j]);
+
+ ops_burst[j]->turbo_enc.input.data =
+ input_pkts_burst[j];
+
+ ops_burst[j]->turbo_enc.output.offset =
+ sizeof(struct ether_hdr);
+
+ ops_burst[j]->turbo_enc.output.data =
+ output_pkts_burst[j];
+ }
+
+ /* Enqueue packets on BBDEV device */
+ op_num = rte_bbdev_enqueue_enc_ops(dev_id,
+ q_id, ops_burst,
+ MAX_PKT_BURST);
+
+ /* Dequeue packets from BBDEV device */
+ op_num = rte_bbdev_dequeue_enc_ops(dev_id,
+ q_id, ops_burst,
+ MAX_PKT_BURST);
+ }
+
+
+BBDEV Device API
+~~~~~~~~~~~~~~~~
+
+The bbdev Library API is described in the *DPDK API Reference* document.
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index c4beb34..84027fd 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -66,6 +66,7 @@ Programmer's Guide
kernel_nic_interface
thread_safety_dpdk_functions
eventdev
+ bbdev
event_ethernet_rx_adapter
qos_framework
power_man
diff --git a/doc/guides/rel_notes/release_18_02.rst b/doc/guides/rel_notes/release_18_02.rst
index 24b67bb..fc23318 100644
--- a/doc/guides/rel_notes/release_18_02.rst
+++ b/doc/guides/rel_notes/release_18_02.rst
@@ -41,6 +41,17 @@ New Features
Also, make sure to start the actual text at the margin.
=========================================================
+* **Added Wireless Base Band Device (bbdev) abstraction.**
+
+ The Wireless Baseband Device library is an acceleration abstraction
+ framework for 3gpp Layer 1 processing functions that provides a common
+ programming interface for seamless operation on integrated or discrete
+ hardware accelerators or using optimized software libraries for signal
+ processing.
+ The current release only supports 3GPP CRC, Turbo Coding and Rate
+ Matching operations, as specified in 3GPP TS 36.212.
+
+ See the :doc:`../prog_guide/bbdev` programmer's guide for more details.
API Changes
-----------
@@ -117,6 +128,7 @@ The libraries prepended with a plus sign were incremented in this version.
.. code-block:: diff
librte_acl.so.2
+ + librte_bbdev.so.1
librte_bitratestats.so.2
librte_bus_dpaa.so.1
librte_bus_fslmc.so.1
diff --git a/lib/Makefile b/lib/Makefile
index 4202702..a1a8aa9 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -30,6 +30,8 @@ DEPDIRS-librte_security += librte_ether
DEPDIRS-librte_security += librte_cryptodev
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash
+DIRS-$(CONFIG_RTE_LIBRTE_BBDEV) += librte_bbdev
+DEPDIRS-librte_bbdev := librte_eal librte_mempool librte_mbuf
DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
DEPDIRS-librte_vhost := librte_eal librte_mempool librte_mbuf librte_ether
DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
diff --git a/lib/librte_bbdev/Makefile b/lib/librte_bbdev/Makefile
new file mode 100644
index 0000000..f9bf960
--- /dev/null
+++ b/lib/librte_bbdev/Makefile
@@ -0,0 +1,28 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_bbdev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf
+
+# library source files
+SRCS-y += rte_bbdev.c
+
+# export include files
+SYMLINK-y-include += rte_bbdev_op.h
+SYMLINK-y-include += rte_bbdev.h
+SYMLINK-y-include += rte_bbdev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_bbdev_version.map
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_bbdev/rte_bbdev.c b/lib/librte_bbdev/rte_bbdev.c
new file mode 100644
index 0000000..8a053e3
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev.c
@@ -0,0 +1,1117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_lcore.h>
+#include <rte_dev.h>
+#include <rte_spinlock.h>
+#include <rte_tailq.h>
+#include <rte_interrupts.h>
+
+#include "rte_bbdev_op.h"
+#include "rte_bbdev.h"
+#include "rte_bbdev_pmd.h"
+
+#define DEV_NAME "BBDEV"
+
+
+/* Helper macro to check dev_id is valid */
+#define VALID_DEV_OR_RET_ERR(dev, dev_id) do { \
+ if (dev == NULL) { \
+ rte_bbdev_log(ERR, "device %u is invalid", dev_id); \
+ return -ENODEV; \
+ } \
+} while (0)
+
+/* Helper macro to check dev_ops is valid */
+#define VALID_DEV_OPS_OR_RET_ERR(dev, dev_id) do { \
+ if (dev->dev_ops == NULL) { \
+ rte_bbdev_log(ERR, "NULL dev_ops structure in device %u", \
+ dev_id); \
+ return -ENODEV; \
+ } \
+} while (0)
+
+/* Helper macro to check that driver implements required function pointer */
+#define VALID_FUNC_OR_RET_ERR(func, dev_id) do { \
+ if (func == NULL) { \
+ rte_bbdev_log(ERR, "device %u does not support %s", \
+ dev_id, #func); \
+ return -ENOTSUP; \
+ } \
+} while (0)
+
+/* Helper macro to check that queue is valid */
+#define VALID_QUEUE_OR_RET_ERR(queue_id, dev) do { \
+ if (queue_id >= dev->data->num_queues) { \
+ rte_bbdev_log(ERR, "Invalid queue_id %u for device %u", \
+ queue_id, dev->data->dev_id); \
+ return -ERANGE; \
+ } \
+} while (0)
+
+/* List of callback functions registered by an application */
+struct rte_bbdev_callback {
+ TAILQ_ENTRY(rte_bbdev_callback) next; /* Callbacks list */
+ rte_bbdev_cb_fn cb_fn; /* Callback address */
+ void *cb_arg; /* Parameter for callback */
+ void *ret_param; /* Return parameter */
+ enum rte_bbdev_event_type event; /* Interrupt event type */
+ uint32_t active; /* Callback is executing */
+};
+
+/* spinlock for bbdev device callbacks */
+static rte_spinlock_t rte_bbdev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
+/*
+ * Global array of all devices. This is not static because it's used by the
+ * inline enqueue and dequeue functions
+ */
+struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
+
+/* Global array with rte_bbdev_data structures */
+static struct rte_bbdev_data *rte_bbdev_data;
+
+/* Memzone name for global bbdev data pool */
+static const char *MZ_RTE_BBDEV_DATA = "rte_bbdev_data";
+
+/* Number of currently valid devices */
+static uint16_t num_devs;
+
+/* Return pointer to device structure, with validity check */
+static struct rte_bbdev *
+get_dev(uint16_t dev_id)
+{
+ if (rte_bbdev_is_valid(dev_id))
+ return &rte_bbdev_devices[dev_id];
+ return NULL;
+}
+
+/* Allocate global data array */
+static int
+rte_bbdev_data_alloc(void)
+{
+ const unsigned int flags = 0;
+ const struct rte_memzone *mz;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ mz = rte_memzone_reserve(MZ_RTE_BBDEV_DATA,
+ RTE_BBDEV_MAX_DEVS * sizeof(*rte_bbdev_data),
+ rte_socket_id(), flags);
+ } else
+ mz = rte_memzone_lookup(MZ_RTE_BBDEV_DATA);
+ if (mz == NULL) {
+ rte_bbdev_log(CRIT,
+ "Cannot allocate memzone for bbdev port data");
+ return -ENOMEM;
+ }
+
+ rte_bbdev_data = mz->addr;
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ memset(rte_bbdev_data, 0,
+ RTE_BBDEV_MAX_DEVS * sizeof(*rte_bbdev_data));
+ return 0;
+}
+
+/*
+ * Find data allocated for the device or if not found return first unused bbdev
+ * data. If all structures are in use and none is used by the device return
+ * NULL.
+ */
+static struct rte_bbdev_data *
+find_bbdev_data(const char *name)
+{
+ uint16_t data_id;
+
+ for (data_id = 0; data_id < RTE_BBDEV_MAX_DEVS; ++data_id) {
+ if (strlen(rte_bbdev_data[data_id].name) == 0) {
+ memset(&rte_bbdev_data[data_id], 0,
+ sizeof(struct rte_bbdev_data));
+ return &rte_bbdev_data[data_id];
+ } else if (strncmp(rte_bbdev_data[data_id].name, name,
+ RTE_BBDEV_NAME_MAX_LEN) == 0)
+ return &rte_bbdev_data[data_id];
+ }
+
+ return NULL;
+}
+
+/* Find lowest device id with no attached device */
+static uint16_t
+find_free_dev_id(void)
+{
+ uint16_t i;
+ for (i = 0; i < RTE_BBDEV_MAX_DEVS; i++) {
+ if (rte_bbdev_devices[i].state == RTE_BBDEV_UNUSED)
+ return i;
+ }
+ return RTE_BBDEV_MAX_DEVS;
+}
+
+struct rte_bbdev *
+rte_bbdev_allocate(const char *name)
+{
+ int ret;
+ struct rte_bbdev *bbdev;
+ uint16_t dev_id;
+
+ if (name == NULL) {
+ rte_bbdev_log(ERR, "Invalid null device name");
+ return NULL;
+ }
+
+ if (rte_bbdev_get_named_dev(name) != NULL) {
+ rte_bbdev_log(ERR, "Device \"%s\" is already allocated", name);
+ return NULL;
+ }
+
+ dev_id = find_free_dev_id();
+ if (dev_id == RTE_BBDEV_MAX_DEVS) {
+ rte_bbdev_log(ERR, "Reached maximum number of devices");
+ return NULL;
+ }
+
+ bbdev = &rte_bbdev_devices[dev_id];
+
+ if (rte_bbdev_data == NULL) {
+ ret = rte_bbdev_data_alloc();
+ if (ret != 0)
+ return NULL;
+ }
+
+ bbdev->data = find_bbdev_data(name);
+ if (bbdev->data == NULL) {
+ rte_bbdev_log(ERR,
+ "Max BBDevs already allocated in multi-process environment!");
+ return NULL;
+ }
+
+ rte_atomic16_inc(&bbdev->data->process_cnt);
+ bbdev->data->dev_id = dev_id;
+ bbdev->state = RTE_BBDEV_INITIALIZED;
+
+ ret = snprintf(bbdev->data->name, RTE_BBDEV_NAME_MAX_LEN, "%s", name);
+ if ((ret < 0) || (ret >= RTE_BBDEV_NAME_MAX_LEN)) {
+ rte_bbdev_log(ERR, "Copying device name \"%s\" failed", name);
+ return NULL;
+ }
+
+ /* init user callbacks */
+ TAILQ_INIT(&(bbdev->list_cbs));
+
+ num_devs++;
+
+ rte_bbdev_log_debug("Initialised device %s (id = %u). Num devices = %u",
+ name, dev_id, num_devs);
+
+ return bbdev;
+}
+
+int
+rte_bbdev_release(struct rte_bbdev *bbdev)
+{
+ uint16_t dev_id;
+ struct rte_bbdev_callback *cb, *next;
+
+ if (bbdev == NULL) {
+ rte_bbdev_log(ERR, "NULL bbdev");
+ return -ENODEV;
+ }
+ dev_id = bbdev->data->dev_id;
+
+ /* free all callbacks from the device's list */
+ for (cb = TAILQ_FIRST(&bbdev->list_cbs); cb != NULL; cb = next) {
+
+ next = TAILQ_NEXT(cb, next);
+ TAILQ_REMOVE(&(bbdev->list_cbs), cb, next);
+ rte_free(cb);
+ }
+
+ /* clear shared BBDev Data if no process is using the device anymore */
+ if (rte_atomic16_dec_and_test(&bbdev->data->process_cnt))
+ memset(bbdev->data, 0, sizeof(*bbdev->data));
+
+ memset(bbdev, 0, sizeof(*bbdev));
+ num_devs--;
+ bbdev->state = RTE_BBDEV_UNUSED;
+
+ rte_bbdev_log_debug(
+ "Un-initialised device id = %u. Num devices = %u",
+ dev_id, num_devs);
+ return 0;
+}
+
+struct rte_bbdev *
+rte_bbdev_get_named_dev(const char *name)
+{
+ unsigned int i;
+
+ if (name == NULL) {
+ rte_bbdev_log(ERR, "NULL driver name");
+ return NULL;
+ }
+
+ for (i = 0; i < RTE_BBDEV_MAX_DEVS; i++) {
+ struct rte_bbdev *dev = get_dev(i);
+ if (dev && (strncmp(dev->data->name,
+ name, RTE_BBDEV_NAME_MAX_LEN) == 0))
+ return dev;
+ }
+
+ return NULL;
+}
+
+uint16_t
+rte_bbdev_count(void)
+{
+ return num_devs;
+}
+
+bool
+rte_bbdev_is_valid(uint16_t dev_id)
+{
+ if ((dev_id < RTE_BBDEV_MAX_DEVS) &&
+ rte_bbdev_devices[dev_id].state == RTE_BBDEV_INITIALIZED)
+ return true;
+ return false;
+}
+
+uint16_t
+rte_bbdev_find_next(uint16_t dev_id)
+{
+ dev_id++;
+ for (; dev_id < RTE_BBDEV_MAX_DEVS; dev_id++)
+ if (rte_bbdev_is_valid(dev_id))
+ break;
+ return dev_id;
+}
+
+int
+rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues, int socket_id)
+{
+ unsigned int i;
+ int ret;
+ struct rte_bbdev_driver_info dev_info;
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ if (dev->data->started) {
+ rte_bbdev_log(ERR,
+ "Device %u cannot be configured when started",
+ dev_id);
+ return -EBUSY;
+ }
+
+ /* Get device driver information to get max number of queues */
+ VALID_FUNC_OR_RET_ERR(dev->dev_ops->info_get, dev_id);
+ memset(&dev_info, 0, sizeof(dev_info));
+ dev->dev_ops->info_get(dev, &dev_info);
+
+ if ((num_queues == 0) || (num_queues > dev_info.max_num_queues)) {
+ rte_bbdev_log(ERR,
+ "Device %u supports 0 < N <= %u queues, not %u",
+ dev_id, dev_info.max_num_queues, num_queues);
+ return -EINVAL;
+ }
+
+ /* If re-configuration, get driver to free existing internal memory */
+ if (dev->data->queues != NULL) {
+ VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_release, dev_id);
+ for (i = 0; i < dev->data->num_queues; i++) {
+ int ret = dev->dev_ops->queue_release(dev, i);
+ if (ret < 0) {
+ rte_bbdev_log(ERR,
+ "Device %u queue %u release failed",
+ dev_id, i);
+ return ret;
+ }
+ }
+ /* Call optional device close */
+ if (dev->dev_ops->close) {
+ ret = dev->dev_ops->close(dev);
+ if (ret < 0) {
+ rte_bbdev_log(ERR,
+ "Device %u couldn't be closed",
+ dev_id);
+ return ret;
+ }
+ }
+ rte_free(dev->data->queues);
+ }
+
+ /* Allocate queue pointers */
+ dev->data->queues = rte_calloc_socket(DEV_NAME, num_queues,
+ sizeof(dev->data->queues[0]), RTE_CACHE_LINE_SIZE,
+ dev->data->socket_id);
+ if (dev->data->queues == NULL) {
+ rte_bbdev_log(ERR,
+ "calloc of %u queues for device %u on socket %i failed",
+ num_queues, dev_id, dev->data->socket_id);
+ return -ENOMEM;
+ }
+
+ dev->data->num_queues = num_queues;
+
+ /* Call optional device configuration */
+ if (dev->dev_ops->setup_queues) {
+ ret = dev->dev_ops->setup_queues(dev, num_queues, socket_id);
+ if (ret < 0) {
+ rte_bbdev_log(ERR,
+ "Device %u memory configuration failed",
+ dev_id);
+ goto error;
+ }
+ }
+
+ rte_bbdev_log_debug("Device %u set up with %u queues", dev_id,
+ num_queues);
+ return 0;
+
+error:
+ dev->data->num_queues = 0;
+ rte_free(dev->data->queues);
+ dev->data->queues = NULL;
+ return ret;
+}
+
+int
+rte_bbdev_intr_enable(uint16_t dev_id)
+{
+ int ret;
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ if (dev->data->started) {
+ rte_bbdev_log(ERR,
+ "Device %u cannot be configured when started",
+ dev_id);
+ return -EBUSY;
+ }
+
+ if (dev->dev_ops->intr_enable) {
+ ret = dev->dev_ops->intr_enable(dev);
+ if (ret < 0) {
+ rte_bbdev_log(ERR,
+ "Device %u interrupts configuration failed",
+ dev_id);
+ return ret;
+ }
+ rte_bbdev_log_debug("Enabled interrupts for dev %u", dev_id);
+ return 0;
+ }
+
+ rte_bbdev_log(ERR, "Device %u doesn't support interrupts", dev_id);
+ return -ENOTSUP;
+}
+
+int
+rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
+ const struct rte_bbdev_queue_conf *conf)
+{
+ int ret = 0;
+ struct rte_bbdev_driver_info dev_info;
+ struct rte_bbdev *dev = get_dev(dev_id);
+ const struct rte_bbdev_op_cap *p;
+ struct rte_bbdev_queue_conf *stored_conf;
+ const char *op_type_str;
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+ if (dev->data->queues[queue_id].started || dev->data->started) {
+ rte_bbdev_log(ERR,
+ "Queue %u of device %u cannot be configured when started",
+ queue_id, dev_id);
+ return -EBUSY;
+ }
+
+ VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_release, dev_id);
+ VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_setup, dev_id);
+
+ /* Get device driver information to verify config is valid */
+ VALID_FUNC_OR_RET_ERR(dev->dev_ops->info_get, dev_id);
+ memset(&dev_info, 0, sizeof(dev_info));
+ dev->dev_ops->info_get(dev, &dev_info);
+
+ /* Check configuration is valid */
+ if (conf != NULL) {
+ if ((conf->op_type == RTE_BBDEV_OP_NONE) &&
+ (dev_info.capabilities[0].type ==
+ RTE_BBDEV_OP_NONE)) {
+ ret = 1;
+ } else {
+ for (p = dev_info.capabilities;
+ p->type != RTE_BBDEV_OP_NONE; p++) {
+ if (conf->op_type == p->type) {
+ ret = 1;
+ break;
+ }
+ }
+ }
+ if (ret == 0) {
+ rte_bbdev_log(ERR, "Invalid operation type");
+ return -EINVAL;
+ }
+ if (conf->queue_size > dev_info.queue_size_lim) {
+ rte_bbdev_log(ERR,
+ "Size (%u) of queue %u of device %u must be: <= %u",
+ conf->queue_size, queue_id, dev_id,
+ dev_info.queue_size_lim);
+ return -EINVAL;
+ }
+ if (!rte_is_power_of_2(conf->queue_size)) {
+ rte_bbdev_log(ERR,
+ "Size (%u) of queue %u of device %u must be a power of 2",
+ conf->queue_size, queue_id, dev_id);
+ return -EINVAL;
+ }
+ if (conf->priority > dev_info.max_queue_priority) {
+ rte_bbdev_log(ERR,
+ "Priority (%u) of queue %u of bdev %u must be <= %u",
+ conf->priority, queue_id, dev_id,
+ dev_info.max_queue_priority);
+ return -EINVAL;
+ }
+ }
+
+ /* Release existing queue (in case of queue reconfiguration) */
+ if (dev->data->queues[queue_id].queue_private != NULL) {
+ ret = dev->dev_ops->queue_release(dev, queue_id);
+ if (ret < 0) {
+ rte_bbdev_log(ERR, "Device %u queue %u release failed",
+ dev_id, queue_id);
+ return ret;
+ }
+ }
+
+ /* Get driver to setup the queue */
+ ret = dev->dev_ops->queue_setup(dev, queue_id, (conf != NULL) ?
+ conf : &dev_info.default_queue_conf);
+ if (ret < 0) {
+ rte_bbdev_log(ERR,
+ "Device %u queue %u setup failed", dev_id,
+ queue_id);
+ return ret;
+ }
+
+ /* Store configuration */
+ stored_conf = &dev->data->queues[queue_id].conf;
+ memcpy(stored_conf,
+ (conf != NULL) ? conf : &dev_info.default_queue_conf,
+ sizeof(*stored_conf));
+
+ op_type_str = rte_bbdev_op_type_str(stored_conf->op_type);
+ if (op_type_str == NULL)
+ return -EINVAL;
+
+ rte_bbdev_log_debug("Configured dev%uq%u (size=%u, type=%s, prio=%u)",
+ dev_id, queue_id, stored_conf->queue_size, op_type_str,
+ stored_conf->priority);
+
+ return 0;
+}
+
+int
+rte_bbdev_start(uint16_t dev_id)
+{
+ int i;
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ if (dev->data->started) {
+ rte_bbdev_log_debug("Device %u is already started", dev_id);
+ return 0;
+ }
+
+ if (dev->dev_ops->start) {
+ int ret = dev->dev_ops->start(dev);
+ if (ret < 0) {
+ rte_bbdev_log(ERR, "Device %u start failed", dev_id);
+ return ret;
+ }
+ }
+
+ /* Store new state */
+ for (i = 0; i < dev->data->num_queues; i++)
+ if (!dev->data->queues[i].conf.deferred_start)
+ dev->data->queues[i].started = true;
+ dev->data->started = true;
+
+ rte_bbdev_log_debug("Started device %u", dev_id);
+ return 0;
+}
+
+int
+rte_bbdev_stop(uint16_t dev_id)
+{
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ if (!dev->data->started) {
+ rte_bbdev_log_debug("Device %u is already stopped", dev_id);
+ return 0;
+ }
+
+ if (dev->dev_ops->stop)
+ dev->dev_ops->stop(dev);
+ dev->data->started = false;
+
+ rte_bbdev_log_debug("Stopped device %u", dev_id);
+ return 0;
+}
+
+int
+rte_bbdev_close(uint16_t dev_id)
+{
+ int ret;
+ uint16_t i;
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ if (dev->data->started) {
+ ret = rte_bbdev_stop(dev_id);
+ if (ret < 0) {
+ rte_bbdev_log(ERR, "Device %u stop failed", dev_id);
+ return ret;
+ }
+ }
+
+ /* Free memory used by queues */
+ for (i = 0; i < dev->data->num_queues; i++) {
+ ret = dev->dev_ops->queue_release(dev, i);
+ if (ret < 0) {
+ rte_bbdev_log(ERR, "Device %u queue %u release failed",
+ dev_id, i);
+ return ret;
+ }
+ }
+ rte_free(dev->data->queues);
+
+ if (dev->dev_ops->close) {
+ ret = dev->dev_ops->close(dev);
+ if (ret < 0) {
+ rte_bbdev_log(ERR, "Device %u close failed", dev_id);
+ return ret;
+ }
+ }
+
+ /* Clear configuration */
+ dev->data->queues = NULL;
+ dev->data->num_queues = 0;
+
+ rte_bbdev_log_debug("Closed device %u", dev_id);
+ return 0;
+}
+
+int
+rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id)
+{
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+ if (dev->data->queues[queue_id].started) {
+ rte_bbdev_log_debug("Queue %u of device %u already started",
+ queue_id, dev_id);
+ return 0;
+ }
+
+ if (dev->dev_ops->queue_start) {
+ int ret = dev->dev_ops->queue_start(dev, queue_id);
+ if (ret < 0) {
+ rte_bbdev_log(ERR, "Device %u queue %u start failed",
+ dev_id, queue_id);
+ return ret;
+ }
+ }
+ dev->data->queues[queue_id].started = true;
+
+ rte_bbdev_log_debug("Started queue %u of device %u", queue_id, dev_id);
+ return 0;
+}
+
+int
+rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id)
+{
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+ if (!dev->data->queues[queue_id].started) {
+ rte_bbdev_log_debug("Queue %u of device %u already stopped",
+ queue_id, dev_id);
+ return 0;
+ }
+
+ if (dev->dev_ops->queue_stop) {
+ int ret = dev->dev_ops->queue_stop(dev, queue_id);
+ if (ret < 0) {
+ rte_bbdev_log(ERR, "Device %u queue %u stop failed",
+ dev_id, queue_id);
+ return ret;
+ }
+ }
+ dev->data->queues[queue_id].started = false;
+
+ rte_bbdev_log_debug("Stopped queue %u of device %u", queue_id, dev_id);
+ return 0;
+}
+
+/* Get device statistics */
+static void
+get_stats_from_queues(struct rte_bbdev *dev, struct rte_bbdev_stats *stats)
+{
+ unsigned int q_id;
+ for (q_id = 0; q_id < dev->data->num_queues; q_id++) {
+ struct rte_bbdev_stats *q_stats =
+ &dev->data->queues[q_id].queue_stats;
+
+ stats->enqueued_count += q_stats->enqueued_count;
+ stats->dequeued_count += q_stats->dequeued_count;
+ stats->enqueue_err_count += q_stats->enqueue_err_count;
+ stats->dequeue_err_count += q_stats->dequeue_err_count;
+ }
+ rte_bbdev_log_debug("Got stats on %u", dev->data->dev_id);
+}
+
+static void
+reset_stats_in_queues(struct rte_bbdev *dev)
+{
+ unsigned int q_id;
+ for (q_id = 0; q_id < dev->data->num_queues; q_id++) {
+ struct rte_bbdev_stats *q_stats =
+ &dev->data->queues[q_id].queue_stats;
+
+ memset(q_stats, 0, sizeof(*q_stats));
+ }
+ rte_bbdev_log_debug("Reset stats on %u", dev->data->dev_id);
+}
+
+int
+rte_bbdev_stats_get(uint16_t dev_id, struct rte_bbdev_stats *stats)
+{
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ if (stats == NULL) {
+ rte_bbdev_log(ERR, "NULL stats structure");
+ return -EINVAL;
+ }
+
+ memset(stats, 0, sizeof(*stats));
+ if (dev->dev_ops->stats_get != NULL)
+ dev->dev_ops->stats_get(dev, stats);
+ else
+ get_stats_from_queues(dev, stats);
+
+ rte_bbdev_log_debug("Retrieved stats of device %u", dev_id);
+ return 0;
+}
+
+int
+rte_bbdev_stats_reset(uint16_t dev_id)
+{
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+
+ if (dev->dev_ops->stats_reset != NULL)
+ dev->dev_ops->stats_reset(dev);
+ else
+ reset_stats_in_queues(dev);
+
+ rte_bbdev_log_debug("Reset stats of device %u", dev_id);
+ return 0;
+}
+
+int
+rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info)
+{
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_FUNC_OR_RET_ERR(dev->dev_ops->info_get, dev_id);
+
+ if (dev_info == NULL) {
+ rte_bbdev_log(ERR, "NULL dev info structure");
+ return -EINVAL;
+ }
+
+ /* Copy data maintained by device interface layer */
+ memset(dev_info, 0, sizeof(*dev_info));
+ dev_info->dev_name = dev->data->name;
+ dev_info->num_queues = dev->data->num_queues;
+ dev_info->bus = rte_bus_find_by_device(dev->device);
+ dev_info->socket_id = dev->data->socket_id;
+ dev_info->started = dev->data->started;
+
+ /* Copy data maintained by device driver layer */
+ dev->dev_ops->info_get(dev, &dev_info->drv);
+
+ rte_bbdev_log_debug("Retrieved info of device %u", dev_id);
+ return 0;
+}
+
+int
+rte_bbdev_queue_info_get(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_queue_info *queue_info)
+{
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+ if (queue_info == NULL) {
+ rte_bbdev_log(ERR, "NULL queue info structure");
+ return -EINVAL;
+ }
+
+ /* Copy data to output */
+ memset(queue_info, 0, sizeof(*queue_info));
+ queue_info->conf = dev->data->queues[queue_id].conf;
+ queue_info->started = dev->data->queues[queue_id].started;
+
+ rte_bbdev_log_debug("Retrieved info of queue %u of device %u",
+ queue_id, dev_id);
+ return 0;
+}
+
+/* Calculate size needed to store bbdev_op, depending on type */
+static unsigned int
+get_bbdev_op_size(enum rte_bbdev_op_type type)
+{
+ unsigned int result = 0;
+ switch (type) {
+ case RTE_BBDEV_OP_NONE:
+ result = RTE_MAX(sizeof(struct rte_bbdev_dec_op),
+ sizeof(struct rte_bbdev_enc_op));
+ break;
+ case RTE_BBDEV_OP_TURBO_DEC:
+ result = sizeof(struct rte_bbdev_dec_op);
+ break;
+ case RTE_BBDEV_OP_TURBO_ENC:
+ result = sizeof(struct rte_bbdev_enc_op);
+ break;
+ default:
+ break;
+ }
+
+ return result;
+}
+
+/* Initialise a bbdev_op structure */
+static void
+bbdev_op_init(struct rte_mempool *mempool, void *arg, void *element,
+ __rte_unused unsigned int n)
+{
+ enum rte_bbdev_op_type type = *(enum rte_bbdev_op_type *)arg;
+
+ if (type == RTE_BBDEV_OP_TURBO_DEC) {
+ struct rte_bbdev_dec_op *op = element;
+ memset(op, 0, mempool->elt_size);
+ op->mempool = mempool;
+ } else if (type == RTE_BBDEV_OP_TURBO_ENC) {
+ struct rte_bbdev_enc_op *op = element;
+ memset(op, 0, mempool->elt_size);
+ op->mempool = mempool;
+ }
+}
+
+struct rte_mempool *
+rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
+ unsigned int num_elements, unsigned int cache_size,
+ int socket_id)
+{
+ struct rte_bbdev_op_pool_private *priv;
+ struct rte_mempool *mp;
+ const char *op_type_str;
+
+ if (name == NULL) {
+ rte_bbdev_log(ERR, "NULL name for op pool");
+ return NULL;
+ }
+
+ if (type >= RTE_BBDEV_OP_TYPE_COUNT) {
+ rte_bbdev_log(ERR,
+ "Invalid op type (%u), should be less than %u",
+ type, RTE_BBDEV_OP_TYPE_COUNT);
+ return NULL;
+ }
+
+ mp = rte_mempool_create(name, num_elements, get_bbdev_op_size(type),
+ cache_size, sizeof(struct rte_bbdev_op_pool_private),
+ NULL, NULL, bbdev_op_init, &type, socket_id, 0);
+ if (mp == NULL) {
+ rte_bbdev_log(ERR,
+ "Failed to create op pool %s (num ops=%u, op size=%u) with error: %s",
+ name, num_elements, get_bbdev_op_size(type),
+ rte_strerror(rte_errno));
+ return NULL;
+ }
+
+ op_type_str = rte_bbdev_op_type_str(type);
+ if (op_type_str == NULL)
+ return NULL;
+
+ rte_bbdev_log_debug(
+ "Op pool %s created for %u ops (type=%s, cache=%u, socket=%u, size=%u)",
+ name, num_elements, op_type_str, cache_size, socket_id,
+ get_bbdev_op_size(type));
+
+ priv = (struct rte_bbdev_op_pool_private *)rte_mempool_get_priv(mp);
+ priv->type = type;
+
+ return mp;
+}
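
From the application side, a minimal usage sketch of this function might look as follows (the pool name, element count, and cache size are illustrative values, and an initialised EAL is assumed):

```c
/* Create a mempool of turbo-decode operations; the type argument
 * selects the element size and the per-element init performed above. */
struct rte_mempool *dec_pool = rte_bbdev_op_pool_create(
		"bbdev_dec_pool",          /* unique pool name (example) */
		RTE_BBDEV_OP_TURBO_DEC,    /* operation type */
		4096,                      /* number of operations */
		128,                       /* per-lcore cache size */
		rte_socket_id());
if (dec_pool == NULL)
	rte_exit(EXIT_FAILURE, "Cannot create bbdev op pool\n");
```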
+
+int
+rte_bbdev_callback_register(uint16_t dev_id, enum rte_bbdev_event_type event,
+ rte_bbdev_cb_fn cb_fn, void *cb_arg)
+{
+ struct rte_bbdev_callback *user_cb;
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ if (event >= RTE_BBDEV_EVENT_MAX) {
+ rte_bbdev_log(ERR,
+ "Invalid event type (%u), should be less than %u",
+ event, RTE_BBDEV_EVENT_MAX);
+ return -EINVAL;
+ }
+
+ if (cb_fn == NULL) {
+ rte_bbdev_log(ERR, "NULL callback function");
+ return -EINVAL;
+ }
+
+ rte_spinlock_lock(&rte_bbdev_cb_lock);
+
+ TAILQ_FOREACH(user_cb, &(dev->list_cbs), next) {
+ if (user_cb->cb_fn == cb_fn &&
+ user_cb->cb_arg == cb_arg &&
+ user_cb->event == event)
+ break;
+ }
+
+ /* create a new callback. */
+ if (user_cb == NULL) {
+ user_cb = rte_zmalloc("INTR_USER_CALLBACK",
+ sizeof(struct rte_bbdev_callback), 0);
+ if (user_cb != NULL) {
+ user_cb->cb_fn = cb_fn;
+ user_cb->cb_arg = cb_arg;
+ user_cb->event = event;
+ TAILQ_INSERT_TAIL(&(dev->list_cbs), user_cb, next);
+ }
+ }
+
+ rte_spinlock_unlock(&rte_bbdev_cb_lock);
+ return (user_cb == NULL) ? -ENOMEM : 0;
+}
+
+int
+rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
+ rte_bbdev_cb_fn cb_fn, void *cb_arg)
+{
+ int ret = 0;
+ struct rte_bbdev_callback *cb, *next;
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+
+ if (event >= RTE_BBDEV_EVENT_MAX) {
+ rte_bbdev_log(ERR,
+ "Invalid event type (%u), should be less than %u",
+ event, RTE_BBDEV_EVENT_MAX);
+ return -EINVAL;
+ }
+
+ if (cb_fn == NULL) {
+ rte_bbdev_log(ERR,
+ "NULL callback function cannot be unregistered");
+ return -EINVAL;
+ }
+
+ dev = &rte_bbdev_devices[dev_id];
+ rte_spinlock_lock(&rte_bbdev_cb_lock);
+
+ for (cb = TAILQ_FIRST(&dev->list_cbs); cb != NULL; cb = next) {
+
+ next = TAILQ_NEXT(cb, next);
+
+ if (cb->cb_fn != cb_fn || cb->event != event ||
+ (cb_arg != (void *)-1 && cb->cb_arg != cb_arg))
+ continue;
+
+ /* If this callback is not executing right now, remove it. */
+ if (cb->active == 0) {
+ TAILQ_REMOVE(&(dev->list_cbs), cb, next);
+ rte_free(cb);
+ } else
+ ret = -EAGAIN;
+ }
+
+ rte_spinlock_unlock(&rte_bbdev_cb_lock);
+ return ret;
+}
+
+void
+rte_bbdev_pmd_callback_process(struct rte_bbdev *dev,
+ enum rte_bbdev_event_type event, void *ret_param)
+{
+ struct rte_bbdev_callback *cb_lst;
+ struct rte_bbdev_callback dev_cb;
+
+ if (dev == NULL) {
+ rte_bbdev_log(ERR, "NULL device");
+ return;
+ }
+
+ if (dev->data == NULL) {
+ rte_bbdev_log(ERR, "NULL data structure");
+ return;
+ }
+
+ if (event >= RTE_BBDEV_EVENT_MAX) {
+ rte_bbdev_log(ERR,
+ "Invalid event type (%u), should be less than %u",
+ event, RTE_BBDEV_EVENT_MAX);
+ return;
+ }
+
+ rte_spinlock_lock(&rte_bbdev_cb_lock);
+ TAILQ_FOREACH(cb_lst, &(dev->list_cbs), next) {
+ if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+ continue;
+ dev_cb = *cb_lst;
+ cb_lst->active = 1;
+ if (ret_param != NULL)
+ dev_cb.ret_param = ret_param;
+
+ rte_spinlock_unlock(&rte_bbdev_cb_lock);
+ dev_cb.cb_fn(dev->data->dev_id, dev_cb.event,
+ dev_cb.cb_arg, dev_cb.ret_param);
+ rte_spinlock_lock(&rte_bbdev_cb_lock);
+ cb_lst->active = 0;
+ }
+ rte_spinlock_unlock(&rte_bbdev_cb_lock);
+}
+
+int
+rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id)
+{
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+ VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+ VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_intr_enable, dev_id);
+ return dev->dev_ops->queue_intr_enable(dev, queue_id);
+}
+
+int
+rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id)
+{
+ struct rte_bbdev *dev = get_dev(dev_id);
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+ VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+ VALID_DEV_OPS_OR_RET_ERR(dev, dev_id);
+ VALID_FUNC_OR_RET_ERR(dev->dev_ops->queue_intr_disable, dev_id);
+ return dev->dev_ops->queue_intr_disable(dev, queue_id);
+}
+
+int
+rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
+ void *data)
+{
+ uint32_t vec;
+ struct rte_bbdev *dev = get_dev(dev_id);
+ struct rte_intr_handle *intr_handle;
+ int ret;
+
+ VALID_DEV_OR_RET_ERR(dev, dev_id);
+ VALID_QUEUE_OR_RET_ERR(queue_id, dev);
+
+ intr_handle = dev->intr_handle;
+ if (!intr_handle || !intr_handle->intr_vec) {
+ rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
+ return -ENOTSUP;
+ }
+
+ if (queue_id >= RTE_MAX_RXTX_INTR_VEC_ID) {
+ rte_bbdev_log(ERR, "Device %u queue_id %u is too big\n",
+ dev_id, queue_id);
+ return -ENOTSUP;
+ }
+
+ vec = intr_handle->intr_vec[queue_id];
+ ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
+ if (ret && (ret != -EEXIST)) {
+ rte_bbdev_log(ERR,
+ "dev %u q %u int ctl error op %d epfd %d vec %u\n",
+ dev_id, queue_id, op, epfd, vec);
+ return ret;
+ }
+
+ return 0;
+}
+
+
+const char *
+rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type)
+{
+ static const char * const op_types[] = {
+ "RTE_BBDEV_OP_NONE",
+ "RTE_BBDEV_OP_TURBO_DEC",
+ "RTE_BBDEV_OP_TURBO_ENC",
+ };
+
+ if (op_type < RTE_BBDEV_OP_TYPE_COUNT)
+ return op_types[op_type];
+
+ rte_bbdev_log(ERR, "Invalid operation type");
+ return NULL;
+}
+
+
+int bbdev_logtype;
+
+RTE_INIT(rte_bbdev_init_log);
+static void
+rte_bbdev_init_log(void)
+{
+ bbdev_logtype = rte_log_register("lib.bbdev");
+ if (bbdev_logtype >= 0)
+ rte_log_set_level(bbdev_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_bbdev/rte_bbdev.h b/lib/librte_bbdev/rte_bbdev.h
new file mode 100644
index 0000000..37a0d05
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev.h
@@ -0,0 +1,715 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _RTE_BBDEV_H_
+#define _RTE_BBDEV_H_
+
+/**
+ * @file rte_bbdev.h
+ *
+ * Wireless base band device abstraction APIs.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API allows an application to discover, configure and use a device to
+ * process operations. An asynchronous API (enqueue, followed by later dequeue)
+ * is used for processing operations.
+ *
+ * The functions in this API are not thread-safe when called on the same
+ * target object (a device, or a queue on a device), with the exception that
+ * one thread can enqueue operations to a queue while another thread dequeues
+ * from the same queue.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_bus.h>
+#include <rte_cpuflags.h>
+#include <rte_memory.h>
+
+#include "rte_bbdev_op.h"
+
+#ifndef RTE_BBDEV_MAX_DEVS
+#define RTE_BBDEV_MAX_DEVS 128 /**< Max number of devices */
+#endif
+
+/** Flags indicating the current state of a BBDEV device */
+enum rte_bbdev_state {
+ RTE_BBDEV_UNUSED,
+ RTE_BBDEV_INITIALIZED
+};
+
+/**
+ * Get the total number of devices that have been successfully initialised.
+ *
+ * @return
+ * The total number of usable devices.
+ */
+uint16_t
+rte_bbdev_count(void);
+
+/**
+ * Check if a device is valid.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @return
+ * true if device ID is valid and device is attached, false otherwise.
+ */
+bool
+rte_bbdev_is_valid(uint16_t dev_id);
+
+/**
+ * Get the next enabled device.
+ *
+ * @param dev_id
+ * The current device
+ *
+ * @return
+ * - The next device, or
+ * - RTE_BBDEV_MAX_DEVS if none found
+ */
+uint16_t
+rte_bbdev_find_next(uint16_t dev_id);
+
+/** Iterate through all enabled devices */
+#define RTE_BBDEV_FOREACH(i) for (i = rte_bbdev_find_next(-1); \
+ i < RTE_BBDEV_MAX_DEVS; \
+ i = rte_bbdev_find_next(i))
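
As a usage sketch (assuming rte_eal_init() has already run), the macro enumerates devices like so:

```c
/* Print the name and queue count of every initialised bbdev device. */
uint16_t dev_id;

RTE_BBDEV_FOREACH(dev_id) {
	struct rte_bbdev_info info;

	if (rte_bbdev_info_get(dev_id, &info) == 0)
		printf("bbdev %u: %s (%u queues)\n",
				dev_id, info.dev_name, info.num_queues);
}
```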
+
+/**
+ * Set up device queues.
+ * This function must be called on a device before configuring its queues and
+ * starting the device. It can also be called when a device is in the stopped
+ * state. If any device queues have been configured, their configuration will
+ * be cleared by a call to this function.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param num_queues
+ * Number of queues to configure on device.
+ * @param socket_id
+ * ID of a socket which will be used to allocate memory.
+ *
+ * @return
+ * - 0 on success
+ * - -ENODEV if dev_id is invalid or the device is corrupted
+ * - -EINVAL if num_queues is invalid, 0 or greater than maximum
+ * - -EBUSY if the identified device has already started
+ * - -ENOMEM if unable to allocate memory
+ */
+int
+rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues, int socket_id);
+
+/**
+ * Enable interrupts.
+ * This function may be called before starting the device to enable the
+ * interrupts if they are available.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @return
+ * - 0 on success
+ * - -ENODEV if dev_id is invalid or the device is corrupted
+ * - -EBUSY if the identified device has already started
+ * - -ENOTSUP if the interrupts are not supported by the device
+ */
+int
+rte_bbdev_intr_enable(uint16_t dev_id);
+
+/** Device queue configuration structure */
+struct rte_bbdev_queue_conf {
+ int socket; /**< NUMA socket used for memory allocation */
+ uint32_t queue_size; /**< Size of queue */
+ uint8_t priority; /**< Queue priority */
+ bool deferred_start; /**< Do not start queue when device is started. */
+ enum rte_bbdev_op_type op_type; /**< Operation type */
+};
+
+/**
+ * Configure a queue on a device.
+ * This function can be called after device configuration, and before starting.
+ * It can also be called when the device or the queue is in the stopped state.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param queue_id
+ * The index of the queue.
+ * @param conf
+ * The queue configuration. If NULL, a default configuration will be used.
+ *
+ * @return
+ * - 0 on success
+ * - -EINVAL if the identified queue size or priority is invalid
+ * - -EBUSY if the identified queue or its device has already started
+ */
+int
+rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
+ const struct rte_bbdev_queue_conf *conf);
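
Taken together with rte_bbdev_setup_queues() and rte_bbdev_start(), the expected bring-up order can be sketched as below (device id, queue count, and configuration values are illustrative; error handling is shortened for brevity):

```c
/* Illustrative bring-up of device 0 with 4 turbo-decode queues. */
struct rte_bbdev_queue_conf conf = {
	.socket = rte_socket_id(),
	.queue_size = 256,             /* must be a power of 2 */
	.priority = 0,
	.deferred_start = false,
	.op_type = RTE_BBDEV_OP_TURBO_DEC,
};
uint16_t q;

if (rte_bbdev_setup_queues(0, 4, rte_socket_id()) != 0)
	rte_exit(EXIT_FAILURE, "setup_queues failed\n");

for (q = 0; q < 4; q++)
	if (rte_bbdev_queue_configure(0, q, &conf) != 0)
		rte_exit(EXIT_FAILURE, "queue %u configure failed\n", q);

if (rte_bbdev_start(0) != 0)
	rte_exit(EXIT_FAILURE, "device start failed\n");
```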
+
+/**
+ * Start a device.
+ * This is the last step needed before operations can be enqueued.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @return
+ * - 0 on success
+ * - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_start(uint16_t dev_id);
+
+/**
+ * Stop a device.
+ * The device can be reconfigured, and restarted after being stopped.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @return
+ * - 0 on success
+ */
+int
+rte_bbdev_stop(uint16_t dev_id);
+
+/**
+ * Close a device.
+ * The device cannot be restarted without reconfiguration!
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @return
+ * - 0 on success
+ */
+int
+rte_bbdev_close(uint16_t dev_id);
+
+/**
+ * Start a specified queue on a device.
+ * This is only needed if the queue has been stopped, or if the deferred_start
+ * flag has been set when configuring the queue.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param queue_id
+ * The index of the queue.
+ *
+ * @return
+ * - 0 on success
+ * - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
+
+/**
+ * Stop a specified queue on a device, to allow reconfiguration.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param queue_id
+ * The index of the queue.
+ *
+ * @return
+ * - 0 on success
+ * - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id);
+
+/** Device statistics. */
+struct rte_bbdev_stats {
+ uint64_t enqueued_count; /**< Count of all operations enqueued */
+ uint64_t dequeued_count; /**< Count of all operations dequeued */
+ /** Total error count on operations enqueued */
+ uint64_t enqueue_err_count;
+ /** Total error count on operations dequeued */
+ uint64_t dequeue_err_count;
+};
+
+/**
+ * Retrieve the general I/O statistics of a device.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param stats
+ * Pointer to the structure where statistics will be copied. On error, this
+ * location may or may not have been modified.
+ *
+ * @return
+ * - 0 on success
+ * - -EINVAL if an invalid parameter pointer is provided
+ */
+int
+rte_bbdev_stats_get(uint16_t dev_id, struct rte_bbdev_stats *stats);
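
A short polling sketch (device id is illustrative; `<inttypes.h>` is assumed for the PRIu64 format macros):

```c
/* Read the aggregate enqueue/dequeue counters for device 0. */
struct rte_bbdev_stats stats;

if (rte_bbdev_stats_get(0, &stats) == 0)
	printf("enq=%" PRIu64 " deq=%" PRIu64 " enq_err=%" PRIu64 "\n",
			stats.enqueued_count, stats.dequeued_count,
			stats.enqueue_err_count);
```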
+
+/**
+ * Reset the statistics of a device.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @return
+ * - 0 on success
+ */
+int
+rte_bbdev_stats_reset(uint16_t dev_id);
+
+/** Device information supplied by the device's driver */
+struct rte_bbdev_driver_info {
+ /** Driver name */
+ const char *driver_name;
+
+ /** Maximum number of queues supported by the device */
+ unsigned int max_num_queues;
+ /** Queue size limit (queue size must also be power of 2) */
+ uint32_t queue_size_lim;
+ /** Set if device off-loads operation to hardware */
+ bool hardware_accelerated;
+ /** Max value supported by queue priority */
+ uint8_t max_queue_priority;
+ /** Set if device supports per-queue interrupts */
+ bool queue_intr_supported;
+ /** Minimum alignment of buffers, in bytes */
+ uint16_t min_alignment;
+ /** Default queue configuration used if none is supplied */
+ struct rte_bbdev_queue_conf default_queue_conf;
+ /** Device operation capabilities */
+ const struct rte_bbdev_op_cap *capabilities;
+ /** Device cpu_flag requirements */
+ const enum rte_cpu_flag_t *cpu_flag_reqs;
+};
+
+/** Macro used at the end of a bbdev driver's capabilities list */
+#define RTE_BBDEV_END_OF_CAPABILITIES_LIST() \
+ { RTE_BBDEV_OP_NONE }
+
+/**
+ * Device information structure used by an application to discover a device's
+ * capabilities and current configuration.
+ */
+struct rte_bbdev_info {
+ int socket_id; /**< NUMA socket that device is on */
+ const char *dev_name; /**< Unique device name */
+ const struct rte_bus *bus; /**< Bus information */
+ uint16_t num_queues; /**< Number of queues currently configured */
+ bool started; /**< Set if device is currently started */
+ struct rte_bbdev_driver_info drv; /**< Info from device driver */
+};
+
+/**
+ * Retrieve information about a device.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param dev_info
+ * Pointer to the structure where information will be copied. On error, this
+ * location may or may not have been modified.
+ *
+ * @return
+ * - 0 on success
+ * - -EINVAL if an invalid parameter pointer is provided
+ */
+int
+rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info);
+
+/** Queue information */
+struct rte_bbdev_queue_info {
+ /** Current device configuration */
+ struct rte_bbdev_queue_conf conf;
+ /** Set if queue is currently started */
+ bool started;
+};
+
+/**
+ * Retrieve information about a specific queue on a device.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param queue_id
+ * The index of the queue.
+ * @param queue_info
+ * Pointer to the structure where information will be copied. On error, this
+ * location may or may not have been modified.
+ *
+ * @return
+ * - 0 on success
+ * - -EINVAL if an invalid parameter pointer is provided
+ */
+int
+rte_bbdev_queue_info_get(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_queue_info *queue_info);
+
+/** @internal The data structure associated with each queue of a device. */
+struct rte_bbdev_queue_data {
+ void *queue_private; /**< Driver-specific per-queue data */
+ struct rte_bbdev_queue_conf conf; /**< Current configuration */
+ struct rte_bbdev_stats queue_stats; /**< Queue statistics */
+ bool started; /**< Queue state */
+};
+
+/** @internal Enqueue encode operations for processing on queue of a device. */
+typedef uint16_t (*rte_bbdev_enqueue_enc_ops_t)(
+ struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_enc_op **ops,
+ uint16_t num);
+
+/** @internal Enqueue decode operations for processing on queue of a device. */
+typedef uint16_t (*rte_bbdev_enqueue_dec_ops_t)(
+ struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_dec_op **ops,
+ uint16_t num);
+
+/** @internal Dequeue encode operations from a queue of a device. */
+typedef uint16_t (*rte_bbdev_dequeue_enc_ops_t)(
+ struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_enc_op **ops, uint16_t num);
+
+/** @internal Dequeue decode operations from a queue of a device. */
+typedef uint16_t (*rte_bbdev_dequeue_dec_ops_t)(
+ struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_dec_op **ops, uint16_t num);
+
+#define RTE_BBDEV_NAME_MAX_LEN 64 /**< Max length of device name */
+
+/**
+ * @internal The data associated with a device, with no function pointers.
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration. Drivers can access
+ * these fields, but should never write to them!
+ */
+struct rte_bbdev_data {
+ char name[RTE_BBDEV_NAME_MAX_LEN]; /**< Unique identifier name */
+ void *dev_private; /**< Driver-specific private data */
+ uint16_t num_queues; /**< Number of currently configured queues */
+ struct rte_bbdev_queue_data *queues; /**< Queue structures */
+ uint16_t dev_id; /**< Device ID */
+ int socket_id; /**< NUMA socket that device is on */
+ bool started; /**< Device run-time state */
+ /** Counter of processes using the device */
+ rte_atomic16_t process_cnt;
+};
+
+/* Forward declarations */
+struct rte_bbdev_ops;
+struct rte_bbdev_callback;
+struct rte_intr_handle;
+
+/** Structure to keep track of registered callbacks */
+TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+
+/**
+ * @internal The data structure associated with a device. Drivers can access
+ * these fields, but should only write to the *_ops fields.
+ */
+struct __rte_cache_aligned rte_bbdev {
+ /**< Enqueue encode function */
+ rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
+ /**< Enqueue decode function */
+ rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
+ /**< Dequeue encode function */
+ rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
+ /**< Dequeue decode function */
+ rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
+ const struct rte_bbdev_ops *dev_ops; /**< Functions exported by PMD */
+ struct rte_bbdev_data *data; /**< Pointer to device data */
+ enum rte_bbdev_state state; /**< If device is currently used or not */
+ struct rte_device *device; /**< Backing device */
+ /** User application callback for interrupts if present */
+ struct rte_bbdev_cb_list list_cbs;
+ struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+};
+
+/** @internal array of all devices */
+extern struct rte_bbdev rte_bbdev_devices[];
+
+/**
+ * Enqueue a burst of encode operations for processing on a queue of the device.
+ * This function only enqueues as many operations as currently possible and
+ * does not block until @p num_ops entries in the queue are available.
+ * This function does not provide any error notification to avoid the
+ * corresponding overhead.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param queue_id
+ * The index of the queue.
+ * @param ops
+ * Pointer array containing operations to be enqueued. Must have at least
+ * @p num_ops entries.
+ * @param num_ops
+ * The maximum number of operations to enqueue.
+ *
+ * @return
+ * The number of operations actually enqueued (this is the number of processed
+ * entries in the @p ops array).
+ */
+static inline uint16_t
+rte_bbdev_enqueue_enc_ops(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+{
+ struct rte_bbdev *dev = &rte_bbdev_devices[dev_id];
+ struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id];
+ uint16_t n = dev->enqueue_enc_ops(q_data, ops, num_ops);
+
+ rte_bbdev_log_verbose("%u encode ops enqueued to dev%u,q%u.\n",
+ num_ops, dev_id, queue_id);
+
+ return n;
+}
+
+/**
+ * Enqueue a burst of decode operations for processing on a queue of the device.
+ * This function only enqueues as many operations as currently possible and
+ * does not block until @p num_ops entries in the queue are available.
+ * This function does not provide any error notification to avoid the
+ * corresponding overhead.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param queue_id
+ * The index of the queue.
+ * @param ops
+ * Pointer array containing operations to be enqueued. Must have at least
+ * @p num_ops entries.
+ * @param num_ops
+ * The maximum number of operations to enqueue.
+ *
+ * @return
+ * The number of operations actually enqueued (this is the number of processed
+ * entries in the @p ops array).
+ */
+static inline uint16_t
+rte_bbdev_enqueue_dec_ops(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+{
+ struct rte_bbdev *dev = &rte_bbdev_devices[dev_id];
+ struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id];
+ uint16_t n = dev->enqueue_dec_ops(q_data, ops, num_ops);
+
+ rte_bbdev_log_verbose("%u decode ops enqueued to dev%u,q%u.\n",
+ num_ops, dev_id, queue_id);
+
+ return n;
+}
+
+/**
+ * Dequeue a burst of processed encode operations from a queue of the device.
+ * This function returns only the current contents of the queue, and does not
+ * block until @p num_ops entries are available.
+ * This function does not provide any error notification to avoid the
+ * corresponding overhead.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param queue_id
+ * The index of the queue.
+ * @param ops
+ * Pointer array where operations will be dequeued to. Must have at least
+ * @p num_ops entries
+ * @param num_ops
+ * The maximum number of operations to dequeue.
+ *
+ * @return
+ * The number of operations actually dequeued (this is the number of entries
+ * copied into the @p ops array).
+ */
+static inline uint16_t
+rte_bbdev_dequeue_enc_ops(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+{
+ struct rte_bbdev *dev = &rte_bbdev_devices[dev_id];
+ struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id];
+ uint16_t n = dev->dequeue_enc_ops(q_data, ops, num_ops);
+
+ rte_bbdev_log_verbose("%u encode ops dequeued from dev%u,q%u\n",
+ n, dev_id, queue_id);
+
+ return n;
+}
+
+/**
+ * Dequeue a burst of processed decode operations from a queue of the device.
+ * This function returns only the current contents of the queue, and does not
+ * block until @p num_ops entries are available.
+ * This function does not provide any error notification to avoid the
+ * corresponding overhead.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param queue_id
+ * The index of the queue.
+ * @param ops
+ * Pointer array where operations will be dequeued to. Must have at least
+ * @p num_ops entries
+ * @param num_ops
+ * The maximum number of operations to dequeue.
+ *
+ * @return
+ * The number of operations actually dequeued (this is the number of entries
+ * copied into the @p ops array).
+ */
+static inline uint16_t
+rte_bbdev_dequeue_dec_ops(uint16_t dev_id, uint16_t queue_id,
+ struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+{
+ struct rte_bbdev *dev = &rte_bbdev_devices[dev_id];
+ struct rte_bbdev_queue_data *q_data = &dev->data->queues[queue_id];
+ uint16_t n = dev->dequeue_dec_ops(q_data, ops, num_ops);
+
+ rte_bbdev_log_verbose("%u decode ops dequeued from dev%u,q%u\n",
+ n, dev_id, queue_id);
+
+ return n;
+}
+
+/** Definitions of device event types */
+enum rte_bbdev_event_type {
+ RTE_BBDEV_EVENT_UNKNOWN, /**< unknown event type */
+ RTE_BBDEV_EVENT_ERROR, /**< error interrupt event */
+ RTE_BBDEV_EVENT_DEQUEUE, /**< dequeue event */
+ RTE_BBDEV_EVENT_MAX /**< max value of this enum */
+};
+
+/**
+ * Typedef for application callback function registered by application
+ * software for notification of device events
+ *
+ * @param dev_id
+ * Device identifier
+ * @param event
+ * Device event to register for notification of.
+ * @param cb_arg
+ * User specified parameter to be passed to user's callback function.
+ * @param ret_param
+ * To pass data back to user application.
+ */
+typedef void (*rte_bbdev_cb_fn)(uint16_t dev_id,
+ enum rte_bbdev_event_type event, void *cb_arg,
+ void *ret_param);
+
+/**
+ * Register a callback function for specific device id. Multiple callbacks can
+ * be added and will be called in the order they are added when an event is
+ * triggered. Callbacks are called in a separate thread created by the DPDK EAL.
+ *
+ * @param dev_id
+ * Device id.
+ * @param event
+ * The event that the callback will be registered for.
+ * @param cb_fn
+ * User supplied callback function to be called.
+ * @param cb_arg
+ * Pointer to parameter that will be passed to the callback.
+ *
+ * @return
+ * Zero on success, negative value on failure.
+ */
+int
+rte_bbdev_callback_register(uint16_t dev_id, enum rte_bbdev_event_type event,
+ rte_bbdev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Unregister a callback function for specific device id.
+ *
+ * @param dev_id
+ * The device identifier.
+ * @param event
+ * The event that the callback will be unregistered for.
+ * @param cb_fn
+ * User supplied callback function to be unregistered.
+ * @param cb_arg
+ * Pointer to the parameter supplied when registering the callback.
+ * (void *)-1 means to remove all registered callbacks with the specified
+ * function address.
+ *
+ * @return
+ * - 0 on success
+ * - EINVAL if invalid parameter pointer is provided
+ * - EAGAIN if the provided callback pointer does not exist
+ */
+int
+rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
+ rte_bbdev_cb_fn cb_fn, void *cb_arg);
+
+/**
+ * Enable a one-shot interrupt on the next operation enqueued to a particular
+ * queue. The interrupt will be triggered when the operation is ready to be
+ * dequeued. To handle the interrupt, an epoll file descriptor must be
+ * registered using rte_bbdev_queue_intr_ctl(), and then an application
+ * thread/lcore can wait for the interrupt using rte_epoll_wait().
+ *
+ * @param dev_id
+ * The device identifier.
+ * @param queue_id
+ * The index of the queue.
+ *
+ * @return
+ * - 0 on success
+ * - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
+
+/**
+ * Disable a one-shot interrupt on the next operation enqueued to a particular
+ * queue (if it has been enabled).
+ *
+ * @param dev_id
+ * The device identifier.
+ * @param queue_id
+ * The index of the queue.
+ *
+ * @return
+ * - 0 on success
+ * - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
+
+/**
+ * Control interface for per-queue interrupts.
+ *
+ * @param dev_id
+ * The device identifier.
+ * @param queue_id
+ * The index of the queue.
+ * @param epfd
+ * Epoll file descriptor that will be associated with the interrupt source.
+ * If the special value RTE_EPOLL_PER_THREAD is provided, a per thread epoll
+ * file descriptor created by the EAL is used (RTE_EPOLL_PER_THREAD can also
+ * be used when calling rte_epoll_wait()).
+ * @param op
+ * The operation to be performed for the vector: RTE_INTR_EVENT_ADD or
+ * RTE_INTR_EVENT_DEL.
+ * @param data
+ * User context that will be returned in the epdata.data field of the
+ * rte_epoll_event structure filled in by rte_epoll_wait().
+ *
+ * @return
+ * - 0 on success
+ * - ENOTSUP if interrupts are not supported by the identified device
+ * - negative value on failure - as returned from PMD driver
+ */
+int
+rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
+ void *data);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BBDEV_H_ */
diff --git a/lib/librte_bbdev/rte_bbdev_op.h b/lib/librte_bbdev/rte_bbdev_op.h
new file mode 100644
index 0000000..c0c7d73
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev_op.h
@@ -0,0 +1,638 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _RTE_BBDEV_OP_H_
+#define _RTE_BBDEV_OP_H_
+
+/**
+ * @file rte_bbdev_op.h
+ *
+ * Defines wireless base band layer 1 operations and capabilities
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+#define RTE_BBDEV_MAX_CODE_BLOCKS 64
+
+extern int bbdev_logtype;
+
+/**
+ * Helper macro for logging
+ *
+ * @param level
+ * Log level: EMERG, ALERT, CRIT, ERR, WARNING, NOTICE, INFO, or DEBUG
+ * @param fmt
+ * The format string, as in printf(3).
+ * @param ...
+ * The variable arguments required by the format string.
+ *
+ * @return
+ * - 0 on success
+ * - Negative on error
+ */
+#define rte_bbdev_log(level, fmt, ...) \
+ rte_log(RTE_LOG_ ## level, bbdev_logtype, fmt "\n", ##__VA_ARGS__)
+
+/**
+ * Helper macro for debug logging with extra source info
+ *
+ * @param fmt
+ * The format string, as in printf(3).
+ * @param ...
+ * The variable arguments required by the format string.
+ *
+ * @return
+ * - 0 on success
+ * - Negative on error
+ */
+#define rte_bbdev_log_debug(fmt, ...) \
+ rte_bbdev_log(DEBUG, RTE_STR(__LINE__) ":%s() " fmt, __func__, \
+ ##__VA_ARGS__)
+
+/**
+ * Helper macro for extra conditional logging from datapath
+ *
+ * @param fmt
+ * The format string, as in printf(3).
+ * @param ...
+ * The variable arguments required by the format string.
+ *
+ * @return
+ * - 0 on success
+ * - Negative on error
+ */
+#define rte_bbdev_log_verbose(fmt, ...) \
+ (void)((RTE_LOG_DEBUG <= RTE_LOG_DP_LEVEL) ? \
+ rte_log(RTE_LOG_DEBUG, \
+ bbdev_logtype, ": " fmt "\n", ##__VA_ARGS__) : 0)
+
+/** Flags for turbo decoder operation and capability structure */
+enum rte_bbdev_op_td_flag_bitmasks {
+ /**< If sub block de-interleaving is to be performed. */
+ RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE = (1ULL << 0),
+ /**< To use CRC Type 24B (otherwise use CRC Type 24A). */
+ RTE_BBDEV_TURBO_CRC_TYPE_24B = (1ULL << 1),
+ /**< If turbo equalization is to be performed. */
+ RTE_BBDEV_TURBO_EQUALIZER = (1ULL << 2),
+ /**< If set, saturate soft output to +/-127 */
+ RTE_BBDEV_TURBO_SOFT_OUT_SATURATE = (1ULL << 3),
+ /**< Set to 1 to start iteration from even, else odd; one iteration =
+ * max_iteration + 0.5
+ */
+ RTE_BBDEV_TURBO_HALF_ITERATION_EVEN = (1ULL << 4),
+ /**< If 0, TD stops after CRC matches; else if 1, runs to end of next
+ * odd iteration after CRC matches
+ */
+ RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH = (1ULL << 5),
+ /**< Set if soft output is required to be output */
+ RTE_BBDEV_TURBO_SOFT_OUTPUT = (1ULL << 6),
+ /**< Set to enable early termination mode */
+ RTE_BBDEV_TURBO_EARLY_TERMINATION = (1ULL << 7),
+ /**< Set if a device supports decoder dequeue interrupts */
+ RTE_BBDEV_TURBO_DEC_INTERRUPTS = (1ULL << 9),
+ /**< Set if positive LLR encoded input is supported. Positive LLR value
+ * represents the level of confidence for bit '1', and vice versa for
+ * bit '0'.
+ * This is mutually exclusive with RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN
+ * when used to formalize the input data format.
+ */
+ RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN = (1ULL << 10),
+ /**< Set if negative LLR encoded input is supported. Negative LLR value
+ * represents the level of confidence for bit '1', and vice versa for
+ * bit '0'.
+ * This is mutually exclusive with RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN
+ * when used to formalize the input data format.
+ */
+ RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN = (1ULL << 11),
+ /**< Set if positive LLR soft output is supported. Positive LLR value
+ * represents the level of confidence for bit '1', and vice versa for
+ * bit '0'.
+ * This is mutually exclusive with
+ * RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT when used to formalize
+ * the input data format.
+ */
+ RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT = (1ULL << 12),
+ /**< Set if negative LLR soft output is supported. Negative LLR value
+ * represents the level of confidence for bit '1', and vice versa for
+ * bit '0'.
+ * This is mutually exclusive with
+ * RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT when used to formalize the
+ * input data format.
+ */
+ RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT = (1ULL << 13),
+ /**< Set if driver supports flexible parallel MAP engine decoding. If
+ * not supported, num_maps (number of MAP engines) argument is unusable.
+ */
+ RTE_BBDEV_TURBO_MAP_DEC = (1ULL << 14),
+ /**< Set if a device supports scatter-gather functionality */
+ RTE_BBDEV_TURBO_DEC_SCATTER_GATHER = (1ULL << 15)
+};
+
+/** Flags for turbo encoder operation and capability structure */
+enum rte_bbdev_op_te_flag_bitmasks {
+ /**< Ignore rv_index and set K0 = 0 */
+ RTE_BBDEV_TURBO_RV_INDEX_BYPASS = (1ULL << 0),
+ /**< If rate matching is to be performed */
+ RTE_BBDEV_TURBO_RATE_MATCH = (1ULL << 1),
+ /**< This bit must be set to enable CRC-24B generation */
+ RTE_BBDEV_TURBO_CRC_24B_ATTACH = (1ULL << 2),
+ /**< This bit must be set to enable CRC-24A generation */
+ RTE_BBDEV_TURBO_CRC_24A_ATTACH = (1ULL << 3),
+ /**< Set if a device supports encoder dequeue interrupts */
+ RTE_BBDEV_TURBO_ENC_INTERRUPTS = (1ULL << 4),
+ /**< Set if a device supports scatter-gather functionality */
+ RTE_BBDEV_TURBO_ENC_SCATTER_GATHER = (1ULL << 5)
+};
+
+/** Data input and output buffer for BBDEV operations */
+struct rte_bbdev_op_data {
+ /**< The mbuf data structure representing the data for BBDEV operation.
+ *
+ * This mbuf pointer can point to one Code Block (CB) data buffer or
+ * multiple CBs contiguously located next to each other.
+ * A Transport Block (TB) represents a whole piece of data that is
+ * divided into one or more CBs. The maximum number of CBs that can be
+ * contained in one TB is defined by RTE_BBDEV_MAX_CODE_BLOCKS.
+ *
+ * An mbuf data structure cannot represent more than one TB. The
+ * smallest piece of data that can be contained in one mbuf is one CB.
+ * An mbuf can include one contiguous CB, a subset of contiguous CBs
+ * belonging to one TB, or all contiguous CBs belonging to one TB.
+ *
+ * If a BBDEV PMD supports the extended capability "Scatter-Gather",
+ * then it is capable of collecting (gathering) non-contiguous
+ * (scattered) data from multiple locations in the memory.
+ * This capability is reported by the capability flags:
+ * - RTE_BBDEV_TURBO_ENC_SCATTER_GATHER and
+ * - RTE_BBDEV_TURBO_DEC_SCATTER_GATHER.
+ * Only if a BBDEV PMD supports this feature, chained mbuf data
+ * structures are accepted. A chained mbuf can represent one
+ * non-contiguous CB or multiple non-contiguous CBs.
+ * If a BBDEV PMD does not support this feature, it will assume the inbound
+ * mbuf data contains one segment.
+ *
+ * The output mbuf data though is always one segment, even if the input
+ * was a chained mbuf.
+ */
+ struct rte_mbuf *data;
+ /**< The starting point of the BBDEV (encode/decode) operation,
+ * in bytes.
+ *
+ * BBDEV starts to read data past this offset.
+ * In case of chained mbuf, this offset applies only to the first mbuf
+ * segment.
+ */
+ uint32_t offset;
+ /**< The total data length to be processed in one operation, in bytes.
+ *
+ * In case the mbuf data is representing one CB, this is the length of
+ * the CB undergoing the operation.
+ * If it's for multiple CBs, this is the total length of those CBs
+ * undergoing the operation.
+ * If it's for one TB, this is the total length of the TB under
+ * operation.
+ *
+ * In case of chained mbuf, this data length includes the lengths of the
+ * "scattered" data segments undergoing the operation.
+ */
+ uint32_t length;
+};
+
+struct rte_bbdev_op_dec_cb_params {
+ /**< The K size of the input CB, in bits [40:6144], as specified in
+ * 3GPP TS 36.212.
+ * This size is inclusive of CRC bits, regardless whether it was
+ * pre-calculated by the application or not.
+ */
+ uint16_t k;
+ /**< The E length of the CB rate matched LLR output, in bytes, as in
+ * 3GPP TS 36.212.
+ */
+ uint32_t e;
+};
+
+struct rte_bbdev_op_dec_tb_params {
+ /**< The K- size of the input CB, in bits [40:6144], that is in the
+ * Turbo operation when r < C-, as in 3GPP TS 36.212.
+ */
+ uint16_t k_neg;
+ /**< The K+ size of the input CB, in bits [40:6144], that is in the
+ * Turbo operation when r >= C-, as in 3GPP TS 36.212.
+ */
+ uint16_t k_pos;
+ /**< The number of CBs that have K- size, [0:63] */
+ uint8_t c_neg;
+ /**< The total number of CBs in the TB, [1:RTE_BBDEV_MAX_CODE_BLOCKS] */
+ uint8_t c;
+ /**< The number of CBs that use Ea before switching to Eb, [0:63] */
+ uint8_t cab;
+ /**< The E size of the CB rate matched output to use in the Turbo
+ * operation when r < cab
+ */
+ uint32_t ea;
+ /**< The E size of the CB rate matched output to use in the Turbo
+ * operation when r >= cab
+ */
+ uint32_t eb;
+};
+
+/** Operation structure for Turbo decode.
+ * An operation can be performed on one CB at a time "CB-mode".
+ * An operation can be performed on one or multiple CBs that logically belong
+ * to one TB "TB-mode".
+ * The provided K size parameter of the CB is its size coming out of the
+ * decode operation.
+ * CRC24A/B check is requested by the application by setting the flag
+ * RTE_BBDEV_TURBO_CRC_TYPE_24B for CRC24B check or CRC24A otherwise.
+ * In TB-mode, BBDEV concatenates the decoded CBs one next to the other with
+ * relevant CRC24B in between.
+ *
+ * The input encoded CB data is the Virtual Circular Buffer data stream, wk,
+ * with the null padding included as described in 3GPP TS 36.212
+ * section 5.1.4.1.2 and shown in 3GPP TS 36.212 section 5.1.4.1 Figure 5.1.4-1.
+ * The size of the virtual circular buffer is 3*Kpi, where Kpi is the 32 byte
+ * aligned value of K, as specified in 3GPP TS 36.212 section 5.1.4.1.1.
+ *
+ * Each byte in the input circular buffer is the LLR value of each bit of the
+ * original CB.
+ *
+ * Hard output is a mandatory capability that all BBDEV PMDs support. This is
+ * the decoded CBs of K sizes (CRC24A/B is the last 24-bit in each decoded CB).
+ * Soft output is an optional capability for BBDEV PMDs. If supported, an LLR
+ * rate matched output is computed in the soft_output buffer structure.
+ *
+ * The output mbuf data structure is expected to be allocated by the
+ * application with enough room for the output data.
+ */
+struct rte_bbdev_op_turbo_dec {
+ /**< The Virtual Circular Buffer, wk, size 3*Kpi for each CB */
+ struct rte_bbdev_op_data input;
+ /**< The hard decisions buffer for the decoded output,
+ * size K for each CB
+ */
+ struct rte_bbdev_op_data hard_output;
+ /**< The soft LLR output buffer - optional */
+ struct rte_bbdev_op_data soft_output;
+
+ uint32_t op_flags; /**< Flags from rte_bbdev_op_td_flag_bitmasks */
+ uint8_t rv_index; /**< Rv index for rate matching [0:3] */
+ /**< The minimum number of iterations to perform in decoding all CBs in
+ * this operation - input
+ */
+ uint8_t iter_min:4;
+ /**< The maximum number of iterations to perform in decoding all CBs in
+ * this operation - input
+ */
+ uint8_t iter_max:4;
+ /**< The maximum number of iterations that were performed in decoding all
+ * CBs in this decode operation - output
+ */
+ uint8_t iter_count;
+ /**< 5 bit extrinsic scale (scale factor on extrinsic info) */
+ uint8_t ext_scale;
+ /**< Number of MAP engines to use in decode,
+ * must be power of 2 (or 0 to auto-select)
+ */
+ uint8_t num_maps;
+
+ uint8_t code_block_mode; /**< [0 - TB : 1 - CB] */
+ union {
+ /**< Struct which stores Code Block specific parameters */
+ struct rte_bbdev_op_dec_cb_params cb_params;
+ /**< Struct which stores Transport Block specific parameters */
+ struct rte_bbdev_op_dec_tb_params tb_params;
+ };
+};
+
+struct rte_bbdev_op_enc_cb_params {
+ /**< The K size of the input CB, in bits [40:6144], as specified in
+ * 3GPP TS 36.212.
+ * This size is inclusive of CRC24A, regardless whether it was
+ * pre-calculated by the application or not.
+ */
+ uint16_t k;
+ /**< The E length of the CB rate matched output, in bits, as in
+ * 3GPP TS 36.212.
+ */
+ uint32_t e;
+ /**< The Ncb soft buffer size of the CB rate matched output [K:3*Kpi],
+ * in bits, as specified in 3GPP TS 36.212.
+ */
+ uint16_t ncb;
+};
+
+struct rte_bbdev_op_enc_tb_params {
+ /**< The K- size of the input CB, in bits [40:6144], that is in the
+ * Turbo operation when r < C-, as in 3GPP TS 36.212.
+ * This size is inclusive of CRC24B, regardless whether it was
+ * pre-calculated and appended by the application or not.
+ */
+ uint16_t k_neg;
+ /**< The K+ size of the input CB, in bits [40:6144], that is in the
+ * Turbo operation when r >= C-, as in 3GPP TS 36.212.
+ * This size is inclusive of CRC24B, regardless whether it was
+ * pre-calculated and appended by the application or not.
+ */
+ uint16_t k_pos;
+ /**< The number of CBs that have K- size, [0:63] */
+ uint8_t c_neg;
+ /**< The total number of CBs in the TB, [1:RTE_BBDEV_MAX_CODE_BLOCKS] */
+ uint8_t c;
+ /**< The number of CBs that use Ea before switching to Eb, [0:63] */
+ uint8_t cab;
+ /**< The E size of the CB rate matched output to use in the Turbo
+ * operation when r < cab
+ */
+ uint32_t ea;
+ /**< The E size of the CB rate matched output to use in the Turbo
+ * operation when r >= cab
+ */
+ uint32_t eb;
+ /**< The Ncb soft buffer size for the rate matched CB that is used in
+ * the Turbo operation when r < C-, [K:3*Kpi]
+ */
+ uint16_t ncb_neg;
+ /**< The Ncb soft buffer size for the rate matched CB that is used in
+ * the Turbo operation when r >= C-, [K:3*Kpi]
+ */
+ uint16_t ncb_pos;
+ /**< The index of the first CB in the inbound mbuf data, default is 0 */
+ uint8_t r;
+};
+
+/** Operation structure for Turbo encode.
+ * An operation can be performed on one CB at a time "CB-mode".
+ * An operation can be performed on one or multiple CBs that logically
+ * belong to one TB "TB-mode".
+ *
+ * In CB-mode, CRC24A/B is an optional operation. The K size parameter is not
+ * affected by CRC24A/B inclusion; it only affects the inbound mbuf data
+ * length. Not all BBDEV PMDs are capable of CRC24A/B calculation. The flags
+ * RTE_BBDEV_TURBO_CRC_24A_ATTACH and RTE_BBDEV_TURBO_CRC_24B_ATTACH inform
+ * the application of the relevant capability. These flags can be set in the
+ * op_flags parameter to instruct BBDEV to calculate and append the CRC to the
+ * CB before going forward with Turbo encoding.
+ *
+ * In TB-mode, CRC24A is assumed to be pre-calculated and appended to the
+ * inbound TB mbuf data buffer.
+ *
+ * The output mbuf data structure is expected to be allocated by the
+ * application with enough room for the output data.
+ */
+struct rte_bbdev_op_turbo_enc {
+ /**< The input CB or TB data */
+ struct rte_bbdev_op_data input;
+ /**< The rate matched CB or TB output buffer */
+ struct rte_bbdev_op_data output;
+
+ uint32_t op_flags; /**< Flags from rte_bbdev_op_te_flag_bitmasks */
+ uint8_t rv_index; /**< Rv index for rate matching [0:3] */
+
+ uint8_t code_block_mode; /**< [0 - TB : 1 - CB] */
+ union {
+ /**< Struct which stores Code Block specific parameters */
+ struct rte_bbdev_op_enc_cb_params cb_params;
+ /**< Struct which stores Transport Block specific parameters */
+ struct rte_bbdev_op_enc_tb_params tb_params;
+ };
+};
+
+/** List of the capabilities for the Turbo Decoder */
+struct rte_bbdev_op_cap_turbo_dec {
+ /**< Flags from rte_bbdev_op_td_flag_bitmasks */
+ uint32_t capability_flags;
+ uint8_t num_buffers_src; /**< Num input code block buffers */
+ /**< Num hard output code block buffers */
+ uint8_t num_buffers_hard_out;
+ /**< Num soft output code block buffers if supported by the driver */
+ uint8_t num_buffers_soft_out;
+};
+
+/** List of the capabilities for the Turbo Encoder */
+struct rte_bbdev_op_cap_turbo_enc {
+ /**< Flags from rte_bbdev_op_te_flag_bitmasks */
+ uint32_t capability_flags;
+ uint8_t num_buffers_src; /**< Num input code block buffers */
+ uint8_t num_buffers_dst; /**< Num output code block buffers */
+};
+
+/** Different operation types supported by the device */
+enum rte_bbdev_op_type {
+ RTE_BBDEV_OP_NONE, /**< Dummy operation that does nothing */
+ RTE_BBDEV_OP_TURBO_DEC, /**< Turbo decode */
+ RTE_BBDEV_OP_TURBO_ENC, /**< Turbo encode */
+ RTE_BBDEV_OP_TYPE_COUNT, /**< Count of different op types */
+};
+
+/** Bit indexes of possible errors reported through status field */
+enum {
+ RTE_BBDEV_DRV_ERROR,
+ RTE_BBDEV_DATA_ERROR,
+ RTE_BBDEV_CRC_ERROR,
+};
+
+/** Structure specifying a single encode operation */
+struct rte_bbdev_enc_op {
+ int status; /**< Status of operation that was performed */
+ struct rte_mempool *mempool; /**< Mempool which op instance is in */
+ void *opaque_data; /**< Opaque pointer for user data */
+ /**< Contains encoder specific parameters */
+ struct rte_bbdev_op_turbo_enc turbo_enc;
+};
+
+/** Structure specifying a single decode operation */
+struct rte_bbdev_dec_op {
+ int status; /**< Status of operation that was performed */
+ struct rte_mempool *mempool; /**< Mempool which op instance is in */
+ void *opaque_data; /**< Opaque pointer for user data */
+ /**< Contains decoder specific parameters */
+ struct rte_bbdev_op_turbo_dec turbo_dec;
+};
+
+/** Operation capabilities supported by a device */
+struct rte_bbdev_op_cap {
+ enum rte_bbdev_op_type type; /**< Type of operation */
+ union {
+ struct rte_bbdev_op_cap_turbo_dec turbo_dec;
+ struct rte_bbdev_op_cap_turbo_enc turbo_enc;
+ } cap; /**< Operation-type specific capabilities */
+};
+
+/** @internal Private data structure stored with operation pool. */
+struct rte_bbdev_op_pool_private {
+ enum rte_bbdev_op_type type; /**< Type of operations in a pool */
+};
+
+/**
+ * Converts queue operation type from enum to string
+ *
+ * @param op_type
+ * Operation type as enum
+ *
+ * @returns
+ * Operation type as string or NULL if op_type is invalid
+ *
+ */
+const char*
+rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type);
+
+/**
+ * Creates a bbdev operation mempool
+ *
+ * @param name
+ * Pool name.
+ * @param type
+ * Operation type, use RTE_BBDEV_OP_NONE for a pool which supports all
+ * operation types.
+ * @param num_elements
+ * Number of elements in the pool.
+ * @param cache_size
+ * Number of elements to cache on an lcore, see rte_mempool_create() for
+ * further details about cache size.
+ * @param socket_id
+ * Socket to allocate memory on.
+ *
+ * @return
+ * - Pointer to a mempool on success,
+ * - NULL pointer on failure.
+ */
+struct rte_mempool *
+rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
+ unsigned int num_elements, unsigned int cache_size,
+ int socket_id);
+
+/**
+ * Bulk allocate encode operations from a mempool with parameter defaults reset.
+ *
+ * @param mempool
+ * Operation mempool, created by rte_bbdev_op_pool_create().
+ * @param ops
+ * Output array to place allocated operations
+ * @param num_ops
+ * Number of operations to allocate
+ *
+ * @returns
+ * - 0 on success
+ * - EINVAL if invalid mempool is provided
+ */
+static inline int
+rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,
+ struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+{
+ struct rte_bbdev_op_pool_private *priv;
+ int ret;
+
+ /* Check type */
+ priv = (struct rte_bbdev_op_pool_private *)
+ rte_mempool_get_priv(mempool);
+ if (unlikely(priv->type != RTE_BBDEV_OP_TURBO_ENC))
+ return -EINVAL;
+
+ /* Get elements */
+ ret = rte_mempool_get_bulk(mempool, (void **)ops, num_ops);
+ if (unlikely(ret < 0))
+ return ret;
+
+ rte_bbdev_log_verbose("%u encode ops allocated from %s\n",
+ num_ops, mempool->name);
+
+ return 0;
+}
+
+/**
+ * Bulk allocate decode operations from a mempool with parameter defaults reset.
+ *
+ * @param mempool
+ * Operation mempool, created by rte_bbdev_op_pool_create().
+ * @param ops
+ * Output array to place allocated operations
+ * @param num_ops
+ * Number of operations to allocate
+ *
+ * @returns
+ * - 0 on success
+ * - -EINVAL if invalid mempool is provided
+ */
+static inline int
+rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,
+ struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+{
+ struct rte_bbdev_op_pool_private *priv;
+ int ret;
+
+ /* Check type */
+ priv = (struct rte_bbdev_op_pool_private *)
+ rte_mempool_get_priv(mempool);
+ if (unlikely(priv->type != RTE_BBDEV_OP_TURBO_DEC))
+ return -EINVAL;
+
+ /* Get elements */
+ ret = rte_mempool_get_bulk(mempool, (void **)ops, num_ops);
+ if (unlikely(ret < 0))
+ return ret;
+
+ rte_bbdev_log_verbose("%u decode ops allocated from %s\n",
+ num_ops, mempool->name);
+
+ return 0;
+}
+
+/**
+ * Free decode operation structures that were allocated by
+ * rte_bbdev_dec_op_alloc_bulk().
+ * All structures must belong to the same mempool.
+ *
+ * @param ops
+ * Operation structures
+ * @param num_ops
+ * Number of structures
+ */
+static inline void
+rte_bbdev_dec_op_free_bulk(struct rte_bbdev_dec_op **ops, unsigned int num_ops)
+{
+ if (num_ops > 0) {
+ rte_mempool_put_bulk(ops[0]->mempool, (void **)ops, num_ops);
+ rte_bbdev_log_verbose("%u decode ops freed to %s\n", num_ops,
+ ops[0]->mempool->name);
+ }
+}
+
+/**
+ * Free encode operation structures that were allocated by
+ * rte_bbdev_enc_op_alloc_bulk().
+ * All structures must belong to the same mempool.
+ *
+ * @param ops
+ * Operation structures
+ * @param num_ops
+ * Number of structures
+ */
+static inline void
+rte_bbdev_enc_op_free_bulk(struct rte_bbdev_enc_op **ops, unsigned int num_ops)
+{
+ if (num_ops > 0) {
+ rte_mempool_put_bulk(ops[0]->mempool, (void **)ops, num_ops);
+ rte_bbdev_log_verbose("%u encode ops freed to %s\n", num_ops,
+ ops[0]->mempool->name);
+ }
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BBDEV_OP_H_ */
diff --git a/lib/librte_bbdev/rte_bbdev_pmd.h b/lib/librte_bbdev/rte_bbdev_pmd.h
new file mode 100644
index 0000000..7d1b240
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev_pmd.h
@@ -0,0 +1,198 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _RTE_BBDEV_PMD_H_
+#define _RTE_BBDEV_PMD_H_
+
+/**
+ * @file rte_bbdev_pmd.h
+ *
+ * Wireless base band driver-facing APIs.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API provides the mechanism for device drivers to register with the
+ * bbdev interface. User applications should not use this API.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <rte_log.h>
+
+#include "rte_bbdev.h"
+
+/** Suggested value for SW based devices */
+#define RTE_BBDEV_DEFAULT_MAX_NB_QUEUES RTE_MAX_LCORE
+
+/** Suggested value for SW based devices */
+#define RTE_BBDEV_QUEUE_SIZE_LIMIT 16384
+
+/**
+ * @internal
+ * Allocates a new slot for a bbdev and returns the pointer to that slot
+ * for the driver to use.
+ *
+ * @param name
+ * Unique identifier name for each bbdev device
+ *
+ * @return
+ * - Slot in the rte_bbdev array for a new device;
+ */
+struct rte_bbdev *
+rte_bbdev_allocate(const char *name);
+
+/**
+ * @internal
+ * Release the specified bbdev.
+ *
+ * @param bbdev
+ * The *bbdev* pointer is the address of the *rte_bbdev* structure.
+ * @return
+ * - 0 on success, negative on error
+ */
+int
+rte_bbdev_release(struct rte_bbdev *bbdev);
+
+/**
+ * Get the device structure for a named device.
+ *
+ * @param name
+ * Name of the device
+ *
+ * @return
+ * - The device structure pointer, or
+ * - NULL otherwise
+ *
+ */
+struct rte_bbdev *
+rte_bbdev_get_named_dev(const char *name);
+
+/**
+ * Definitions of all functions exported by a driver through the generic
+ * structure of type *rte_bbdev_ops* supplied in the *rte_bbdev* structure
+ * associated with a device.
+ */
+
+/** @internal Function used to configure device memory. */
+typedef int (*rte_bbdev_setup_queues_t)(struct rte_bbdev *dev,
+ uint16_t num_queues, int socket_id);
+
+/** @internal Function used to configure interrupts for a device. */
+typedef int (*rte_bbdev_intr_enable_t)(struct rte_bbdev *dev);
+
+/** @internal Function to allocate and configure a device queue. */
+typedef int (*rte_bbdev_queue_setup_t)(struct rte_bbdev *dev,
+ uint16_t queue_id, const struct rte_bbdev_queue_conf *conf);
+
+/**
+ * @internal
+ * Function to release memory resources allocated for a device queue.
+ */
+typedef int (*rte_bbdev_queue_release_t)(struct rte_bbdev *dev,
+ uint16_t queue_id);
+
+/** @internal Function to start a configured device. */
+typedef int (*rte_bbdev_start_t)(struct rte_bbdev *dev);
+
+/** @internal Function to stop a device. */
+typedef void (*rte_bbdev_stop_t)(struct rte_bbdev *dev);
+
+/** @internal Function to close a device. */
+typedef int (*rte_bbdev_close_t)(struct rte_bbdev *dev);
+
+/** @internal Function to start a device queue. */
+typedef int (*rte_bbdev_queue_start_t)(struct rte_bbdev *dev,
+ uint16_t queue_id);
+
+/** @internal Function to stop a device queue. */
+typedef int (*rte_bbdev_queue_stop_t)(struct rte_bbdev *dev, uint16_t queue_id);
+
+/** @internal Function to read stats from a device. */
+typedef void (*rte_bbdev_stats_get_t)(struct rte_bbdev *dev,
+ struct rte_bbdev_stats *stats);
+
+/** @internal Function to reset stats on a device. */
+typedef void (*rte_bbdev_stats_reset_t)(struct rte_bbdev *dev);
+
+/** @internal Function to retrieve specific information of a device. */
+typedef void (*rte_bbdev_info_get_t)(struct rte_bbdev *dev,
+ struct rte_bbdev_driver_info *dev_info);
+
+/**
+ * @internal
+ * Function to enable interrupt for next op on a queue of a device.
+ */
+typedef int (*rte_bbdev_queue_intr_enable_t)(struct rte_bbdev *dev,
+ uint16_t queue_id);
+
+/**
+ * @internal
+ * Function to disable interrupt for next op on a queue of a device.
+ */
+typedef int (*rte_bbdev_queue_intr_disable_t)(struct rte_bbdev *dev,
+ uint16_t queue_id);
+
+/**
+ * Operations implemented by drivers. Fields marked as "Required" must be
+ * provided by a driver for a device to have basic functionality. "Optional"
+ * fields are for non-vital operations.
+ */
+struct rte_bbdev_ops {
+ /** Allocate and configure device memory. Optional. */
+ rte_bbdev_setup_queues_t setup_queues;
+ /** Configure interrupts. Optional. */
+ rte_bbdev_intr_enable_t intr_enable;
+ /** Start device. Optional. */
+ rte_bbdev_start_t start;
+ /** Stop device. Optional. */
+ rte_bbdev_stop_t stop;
+ /** Close device. Optional. */
+ rte_bbdev_close_t close;
+
+ /** Get device info. Required. */
+ rte_bbdev_info_get_t info_get;
+ /** Get device statistics. Optional. */
+ rte_bbdev_stats_get_t stats_get;
+ /** Reset device statistics. Optional. */
+ rte_bbdev_stats_reset_t stats_reset;
+
+ /** Set up a device queue. Required. */
+ rte_bbdev_queue_setup_t queue_setup;
+ /** Release a queue. Required. */
+ rte_bbdev_queue_release_t queue_release;
+ /** Start a queue. Optional. */
+ rte_bbdev_queue_start_t queue_start;
+ /** Stop a queue. Optional. */
+ rte_bbdev_queue_stop_t queue_stop;
+
+ /** Enable queue interrupt. Optional. */
+ rte_bbdev_queue_intr_enable_t queue_intr_enable;
+ /** Disable queue interrupt. Optional. */
+ rte_bbdev_queue_intr_disable_t queue_intr_disable;
+};
+
+/**
+ * Executes all the user application registered callbacks for the specific
+ * device and event type.
+ *
+ * @param dev
+ * Pointer to the device structure.
+ * @param event
+ * Event type.
+ * @param ret_param
+ * To pass data back to user application.
+ */
+void
+rte_bbdev_pmd_callback_process(struct rte_bbdev *dev,
+ enum rte_bbdev_event_type event, void *ret_param);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BBDEV_PMD_H_ */
diff --git a/lib/librte_bbdev/rte_bbdev_version.map b/lib/librte_bbdev/rte_bbdev_version.map
new file mode 100644
index 0000000..2448f1a
--- /dev/null
+++ b/lib/librte_bbdev/rte_bbdev_version.map
@@ -0,0 +1,37 @@
+EXPERIMENTAL {
+ global:
+
+ rte_bbdev_allocate;
+ rte_bbdev_callback_register;
+ rte_bbdev_callback_unregister;
+ rte_bbdev_close;
+ rte_bbdev_count;
+ rte_bbdev_dequeue_dec_ops;
+ rte_bbdev_dequeue_enc_ops;
+ rte_bbdev_devices;
+ rte_bbdev_enqueue_dec_ops;
+ rte_bbdev_enqueue_enc_ops;
+ rte_bbdev_find_next;
+ rte_bbdev_get_named_dev;
+ rte_bbdev_info_get;
+ rte_bbdev_intr_enable;
+ rte_bbdev_is_valid;
+ rte_bbdev_op_pool_create;
+ rte_bbdev_op_type_str;
+ rte_bbdev_pmd_callback_process;
+ rte_bbdev_queue_configure;
+ rte_bbdev_queue_info_get;
+ rte_bbdev_queue_intr_ctl;
+ rte_bbdev_queue_intr_disable;
+ rte_bbdev_queue_intr_enable;
+ rte_bbdev_queue_start;
+ rte_bbdev_queue_stop;
+ rte_bbdev_release;
+ rte_bbdev_setup_queues;
+ rte_bbdev_start;
+ rte_bbdev_stats_get;
+ rte_bbdev_stats_reset;
+ rte_bbdev_stop;
+
+ local: *;
+};
\ No newline at end of file
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 6a6a745..cefc6f1 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -96,6 +96,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER) += -lrte_ethdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += -lrte_cryptodev
_LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_BBDEV) += -lrte_bbdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
_LDLIBS-$(CONFIG_RTE_LIBRTE_RING) += -lrte_ring
--
2.7.4
* [dpdk-dev] [PATCH v5 2/5] bbdev: null device driver
2018-01-11 19:23 [dpdk-dev] [PATCH v5 1/5] bbdev: introducing wireless base band device abstraction lib Amr Mokhtar
@ 2018-01-11 19:23 ` Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 3/5] bbdev: software turbo driver Amr Mokhtar
` (3 subsequent siblings)
4 siblings, 0 replies; 11+ messages in thread
From: Amr Mokhtar @ 2018-01-11 19:23 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, anatoly.burakov, pablo.de.lara.guarch,
niall.power, chris.macnamara, Amr Mokhtar
- 'bbdev_null' is a basic pmd that performs a minimalistic
bbdev operation
- useful for bbdev smoke testing and in measuring the overhead
introduced by the bbdev library
- 'bbdev_null' pmd is enabled by default
Signed-off-by: Amr Mokhtar <amr.mokhtar@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
MAINTAINERS | 2 +
config/common_base | 5 +
doc/guides/bbdevs/index.rst | 11 +
doc/guides/bbdevs/null.rst | 49 +++
doc/guides/index.rst | 1 +
drivers/Makefile | 2 +
drivers/bbdev/Makefile | 12 +
drivers/bbdev/null/Makefile | 24 ++
drivers/bbdev/null/bbdev_null.c | 346 ++++++++++++++++++++++
drivers/bbdev/null/rte_pmd_bbdev_null_version.map | 3 +
mk/rte.app.mk | 4 +
11 files changed, 459 insertions(+)
create mode 100644 doc/guides/bbdevs/index.rst
create mode 100644 doc/guides/bbdevs/null.rst
create mode 100644 drivers/bbdev/Makefile
create mode 100644 drivers/bbdev/null/Makefile
create mode 100644 drivers/bbdev/null/bbdev_null.c
create mode 100644 drivers/bbdev/null/rte_pmd_bbdev_null_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index ad4166b..8cb296e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -274,6 +274,8 @@ F: examples/l2fwd-crypto/
BBDEV API - EXPERIMENTAL
M: Amr Mokhtar <amr.mokhtar@intel.com>
F: lib/librte_bbdev/
+F: drivers/bbdev/
+F: doc/guides/bbdevs/
F: doc/guides/prog_guide/bbdev.rst
Security API - EXPERIMENTAL
diff --git a/config/common_base b/config/common_base
index 2f5e4ab..62412ee 100644
--- a/config/common_base
+++ b/config/common_base
@@ -600,6 +600,11 @@ CONFIG_RTE_LIBRTE_BBDEV=y
CONFIG_RTE_BBDEV_MAX_DEVS=128
#
+# Compile PMD for NULL bbdev device
+#
+CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL=y
+
+#
# Compile librte_ring
#
CONFIG_RTE_LIBRTE_RING=y
diff --git a/doc/guides/bbdevs/index.rst b/doc/guides/bbdevs/index.rst
new file mode 100644
index 0000000..5cd411b
--- /dev/null
+++ b/doc/guides/bbdevs/index.rst
@@ -0,0 +1,11 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2017 Intel Corporation
+
+Baseband Device Drivers
+=======================
+
+.. toctree::
+ :maxdepth: 2
+ :numbered:
+
+ null
diff --git a/doc/guides/bbdevs/null.rst b/doc/guides/bbdevs/null.rst
new file mode 100644
index 0000000..9baf2a9
--- /dev/null
+++ b/doc/guides/bbdevs/null.rst
@@ -0,0 +1,49 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2017 Intel Corporation
+
+BBDEV null Poll Mode Driver
+============================
+
+The Null bbdev PMD (**bbdev_null**) is a bbdev poll mode driver which provides
+a minimal implementation of a software bbdev device. As a null device, it does
+not modify the data in the mbufs on which the bbdev operation is requested to
+operate, and it only supports the operation type ``RTE_BBDEV_OP_NONE``.
+
+When a burst of mbufs is submitted to a *bbdev null PMD* for processing, each
+mbuf in the burst is enqueued on an internal buffer ring, to be collected later
+by a dequeue call.
+
+
+Limitations
+-----------
+
+* In-place operations for Turbo encode and decode are not supported
+
+Installation
+------------
+
+The *bbdev null PMD* is enabled and built by default in both the Linux and
+FreeBSD builds.
+
+Initialization
+--------------
+
+To use the PMD in an application, the user must either:
+
+- Call ``rte_vdev_init("bbdev_null")`` within the application.
+
+- Use ``--vdev="bbdev_null"`` in the EAL options, which will call ``rte_vdev_init()`` internally.
+
+The following parameters (all optional) can be provided in the previous two calls:
+
+* ``socket_id``: Specify the socket where the memory for the device is going to be allocated
+ (by default, *socket_id* will be the socket where the core that is creating the PMD is running on).
+
+* ``max_nb_queues``: Specify the maximum number of queues in the device (default is ``RTE_MAX_LCORE``).
+
+Example:
+~~~~~~~~
+
+.. code-block:: console
+
+ ./test-bbdev.py -e="--vdev=bbdev_null,socket_id=0,max_nb_queues=8"
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index f924a7c..514e2fe 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -44,6 +44,7 @@ DPDK documentation
nics/index
cryptodevs/index
eventdevs/index
+ bbdevs/index
mempool/index
platform/index
contributing/index
diff --git a/drivers/Makefile b/drivers/Makefile
index 57e1a48..6a41c09 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -12,5 +12,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
DEPDIRS-crypto := bus mempool
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
DEPDIRS-event := bus mempool net
+DIRS-$(CONFIG_RTE_LIBRTE_BBDEV) += bbdev
+DEPDIRS-bbdev := bus mempool
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/bbdev/Makefile b/drivers/bbdev/Makefile
new file mode 100644
index 0000000..7611874
--- /dev/null
+++ b/drivers/bbdev/Makefile
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+core-libs := librte_eal librte_mbuf librte_mempool librte_ring
+core-libs += librte_bbdev librte_kvargs librte_cfgfile
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL) += null
+DEPDIRS-null = $(core-libs)
+
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/bbdev/null/Makefile b/drivers/bbdev/null/Makefile
new file mode 100644
index 0000000..6d8469d
--- /dev/null
+++ b/drivers/bbdev/null/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+# library name
+LIB = librte_pmd_bbdev_null.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring -lrte_kvargs
+LDLIBS += -lrte_bbdev
+LDLIBS += -lrte_bus_vdev
+
+# versioning export map
+EXPORT_MAP := rte_pmd_bbdev_null_version.map
+
+# library version
+LIBABIVER := 1
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL) += bbdev_null.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bbdev/null/bbdev_null.c b/drivers/bbdev/null/bbdev_null.c
new file mode 100644
index 0000000..b23d766
--- /dev/null
+++ b/drivers/bbdev/null/bbdev_null.c
@@ -0,0 +1,346 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_bus_vdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_kvargs.h>
+
+#include <rte_bbdev.h>
+#include <rte_bbdev_pmd.h>
+
+#define DRIVER_NAME bbdev_null
+
+/* Initialisation params structure that can be used by null BBDEV driver */
+struct bbdev_null_params {
+ int socket_id; /**< Null BBDEV socket */
+ uint16_t queues_num; /**< Null BBDEV queues number */
+};
+
+/* Acceptable params for null BBDEV devices */
+#define BBDEV_NULL_MAX_NB_QUEUES_ARG "max_nb_queues"
+#define BBDEV_NULL_SOCKET_ID_ARG "socket_id"
+
+static const char * const bbdev_null_valid_params[] = {
+ BBDEV_NULL_MAX_NB_QUEUES_ARG,
+ BBDEV_NULL_SOCKET_ID_ARG
+};
+
+/* private data structure */
+struct bbdev_private {
+ unsigned int max_nb_queues; /**< Max number of queues */
+};
+
+/* queue */
+struct bbdev_queue {
+ struct rte_ring *processed_pkts; /* Ring for processed packets */
+} __rte_cache_aligned;
+
+/* Get device info */
+static void
+info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
+{
+ struct bbdev_private *internals = dev->data->dev_private;
+
+ static const struct rte_bbdev_op_cap bbdev_capabilities[] = {
+ RTE_BBDEV_END_OF_CAPABILITIES_LIST(),
+ };
+
+ static struct rte_bbdev_queue_conf default_queue_conf = {
+ .queue_size = RTE_BBDEV_QUEUE_SIZE_LIMIT,
+ };
+
+ default_queue_conf.socket = dev->data->socket_id;
+
+ dev_info->driver_name = RTE_STR(DRIVER_NAME);
+ dev_info->max_num_queues = internals->max_nb_queues;
+ dev_info->queue_size_lim = RTE_BBDEV_QUEUE_SIZE_LIMIT;
+ dev_info->hardware_accelerated = false;
+ dev_info->max_queue_priority = 0;
+ dev_info->default_queue_conf = default_queue_conf;
+ dev_info->capabilities = bbdev_capabilities;
+ dev_info->cpu_flag_reqs = NULL;
+ dev_info->min_alignment = 0;
+
+ rte_bbdev_log_debug("got device info from %u", dev->data->dev_id);
+}
+
+/* Release queue */
+static int
+q_release(struct rte_bbdev *dev, uint16_t q_id)
+{
+ struct bbdev_queue *q = dev->data->queues[q_id].queue_private;
+
+ if (q != NULL) {
+ rte_ring_free(q->processed_pkts);
+ rte_free(q);
+ dev->data->queues[q_id].queue_private = NULL;
+ }
+
+ rte_bbdev_log_debug("released device queue %u:%u",
+ dev->data->dev_id, q_id);
+ return 0;
+}
+
+/* Setup a queue */
+static int
+q_setup(struct rte_bbdev *dev, uint16_t q_id,
+ const struct rte_bbdev_queue_conf *queue_conf)
+{
+ struct bbdev_queue *q;
+ char ring_name[RTE_RING_NAMESIZE];
+ snprintf(ring_name, RTE_RING_NAMESIZE, RTE_STR(DRIVER_NAME) "%u:%u",
+ dev->data->dev_id, q_id);
+
+ /* Allocate the queue data structure. */
+ q = rte_zmalloc_socket(RTE_STR(DRIVER_NAME), sizeof(*q),
+ RTE_CACHE_LINE_SIZE, queue_conf->socket);
+ if (q == NULL) {
+ rte_bbdev_log(ERR, "Failed to allocate queue memory");
+ return -ENOMEM;
+ }
+
+ q->processed_pkts = rte_ring_create(ring_name, queue_conf->queue_size,
+ queue_conf->socket, RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (q->processed_pkts == NULL) {
+ rte_bbdev_log(ERR, "Failed to create ring");
+ goto free_q;
+ }
+
+ dev->data->queues[q_id].queue_private = q;
+ rte_bbdev_log_debug("setup device queue %s", ring_name);
+ return 0;
+
+free_q:
+ rte_free(q);
+ return -EFAULT;
+}
+
+static const struct rte_bbdev_ops pmd_ops = {
+ .info_get = info_get,
+ .queue_setup = q_setup,
+ .queue_release = q_release
+};
+
+/* Enqueue decode burst */
+static uint16_t
+enqueue_dec_ops(struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
+{
+ struct bbdev_queue *q = q_data->queue_private;
+ uint16_t nb_enqueued = rte_ring_enqueue_burst(q->processed_pkts,
+ (void **)ops, nb_ops, NULL);
+
+ q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
+ q_data->queue_stats.enqueued_count += nb_enqueued;
+
+ return nb_enqueued;
+}
+
+/* Enqueue encode burst */
+static uint16_t
+enqueue_enc_ops(struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
+{
+ struct bbdev_queue *q = q_data->queue_private;
+ uint16_t nb_enqueued = rte_ring_enqueue_burst(q->processed_pkts,
+ (void **)ops, nb_ops, NULL);
+
+ q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
+ q_data->queue_stats.enqueued_count += nb_enqueued;
+
+ return nb_enqueued;
+}
+
+/* Dequeue decode burst */
+static uint16_t
+dequeue_dec_ops(struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
+{
+ struct bbdev_queue *q = q_data->queue_private;
+ uint16_t nb_dequeued = rte_ring_dequeue_burst(q->processed_pkts,
+ (void **)ops, nb_ops, NULL);
+ q_data->queue_stats.dequeued_count += nb_dequeued;
+
+ return nb_dequeued;
+}
+
+/* Dequeue encode burst */
+static uint16_t
+dequeue_enc_ops(struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
+{
+ struct bbdev_queue *q = q_data->queue_private;
+ uint16_t nb_dequeued = rte_ring_dequeue_burst(q->processed_pkts,
+ (void **)ops, nb_ops, NULL);
+ q_data->queue_stats.dequeued_count += nb_dequeued;
+
+ return nb_dequeued;
+}
+
+/* Parse 16bit integer from string argument */
+static inline int
+parse_u16_arg(const char *key, const char *value, void *extra_args)
+{
+ uint16_t *u16 = extra_args;
+ unsigned long result;
+
+ if ((value == NULL) || (extra_args == NULL))
+ return -EINVAL;
+ errno = 0;
+ result = strtoul(value, NULL, 0);
+ if ((result >= (1 << 16)) || (errno != 0)) {
+ rte_bbdev_log(ERR, "Invalid value %lu for %s", result, key);
+ return -ERANGE;
+ }
+ *u16 = (uint16_t)result;
+ return 0;
+}
+
+/* Parse parameters used to create device */
+static int
+parse_bbdev_null_params(struct bbdev_null_params *params,
+ const char *input_args)
+{
+ struct rte_kvargs *kvlist = NULL;
+ int ret = 0;
+
+ if (params == NULL)
+ return -EINVAL;
+ if (input_args) {
+ kvlist = rte_kvargs_parse(input_args, bbdev_null_valid_params);
+ if (kvlist == NULL)
+ return -EFAULT;
+
+ ret = rte_kvargs_process(kvlist, bbdev_null_valid_params[0],
+ &parse_u16_arg, &params->queues_num);
+ if (ret < 0)
+ goto exit;
+
+ ret = rte_kvargs_process(kvlist, bbdev_null_valid_params[1],
+ &parse_u16_arg, &params->socket_id);
+ if (ret < 0)
+ goto exit;
+
+ if (params->socket_id >= RTE_MAX_NUMA_NODES) {
+ rte_bbdev_log(ERR, "Invalid socket, must be < %u",
+ RTE_MAX_NUMA_NODES);
+ ret = -EINVAL;
+ goto exit;
+ }
+ }
+
+exit:
+ if (kvlist)
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+/* Create device */
+static int
+null_bbdev_create(struct rte_vdev_device *vdev,
+ struct bbdev_null_params *init_params)
+{
+ struct rte_bbdev *bbdev;
+ const char *name = rte_vdev_device_name(vdev);
+
+ bbdev = rte_bbdev_allocate(name);
+ if (bbdev == NULL)
+ return -ENODEV;
+
+ bbdev->data->dev_private = rte_zmalloc_socket(name,
+ sizeof(struct bbdev_private), RTE_CACHE_LINE_SIZE,
+ init_params->socket_id);
+ if (bbdev->data->dev_private == NULL) {
+ rte_bbdev_release(bbdev);
+ return -ENOMEM;
+ }
+
+ bbdev->dev_ops = &pmd_ops;
+ bbdev->device = &vdev->device;
+ bbdev->data->socket_id = init_params->socket_id;
+ bbdev->intr_handle = NULL;
+
+ /* register rx/tx burst functions for data path */
+ bbdev->dequeue_enc_ops = dequeue_enc_ops;
+ bbdev->dequeue_dec_ops = dequeue_dec_ops;
+ bbdev->enqueue_enc_ops = enqueue_enc_ops;
+ bbdev->enqueue_dec_ops = enqueue_dec_ops;
+ ((struct bbdev_private *) bbdev->data->dev_private)->max_nb_queues =
+ init_params->queues_num;
+
+ return 0;
+}
+
+/* Initialise device */
+static int
+null_bbdev_probe(struct rte_vdev_device *vdev)
+{
+ struct bbdev_null_params init_params = {
+ rte_socket_id(),
+ RTE_BBDEV_DEFAULT_MAX_NB_QUEUES
+ };
+ const char *name;
+ const char *input_args;
+
+ if (vdev == NULL)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+
+ input_args = rte_vdev_device_args(vdev);
+ parse_bbdev_null_params(&init_params, input_args);
+
+ rte_bbdev_log_debug("Init %s on NUMA node %d with max queues: %d",
+ name, init_params.socket_id, init_params.queues_num);
+
+ return null_bbdev_create(vdev, &init_params);
+}
+
+/* Uninitialise device */
+static int
+null_bbdev_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_bbdev *bbdev;
+ const char *name;
+
+ if (vdev == NULL)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+
+ bbdev = rte_bbdev_get_named_dev(name);
+ if (bbdev == NULL)
+ return -EINVAL;
+
+ rte_free(bbdev->data->dev_private);
+
+ return rte_bbdev_release(bbdev);
+}
+
+static struct rte_vdev_driver bbdev_null_pmd_drv = {
+ .probe = null_bbdev_probe,
+ .remove = null_bbdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(DRIVER_NAME, bbdev_null_pmd_drv);
+RTE_PMD_REGISTER_PARAM_STRING(DRIVER_NAME,
+ BBDEV_NULL_MAX_NB_QUEUES_ARG"=<int> "
+ BBDEV_NULL_SOCKET_ID_ARG"=<int>");
+
+int bbdev_logtype;
+RTE_INIT(null_bbdev_init_log);
+static void
+null_bbdev_init_log(void)
+{
+ bbdev_logtype = rte_log_register("pmd.bbdev.null");
+ if (bbdev_logtype >= 0)
+ rte_log_set_level(bbdev_logtype, RTE_LOG_NOTICE);
+}
diff --git a/drivers/bbdev/null/rte_pmd_bbdev_null_version.map b/drivers/bbdev/null/rte_pmd_bbdev_null_version.map
new file mode 100644
index 0000000..58b9427
--- /dev/null
+++ b/drivers/bbdev/null/rte_pmd_bbdev_null_version.map
@@ -0,0 +1,3 @@
+DPDK_18.02 {
+ local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index cefc6f1..5c3444f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -163,6 +163,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += -lrte_pmd_vhost
endif # $(CONFIG_RTE_LIBRTE_VHOST)
_LDLIBS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += -lrte_pmd_vmxnet3_uio
+ifeq ($(CONFIG_RTE_LIBRTE_BBDEV),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL) += -lrte_pmd_bbdev_null
+endif # CONFIG_RTE_LIBRTE_BBDEV
+
ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += -lrte_pmd_aesni_mb
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
--
2.7.4
* [dpdk-dev] [PATCH v5 3/5] bbdev: software turbo driver
2018-01-11 19:23 [dpdk-dev] [PATCH v5 1/5] bbdev: introducing wireless base band device abstraction lib Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 2/5] bbdev: null device driver Amr Mokhtar
@ 2018-01-11 19:23 ` Amr Mokhtar
2018-01-19 0:23 ` Thomas Monjalon
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 4/5] bbdev: test applications Amr Mokhtar
` (2 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: Amr Mokhtar @ 2018-01-11 19:23 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, anatoly.burakov, pablo.de.lara.guarch,
niall.power, chris.macnamara, Amr Mokhtar
- bbdev 'turbo_sw' is the software accelerated version of 3GPP L1
Turbo coding operation using the optimized Intel FlexRAN SDK libraries.
- 'turbo_sw' pmd is disabled by default
Signed-off-by: Amr Mokhtar <amr.mokhtar@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
config/common_base | 5 +
doc/guides/bbdevs/index.rst | 1 +
doc/guides/bbdevs/turbo_sw.rst | 147 +++
drivers/bbdev/Makefile | 2 +
drivers/bbdev/turbo_sw/Makefile | 41 +
drivers/bbdev/turbo_sw/bbdev_turbo_software.c | 1206 ++++++++++++++++++++
.../turbo_sw/rte_pmd_bbdev_turbo_sw_version.map | 3 +
mk/rte.app.mk | 8 +
8 files changed, 1413 insertions(+)
create mode 100644 doc/guides/bbdevs/turbo_sw.rst
create mode 100644 drivers/bbdev/turbo_sw/Makefile
create mode 100644 drivers/bbdev/turbo_sw/bbdev_turbo_software.c
create mode 100644 drivers/bbdev/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
diff --git a/config/common_base b/config/common_base
index 62412ee..17d96a4 100644
--- a/config/common_base
+++ b/config/common_base
@@ -605,6 +605,11 @@ CONFIG_RTE_BBDEV_MAX_DEVS=128
CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL=y
#
+# Compile PMD for turbo software bbdev device
+#
+CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=n
+
+#
# Compile librte_ring
#
CONFIG_RTE_LIBRTE_RING=y
diff --git a/doc/guides/bbdevs/index.rst b/doc/guides/bbdevs/index.rst
index 5cd411b..93276ed 100644
--- a/doc/guides/bbdevs/index.rst
+++ b/doc/guides/bbdevs/index.rst
@@ -9,3 +9,4 @@ Baseband Device Drivers
:numbered:
null
+ turbo_sw
diff --git a/doc/guides/bbdevs/turbo_sw.rst b/doc/guides/bbdevs/turbo_sw.rst
new file mode 100644
index 0000000..b3fed16
--- /dev/null
+++ b/doc/guides/bbdevs/turbo_sw.rst
@@ -0,0 +1,147 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2017 Intel Corporation
+
+SW Turbo Poll Mode Driver
+=========================
+
+The SW Turbo PMD (**turbo_sw**) provides a poll mode bbdev driver that utilizes
+Intel optimized libraries for LTE Layer 1 workloads acceleration. This PMD
+supports the functions: Turbo FEC, Rate Matching and CRC functions.
+
+Features
+--------
+
+SW Turbo PMD has support for the following capabilities:
+
+For the encode operation:
+
+* ``RTE_BBDEV_TURBO_CRC_24A_ATTACH``
+* ``RTE_BBDEV_TURBO_CRC_24B_ATTACH``
+* ``RTE_BBDEV_TURBO_RATE_MATCH``
+* ``RTE_BBDEV_TURBO_RV_INDEX_BYPASS``
+
+For the decode operation:
+
+* ``RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE``
+* ``RTE_BBDEV_TURBO_CRC_TYPE_24B``
+* ``RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN``
+* ``RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN``
+
+
+Limitations
+-----------
+
+* In-place operations for Turbo encode and decode are not supported
+
+Installation
+------------
+
+FlexRAN SDK Download
+~~~~~~~~~~~~~~~~~~~~
+
+To build DPDK with the *turbo_sw* PMD, the user is required to download
+the export-controlled ``FlexRAN SDK`` libraries. An account must first be
+registered at the Intel Resource Design Center:
+`<https://www.intel.com/content/www/us/en/design/resource-design-center.html>`_.
+
+Once registered, the user needs to log in, search for
+*Intel SWA_SW_FlexRAN_Release_Package R1_3_0* and download it, or use
+this direct download link: `<https://cdrd.intel.com/v1/dl/getContent/575367>`_.
+
+After the download is complete, the user needs to unpack and compile the SDK
+on their system before building DPDK.
+
+FlexRAN SDK Installation
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following are pre-requisites for building the FlexRAN SDK libraries:
+
+* An AVX2 supporting machine
+* Windriver TS 2 or CentOS 7 operating system
+* Intel ICC compiler installed
+
+The following instructions should be followed in this exact order:
+
+#. Set the environment variables:
+
+ .. code-block:: console
+
+ source <path-to-icc-compiler-install-folder>/linux/bin/compilervars.sh intel64 -platform linux
+
+
+#. Extract the ``FlexRAN-1.3.0.tar.gz.zip`` package, then run the SDK extractor
+ script and accept the license:
+
+ .. code-block:: console
+
+ cd <path-to-workspace>/FlexRAN-1.3.0/
+ ./SDK-R1.3.0.sh
+
+#. To allow ``FlexRAN SDK R1.3.0`` to work with bbdev properly, the following
+ hotfix is required. Change the return of function ``rate_matching_turbo_lte_avx2()``
+ located in file
+ ``<path-to-workspace>/FlexRAN-1.3.0/SDK-R1.3.0/sdk/source/phy/lib_rate_matching/phy_rate_match_avx2.cpp``
+ to return 0 instead of 1.
+
+ .. code-block:: c
+
+ - return 1;
+ + return 0;
+
+#. Generate makefiles based on system configuration:
+
+ .. code-block:: console
+
+ cd <path-to-workspace>/FlexRAN-1.3.0/SDK-R1.3.0/sdk/
+ ./create-makefiles-linux.sh
+
+#. A build folder is generated in this form ``build-<ISA>-<CC>``, enter that
+ folder and install:
+
+ .. code-block:: console
+
+ cd build-avx2-icc/
+ make install
+
+
+Initialization
+--------------
+
+In order to enable this virtual bbdev PMD, the user must:
+
+* Build the ``FLEXRAN SDK`` libraries (explained in Installation section).
+
+* Export the environment variable ``FLEXRAN_SDK`` to point to the directory
+ where the FlexRAN SDK libraries were installed, and ``DIR_WIRELESS_SDK`` to
+ the directory where the libraries were extracted.
+
+Example:
+
+.. code-block:: console
+
+ export FLEXRAN_SDK=<path-to-workspace>/FlexRAN-1.3.0/SDK-R1.3.0/sdk/build-avx2-icc/install
+ export DIR_WIRELESS_SDK=<path-to-workspace>/FlexRAN-1.3.0/SDK-R1.3.0/sdk/
+
+
+* Set ``CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=y`` in DPDK common configuration
+ file ``config/common_base``.
+
+To use the PMD in an application, the user must do one of the following:
+
+- Call ``rte_vdev_init("turbo_sw")`` within the application.
+
+- Use ``--vdev="turbo_sw"`` in the EAL options, which calls
+ ``rte_vdev_init()`` internally.
+
+The following parameters (all optional) can be provided in the previous two calls:
+
+* ``socket_id``: Specify the socket where the memory for the device is going
+ to be allocated (by default, *socket_id* is the socket on which the core
+ creating the PMD is running).
+
+* ``max_nb_queues``: Specify the maximum number of queues in the device (default is ``RTE_MAX_LCORE``).
+
+Example:
+~~~~~~~~
+
+.. code-block:: console
+
+ ./test-bbdev.py -e="--vdev=turbo_sw,socket_id=0,max_nb_queues=8" \
+ -c validation -v ./test_vectors/bbdev_vector_t?_default.data
diff --git a/drivers/bbdev/Makefile b/drivers/bbdev/Makefile
index 7611874..4ec83b0 100644
--- a/drivers/bbdev/Makefile
+++ b/drivers/bbdev/Makefile
@@ -8,5 +8,7 @@ core-libs += librte_bbdev librte_kvargs librte_cfgfile
DIRS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL) += null
DEPDIRS-null = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += turbo_sw
+DEPDIRS-turbo_sw = $(core-libs)
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/bbdev/turbo_sw/Makefile b/drivers/bbdev/turbo_sw/Makefile
new file mode 100644
index 0000000..08f24b0
--- /dev/null
+++ b/drivers/bbdev/turbo_sw/Makefile
@@ -0,0 +1,41 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(FLEXRAN_SDK),)
+$(error "Please define FLEXRAN_SDK environment variable")
+endif
+
+# library name
+LIB = librte_pmd_bbdev_turbo_sw.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring -lrte_kvargs
+LDLIBS += -lrte_bbdev
+LDLIBS += -lrte_bus_vdev
+
+# versioning export map
+EXPORT_MAP := rte_pmd_bbdev_turbo_sw_version.map
+
+# external library dependencies
+CFLAGS += -I$(FLEXRAN_SDK)/lib_common
+CFLAGS += -I$(FLEXRAN_SDK)/lib_turbo
+CFLAGS += -I$(FLEXRAN_SDK)/lib_crc
+CFLAGS += -I$(FLEXRAN_SDK)/lib_rate_matching
+
+LDLIBS += -L$(FLEXRAN_SDK)/lib_crc -lcrc
+LDLIBS += -L$(FLEXRAN_SDK)/lib_turbo -lturbo
+LDLIBS += -L$(FLEXRAN_SDK)/lib_rate_matching -lrate_matching
+LDLIBS += -L$(FLEXRAN_SDK)/lib_common -lcommon
+LDLIBS += -lstdc++ -lirc -limf -lipps
+
+# library version
+LIBABIVER := 1
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += bbdev_turbo_software.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/bbdev/turbo_sw/bbdev_turbo_software.c b/drivers/bbdev/turbo_sw/bbdev_turbo_software.c
new file mode 100644
index 0000000..981da6e
--- /dev/null
+++ b/drivers/bbdev/turbo_sw/bbdev_turbo_software.c
@@ -0,0 +1,1206 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_bus_vdev.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_kvargs.h>
+
+#include <rte_bbdev.h>
+#include <rte_bbdev_pmd.h>
+
+#include <phy_turbo.h>
+#include <phy_crc.h>
+#include <phy_rate_match.h>
+#include <divide.h>
+
+#define DRIVER_NAME turbo_sw
+
+/* Number of columns in sub-block interleaver (36.212, section 5.1.4.1.1) */
+#define C_SUBBLOCK (32)
+#define MAX_TB_SIZE (391656)
+#define MAX_CB_SIZE (6144)
+#define MAX_KW (18528)
+
+/* private data structure */
+struct bbdev_private {
+ unsigned int max_nb_queues; /**< Max number of queues */
+};
+
+/* Initialisation params structure that can be used by Turbo SW driver */
+struct turbo_sw_params {
+ int socket_id; /**< Turbo SW device socket */
+ uint16_t queues_num; /**< Number of queues for the Turbo SW device */
+};
+
+/* Acceptable params for Turbo SW devices */
+#define TURBO_SW_MAX_NB_QUEUES_ARG "max_nb_queues"
+#define TURBO_SW_SOCKET_ID_ARG "socket_id"
+
+static const char * const turbo_sw_valid_params[] = {
+ TURBO_SW_MAX_NB_QUEUES_ARG,
+ TURBO_SW_SOCKET_ID_ARG
+};
+
+/* queue */
+struct turbo_sw_queue {
+ /* Ring for processed (encoded/decoded) operations which are ready to
+ * be dequeued.
+ */
+ struct rte_ring *processed_pkts;
+ /* Stores input for turbo encoder (used when CRC attachment is
+ * performed)
+ */
+ uint8_t *enc_in;
+ /* Stores output from turbo encoder */
+ uint8_t *enc_out;
+ /* Alpha gamma buf for bblib_turbo_decoder() function */
+ int8_t *ag;
+ /* Temp buf for bblib_turbo_decoder() function */
+ uint16_t *code_block;
+ /* Input buf for bblib_rate_dematching_lte() function */
+ uint8_t *deint_input;
+ /* Output buf for bblib_rate_dematching_lte() function */
+ uint8_t *deint_output;
+ /* Output buf for bblib_turbodec_adapter_lte() function */
+ uint8_t *adapter_output;
+ /* Operation type of this queue */
+ enum rte_bbdev_op_type type;
+} __rte_cache_aligned;
+
+/* Calculate index based on Table 5.1.3-3 from TS 36.212 */
+static inline int32_t
+compute_idx(uint16_t k)
+{
+ int32_t result = 0;
+
+ if (k < 40 || k > MAX_CB_SIZE)
+ return -1;
+
+ if (k > 2048) {
+ if ((k - 2048) % 64 != 0)
+ return -1;
+
+ result = 124 + (k - 2048) / 64;
+ } else if (k <= 512) {
+ if ((k - 40) % 8 != 0)
+ return -1;
+
+ result = (k - 40) / 8 + 1;
+ } else if (k <= 1024) {
+ if ((k - 512) % 16 != 0)
+ return -1;
+
+ result = 60 + (k - 512) / 16;
+ } else { /* 1024 < k <= 2048 */
+ if ((k - 1024) % 32 != 0)
+ return -1;
+
+ result = 92 + (k - 1024) / 32;
+ }
+
+ return result;
+}
+
+/* Read flag value 0/1 from bitmap */
+static inline bool
+check_bit(uint32_t bitmap, uint32_t bitmask)
+{
+ return bitmap & bitmask;
+}
+
+/* Get device info */
+static void
+info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
+{
+ struct bbdev_private *internals = dev->data->dev_private;
+
+ static const struct rte_bbdev_op_cap bbdev_capabilities[] = {
+ {
+ .type = RTE_BBDEV_OP_TURBO_DEC,
+ .cap.turbo_dec = {
+ .capability_flags =
+ RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE |
+ RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN |
+ RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN |
+ RTE_BBDEV_TURBO_CRC_TYPE_24B |
+ RTE_BBDEV_TURBO_EARLY_TERMINATION,
+ .num_buffers_src = RTE_BBDEV_MAX_CODE_BLOCKS,
+ .num_buffers_hard_out =
+ RTE_BBDEV_MAX_CODE_BLOCKS,
+ .num_buffers_soft_out = 0,
+ }
+ },
+ {
+ .type = RTE_BBDEV_OP_TURBO_ENC,
+ .cap.turbo_enc = {
+ .capability_flags =
+ RTE_BBDEV_TURBO_CRC_24B_ATTACH |
+ RTE_BBDEV_TURBO_CRC_24A_ATTACH |
+ RTE_BBDEV_TURBO_RATE_MATCH |
+ RTE_BBDEV_TURBO_RV_INDEX_BYPASS,
+ .num_buffers_src = RTE_BBDEV_MAX_CODE_BLOCKS,
+ .num_buffers_dst = RTE_BBDEV_MAX_CODE_BLOCKS,
+ }
+ },
+ RTE_BBDEV_END_OF_CAPABILITIES_LIST()
+ };
+
+ static struct rte_bbdev_queue_conf default_queue_conf = {
+ .queue_size = RTE_BBDEV_QUEUE_SIZE_LIMIT,
+ };
+
+ static const enum rte_cpu_flag_t cpu_flag = RTE_CPUFLAG_SSE4_2;
+
+ default_queue_conf.socket = dev->data->socket_id;
+
+ dev_info->driver_name = RTE_STR(DRIVER_NAME);
+ dev_info->max_num_queues = internals->max_nb_queues;
+ dev_info->queue_size_lim = RTE_BBDEV_QUEUE_SIZE_LIMIT;
+ dev_info->hardware_accelerated = false;
+ dev_info->max_queue_priority = 0;
+ dev_info->default_queue_conf = default_queue_conf;
+ dev_info->capabilities = bbdev_capabilities;
+ dev_info->cpu_flag_reqs = &cpu_flag;
+ dev_info->min_alignment = 64;
+
+ rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
+}
+
+/* Release queue */
+static int
+q_release(struct rte_bbdev *dev, uint16_t q_id)
+{
+ struct turbo_sw_queue *q = dev->data->queues[q_id].queue_private;
+
+ if (q != NULL) {
+ rte_ring_free(q->processed_pkts);
+ rte_free(q->enc_out);
+ rte_free(q->enc_in);
+ rte_free(q->ag);
+ rte_free(q->code_block);
+ rte_free(q->deint_input);
+ rte_free(q->deint_output);
+ rte_free(q->adapter_output);
+ rte_free(q);
+ dev->data->queues[q_id].queue_private = NULL;
+ }
+
+ rte_bbdev_log_debug("released device queue %u:%u",
+ dev->data->dev_id, q_id);
+ return 0;
+}
+
+/* Setup a queue */
+static int
+q_setup(struct rte_bbdev *dev, uint16_t q_id,
+ const struct rte_bbdev_queue_conf *queue_conf)
+{
+ int ret;
+ struct turbo_sw_queue *q;
+ char name[RTE_RING_NAMESIZE];
+
+ /* Allocate the queue data structure. */
+ q = rte_zmalloc_socket(RTE_STR(DRIVER_NAME), sizeof(*q),
+ RTE_CACHE_LINE_SIZE, queue_conf->socket);
+ if (q == NULL) {
+ rte_bbdev_log(ERR, "Failed to allocate queue memory");
+ return -ENOMEM;
+ }
+
+ /* Allocate memory for encoder output. */
+ ret = snprintf(name, RTE_RING_NAMESIZE, RTE_STR(DRIVER_NAME)"_enc_out%u:%u",
+ dev->data->dev_id, q_id);
+ if ((ret < 0) || (ret >= (int)RTE_RING_NAMESIZE)) {
+ rte_bbdev_log(ERR,
+ "Creating queue name for device %u queue %u failed",
+ dev->data->dev_id, q_id);
+ return -ENAMETOOLONG;
+ }
+ q->enc_out = rte_zmalloc_socket(name,
+ ((MAX_TB_SIZE >> 3) + 3) * sizeof(*q->enc_out) * 3,
+ RTE_CACHE_LINE_SIZE, queue_conf->socket);
+ if (q->enc_out == NULL) {
+ rte_bbdev_log(ERR,
+ "Failed to allocate queue memory for %s", name);
+ goto free_q;
+ }
+
+ /* Allocate memory for rate matching output. */
+ ret = snprintf(name, RTE_RING_NAMESIZE,
+ RTE_STR(DRIVER_NAME)"_enc_in%u:%u", dev->data->dev_id,
+ q_id);
+ if ((ret < 0) || (ret >= (int)RTE_RING_NAMESIZE)) {
+ rte_bbdev_log(ERR,
+ "Creating queue name for device %u queue %u failed",
+ dev->data->dev_id, q_id);
+ return -ENAMETOOLONG;
+ }
+ q->enc_in = rte_zmalloc_socket(name,
+ (MAX_CB_SIZE >> 3) * sizeof(*q->enc_in),
+ RTE_CACHE_LINE_SIZE, queue_conf->socket);
+ if (q->enc_in == NULL) {
+ rte_bbdev_log(ERR,
+ "Failed to allocate queue memory for %s", name);
+ goto free_q;
+ }
+
+ /* Allocate memory for Alpha Gamma temp buffer. */
+ ret = snprintf(name, RTE_RING_NAMESIZE, RTE_STR(DRIVER_NAME)"_ag%u:%u",
+ dev->data->dev_id, q_id);
+ if ((ret < 0) || (ret >= (int)RTE_RING_NAMESIZE)) {
+ rte_bbdev_log(ERR,
+ "Creating queue name for device %u queue %u failed",
+ dev->data->dev_id, q_id);
+ return -ENAMETOOLONG;
+ }
+ q->ag = rte_zmalloc_socket(name,
+ MAX_CB_SIZE * 10 * sizeof(*q->ag),
+ RTE_CACHE_LINE_SIZE, queue_conf->socket);
+ if (q->ag == NULL) {
+ rte_bbdev_log(ERR,
+ "Failed to allocate queue memory for %s", name);
+ goto free_q;
+ }
+
+ /* Allocate memory for code block temp buffer. */
+ ret = snprintf(name, RTE_RING_NAMESIZE, RTE_STR(DRIVER_NAME)"_cb%u:%u",
+ dev->data->dev_id, q_id);
+ if ((ret < 0) || (ret >= (int)RTE_RING_NAMESIZE)) {
+ rte_bbdev_log(ERR,
+ "Creating queue name for device %u queue %u failed",
+ dev->data->dev_id, q_id);
+ return -ENAMETOOLONG;
+ }
+ q->code_block = rte_zmalloc_socket(name,
+ (6144 >> 3) * sizeof(*q->code_block),
+ RTE_CACHE_LINE_SIZE, queue_conf->socket);
+ if (q->code_block == NULL) {
+ rte_bbdev_log(ERR,
+ "Failed to allocate queue memory for %s", name);
+ goto free_q;
+ }
+
+ /* Allocate memory for Deinterleaver input. */
+ ret = snprintf(name, RTE_RING_NAMESIZE,
+ RTE_STR(DRIVER_NAME)"_deint_input%u:%u",
+ dev->data->dev_id, q_id);
+ if ((ret < 0) || (ret >= (int)RTE_RING_NAMESIZE)) {
+ rte_bbdev_log(ERR,
+ "Creating queue name for device %u queue %u failed",
+ dev->data->dev_id, q_id);
+ return -ENAMETOOLONG;
+ }
+ q->deint_input = rte_zmalloc_socket(name,
+ MAX_KW * sizeof(*q->deint_input),
+ RTE_CACHE_LINE_SIZE, queue_conf->socket);
+ if (q->deint_input == NULL) {
+ rte_bbdev_log(ERR,
+ "Failed to allocate queue memory for %s", name);
+ goto free_q;
+ }
+
+ /* Allocate memory for Deinterleaver output. */
+ ret = snprintf(name, RTE_RING_NAMESIZE,
+ RTE_STR(DRIVER_NAME)"_deint_output%u:%u",
+ dev->data->dev_id, q_id);
+ if ((ret < 0) || (ret >= (int)RTE_RING_NAMESIZE)) {
+ rte_bbdev_log(ERR,
+ "Creating queue name for device %u queue %u failed",
+ dev->data->dev_id, q_id);
+ return -ENAMETOOLONG;
+ }
+ q->deint_output = rte_zmalloc_socket(name,
+ MAX_KW * sizeof(*q->deint_output),
+ RTE_CACHE_LINE_SIZE, queue_conf->socket);
+ if (q->deint_output == NULL) {
+ rte_bbdev_log(ERR,
+ "Failed to allocate queue memory for %s", name);
+ goto free_q;
+ }
+
+ /* Allocate memory for Adapter output. */
+ ret = snprintf(name, RTE_RING_NAMESIZE,
+ RTE_STR(DRIVER_NAME)"_adapter_output%u:%u",
+ dev->data->dev_id, q_id);
+ if ((ret < 0) || (ret >= (int)RTE_RING_NAMESIZE)) {
+ rte_bbdev_log(ERR,
+ "Creating queue name for device %u queue %u failed",
+ dev->data->dev_id, q_id);
+ return -ENAMETOOLONG;
+ }
+ q->adapter_output = rte_zmalloc_socket(name,
+ MAX_CB_SIZE * 6 * sizeof(*q->adapter_output),
+ RTE_CACHE_LINE_SIZE, queue_conf->socket);
+ if (q->adapter_output == NULL) {
+ rte_bbdev_log(ERR,
+ "Failed to allocate queue memory for %s", name);
+ goto free_q;
+ }
+
+ /* Create ring for packets awaiting to be dequeued. */
+ ret = snprintf(name, RTE_RING_NAMESIZE, RTE_STR(DRIVER_NAME)"%u:%u",
+ dev->data->dev_id, q_id);
+ if ((ret < 0) || (ret >= (int)RTE_RING_NAMESIZE)) {
+ rte_bbdev_log(ERR,
+ "Creating queue name for device %u queue %u failed",
+ dev->data->dev_id, q_id);
+ return -ENAMETOOLONG;
+ }
+ q->processed_pkts = rte_ring_create(name, queue_conf->queue_size,
+ queue_conf->socket, RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (q->processed_pkts == NULL) {
+ rte_bbdev_log(ERR, "Failed to create ring for %s", name);
+ goto free_q;
+ }
+
+ q->type = queue_conf->op_type;
+
+ dev->data->queues[q_id].queue_private = q;
+ rte_bbdev_log_debug("setup device queue %s", name);
+ return 0;
+
+free_q:
+ rte_ring_free(q->processed_pkts);
+ rte_free(q->enc_out);
+ rte_free(q->enc_in);
+ rte_free(q->ag);
+ rte_free(q->code_block);
+ rte_free(q->deint_input);
+ rte_free(q->deint_output);
+ rte_free(q->adapter_output);
+ rte_free(q);
+ return -EFAULT;
+}
+
+static const struct rte_bbdev_ops pmd_ops = {
+ .info_get = info_get,
+ .queue_setup = q_setup,
+ .queue_release = q_release
+};
+
+/* Checks if the encoder input buffer is correct.
+ * Returns 0 if it's valid, -1 otherwise.
+ */
+static inline int
+is_enc_input_valid(const uint16_t k, const int32_t k_idx,
+ const uint16_t in_length)
+{
+ if (k_idx < 0) {
+ rte_bbdev_log(ERR, "K Index is invalid");
+ return -1;
+ }
+
+ if (in_length - (k >> 3) < 0) {
+ rte_bbdev_log(ERR,
+ "Mismatch between input length (%u bytes) and K (%u bits)",
+ in_length, k);
+ return -1;
+ }
+
+ if (k > MAX_CB_SIZE) {
+ rte_bbdev_log(ERR, "CB size (%u) is too big, max: %d",
+ k, MAX_CB_SIZE);
+ return -1;
+ }
+
+ return 0;
+}
+
+/* Checks if the decoder input buffer is correct.
+ * Returns 0 if it's valid, -1 otherwise.
+ */
+static inline int
+is_dec_input_valid(int32_t k_idx, int16_t kw, int16_t in_length)
+{
+ if (k_idx < 0) {
+ rte_bbdev_log(ERR, "K index is invalid");
+ return -1;
+ }
+
+ if (in_length - kw < 0) {
+ rte_bbdev_log(ERR,
+ "Mismatch between input length (%u) and kw (%u)",
+ in_length, kw);
+ return -1;
+ }
+
+ if (kw > MAX_KW) {
+ rte_bbdev_log(ERR, "Input length (%u) is too big, max: %d",
+ kw, MAX_KW);
+ return -1;
+ }
+
+ return 0;
+}
+
+static inline void
+process_enc_cb(struct turbo_sw_queue *q, struct rte_bbdev_enc_op *op,
+ uint8_t cb_idx, uint8_t c, uint16_t k, uint16_t ncb,
+ uint32_t e, struct rte_mbuf *m_in, struct rte_mbuf *m_out,
+ uint16_t in_offset, uint16_t out_offset, uint16_t total_left)
+{
+ int ret;
+ int16_t k_idx;
+ uint16_t m;
+ uint8_t *in, *out0, *out1, *out2, *tmp_out, *rm_out;
+ struct rte_bbdev_op_turbo_enc *enc = &op->turbo_enc;
+ struct bblib_crc_request crc_req;
+ struct bblib_turbo_encoder_request turbo_req;
+ struct bblib_turbo_encoder_response turbo_resp;
+ struct bblib_rate_match_dl_request rm_req;
+ struct bblib_rate_match_dl_response rm_resp;
+
+ k_idx = compute_idx(k);
+ in = rte_pktmbuf_mtod_offset(m_in, uint8_t *, in_offset);
+
+ /* CRC24A (for TB) */
+ if ((enc->op_flags & RTE_BBDEV_TURBO_CRC_24A_ATTACH) &&
+ (enc->code_block_mode == 1)) {
+ ret = is_enc_input_valid(k - 24, k_idx, total_left);
+ if (ret != 0) {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ return;
+ }
+ /* copy the input to the temporary buffer to be able to extend
+ * it by 3 CRC bytes
+ */
+ rte_memcpy(q->enc_in, in, (k - 24) >> 3);
+ crc_req.data = q->enc_in;
+ crc_req.len = (k - 24) >> 3;
+ if (bblib_lte_crc24a_gen(&crc_req) == -1) {
+ op->status |= 1 << RTE_BBDEV_CRC_ERROR;
+ rte_bbdev_log(ERR, "CRC24a generation failed");
+ return;
+ }
+ in = q->enc_in;
+ } else if (enc->op_flags & RTE_BBDEV_TURBO_CRC_24B_ATTACH) {
+ /* CRC24B */
+ ret = is_enc_input_valid(k - 24, k_idx, total_left);
+ if (ret != 0) {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ return;
+ }
+ /* copy the input to the temporary buffer to be able to extend
+ * it by 3 CRC bytes
+ */
+ rte_memcpy(q->enc_in, in, (k - 24) >> 3);
+ crc_req.data = q->enc_in;
+ crc_req.len = (k - 24) >> 3;
+ if (bblib_lte_crc24b_gen(&crc_req) == -1) {
+ op->status |= 1 << RTE_BBDEV_CRC_ERROR;
+ rte_bbdev_log(ERR, "CRC24b generation failed");
+ return;
+ }
+ in = q->enc_in;
+ } else {
+ ret = is_enc_input_valid(k, k_idx, total_left);
+ if (ret != 0) {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ return;
+ }
+ }
+
+ /* Turbo encoder */
+
+ /* Each bit layer output from turbo encoder is (k+4) bits long, i.e.
+ * input length + 4 tail bits. That's (k/8) + 1 bytes after rounding up.
+ * So dst_data's length should be 3*(k/8) + 3 bytes.
+ */
+ out0 = q->enc_out;
+ out1 = RTE_PTR_ADD(out0, (k >> 3) + 1);
+ out2 = RTE_PTR_ADD(out1, (k >> 3) + 1);
+
+ turbo_req.case_id = k_idx;
+ turbo_req.input_win = in;
+ turbo_req.length = k >> 3;
+ turbo_resp.output_win_0 = out0;
+ turbo_resp.output_win_1 = out1;
+ turbo_resp.output_win_2 = out2;
+ if (bblib_turbo_encoder(&turbo_req, &turbo_resp) != 0) {
+ op->status |= 1 << RTE_BBDEV_DRV_ERROR;
+ rte_bbdev_log(ERR, "Turbo Encoder failed");
+ return;
+ }
+
+ /* Rate-matching */
+ if (enc->op_flags & RTE_BBDEV_TURBO_RATE_MATCH) {
+ /* get output data starting address */
+ rm_out = (uint8_t *)rte_pktmbuf_append(m_out, (e >> 3));
+ if (rm_out == NULL) {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ rte_bbdev_log(ERR,
+ "Too little space in output mbuf");
+ return;
+ }
+ /* rte_bbdev_op_data.offset can be different than the offset
+ * of the appended bytes
+ */
+ rm_out = rte_pktmbuf_mtod_offset(m_out, uint8_t *, out_offset);
+
+ /* index of current code block */
+ rm_req.r = cb_idx;
+ /* total number of code blocks */
+ rm_req.C = c;
+ /* For DL - 1, UL - 0 */
+ rm_req.direction = 1;
+ /* According to 3GPP TS 36.212 section 5.1.4.1.2, Nsoft, KMIMO
+ * and MDL_HARQ are used for Ncb calculation. As Ncb is already
+ * known we can adjust those parameters
+ */
+ rm_req.Nsoft = ncb * rm_req.C;
+ rm_req.KMIMO = 1;
+ rm_req.MDL_HARQ = 1;
+ /* According to 3GPP TS 36.212 section 5.1.4.1.2, Nl, Qm and G
+ * are used for E calculation. As E is already known we can
+ * adjust those parameters
+ */
+ rm_req.NL = e;
+ rm_req.Qm = 1;
+ rm_req.G = rm_req.NL * rm_req.Qm * rm_req.C;
+
+ rm_req.rvidx = enc->rv_index;
+ rm_req.Kidx = k_idx - 1;
+ rm_req.nLen = k + 4;
+ rm_req.tin0 = out0;
+ rm_req.tin1 = out1;
+ rm_req.tin2 = out2;
+ rm_resp.output = rm_out;
+ rm_resp.OutputLen = (e >> 3);
+ if (enc->op_flags & RTE_BBDEV_TURBO_RV_INDEX_BYPASS)
+ rm_req.bypass_rvidx = 1;
+ else
+ rm_req.bypass_rvidx = 0;
+
+ if (bblib_rate_match_dl(&rm_req, &rm_resp) != 0) {
+ op->status |= 1 << RTE_BBDEV_DRV_ERROR;
+ rte_bbdev_log(ERR, "Rate matching failed");
+ return;
+ }
+ enc->output.length += rm_resp.OutputLen;
+ } else {
+ /* Rate matching is bypassed */
+
+ /* Completing last byte of out0 (where 4 tail bits are stored)
+ * by moving first 4 bits from out1
+ */
+ tmp_out = (uint8_t *) --out1;
+ *tmp_out = *tmp_out | ((*(tmp_out + 1) & 0xF0) >> 4);
+ tmp_out++;
+ /* Shifting out1 data by 4 bits to the left */
+ for (m = 0; m < k >> 3; ++m) {
+ uint8_t *first = tmp_out;
+ uint8_t second = *(tmp_out + 1);
+ *first = (*first << 4) | ((second & 0xF0) >> 4);
+ tmp_out++;
+ }
+ /* Shifting out2 data by 8 bits to the left */
+ for (m = 0; m < (k >> 3) + 1; ++m) {
+ *tmp_out = *(tmp_out + 1);
+ tmp_out++;
+ }
+ *tmp_out = 0;
+
+ /* copy shifted output to turbo_enc entity */
+ out0 = (uint8_t *)rte_pktmbuf_append(m_out,
+ (k >> 3) * 3 + 2);
+ if (out0 == NULL) {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ rte_bbdev_log(ERR,
+ "Too little space in output mbuf");
+ return;
+ }
+ enc->output.length += (k >> 3) * 3 + 2;
+ /* rte_bbdev_op_data.offset can be different than the
+ * offset of the appended bytes
+ */
+ out0 = rte_pktmbuf_mtod_offset(m_out, uint8_t *,
+ out_offset);
+ rte_memcpy(out0, q->enc_out, (k >> 3) * 3 + 2);
+ }
+}
+
+static inline void
+enqueue_enc_one_op(struct turbo_sw_queue *q, struct rte_bbdev_enc_op *op)
+{
+ uint8_t c, r, crc24_bits = 0;
+ uint16_t k, ncb;
+ uint32_t e;
+ struct rte_bbdev_op_turbo_enc *enc = &op->turbo_enc;
+ uint16_t in_offset = enc->input.offset;
+ uint16_t out_offset = enc->output.offset;
+ struct rte_mbuf *m_in = enc->input.data;
+ struct rte_mbuf *m_out = enc->output.data;
+ uint16_t total_left = enc->input.length;
+
+ /* Clear op status */
+ op->status = 0;
+
+ if (total_left > MAX_TB_SIZE >> 3) {
+ rte_bbdev_log(ERR, "TB size (%u) is too big, max: %d",
+ total_left, MAX_TB_SIZE);
+ op->status = 1 << RTE_BBDEV_DATA_ERROR;
+ return;
+ }
+
+ if (m_in == NULL || m_out == NULL) {
+ rte_bbdev_log(ERR, "Invalid mbuf pointer");
+ op->status = 1 << RTE_BBDEV_DATA_ERROR;
+ return;
+ }
+
+ if ((enc->op_flags & RTE_BBDEV_TURBO_CRC_24B_ATTACH) ||
+ (enc->op_flags & RTE_BBDEV_TURBO_CRC_24A_ATTACH))
+ crc24_bits = 24;
+
+ if (enc->code_block_mode == 0) { /* For Transport Block mode */
+ c = enc->tb_params.c;
+ r = enc->tb_params.r;
+ } else {/* For Code Block mode */
+ c = 1;
+ r = 0;
+ }
+
+ while (total_left > 0 && r < c) {
+ if (enc->code_block_mode == 0) {
+ k = (r < enc->tb_params.c_neg) ?
+ enc->tb_params.k_neg : enc->tb_params.k_pos;
+ ncb = (r < enc->tb_params.c_neg) ?
+ enc->tb_params.ncb_neg : enc->tb_params.ncb_pos;
+ e = (r < enc->tb_params.cab) ?
+ enc->tb_params.ea : enc->tb_params.eb;
+ } else {
+ k = enc->cb_params.k;
+ ncb = enc->cb_params.ncb;
+ e = enc->cb_params.e;
+ }
+
+ process_enc_cb(q, op, r, c, k, ncb, e, m_in,
+ m_out, in_offset, out_offset, total_left);
+ /* Update total_left */
+ total_left -= (k - crc24_bits) >> 3;
+ /* Update offsets for next CBs (if exist) */
+ in_offset += (k - crc24_bits) >> 3;
+ if (enc->op_flags & RTE_BBDEV_TURBO_RATE_MATCH)
+ out_offset += e >> 3;
+ else
+ out_offset += (k >> 3) * 3 + 2;
+ r++;
+ }
+
+ /* check if all input data was processed */
+ if (total_left != 0) {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ rte_bbdev_log(ERR,
+ "Mismatch between mbuf length and included CBs sizes");
+ }
+}
+
+static inline uint16_t
+enqueue_enc_all_ops(struct turbo_sw_queue *q, struct rte_bbdev_enc_op **ops,
+ uint16_t nb_ops)
+{
+ uint16_t i;
+
+ for (i = 0; i < nb_ops; ++i)
+ enqueue_enc_one_op(q, ops[i]);
+
+ return rte_ring_enqueue_burst(q->processed_pkts, (void **)ops, nb_ops,
+ NULL);
+}
+
+/* Remove the padding bytes from a cyclic buffer.
+ * The input buffer is a data stream wk as described in 3GPP TS 36.212 section
+ * 5.1.4.1.2 starting from w0 and with length Ncb bytes.
+ * The output buffer is a data stream wk with pruned padding bytes. Its length
+ * is 3*D bytes and the order of non-padding bytes is preserved.
+ */
+static inline void
+remove_nulls_from_circular_buf(const uint8_t *in, uint8_t *out, uint16_t k,
+ uint16_t ncb)
+{
+ uint32_t in_idx, out_idx, c_idx;
+ const uint32_t d = k + 4;
+ const uint32_t kw = (ncb / 3);
+ const uint32_t nd = kw - d;
+ const uint32_t r_subblock = kw / C_SUBBLOCK;
+ /* Inter-column permutation pattern */
+ const uint32_t P[C_SUBBLOCK] = {0, 16, 8, 24, 4, 20, 12, 28, 2, 18, 10,
+ 26, 6, 22, 14, 30, 1, 17, 9, 25, 5, 21, 13, 29, 3, 19,
+ 11, 27, 7, 23, 15, 31};
+ in_idx = 0;
+ out_idx = 0;
+
+ /* The padding bytes are at the first Nd positions in the first row. */
+ for (c_idx = 0; in_idx < kw; in_idx += r_subblock, ++c_idx) {
+ if (P[c_idx] < nd) {
+ rte_memcpy(&out[out_idx], &in[in_idx + 1],
+ r_subblock - 1);
+ out_idx += r_subblock - 1;
+ } else {
+ rte_memcpy(&out[out_idx], &in[in_idx], r_subblock);
+ out_idx += r_subblock;
+ }
+ }
+
+ /* First and second parity bits sub-blocks are interlaced. */
+ for (c_idx = 0; in_idx < ncb - 2 * r_subblock;
+ in_idx += 2 * r_subblock, ++c_idx) {
+ uint32_t second_block_c_idx = P[c_idx];
+ uint32_t third_block_c_idx = P[c_idx] + 1;
+
+ if (second_block_c_idx < nd && third_block_c_idx < nd) {
+ rte_memcpy(&out[out_idx], &in[in_idx + 2],
+ 2 * r_subblock - 2);
+ out_idx += 2 * r_subblock - 2;
+ } else if (second_block_c_idx >= nd &&
+ third_block_c_idx >= nd) {
+ rte_memcpy(&out[out_idx], &in[in_idx], 2 * r_subblock);
+ out_idx += 2 * r_subblock;
+ } else if (second_block_c_idx < nd) {
+ out[out_idx++] = in[in_idx];
+ rte_memcpy(&out[out_idx], &in[in_idx + 2],
+ 2 * r_subblock - 2);
+ out_idx += 2 * r_subblock - 2;
+ } else {
+ rte_memcpy(&out[out_idx], &in[in_idx + 1],
+ 2 * r_subblock - 1);
+ out_idx += 2 * r_subblock - 1;
+ }
+ }
+
+ /* Last interlaced row is different - its last byte is the only padding
+ * byte. We can have from 2 up to 26 padding bytes (Nd) per sub-block.
+ * After interlacing the 1st and 2nd parity sub-blocks we can have 0, 1
+ * or 2 padding bytes each time we make a step of 2 * R_SUBBLOCK bytes
+ * (moving to another column). 2nd parity sub-block uses the same
+ * inter-column permutation pattern as the systematic and 1st parity
+ * sub-blocks but it adds '1' to the resulting index and calculates the
+ * modulus of the result and Kw. Last column is mapped to itself (id 31)
+ * so the first byte taken from the 2nd parity sub-block will be the
+ * 32nd (31+1) byte, then 64th etc. (step is C_SUBBLOCK == 32) and the
+ * last byte will be the first byte from the sub-block:
+ * (32 + 32 * (R_SUBBLOCK-1)) % Kw == Kw % Kw == 0. Nd can't be smaller
+ * than 2 so we know that bytes with ids 0 and 1 must be the padding
+ * bytes. The bytes from the 1st parity sub-block are the bytes from the
+ * 31st column - Nd can't be greater than 26 so we are sure that there
+ * are no padding bytes in 31st column.
+ */
+ rte_memcpy(&out[out_idx], &in[in_idx], 2 * r_subblock - 1);
+}
+
+static inline void
+move_padding_bytes(const uint8_t *in, uint8_t *out, uint16_t k,
+ uint16_t ncb)
+{
+ uint16_t d = k + 4;
+ uint16_t kpi = ncb / 3;
+ uint16_t nd = kpi - d;
+
+ rte_memcpy(&out[nd], in, d);
+ rte_memcpy(&out[nd + kpi + 64], &in[kpi], d);
+ rte_memcpy(&out[nd + 2 * (kpi + 64)], &in[2 * kpi], d);
+}
+
+static inline void
+process_dec_cb(struct turbo_sw_queue *q, struct rte_bbdev_dec_op *op,
+ uint8_t c, uint16_t k, uint16_t kw, struct rte_mbuf *m_in,
+ struct rte_mbuf *m_out, uint16_t in_offset, uint16_t out_offset,
+ bool check_crc_24b, uint16_t total_left)
+{
+ int ret;
+ int32_t k_idx;
+ int32_t iter_cnt;
+ uint8_t *in, *out, *adapter_input;
+ int32_t ncb, ncb_without_null;
+ struct bblib_turbo_adapter_ul_response adapter_resp;
+ struct bblib_turbo_adapter_ul_request adapter_req;
+ struct bblib_turbo_decoder_request turbo_req;
+ struct bblib_turbo_decoder_response turbo_resp;
+ struct rte_bbdev_op_turbo_dec *dec = &op->turbo_dec;
+
+ k_idx = compute_idx(k);
+
+ ret = is_dec_input_valid(k_idx, kw, total_left);
+ if (ret != 0) {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ return;
+ }
+
+ in = rte_pktmbuf_mtod_offset(m_in, uint8_t *, in_offset);
+ ncb = kw;
+ ncb_without_null = (k + 4) * 3;
+
+ if (check_bit(dec->op_flags, RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE)) {
+ struct bblib_deinterleave_ul_request deint_req;
+ struct bblib_deinterleave_ul_response deint_resp;
+
+ /* SW decoder accepts only a circular buffer without NULL bytes
+ * so the input needs to be converted.
+ */
+ remove_nulls_from_circular_buf(in, q->deint_input, k, ncb);
+
+ deint_req.pharqbuffer = q->deint_input;
+ deint_req.ncb = ncb_without_null;
+ deint_resp.pinteleavebuffer = q->deint_output;
+ bblib_deinterleave_ul(&deint_req, &deint_resp);
+ } else
+ move_padding_bytes(in, q->deint_output, k, ncb);
+
+ adapter_input = q->deint_output;
+
+ if (dec->op_flags & RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN)
+ adapter_req.isinverted = 1;
+ else if (dec->op_flags & RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN)
+ adapter_req.isinverted = 0;
+ else {
+ op->status |= 1 << RTE_BBDEV_DRV_ERROR;
+ rte_bbdev_log(ERR, "LLR format wasn't specified");
+ return;
+ }
+
+ adapter_req.ncb = ncb_without_null;
+ adapter_req.pinteleavebuffer = adapter_input;
+ adapter_resp.pharqout = q->adapter_output;
+ bblib_turbo_adapter_ul(&adapter_req, &adapter_resp);
+
+ out = (uint8_t *)rte_pktmbuf_append(m_out, (k >> 3));
+ if (out == NULL) {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ rte_bbdev_log(ERR, "Too little space in output mbuf");
+ return;
+ }
+ /* rte_bbdev_op_data.offset can be different than the offset of the
+ * appended bytes
+ */
+ out = rte_pktmbuf_mtod_offset(m_out, uint8_t *, out_offset);
+ if (check_crc_24b)
+ turbo_req.c = c + 1;
+ else
+ turbo_req.c = c;
+ turbo_req.input = (int8_t *)q->adapter_output;
+ turbo_req.k = k;
+ turbo_req.k_idx = k_idx;
+ turbo_req.max_iter_num = dec->iter_max;
+ turbo_resp.ag_buf = q->ag;
+ turbo_resp.cb_buf = q->code_block;
+ turbo_resp.output = out;
+ iter_cnt = bblib_turbo_decoder(&turbo_req, &turbo_resp);
+ dec->hard_output.length += (k >> 3);
+
+ if (iter_cnt > 0) {
+ /* Temporary solution for returned iter_count from SDK */
+ iter_cnt = (iter_cnt - 1) / 2;
+ dec->iter_count = RTE_MAX(iter_cnt, dec->iter_count);
+ } else {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ rte_bbdev_log(ERR, "Turbo Decoder failed");
+ return;
+ }
+}
+
+static inline void
+enqueue_dec_one_op(struct turbo_sw_queue *q, struct rte_bbdev_dec_op *op)
+{
+ uint8_t c, r = 0;
+ uint16_t kw, k = 0;
+ struct rte_bbdev_op_turbo_dec *dec = &op->turbo_dec;
+ struct rte_mbuf *m_in = dec->input.data;
+ struct rte_mbuf *m_out = dec->hard_output.data;
+ uint16_t in_offset = dec->input.offset;
+ uint16_t total_left = dec->input.length;
+ uint16_t out_offset = dec->hard_output.offset;
+
+ /* Clear op status */
+ op->status = 0;
+
+ if (m_in == NULL || m_out == NULL) {
+ rte_bbdev_log(ERR, "Invalid mbuf pointer");
+ op->status = 1 << RTE_BBDEV_DATA_ERROR;
+ return;
+ }
+
+ if (dec->code_block_mode == 0) { /* For Transport Block mode */
+ c = dec->tb_params.c;
+ } else { /* For Code Block mode */
+ k = dec->cb_params.k;
+ c = 1;
+ }
+
+ while (total_left > 0) {
+ if (dec->code_block_mode == 0)
+ k = (r < dec->tb_params.c_neg) ?
+ dec->tb_params.k_neg : dec->tb_params.k_pos;
+
+ /* Calculates circular buffer size (Kw).
+ * According to 3GPP TS 36.212 section 5.1.4.2
+ * Kw = 3 * Kpi,
+ * where:
+ * Kpi = nCol * nRow
+ * where nCol is 32 and nRow can be calculated from:
+ * D <= nCol * nRow
+ * where D is the size of each output from turbo encoder block
+ * (k + 4).
+ */
+ kw = RTE_ALIGN_CEIL(k + 4, C_SUBBLOCK) * 3;
+
+ process_dec_cb(q, op, c, k, kw, m_in, m_out, in_offset,
+ out_offset, check_bit(dec->op_flags,
+ RTE_BBDEV_TURBO_CRC_TYPE_24B), total_left);
+ /* As a result of decoding we get Code Block with included
+ * decoded CRC24 at the end of Code Block. Type of CRC24 is
+ * specified by flag.
+ */
+
+ /* Update total_left */
+ total_left -= kw;
+ /* Update offsets for next CBs (if exist) */
+ in_offset += kw;
+ out_offset += (k >> 3);
+ r++;
+ }
+ if (total_left != 0) {
+ op->status |= 1 << RTE_BBDEV_DATA_ERROR;
+ rte_bbdev_log(ERR,
+ "Mismatch between mbuf length and included Circular buffer sizes");
+ }
+}
+
+static inline uint16_t
+enqueue_dec_all_ops(struct turbo_sw_queue *q, struct rte_bbdev_dec_op **ops,
+ uint16_t nb_ops)
+{
+ uint16_t i;
+
+ for (i = 0; i < nb_ops; ++i)
+ enqueue_dec_one_op(q, ops[i]);
+
+ return rte_ring_enqueue_burst(q->processed_pkts, (void **)ops, nb_ops,
+ NULL);
+}
+
+/* Enqueue burst */
+static uint16_t
+enqueue_enc_ops(struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
+{
+ void *queue = q_data->queue_private;
+ struct turbo_sw_queue *q = queue;
+ uint16_t nb_enqueued = 0;
+
+ nb_enqueued = enqueue_enc_all_ops(q, ops, nb_ops);
+
+ q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
+ q_data->queue_stats.enqueued_count += nb_enqueued;
+
+ return nb_enqueued;
+}
+
+/* Enqueue burst */
+static uint16_t
+enqueue_dec_ops(struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
+{
+ void *queue = q_data->queue_private;
+ struct turbo_sw_queue *q = queue;
+ uint16_t nb_enqueued = 0;
+
+ nb_enqueued = enqueue_dec_all_ops(q, ops, nb_ops);
+
+ q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
+ q_data->queue_stats.enqueued_count += nb_enqueued;
+
+ return nb_enqueued;
+}
+
+/* Dequeue decode burst */
+static uint16_t
+dequeue_dec_ops(struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
+{
+ struct turbo_sw_queue *q = q_data->queue_private;
+ uint16_t nb_dequeued = rte_ring_dequeue_burst(q->processed_pkts,
+ (void **)ops, nb_ops, NULL);
+ q_data->queue_stats.dequeued_count += nb_dequeued;
+
+ return nb_dequeued;
+}
+
+/* Dequeue encode burst */
+static uint16_t
+dequeue_enc_ops(struct rte_bbdev_queue_data *q_data,
+ struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
+{
+ struct turbo_sw_queue *q = q_data->queue_private;
+ uint16_t nb_dequeued = rte_ring_dequeue_burst(q->processed_pkts,
+ (void **)ops, nb_ops, NULL);
+ q_data->queue_stats.dequeued_count += nb_dequeued;
+
+ return nb_dequeued;
+}
+
+/* Parse 16bit integer from string argument */
+static inline int
+parse_u16_arg(const char *key, const char *value, void *extra_args)
+{
+ uint16_t *u16 = extra_args;
+ unsigned long result;
+
+ if ((value == NULL) || (extra_args == NULL))
+ return -EINVAL;
+ errno = 0;
+ result = strtoul(value, NULL, 0);
+ if ((result >= (1 << 16)) || (errno != 0)) {
+ rte_bbdev_log(ERR, "Invalid value %lu for %s", result, key);
+ return -ERANGE;
+ }
+ *u16 = (uint16_t)result;
+ return 0;
+}
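The helper above follows a common kvargs pattern: parse with strtoul, then reject anything that does not fit in 16 bits. A hypothetical Python equivalent of the same bounds check (not part of the driver):

```python
def parse_u16(value):
    """Parse a string as an unsigned 16-bit integer, mirroring parse_u16_arg().

    base 0 mimics strtoul(value, NULL, 0): decimal, hex ("0x..") or octal.
    Raises ValueError where the C helper would return -ERANGE.
    """
    result = int(value, 0)
    if not 0 <= result < (1 << 16):
        raise ValueError("value %d out of uint16 range" % result)
    return result

print(parse_u16("0x20"))  # 32
```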
+
+/* Parse parameters used to create device */
+static int
+parse_turbo_sw_params(struct turbo_sw_params *params, const char *input_args)
+{
+ struct rte_kvargs *kvlist = NULL;
+ int ret = 0;
+
+ if (params == NULL)
+ return -EINVAL;
+ if (input_args) {
+ kvlist = rte_kvargs_parse(input_args, turbo_sw_valid_params);
+ if (kvlist == NULL)
+ return -EFAULT;
+
+ ret = rte_kvargs_process(kvlist, turbo_sw_valid_params[0],
+ &parse_u16_arg, &params->queues_num);
+ if (ret < 0)
+ goto exit;
+
+ ret = rte_kvargs_process(kvlist, turbo_sw_valid_params[1],
+ &parse_u16_arg, &params->socket_id);
+ if (ret < 0)
+ goto exit;
+
+ if (params->socket_id >= RTE_MAX_NUMA_NODES) {
+ rte_bbdev_log(ERR, "Invalid socket, must be < %u",
+ RTE_MAX_NUMA_NODES);
+ ret = -EINVAL;
+ goto exit;
+ }
+ }
+
+exit:
+ if (kvlist)
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+/* Create device */
+static int
+turbo_sw_bbdev_create(struct rte_vdev_device *vdev,
+ struct turbo_sw_params *init_params)
+{
+ struct rte_bbdev *bbdev;
+ const char *name = rte_vdev_device_name(vdev);
+
+ bbdev = rte_bbdev_allocate(name);
+ if (bbdev == NULL)
+ return -ENODEV;
+
+ bbdev->data->dev_private = rte_zmalloc_socket(name,
+ sizeof(struct bbdev_private), RTE_CACHE_LINE_SIZE,
+ init_params->socket_id);
+ if (bbdev->data->dev_private == NULL) {
+ rte_bbdev_release(bbdev);
+ return -ENOMEM;
+ }
+
+ bbdev->dev_ops = &pmd_ops;
+ bbdev->device = &vdev->device;
+ bbdev->data->socket_id = init_params->socket_id;
+ bbdev->intr_handle = NULL;
+
+ /* register rx/tx burst functions for data path */
+ bbdev->dequeue_enc_ops = dequeue_enc_ops;
+ bbdev->dequeue_dec_ops = dequeue_dec_ops;
+ bbdev->enqueue_enc_ops = enqueue_enc_ops;
+ bbdev->enqueue_dec_ops = enqueue_dec_ops;
+ ((struct bbdev_private *) bbdev->data->dev_private)->max_nb_queues =
+ init_params->queues_num;
+
+ return 0;
+}
+
+/* Initialise device */
+static int
+turbo_sw_bbdev_probe(struct rte_vdev_device *vdev)
+{
+ struct turbo_sw_params init_params = {
+ rte_socket_id(),
+ RTE_BBDEV_DEFAULT_MAX_NB_QUEUES
+ };
+ const char *name;
+ const char *input_args;
+
+ if (vdev == NULL)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+ input_args = rte_vdev_device_args(vdev);
+ parse_turbo_sw_params(&init_params, input_args);
+
+ rte_bbdev_log_debug(
+ "Initialising %s on NUMA node %d with max queues: %d\n",
+ name, init_params.socket_id, init_params.queues_num);
+
+ return turbo_sw_bbdev_create(vdev, &init_params);
+}
+
+/* Uninitialise device */
+static int
+turbo_sw_bbdev_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_bbdev *bbdev;
+ const char *name;
+
+ if (vdev == NULL)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+
+ bbdev = rte_bbdev_get_named_dev(name);
+ if (bbdev == NULL)
+ return -EINVAL;
+
+ rte_free(bbdev->data->dev_private);
+
+ return rte_bbdev_release(bbdev);
+}
+
+static struct rte_vdev_driver bbdev_turbo_sw_pmd_drv = {
+ .probe = turbo_sw_bbdev_probe,
+ .remove = turbo_sw_bbdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(DRIVER_NAME, bbdev_turbo_sw_pmd_drv);
+RTE_PMD_REGISTER_PARAM_STRING(DRIVER_NAME,
+ TURBO_SW_MAX_NB_QUEUES_ARG"=<int> "
+ TURBO_SW_SOCKET_ID_ARG"=<int>");
+
+int bbdev_logtype;
+RTE_INIT(turbo_sw_bbdev_init_log);
+static void
+turbo_sw_bbdev_init_log(void)
+{
+ bbdev_logtype = rte_log_register("pmd.bbdev.turbo_sw");
+ if (bbdev_logtype >= 0)
+ rte_log_set_level(bbdev_logtype, RTE_LOG_NOTICE);
+}
diff --git a/drivers/bbdev/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map b/drivers/bbdev/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
new file mode 100644
index 0000000..58b9427
--- /dev/null
+++ b/drivers/bbdev/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
@@ -0,0 +1,3 @@
+DPDK_18.02 {
+ local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5c3444f..3c2d69c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -165,6 +165,14 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += -lrte_pmd_vmxnet3_uio
ifeq ($(CONFIG_RTE_LIBRTE_BBDEV),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_NULL) += -lrte_pmd_bbdev_null
+
+# TURBO SOFTWARE PMD is dependent on the FLEXRAN library
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -lrte_pmd_bbdev_turbo_sw
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -L$(FLEXRAN_SDK)/lib_crc -lcrc
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -L$(FLEXRAN_SDK)/lib_turbo -lturbo
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -L$(FLEXRAN_SDK)/lib_rate_matching -lrate_matching
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -L$(FLEXRAN_SDK)/lib_common -lcommon
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW) += -lirc -limf -lstdc++ -lipps
endif # CONFIG_RTE_LIBRTE_BBDEV
ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
--
2.7.4
* [dpdk-dev] [PATCH v5 4/5] bbdev: test applications
2018-01-11 19:23 [dpdk-dev] [PATCH v5 1/5] bbdev: introducing wireless base band device abstraction lib Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 2/5] bbdev: null device driver Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 3/5] bbdev: software turbo driver Amr Mokhtar
@ 2018-01-11 19:23 ` Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 5/5] bbdev: sample app Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 0/5] Introducing Wireless Base Band Device (bbdev) abstraction library Amr Mokhtar
4 siblings, 0 replies; 11+ messages in thread
From: Amr Mokhtar @ 2018-01-11 19:23 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, anatoly.burakov, pablo.de.lara.guarch,
niall.power, chris.macnamara, Amr Mokhtar
- full test suite for bbdev
- test app works seamlessly on all PMDs registered with the bbdev
framework
- a python script is provided to ease running and automating the tests
- supports execution of tests by parsing test vector files
- test vectors can be added/deleted/modified with no need for
re-compilation
- various tests can be executed:
(a) Throughput test
(b) Offload latency test
(c) Operation latency test
(d) Validation test
(e) Sanity checks
Signed-off-by: Amr Mokhtar <amr.mokhtar@intel.com>
---
MAINTAINERS | 2 +
app/Makefile | 4 +
app/test-bbdev/Makefile | 22 +
app/test-bbdev/main.c | 325 +++
app/test-bbdev/main.h | 120 ++
app/test-bbdev/test-bbdev.py | 111 +
app/test-bbdev/test_bbdev.c | 1378 +++++++++++++
app/test-bbdev/test_bbdev_perf.c | 2136 ++++++++++++++++++++
app/test-bbdev/test_bbdev_vector.c | 937 +++++++++
app/test-bbdev/test_bbdev_vector.h | 71 +
app/test-bbdev/test_vectors/bbdev_vector_null.data | 5 +
.../test_vectors/bbdev_vector_td_default.data | 54 +
.../test_vectors/bbdev_vector_te_default.data | 33 +
config/common_base | 5 +
doc/guides/tools/index.rst | 1 +
doc/guides/tools/testbbdev.rst | 538 +++++
16 files changed, 5742 insertions(+)
create mode 100644 app/test-bbdev/Makefile
create mode 100644 app/test-bbdev/main.c
create mode 100644 app/test-bbdev/main.h
create mode 100755 app/test-bbdev/test-bbdev.py
create mode 100644 app/test-bbdev/test_bbdev.c
create mode 100644 app/test-bbdev/test_bbdev_perf.c
create mode 100644 app/test-bbdev/test_bbdev_vector.c
create mode 100644 app/test-bbdev/test_bbdev_vector.h
create mode 100644 app/test-bbdev/test_vectors/bbdev_vector_null.data
create mode 100644 app/test-bbdev/test_vectors/bbdev_vector_td_default.data
create mode 100644 app/test-bbdev/test_vectors/bbdev_vector_te_default.data
create mode 100644 doc/guides/tools/testbbdev.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 8cb296e..81b883e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -275,8 +275,10 @@ BBDEV API - EXPERIMENTAL
M: Amr Mokhtar <amr.mokhtar@intel.com>
F: lib/librte_bbdev/
F: drivers/bbdev/
+F: app/test-bbdev/
F: doc/guides/bbdevs/
F: doc/guides/prog_guide/bbdev.rst
+F: doc/guides/tools/testbbdev.rst
Security API - EXPERIMENTAL
M: Akhil Goyal <akhil.goyal@nxp.com>
diff --git a/app/Makefile b/app/Makefile
index 24c9067..e370c4b 100644
--- a/app/Makefile
+++ b/app/Makefile
@@ -15,4 +15,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
DIRS-$(CONFIG_RTE_APP_EVENTDEV) += test-eventdev
endif
+ifeq ($(CONFIG_RTE_LIBRTE_BBDEV),y)
+DIRS-$(CONFIG_RTE_TEST_BBDEV) += test-bbdev
+endif
+
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/app/test-bbdev/Makefile b/app/test-bbdev/Makefile
new file mode 100644
index 0000000..5a6e36a
--- /dev/null
+++ b/app/test-bbdev/Makefile
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+APP = testbbdev
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+#
+# all sources are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_TEST_BBDEV) += main.c
+SRCS-$(CONFIG_RTE_TEST_BBDEV) += test_bbdev.c
+SRCS-$(CONFIG_RTE_TEST_BBDEV) += test_bbdev_perf.c
+SRCS-$(CONFIG_RTE_TEST_BBDEV) += test_bbdev_vector.c
+
+include $(RTE_SDK)/mk/rte.app.mk
diff --git a/app/test-bbdev/main.c b/app/test-bbdev/main.c
new file mode 100644
index 0000000..3f542a8
--- /dev/null
+++ b/app/test-bbdev/main.c
@@ -0,0 +1,325 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <getopt.h>
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_common.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_lcore.h>
+
+#include "main.h"
+
+/* Defines how many testcases can be specified as cmdline args */
+#define MAX_CMDLINE_TESTCASES 8
+
+static const char tc_sep = ',';
+
+static struct test_params {
+ struct test_command *test_to_run[MAX_CMDLINE_TESTCASES];
+ unsigned int num_tests;
+ unsigned int num_ops;
+ unsigned int burst_sz;
+ unsigned int num_lcores;
+ char test_vector_filename[PATH_MAX];
+} test_params;
+
+static struct test_commands_list commands_list =
+ TAILQ_HEAD_INITIALIZER(commands_list);
+
+void
+add_test_command(struct test_command *t)
+{
+ TAILQ_INSERT_TAIL(&commands_list, t, next);
+}
+
+int
+unit_test_suite_runner(struct unit_test_suite *suite)
+{
+ int test_result = TEST_SUCCESS;
+ unsigned int total = 0, skipped = 0, succeeded = 0, failed = 0;
+ uint64_t start, end;
+
+ printf(
+ "\n + ------------------------------------------------------- +\n");
+ printf(" + Starting Test Suite : %s\n", suite->suite_name);
+
+ start = rte_rdtsc_precise();
+
+ if (suite->setup) {
+ test_result = suite->setup();
+ if (test_result == TEST_FAILED) {
+ printf(" + Test suite setup %s failed!\n",
+ suite->suite_name);
+ printf(
+ " + ------------------------------------------------------- +\n");
+ return 1;
+ }
+ if (test_result == TEST_SKIPPED) {
+ printf(" + Test suite setup %s skipped!\n",
+ suite->suite_name);
+ printf(
+ " + ------------------------------------------------------- +\n");
+ return 0;
+ }
+ }
+
+ while (suite->unit_test_cases[total].testcase) {
+ if (suite->unit_test_cases[total].setup)
+ test_result = suite->unit_test_cases[total].setup();
+
+ if (test_result == TEST_SUCCESS)
+ test_result = suite->unit_test_cases[total].testcase();
+
+ if (suite->unit_test_cases[total].teardown)
+ suite->unit_test_cases[total].teardown();
+
+ if (test_result == TEST_SUCCESS) {
+ succeeded++;
+ printf(" + TestCase [%2d] : %s passed\n", total,
+ suite->unit_test_cases[total].name);
+ } else if (test_result == TEST_SKIPPED) {
+ skipped++;
+ printf(" + TestCase [%2d] : %s skipped\n", total,
+ suite->unit_test_cases[total].name);
+ } else {
+ failed++;
+ printf(" + TestCase [%2d] : %s failed\n", total,
+ suite->unit_test_cases[total].name);
+ }
+
+ total++;
+ }
+
+ /* Run test suite teardown */
+ if (suite->teardown)
+ suite->teardown();
+
+ end = rte_rdtsc_precise();
+
+ printf(" + ------------------------------------------------------- +\n");
+ printf(" + Test Suite Summary : %s\n", suite->suite_name);
+ printf(" + Tests Total : %2d\n", total);
+ printf(" + Tests Skipped : %2d\n", skipped);
+ printf(" + Tests Passed : %2d\n", succeeded);
+ printf(" + Tests Failed : %2d\n", failed);
+ printf(" + Tests Lasted : %lg ms\n",
+ ((end - start) * 1000) / (double)rte_get_tsc_hz());
+ printf(" + ------------------------------------------------------- +\n");
+
+ return (failed > 0) ? 1 : 0;
+}
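The runner's accounting is simple: each case lands in exactly one of passed/skipped/failed, and the exit status is non-zero only when at least one case failed. A small Python sketch of that contract (illustrative, not app code):

```python
# Sketch of unit_test_suite_runner() accounting with the same result codes.
TEST_SUCCESS, TEST_FAILED, TEST_SKIPPED = 0, -1, 1

def run_suite(cases):
    counts = {"passed": 0, "skipped": 0, "failed": 0}
    for case in cases:
        result = case()
        if result == TEST_SUCCESS:
            counts["passed"] += 1
        elif result == TEST_SKIPPED:
            counts["skipped"] += 1
        else:
            counts["failed"] += 1
    # Non-zero exit status iff any case failed, as in the C runner.
    return counts, (1 if counts["failed"] else 0)

counts, status = run_suite([lambda: 0, lambda: 1, lambda: -1])
print(counts, status)
```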
+
+const char *
+get_vector_filename(void)
+{
+ return test_params.test_vector_filename;
+}
+
+unsigned int
+get_num_ops(void)
+{
+ return test_params.num_ops;
+}
+
+unsigned int
+get_burst_sz(void)
+{
+ return test_params.burst_sz;
+}
+
+unsigned int
+get_num_lcores(void)
+{
+ return test_params.num_lcores;
+}
+
+static void
+print_usage(const char *prog_name)
+{
+ struct test_command *t;
+
+ printf("Usage: %s [EAL params] [-- [-n/--num-ops NUM_OPS]\n"
+ "\t[-b/--burst-size BURST_SIZE]\n"
+ "\t[-v/--test-vector VECTOR_FILE]\n"
+ "\t[-c/--test-cases TEST_CASE[,TEST_CASE,...]]]\n",
+ prog_name);
+
+ printf("Available testcases: ");
+ TAILQ_FOREACH(t, &commands_list, next)
+ printf("%s ", t->command);
+ printf("\n");
+}
+
+static int
+parse_args(int argc, char **argv, struct test_params *tp)
+{
+ int opt, option_index;
+ unsigned int num_tests = 0;
+ bool test_cases_present = false;
+ bool test_vector_present = false;
+ struct test_command *t;
+ char *tokens[MAX_CMDLINE_TESTCASES];
+ int tc, ret;
+
+ static struct option lgopts[] = {
+ { "num-ops", 1, 0, 'n' },
+ { "burst-size", 1, 0, 'b' },
+ { "test-cases", 1, 0, 'c' },
+ { "test-vector", 1, 0, 'v' },
+ { "lcores", 1, 0, 'l' },
+ { "help", 0, 0, 'h' },
+ { NULL, 0, 0, 0 }
+ };
+
+ while ((opt = getopt_long(argc, argv, "hn:b:c:v:l:", lgopts,
+ &option_index)) != EOF)
+ switch (opt) {
+ case 'n':
+ TEST_ASSERT(strlen(optarg) > 0,
+ "Num of operations is not provided");
+ tp->num_ops = strtol(optarg, NULL, 10);
+ break;
+ case 'b':
+ TEST_ASSERT(strlen(optarg) > 0,
+ "Burst size is not provided");
+ tp->burst_sz = strtol(optarg, NULL, 10);
+ TEST_ASSERT(tp->burst_sz <= MAX_BURST,
+ "Burst size mustn't be greater than %u",
+ MAX_BURST);
+ break;
+ case 'c':
+ TEST_ASSERT(test_cases_present == false,
+ "Test cases provided more than once");
+ test_cases_present = true;
+
+ ret = rte_strsplit(optarg, strlen(optarg),
+ tokens, MAX_CMDLINE_TESTCASES, tc_sep);
+
+ TEST_ASSERT(ret <= MAX_CMDLINE_TESTCASES,
+ "Too many test cases (max=%d)",
+ MAX_CMDLINE_TESTCASES);
+
+ for (tc = 0; tc < ret; ++tc) {
+ /* Find matching test case */
+ TAILQ_FOREACH(t, &commands_list, next)
+ if (!strcmp(tokens[tc], t->command))
+ tp->test_to_run[num_tests] = t;
+
+ TEST_ASSERT(tp->test_to_run[num_tests] != NULL,
+ "Unknown test case: %s",
+ tokens[tc]);
+ ++num_tests;
+ }
+ break;
+ case 'v':
+ TEST_ASSERT(test_vector_present == false,
+ "Test vector provided more than once");
+ test_vector_present = true;
+
+ TEST_ASSERT(strlen(optarg) > 0,
+ "Config file name is null");
+
+ snprintf(tp->test_vector_filename,
+ sizeof(tp->test_vector_filename), "%s", optarg);
+ break;
+ case 'l':
+ TEST_ASSERT(strlen(optarg) > 0,
+ "Num of lcores is not provided");
+ tp->num_lcores = strtol(optarg, NULL, 10);
+ TEST_ASSERT(tp->num_lcores <= RTE_MAX_LCORE,
+ "Num of lcores mustn't be greater than %u",
+ RTE_MAX_LCORE);
+ break;
+ case 'h':
+ print_usage(argv[0]);
+ return 0;
+ default:
+ printf("ERROR: Unknown option: -%c\n", opt);
+ return -1;
+ }
+
+ if (tp->num_ops == 0) {
+ printf(
+ "WARNING: Num of operations was not provided or was set 0. Set to default (%u)\n",
+ DEFAULT_OPS);
+ tp->num_ops = DEFAULT_OPS;
+ }
+ if (tp->burst_sz == 0) {
+ printf(
+ "WARNING: Burst size was not provided or was set 0. Set to default (%u)\n",
+ DEFAULT_BURST);
+ tp->burst_sz = DEFAULT_BURST;
+ }
+ if (tp->num_lcores == 0) {
+ printf(
+ "WARNING: Num of lcores was not provided or was set 0. Set to value from RTE config (%u)\n",
+ rte_lcore_count());
+ tp->num_lcores = rte_lcore_count();
+ }
+
+ TEST_ASSERT(tp->burst_sz <= tp->num_ops,
+ "Burst size (%u) mustn't be greater than num ops (%u)",
+ tp->burst_sz, tp->num_ops);
+
+ tp->num_tests = num_tests;
+ return 0;
+}
+
+static int
+run_all_tests(void)
+{
+ int ret = TEST_SUCCESS;
+ struct test_command *t;
+
+ TAILQ_FOREACH(t, &commands_list, next)
+ ret |= t->callback();
+
+ return ret;
+}
+
+static int
+run_parsed_tests(struct test_params *tp)
+{
+ int ret = TEST_SUCCESS;
+ unsigned int i;
+
+ for (i = 0; i < tp->num_tests; ++i)
+ ret |= tp->test_to_run[i]->callback();
+
+ return ret;
+}
+
+int
+main(int argc, char **argv)
+{
+ int ret;
+
+ /* Init EAL */
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ return 1;
+ argc -= ret;
+ argv += ret;
+
+ /* Parse application arguments (after the EAL ones) */
+ ret = parse_args(argc, argv, &test_params);
+ if (ret < 0) {
+ print_usage(argv[0]);
+ return 1;
+ }
+
+ rte_log_set_global_level(RTE_LOG_INFO);
+
+ /* If no argument provided - run all tests */
+ if (test_params.num_tests == 0)
+ return run_all_tests();
+ else
+ return run_parsed_tests(&test_params);
+}
diff --git a/app/test-bbdev/main.h b/app/test-bbdev/main.h
new file mode 100644
index 0000000..20a55ef
--- /dev/null
+++ b/app/test-bbdev/main.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _MAIN_H_
+#define _MAIN_H_
+
+#include <stddef.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_log.h>
+
+#define TEST_SUCCESS 0
+#define TEST_FAILED -1
+#define TEST_SKIPPED 1
+
+#define MAX_BURST 512U
+#define DEFAULT_BURST 32U
+#define DEFAULT_OPS 64U
+
+#define TEST_ASSERT(cond, msg, ...) do { \
+ if (!(cond)) { \
+ printf("TestCase %s() line %d failed: " \
+ msg "\n", __func__, __LINE__, ##__VA_ARGS__); \
+ return TEST_FAILED; \
+ } \
+} while (0)
+
+/* Compare two buffers (length in bytes) */
+#define TEST_ASSERT_BUFFERS_ARE_EQUAL(a, b, len, msg, ...) do { \
+ if (memcmp((a), (b), len)) { \
+ printf("TestCase %s() line %d failed: " \
+ msg "\n", __func__, __LINE__, ##__VA_ARGS__); \
+ rte_memdump(stdout, "Buffer A", (a), len); \
+ rte_memdump(stdout, "Buffer B", (b), len); \
+ return TEST_FAILED; \
+ } \
+} while (0)
+
+#define TEST_ASSERT_SUCCESS(val, msg, ...) do { \
+ typeof(val) _val = (val); \
+ if (!(_val == 0)) { \
+ printf("TestCase %s() line %d failed (err %d): " \
+ msg "\n", __func__, __LINE__, _val, \
+ ##__VA_ARGS__); \
+ return TEST_FAILED; \
+ } \
+} while (0)
+
+#define TEST_ASSERT_FAIL(val, msg, ...) \
+ TEST_ASSERT_SUCCESS(!(val), msg, ##__VA_ARGS__)
+
+#define TEST_ASSERT_NOT_NULL(val, msg, ...) do { \
+ if ((val) == NULL) { \
+ printf("TestCase %s() line %d failed (null): " \
+ msg "\n", __func__, __LINE__, ##__VA_ARGS__); \
+ return TEST_FAILED; \
+ } \
+} while (0)
+
+struct unit_test_case {
+ int (*setup)(void);
+ void (*teardown)(void);
+ int (*testcase)(void);
+ const char *name;
+};
+
+#define TEST_CASE(testcase) {NULL, NULL, testcase, #testcase}
+
+#define TEST_CASE_ST(setup, teardown, testcase) \
+ {setup, teardown, testcase, #testcase}
+
+#define TEST_CASES_END() {NULL, NULL, NULL, NULL}
+
+struct unit_test_suite {
+ const char *suite_name;
+ int (*setup)(void);
+ void (*teardown)(void);
+ struct unit_test_case unit_test_cases[];
+};
+
+int unit_test_suite_runner(struct unit_test_suite *suite);
+
+typedef int (test_callback)(void);
+TAILQ_HEAD(test_commands_list, test_command);
+struct test_command {
+ TAILQ_ENTRY(test_command) next;
+ const char *command;
+ test_callback *callback;
+};
+
+void add_test_command(struct test_command *t);
+
+/* Register a test function */
+#define REGISTER_TEST_COMMAND(name, testsuite) \
+ static int test_func_##name(void) \
+ { \
+ return unit_test_suite_runner(&testsuite); \
+ } \
+ static struct test_command test_struct_##name = { \
+ .command = RTE_STR(name), \
+ .callback = test_func_##name, \
+ }; \
+ static void __attribute__((constructor, used)) \
+ test_register_##name(void) \
+ { \
+ add_test_command(&test_struct_##name); \
+ }
+
+const char *get_vector_filename(void);
+
+unsigned int get_num_ops(void);
+
+unsigned int get_burst_sz(void);
+
+unsigned int get_num_lcores(void);
+
+#endif
diff --git a/app/test-bbdev/test-bbdev.py b/app/test-bbdev/test-bbdev.py
new file mode 100755
index 0000000..ce78149
--- /dev/null
+++ b/app/test-bbdev/test-bbdev.py
@@ -0,0 +1,111 @@
+#!/usr/bin/env python
+
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+import sys
+import os
+import argparse
+import subprocess
+import shlex
+
+from threading import Timer
+
+def kill(process):
+ print "ERROR: Test app timed out"
+ process.kill()
+
+if "RTE_SDK" in os.environ:
+ dpdk_path = os.environ["RTE_SDK"]
+else:
+ dpdk_path = "../.."
+
+if "RTE_TARGET" in os.environ:
+ dpdk_target = os.environ["RTE_TARGET"]
+else:
+ dpdk_target = "x86_64-native-linuxapp-gcc"
+
+parser = argparse.ArgumentParser(
+ description='BBdev Unit Test Application',
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+parser.add_argument("-p", "--testapp-path",
+ help="specifies path to the bbdev test app",
+ default=dpdk_path + "/" + dpdk_target + "/app/testbbdev")
+parser.add_argument("-e", "--eal-params",
+ help="EAL arguments which are passed to the test app",
+ default="--vdev=bbdev_null0")
+parser.add_argument("-t", "--timeout",
+ type=int,
+ help="Timeout in seconds",
+ default=300)
+parser.add_argument("-c", "--test-cases",
+ nargs="+",
+ help="Defines test cases to run. Run all if not specified")
+parser.add_argument("-v", "--test-vector",
+ nargs="+",
+ help="Specifies paths to the test vector files.",
+ default=[dpdk_path +
+ "/app/test-bbdev/test_vectors/bbdev_vector_null.data"])
+parser.add_argument("-n", "--num-ops",
+ type=int,
+ help="Number of operations to process on device.",
+ default=32)
+parser.add_argument("-b", "--burst-size",
+ nargs="+",
+ type=int,
+ help="Operations enqueue/dequeue burst size.",
+ default=[32])
+parser.add_argument("-l", "--num-lcores",
+ type=int,
+ help="Number of lcores to run.",
+ default=16)
+
+args = parser.parse_args()
+
+if not os.path.exists(args.testapp_path):
+ print "No such file: " + args.testapp_path
+ sys.exit(1)
+
+params = [args.testapp_path]
+if args.eal_params:
+ params.extend(shlex.split(args.eal_params))
+
+params.extend(["--"])
+
+if args.num_ops:
+ params.extend(["-n", str(args.num_ops)])
+
+if args.num_lcores:
+ params.extend(["-l", str(args.num_lcores)])
+
+if args.test_cases:
+ params.extend(["-c"])
+ params.extend([",".join(args.test_cases)])
+
+exit_status = 0
+for vector in args.test_vector:
+ for burst_size in args.burst_size:
+ call_params = params[:]
+ call_params.extend(["-v", vector])
+ call_params.extend(["-b", str(burst_size)])
+ params_string = " ".join(call_params)
+
+ print("Executing: {}".format(params_string))
+ app_proc = subprocess.Popen(call_params)
+ timer = None
+ if args.timeout > 0:
+ timer = Timer(args.timeout, kill, [app_proc])
+ timer.start()
+
+ try:
+ app_proc.communicate()
+ except:
+ print("Error: failed to execute: {}".format(params_string))
+ finally:
+ if timer:
+ timer.cancel()
+
+ if app_proc.returncode != 0:
+ exit_status = 1
+ print("ERROR TestCase failed. Failed test for vector {}. Return code: {}".format(
+ vector, app_proc.returncode))
+
+sys.exit(exit_status)
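The nested loops above run the test app once per (vector, burst size) pair. That invocation matrix can be expressed with itertools.product; a sketch with made-up inputs, not part of the script:

```python
import itertools

vectors = ["bbdev_vector_null.data", "bbdev_vector_td_default.data"]
burst_sizes = [8, 32]

# One test-app invocation per (vector, burst size) combination,
# mirroring the nested for-loops in test-bbdev.py.
runs = [("-v", v, "-b", str(b))
        for v, b in itertools.product(vectors, burst_sizes)]
print(len(runs))  # 4
```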
diff --git a/app/test-bbdev/test_bbdev.c b/app/test-bbdev/test_bbdev.c
new file mode 100644
index 0000000..10579ea
--- /dev/null
+++ b/app/test-bbdev/test_bbdev.c
@@ -0,0 +1,1378 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_cycles.h>
+
+#include <rte_bus_vdev.h>
+
+#include <rte_bbdev.h>
+#include <rte_bbdev_op.h>
+#include <rte_bbdev_pmd.h>
+
+#include "main.h"
+
+
+#define BBDEV_NAME_NULL ("bbdev_null")
+
+struct bbdev_testsuite_params {
+ struct rte_bbdev_queue_conf qconf;
+};
+
+static struct bbdev_testsuite_params testsuite_params;
+
+static uint8_t null_dev_id;
+
+static int
+testsuite_setup(void)
+{
+ uint8_t nb_devs;
+ int ret;
+ char buf[RTE_BBDEV_NAME_MAX_LEN];
+
+ /* Create test device */
+ snprintf(buf, sizeof(buf), "%s_unittest", BBDEV_NAME_NULL);
+ ret = rte_vdev_init(buf, NULL);
+ TEST_ASSERT(ret == 0, "Failed to create instance of pmd: %s", buf);
+
+ nb_devs = rte_bbdev_count();
+ TEST_ASSERT(nb_devs != 0, "No devices found");
+
+ /* Most recently created device is our device */
+ null_dev_id = nb_devs - 1;
+
+ return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+ char buf[RTE_BBDEV_NAME_MAX_LEN];
+
+ snprintf(buf, sizeof(buf), "%s_unittest", BBDEV_NAME_NULL);
+ rte_vdev_uninit(buf);
+}
+
+static int
+ut_setup(void)
+{
+ struct bbdev_testsuite_params *ts_params = &testsuite_params;
+ uint8_t num_queues;
+
+ /* Valid queue configuration */
+ ts_params->qconf.priority = 0;
+ ts_params->qconf.socket = SOCKET_ID_ANY;
+ ts_params->qconf.deferred_start = 1;
+
+ num_queues = 1;
+ TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(null_dev_id, num_queues,
+ SOCKET_ID_ANY), "Failed to setup queues for bbdev %u",
+ 0);
+
+ /* Start the device */
+ TEST_ASSERT_SUCCESS(rte_bbdev_start(null_dev_id),
+ "Failed to start bbdev %u", 0);
+
+ return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+ rte_bbdev_close(null_dev_id);
+}
+
+static int
+test_bbdev_configure_invalid_dev_id(void)
+{
+ uint8_t dev_id;
+ uint8_t num_queues;
+
+ num_queues = 1;
+ for (dev_id = 0; dev_id < RTE_BBDEV_MAX_DEVS; dev_id++) {
+ if (!rte_bbdev_is_valid(dev_id)) {
+ TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id,
+ num_queues, SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "invalid dev_num %u", dev_id);
+ TEST_ASSERT(rte_bbdev_intr_enable(dev_id) == -ENODEV,
+ "Failed test for rte_bbdev_intr_enable: "
+ "invalid dev_num %u", dev_id);
+ break;
+ }
+ }
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_configure_invalid_num_queues(void)
+{
+ struct rte_bbdev_info info;
+ uint8_t dev_id, num_devs;
+ uint8_t num_queues;
+ int return_value;
+
+ TEST_ASSERT((num_devs = rte_bbdev_count()) >= 1,
+ "Need at least %d devices for test", 1);
+
+ /* valid num_queues values */
+ num_queues = 8;
+
+ /* valid dev_id values */
+ dev_id = null_dev_id;
+
+ /* Stop the device in case it's started so it can be configured */
+ rte_bbdev_stop(dev_id);
+
+ TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, 0, SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "invalid num_queues %d", 0);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id, num_queues,
+ SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "invalid dev_num %u", dev_id);
+
+ TEST_ASSERT_FAIL(return_value = rte_bbdev_info_get(dev_id, NULL),
+ "Failed test for rte_bbdev_info_get: "
+ "returned value:%i", return_value);
+
+ TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+ "Failed test for rte_bbdev_info_get: "
+ "invalid return value:%i", return_value);
+
+ TEST_ASSERT(info.num_queues == num_queues,
+ "Failed test for rte_bbdev_info_get: "
+ "invalid num_queues:%u", info.num_queues);
+
+ num_queues = info.drv.max_num_queues;
+ TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id, num_queues,
+ SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "invalid num_queues: %u", num_queues);
+
+ num_queues++;
+ TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, num_queues,
+ SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "invalid num_queues: %u", num_queues);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_configure_stop_device(void)
+{
+ struct rte_bbdev_info info;
+ uint8_t dev_id;
+ int return_value;
+
+ /* valid dev_id values */
+ dev_id = null_dev_id;
+
+ /* Stop the device so it can be configured */
+ rte_bbdev_stop(dev_id);
+
+ TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+ "Failed test for rte_bbdev_info_get: "
+ "invalid return value from "
+ "rte_bbdev_info_get function: %i", return_value);
+
+ TEST_ASSERT_SUCCESS(info.started, "Failed test for rte_bbdev_info_get: "
+ "started value: %u", info.started);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id,
+ info.drv.max_num_queues, SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "device should be stopped, dev_id: %u", dev_id);
+
+ return_value = rte_bbdev_intr_enable(dev_id);
+ TEST_ASSERT(return_value != -EBUSY,
+ "Failed test for rte_bbdev_intr_enable: device should be stopped, dev_id: %u",
+ dev_id);
+
+ /* Start the device so it cannot be configured */
+ TEST_ASSERT_FAIL(rte_bbdev_start(RTE_BBDEV_MAX_DEVS),
+ "Failed to start bbdev %u", dev_id);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_start(dev_id),
+ "Failed to start bbdev %u", dev_id);
+
+ TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+ "Failed test for rte_bbdev_info_get: "
+ "invalid return value from "
+ "rte_bbdev_info_get function: %i", return_value);
+
+ TEST_ASSERT_FAIL(info.started, "Failed test for rte_bbdev_info_get: "
+ "started value: %u", info.started);
+
+ TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id,
+ info.drv.max_num_queues, SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "device should be started, dev_id: %u", dev_id);
+
+ return_value = rte_bbdev_intr_enable(dev_id);
+ TEST_ASSERT(return_value == -EBUSY,
+ "Failed test for rte_bbdev_intr_enable: device should be started, dev_id: %u",
+ dev_id);
+
+ /* Stop again the device so it can be once again configured */
+ TEST_ASSERT_FAIL(rte_bbdev_stop(RTE_BBDEV_MAX_DEVS),
+ "Failed to stop bbdev %u", dev_id);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_stop(dev_id), "Failed to stop bbdev %u",
+ dev_id);
+
+ TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+ "Failed test for rte_bbdev_info_get: "
+ "invalid return value from "
+ "rte_bbdev_info_get function: %i", return_value);
+
+ TEST_ASSERT_SUCCESS(info.started, "Failed test for rte_bbdev_info_get: "
+ "started value: %u", info.started);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id,
+ info.drv.max_num_queues, SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "device should be stopped, dev_id: %u", dev_id);
+
+ return_value = rte_bbdev_intr_enable(dev_id);
+ TEST_ASSERT(return_value != -EBUSY,
+ "Failed test for rte_bbdev_intr_enable: device should be stopped, dev_id: %u",
+ dev_id);
+
+ return TEST_SUCCESS;
+}
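The state-machine rules exercised above boil down to: queues and interrupts may only be configured while the device is stopped, and start/stop toggle that permission. A short application-side sketch of that lifecycle (illustrative only; assumes a probed device and omits error handling for brevity):

```c
/* Sketch of the bbdev lifecycle the test above exercises; assumes a
 * probed device identified by dev_id, error handling trimmed. */
static int
bring_up_bbdev(uint8_t dev_id)
{
	struct rte_bbdev_info info;

	rte_bbdev_info_get(dev_id, &info);

	/* Allowed only while the device is stopped */
	if (rte_bbdev_setup_queues(dev_id, info.drv.max_num_queues,
			SOCKET_ID_ANY) != 0)
		return -1;
	rte_bbdev_intr_enable(dev_id);

	/* After start, setup_queues fails and intr_enable returns -EBUSY
	 * until rte_bbdev_stop() is called again */
	return rte_bbdev_start(dev_id);
}
```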
+
+static int
+test_bbdev_configure_stop_queue(void)
+{
+ struct bbdev_testsuite_params *ts_params = &testsuite_params;
+ struct rte_bbdev_info info;
+ struct rte_bbdev_queue_info qinfo;
+ uint8_t dev_id;
+ uint16_t queue_id;
+ int return_value;
+
+ /* Valid dev_id values */
+ dev_id = null_dev_id;
+
+ /* Valid queue_id values */
+ queue_id = 0;
+
+ rte_bbdev_stop(dev_id);
+ TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+ "Failed test for rte_bbdev_info_get: "
+ "invalid return value:%i", return_value);
+
+ /* Valid queue configuration */
+ ts_params->qconf.queue_size = info.drv.queue_size_lim;
+ ts_params->qconf.priority = info.drv.max_queue_priority;
+
+ /* Device - started; queue - started */
+ rte_bbdev_start(dev_id);
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "queue:%u on device:%u should be stopped",
+ queue_id, dev_id);
+
+ /* Device - stopped; queue - started */
+ rte_bbdev_stop(dev_id);
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "queue:%u on device:%u should be stopped",
+ queue_id, dev_id);
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_stop(RTE_BBDEV_MAX_DEVS, queue_id),
+ "Failed test for rte_bbdev_queue_stop "
+ "invalid dev_id ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_stop(dev_id, RTE_MAX_QUEUES_PER_PORT),
+ "Failed test for rte_bbdev_queue_stop "
+ "invalid queue_id ");
+
+ /* Device - stopped; queue - stopped */
+ rte_bbdev_queue_stop(dev_id, queue_id);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "queue:%u on device:%u should be stopped", queue_id,
+ dev_id);
+
+ TEST_ASSERT_SUCCESS(return_value = rte_bbdev_queue_info_get(dev_id,
+ queue_id, &qinfo),
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid return value from "
+ "rte_bbdev_queue_info_get function: %i", return_value);
+
+ TEST_ASSERT(qinfo.conf.socket == ts_params->qconf.socket,
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid socket:%d", qinfo.conf.socket);
+
+ TEST_ASSERT(qinfo.conf.queue_size == ts_params->qconf.queue_size,
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid queue_size:%u", qinfo.conf.queue_size);
+
+ TEST_ASSERT(qinfo.conf.priority == ts_params->qconf.priority,
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid priority:%u", qinfo.conf.priority);
+
+ TEST_ASSERT(qinfo.conf.deferred_start ==
+ ts_params->qconf.deferred_start,
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid deferred_start:%u", qinfo.conf.deferred_start);
+
+ /* Device - started; queue - stopped */
+ rte_bbdev_start(dev_id);
+ rte_bbdev_queue_stop(dev_id, queue_id);
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "queue:%u on device:%u should be stopped", queue_id,
+ dev_id);
+
+ rte_bbdev_stop(dev_id);
+
+ /* After rte_bbdev_start(dev_id):
+ * - queue should still be stopped if deferred_start == 1
+ */
+ rte_bbdev_start(dev_id);
+
+ TEST_ASSERT_SUCCESS(return_value = rte_bbdev_queue_info_get(dev_id,
+ queue_id, &qinfo),
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid return value from "
+ "rte_bbdev_queue_info_get function: %i", return_value);
+
+ TEST_ASSERT(qinfo.started == 0,
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid value for qinfo.started:%u", qinfo.started);
+
+ rte_bbdev_stop(dev_id);
+
+ /* After rte_bbdev_start(dev_id):
+ * - queue should be started if deferred_start == 0
+ */
+ ts_params->qconf.deferred_start = 0;
+ rte_bbdev_queue_configure(dev_id, queue_id, &ts_params->qconf);
+ rte_bbdev_start(dev_id);
+
+ TEST_ASSERT_SUCCESS(return_value = rte_bbdev_queue_info_get(dev_id,
+ queue_id, &qinfo),
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid return value from "
+ "rte_bbdev_queue_info_get function: %i", return_value);
+
+ TEST_ASSERT(qinfo.started == 1,
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid value for qinfo.started:%u", qinfo.started);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_configure_invalid_queue_configure(void)
+{
+ struct bbdev_testsuite_params *ts_params = &testsuite_params;
+ int return_value;
+ struct rte_bbdev_info info;
+ uint8_t dev_id;
+ uint16_t queue_id;
+
+ /* Valid dev_id values */
+ dev_id = null_dev_id;
+
+ /* Valid queue_id values */
+ queue_id = 0;
+
+ rte_bbdev_stop(dev_id);
+
+ TEST_ASSERT_SUCCESS(return_value = rte_bbdev_info_get(dev_id, &info),
+ "Failed test for rte_bbdev_info_get: "
+ "invalid return value:%i", return_value);
+
+ rte_bbdev_queue_stop(dev_id, queue_id);
+
+ ts_params->qconf.queue_size = info.drv.queue_size_lim + 1;
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "invalid value qconf.queue_size: %u",
+ ts_params->qconf.queue_size);
+
+ ts_params->qconf.queue_size = info.drv.queue_size_lim;
+ ts_params->qconf.priority = info.drv.max_queue_priority + 1;
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "invalid value qconf.priority: %u",
+ ts_params->qconf.priority);
+
+ ts_params->qconf.priority = info.drv.max_queue_priority;
+ queue_id = info.num_queues;
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "invalid value queue_id: %u", queue_id);
+
+ queue_id = 0;
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id, NULL),
+ "Failed test for rte_bbdev_queue_configure: "
+ "NULL qconf structure ");
+
+ ts_params->qconf.socket = RTE_MAX_NUMA_NODES;
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "invalid socket number ");
+
+ ts_params->qconf.socket = SOCKET_ID_ANY;
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "valid queue configuration ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(RTE_BBDEV_MAX_DEVS, queue_id,
+ &ts_params->qconf),
+ "Failed test for rte_bbdev_queue_configure: "
+ "invalid dev_id");
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id, NULL),
+ "Failed test for rte_bbdev_queue_configure: "
+ "NULL qconf structure (default configuration) ");
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_op_pool(void)
+{
+ struct rte_mempool *mp;
+
+ unsigned int dec_size = sizeof(struct rte_bbdev_dec_op);
+ unsigned int enc_size = sizeof(struct rte_bbdev_enc_op);
+
+ const char *pool_dec = "Test_DEC";
+ const char *pool_enc = "Test_ENC";
+
+ /* Valid pool configuration */
+ uint32_t size = 256;
+ uint32_t cache_size = 128;
+
+ TEST_ASSERT(rte_bbdev_op_pool_create(NULL,
+ RTE_BBDEV_OP_TURBO_DEC, size, cache_size, 0) == NULL,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "NULL name parameter");
+
+ TEST_ASSERT((mp = rte_bbdev_op_pool_create(pool_dec,
+ RTE_BBDEV_OP_TURBO_DEC, size, cache_size, 0)) != NULL,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "returned value is empty");
+
+ TEST_ASSERT(mp->size == size,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "invalid size of the mempool, mp->size: %u", mp->size);
+
+ TEST_ASSERT(mp->cache_size == cache_size,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "invalid cache size of the mempool, mp->cache_size: %u",
+ mp->cache_size);
+
+ TEST_ASSERT_SUCCESS(strcmp(mp->name, pool_dec),
+ "Failed test for rte_bbdev_op_pool_create: "
+ "invalid name of mempool, mp->name: %s", mp->name);
+
+ TEST_ASSERT(mp->elt_size == dec_size,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "invalid element size for RTE_BBDEV_OP_TURBO_DEC, "
+ "mp->elt_size: %u", mp->elt_size);
+
+ rte_mempool_free(mp);
+
+ TEST_ASSERT((mp = rte_bbdev_op_pool_create(pool_enc,
+ RTE_BBDEV_OP_TURBO_ENC, size, cache_size, 0)) != NULL,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "returned value is empty");
+
+ TEST_ASSERT(mp->elt_size == enc_size,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "invalid element size for RTE_BBDEV_OP_TURBO_ENC, "
+ "mp->elt_size: %u", mp->elt_size);
+
+ rte_mempool_free(mp);
+
+ TEST_ASSERT((mp = rte_bbdev_op_pool_create("Test_NONE",
+ RTE_BBDEV_OP_NONE, size, cache_size, 0)) != NULL,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "returned value is empty for RTE_BBDEV_OP_NONE");
+
+ TEST_ASSERT(mp->elt_size == (enc_size > dec_size ? enc_size : dec_size),
+ "Failed test for rte_bbdev_op_pool_create: "
+ "invalid size for RTE_BBDEV_OP_NONE, mp->elt_size: %u",
+ mp->elt_size);
+
+ rte_mempool_free(mp);
+
+ TEST_ASSERT((mp = rte_bbdev_op_pool_create("Test_INV",
+ RTE_BBDEV_OP_TYPE_COUNT, size, cache_size, 0)) == NULL,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "returned value is not NULL for invalid type");
+
+ /* Invalid pool configuration */
+ size = 128;
+ cache_size = 256;
+
+ TEST_ASSERT((mp = rte_bbdev_op_pool_create("Test_InvSize",
+ RTE_BBDEV_OP_NONE, size, cache_size, 0)) == NULL,
+ "Failed test for rte_bbdev_op_pool_create: "
+ "returned value should be empty "
+ "because size of per-lcore local cache "
+ "is greater than size of the mempool.");
+
+ return TEST_SUCCESS;
+}
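For context, the pool semantics verified above match plain rte_mempool behaviour; a minimal usage sketch (the pool name and sizes are arbitrary, and cache_size must not exceed the pool size):

```c
/* Create a decode-op pool, allocate and free ops; illustrative only */
struct rte_mempool *mp = rte_bbdev_op_pool_create("demo_dec_pool",
		RTE_BBDEV_OP_TURBO_DEC, 256, 128, 0);

struct rte_bbdev_dec_op *ops[16];
if (mp != NULL && rte_bbdev_dec_op_alloc_bulk(mp, ops, 16) == 0) {
	/* ... fill ops, enqueue, dequeue ... */
	rte_bbdev_dec_op_free_bulk(ops, 16);
}
rte_mempool_free(mp);
```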
+
+/**
+ * Create pool of OP types RTE_BBDEV_OP_NONE, RTE_BBDEV_OP_TURBO_DEC and
+ * RTE_BBDEV_OP_TURBO_ENC and check that only ops of that type can be
+ * allocated
+ */
+static int
+test_bbdev_op_type(void)
+{
+ struct rte_mempool *mp_dec;
+
+ const unsigned int OPS_COUNT = 32;
+ struct rte_bbdev_dec_op *dec_ops_arr[OPS_COUNT];
+ struct rte_bbdev_enc_op *enc_ops_arr[OPS_COUNT];
+
+ const char *pool_dec = "Test_op_dec";
+
+ /* Valid pool configuration */
+ uint32_t num_elements = 256;
+ uint32_t cache_size = 128;
+
+ /* mempool type : RTE_BBDEV_OP_TURBO_DEC */
+ mp_dec = rte_bbdev_op_pool_create(pool_dec,
+ RTE_BBDEV_OP_TURBO_DEC, num_elements, cache_size, 0);
+ TEST_ASSERT(mp_dec != NULL, "Failed to create %s mempool", pool_dec);
+
+ TEST_ASSERT(rte_bbdev_dec_op_alloc_bulk(mp_dec, dec_ops_arr, 1) == 0,
+ "Failed test for rte_bbdev_op_alloc_bulk TURBO_DEC: "
+ "OPs type: RTE_BBDEV_OP_TURBO_DEC");
+
+ TEST_ASSERT(rte_bbdev_enc_op_alloc_bulk(mp_dec, enc_ops_arr, 1) != 0,
+ "Failed test for rte_bbdev_op_alloc_bulk TURBO_DEC: "
+ "OPs type: RTE_BBDEV_OP_TURBO_ENC");
+
+ rte_mempool_free(mp_dec);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_op_pool_size(void)
+{
+ struct rte_mempool *mp_none;
+
+ const unsigned int OPS_COUNT = 128;
+ struct rte_bbdev_enc_op *ops_enc_arr[OPS_COUNT];
+ struct rte_bbdev_enc_op *ops_ext_arr[OPS_COUNT];
+ struct rte_bbdev_enc_op *ops_ext2_arr[OPS_COUNT];
+
+ const char *pool_none = "Test_pool_size";
+
+ /* Valid pool configuration */
+ uint32_t num_elements = 256;
+ uint32_t cache_size = 0;
+
+ /* Create mempool type : RTE_BBDEV_OP_TURBO_ENC, size : 256 */
+ mp_none = rte_bbdev_op_pool_create(pool_none, RTE_BBDEV_OP_TURBO_ENC,
+ num_elements, cache_size, 0);
+ TEST_ASSERT(mp_none != NULL, "Failed to create %s mempool", pool_none);
+
+ /* Allocate 128 RTE_BBDEV_OP_TURBO_ENC ops */
+ rte_bbdev_enc_op_alloc_bulk(mp_none, ops_enc_arr, OPS_COUNT);
+
+ /* Allocate 128 more RTE_BBDEV_OP_TURBO_ENC ops */
+ TEST_ASSERT(rte_bbdev_enc_op_alloc_bulk(mp_none, ops_ext_arr,
+ OPS_COUNT) == 0,
+ "Failed test for allocating bbdev ops: "
+ "Mempool size: 256, Free: 128, Attempted to allocate: 128");
+
+ /* Try allocating 128 more RTE_BBDEV_OP_TURBO_ENC ops, should fail */
+ TEST_ASSERT(rte_bbdev_enc_op_alloc_bulk(mp_none, ops_ext2_arr,
+ OPS_COUNT) != 0,
+ "Failed test for allocating bbdev ops: "
+ "Mempool size: 256, Free: 0, Attempted to allocate: 128");
+
+ /* Free 128 RTE_BBDEV_OP_TURBO_ENC ops */
+ rte_bbdev_enc_op_free_bulk(ops_enc_arr, OPS_COUNT);
+
+ /*
+ * Try allocating 128 RTE_BBDEV_OP_TURBO_ENC ops again, this should
+ * succeed. With cache_size > 0, a bulk allocation of more than 127
+ * ops after the free could fail, hence cache_size is 0 for this pool.
+ */
+ TEST_ASSERT(rte_bbdev_enc_op_alloc_bulk(mp_none, ops_ext2_arr,
+ OPS_COUNT) == 0,
+ "Failed test for allocating ops after mempool freed: "
+ "Mempool size: 256, Free: 128, Attempted to allocate: 128");
+
+ rte_mempool_free(mp_none);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_count(void)
+{
+ uint8_t num_devs, num_valid_devs = 0;
+
+ for (num_devs = 0; num_devs < RTE_BBDEV_MAX_DEVS; num_devs++) {
+ if (rte_bbdev_is_valid(num_devs))
+ num_valid_devs++;
+ }
+
+ num_devs = rte_bbdev_count();
+ TEST_ASSERT(num_valid_devs == num_devs,
+ "Failed test for rte_bbdev_is_valid: "
+ "invalid num_devs %u ", num_devs);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_stats(void)
+{
+ uint8_t dev_id = null_dev_id;
+ uint16_t queue_id = 0;
+ struct rte_bbdev_dec_op *dec_ops[4096] = { 0 };
+ struct rte_bbdev_dec_op *dec_proc_ops[4096] = { 0 };
+ struct rte_bbdev_enc_op *enc_ops[4096] = { 0 };
+ struct rte_bbdev_enc_op *enc_proc_ops[4096] = { 0 };
+ uint16_t num_ops = 236;
+ struct rte_bbdev_stats stats;
+ struct bbdev_testsuite_params *ts_params = &testsuite_params;
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_stop(dev_id, queue_id),
+ "Failed to stop queue %u on device %u ", queue_id,
+ dev_id);
+ TEST_ASSERT_SUCCESS(rte_bbdev_stop(dev_id),
+ "Failed to stop bbdev %u ", dev_id);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed to configure queue %u on device %u ",
+ queue_id, dev_id);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_start(dev_id),
+ "Failed to start bbdev %u ", dev_id);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_start(dev_id, queue_id),
+ "Failed to start queue %u on device %u ", queue_id,
+ dev_id);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_start(dev_id, queue_id),
+ "Failed to start queue %u on device %u ", queue_id,
+ dev_id);
+
+ /* Tests after enqueue operation */
+ rte_bbdev_enqueue_enc_ops(dev_id, queue_id, enc_ops, num_ops);
+ rte_bbdev_enqueue_dec_ops(dev_id, queue_id, dec_ops, num_ops);
+
+ TEST_ASSERT_FAIL(rte_bbdev_stats_get(RTE_BBDEV_MAX_DEVS, &stats),
+ "Failed test for rte_bbdev_stats_get: invalid dev_id ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_stats_get(dev_id, NULL),
+ "Failed test for rte_bbdev_stats_get: NULL stats pointer ");
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+ "Failed test for rte_bbdev_stats_get on device %u ",
+ dev_id);
+
+ TEST_ASSERT(stats.enqueued_count == 2 * num_ops,
+ "Failed test for rte_bbdev_enqueue_ops: "
+ "invalid enqueued_count %" PRIu64 " ",
+ stats.enqueued_count);
+
+ TEST_ASSERT(stats.dequeued_count == 0,
+ "Failed test for rte_bbdev_stats_get: "
+ "invalid dequeued_count %" PRIu64 " ",
+ stats.dequeued_count);
+
+ /* Tests after dequeue operation */
+ rte_bbdev_dequeue_enc_ops(dev_id, queue_id, enc_proc_ops, num_ops);
+ rte_bbdev_dequeue_dec_ops(dev_id, queue_id, dec_proc_ops, num_ops);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+ "Failed test for rte_bbdev_stats_get on device %u ",
+ dev_id);
+
+ TEST_ASSERT(stats.dequeued_count == 2 * num_ops,
+ "Failed test for rte_bbdev_dequeue_ops: "
+ "invalid dequeued_count %" PRIu64 " ",
+ stats.dequeued_count);
+
+ TEST_ASSERT(stats.enqueue_err_count == 0,
+ "Failed test for rte_bbdev_stats_get: "
+ "invalid enqueue_err_count %" PRIu64 " ",
+ stats.enqueue_err_count);
+
+ TEST_ASSERT(stats.dequeue_err_count == 0,
+ "Failed test for rte_bbdev_stats_get: "
+ "invalid dequeue_err_count %" PRIu64 " ",
+ stats.dequeue_err_count);
+
+ /* Tests after reset operation */
+ TEST_ASSERT_FAIL(rte_bbdev_stats_reset(RTE_BBDEV_MAX_DEVS),
+ "Failed test for rte_bbdev_stats_reset: invalid dev_id ");
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_stats_reset(dev_id),
+ "Failed to reset statistics for device %u ", dev_id);
+ TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+ "Failed test for rte_bbdev_stats_get on device %u ",
+ dev_id);
+
+ TEST_ASSERT(stats.enqueued_count == 0,
+ "Failed test for rte_bbdev_stats_reset: "
+ "invalid enqueued_count %" PRIu64 " ",
+ stats.enqueued_count);
+
+ TEST_ASSERT(stats.dequeued_count == 0,
+ "Failed test for rte_bbdev_stats_reset: "
+ "invalid dequeued_count %" PRIu64 " ",
+ stats.dequeued_count);
+
+ TEST_ASSERT(stats.enqueue_err_count == 0,
+ "Failed test for rte_bbdev_stats_reset: "
+ "invalid enqueue_err_count %" PRIu64 " ",
+ stats.enqueue_err_count);
+
+ TEST_ASSERT(stats.dequeue_err_count == 0,
+ "Failed test for rte_bbdev_stats_reset: "
+ "invalid dequeue_err_count %" PRIu64 " ",
+ stats.dequeue_err_count);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_driver_init(void)
+{
+ struct rte_bbdev *dev1, *dev2;
+ const char *name = "dev_name";
+ char name_tmp[16];
+ int num_devs, num_devs_tmp;
+
+ dev1 = rte_bbdev_allocate(NULL);
+ TEST_ASSERT(dev1 == NULL,
+ "Failed test for rte_bbdev_allocate: NULL name accepted");
+
+ dev1 = rte_bbdev_allocate(name);
+ TEST_ASSERT(dev1 != NULL, "Failed to initialize bbdev driver");
+
+ dev2 = rte_bbdev_allocate(name);
+ TEST_ASSERT(dev2 == NULL,
+ "Failed to initialize bbdev driver: "
+ "driver with the same name has been initialized before");
+
+ num_devs = rte_bbdev_count() - 1;
+ num_devs_tmp = num_devs;
+
+ /* Initialize the maximum amount of devices */
+ do {
+ sprintf(name_tmp, "%s%i", "name_", num_devs);
+ dev2 = rte_bbdev_allocate(name_tmp);
+ TEST_ASSERT(dev2 != NULL,
+ "Failed to initialize bbdev driver");
+ ++num_devs;
+ } while (num_devs < (RTE_BBDEV_MAX_DEVS - 1));
+
+ sprintf(name_tmp, "%s%i", "name_", num_devs);
+ dev2 = rte_bbdev_allocate(name_tmp);
+ TEST_ASSERT(dev2 == NULL, "Initialized bbdev driver number %d: "
+ "more drivers than RTE_BBDEV_MAX_DEVS: %d ", num_devs,
+ RTE_BBDEV_MAX_DEVS);
+
+ num_devs--;
+
+ while (num_devs >= num_devs_tmp) {
+ sprintf(name_tmp, "%s%i", "name_", num_devs);
+ dev2 = rte_bbdev_get_named_dev(name_tmp);
+ TEST_ASSERT_SUCCESS(rte_bbdev_release(dev2),
+ "Failed to uninitialize bbdev driver %s ",
+ name_tmp);
+ num_devs--;
+ }
+
+ TEST_ASSERT(dev1->data->dev_id < RTE_BBDEV_MAX_DEVS,
+ "Failed test rte_bbdev_allocate: "
+ "invalid dev_id %" PRIu8 ", max number of devices %d ",
+ dev1->data->dev_id, RTE_BBDEV_MAX_DEVS);
+
+ TEST_ASSERT(dev1->state == RTE_BBDEV_INITIALIZED,
+ "Failed test rte_bbdev_allocate: "
+ "invalid state %d (0 - RTE_BBDEV_UNUSED, 1 - RTE_BBDEV_INITIALIZED)",
+ dev1->state);
+
+ TEST_ASSERT_FAIL(rte_bbdev_release(NULL),
+ "Failed to uninitialize bbdev driver with NULL bbdev");
+
+ sprintf(name_tmp, "%s", "invalid_name");
+ dev2 = rte_bbdev_get_named_dev(name_tmp);
+ TEST_ASSERT_FAIL(rte_bbdev_release(dev2),
+ "Failed to uninitialize bbdev driver with invalid name");
+
+ dev2 = rte_bbdev_get_named_dev(name);
+ TEST_ASSERT_SUCCESS(rte_bbdev_release(dev2),
+ "Failed to uninitialize bbdev driver: %s ", name);
+
+ return TEST_SUCCESS;
+}
+
+static void
+event_callback(uint16_t dev_id, enum rte_bbdev_event_type type, void *param,
+ void *ret_param)
+{
+ RTE_SET_USED(dev_id);
+ RTE_SET_USED(ret_param);
+
+ if (param == NULL)
+ return;
+
+ if (type == RTE_BBDEV_EVENT_UNKNOWN ||
+ type == RTE_BBDEV_EVENT_ERROR ||
+ type == RTE_BBDEV_EVENT_MAX)
+ *(int *)param = type;
+}
+
+static int
+test_bbdev_callback(void)
+{
+ struct rte_bbdev *dev1, *dev2;
+ const char *name = "dev_name1";
+ const char *name2 = "dev_name2";
+ int event_status;
+ uint8_t invalid_dev_id = RTE_BBDEV_MAX_DEVS;
+ enum rte_bbdev_event_type invalid_event_type = RTE_BBDEV_EVENT_MAX;
+ uint8_t dev_id;
+
+ dev1 = rte_bbdev_allocate(name);
+ TEST_ASSERT(dev1 != NULL, "Failed to initialize bbdev driver");
+
+ /*
+ * RTE_BBDEV_EVENT_UNKNOWN - unregistered
+ * RTE_BBDEV_EVENT_ERROR - unregistered
+ */
+ event_status = -1;
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+ TEST_ASSERT(event_status == -1,
+ "Failed test for rte_bbdev_pmd_callback_process: "
+ "events were not registered ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_callback_register(dev1->data->dev_id,
+ RTE_BBDEV_EVENT_MAX, event_callback, NULL),
+ "Failed to register callback for RTE_BBDEV_EVENT_MAX ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(dev1->data->dev_id,
+ RTE_BBDEV_EVENT_MAX, event_callback, NULL),
+ "Failed to unregister RTE_BBDEV_EVENT_MAX ");
+
+ /*
+ * RTE_BBDEV_EVENT_UNKNOWN - registered
+ * RTE_BBDEV_EVENT_ERROR - unregistered
+ */
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev1->data->dev_id,
+ RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+ "Failed to register callback for RTE_BBDEV_EVENT_UNKNOWN");
+
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ TEST_ASSERT(event_status == 0,
+ "Failed test for rte_bbdev_pmd_callback_process "
+ "for RTE_BBDEV_EVENT_UNKNOWN ");
+
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+ TEST_ASSERT(event_status == 0,
+ "Failed test for rte_bbdev_pmd_callback_process: "
+ "event RTE_BBDEV_EVENT_ERROR was not registered ");
+
+ /*
+ * RTE_BBDEV_EVENT_UNKNOWN - registered
+ * RTE_BBDEV_EVENT_ERROR - registered
+ */
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev1->data->dev_id,
+ RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+ "Failed to register callback for RTE_BBDEV_EVENT_ERROR ");
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev1->data->dev_id,
+ RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+ "Failed to register callback for RTE_BBDEV_EVENT_ERROR"
+ "(re-registration) ");
+
+ event_status = -1;
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ TEST_ASSERT(event_status == 0,
+ "Failed test for rte_bbdev_pmd_callback_process "
+ "for RTE_BBDEV_EVENT_UNKNOWN ");
+
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+ TEST_ASSERT(event_status == 1,
+ "Failed test for rte_bbdev_pmd_callback_process "
+ "for RTE_BBDEV_EVENT_ERROR ");
+
+ /*
+ * RTE_BBDEV_EVENT_UNKNOWN - registered
+ * RTE_BBDEV_EVENT_ERROR - unregistered
+ */
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_unregister(dev1->data->dev_id,
+ RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+ "Failed to unregister RTE_BBDEV_EVENT_ERROR ");
+
+ event_status = -1;
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ TEST_ASSERT(event_status == 0,
+ "Failed test for rte_bbdev_pmd_callback_process "
+ "for RTE_BBDEV_EVENT_UNKNOWN ");
+
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+ TEST_ASSERT(event_status == 0,
+ "Failed test for rte_bbdev_pmd_callback_process: "
+ "event RTE_BBDEV_EVENT_ERROR was unregistered ");
+
+ /* rte_bbdev_callback_register with invalid inputs */
+ TEST_ASSERT_FAIL(rte_bbdev_callback_register(invalid_dev_id,
+ RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+ "Failed test for rte_bbdev_callback_register "
+ "for invalid_dev_id ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_callback_register(dev1->data->dev_id,
+ invalid_event_type, event_callback, &event_status),
+ "Failed to register callback for invalid event type ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_callback_register(dev1->data->dev_id,
+ RTE_BBDEV_EVENT_ERROR, NULL, &event_status),
+ "Failed to register callback - no callback function ");
+
+ /* The impact of devices on each other */
+ dev2 = rte_bbdev_allocate(name2);
+ TEST_ASSERT(dev2 != NULL,
+ "Failed to initialize bbdev driver");
+
+ /*
+ * dev2:
+ * RTE_BBDEV_EVENT_UNKNOWN - unregistered
+ * RTE_BBDEV_EVENT_ERROR - unregistered
+ */
+ event_status = -1;
+ rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_ERROR, NULL);
+ TEST_ASSERT(event_status == -1,
+ "Failed test for rte_bbdev_pmd_callback_process: "
+ "events were not registered ");
+
+ /*
+ * dev1: RTE_BBDEV_EVENT_ERROR - unregistered
+ * dev2: RTE_BBDEV_EVENT_ERROR - registered
+ */
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev2->data->dev_id,
+ RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+ "Failed to register callback for RTE_BBDEV_EVENT_ERROR");
+
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+ TEST_ASSERT(event_status == -1,
+ "Failed test for rte_bbdev_pmd_callback_process in dev1 "
+ "for RTE_BBDEV_EVENT_ERROR ");
+
+ rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_ERROR, NULL);
+ TEST_ASSERT(event_status == 1,
+ "Failed test for rte_bbdev_pmd_callback_process in dev2 "
+ "for RTE_BBDEV_EVENT_ERROR ");
+
+ /*
+ * dev1: RTE_BBDEV_EVENT_UNKNOWN - registered
+ * dev2: RTE_BBDEV_EVENT_UNKNOWN - unregistered
+ */
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev2->data->dev_id,
+ RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+ "Failed to register callback for RTE_BBDEV_EVENT_UNKNOWN "
+ "in dev 2 ");
+
+ rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ TEST_ASSERT(event_status == 0,
+ "Failed test for rte_bbdev_pmd_callback_process in dev2"
+ " for RTE_BBDEV_EVENT_UNKNOWN ");
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_unregister(dev2->data->dev_id,
+ RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+ "Failed to unregister RTE_BBDEV_EVENT_UNKNOWN ");
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_unregister(dev2->data->dev_id,
+ RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+ "Failed to unregister RTE_BBDEV_EVENT_UNKNOWN : "
+ "unregister function called once again ");
+
+ event_status = -1;
+ rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ TEST_ASSERT(event_status == -1,
+ "Failed test for rte_bbdev_pmd_callback_process in dev2"
+ " for RTE_BBDEV_EVENT_UNKNOWN ");
+
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ TEST_ASSERT(event_status == 0,
+ "Failed test for rte_bbdev_pmd_callback_process in dev2 "
+ "for RTE_BBDEV_EVENT_UNKNOWN ");
+
+ /* rte_bbdev_pmd_callback_process with invalid inputs */
+ rte_bbdev_pmd_callback_process(NULL, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+
+ event_status = -1;
+ rte_bbdev_pmd_callback_process(dev1, invalid_event_type, NULL);
+ TEST_ASSERT(event_status == -1,
+ "Failed test for rte_bbdev_pmd_callback_process: "
+ "for invalid event type ");
+
+ /* rte_bbdev_callback_unregister with invalid inputs */
+ TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(invalid_dev_id,
+ RTE_BBDEV_EVENT_UNKNOWN, event_callback, &event_status),
+ "Failed test for rte_bbdev_callback_unregister "
+ "for invalid_dev_id ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(dev1->data->dev_id,
+ invalid_event_type, event_callback, &event_status),
+ "Failed rte_bbdev_callback_unregister "
+ "for invalid event type ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(dev1->data->dev_id,
+ invalid_event_type, NULL, &event_status),
+ "Failed rte_bbdev_callback_unregister "
+ "when no callback function ");
+
+ dev_id = dev1->data->dev_id;
+
+ rte_bbdev_release(dev1);
+ rte_bbdev_release(dev2);
+
+ TEST_ASSERT_FAIL(rte_bbdev_callback_register(dev_id,
+ RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+ "Failed test for rte_bbdev_callback_register: "
+ "function called after rte_bbdev_release ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_callback_unregister(dev_id,
+ RTE_BBDEV_EVENT_ERROR, event_callback, &event_status),
+ "Failed test for rte_bbdev_callback_unregister: "
+ "function called after rte_bbdev_release ");
+
+ event_status = -1;
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ rte_bbdev_pmd_callback_process(dev1, RTE_BBDEV_EVENT_ERROR, NULL);
+ rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+ rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_ERROR, NULL);
+ TEST_ASSERT(event_status == -1,
+ "Failed test for rte_bbdev_pmd_callback_process: "
+ "callback function was called after rte_bbdev_release");
+
+ return TEST_SUCCESS;
+}
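The register/process/unregister sequence exercised above is the same one an application and PMD would use in practice; a hedged sketch (the handler name and error counter are illustrative, the signature follows `rte_bbdev_cb_fn`):

```c
/* Illustrative app-side handler matching rte_bbdev_cb_fn */
static int error_count;

static void
on_bbdev_event(uint16_t dev_id, enum rte_bbdev_event_type type,
		void *cb_arg, void *ret_param)
{
	RTE_SET_USED(dev_id);
	RTE_SET_USED(ret_param);
	if (type == RTE_BBDEV_EVENT_ERROR)
		(*(int *)cb_arg)++;
}

/* Registration is per device and per event type; registering the same
 * (callback, cb_arg) pair twice keeps a single entry. A PMD later
 * raises the event with rte_bbdev_pmd_callback_process(). */
static void
watch_errors(uint8_t dev_id)
{
	rte_bbdev_callback_register(dev_id, RTE_BBDEV_EVENT_ERROR,
			on_bbdev_event, &error_count);
}
```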
+
+static int
+test_bbdev_invalid_driver(void)
+{
+ struct rte_bbdev dev1, *dev2;
+ uint8_t dev_id = null_dev_id;
+ uint16_t queue_id = 0;
+ struct rte_bbdev_stats stats;
+ struct bbdev_testsuite_params *ts_params = &testsuite_params;
+ struct rte_bbdev_queue_info qinfo;
+ struct rte_bbdev_ops dev_ops_tmp;
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_stop(dev_id), "Failed to stop bbdev %u ",
+ dev_id);
+
+ dev1 = rte_bbdev_devices[dev_id];
+ dev2 = &rte_bbdev_devices[dev_id];
+
+ /* Tests for rte_bbdev_setup_queues */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, 1, SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "NULL dev_ops structure ");
+ dev2->dev_ops = dev1.dev_ops;
+
+ dev_ops_tmp = *dev2->dev_ops;
+ dev_ops_tmp.info_get = NULL;
+ dev2->dev_ops = &dev_ops_tmp;
+ TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, 1, SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "NULL info_get ");
+ dev2->dev_ops = dev1.dev_ops;
+
+ dev_ops_tmp = *dev2->dev_ops;
+ dev_ops_tmp.queue_release = NULL;
+ dev2->dev_ops = &dev_ops_tmp;
+ TEST_ASSERT_FAIL(rte_bbdev_setup_queues(dev_id, 1, SOCKET_ID_ANY),
+ "Failed test for rte_bbdev_setup_queues: "
+ "NULL queue_release ");
+ dev2->dev_ops = dev1.dev_ops;
+
+ dev2->data->socket_id = SOCKET_ID_ANY;
+ TEST_ASSERT_SUCCESS(rte_bbdev_setup_queues(dev_id, 1,
+ SOCKET_ID_ANY), "Failed to configure bbdev %u", dev_id);
+
+ /* Test for rte_bbdev_queue_configure */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed to configure queue %u on device %u "
+ "with NULL dev_ops structure ", queue_id, dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ dev_ops_tmp = *dev2->dev_ops;
+ dev_ops_tmp.queue_setup = NULL;
+ dev2->dev_ops = &dev_ops_tmp;
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed to configure queue %u on device %u "
+ "with NULL queue_setup ", queue_id, dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ dev_ops_tmp = *dev2->dev_ops;
+ dev_ops_tmp.info_get = NULL;
+ dev2->dev_ops = &dev_ops_tmp;
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed to configure queue %u on device %u "
+ "with NULL info_get ", queue_id, dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_configure(RTE_BBDEV_MAX_DEVS,
+ queue_id, &ts_params->qconf),
+ "Failed to configure queue %u on device %u ",
+ queue_id, dev_id);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id,
+ &ts_params->qconf),
+ "Failed to configure queue %u on device %u ",
+ queue_id, dev_id);
+
+ /* Test for rte_bbdev_queue_info_get */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_info_get(dev_id, queue_id, &qinfo),
+ "Failed test for rte_bbdev_queue_info_get: "
+ "NULL dev_ops structure ");
+ dev2->dev_ops = dev1.dev_ops;
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_info_get(RTE_BBDEV_MAX_DEVS,
+ queue_id, &qinfo),
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid dev_id ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_info_get(dev_id,
+ RTE_MAX_QUEUES_PER_PORT, &qinfo),
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid queue_id ");
+
+ TEST_ASSERT_FAIL(rte_bbdev_queue_info_get(dev_id, queue_id, NULL),
+ "Failed test for rte_bbdev_queue_info_get: "
+ "invalid queue_info pointer ");
+
+ /* Test for rte_bbdev_start */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_FAIL(rte_bbdev_start(dev_id),
+ "Failed to start bbdev %u "
+ "with NULL dev_ops structure ", dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_start(dev_id),
+ "Failed to start bbdev %u ", dev_id);
+
+ /* Test for rte_bbdev_queue_start */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_FAIL(rte_bbdev_queue_start(dev_id, queue_id),
+ "Failed to start queue %u on device %u: "
+ "NULL dev_ops structure", queue_id, dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_start(dev_id, queue_id),
+ "Failed to start queue %u on device %u ", queue_id,
+ dev_id);
+
+ /* Tests for rte_bbdev_stats_get */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_FAIL(rte_bbdev_stats_get(dev_id, &stats),
+ "Failed test for rte_bbdev_stats_get on device %u ",
+ dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ dev_ops_tmp = *dev2->dev_ops;
+ dev_ops_tmp.stats_get = NULL;
+ dev2->dev_ops = &dev_ops_tmp;
+ TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+ "Failed test for rte_bbdev_stats_get: "
+ "NULL stats_get ");
+ dev2->dev_ops = dev1.dev_ops;
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_stats_get(dev_id, &stats),
+ "Failed test for rte_bbdev_stats_get on device %u ",
+ dev_id);
+
+ /*
+ * Tests for:
+ * rte_bbdev_callback_register,
+ * rte_bbdev_pmd_callback_process,
+	 * rte_bbdev_callback_unregister
+ */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_register(dev_id,
+ RTE_BBDEV_EVENT_UNKNOWN, event_callback, NULL),
+			"Failed to register callback for RTE_BBDEV_EVENT_UNKNOWN");
+ rte_bbdev_pmd_callback_process(dev2, RTE_BBDEV_EVENT_UNKNOWN, NULL);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_callback_unregister(dev_id,
+ RTE_BBDEV_EVENT_UNKNOWN, event_callback, NULL),
+			"Failed to unregister callback for RTE_BBDEV_EVENT_UNKNOWN ");
+ dev2->dev_ops = dev1.dev_ops;
+
+ /* Tests for rte_bbdev_stats_reset */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_FAIL(rte_bbdev_stats_reset(dev_id),
+			"Failed to reset statistics for device %u ", dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ dev_ops_tmp = *dev2->dev_ops;
+ dev_ops_tmp.stats_reset = NULL;
+ dev2->dev_ops = &dev_ops_tmp;
+ TEST_ASSERT_SUCCESS(rte_bbdev_stats_reset(dev_id),
+ "Failed test for rte_bbdev_stats_reset: "
+ "NULL stats_reset ");
+ dev2->dev_ops = dev1.dev_ops;
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_stats_reset(dev_id),
+			"Failed to reset statistics for device %u ", dev_id);
+
+ /* Tests for rte_bbdev_queue_stop */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_FAIL(rte_bbdev_queue_stop(dev_id, queue_id),
+ "Failed to stop queue %u on device %u: "
+ "NULL dev_ops structure", queue_id, dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_stop(dev_id, queue_id),
+ "Failed to stop queue %u on device %u ", queue_id,
+ dev_id);
+
+ /* Tests for rte_bbdev_stop */
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_FAIL(rte_bbdev_stop(dev_id),
+ "Failed to stop bbdev %u with NULL dev_ops structure ",
+ dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_stop(dev_id),
+ "Failed to stop bbdev %u ", dev_id);
+
+ /* Tests for rte_bbdev_close */
+ TEST_ASSERT_FAIL(rte_bbdev_close(RTE_BBDEV_MAX_DEVS),
+ "Failed to close bbdev with invalid dev_id");
+
+ dev2->dev_ops = NULL;
+ TEST_ASSERT_FAIL(rte_bbdev_close(dev_id),
+ "Failed to close bbdev %u with NULL dev_ops structure ",
+ dev_id);
+ dev2->dev_ops = dev1.dev_ops;
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_close(dev_id),
+ "Failed to close bbdev %u ", dev_id);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_bbdev_get_named_dev(void)
+{
+ struct rte_bbdev *dev, *dev_tmp;
+ const char *name = "name";
+
+ dev = rte_bbdev_allocate(name);
+ TEST_ASSERT(dev != NULL, "Failed to initialize bbdev driver");
+
+ dev_tmp = rte_bbdev_get_named_dev(NULL);
+ TEST_ASSERT(dev_tmp == NULL, "Failed test for rte_bbdev_get_named_dev: "
+ "function called with NULL parameter");
+
+ dev_tmp = rte_bbdev_get_named_dev(name);
+
+ TEST_ASSERT(dev == dev_tmp, "Failed test for rte_bbdev_get_named_dev: "
+ "wrong device was returned ");
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_release(dev),
+ "Failed to uninitialize bbdev driver %s ", name);
+
+ return TEST_SUCCESS;
+}
+
+static struct unit_test_suite bbdev_null_testsuite = {
+ .suite_name = "BBDEV NULL Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+
+ TEST_CASE(test_bbdev_configure_invalid_dev_id),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_configure_invalid_num_queues),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_configure_stop_device),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_configure_stop_queue),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_configure_invalid_queue_configure),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_op_pool),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_op_type),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_op_pool_size),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_stats),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_driver_init),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_callback),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_invalid_driver),
+
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_bbdev_get_named_dev),
+
+ TEST_CASE(test_bbdev_count),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+REGISTER_TEST_COMMAND(unittest, bbdev_null_testsuite);
diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
new file mode 100644
index 0000000..f7b51ca
--- /dev/null
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -0,0 +1,2136 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <inttypes.h>
+
+#include <rte_eal.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_launch.h>
+#include <rte_bbdev.h>
+#include <rte_cycles.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_hexdump.h>
+
+#include "main.h"
+#include "test_bbdev_vector.h"
+
+#define GET_SOCKET(socket_id) (((socket_id) == SOCKET_ID_ANY) ? 0 : (socket_id))
+
+#define MAX_QUEUES RTE_MAX_LCORE
+
+#define OPS_CACHE_SIZE 256U
+#define OPS_POOL_SIZE_MIN 511U /* 0.5K per queue */
+
+#define SYNC_WAIT 0
+#define SYNC_START 1
+
+#define INVALID_QUEUE_ID -1
+
+static struct test_bbdev_vector test_vector;
+
+/* Switch between PMD and Interrupt for throughput TC */
+static bool intr_enabled;
+
+/* Represents tested active devices */
+static struct active_device {
+ const char *driver_name;
+ uint8_t dev_id;
+ uint16_t supported_ops;
+ uint16_t queue_ids[MAX_QUEUES];
+ uint16_t nb_queues;
+ struct rte_mempool *ops_mempool;
+ struct rte_mempool *in_mbuf_pool;
+ struct rte_mempool *hard_out_mbuf_pool;
+ struct rte_mempool *soft_out_mbuf_pool;
+} active_devs[RTE_BBDEV_MAX_DEVS];
+
+static uint8_t nb_active_devs;
+
+/* Data buffers used by BBDEV ops */
+struct test_buffers {
+ struct rte_bbdev_op_data *inputs;
+ struct rte_bbdev_op_data *hard_outputs;
+ struct rte_bbdev_op_data *soft_outputs;
+};
+
+/* Operation parameters specific for given test case */
+struct test_op_params {
+ struct rte_mempool *mp;
+ struct rte_bbdev_dec_op *ref_dec_op;
+ struct rte_bbdev_enc_op *ref_enc_op;
+ uint16_t burst_sz;
+ uint16_t num_to_process;
+ uint16_t num_lcores;
+ int vector_mask;
+ rte_atomic16_t sync;
+ struct test_buffers q_bufs[RTE_MAX_NUMA_NODES][MAX_QUEUES];
+};
+
+/* Contains per lcore params */
+struct thread_params {
+ uint8_t dev_id;
+ uint16_t queue_id;
+ uint64_t start_time;
+ double mops;
+ double mbps;
+ rte_atomic16_t nb_dequeued;
+ rte_atomic16_t processing_status;
+ struct test_op_params *op_params;
+};
+
+typedef int (test_case_function)(struct active_device *ad,
+ struct test_op_params *op_params);
+
+static inline void
+set_avail_op(struct active_device *ad, enum rte_bbdev_op_type op_type)
+{
+ ad->supported_ops |= (1 << op_type);
+}
+
+static inline bool
+is_avail_op(struct active_device *ad, enum rte_bbdev_op_type op_type)
+{
+ return ad->supported_ops & (1 << op_type);
+}
+
+static inline bool
+flags_match(uint32_t flags_req, uint32_t flags_present)
+{
+ return (flags_req & flags_present) == flags_req;
+}
+
+static void
+clear_soft_out_cap(uint32_t *op_flags)
+{
+ *op_flags &= ~RTE_BBDEV_TURBO_SOFT_OUTPUT;
+ *op_flags &= ~RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT;
+ *op_flags &= ~RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT;
+}
+
+static int
+check_dev_cap(const struct rte_bbdev_info *dev_info)
+{
+ unsigned int i;
+ unsigned int nb_inputs, nb_soft_outputs, nb_hard_outputs;
+ const struct rte_bbdev_op_cap *op_cap = dev_info->drv.capabilities;
+
+ nb_inputs = test_vector.entries[DATA_INPUT].nb_segments;
+ nb_soft_outputs = test_vector.entries[DATA_SOFT_OUTPUT].nb_segments;
+ nb_hard_outputs = test_vector.entries[DATA_HARD_OUTPUT].nb_segments;
+
+ for (i = 0; op_cap->type != RTE_BBDEV_OP_NONE; ++i, ++op_cap) {
+ if (op_cap->type != test_vector.op_type)
+ continue;
+
+ if (op_cap->type == RTE_BBDEV_OP_TURBO_DEC) {
+ const struct rte_bbdev_op_cap_turbo_dec *cap =
+ &op_cap->cap.turbo_dec;
+ /* Ignore lack of soft output capability, just skip
+ * checking if soft output is valid.
+ */
+ if ((test_vector.turbo_dec.op_flags &
+ RTE_BBDEV_TURBO_SOFT_OUTPUT) &&
+ !(cap->capability_flags &
+ RTE_BBDEV_TURBO_SOFT_OUTPUT)) {
+ printf(
+ "WARNING: Device \"%s\" does not support soft output - soft output flags will be ignored.\n",
+ dev_info->dev_name);
+ clear_soft_out_cap(
+ &test_vector.turbo_dec.op_flags);
+ }
+
+ if (!flags_match(test_vector.turbo_dec.op_flags,
+ cap->capability_flags))
+ return TEST_FAILED;
+ if (nb_inputs > cap->num_buffers_src) {
+ printf("Too many inputs defined: %u, max: %u\n",
+ nb_inputs, cap->num_buffers_src);
+ return TEST_FAILED;
+ }
+ if (nb_soft_outputs > cap->num_buffers_soft_out &&
+ (test_vector.turbo_dec.op_flags &
+ RTE_BBDEV_TURBO_SOFT_OUTPUT)) {
+ printf(
+ "Too many soft outputs defined: %u, max: %u\n",
+ nb_soft_outputs,
+ cap->num_buffers_soft_out);
+ return TEST_FAILED;
+ }
+ if (nb_hard_outputs > cap->num_buffers_hard_out) {
+ printf(
+ "Too many hard outputs defined: %u, max: %u\n",
+ nb_hard_outputs,
+ cap->num_buffers_hard_out);
+ return TEST_FAILED;
+ }
+ if (intr_enabled && !(cap->capability_flags &
+ RTE_BBDEV_TURBO_DEC_INTERRUPTS)) {
+ printf(
+ "Dequeue interrupts are not supported!\n");
+ return TEST_FAILED;
+ }
+
+ return TEST_SUCCESS;
+ } else if (op_cap->type == RTE_BBDEV_OP_TURBO_ENC) {
+ const struct rte_bbdev_op_cap_turbo_enc *cap =
+ &op_cap->cap.turbo_enc;
+
+ if (!flags_match(test_vector.turbo_enc.op_flags,
+ cap->capability_flags))
+ return TEST_FAILED;
+ if (nb_inputs > cap->num_buffers_src) {
+ printf("Too many inputs defined: %u, max: %u\n",
+ nb_inputs, cap->num_buffers_src);
+ return TEST_FAILED;
+ }
+ if (nb_hard_outputs > cap->num_buffers_dst) {
+ printf(
+ "Too many hard outputs defined: %u, max: %u\n",
+					nb_hard_outputs, cap->num_buffers_dst);
+ return TEST_FAILED;
+ }
+ if (intr_enabled && !(cap->capability_flags &
+ RTE_BBDEV_TURBO_ENC_INTERRUPTS)) {
+ printf(
+ "Dequeue interrupts are not supported!\n");
+ return TEST_FAILED;
+ }
+
+ return TEST_SUCCESS;
+ }
+ }
+
+ if ((i == 0) && (test_vector.op_type == RTE_BBDEV_OP_NONE))
+ return TEST_SUCCESS; /* Special case for NULL device */
+
+ return TEST_FAILED;
+}
+
+/* calculates optimal mempool size (a power of two minus one) not smaller than val */
+static unsigned int
+optimal_mempool_size(unsigned int val)
+{
+ return rte_align32pow2(val + 1) - 1;
+}
+
+/* allocates mbuf mempool for inputs and outputs */
+static struct rte_mempool *
+create_mbuf_pool(struct op_data_entries *entries, uint8_t dev_id,
+ int socket_id, unsigned int mbuf_pool_size,
+ const char *op_type_str)
+{
+ unsigned int i;
+ uint32_t max_seg_sz = 0;
+ char pool_name[RTE_MEMPOOL_NAMESIZE];
+
+ /* find max input segment size */
+ for (i = 0; i < entries->nb_segments; ++i)
+ if (entries->segments[i].length > max_seg_sz)
+ max_seg_sz = entries->segments[i].length;
+
+ snprintf(pool_name, sizeof(pool_name), "%s_pool_%u", op_type_str,
+ dev_id);
+ return rte_pktmbuf_pool_create(pool_name, mbuf_pool_size, 0, 0,
+ RTE_MAX(max_seg_sz + RTE_PKTMBUF_HEADROOM,
+ (unsigned int)RTE_MBUF_DEFAULT_BUF_SIZE), socket_id);
+}
+
+static int
+create_mempools(struct active_device *ad, int socket_id,
+ enum rte_bbdev_op_type op_type, uint16_t num_ops)
+{
+ struct rte_mempool *mp;
+ unsigned int ops_pool_size, mbuf_pool_size = 0;
+ char pool_name[RTE_MEMPOOL_NAMESIZE];
+ const char *op_type_str;
+
+ struct op_data_entries *in = &test_vector.entries[DATA_INPUT];
+ struct op_data_entries *hard_out =
+ &test_vector.entries[DATA_HARD_OUTPUT];
+ struct op_data_entries *soft_out =
+ &test_vector.entries[DATA_SOFT_OUTPUT];
+
+ /* allocate ops mempool */
+ ops_pool_size = optimal_mempool_size(RTE_MAX(
+ /* Ops used plus 1 reference op */
+ RTE_MAX((unsigned int)(ad->nb_queues * num_ops + 1),
+ /* Minimal cache size plus 1 reference op */
+ (unsigned int)(1.5 * rte_lcore_count() *
+ OPS_CACHE_SIZE + 1)),
+ OPS_POOL_SIZE_MIN));
+
+ op_type_str = rte_bbdev_op_type_str(op_type);
+ TEST_ASSERT_NOT_NULL(op_type_str, "Invalid op type: %u", op_type);
+
+ snprintf(pool_name, sizeof(pool_name), "%s_pool_%u", op_type_str,
+ ad->dev_id);
+ mp = rte_bbdev_op_pool_create(pool_name, op_type,
+ ops_pool_size, OPS_CACHE_SIZE, socket_id);
+ TEST_ASSERT_NOT_NULL(mp,
+ "ERROR Failed to create %u items ops pool for dev %u on socket %u.",
+ ops_pool_size,
+ ad->dev_id,
+ socket_id);
+ ad->ops_mempool = mp;
+
+ /* Inputs */
+ mbuf_pool_size = optimal_mempool_size(ops_pool_size * in->nb_segments);
+ mp = create_mbuf_pool(in, ad->dev_id, socket_id, mbuf_pool_size, "in");
+ TEST_ASSERT_NOT_NULL(mp,
+ "ERROR Failed to create %u items input pktmbuf pool for dev %u on socket %u.",
+ mbuf_pool_size,
+ ad->dev_id,
+ socket_id);
+ ad->in_mbuf_pool = mp;
+
+ /* Hard outputs */
+ mbuf_pool_size = optimal_mempool_size(ops_pool_size *
+ hard_out->nb_segments);
+ mp = create_mbuf_pool(hard_out, ad->dev_id, socket_id, mbuf_pool_size,
+ "hard_out");
+ TEST_ASSERT_NOT_NULL(mp,
+ "ERROR Failed to create %u items hard output pktmbuf pool for dev %u on socket %u.",
+ mbuf_pool_size,
+ ad->dev_id,
+ socket_id);
+ ad->hard_out_mbuf_pool = mp;
+
+ if (soft_out->nb_segments == 0)
+ return TEST_SUCCESS;
+
+ /* Soft outputs */
+ mbuf_pool_size = optimal_mempool_size(ops_pool_size *
+ soft_out->nb_segments);
+ mp = create_mbuf_pool(soft_out, ad->dev_id, socket_id, mbuf_pool_size,
+ "soft_out");
+ TEST_ASSERT_NOT_NULL(mp,
+			"ERROR Failed to create %u items soft output pktmbuf pool for dev %u on socket %u.",
+ mbuf_pool_size,
+ ad->dev_id,
+ socket_id);
+ ad->soft_out_mbuf_pool = mp;
+
+ return 0;
+}
+
+static int
+add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
+ struct test_bbdev_vector *vector)
+{
+ int ret;
+ unsigned int queue_id;
+ struct rte_bbdev_queue_conf qconf;
+ struct active_device *ad = &active_devs[nb_active_devs];
+ unsigned int nb_queues;
+ enum rte_bbdev_op_type op_type = vector->op_type;
+
+ nb_queues = RTE_MIN(rte_lcore_count(), info->drv.max_num_queues);
+ /* setup device */
+ ret = rte_bbdev_setup_queues(dev_id, nb_queues, info->socket_id);
+ if (ret < 0) {
+ printf("rte_bbdev_setup_queues(%u, %u, %d) ret %i\n",
+ dev_id, nb_queues, info->socket_id, ret);
+ return TEST_FAILED;
+ }
+
+ /* configure interrupts if needed */
+ if (intr_enabled) {
+ ret = rte_bbdev_intr_enable(dev_id);
+ if (ret < 0) {
+ printf("rte_bbdev_intr_enable(%u) ret %i\n", dev_id,
+ ret);
+ return TEST_FAILED;
+ }
+ }
+
+ /* setup device queues */
+ qconf.socket = info->socket_id;
+ qconf.queue_size = info->drv.default_queue_conf.queue_size;
+ qconf.priority = 0;
+ qconf.deferred_start = 0;
+ qconf.op_type = op_type;
+
+ for (queue_id = 0; queue_id < nb_queues; ++queue_id) {
+ ret = rte_bbdev_queue_configure(dev_id, queue_id, &qconf);
+ if (ret != 0) {
+ printf(
+				"Allocated all queues (id=%u) at prio %u on dev %u\n",
+ queue_id, qconf.priority, dev_id);
+ qconf.priority++;
+ ret = rte_bbdev_queue_configure(ad->dev_id, queue_id,
+ &qconf);
+ }
+ if (ret != 0) {
+ printf("All queues on dev %u allocated: %u\n",
+ dev_id, queue_id);
+ break;
+ }
+ ad->queue_ids[queue_id] = queue_id;
+ }
+ TEST_ASSERT(queue_id != 0,
+ "ERROR Failed to configure any queues on dev %u",
+ dev_id);
+ ad->nb_queues = queue_id;
+
+ set_avail_op(ad, op_type);
+
+ return TEST_SUCCESS;
+}
+
+static int
+add_active_device(uint8_t dev_id, struct rte_bbdev_info *info,
+ struct test_bbdev_vector *vector)
+{
+ int ret;
+
+ active_devs[nb_active_devs].driver_name = info->drv.driver_name;
+ active_devs[nb_active_devs].dev_id = dev_id;
+
+ ret = add_bbdev_dev(dev_id, info, vector);
+ if (ret == TEST_SUCCESS)
+ ++nb_active_devs;
+ return ret;
+}
+
+static uint8_t
+populate_active_devices(void)
+{
+ int ret;
+ uint8_t dev_id;
+ uint8_t nb_devs_added = 0;
+ struct rte_bbdev_info info;
+
+ RTE_BBDEV_FOREACH(dev_id) {
+ rte_bbdev_info_get(dev_id, &info);
+
+ if (check_dev_cap(&info)) {
+ printf(
+ "Device %d (%s) does not support specified capabilities\n",
+ dev_id, info.dev_name);
+ continue;
+ }
+
+ ret = add_active_device(dev_id, &info, &test_vector);
+ if (ret != 0) {
+ printf("Adding active bbdev %s skipped\n",
+ info.dev_name);
+ continue;
+ }
+ nb_devs_added++;
+ }
+
+ return nb_devs_added;
+}
+
+static int
+read_test_vector(void)
+{
+ int ret;
+
+ memset(&test_vector, 0, sizeof(test_vector));
+ printf("Test vector file = %s\n", get_vector_filename());
+ ret = test_bbdev_vector_read(get_vector_filename(), &test_vector);
+ TEST_ASSERT_SUCCESS(ret, "Failed to parse file %s\n",
+ get_vector_filename());
+
+ return TEST_SUCCESS;
+}
+
+static int
+testsuite_setup(void)
+{
+ TEST_ASSERT_SUCCESS(read_test_vector(), "Test suite setup failed\n");
+
+ if (populate_active_devices() == 0) {
+ printf("No suitable devices found!\n");
+ return TEST_SKIPPED;
+ }
+
+ return TEST_SUCCESS;
+}
+
+static int
+interrupt_testsuite_setup(void)
+{
+ TEST_ASSERT_SUCCESS(read_test_vector(), "Test suite setup failed\n");
+
+ /* Enable interrupts */
+ intr_enabled = true;
+
+ /* Special case for NULL device (RTE_BBDEV_OP_NONE) */
+ if (populate_active_devices() == 0 ||
+ test_vector.op_type == RTE_BBDEV_OP_NONE) {
+ intr_enabled = false;
+ printf("No suitable devices found!\n");
+ return TEST_SKIPPED;
+ }
+
+ return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+ uint8_t dev_id;
+
+ /* Unconfigure devices */
+ RTE_BBDEV_FOREACH(dev_id)
+ rte_bbdev_close(dev_id);
+
+ /* Clear active devices structs. */
+ memset(active_devs, 0, sizeof(active_devs));
+ nb_active_devs = 0;
+}
+
+static int
+ut_setup(void)
+{
+ uint8_t i, dev_id;
+
+ for (i = 0; i < nb_active_devs; i++) {
+ dev_id = active_devs[i].dev_id;
+ /* reset bbdev stats */
+ TEST_ASSERT_SUCCESS(rte_bbdev_stats_reset(dev_id),
+ "Failed to reset stats of bbdev %u", dev_id);
+ /* start the device */
+ TEST_ASSERT_SUCCESS(rte_bbdev_start(dev_id),
+ "Failed to start bbdev %u", dev_id);
+ }
+
+ return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+ uint8_t i, dev_id;
+ struct rte_bbdev_stats stats;
+
+ for (i = 0; i < nb_active_devs; i++) {
+ dev_id = active_devs[i].dev_id;
+ /* read stats and print */
+ rte_bbdev_stats_get(dev_id, &stats);
+ /* Stop the device */
+ rte_bbdev_stop(dev_id);
+ }
+}
+
+static int
+init_op_data_objs(struct rte_bbdev_op_data *bufs,
+ struct op_data_entries *ref_entries,
+ struct rte_mempool *mbuf_pool, const uint16_t n,
+ enum op_data_type op_type, uint16_t min_alignment)
+{
+ int ret;
+ unsigned int i, j;
+
+ for (i = 0; i < n; ++i) {
+ char *data;
+ struct op_data_buf *seg = &ref_entries->segments[0];
+ struct rte_mbuf *m_head = rte_pktmbuf_alloc(mbuf_pool);
+ TEST_ASSERT_NOT_NULL(m_head,
+ "Not enough mbufs in %d data type mbuf pool (needed %u, available %u)",
+ op_type, n * ref_entries->nb_segments,
+ mbuf_pool->size);
+
+ bufs[i].data = m_head;
+ bufs[i].offset = 0;
+ bufs[i].length = 0;
+
+ if (op_type == DATA_INPUT) {
+ data = rte_pktmbuf_append(m_head, seg->length);
+ TEST_ASSERT_NOT_NULL(data,
+ "Couldn't append %u bytes to mbuf from %d data type mbuf pool",
+ seg->length, op_type);
+
+ TEST_ASSERT(data == RTE_PTR_ALIGN(data, min_alignment),
+ "Data addr in mbuf (%p) is not aligned to device min alignment (%u)",
+ data, min_alignment);
+ rte_memcpy(data, seg->addr, seg->length);
+ bufs[i].length += seg->length;
+
+
+ for (j = 1; j < ref_entries->nb_segments; ++j) {
+ struct rte_mbuf *m_tail =
+ rte_pktmbuf_alloc(mbuf_pool);
+ TEST_ASSERT_NOT_NULL(m_tail,
+ "Not enough mbufs in %d data type mbuf pool (needed %u, available %u)",
+ op_type,
+ n * ref_entries->nb_segments,
+ mbuf_pool->size);
+ seg += 1;
+
+ data = rte_pktmbuf_append(m_tail, seg->length);
+ TEST_ASSERT_NOT_NULL(data,
+ "Couldn't append %u bytes to mbuf from %d data type mbuf pool",
+ seg->length, op_type);
+
+ TEST_ASSERT(data == RTE_PTR_ALIGN(data,
+ min_alignment),
+ "Data addr in mbuf (%p) is not aligned to device min alignment (%u)",
+ data, min_alignment);
+ rte_memcpy(data, seg->addr, seg->length);
+ bufs[i].length += seg->length;
+
+ ret = rte_pktmbuf_chain(m_head, m_tail);
+ TEST_ASSERT_SUCCESS(ret,
+ "Couldn't chain mbufs from %d data type mbuf pool",
+ op_type);
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int
+allocate_buffers_on_socket(struct rte_bbdev_op_data **buffers, const int len,
+ const int socket)
+{
+ int i;
+
+ *buffers = rte_zmalloc_socket(NULL, len, 0, socket);
+ if (*buffers == NULL) {
+ printf("WARNING: Failed to allocate op_data on socket %d\n",
+ socket);
+ /* try to allocate memory on other detected sockets */
+ for (i = 0; i < socket; i++) {
+ *buffers = rte_zmalloc_socket(NULL, len, 0, i);
+ if (*buffers != NULL)
+ break;
+ }
+ }
+
+ return (*buffers == NULL) ? TEST_FAILED : TEST_SUCCESS;
+}
+
+static int
+fill_queue_buffers(struct test_op_params *op_params,
+ struct rte_mempool *in_mp, struct rte_mempool *hard_out_mp,
+ struct rte_mempool *soft_out_mp, uint16_t queue_id,
+ uint16_t min_alignment, const int socket_id)
+{
+ int ret;
+ enum op_data_type type;
+ const uint16_t n = op_params->num_to_process;
+
+ struct rte_mempool *mbuf_pools[DATA_NUM_TYPES] = {
+ in_mp,
+ soft_out_mp,
+ hard_out_mp,
+ };
+
+ struct rte_bbdev_op_data **queue_ops[DATA_NUM_TYPES] = {
+ &op_params->q_bufs[socket_id][queue_id].inputs,
+ &op_params->q_bufs[socket_id][queue_id].soft_outputs,
+ &op_params->q_bufs[socket_id][queue_id].hard_outputs,
+ };
+
+ for (type = DATA_INPUT; type < DATA_NUM_TYPES; ++type) {
+ struct op_data_entries *ref_entries =
+ &test_vector.entries[type];
+ if (ref_entries->nb_segments == 0)
+ continue;
+
+ ret = allocate_buffers_on_socket(queue_ops[type],
+ n * sizeof(struct rte_bbdev_op_data),
+ socket_id);
+ TEST_ASSERT_SUCCESS(ret,
+ "Couldn't allocate memory for rte_bbdev_op_data structs");
+
+ ret = init_op_data_objs(*queue_ops[type], ref_entries,
+ mbuf_pools[type], n, type, min_alignment);
+ TEST_ASSERT_SUCCESS(ret,
+ "Couldn't init rte_bbdev_op_data structs");
+ }
+
+ return 0;
+}
+
+static void
+free_buffers(struct active_device *ad, struct test_op_params *op_params)
+{
+ unsigned int i, j;
+
+ rte_mempool_free(ad->ops_mempool);
+ rte_mempool_free(ad->in_mbuf_pool);
+ rte_mempool_free(ad->hard_out_mbuf_pool);
+ rte_mempool_free(ad->soft_out_mbuf_pool);
+
+ for (i = 0; i < rte_lcore_count(); ++i) {
+ for (j = 0; j < RTE_MAX_NUMA_NODES; ++j) {
+ rte_free(op_params->q_bufs[j][i].inputs);
+ rte_free(op_params->q_bufs[j][i].hard_outputs);
+ rte_free(op_params->q_bufs[j][i].soft_outputs);
+ }
+ }
+}
+
+static void
+copy_reference_dec_op(struct rte_bbdev_dec_op **ops, unsigned int n,
+ unsigned int start_idx,
+ struct rte_bbdev_op_data *inputs,
+ struct rte_bbdev_op_data *hard_outputs,
+ struct rte_bbdev_op_data *soft_outputs,
+ struct rte_bbdev_dec_op *ref_op)
+{
+ unsigned int i;
+ struct rte_bbdev_op_turbo_dec *turbo_dec = &ref_op->turbo_dec;
+
+ for (i = 0; i < n; ++i) {
+ if (turbo_dec->code_block_mode == 0) {
+ ops[i]->turbo_dec.tb_params.ea =
+ turbo_dec->tb_params.ea;
+ ops[i]->turbo_dec.tb_params.eb =
+ turbo_dec->tb_params.eb;
+ ops[i]->turbo_dec.tb_params.k_pos =
+ turbo_dec->tb_params.k_pos;
+ ops[i]->turbo_dec.tb_params.k_neg =
+ turbo_dec->tb_params.k_neg;
+ ops[i]->turbo_dec.tb_params.c =
+ turbo_dec->tb_params.c;
+ ops[i]->turbo_dec.tb_params.c_neg =
+ turbo_dec->tb_params.c_neg;
+ ops[i]->turbo_dec.tb_params.cab =
+ turbo_dec->tb_params.cab;
+ } else {
+ ops[i]->turbo_dec.cb_params.e = turbo_dec->cb_params.e;
+ ops[i]->turbo_dec.cb_params.k = turbo_dec->cb_params.k;
+ }
+
+ ops[i]->turbo_dec.ext_scale = turbo_dec->ext_scale;
+ ops[i]->turbo_dec.iter_max = turbo_dec->iter_max;
+ ops[i]->turbo_dec.iter_min = turbo_dec->iter_min;
+ ops[i]->turbo_dec.op_flags = turbo_dec->op_flags;
+ ops[i]->turbo_dec.rv_index = turbo_dec->rv_index;
+ ops[i]->turbo_dec.num_maps = turbo_dec->num_maps;
+ ops[i]->turbo_dec.code_block_mode = turbo_dec->code_block_mode;
+
+ ops[i]->turbo_dec.hard_output = hard_outputs[start_idx + i];
+ ops[i]->turbo_dec.input = inputs[start_idx + i];
+ if (soft_outputs != NULL)
+ ops[i]->turbo_dec.soft_output =
+ soft_outputs[start_idx + i];
+ }
+}
+
+static void
+copy_reference_enc_op(struct rte_bbdev_enc_op **ops, unsigned int n,
+ unsigned int start_idx,
+ struct rte_bbdev_op_data *inputs,
+ struct rte_bbdev_op_data *outputs,
+ struct rte_bbdev_enc_op *ref_op)
+{
+ unsigned int i;
+ struct rte_bbdev_op_turbo_enc *turbo_enc = &ref_op->turbo_enc;
+ for (i = 0; i < n; ++i) {
+ if (turbo_enc->code_block_mode == 0) {
+ ops[i]->turbo_enc.tb_params.ea =
+ turbo_enc->tb_params.ea;
+ ops[i]->turbo_enc.tb_params.eb =
+ turbo_enc->tb_params.eb;
+ ops[i]->turbo_enc.tb_params.k_pos =
+ turbo_enc->tb_params.k_pos;
+ ops[i]->turbo_enc.tb_params.k_neg =
+ turbo_enc->tb_params.k_neg;
+ ops[i]->turbo_enc.tb_params.c =
+ turbo_enc->tb_params.c;
+ ops[i]->turbo_enc.tb_params.c_neg =
+ turbo_enc->tb_params.c_neg;
+ ops[i]->turbo_enc.tb_params.cab =
+ turbo_enc->tb_params.cab;
+ ops[i]->turbo_enc.tb_params.ncb_pos =
+ turbo_enc->tb_params.ncb_pos;
+ ops[i]->turbo_enc.tb_params.ncb_neg =
+ turbo_enc->tb_params.ncb_neg;
+ ops[i]->turbo_enc.tb_params.r = turbo_enc->tb_params.r;
+ } else {
+ ops[i]->turbo_enc.cb_params.e = turbo_enc->cb_params.e;
+ ops[i]->turbo_enc.cb_params.k = turbo_enc->cb_params.k;
+ ops[i]->turbo_enc.cb_params.ncb =
+ turbo_enc->cb_params.ncb;
+ }
+ ops[i]->turbo_enc.rv_index = turbo_enc->rv_index;
+ ops[i]->turbo_enc.op_flags = turbo_enc->op_flags;
+ ops[i]->turbo_enc.code_block_mode = turbo_enc->code_block_mode;
+
+ ops[i]->turbo_enc.output = outputs[start_idx + i];
+ ops[i]->turbo_enc.input = inputs[start_idx + i];
+ }
+}
+
+static int
+check_dec_status_and_ordering(struct rte_bbdev_dec_op *op,
+ unsigned int order_idx, const int expected_status)
+{
+ TEST_ASSERT(op->status == expected_status,
+ "op_status (%d) != expected_status (%d)",
+ op->status, expected_status);
+
+ TEST_ASSERT((void *)(uintptr_t)order_idx == op->opaque_data,
+ "Ordering error, expected %p, got %p",
+ (void *)(uintptr_t)order_idx, op->opaque_data);
+
+ return TEST_SUCCESS;
+}
+
+static int
+check_enc_status_and_ordering(struct rte_bbdev_enc_op *op,
+ unsigned int order_idx, const int expected_status)
+{
+ TEST_ASSERT(op->status == expected_status,
+ "op_status (%d) != expected_status (%d)",
+ op->status, expected_status);
+
+ TEST_ASSERT((void *)(uintptr_t)order_idx == op->opaque_data,
+ "Ordering error, expected %p, got %p",
+ (void *)(uintptr_t)order_idx, op->opaque_data);
+
+ return TEST_SUCCESS;
+}
+
+static inline int
+validate_op_chain(struct rte_bbdev_op_data *op,
+ struct op_data_entries *orig_op)
+{
+ uint8_t i;
+ struct rte_mbuf *m = op->data;
+ uint8_t nb_dst_segments = orig_op->nb_segments;
+
+ TEST_ASSERT(nb_dst_segments == m->nb_segs,
+			"Number of segments differs in original (%u) and filled (%u) op",
+ nb_dst_segments, m->nb_segs);
+
+ for (i = 0; i < nb_dst_segments; ++i) {
+ /* Apply offset to the first mbuf segment */
+ uint16_t offset = (i == 0) ? op->offset : 0;
+ uint16_t data_len = m->data_len - offset;
+
+ TEST_ASSERT(orig_op->segments[i].length == data_len,
+				"Length of segment differs in original (%u) and filled (%u) op",
+ orig_op->segments[i].length, data_len);
+ TEST_ASSERT_BUFFERS_ARE_EQUAL(orig_op->segments[i].addr,
+ rte_pktmbuf_mtod_offset(m, uint32_t *, offset),
+ data_len,
+ "Output buffers (CB=%u) are not equal", i);
+ m = m->next;
+ }
+
+ return TEST_SUCCESS;
+}
+
+static int
+validate_dec_buffers(struct rte_bbdev_dec_op *ref_op, struct test_buffers *bufs,
+ const uint16_t num_to_process)
+{
+ int i;
+
+ struct op_data_entries *hard_data_orig =
+ &test_vector.entries[DATA_HARD_OUTPUT];
+ struct op_data_entries *soft_data_orig =
+ &test_vector.entries[DATA_SOFT_OUTPUT];
+
+ for (i = 0; i < num_to_process; i++) {
+ TEST_ASSERT_SUCCESS(validate_op_chain(&bufs->hard_outputs[i],
+ hard_data_orig),
+ "Hard output buffers are not equal");
+ if (ref_op->turbo_dec.op_flags &
+ RTE_BBDEV_TURBO_SOFT_OUTPUT)
+ TEST_ASSERT_SUCCESS(validate_op_chain(
+ &bufs->soft_outputs[i],
+ soft_data_orig),
+ "Soft output buffers are not equal");
+ }
+
+ return TEST_SUCCESS;
+}
+
+static int
+validate_enc_buffers(struct test_buffers *bufs, const uint16_t num_to_process)
+{
+ int i;
+
+ struct op_data_entries *hard_data_orig =
+ &test_vector.entries[DATA_HARD_OUTPUT];
+
+ for (i = 0; i < num_to_process; i++)
+		TEST_ASSERT_SUCCESS(validate_op_chain(&bufs->hard_outputs[i],
+				hard_data_orig),
+				"Hard output buffers are not equal");
+
+ return TEST_SUCCESS;
+}
+
+static int
+validate_dec_op(struct rte_bbdev_dec_op **ops, const uint16_t n,
+ struct rte_bbdev_dec_op *ref_op, const int vector_mask)
+{
+ unsigned int i;
+ int ret;
+ struct op_data_entries *hard_data_orig =
+ &test_vector.entries[DATA_HARD_OUTPUT];
+ struct op_data_entries *soft_data_orig =
+ &test_vector.entries[DATA_SOFT_OUTPUT];
+ struct rte_bbdev_op_turbo_dec *ops_td;
+ struct rte_bbdev_op_data *hard_output;
+ struct rte_bbdev_op_data *soft_output;
+ struct rte_bbdev_op_turbo_dec *ref_td = &ref_op->turbo_dec;
+
+ for (i = 0; i < n; ++i) {
+ ops_td = &ops[i]->turbo_dec;
+ hard_output = &ops_td->hard_output;
+ soft_output = &ops_td->soft_output;
+
+ if (vector_mask & TEST_BBDEV_VF_EXPECTED_ITER_COUNT)
+ TEST_ASSERT(ops_td->iter_count <= ref_td->iter_count,
+ "Returned iter_count (%d) > expected iter_count (%d)",
+ ops_td->iter_count, ref_td->iter_count);
+ ret = check_dec_status_and_ordering(ops[i], i, ref_op->status);
+ TEST_ASSERT_SUCCESS(ret,
+ "Checking status and ordering for decoder failed");
+
+ TEST_ASSERT_SUCCESS(validate_op_chain(hard_output,
+ hard_data_orig),
+ "Hard output buffers (CB=%u) are not equal",
+ i);
+
+ if (ref_op->turbo_dec.op_flags & RTE_BBDEV_TURBO_SOFT_OUTPUT)
+ TEST_ASSERT_SUCCESS(validate_op_chain(soft_output,
+ soft_data_orig),
+ "Soft output buffers (CB=%u) are not equal",
+ i);
+ }
+
+ return TEST_SUCCESS;
+}
+
+static int
+validate_enc_op(struct rte_bbdev_enc_op **ops, const uint16_t n,
+ struct rte_bbdev_enc_op *ref_op)
+{
+ unsigned int i;
+ int ret;
+ struct op_data_entries *hard_data_orig =
+ &test_vector.entries[DATA_HARD_OUTPUT];
+
+ for (i = 0; i < n; ++i) {
+ ret = check_enc_status_and_ordering(ops[i], i, ref_op->status);
+ TEST_ASSERT_SUCCESS(ret,
+ "Checking status and ordering for encoder failed");
+ TEST_ASSERT_SUCCESS(validate_op_chain(
+ &ops[i]->turbo_enc.output,
+ hard_data_orig),
+ "Output buffers (CB=%u) are not equal",
+ i);
+ }
+
+ return TEST_SUCCESS;
+}
+
+static void
+create_reference_dec_op(struct rte_bbdev_dec_op *op)
+{
+ unsigned int i;
+ struct op_data_entries *entry;
+
+ op->turbo_dec = test_vector.turbo_dec;
+ entry = &test_vector.entries[DATA_INPUT];
+ for (i = 0; i < entry->nb_segments; ++i)
+ op->turbo_dec.input.length +=
+ entry->segments[i].length;
+}
+
+static void
+create_reference_enc_op(struct rte_bbdev_enc_op *op)
+{
+ unsigned int i;
+ struct op_data_entries *entry;
+
+ op->turbo_enc = test_vector.turbo_enc;
+ entry = &test_vector.entries[DATA_INPUT];
+ for (i = 0; i < entry->nb_segments; ++i)
+ op->turbo_enc.input.length +=
+ entry->segments[i].length;
+}
+
+static int
+init_test_op_params(struct test_op_params *op_params,
+ enum rte_bbdev_op_type op_type, const int expected_status,
+ const int vector_mask, struct rte_mempool *ops_mp,
+ uint16_t burst_sz, uint16_t num_to_process, uint16_t num_lcores)
+{
+ int ret = 0;
+ if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+ ret = rte_bbdev_dec_op_alloc_bulk(ops_mp,
+ &op_params->ref_dec_op, 1);
+ else
+ ret = rte_bbdev_enc_op_alloc_bulk(ops_mp,
+ &op_params->ref_enc_op, 1);
+
+ TEST_ASSERT_SUCCESS(ret, "rte_bbdev_op_alloc_bulk() failed");
+
+ op_params->mp = ops_mp;
+ op_params->burst_sz = burst_sz;
+ op_params->num_to_process = num_to_process;
+ op_params->num_lcores = num_lcores;
+ op_params->vector_mask = vector_mask;
+ if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+ op_params->ref_dec_op->status = expected_status;
+ else if (op_type == RTE_BBDEV_OP_TURBO_ENC)
+ op_params->ref_enc_op->status = expected_status;
+
+ return 0;
+}
+
+static int
+run_test_case_on_device(test_case_function *test_case_func, uint8_t dev_id,
+ struct test_op_params *op_params)
+{
+ int t_ret, f_ret, socket_id = SOCKET_ID_ANY;
+ unsigned int i;
+ struct active_device *ad;
+ unsigned int burst_sz = get_burst_sz();
+ enum rte_bbdev_op_type op_type = test_vector.op_type;
+
+ ad = &active_devs[dev_id];
+
+ /* Check if device supports op_type */
+ if (!is_avail_op(ad, test_vector.op_type))
+ return TEST_SUCCESS;
+
+ struct rte_bbdev_info info;
+ rte_bbdev_info_get(ad->dev_id, &info);
+ socket_id = GET_SOCKET(info.socket_id);
+
+ if (op_type == RTE_BBDEV_OP_NONE)
+ op_type = RTE_BBDEV_OP_TURBO_ENC;
+ f_ret = create_mempools(ad, socket_id, op_type,
+ get_num_ops());
+ if (f_ret != TEST_SUCCESS) {
+ printf("Couldn't create mempools\n");
+ goto fail;
+ }
+
+ f_ret = init_test_op_params(op_params, test_vector.op_type,
+ test_vector.expected_status,
+ test_vector.mask,
+ ad->ops_mempool,
+ burst_sz,
+ get_num_ops(),
+ get_num_lcores());
+ if (f_ret != TEST_SUCCESS) {
+ printf("Couldn't init test op params\n");
+ goto fail;
+ }
+
+ if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+ create_reference_dec_op(op_params->ref_dec_op);
+ else if (test_vector.op_type == RTE_BBDEV_OP_TURBO_ENC)
+ create_reference_enc_op(op_params->ref_enc_op);
+
+ for (i = 0; i < ad->nb_queues; ++i) {
+ f_ret = fill_queue_buffers(op_params,
+ ad->in_mbuf_pool,
+ ad->hard_out_mbuf_pool,
+ ad->soft_out_mbuf_pool,
+ ad->queue_ids[i],
+ info.drv.min_alignment,
+ socket_id);
+ if (f_ret != TEST_SUCCESS) {
+ printf("Couldn't init queue buffers\n");
+ goto fail;
+ }
+ }
+
+ /* Run test case function */
+ t_ret = test_case_func(ad, op_params);
+
+ /* Free active device resources and return */
+ free_buffers(ad, op_params);
+ return t_ret;
+
+fail:
+ free_buffers(ad, op_params);
+ return TEST_FAILED;
+}
+
+/* Run given test function per active device per supported op type
+ * per burst size.
+ */
+static int
+run_test_case(test_case_function *test_case_func)
+{
+ int ret = 0;
+ uint8_t dev;
+
+ /* Alloc op_params */
+ struct test_op_params *op_params = rte_zmalloc(NULL,
+ sizeof(struct test_op_params), RTE_CACHE_LINE_SIZE);
+ TEST_ASSERT_NOT_NULL(op_params, "Failed to alloc %zuB for op_params",
+ RTE_ALIGN(sizeof(struct test_op_params),
+ RTE_CACHE_LINE_SIZE));
+
+ /* For each device run test case function */
+ for (dev = 0; dev < nb_active_devs; ++dev)
+ ret |= run_test_case_on_device(test_case_func, dev, op_params);
+
+ rte_free(op_params);
+
+ return ret;
+}
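The loop above ORs each device's return code into a single result, so any one failing device marks the whole run as failed. A minimal dependency-free sketch of that aggregation pattern (`run_on_device` and the failing-device choice are purely illustrative, not part of the patch):

```c
/* Hypothetical stand-in for one device's test run: 0 = pass. */
static int run_on_device(int dev_id)
{
	return (dev_id == 2) ? -1 : 0; /* pretend device 2 fails */
}

/* OR-aggregate per-device results, as run_test_case() does:
 * any failing device makes the whole run report failure. */
static int run_on_all(int nb_devs)
{
	int ret = 0;
	int dev;

	for (dev = 0; dev < nb_devs; ++dev)
		ret |= run_on_device(dev);
	return ret;
}
```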
+
+static void
+dequeue_event_callback(uint16_t dev_id,
+ enum rte_bbdev_event_type event, void *cb_arg,
+ void *ret_param)
+{
+ int ret;
+ uint16_t i;
+ uint64_t total_time;
+ uint16_t deq, burst_sz, num_to_process;
+ uint16_t queue_id = INVALID_QUEUE_ID;
+ struct rte_bbdev_dec_op *dec_ops[MAX_BURST];
+ struct rte_bbdev_enc_op *enc_ops[MAX_BURST];
+ struct test_buffers *bufs;
+ struct rte_bbdev_info info;
+
+ /* Input length in bytes, million operations per second,
+ * million bits per second.
+ */
+ double in_len;
+
+ struct thread_params *tp = cb_arg;
+
+ RTE_SET_USED(ret_param);
+ queue_id = tp->queue_id;
+
+ /* Find matching thread params using queue_id */
+ for (i = 0; i < MAX_QUEUES; ++i, ++tp)
+ if (tp->queue_id == queue_id)
+ break;
+
+ if (i == MAX_QUEUES) {
+ printf("%s: Queue_id from interrupt details was not found!\n",
+ __func__);
+ return;
+ }
+
+ if (unlikely(event != RTE_BBDEV_EVENT_DEQUEUE)) {
+ rte_atomic16_set(&tp->processing_status, TEST_FAILED);
+ printf(
+ "Dequeue interrupt handler called for incorrect event!\n");
+ return;
+ }
+
+ burst_sz = tp->op_params->burst_sz;
+ num_to_process = tp->op_params->num_to_process;
+
+ if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+ deq = rte_bbdev_dequeue_dec_ops(dev_id, queue_id, dec_ops,
+ burst_sz);
+ else
+ deq = rte_bbdev_dequeue_enc_ops(dev_id, queue_id, enc_ops,
+ burst_sz);
+
+ if (deq < burst_sz) {
+ printf(
+ "After receiving the interrupt all operations should be dequeued. Expected: %u, got: %u\n",
+ burst_sz, deq);
+ rte_atomic16_set(&tp->processing_status, TEST_FAILED);
+ return;
+ }
+
+ if (rte_atomic16_read(&tp->nb_dequeued) + deq < num_to_process) {
+ rte_atomic16_add(&tp->nb_dequeued, deq);
+ return;
+ }
+
+ total_time = rte_rdtsc_precise() - tp->start_time;
+
+ rte_bbdev_info_get(dev_id, &info);
+
+ bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+ ret = TEST_SUCCESS;
+ if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+ ret = validate_dec_buffers(tp->op_params->ref_dec_op, bufs,
+ num_to_process);
+ else if (test_vector.op_type == RTE_BBDEV_OP_TURBO_ENC)
+ ret = validate_enc_buffers(bufs, num_to_process);
+
+ if (ret) {
+ printf("Buffers validation failed\n");
+ rte_atomic16_set(&tp->processing_status, TEST_FAILED);
+ }
+
+ switch (test_vector.op_type) {
+ case RTE_BBDEV_OP_TURBO_DEC:
+ in_len = tp->op_params->ref_dec_op->turbo_dec.input.length;
+ break;
+ case RTE_BBDEV_OP_TURBO_ENC:
+ in_len = tp->op_params->ref_enc_op->turbo_enc.input.length;
+ break;
+ case RTE_BBDEV_OP_NONE:
+ in_len = 0.0;
+ break;
+ default:
+ printf("Unknown op type: %d\n", test_vector.op_type);
+ rte_atomic16_set(&tp->processing_status, TEST_FAILED);
+ return;
+ }
+
+ tp->mops = ((double)num_to_process / 1000000.0) /
+ ((double)total_time / (double)rte_get_tsc_hz());
+ tp->mbps = ((double)num_to_process * in_len * 8 / 1000000.0) /
+ ((double)total_time / (double)rte_get_tsc_hz());
+
+ rte_atomic16_add(&tp->nb_dequeued, deq);
+}
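The MOPS/Mbps figures computed above convert a raw TSC cycle count into wall time by dividing by the TSC frequency. The arithmetic, isolated from DPDK as plain helper functions (the numeric inputs in the check are illustrative):

```c
/* Million operations per second, from an op count, elapsed TSC
 * cycles and the TSC frequency, matching the formula used in
 * dequeue_event_callback(). */
static double mops(unsigned int num_ops, unsigned long long cycles,
		unsigned long long tsc_hz)
{
	double seconds = (double)cycles / (double)tsc_hz;

	return ((double)num_ops / 1000000.0) / seconds;
}

/* Million bits per second: ops * per-op input length in bytes * 8. */
static double mbps(unsigned int num_ops, double in_len_bytes,
		unsigned long long cycles, unsigned long long tsc_hz)
{
	double seconds = (double)cycles / (double)tsc_hz;

	return ((double)num_ops * in_len_bytes * 8.0 / 1000000.0) / seconds;
}
```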
+
+static int
+throughput_intr_lcore_dec(void *arg)
+{
+ struct thread_params *tp = arg;
+ unsigned int enqueued;
+ struct rte_bbdev_dec_op *ops[MAX_BURST];
+ const uint16_t queue_id = tp->queue_id;
+ const uint16_t burst_sz = tp->op_params->burst_sz;
+ const uint16_t num_to_process = tp->op_params->num_to_process;
+ struct test_buffers *bufs = NULL;
+ unsigned int allocs_failed = 0;
+ struct rte_bbdev_info info;
+ int ret;
+
+ TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+ "BURST_SIZE should be <= %u", MAX_BURST);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_intr_enable(tp->dev_id, queue_id),
+ "Failed to enable interrupts for dev: %u, queue_id: %u",
+ tp->dev_id, queue_id);
+
+ rte_bbdev_info_get(tp->dev_id, &info);
+ bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+ rte_atomic16_clear(&tp->processing_status);
+ rte_atomic16_clear(&tp->nb_dequeued);
+
+ while (rte_atomic16_read(&tp->op_params->sync) == SYNC_WAIT)
+ rte_pause();
+
+ tp->start_time = rte_rdtsc_precise();
+ for (enqueued = 0; enqueued < num_to_process;) {
+
+ uint16_t num_to_enq = burst_sz;
+
+ if (unlikely(num_to_process - enqueued < num_to_enq))
+ num_to_enq = num_to_process - enqueued;
+
+ ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp, ops,
+ num_to_enq);
+ if (ret != 0) {
+ allocs_failed++;
+ continue;
+ }
+
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+ copy_reference_dec_op(ops, num_to_enq, enqueued,
+ bufs->inputs,
+ bufs->hard_outputs,
+ bufs->soft_outputs,
+ tp->op_params->ref_dec_op);
+
+ enqueued += rte_bbdev_enqueue_dec_ops(tp->dev_id, queue_id, ops,
+ num_to_enq);
+
+ rte_bbdev_dec_op_free_bulk(ops, num_to_enq);
+ }
+
+ if (allocs_failed > 0)
+ printf("WARNING: op allocations failed: %u times\n",
+ allocs_failed);
+
+ return TEST_SUCCESS;
+}
+
+static int
+throughput_intr_lcore_enc(void *arg)
+{
+ struct thread_params *tp = arg;
+ unsigned int enqueued;
+ struct rte_bbdev_enc_op *ops[MAX_BURST];
+ const uint16_t queue_id = tp->queue_id;
+ const uint16_t burst_sz = tp->op_params->burst_sz;
+ const uint16_t num_to_process = tp->op_params->num_to_process;
+ struct test_buffers *bufs = NULL;
+ unsigned int allocs_failed = 0;
+ struct rte_bbdev_info info;
+ int ret;
+
+ TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+ "BURST_SIZE should be <= %u", MAX_BURST);
+
+ TEST_ASSERT_SUCCESS(rte_bbdev_queue_intr_enable(tp->dev_id, queue_id),
+ "Failed to enable interrupts for dev: %u, queue_id: %u",
+ tp->dev_id, queue_id);
+
+ rte_bbdev_info_get(tp->dev_id, &info);
+ bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+ rte_atomic16_clear(&tp->processing_status);
+ rte_atomic16_clear(&tp->nb_dequeued);
+
+ while (rte_atomic16_read(&tp->op_params->sync) == SYNC_WAIT)
+ rte_pause();
+
+ tp->start_time = rte_rdtsc_precise();
+ for (enqueued = 0; enqueued < num_to_process;) {
+
+ uint16_t num_to_enq = burst_sz;
+
+ if (unlikely(num_to_process - enqueued < num_to_enq))
+ num_to_enq = num_to_process - enqueued;
+
+ ret = rte_bbdev_enc_op_alloc_bulk(tp->op_params->mp, ops,
+ num_to_enq);
+ if (ret != 0) {
+ allocs_failed++;
+ continue;
+ }
+
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+ copy_reference_enc_op(ops, num_to_enq, enqueued,
+ bufs->inputs,
+ bufs->hard_outputs,
+ tp->op_params->ref_enc_op);
+
+ enqueued += rte_bbdev_enqueue_enc_ops(tp->dev_id, queue_id, ops,
+ num_to_enq);
+
+ rte_bbdev_enc_op_free_bulk(ops, num_to_enq);
+ }
+
+ if (allocs_failed > 0)
+ printf("WARNING: op allocations failed: %u times\n",
+ allocs_failed);
+
+ return TEST_SUCCESS;
+}
+
+static int
+throughput_pmd_lcore_dec(void *arg)
+{
+ struct thread_params *tp = arg;
+ unsigned int enqueued, dequeued;
+ struct rte_bbdev_dec_op *ops_enq[MAX_BURST], *ops_deq[MAX_BURST];
+ uint64_t total_time, start_time;
+ const uint16_t queue_id = tp->queue_id;
+ const uint16_t burst_sz = tp->op_params->burst_sz;
+ const uint16_t num_to_process = tp->op_params->num_to_process;
+ struct rte_bbdev_dec_op *ref_op = tp->op_params->ref_dec_op;
+ struct test_buffers *bufs = NULL;
+ unsigned int allocs_failed = 0;
+ int ret;
+ struct rte_bbdev_info info;
+
+ /* Input length in bytes, million operations per second, million bits
+ * per second.
+ */
+ double in_len;
+
+ TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+ "BURST_SIZE should be <= %u", MAX_BURST);
+
+ rte_bbdev_info_get(tp->dev_id, &info);
+ bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+ while (rte_atomic16_read(&tp->op_params->sync) == SYNC_WAIT)
+ rte_pause();
+
+ start_time = rte_rdtsc_precise();
+ for (enqueued = 0, dequeued = 0; dequeued < num_to_process;) {
+ uint16_t deq;
+
+ if (likely(enqueued < num_to_process)) {
+
+ uint16_t num_to_enq = burst_sz;
+
+ if (unlikely(num_to_process - enqueued < num_to_enq))
+ num_to_enq = num_to_process - enqueued;
+
+ ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp,
+ ops_enq, num_to_enq);
+ if (ret != 0) {
+ allocs_failed++;
+ goto do_dequeue;
+ }
+
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+ copy_reference_dec_op(ops_enq, num_to_enq,
+ enqueued,
+ bufs->inputs,
+ bufs->hard_outputs,
+ bufs->soft_outputs,
+ ref_op);
+
+ enqueued += rte_bbdev_enqueue_dec_ops(tp->dev_id,
+ queue_id, ops_enq, num_to_enq);
+ }
+do_dequeue:
+ deq = rte_bbdev_dequeue_dec_ops(tp->dev_id, queue_id, ops_deq,
+ burst_sz);
+ dequeued += deq;
+ rte_bbdev_dec_op_free_bulk(ops_enq, deq);
+ }
+ total_time = rte_rdtsc_precise() - start_time;
+
+ if (allocs_failed > 0)
+ printf("WARNING: op allocations failed: %u times\n",
+ allocs_failed);
+
+ TEST_ASSERT(enqueued == dequeued, "enqueued (%u) != dequeued (%u)",
+ enqueued, dequeued);
+
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+ ret = validate_dec_buffers(ref_op, bufs, num_to_process);
+ TEST_ASSERT_SUCCESS(ret, "Buffers validation failed");
+ }
+
+ in_len = ref_op->turbo_dec.input.length;
+ tp->mops = ((double)num_to_process / 1000000.0) /
+ ((double)total_time / (double)rte_get_tsc_hz());
+ tp->mbps = ((double)num_to_process * in_len * 8 / 1000000.0) /
+ ((double)total_time / (double)rte_get_tsc_hz());
+
+ return TEST_SUCCESS;
+}
+
+static int
+throughput_pmd_lcore_enc(void *arg)
+{
+ struct thread_params *tp = arg;
+ unsigned int enqueued, dequeued;
+ struct rte_bbdev_enc_op *ops_enq[MAX_BURST], *ops_deq[MAX_BURST];
+ uint64_t total_time, start_time;
+ const uint16_t queue_id = tp->queue_id;
+ const uint16_t burst_sz = tp->op_params->burst_sz;
+ const uint16_t num_to_process = tp->op_params->num_to_process;
+ struct rte_bbdev_enc_op *ref_op = tp->op_params->ref_enc_op;
+ struct test_buffers *bufs = NULL;
+ unsigned int allocs_failed = 0;
+ int ret;
+ struct rte_bbdev_info info;
+
+ /* Input length in bytes, million operations per second, million bits
+ * per second.
+ */
+ double in_len;
+
+ TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+ "BURST_SIZE should be <= %u", MAX_BURST);
+
+ rte_bbdev_info_get(tp->dev_id, &info);
+ bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+ while (rte_atomic16_read(&tp->op_params->sync) == SYNC_WAIT)
+ rte_pause();
+
+ start_time = rte_rdtsc_precise();
+ for (enqueued = 0, dequeued = 0; dequeued < num_to_process;) {
+ uint16_t deq;
+
+ if (likely(enqueued < num_to_process)) {
+
+ uint16_t num_to_enq = burst_sz;
+
+ if (unlikely(num_to_process - enqueued < num_to_enq))
+ num_to_enq = num_to_process - enqueued;
+
+ ret = rte_bbdev_enc_op_alloc_bulk(tp->op_params->mp,
+ ops_enq, num_to_enq);
+ if (ret != 0) {
+ allocs_failed++;
+ goto do_dequeue;
+ }
+
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+ copy_reference_enc_op(ops_enq, num_to_enq,
+ enqueued,
+ bufs->inputs,
+ bufs->hard_outputs,
+ ref_op);
+
+ enqueued += rte_bbdev_enqueue_enc_ops(tp->dev_id,
+ queue_id, ops_enq, num_to_enq);
+ }
+do_dequeue:
+ deq = rte_bbdev_dequeue_enc_ops(tp->dev_id, queue_id, ops_deq,
+ burst_sz);
+ dequeued += deq;
+ rte_bbdev_enc_op_free_bulk(ops_enq, deq);
+ }
+ total_time = rte_rdtsc_precise() - start_time;
+
+ if (allocs_failed > 0)
+ printf("WARNING: op allocations failed: %u times\n",
+ allocs_failed);
+
+ TEST_ASSERT(enqueued == dequeued, "enqueued (%u) != dequeued (%u)",
+ enqueued, dequeued);
+
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+ ret = validate_enc_buffers(bufs, num_to_process);
+ TEST_ASSERT_SUCCESS(ret, "Buffers validation failed");
+ }
+
+ in_len = ref_op->turbo_enc.input.length;
+
+ tp->mops = ((double)num_to_process / 1000000.0) /
+ ((double)total_time / (double)rte_get_tsc_hz());
+ tp->mbps = ((double)num_to_process * in_len * 8 / 1000000.0) /
+ ((double)total_time / (double)rte_get_tsc_hz());
+
+ return TEST_SUCCESS;
+}
+static void
+print_throughput(struct thread_params *t_params, unsigned int used_cores)
+{
+ unsigned int lcore_id, iter = 0;
+ double total_mops = 0, total_mbps = 0;
+
+ RTE_LCORE_FOREACH(lcore_id) {
+ if (iter++ >= used_cores)
+ break;
+ printf("\tlcore_id: %u, throughput: %.8lg MOPS, %.8lg Mbps\n",
+ lcore_id, t_params[lcore_id].mops, t_params[lcore_id].mbps);
+ total_mops += t_params[lcore_id].mops;
+ total_mbps += t_params[lcore_id].mbps;
+ }
+ printf(
+ "\n\tTotal stats for %u cores: throughput: %.8lg MOPS, %.8lg Mbps\n",
+ used_cores, total_mops, total_mbps);
+}
+
+/*
+ * Test function that determines how long an enqueue + dequeue of a burst
+ * takes on available lcores.
+ */
+static int
+throughput_test(struct active_device *ad,
+ struct test_op_params *op_params)
+{
+ int ret;
+ unsigned int lcore_id, used_cores = 0;
+ struct thread_params t_params[MAX_QUEUES];
+ struct rte_bbdev_info info;
+ lcore_function_t *throughput_function;
+ struct thread_params *tp;
+ uint16_t num_lcores;
+ const char *op_type_str;
+
+ rte_bbdev_info_get(ad->dev_id, &info);
+
+ op_type_str = rte_bbdev_op_type_str(test_vector.op_type);
+ TEST_ASSERT_NOT_NULL(op_type_str, "Invalid op type: %u",
+ test_vector.op_type);
+
+ printf(
+ "Throughput test: dev: %s, nb_queues: %u, burst size: %u, num ops: %u, num_lcores: %u, op type: %s, int mode: %s, GHz: %lg\n",
+ info.dev_name, ad->nb_queues, op_params->burst_sz,
+ op_params->num_to_process, op_params->num_lcores,
+ op_type_str,
+ intr_enabled ? "Interrupt mode" : "PMD mode",
+ (double)rte_get_tsc_hz() / 1000000000.0);
+
+ /* Set number of lcores */
+ num_lcores = (ad->nb_queues < (op_params->num_lcores))
+ ? ad->nb_queues
+ : op_params->num_lcores;
+
+ if (intr_enabled) {
+ if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+ throughput_function = throughput_intr_lcore_dec;
+ else
+ throughput_function = throughput_intr_lcore_enc;
+
+ /* Dequeue interrupt callback registration */
+ rte_bbdev_callback_register(ad->dev_id, RTE_BBDEV_EVENT_DEQUEUE,
+ dequeue_event_callback,
+ &t_params);
+ } else {
+ if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
+ throughput_function = throughput_pmd_lcore_dec;
+ else
+ throughput_function = throughput_pmd_lcore_enc;
+ }
+
+ rte_atomic16_set(&op_params->sync, SYNC_WAIT);
+
+ t_params[rte_lcore_id()].dev_id = ad->dev_id;
+ t_params[rte_lcore_id()].op_params = op_params;
+ t_params[rte_lcore_id()].queue_id =
+ ad->queue_ids[used_cores++];
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (used_cores >= num_lcores)
+ break;
+
+ t_params[lcore_id].dev_id = ad->dev_id;
+ t_params[lcore_id].op_params = op_params;
+ t_params[lcore_id].queue_id = ad->queue_ids[used_cores++];
+
+ rte_eal_remote_launch(throughput_function, &t_params[lcore_id],
+ lcore_id);
+ }
+
+ rte_atomic16_set(&op_params->sync, SYNC_START);
+ ret = throughput_function(&t_params[rte_lcore_id()]);
+
+ /* Master core is always used */
+ used_cores = 1;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (used_cores++ >= num_lcores)
+ break;
+
+ ret |= rte_eal_wait_lcore(lcore_id);
+ }
+
+ /* Return if test failed */
+ if (ret)
+ return ret;
+
+ /* Print throughput if interrupts are disabled and test passed */
+ if (!intr_enabled) {
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+ print_throughput(t_params, num_lcores);
+ return ret;
+ }
+
+/* In interrupt TC we need to wait for the interrupt callback to dequeue
+ * all pending operations. Skip waiting for queues which reported an
+ * error using processing_status variable.
+ * Wait for master lcore operations.
+ */
+ tp = &t_params[rte_lcore_id()];
+ while ((rte_atomic16_read(&tp->nb_dequeued) <
+ op_params->num_to_process) &&
+ (rte_atomic16_read(&tp->processing_status) !=
+ TEST_FAILED))
+ rte_pause();
+
+ ret |= rte_atomic16_read(&tp->processing_status);
+
+ /* Wait for slave lcores operations */
+ used_cores = 1;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ tp = &t_params[lcore_id];
+ if (used_cores++ >= num_lcores)
+ break;
+
+ while ((rte_atomic16_read(&tp->nb_dequeued) <
+ op_params->num_to_process) &&
+ (rte_atomic16_read(&tp->processing_status) !=
+ TEST_FAILED))
+ rte_pause();
+
+ ret |= rte_atomic16_read(&tp->processing_status);
+ }
+
+ /* Print throughput if test passed */
+ if (!ret && test_vector.op_type != RTE_BBDEV_OP_NONE)
+ print_throughput(t_params, num_lcores);
+
+ return ret;
+}
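The launch sequence above parks every lcore on a SYNC_WAIT spin, then flips the flag to SYNC_START so all workers begin timing together. A dependency-free sketch of that rendezvous using C11 atomics in place of rte_atomic16 (names are illustrative; the real test also calls rte_pause() inside the spin):

```c
#include <stdatomic.h>

enum { SYNC_WAIT = 0, SYNC_START = 1 };

static atomic_int sync_flag = SYNC_WAIT;

/* Worker side: spin until the master signals the start. */
static void wait_for_start(void)
{
	while (atomic_load(&sync_flag) == SYNC_WAIT)
		; /* rte_pause() here in the real test */
}

/* Master side: release all waiting workers at once. */
static void signal_start(void)
{
	atomic_store(&sync_flag, SYNC_START);
}
```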
+
+static int
+operation_latency_test_dec(struct rte_mempool *mempool,
+ struct test_buffers *bufs, struct rte_bbdev_dec_op *ref_op,
+ int vector_mask, uint16_t dev_id, uint16_t queue_id,
+ const uint16_t num_to_process, uint16_t burst_sz,
+ uint64_t *total_time)
+{
+ int ret = TEST_SUCCESS;
+ uint16_t i, j, dequeued;
+ struct rte_bbdev_dec_op *ops_enq[MAX_BURST], *ops_deq[MAX_BURST];
+ uint64_t start_time = 0;
+
+ for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+ uint16_t enq = 0, deq = 0;
+ bool first_time = true;
+
+ if (unlikely(num_to_process - dequeued < burst_sz))
+ burst_sz = num_to_process - dequeued;
+
+ rte_bbdev_dec_op_alloc_bulk(mempool, ops_enq, burst_sz);
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+ copy_reference_dec_op(ops_enq, burst_sz, dequeued,
+ bufs->inputs,
+ bufs->hard_outputs,
+ bufs->soft_outputs,
+ ref_op);
+
+ /* Set counter to validate the ordering */
+ for (j = 0; j < burst_sz; ++j)
+ ops_enq[j]->opaque_data = (void *)(uintptr_t)j;
+
+ start_time = rte_rdtsc_precise();
+
+ enq = rte_bbdev_enqueue_dec_ops(dev_id, queue_id, &ops_enq[enq],
+ burst_sz);
+ TEST_ASSERT(enq == burst_sz,
+ "Error enqueueing burst, expected %u, got %u",
+ burst_sz, enq);
+
+ /* Dequeue */
+ do {
+ deq += rte_bbdev_dequeue_dec_ops(dev_id, queue_id,
+ &ops_deq[deq], burst_sz - deq);
+ if (likely(first_time && (deq > 0))) {
+ *total_time += rte_rdtsc_precise() - start_time;
+ first_time = false;
+ }
+ } while (unlikely(burst_sz != deq));
+
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+ ret = validate_dec_op(ops_deq, burst_sz, ref_op,
+ vector_mask);
+ TEST_ASSERT_SUCCESS(ret, "Validation failed!");
+ }
+
+ rte_bbdev_dec_op_free_bulk(ops_enq, deq);
+ dequeued += deq;
+ }
+
+ return i;
+}
+
+static int
+operation_latency_test_enc(struct rte_mempool *mempool,
+ struct test_buffers *bufs, struct rte_bbdev_enc_op *ref_op,
+ uint16_t dev_id, uint16_t queue_id,
+ const uint16_t num_to_process, uint16_t burst_sz,
+ uint64_t *total_time)
+{
+ int ret = TEST_SUCCESS;
+ uint16_t i, j, dequeued;
+ struct rte_bbdev_enc_op *ops_enq[MAX_BURST], *ops_deq[MAX_BURST];
+ uint64_t start_time = 0;
+
+ for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+ uint16_t enq = 0, deq = 0;
+ bool first_time = true;
+
+ if (unlikely(num_to_process - dequeued < burst_sz))
+ burst_sz = num_to_process - dequeued;
+
+ rte_bbdev_enc_op_alloc_bulk(mempool, ops_enq, burst_sz);
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+ copy_reference_enc_op(ops_enq, burst_sz, dequeued,
+ bufs->inputs,
+ bufs->hard_outputs,
+ ref_op);
+
+ /* Set counter to validate the ordering */
+ for (j = 0; j < burst_sz; ++j)
+ ops_enq[j]->opaque_data = (void *)(uintptr_t)j;
+
+ start_time = rte_rdtsc_precise();
+
+ enq = rte_bbdev_enqueue_enc_ops(dev_id, queue_id, &ops_enq[enq],
+ burst_sz);
+ TEST_ASSERT(enq == burst_sz,
+ "Error enqueueing burst, expected %u, got %u",
+ burst_sz, enq);
+
+ /* Dequeue */
+ do {
+ deq += rte_bbdev_dequeue_enc_ops(dev_id, queue_id,
+ &ops_deq[deq], burst_sz - deq);
+ if (likely(first_time && (deq > 0))) {
+ *total_time += rte_rdtsc_precise() - start_time;
+ first_time = false;
+ }
+ } while (unlikely(burst_sz != deq));
+
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+ ret = validate_enc_op(ops_deq, burst_sz, ref_op);
+ TEST_ASSERT_SUCCESS(ret, "Validation failed!");
+ }
+
+ rte_bbdev_enc_op_free_bulk(ops_enq, deq);
+ dequeued += deq;
+ }
+
+ return i;
+}
+
+static int
+operation_latency_test(struct active_device *ad,
+ struct test_op_params *op_params)
+{
+ int iter;
+ uint16_t burst_sz = op_params->burst_sz;
+ const uint16_t num_to_process = op_params->num_to_process;
+ const enum rte_bbdev_op_type op_type = test_vector.op_type;
+ const uint16_t queue_id = ad->queue_ids[0];
+ struct test_buffers *bufs = NULL;
+ struct rte_bbdev_info info;
+ uint64_t total_time = 0;
+ const char *op_type_str;
+
+ TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+ "BURST_SIZE should be <= %u", MAX_BURST);
+
+ rte_bbdev_info_get(ad->dev_id, &info);
+ bufs = &op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+ op_type_str = rte_bbdev_op_type_str(op_type);
+ TEST_ASSERT_NOT_NULL(op_type_str, "Invalid op type: %u", op_type);
+
+ printf(
+ "Validation/Latency test: dev: %s, burst size: %u, num ops: %u, op type: %s\n",
+ info.dev_name, burst_sz, num_to_process, op_type_str);
+
+ if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+ iter = operation_latency_test_dec(op_params->mp, bufs,
+ op_params->ref_dec_op, op_params->vector_mask,
+ ad->dev_id, queue_id, num_to_process,
+ burst_sz, &total_time);
+ else
+ iter = operation_latency_test_enc(op_params->mp, bufs,
+ op_params->ref_enc_op, ad->dev_id, queue_id,
+ num_to_process, burst_sz, &total_time);
+
+ if (iter < 0)
+ return TEST_FAILED;
+
+ printf("\toperation avg. latency: %lg cycles, %lg us\n",
+ (double)total_time / (double)iter,
+ (double)(total_time * 1000000) / (double)iter /
+ (double)rte_get_tsc_hz());
+
+ return TEST_SUCCESS;
+}
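The average printed above divides the accumulated cycle count by the number of iterations and converts to microseconds via the TSC frequency. That conversion, isolated (values in the check are illustrative):

```c
/* Average latency in microseconds from total TSC cycles accumulated
 * over `iter` iterations, as operation_latency_test() prints it. */
static double avg_latency_us(unsigned long long total_cycles, int iter,
		unsigned long long tsc_hz)
{
	return ((double)(total_cycles * 1000000ULL) / (double)iter) /
			(double)tsc_hz;
}
```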
+
+static int
+offload_latency_test_dec(struct rte_mempool *mempool, struct test_buffers *bufs,
+ struct rte_bbdev_dec_op *ref_op, uint16_t dev_id,
+ uint16_t queue_id, const uint16_t num_to_process,
+ uint16_t burst_sz, uint64_t *enq_total_time,
+ uint64_t *deq_total_time)
+{
+ int i, dequeued;
+ struct rte_bbdev_dec_op *ops_enq[MAX_BURST], *ops_deq[MAX_BURST];
+ uint64_t enq_start_time, deq_start_time;
+
+ for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+ uint16_t enq = 0, deq = 0;
+
+ if (unlikely(num_to_process - dequeued < burst_sz))
+ burst_sz = num_to_process - dequeued;
+
+ rte_bbdev_dec_op_alloc_bulk(mempool, ops_enq, burst_sz);
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+ copy_reference_dec_op(ops_enq, burst_sz, dequeued,
+ bufs->inputs,
+ bufs->hard_outputs,
+ bufs->soft_outputs,
+ ref_op);
+
+ /* Start time measurement for enqueue function offload latency */
+ enq_start_time = rte_rdtsc();
+ do {
+ enq += rte_bbdev_enqueue_dec_ops(dev_id, queue_id,
+ &ops_enq[enq], burst_sz - enq);
+ } while (unlikely(burst_sz != enq));
+ *enq_total_time += rte_rdtsc() - enq_start_time;
+
+ /* ensure enqueue has been completed */
+ rte_delay_ms(10);
+
+ /* Start time measurement for dequeue function offload latency */
+ deq_start_time = rte_rdtsc();
+ do {
+ deq += rte_bbdev_dequeue_dec_ops(dev_id, queue_id,
+ &ops_deq[deq], burst_sz - deq);
+ } while (unlikely(burst_sz != deq));
+ *deq_total_time += rte_rdtsc() - deq_start_time;
+
+ rte_bbdev_dec_op_free_bulk(ops_enq, deq);
+ dequeued += deq;
+ }
+
+ return i;
+}
+
+static int
+offload_latency_test_enc(struct rte_mempool *mempool, struct test_buffers *bufs,
+ struct rte_bbdev_enc_op *ref_op, uint16_t dev_id,
+ uint16_t queue_id, const uint16_t num_to_process,
+ uint16_t burst_sz, uint64_t *enq_total_time,
+ uint64_t *deq_total_time)
+{
+ int i, dequeued;
+ struct rte_bbdev_enc_op *ops_enq[MAX_BURST], *ops_deq[MAX_BURST];
+ uint64_t enq_start_time, deq_start_time;
+
+ for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+ uint16_t enq = 0, deq = 0;
+
+ if (unlikely(num_to_process - dequeued < burst_sz))
+ burst_sz = num_to_process - dequeued;
+
+ rte_bbdev_enc_op_alloc_bulk(mempool, ops_enq, burst_sz);
+ if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+ copy_reference_enc_op(ops_enq, burst_sz, dequeued,
+ bufs->inputs,
+ bufs->hard_outputs,
+ ref_op);
+
+ /* Start time measurement for enqueue function offload latency */
+ enq_start_time = rte_rdtsc();
+ do {
+ enq += rte_bbdev_enqueue_enc_ops(dev_id, queue_id,
+ &ops_enq[enq], burst_sz - enq);
+ } while (unlikely(burst_sz != enq));
+ *enq_total_time += rte_rdtsc() - enq_start_time;
+
+ /* ensure enqueue has been completed */
+ rte_delay_ms(10);
+
+ /* Start time measurement for dequeue function offload latency */
+ deq_start_time = rte_rdtsc();
+ do {
+ deq += rte_bbdev_dequeue_enc_ops(dev_id, queue_id,
+ &ops_deq[deq], burst_sz - deq);
+ } while (unlikely(burst_sz != deq));
+ *deq_total_time += rte_rdtsc() - deq_start_time;
+
+ rte_bbdev_enc_op_free_bulk(ops_enq, deq);
+ dequeued += deq;
+ }
+
+ return i;
+}
+
+static int
+offload_latency_test(struct active_device *ad,
+ struct test_op_params *op_params)
+{
+ int iter;
+ uint64_t enq_total_time = 0, deq_total_time = 0;
+ uint16_t burst_sz = op_params->burst_sz;
+ const uint16_t num_to_process = op_params->num_to_process;
+ const enum rte_bbdev_op_type op_type = test_vector.op_type;
+ const uint16_t queue_id = ad->queue_ids[0];
+ struct test_buffers *bufs = NULL;
+ struct rte_bbdev_info info;
+ const char *op_type_str;
+
+ TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+ "BURST_SIZE should be <= %u", MAX_BURST);
+
+ rte_bbdev_info_get(ad->dev_id, &info);
+ bufs = &op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+ op_type_str = rte_bbdev_op_type_str(op_type);
+ TEST_ASSERT_NOT_NULL(op_type_str, "Invalid op type: %u", op_type);
+
+ printf(
+ "Offload latency test: dev: %s, burst size: %u, num ops: %u, op type: %s\n",
+ info.dev_name, burst_sz, num_to_process, op_type_str);
+
+ if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+ iter = offload_latency_test_dec(op_params->mp, bufs,
+ op_params->ref_dec_op, ad->dev_id, queue_id,
+ num_to_process, burst_sz, &enq_total_time,
+ &deq_total_time);
+ else
+ iter = offload_latency_test_enc(op_params->mp, bufs,
+ op_params->ref_enc_op, ad->dev_id, queue_id,
+ num_to_process, burst_sz, &enq_total_time,
+ &deq_total_time);
+
+ if (iter < 0)
+ return TEST_FAILED;
+
+ printf("\tenq offload avg. latency: %lg cycles, %lg us\n",
+ (double)enq_total_time / (double)iter,
+ (double)(enq_total_time * 1000000) / (double)iter /
+ (double)rte_get_tsc_hz());
+
+ printf("\tdeq offload avg. latency: %lg cycles, %lg us\n",
+ (double)deq_total_time / (double)iter,
+ (double)(deq_total_time * 1000000) / (double)iter /
+ (double)rte_get_tsc_hz());
+
+ return TEST_SUCCESS;
+}
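The timed loops above rely on a retry-until-complete pattern: a burst API such as rte_bbdev_enqueue_dec_ops() may accept fewer ops than offered, so the remainder is re-offered until the whole burst is in. A standalone sketch with a mock burst function (the mock's per-call limit of 3 is purely illustrative):

```c
typedef unsigned short (*burst_fn)(void **ops, unsigned short count);

static unsigned short calls;

/* Mock burst API: accepts at most 3 ops per call, counts invocations. */
static unsigned short mock_submit(void **ops, unsigned short count)
{
	(void)ops;
	calls++;
	return count > 3 ? 3 : count;
}

/* Keep offering the remainder until the full burst is accepted,
 * mirroring the do/while loops in offload_latency_test_*(). */
static void submit_full_burst(burst_fn submit, void **ops,
		unsigned short burst_sz)
{
	unsigned short done = 0;

	while (done != burst_sz)
		done += submit(&ops[done], (unsigned short)(burst_sz - done));
}

/* Drive the mock with a burst of 8: it takes 3 + 3 + 2 = three calls. */
static unsigned short drive_mock(void)
{
	void *ops[8] = { 0 };

	calls = 0;
	submit_full_burst(mock_submit, ops, 8);
	return calls;
}
```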
+
+static int
+offload_latency_empty_q_test_dec(uint16_t dev_id, uint16_t queue_id,
+ const uint16_t num_to_process, uint16_t burst_sz,
+ uint64_t *deq_total_time)
+{
+ int i, deq_total;
+ struct rte_bbdev_dec_op *ops[MAX_BURST];
+ uint64_t deq_start_time;
+
+ /* Test deq offload latency from an empty queue */
+ deq_start_time = rte_rdtsc_precise();
+ for (i = 0, deq_total = 0; deq_total < num_to_process;
+ ++i, deq_total += burst_sz) {
+ if (unlikely(num_to_process - deq_total < burst_sz))
+ burst_sz = num_to_process - deq_total;
+ rte_bbdev_dequeue_dec_ops(dev_id, queue_id, ops, burst_sz);
+ }
+ *deq_total_time = rte_rdtsc_precise() - deq_start_time;
+
+ return i;
+}
+
+static int
+offload_latency_empty_q_test_enc(uint16_t dev_id, uint16_t queue_id,
+ const uint16_t num_to_process, uint16_t burst_sz,
+ uint64_t *deq_total_time)
+{
+ int i, deq_total;
+ struct rte_bbdev_enc_op *ops[MAX_BURST];
+ uint64_t deq_start_time;
+
+ /* Test deq offload latency from an empty queue */
+ deq_start_time = rte_rdtsc_precise();
+ for (i = 0, deq_total = 0; deq_total < num_to_process;
+ ++i, deq_total += burst_sz) {
+ if (unlikely(num_to_process - deq_total < burst_sz))
+ burst_sz = num_to_process - deq_total;
+ rte_bbdev_dequeue_enc_ops(dev_id, queue_id, ops, burst_sz);
+ }
+ *deq_total_time = rte_rdtsc_precise() - deq_start_time;
+
+ return i;
+}
+
+static int
+offload_latency_empty_q_test(struct active_device *ad,
+ struct test_op_params *op_params)
+{
+ int iter;
+ uint64_t deq_total_time = 0;
+ uint16_t burst_sz = op_params->burst_sz;
+ const uint16_t num_to_process = op_params->num_to_process;
+ const enum rte_bbdev_op_type op_type = test_vector.op_type;
+ const uint16_t queue_id = ad->queue_ids[0];
+ struct rte_bbdev_info info;
+ const char *op_type_str;
+
+ TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST),
+ "BURST_SIZE should be <= %u", MAX_BURST);
+
+ rte_bbdev_info_get(ad->dev_id, &info);
+
+ op_type_str = rte_bbdev_op_type_str(op_type);
+ TEST_ASSERT_NOT_NULL(op_type_str, "Invalid op type: %u", op_type);
+
+ printf(
+ "Offload latency empty dequeue test: dev: %s, burst size: %u, num ops: %u, op type: %s\n",
+ info.dev_name, burst_sz, num_to_process, op_type_str);
+
+ if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+ iter = offload_latency_empty_q_test_dec(ad->dev_id, queue_id,
+ num_to_process, burst_sz, &deq_total_time);
+ else
+ iter = offload_latency_empty_q_test_enc(ad->dev_id, queue_id,
+ num_to_process, burst_sz, &deq_total_time);
+
+ if (iter < 0)
+ return TEST_FAILED;
+
+ printf("\tempty deq offload avg. latency: %lg cycles, %lg us\n",
+ (double)deq_total_time / (double)iter,
+ (double)(deq_total_time * 1000000) / (double)iter /
+ (double)rte_get_tsc_hz());
+
+ return TEST_SUCCESS;
+}
+
+static int
+throughput_tc(void)
+{
+ return run_test_case(throughput_test);
+}
+
+static int
+offload_latency_tc(void)
+{
+ return run_test_case(offload_latency_test);
+}
+
+static int
+offload_latency_empty_q_tc(void)
+{
+ return run_test_case(offload_latency_empty_q_test);
+}
+
+static int
+operation_latency_tc(void)
+{
+ return run_test_case(operation_latency_test);
+}
+
+static int
+interrupt_tc(void)
+{
+ return run_test_case(throughput_test);
+}
+
+static struct unit_test_suite bbdev_throughput_testsuite = {
+ .suite_name = "BBdev Throughput Tests",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup, ut_teardown, throughput_tc),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+static struct unit_test_suite bbdev_validation_testsuite = {
+ .suite_name = "BBdev Validation Tests",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup, ut_teardown, operation_latency_tc),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+static struct unit_test_suite bbdev_latency_testsuite = {
+ .suite_name = "BBdev Latency Tests",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup, ut_teardown, offload_latency_tc),
+ TEST_CASE_ST(ut_setup, ut_teardown, offload_latency_empty_q_tc),
+ TEST_CASE_ST(ut_setup, ut_teardown, operation_latency_tc),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+static struct unit_test_suite bbdev_interrupt_testsuite = {
+ .suite_name = "BBdev Interrupt Tests",
+ .setup = interrupt_testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup, ut_teardown, interrupt_tc),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+REGISTER_TEST_COMMAND(throughput, bbdev_throughput_testsuite);
+REGISTER_TEST_COMMAND(validation, bbdev_validation_testsuite);
+REGISTER_TEST_COMMAND(latency, bbdev_latency_testsuite);
+REGISTER_TEST_COMMAND(interrupt, bbdev_interrupt_testsuite);
diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
new file mode 100644
index 0000000..2d0852c
--- /dev/null
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -0,0 +1,937 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifdef RTE_EXEC_ENV_BSDAPP
+ #define _WITH_GETLINE
+#endif
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_malloc.h>
+
+#include "test_bbdev_vector.h"
+
+#define VALUE_DELIMITER ","
+#define ENTRY_DELIMITER "="
+
+const char *op_data_prefixes[] = {
+ "input",
+ "soft_output",
+ "hard_output",
+};
+
+/* trim leading and trailing spaces */
+static void
+trim_space(char *str)
+{
+ char *start, *end;
+
+ for (start = str; *start; start++) {
+ if (!isspace((unsigned char) start[0]))
+ break;
+ }
+
+ for (end = start + strlen(start); end > start + 1; end--) {
+ if (!isspace((unsigned char) end[-1]))
+ break;
+ }
+
+ *end = 0;
+
+ /* Shift from "start" to the beginning of the string */
+ if (start > str)
+ memmove(str, start, (end - start) + 1);
+}
+
+static bool
+starts_with(const char *str, const char *pre)
+{
+ return strncmp(pre, str, strlen(pre)) == 0;
+}
+
+/* tokenize test values separated by a comma */
+static int
+parse_values(char *tokens, uint32_t **data, uint32_t *data_length)
+{
+ uint32_t n_tokens = 0;
+ uint32_t data_size = 32;
+
+ uint32_t *values, *values_resized;
+ char *tok, *error = NULL;
+
+ tok = strtok(tokens, VALUE_DELIMITER);
+ if (tok == NULL)
+ return -1;
+
+ values = (uint32_t *)
+ rte_zmalloc(NULL, sizeof(uint32_t) * data_size, 0);
+ if (values == NULL)
+ return -1;
+
+ while (tok != NULL) {
+ values_resized = NULL;
+
+ if (n_tokens >= data_size) {
+ data_size *= 2;
+
+ values_resized = (uint32_t *) rte_realloc(values,
+ sizeof(uint32_t) * data_size, 0);
+ if (values_resized == NULL) {
+ rte_free(values);
+ return -1;
+ }
+ values = values_resized;
+ }
+
+ values[n_tokens] = (uint32_t) strtoul(tok, &error, 0);
+ if ((error == NULL) || (*error != '\0')) {
+			printf("Failed to convert '%s'\n", tok);
+ rte_free(values);
+ return -1;
+ }
+
+ *data_length = *data_length + (strlen(tok) - strlen("0x"))/2;
+
+ tok = strtok(NULL, VALUE_DELIMITER);
+ if (tok == NULL)
+ break;
+
+ n_tokens++;
+ }
+
+ values_resized = (uint32_t *) rte_realloc(values,
+ sizeof(uint32_t) * (n_tokens + 1), 0);
+
+ if (values_resized == NULL) {
+ rte_free(values);
+ return -1;
+ }
+
+ *data = values_resized;
+
+ return 0;
+}
+
+/* convert turbo decoder flag from string to unsigned long int */
+static int
+op_decoder_flag_strtoul(char *token, uint32_t *op_flag_value)
+{
+ if (!strcmp(token, "RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE"))
+ *op_flag_value = RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_CRC_TYPE_24B"))
+ *op_flag_value = RTE_BBDEV_TURBO_CRC_TYPE_24B;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_EQUALIZER"))
+ *op_flag_value = RTE_BBDEV_TURBO_EQUALIZER;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_SOFT_OUT_SATURATE"))
+ *op_flag_value = RTE_BBDEV_TURBO_SOFT_OUT_SATURATE;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_HALF_ITERATION_EVEN"))
+ *op_flag_value = RTE_BBDEV_TURBO_HALF_ITERATION_EVEN;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH"))
+ *op_flag_value = RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_SOFT_OUTPUT"))
+ *op_flag_value = RTE_BBDEV_TURBO_SOFT_OUTPUT;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_EARLY_TERMINATION"))
+ *op_flag_value = RTE_BBDEV_TURBO_EARLY_TERMINATION;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN"))
+ *op_flag_value = RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN"))
+ *op_flag_value = RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT"))
+ *op_flag_value = RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT"))
+ *op_flag_value = RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_MAP_DEC"))
+ *op_flag_value = RTE_BBDEV_TURBO_MAP_DEC;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_DEC_SCATTER_GATHER"))
+ *op_flag_value = RTE_BBDEV_TURBO_DEC_SCATTER_GATHER;
+ else {
+ printf("The given value is not a turbo decoder flag\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* convert turbo encoder flag from string to unsigned long int */
+static int
+op_encoder_flag_strtoul(char *token, uint32_t *op_flag_value)
+{
+ if (!strcmp(token, "RTE_BBDEV_TURBO_RV_INDEX_BYPASS"))
+ *op_flag_value = RTE_BBDEV_TURBO_RV_INDEX_BYPASS;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_RATE_MATCH"))
+ *op_flag_value = RTE_BBDEV_TURBO_RATE_MATCH;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_CRC_24B_ATTACH"))
+ *op_flag_value = RTE_BBDEV_TURBO_CRC_24B_ATTACH;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_CRC_24A_ATTACH"))
+ *op_flag_value = RTE_BBDEV_TURBO_CRC_24A_ATTACH;
+ else if (!strcmp(token, "RTE_BBDEV_TURBO_ENC_SCATTER_GATHER"))
+ *op_flag_value = RTE_BBDEV_TURBO_ENC_SCATTER_GATHER;
+ else {
+ printf("The given value is not a turbo encoder flag\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* tokenize turbo decoder/encoder flag values separated by a comma */
+static int
+parse_turbo_flags(char *tokens, uint32_t *op_flags,
+ enum rte_bbdev_op_type op_type)
+{
+ char *tok = NULL;
+ uint32_t op_flag_value = 0;
+
+ tok = strtok(tokens, VALUE_DELIMITER);
+ if (tok == NULL)
+ return -1;
+
+ while (tok != NULL) {
+ trim_space(tok);
+ if (op_type == RTE_BBDEV_OP_TURBO_DEC) {
+ if (op_decoder_flag_strtoul(tok, &op_flag_value) == -1)
+ return -1;
+ } else if (op_type == RTE_BBDEV_OP_TURBO_ENC) {
+ if (op_encoder_flag_strtoul(tok, &op_flag_value) == -1)
+ return -1;
+ } else {
+ return -1;
+ }
+
+ *op_flags = *op_flags | op_flag_value;
+
+ tok = strtok(NULL, VALUE_DELIMITER);
+ if (tok == NULL)
+ break;
+ }
+
+ return 0;
+}
+
+/* convert turbo encoder/decoder op_type from string to enum */
+static int
+op_turbo_type_strtol(char *token, enum rte_bbdev_op_type *op_type)
+{
+ trim_space(token);
+ if (!strcmp(token, "RTE_BBDEV_OP_TURBO_DEC"))
+ *op_type = RTE_BBDEV_OP_TURBO_DEC;
+ else if (!strcmp(token, "RTE_BBDEV_OP_TURBO_ENC"))
+ *op_type = RTE_BBDEV_OP_TURBO_ENC;
+ else if (!strcmp(token, "RTE_BBDEV_OP_NONE"))
+ *op_type = RTE_BBDEV_OP_NONE;
+ else {
+		printf("Invalid turbo op_type: '%s'\n", token);
+ return -1;
+ }
+
+ return 0;
+}
+
+/* tokenize expected status values separated by a comma */
+static int
+parse_expected_status(char *tokens, int *status, enum rte_bbdev_op_type op_type)
+{
+ char *tok = NULL;
+ bool status_ok = false;
+
+ tok = strtok(tokens, VALUE_DELIMITER);
+ if (tok == NULL)
+ return -1;
+
+ while (tok != NULL) {
+ trim_space(tok);
+ if (!strcmp(tok, "OK"))
+ status_ok = true;
+ else if (!strcmp(tok, "DMA"))
+ *status = *status | (1 << RTE_BBDEV_DRV_ERROR);
+ else if (!strcmp(tok, "FCW"))
+ *status = *status | (1 << RTE_BBDEV_DATA_ERROR);
+ else if (!strcmp(tok, "CRC")) {
+ if (op_type == RTE_BBDEV_OP_TURBO_DEC)
+ *status = *status | (1 << RTE_BBDEV_CRC_ERROR);
+ else {
+ printf(
+ "CRC is only a valid value for turbo decoder\n");
+ return -1;
+ }
+ } else {
+			printf("Invalid status: '%s'\n", tok);
+ return -1;
+ }
+
+ tok = strtok(NULL, VALUE_DELIMITER);
+ if (tok == NULL)
+ break;
+ }
+
+ if (status_ok && *status != 0) {
+ printf(
+		"Invalid status values. Cannot be OK and ERROR at the same time.\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* parse ops data entry (there can be more than 1 input entry, each will be
+ * contained in a separate op_data_buf struct)
+ */
+static int
+parse_data_entry(const char *key_token, char *token,
+ struct test_bbdev_vector *vector, enum op_data_type type,
+ const char *prefix)
+{
+ int ret;
+ uint32_t data_length = 0;
+ uint32_t *data = NULL;
+ unsigned int id;
+ struct op_data_buf *op_data;
+ unsigned int *nb_ops;
+
+	if (type >= DATA_NUM_TYPES) {
+ printf("Unknown op type: %d!\n", type);
+ return -1;
+ }
+
+ op_data = vector->entries[type].segments;
+ nb_ops = &vector->entries[type].nb_segments;
+
+ if (*nb_ops >= RTE_BBDEV_MAX_CODE_BLOCKS) {
+ printf("Too many segments (code blocks defined): %u, max %d!\n",
+ *nb_ops, RTE_BBDEV_MAX_CODE_BLOCKS);
+ return -1;
+ }
+
+ if (sscanf(key_token + strlen(prefix), "%u", &id) != 1) {
+ printf("Missing ID of %s\n", prefix);
+ return -1;
+ }
+ if (id != *nb_ops) {
+ printf(
+ "Please order data entries sequentially, i.e. %s0, %s1, ...\n",
+ prefix, prefix);
+ return -1;
+ }
+
+ /* Clear new op data struct */
+ memset(op_data + *nb_ops, 0, sizeof(struct op_data_buf));
+
+ ret = parse_values(token, &data, &data_length);
+ if (!ret) {
+ op_data[*nb_ops].addr = data;
+ op_data[*nb_ops].length = data_length;
+ ++(*nb_ops);
+ }
+
+ return ret;
+}
+
+/* parses turbo decoder parameters and assigns them to the test vector */
+static int
+parse_decoder_params(const char *key_token, char *token,
+ struct test_bbdev_vector *vector)
+{
+ int ret = 0, status = 0;
+ uint32_t op_flags = 0;
+ char *err = NULL;
+
+ struct rte_bbdev_op_turbo_dec *turbo_dec = &vector->turbo_dec;
+
+ /* compare keys */
+ if (starts_with(key_token, op_data_prefixes[DATA_INPUT]))
+ ret = parse_data_entry(key_token, token, vector,
+ DATA_INPUT, op_data_prefixes[DATA_INPUT]);
+
+ else if (starts_with(key_token, op_data_prefixes[DATA_SOFT_OUTPUT]))
+ ret = parse_data_entry(key_token, token, vector,
+ DATA_SOFT_OUTPUT,
+ op_data_prefixes[DATA_SOFT_OUTPUT]);
+
+ else if (starts_with(key_token, op_data_prefixes[DATA_HARD_OUTPUT]))
+ ret = parse_data_entry(key_token, token, vector,
+ DATA_HARD_OUTPUT,
+ op_data_prefixes[DATA_HARD_OUTPUT]);
+	else if (!strcmp(key_token, "e")) {
+		vector->mask |= TEST_BBDEV_VF_E;
+		turbo_dec->cb_params.e = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "ea")) {
+ vector->mask |= TEST_BBDEV_VF_EA;
+ turbo_dec->tb_params.ea = (uint32_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "eb")) {
+ vector->mask |= TEST_BBDEV_VF_EB;
+ turbo_dec->tb_params.eb = (uint32_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "k")) {
+ vector->mask |= TEST_BBDEV_VF_K;
+ turbo_dec->cb_params.k = (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "k_pos")) {
+ vector->mask |= TEST_BBDEV_VF_K_POS;
+ turbo_dec->tb_params.k_pos = (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "k_neg")) {
+ vector->mask |= TEST_BBDEV_VF_K_NEG;
+ turbo_dec->tb_params.k_neg = (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "c")) {
+ vector->mask |= TEST_BBDEV_VF_C;
+ turbo_dec->tb_params.c = (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "c_neg")) {
+ vector->mask |= TEST_BBDEV_VF_C_NEG;
+ turbo_dec->tb_params.c_neg = (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "cab")) {
+ vector->mask |= TEST_BBDEV_VF_CAB;
+ turbo_dec->tb_params.cab = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "rv_index")) {
+ vector->mask |= TEST_BBDEV_VF_RV_INDEX;
+ turbo_dec->rv_index = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "iter_max")) {
+ vector->mask |= TEST_BBDEV_VF_ITER_MAX;
+ turbo_dec->iter_max = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "iter_min")) {
+ vector->mask |= TEST_BBDEV_VF_ITER_MIN;
+ turbo_dec->iter_min = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "expected_iter_count")) {
+ vector->mask |= TEST_BBDEV_VF_EXPECTED_ITER_COUNT;
+ turbo_dec->iter_count = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "ext_scale")) {
+ vector->mask |= TEST_BBDEV_VF_EXT_SCALE;
+ turbo_dec->ext_scale = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "num_maps")) {
+ vector->mask |= TEST_BBDEV_VF_NUM_MAPS;
+ turbo_dec->num_maps = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "code_block_mode")) {
+ vector->mask |= TEST_BBDEV_VF_CODE_BLOCK_MODE;
+ turbo_dec->code_block_mode = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "op_flags")) {
+ vector->mask |= TEST_BBDEV_VF_OP_FLAGS;
+ ret = parse_turbo_flags(token, &op_flags,
+ vector->op_type);
+ if (!ret)
+ turbo_dec->op_flags = op_flags;
+ } else if (!strcmp(key_token, "expected_status")) {
+ vector->mask |= TEST_BBDEV_VF_EXPECTED_STATUS;
+ ret = parse_expected_status(token, &status, vector->op_type);
+ if (!ret)
+ vector->expected_status = status;
+ } else {
+		printf("Invalid decoder key: '%s'\n", key_token);
+ return -1;
+ }
+
+ if (ret != 0) {
+		printf("Failed to convert '%s\t%s'\n", key_token, token);
+ return -1;
+ }
+
+ return 0;
+}
+
+/* parses turbo encoder parameters and assigns them to the test vector */
+static int
+parse_encoder_params(const char *key_token, char *token,
+ struct test_bbdev_vector *vector)
+{
+ int ret = 0, status = 0;
+ uint32_t op_flags = 0;
+ char *err = NULL;
+
+ struct rte_bbdev_op_turbo_enc *turbo_enc = &vector->turbo_enc;
+
+ if (starts_with(key_token, op_data_prefixes[DATA_INPUT]))
+ ret = parse_data_entry(key_token, token, vector,
+ DATA_INPUT, op_data_prefixes[DATA_INPUT]);
+ else if (starts_with(key_token, "output"))
+ ret = parse_data_entry(key_token, token, vector,
+ DATA_HARD_OUTPUT, "output");
+ else if (!strcmp(key_token, "e")) {
+ vector->mask |= TEST_BBDEV_VF_E;
+ turbo_enc->cb_params.e = (uint32_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "ea")) {
+ vector->mask |= TEST_BBDEV_VF_EA;
+ turbo_enc->tb_params.ea = (uint32_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "eb")) {
+ vector->mask |= TEST_BBDEV_VF_EB;
+ turbo_enc->tb_params.eb = (uint32_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "k")) {
+ vector->mask |= TEST_BBDEV_VF_K;
+ turbo_enc->cb_params.k = (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "k_neg")) {
+ vector->mask |= TEST_BBDEV_VF_K_NEG;
+ turbo_enc->tb_params.k_neg = (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "k_pos")) {
+ vector->mask |= TEST_BBDEV_VF_K_POS;
+ turbo_enc->tb_params.k_pos = (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "c_neg")) {
+ vector->mask |= TEST_BBDEV_VF_C_NEG;
+ turbo_enc->tb_params.c_neg = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "c")) {
+ vector->mask |= TEST_BBDEV_VF_C;
+ turbo_enc->tb_params.c = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "cab")) {
+ vector->mask |= TEST_BBDEV_VF_CAB;
+ turbo_enc->tb_params.cab = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "rv_index")) {
+ vector->mask |= TEST_BBDEV_VF_RV_INDEX;
+ turbo_enc->rv_index = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "ncb")) {
+ vector->mask |= TEST_BBDEV_VF_NCB;
+ turbo_enc->cb_params.ncb = (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "ncb_neg")) {
+ vector->mask |= TEST_BBDEV_VF_NCB_NEG;
+ turbo_enc->tb_params.ncb_neg =
+ (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "ncb_pos")) {
+ vector->mask |= TEST_BBDEV_VF_NCB_POS;
+ turbo_enc->tb_params.ncb_pos =
+ (uint16_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "r")) {
+ vector->mask |= TEST_BBDEV_VF_R;
+ turbo_enc->tb_params.r = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "code_block_mode")) {
+ vector->mask |= TEST_BBDEV_VF_CODE_BLOCK_MODE;
+ turbo_enc->code_block_mode = (uint8_t) strtoul(token, &err, 0);
+ ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+ } else if (!strcmp(key_token, "op_flags")) {
+ vector->mask |= TEST_BBDEV_VF_OP_FLAGS;
+ ret = parse_turbo_flags(token, &op_flags,
+ vector->op_type);
+ if (!ret)
+ turbo_enc->op_flags = op_flags;
+ } else if (!strcmp(key_token, "expected_status")) {
+ vector->mask |= TEST_BBDEV_VF_EXPECTED_STATUS;
+ ret = parse_expected_status(token, &status, vector->op_type);
+ if (!ret)
+ vector->expected_status = status;
+ } else {
+		printf("Invalid encoder key: '%s'\n", key_token);
+ return -1;
+ }
+
+ if (ret != 0) {
+		printf("Failed to convert '%s\t%s'\n", key_token, token);
+ return -1;
+ }
+
+ return 0;
+}
+
+/* checks the type of key and assigns data */
+static int
+parse_entry(char *entry, struct test_bbdev_vector *vector)
+{
+ int ret = 0;
+ char *token, *key_token;
+ enum rte_bbdev_op_type op_type = RTE_BBDEV_OP_NONE;
+
+ if (entry == NULL) {
+ printf("Expected entry value\n");
+ return -1;
+ }
+
+ /* get key */
+ token = strtok(entry, ENTRY_DELIMITER);
+ key_token = token;
+ /* get values for key */
+ token = strtok(NULL, ENTRY_DELIMITER);
+
+ if (key_token == NULL || token == NULL) {
+		printf("Expected 'key = values' but was '%.40s'...\n", entry);
+ return -1;
+ }
+ trim_space(key_token);
+
+ /* first key_token has to specify type of operation */
+ if (vector->op_type == RTE_BBDEV_OP_NONE) {
+ if (!strcmp(key_token, "op_type")) {
+ ret = op_turbo_type_strtol(token, &op_type);
+ if (!ret)
+ vector->op_type = op_type;
+ return (!ret) ? 0 : -1;
+ }
+ printf("First key_token (%s) does not specify op_type\n",
+ key_token);
+ return -1;
+ }
+
+ /* compare keys */
+ if (vector->op_type == RTE_BBDEV_OP_TURBO_DEC) {
+ if (parse_decoder_params(key_token, token, vector) == -1)
+ return -1;
+ } else if (vector->op_type == RTE_BBDEV_OP_TURBO_ENC) {
+ if (parse_encoder_params(key_token, token, vector) == -1)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+check_decoder_segments(struct test_bbdev_vector *vector)
+{
+ unsigned char i;
+ struct rte_bbdev_op_turbo_dec *turbo_dec = &vector->turbo_dec;
+
+ if (vector->entries[DATA_INPUT].nb_segments == 0)
+ return -1;
+
+ for (i = 0; i < vector->entries[DATA_INPUT].nb_segments; i++)
+ if (vector->entries[DATA_INPUT].segments[i].addr == NULL)
+ return -1;
+
+ if (vector->entries[DATA_HARD_OUTPUT].nb_segments == 0)
+ return -1;
+
+ for (i = 0; i < vector->entries[DATA_HARD_OUTPUT].nb_segments;
+ i++)
+ if (vector->entries[DATA_HARD_OUTPUT].segments[i].addr == NULL)
+ return -1;
+
+ if ((turbo_dec->op_flags & RTE_BBDEV_TURBO_SOFT_OUTPUT) &&
+ (vector->entries[DATA_SOFT_OUTPUT].nb_segments == 0))
+ return -1;
+
+ for (i = 0; i < vector->entries[DATA_SOFT_OUTPUT].nb_segments;
+ i++)
+ if (vector->entries[DATA_SOFT_OUTPUT].segments[i].addr == NULL)
+ return -1;
+
+ return 0;
+}
+
+static int
+check_decoder_llr_spec(struct test_bbdev_vector *vector)
+{
+ struct rte_bbdev_op_turbo_dec *turbo_dec = &vector->turbo_dec;
+
+ /* Check input LLR sign formalism specification */
+ if ((turbo_dec->op_flags & RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN) &&
+ (turbo_dec->op_flags &
+ RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN)) {
+ printf(
+ "Both positive and negative LLR input flags were set!\n");
+ return -1;
+ }
+ if (!(turbo_dec->op_flags & RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN) &&
+ !(turbo_dec->op_flags &
+ RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN)) {
+ printf(
+ "WARNING: input LLR sign formalism was not specified and will be set to negative LLR for '1' bit\n");
+ turbo_dec->op_flags |= RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN;
+ }
+
+ if (!(turbo_dec->op_flags & RTE_BBDEV_TURBO_SOFT_OUTPUT))
+ return 0;
+
+ /* Check output LLR sign formalism specification */
+ if ((turbo_dec->op_flags & RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT) &&
+ (turbo_dec->op_flags &
+ RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT)) {
+ printf(
+ "Both positive and negative LLR output flags were set!\n");
+ return -1;
+ }
+ if (!(turbo_dec->op_flags & RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT) &&
+ !(turbo_dec->op_flags &
+ RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT)) {
+ printf(
+ "WARNING: soft output LLR sign formalism was not specified and will be set to negative LLR for '1' bit\n");
+ turbo_dec->op_flags |=
+ RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT;
+ }
+
+ return 0;
+}
+
+/* checks decoder parameters */
+static int
+check_decoder(struct test_bbdev_vector *vector)
+{
+ struct rte_bbdev_op_turbo_dec *turbo_dec = &vector->turbo_dec;
+ const int mask = vector->mask;
+
+ if (check_decoder_segments(vector) < 0)
+ return -1;
+
+ if (check_decoder_llr_spec(vector) < 0)
+ return -1;
+
+ /* Check which params were set */
+ if (!(mask & TEST_BBDEV_VF_CODE_BLOCK_MODE)) {
+ printf(
+ "WARNING: code_block_mode was not specified in vector file and will be set to 1 (0 - TB Mode, 1 - CB mode)\n");
+ turbo_dec->code_block_mode = 1;
+ }
+ if (turbo_dec->code_block_mode == 0) {
+ if (!(mask & TEST_BBDEV_VF_EA))
+ printf(
+ "WARNING: ea was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_EB))
+ printf(
+ "WARNING: eb was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_K_NEG))
+ printf(
+ "WARNING: k_neg was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_K_POS))
+ printf(
+ "WARNING: k_pos was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_C_NEG))
+ printf(
+ "WARNING: c_neg was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_C)) {
+ printf(
+ "WARNING: c was not specified in vector file and will be set to 1\n");
+ turbo_dec->tb_params.c = 1;
+ }
+ if (!(mask & TEST_BBDEV_VF_CAB))
+ printf(
+ "WARNING: cab was not specified in vector file and will be set to 0\n");
+ } else {
+ if (!(mask & TEST_BBDEV_VF_E))
+ printf(
+ "WARNING: e was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_K))
+ printf(
+ "WARNING: k was not specified in vector file and will be set to 0\n");
+ }
+ if (!(mask & TEST_BBDEV_VF_RV_INDEX))
+ printf(
+ "WARNING: rv_index was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_ITER_MIN))
+ printf(
+ "WARNING: iter_min was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_ITER_MAX))
+ printf(
+ "WARNING: iter_max was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_EXPECTED_ITER_COUNT))
+ printf(
+ "WARNING: expected_iter_count was not specified in vector file and iter_count will not be validated\n");
+ if (!(mask & TEST_BBDEV_VF_EXT_SCALE))
+ printf(
+ "WARNING: ext_scale was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_OP_FLAGS)) {
+ printf(
+ "WARNING: op_flags was not specified in vector file and capabilities will not be validated\n");
+ turbo_dec->num_maps = 0;
+ } else if (!(turbo_dec->op_flags & RTE_BBDEV_TURBO_MAP_DEC) &&
+ mask & TEST_BBDEV_VF_NUM_MAPS) {
+ printf(
+ "WARNING: RTE_BBDEV_TURBO_MAP_DEC was not set in vector file and num_maps will be set to 0\n");
+ turbo_dec->num_maps = 0;
+ }
+ if (!(mask & TEST_BBDEV_VF_EXPECTED_STATUS))
+ printf(
+ "WARNING: expected_status was not specified in vector file and will be set to 0\n");
+ return 0;
+}
+
+/* checks encoder parameters */
+static int
+check_encoder(struct test_bbdev_vector *vector)
+{
+ unsigned char i;
+ const int mask = vector->mask;
+
+ if (vector->entries[DATA_INPUT].nb_segments == 0)
+ return -1;
+
+ for (i = 0; i < vector->entries[DATA_INPUT].nb_segments; i++)
+ if (vector->entries[DATA_INPUT].segments[i].addr == NULL)
+ return -1;
+
+ if (vector->entries[DATA_HARD_OUTPUT].nb_segments == 0)
+ return -1;
+
+ for (i = 0; i < vector->entries[DATA_HARD_OUTPUT].nb_segments; i++)
+ if (vector->entries[DATA_HARD_OUTPUT].segments[i].addr == NULL)
+ return -1;
+
+ if (!(mask & TEST_BBDEV_VF_CODE_BLOCK_MODE)) {
+ printf(
+ "WARNING: code_block_mode was not specified in vector file and will be set to 1\n");
+ vector->turbo_enc.code_block_mode = 1;
+ }
+ if (vector->turbo_enc.code_block_mode == 0) {
+ if (!(mask & TEST_BBDEV_VF_EA) && (vector->turbo_enc.op_flags &
+ RTE_BBDEV_TURBO_RATE_MATCH))
+ printf(
+ "WARNING: ea was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_EB) && (vector->turbo_enc.op_flags &
+ RTE_BBDEV_TURBO_RATE_MATCH))
+ printf(
+ "WARNING: eb was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_K_NEG))
+ printf(
+ "WARNING: k_neg was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_K_POS))
+ printf(
+ "WARNING: k_pos was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_C_NEG))
+ printf(
+ "WARNING: c_neg was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_C)) {
+ printf(
+ "WARNING: c was not specified in vector file and will be set to 1\n");
+ vector->turbo_enc.tb_params.c = 1;
+ }
+ if (!(mask & TEST_BBDEV_VF_CAB) && (vector->turbo_enc.op_flags &
+ RTE_BBDEV_TURBO_RATE_MATCH))
+ printf(
+ "WARNING: cab was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_NCB_NEG))
+ printf(
+ "WARNING: ncb_neg was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_NCB_POS))
+ printf(
+ "WARNING: ncb_pos was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_R))
+ printf(
+ "WARNING: r was not specified in vector file and will be set to 0\n");
+ } else {
+ if (!(mask & TEST_BBDEV_VF_E) && (vector->turbo_enc.op_flags &
+ RTE_BBDEV_TURBO_RATE_MATCH))
+ printf(
+ "WARNING: e was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_K))
+ printf(
+ "WARNING: k was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_NCB))
+ printf(
+ "WARNING: ncb was not specified in vector file and will be set to 0\n");
+ }
+ if (!(mask & TEST_BBDEV_VF_RV_INDEX))
+ printf(
+ "WARNING: rv_index was not specified in vector file and will be set to 0\n");
+ if (!(mask & TEST_BBDEV_VF_OP_FLAGS))
+ printf(
+ "WARNING: op_flags was not specified in vector file and capabilities will not be validated\n");
+ if (!(mask & TEST_BBDEV_VF_EXPECTED_STATUS))
+ printf(
+ "WARNING: expected_status was not specified in vector file and will be set to 0\n");
+
+ return 0;
+}
+
+static int
+bbdev_check_vector(struct test_bbdev_vector *vector)
+{
+ if (vector->op_type == RTE_BBDEV_OP_TURBO_DEC) {
+ if (check_decoder(vector) == -1)
+ return -1;
+ } else if (vector->op_type == RTE_BBDEV_OP_TURBO_ENC) {
+ if (check_encoder(vector) == -1)
+ return -1;
+ } else if (vector->op_type != RTE_BBDEV_OP_NONE) {
+ printf("Vector was not filled\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+int
+test_bbdev_vector_read(const char *filename,
+ struct test_bbdev_vector *vector)
+{
+ int ret = 0;
+ size_t len = 0;
+
+ FILE *fp = NULL;
+ char *line = NULL;
+ char *entry = NULL;
+
+ fp = fopen(filename, "r");
+ if (fp == NULL) {
+		printf("Failed to open file %s\n", filename);
+ return -1;
+ }
+
+ while (getline(&line, &len, fp) != -1) {
+
+ /* ignore comments and new lines */
+ if (line[0] == '#' || line[0] == '/' || line[0] == '\n'
+ || line[0] == '\r')
+ continue;
+
+ trim_space(line);
+
+ /* buffer for multiline */
+ entry = realloc(entry, strlen(line) + 1);
+ if (entry == NULL) {
+			printf("Failed to realloc %zu bytes\n", strlen(line) + 1);
+ ret = -ENOMEM;
+ goto exit;
+ }
+
+ memset(entry, 0, strlen(line) + 1);
+ strncpy(entry, line, strlen(line));
+
+ /* check if entry ends with , or = */
+ if (entry[strlen(entry) - 1] == ','
+ || entry[strlen(entry) - 1] == '=') {
+ while (getline(&line, &len, fp) != -1) {
+ trim_space(line);
+
+				/* extend entry by the length of the new line */
+ char *entry_extended = realloc(entry,
+ strlen(line) +
+ strlen(entry) + 1);
+
+ if (entry_extended == NULL) {
+					printf("Failed to allocate %zu bytes\n",
+ strlen(line) +
+ strlen(entry) + 1);
+ ret = -ENOMEM;
+ goto exit;
+ }
+
+ entry = entry_extended;
+ strncat(entry, line, strlen(line));
+
+ if (entry[strlen(entry) - 1] != ',')
+ break;
+ }
+ }
+ ret = parse_entry(entry, vector);
+ if (ret != 0) {
+ printf("An error occurred while parsing!\n");
+ goto exit;
+ }
+ }
+ ret = bbdev_check_vector(vector);
+ if (ret != 0)
+ printf("An error occurred while checking!\n");
+
+exit:
+ fclose(fp);
+ free(line);
+ free(entry);
+
+ return ret;
+}
diff --git a/app/test-bbdev/test_bbdev_vector.h b/app/test-bbdev/test_bbdev_vector.h
new file mode 100644
index 0000000..476aae1
--- /dev/null
+++ b/app/test-bbdev/test_bbdev_vector.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef TEST_BBDEV_VECTOR_H_
+#define TEST_BBDEV_VECTOR_H_
+
+#include <rte_bbdev_op.h>
+
+/* Flags which are set when a specific parameter is defined in the vector file */
+enum {
+ TEST_BBDEV_VF_E = (1ULL << 0),
+ TEST_BBDEV_VF_EA = (1ULL << 1),
+ TEST_BBDEV_VF_EB = (1ULL << 2),
+ TEST_BBDEV_VF_K = (1ULL << 3),
+ TEST_BBDEV_VF_K_NEG = (1ULL << 4),
+ TEST_BBDEV_VF_K_POS = (1ULL << 5),
+ TEST_BBDEV_VF_C_NEG = (1ULL << 6),
+ TEST_BBDEV_VF_C = (1ULL << 7),
+ TEST_BBDEV_VF_CAB = (1ULL << 8),
+ TEST_BBDEV_VF_RV_INDEX = (1ULL << 9),
+ TEST_BBDEV_VF_ITER_MAX = (1ULL << 10),
+ TEST_BBDEV_VF_ITER_MIN = (1ULL << 11),
+ TEST_BBDEV_VF_EXPECTED_ITER_COUNT = (1ULL << 12),
+ TEST_BBDEV_VF_EXT_SCALE = (1ULL << 13),
+ TEST_BBDEV_VF_NUM_MAPS = (1ULL << 14),
+ TEST_BBDEV_VF_NCB = (1ULL << 15),
+ TEST_BBDEV_VF_NCB_NEG = (1ULL << 16),
+ TEST_BBDEV_VF_NCB_POS = (1ULL << 17),
+ TEST_BBDEV_VF_R = (1ULL << 18),
+ TEST_BBDEV_VF_CODE_BLOCK_MODE = (1ULL << 19),
+ TEST_BBDEV_VF_OP_FLAGS = (1ULL << 20),
+ TEST_BBDEV_VF_EXPECTED_STATUS = (1ULL << 21),
+};
+
+enum op_data_type {
+ DATA_INPUT = 0,
+ DATA_SOFT_OUTPUT,
+ DATA_HARD_OUTPUT,
+ DATA_NUM_TYPES,
+};
+
+struct op_data_buf {
+ uint32_t *addr;
+ uint32_t length;
+};
+
+struct op_data_entries {
+ struct op_data_buf segments[RTE_BBDEV_MAX_CODE_BLOCKS];
+ unsigned int nb_segments;
+};
+
+struct test_bbdev_vector {
+ enum rte_bbdev_op_type op_type;
+ int expected_status;
+ int mask;
+ union {
+ struct rte_bbdev_op_turbo_dec turbo_dec;
+ struct rte_bbdev_op_turbo_enc turbo_enc;
+ };
+ /* Additional storage for op data entries */
+ struct op_data_entries entries[DATA_NUM_TYPES];
+};
+
+/* fills test vector parameters based on test file */
+int
+test_bbdev_vector_read(const char *filename,
+ struct test_bbdev_vector *vector);
+
+
+#endif /* TEST_BBDEV_VECTOR_H_ */
diff --git a/app/test-bbdev/test_vectors/bbdev_vector_null.data b/app/test-bbdev/test_vectors/bbdev_vector_null.data
new file mode 100644
index 0000000..c9a9abe
--- /dev/null
+++ b/app/test-bbdev/test_vectors/bbdev_vector_null.data
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+op_type =
+RTE_BBDEV_OP_NONE
\ No newline at end of file
diff --git a/app/test-bbdev/test_vectors/bbdev_vector_td_default.data b/app/test-bbdev/test_vectors/bbdev_vector_td_default.data
new file mode 100644
index 0000000..b5c3027
--- /dev/null
+++ b/app/test-bbdev/test_vectors/bbdev_vector_td_default.data
@@ -0,0 +1,54 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+op_type =
+RTE_BBDEV_OP_TURBO_DEC
+
+input0 =
+0x7f007f00, 0x7f817f00, 0x767f8100, 0x817f8100, 0x81008100, 0x7f818100, 0x81817f00, 0x7f818100,
+0x86007f00, 0x7f818100, 0x887f8100, 0x81815200, 0x81008100, 0x817f7f00, 0x7f7f8100, 0x9e817f00,
+0x7f7f0000, 0xb97f0000, 0xa7810000, 0x7f7f4a7f, 0x7f810000, 0x7f7f7f7f, 0x81720000, 0x40658181,
+0x84810000, 0x817f0000, 0x81810000, 0x7f818181, 0x7f810000, 0x81815a81, 0x817f0000, 0x7a867f7b,
+0x817f0000, 0x6b7f0000, 0x7f810000, 0x81818181, 0x817f0000, 0x7f7f817f, 0x7f7f0000, 0xab7f4f7f,
+0x817f0000, 0x817f6c00, 0x81810000, 0x817f8181, 0x7f810000, 0x81816981, 0x7f7f0000, 0x007f8181
+
+hard_output0 =
+0xa7d6732e, 0x61
+
+soft_output0 =
+0x7f7f7f7f, 0x81817f7f, 0x7f817f81, 0x817f7f81, 0x81817f81, 0x81817f81, 0x8181817f, 0x7f81817f,
+0x7f81817f, 0x7f817f7f, 0x81817f7f, 0x817f8181, 0x81818181, 0x817f7f7f, 0x7f818181, 0x817f817f,
+0x81818181, 0x81817f7f, 0x7f817f81, 0x7f81817f, 0x817f7f7f, 0x817f7f7f, 0x7f81817f, 0x817f817f,
+0x81817f7f, 0x81817f7f, 0x81817f7f, 0x7f817f7f, 0x817f7f81, 0x7f7f8181, 0x81817f81, 0x817f7f7f,
+0x7f7f8181
+
+e =
+17280
+
+k =
+40
+
+rv_index =
+1
+
+iter_max =
+8
+
+iter_min =
+4
+
+expected_iter_count =
+8
+
+ext_scale =
+15
+
+num_maps =
+0
+
+op_flags =
+RTE_BBDEV_TURBO_SOFT_OUTPUT, RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE, RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN,
+RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT
+
+expected_status =
+OK
diff --git a/app/test-bbdev/test_vectors/bbdev_vector_te_default.data b/app/test-bbdev/test_vectors/bbdev_vector_te_default.data
new file mode 100644
index 0000000..883a76c
--- /dev/null
+++ b/app/test-bbdev/test_vectors/bbdev_vector_te_default.data
@@ -0,0 +1,33 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+op_type =
+RTE_BBDEV_OP_TURBO_ENC
+
+input0 =
+0x11d2bcac, 0x4d
+
+output0 =
+0xd2399179, 0x640eb999, 0x2cbaf577, 0xaf224ae2, 0x9d139927, 0xe6909b29, 0xa25b7f47, 0x2aa224ce,
+0x79f2
+
+e =
+272
+
+k =
+40
+
+ncb =
+192
+
+rv_index =
+0
+
+code_block_mode =
+1
+
+op_flags =
+RTE_BBDEV_TURBO_RATE_MATCH
+
+expected_status =
+OK
diff --git a/config/common_base b/config/common_base
index 17d96a4..2044626 100644
--- a/config/common_base
+++ b/config/common_base
@@ -821,6 +821,11 @@ CONFIG_RTE_APP_TEST=y
CONFIG_RTE_APP_TEST_RESOURCE_TAR=n
#
+# Compile the bbdev test application
+#
+CONFIG_RTE_TEST_BBDEV=y
+
+#
# Compile the PMD test application
#
CONFIG_RTE_TEST_PMD=y
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index c9133ec..a390fe7 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -41,3 +41,4 @@ DPDK Tools User Guides
devbind
cryptoperf
testeventdev
+ testbbdev
diff --git a/doc/guides/tools/testbbdev.rst b/doc/guides/tools/testbbdev.rst
new file mode 100644
index 0000000..c7aac49
--- /dev/null
+++ b/doc/guides/tools/testbbdev.rst
@@ -0,0 +1,538 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2017 Intel Corporation
+
+dpdk-test-bbdev Application
+===========================
+
+The ``dpdk-test-bbdev`` tool is a Data Plane Development Kit (DPDK) utility that
+allows measuring performance parameters of PMDs available in the bbdev framework.
+The tests available for execution are: latency, throughput, validation and
+sanity tests. Test execution can be customized using various parameters
+passed to the Python test script.
+
+Compiling the Application
+-------------------------
+
+**Step 1: PMD setting**
+
+The ``dpdk-test-bbdev`` tool depends on baseband device driver PMDs, which
+are disabled by default in the build configuration file ``common_base``.
+The bbdev driver PMDs which should be tested can be enabled by setting
+
+ ``CONFIG_RTE_LIBRTE_PMD_<name>=y``
+
+Example setting for the *turbo_sw* PMD:
+
+ ``CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=y``
+
+**Step 2: Build the application**
+
+Execute the ``dpdk-setup.sh`` script to build the DPDK library together with the
+``dpdk-test-bbdev`` application.
+
+Initially, the user must select a DPDK target to choose the correct target type
+and compiler options to use when building the libraries.
+The user must have all libraries, modules, updates and compilers installed
+in the system prior to this, as described in the earlier chapters in this
+Getting Started Guide.
+
+Running the Application
+-----------------------
+
+The tool application has a number of command line options:
+
+.. code-block:: console
+
+ python test-bbdev.py [-h] [-p TESTAPP_PATH] [-e EAL_PARAMS] [-t TIMEOUT]
+ [-c TEST_CASE [TEST_CASE ...]]
+ [-v TEST_VECTOR [TEST_VECTOR...]] [-n NUM_OPS]
+ [-b BURST_SIZE [BURST_SIZE ...]] [-l NUM_LCORES]
+
+Command-line Options
+~~~~~~~~~~~~~~~~~~~~
+
+The following are the command-line options:
+
+``-h, --help``
+ Shows help message and exit.
+
+``-p TESTAPP_PATH, --testapp_path TESTAPP_PATH``
+ Indicates the path to the bbdev test app. If not specified, the path is set
+ based on the *$RTE_SDK* environment variable concatenated with "*/build/app/testbbdev*".
+
+``-e EAL_PARAMS, --eal_params EAL_PARAMS``
+ Specifies EAL arguments which are passed to the test app. For more details,
+ refer to DPDK documentation at http://dpdk.org/doc.
+
+``-t TIMEOUT, --timeout TIMEOUT``
+ Specifies the timeout in seconds. If not specified, the timeout is set to 300 seconds.
+
+``-c TEST_CASE [TEST_CASE ...], --test_cases TEST_CASE [TEST_CASE ...]``
+ Defines the test cases to run. If not specified, all available tests are run.
+
+ The following tests can be run:
+ * unittest
+ Small unit tests which check basic functionality of the bbdev library.
+ * latency
+ Test calculates three latency metrics:
+ * offload_latency_tc
+ measures the cost of offloading enqueue and dequeue operations.
+ * offload_latency_empty_q_tc
+ measures the cost of offloading a dequeue operation from an empty queue;
+ checks how long dequeueing takes when there are no operations to dequeue.
+ * operation_latency_tc
+ measures the time difference from the first attempt to enqueue till the
+ first successful dequeue.
+ * validation
+ Test enqueues operations based on the given vector and compares the
+ output after dequeueing.
+ * throughput
+ Test measures the achieved throughput on the available lcores.
+ Results are printed in million operations per second and million bits per second.
+ * interrupt
+ The same test as 'throughput' but uses interrupts instead of PMD polling to perform
+ the dequeue.
+
+ **Example usage:**
+
+ ``./test-bbdev.py -c validation``
+ Runs validation test suite
+
+ ``./test-bbdev.py -c latency throughput``
+ Runs latency and throughput test suites
+
+``-v TEST_VECTOR [TEST_VECTOR ...], --test_vector TEST_VECTOR [TEST_VECTOR ...]``
+ Specifies paths to the test vector files. If not specified, the path is set
+ based on the *$RTE_SDK* environment variable concatenated with
+ "*/app/test-bbdev/test_vectors/bbdev_vector_null.data*", which is the default
+ data file.
+
+ **Example usage:**
+
+ ``./test-bbdev.py -v app/test-bbdev/test_vectors/bbdev_vector_td_test1.data``
+ Fills vector based on bbdev_vector_td_test1.data file and runs all tests
+
+ ``./test-bbdev.py -v bbdev_vector_td_test1.data bbdev_vector_te_test2.data``
+ The bbdev test app is executed twice. The first time the vector is filled
+ based on the *bbdev_vector_td_test1.data* file and the second time based on
+ the *bbdev_vector_te_test2.data* file. All tests are run for both executions.
+
+``-n NUM_OPS, --num_ops NUM_OPS``
+ Specifies the number of operations to process on the device. If not specified,
+ num_ops is set to 32 operations.
+
+``-l NUM_LCORES, --num_lcores NUM_LCORES``
+ Specifies the number of lcores to run. If not specified, num_lcores is set
+ according to the value from the RTE configuration (EAL coremask).
+
+``-b BURST_SIZE [BURST_SIZE ...], --burst-size BURST_SIZE [BURST_SIZE ...]``
+ Specifies the operations enqueue/dequeue burst size. If not specified,
+ burst_size is set to 32. Maximum is 512.
+
+
+Parameter globbing
+~~~~~~~~~~~~~~~~~~
+
+Thanks to the globbing functionality in Python, the test-bbdev.py script allows
+running tests with different sets of vector files without listing all of them explicitly.
+
+**Example usage:**
+
+.. code-block:: console
+
+ ./test-bbdev.py -v app/test-bbdev/test_vectors/bbdev_vector_*.data
+
+It runs all tests with the following vectors:
+
+- ``bbdev_vector_null.data``
+
+- ``bbdev_vector_td_default.data``
+
+- ``bbdev_vector_te_default.data``
+
+
+.. code-block:: console
+
+ ./test-bbdev.py -v app/test-bbdev/test_vectors/bbdev_vector_t?_default.data
+
+It runs all tests with "default" vectors:
+
+- ``bbdev_vector_te_default.data``
+
+- ``bbdev_vector_td_default.data``
+
+
+Running Tests
+-------------
+
+Shortened tree of the DPDK source directory with dpdk compiled for the
+x86_64-native-linuxapp-icc target:
+
+::
+
+ |-- app
+ |-- test-bbdev
+ |-- test_vectors
+ |-- bbdev_vector_null.data
+ |-- bbdev_vector_td_default.data
+ |-- bbdev_vector_te_default.data
+
+ |-- x86_64-native-linuxapp-icc
+ |-- app
+ |-- testbbdev
+
+All bbdev devices
+~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ ./test-bbdev.py -p ../../x86_64-native-linuxapp-icc/app/testbbdev
+ -v ./test_vectors/bbdev_vector_td_default.data
+
+It runs all available tests using the test vector filled based on the
+*bbdev_vector_td_default.data* file.
+By default the number of operations to process on the device is set to 32, the
+timeout is set to 300s and the operations enqueue/dequeue burst size is set to 32.
+Moreover, a *bbdev_null* device will be created.
+
+bbdev turbo_sw device
+~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ ./test-bbdev.py -p ../../x86_64-native-linuxapp-icc/app/testbbdev
+ -e="--vdev=turbo_sw" -t 120 -c validation
+ -v ./test_vectors/bbdev_vector_t?_default.data -n 64 -b 8 32
+
+It runs the **validation** test for each vector file that matches the given pattern.
+The number of operations to process on the device is set to 64, the operations
+timeout is set to 120s and the enqueue/dequeue burst size is set to 8 and to 32.
+Moreover, a *turbo_sw* bbdev device will be created.
+
+
+bbdev null device
+~~~~~~~~~~~~~~~~~
+
+Executing the bbdev null device with *bbdev_vector_null.data* helps in measuring the
+overhead introduced by the bbdev framework.
+
+.. code-block:: console
+
+ ./test-bbdev.py -e="--vdev=bbdev_null0"
+ -v ./test_vectors/bbdev_vector_null.data
+
+**Note:**
+
+bbdev_null device does not have to be defined explicitly as it is created by default.
+
+
+
+Test Vector files
+=================
+
+Test vector files contain the data which is used to set turbo decoder/encoder
+parameters and buffers for validation purposes. New test vector files should be
+stored in ``app/test-bbdev/test_vectors/`` directory. Detailed description of
+the syntax of the test vector files is in the following section.
+
+
+Basic principles for test vector files
+--------------------------------------
+A line starting with ``#`` is treated as a comment and is ignored.
+
+If a variable is a chain of values, the values should be separated by a comma. If
+an assignment is split into several lines, each line (except the last one) has
+to end with a comma.
+There is no comma after the last value in the last line. A correct assignment
+should look like the following:
+
+.. parsed-literal::
+
+ variable =
+ value, value, value, value,
+ value, value
+
+When the variable is a single value, a correct assignment looks like the
+following:
+
+.. parsed-literal::
+
+ variable =
+ value
+
+The length of a chain variable is calculated by the parser and cannot be
+defined explicitly.
+
+The op_type variable has to be defined as the first variable in the file. It
+specifies what type of operations will be executed. For the decoder op_type has
+to be set to ``RTE_BBDEV_OP_TURBO_DEC`` and for the encoder to ``RTE_BBDEV_OP_TURBO_ENC``.
+
+Full details of the meaning and valid values for the below fields are
+documented in *rte_bbdev_op.h*.
+
+
+Turbo decoder test vectors template
+-----------------------------------
+
+For the turbo decoder, op_type always has to be set to ``RTE_BBDEV_OP_TURBO_DEC``
+
+.. parsed-literal::
+
+ op_type =
+ RTE_BBDEV_OP_TURBO_DEC
+
+Chain of uint32_t values. Note that it is possible to define more than one
+input/output entry, which results in chaining two or more data structures
+for *segmented Transport Blocks*.
+
+.. parsed-literal::
+
+ input0 =
+ 0x00000000, 0x7f817f00, 0x7f7f8100, 0x817f8100, 0x81008100, 0x7f818100, 0x81817f00, 0x7f818100,
+ 0x81007f00, 0x7f818100, 0x817f8100, 0x81817f00, 0x81008100, 0x817f7f00, 0x7f7f8100, 0x81817f00
+
+Chain of uint32_t values
+
+.. parsed-literal::
+
+ input1 =
+ 0x7f7f0000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000
+
+Chain of uint32_t values
+
+.. parsed-literal::
+
+ input2 =
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000
+
+Chain of uint32_t values
+
+.. parsed-literal::
+
+ hard_output0 =
+ 0xa7d6732e
+
+Chain of uint32_t values
+
+.. parsed-literal::
+
+ hard_output1 =
+ 0xa61
+
+Chain of uint32_t values
+
+.. parsed-literal::
+
+ soft_output0 =
+ 0x817f817f, 0x7f817f7f, 0x81818181, 0x817f7f81, 0x7f818181, 0x8181817f, 0x817f817f, 0x8181817f
+
+Chain of uint32_t values
+
+.. parsed-literal::
+
+ soft_output1 =
+ 0x817f7f81, 0x7f7f7f81, 0x7f7f8181
+
+uint32_t value
+
+.. parsed-literal::
+
+ e =
+ 44
+
+uint16_t value
+
+.. parsed-literal::
+
+ k =
+ 40
+
+uint8_t value
+
+.. parsed-literal::
+
+ rv_index =
+ 0
+
+uint8_t value
+
+.. parsed-literal::
+
+ iter_max =
+ 8
+
+uint8_t value
+
+.. parsed-literal::
+
+ iter_min =
+ 4
+
+uint8_t value
+
+.. parsed-literal::
+
+ expected_iter_count =
+ 8
+
+uint8_t value
+
+.. parsed-literal::
+
+ ext_scale =
+ 15
+
+uint8_t value
+
+.. parsed-literal::
+
+ num_maps =
+ 0
+
+Chain of flags for the turbo decoder operation. The following flags can be used:
+
+- ``RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE``
+
+- ``RTE_BBDEV_TURBO_CRC_TYPE_24B``
+
+- ``RTE_BBDEV_TURBO_EQUALIZER``
+
+- ``RTE_BBDEV_TURBO_SOFT_OUT_SATURATE``
+
+- ``RTE_BBDEV_TURBO_HALF_ITERATION_EVEN``
+
+- ``RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH``
+
+- ``RTE_BBDEV_TURBO_SOFT_OUTPUT``
+
+- ``RTE_BBDEV_TURBO_EARLY_TERMINATION``
+
+- ``RTE_BBDEV_TURBO_DEC_INTERRUPTS``
+
+- ``RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN``
+
+- ``RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN``
+
+- ``RTE_BBDEV_TURBO_POS_LLR_1_BIT_SOFT_OUT``
+
+- ``RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT``
+
+- ``RTE_BBDEV_TURBO_MAP_DEC``
+
+Example:
+
+ .. parsed-literal::
+
+ op_flags =
+ RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE, RTE_BBDEV_TURBO_EQUALIZER,
+ RTE_BBDEV_TURBO_SOFT_OUTPUT
+
+Chain of operation statuses that are expected after the operation is performed.
+The following statuses can be used:
+
+- ``DMA``
+
+- ``FCW``
+
+- ``CRC``
+
+- ``OK``
+
+``OK`` means no errors are expected. It cannot be used together with other values.
+
+.. parsed-literal::
+
+ expected_status =
+ FCW, CRC
+
+
+Turbo encoder test vectors template
+-----------------------------------
+
+For the turbo encoder, op_type always has to be set to ``RTE_BBDEV_OP_TURBO_ENC``
+
+.. parsed-literal::
+
+ op_type =
+ RTE_BBDEV_OP_TURBO_ENC
+
+Chain of uint32_t values
+
+.. parsed-literal::
+
+ input0 =
+ 0x11d2bcac, 0x4d
+
+Chain of uint32_t values
+
+.. parsed-literal::
+
+ output0 =
+ 0xd2399179, 0x640eb999, 0x2cbaf577, 0xaf224ae2, 0x9d139927, 0xe6909b29,
+ 0xa25b7f47, 0x2aa224ce, 0x79f2
+
+uint32_t value
+
+.. parsed-literal::
+
+ e =
+ 272
+
+uint16_t value
+
+.. parsed-literal::
+
+ k =
+ 40
+
+uint16_t value
+
+.. parsed-literal::
+
+ ncb =
+ 192
+
+uint8_t value
+
+.. parsed-literal::
+
+ rv_index =
+ 0
+
+Chain of flags for the turbo encoder operation. The following flags can be used:
+
+- ``RTE_BBDEV_TURBO_RV_INDEX_BYPASS``
+
+- ``RTE_BBDEV_TURBO_RATE_MATCH``
+
+- ``RTE_BBDEV_TURBO_CRC_24B_ATTACH``
+
+- ``RTE_BBDEV_TURBO_CRC_24A_ATTACH``
+
+- ``RTE_BBDEV_TURBO_ENC_SCATTER_GATHER``
+
+``RTE_BBDEV_TURBO_ENC_SCATTER_GATHER`` instructs the parser to force the input
+data to be memory-split and formed as a segmented mbuf.
+
+
+.. parsed-literal::
+
+ op_flags =
+ RTE_BBDEV_TURBO_RATE_MATCH
+
+Chain of operation statuses that are expected after the operation is performed.
+The following statuses can be used:
+
+- ``DMA``
+
+- ``FCW``
+
+- ``OK``
+
+``OK`` means no errors are expected. It cannot be used together with other values.
+
+.. parsed-literal::
+
+ expected_status =
+ OK
--
2.7.4
* [dpdk-dev] [PATCH v5 5/5] bbdev: sample app
2018-01-11 19:23 [dpdk-dev] [PATCH v5 1/5] bbdev: introducing wireless base band device abstraction lib Amr Mokhtar
` (2 preceding siblings ...)
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 4/5] bbdev: test applications Amr Mokhtar
@ 2018-01-11 19:23 ` Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 0/5] Introducing Wireless Base Band Device (bbdev) abstraction library Amr Mokhtar
4 siblings, 0 replies; 11+ messages in thread
From: Amr Mokhtar @ 2018-01-11 19:23 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, anatoly.burakov, pablo.de.lara.guarch,
niall.power, chris.macnamara, Amr Mokhtar
- sample application performing a loop-back over ethernet using
a bbdev device
- 'turbo_sw' PMD must be enabled for the app to be functional
- a packet is received on an ethdev port -> enqueued for baseband
encode operation -> dequeued -> enqueued for baseband decode
operation-> dequeued -> compared with original signal -> looped-back
to the ethdev port
Signed-off-by: Amr Mokhtar <amr.mokhtar@intel.com>
---
MAINTAINERS | 2 +
doc/guides/sample_app_ug/bbdev_app.rst | 132 ++++
doc/guides/sample_app_ug/index.rst | 1 +
examples/Makefile | 1 +
examples/bbdev_app/Makefile | 22 +
examples/bbdev_app/main.c | 1144 ++++++++++++++++++++++++++++++++
6 files changed, 1302 insertions(+)
create mode 100644 doc/guides/sample_app_ug/bbdev_app.rst
create mode 100644 examples/bbdev_app/Makefile
create mode 100644 examples/bbdev_app/main.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 81b883e..b0e0c30 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -276,8 +276,10 @@ M: Amr Mokhtar <amr.mokhtar@intel.com>
F: lib/librte_bbdev/
F: drivers/bbdev/
F: app/test-bbdev/
+F: examples/bbdev_app/
F: doc/guides/bbdevs/
F: doc/guides/prog_guide/bbdev.rst
+F: doc/guides/sample_app_ug/bbdev_app.rst
F: doc/guides/tools/testbbdev.rst
Security API - EXPERIMENTAL
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
new file mode 100644
index 0000000..f17125d
--- /dev/null
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -0,0 +1,132 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2017 Intel Corporation
+
+.. bbdev_app:
+
+Loop-back Sample Application using Baseband Device (bbdev)
+==========================================================
+
+The baseband sample application is a simple example of packet processing using
+the Data Plane Development Kit (DPDK) for baseband workloads using the Wireless
+Baseband Device (bbdev) abstraction library.
+
+Overview
+--------
+
+The Baseband device sample application performs a loop-back operation using a
+baseband device capable of transceiving data packets.
+A packet is received on an ethernet port -> enqueued for downlink baseband
+operation -> dequeued from the downlink baseband device -> enqueued for uplink
+baseband operation -> dequeued from the baseband device -> then the received
+packet is compared with the baseband operations output. Then it's looped back to
+the ethernet port.
+
+* The MAC header is preserved in the packet
+
+Limitations
+-----------
+
+* Only one baseband device and one ethernet port can be used.
+
+Compiling the Application
+-------------------------
+
+#. DPDK needs to be built with ``turbo_sw`` PMD driver enabled along with
+ ``FLEXRAN SDK`` Libraries. Refer to *SW Turbo Poll Mode Driver*
+ documentation for more details on this.
+
+#. Go to the example directory:
+
+ .. code-block:: console
+
+ export RTE_SDK=/path/to/rte_sdk
+ cd ${RTE_SDK}/examples/bbdev_app
+
+#. Set the target (a default target is used if not specified). For example:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ See the *DPDK Getting Started Guide* for possible RTE_TARGET values.
+
+#. Build the application:
+
+ .. code-block:: console
+
+ make
+
+Running the Application
+-----------------------
+
+The application accepts a number of command line options:
+
+.. code-block:: console
+
+ $ ./build/bbdev [EAL options] -- [-e ENCODING_CORES] [-d DECODING_CORES] /
+ [-p ETH_PORT_ID] [-b BBDEV_ID]
+
+where:
+
+* ``e ENCODING_CORES``: hexmask for encoding lcores (default = 0x2)
+* ``d DECODING_CORES``: hexmask for decoding lcores (default = 0x4)
+* ``p ETH_PORT_ID``: ethernet port ID (default = 0)
+* ``b BBDEV_ID``: BBDev ID (default = 0)
+
+The application requires that baseband devices capable of performing
+the specified baseband operations are available on application initialization.
+This means that HW baseband device(s) must be bound to a DPDK driver or
+SW baseband device(s) (virtual bbdevs) must be created (using --vdev).
+
+To run the application in the linuxapp environment with the turbo_sw baseband
+device using the whitelisted port, running one encoding lcore and one decoding
+lcore, issue the command:
+
+.. code-block:: console
+
+ $ ./build/bbdev --vdev='turbo_sw' -w <NIC0PCIADDR> -c 0x38 --socket-mem=2,2 \
+ --file-prefix=bbdev -- -e 0x10 -d 0x20
+
+where NIC0PCIADDR is the PCI address of the Rx port.
+
+This command creates one virtual bbdev device ``turbo_sw`` where the device
+gets linked to a corresponding ethernet port as whitelisted by the parameter -w.
+3 cores are allocated to the application, and assigned as:
+
+ - core 3 is the master and used to print the stats live on screen,
+
+ - core 4 is the encoding lcore performing Rx and Turbo Encode operations
+
+ - core 5 is the decoding lcore performing Turbo Decode, validation and Tx
+ operations
+
+
+Refer to the *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
+
+Using Packet Generator with baseband device sample application
+--------------------------------------------------------------
+
+To allow the bbdev sample app to do the loopback, an influx of traffic is required.
+This can be done by using DPDK Pktgen to burst traffic on two ethernet ports, and
+it will print the transmitted traffic along with the looped-back traffic on the
+Rx ports. Executing the command below will generate traffic on the two
+whitelisted ethernet ports.
+
+.. code-block:: console
+
+ $ ./pktgen-3.4.0/app/x86_64-native-linuxapp-gcc/pktgen -c 0x3 \
+ --socket-mem=1,1 --file-prefix=pg -w <NIC1PCIADDR> -- -m 1.0 -P
+
+where:
+
+* ``-c COREMASK``: A hexadecimal bitmask of cores to run on
+* ``--socket-mem``: Memory to allocate on specific sockets (use comma separated values)
+* ``--file-prefix``: Prefix for hugepage filenames
+* ``-w <NIC1PCIADDR>``: Add a PCI device in white list. The argument format is <[domain:]bus:devid.func>.
+* ``-m <string>``: Matrix for mapping ports to logical cores.
+* ``-P``: PROMISCUOUS mode
+
+
+Refer to *The Pktgen Application* documents for general information on running
+Pktgen with DPDK applications.
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index db68ef7..3d04cf7 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -81,6 +81,7 @@ Sample Applications User Guides
ptpclient
performance_thread
ipsec_secgw
+ bbdev_app
**Figures**
diff --git a/examples/Makefile b/examples/Makefile
index 9f7974a..26bf256 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -61,6 +61,7 @@ ifneq ($(PQOS_INSTALL_PATH),)
DIRS-y += l2fwd-cat
endif
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += l2fwd-crypto
+DIRS-$(CONFIG_RTE_LIBRTE_BBDEV) += bbdev_app
DIRS-$(CONFIG_RTE_LIBRTE_JOBSTATS) += l2fwd-jobstats
DIRS-y += l2fwd-keepalive
DIRS-y += l2fwd-keepalive/ka-agent
diff --git a/examples/bbdev_app/Makefile b/examples/bbdev_app/Makefile
new file mode 100644
index 0000000..1b7ec63
--- /dev/null
+++ b/examples/bbdev_app/Makefile
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = bbdev
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
new file mode 100644
index 0000000..2e5bd8c
--- /dev/null
+++ b/examples/bbdev_app/main.c
@@ -0,0 +1,1144 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/unistd.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <math.h>
+#include <assert.h>
+#include <getopt.h>
+#include <signal.h>
+
+#include "rte_atomic.h"
+#include "rte_common.h"
+#include "rte_eal.h"
+#include "rte_cycles.h"
+#include "rte_ether.h"
+#include "rte_ethdev.h"
+#include "rte_ip.h"
+#include "rte_lcore.h"
+#include "rte_malloc.h"
+#include "rte_mbuf.h"
+#include "rte_memory.h"
+#include "rte_mempool.h"
+#include "rte_log.h"
+#include "rte_bbdev.h"
+#include "rte_bbdev_op.h"
+
+/* LLR values - negative value for '1' bit */
+#define LLR_1_BIT 0x81
+#define LLR_0_BIT 0x7F
+
+#define MAX_PKT_BURST 32
+#define NB_MBUF 8191
+#define MEMPOOL_CACHE_SIZE 256
+
+/* Hardcoded K value */
+#define K 40
+#define NCB (3 * RTE_ALIGN_CEIL(K + 4, 32))
+
+#define CRC_24B_LEN 3
+
+/* Configurable number of RX/TX ring descriptors */
+#define RTE_TEST_RX_DESC_DEFAULT 128
+#define RTE_TEST_TX_DESC_DEFAULT 512
+
+#define BBDEV_ASSERT(a) do { \
+ if (!(a)) { \
+ usage(prgname); \
+ return -1; \
+ } \
+} while (0)
+
+static const struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_NONE,
+ .max_rx_pkt_len = ETHER_MAX_LEN,
+ .split_hdr_size = 0,
+ .header_split = 0, /**< Header Split disabled */
+ .hw_ip_checksum = 0, /**< IP checksum offload disabled */
+ .hw_vlan_filter = 0, /**< VLAN filtering disabled */
+ .jumbo_frame = 0, /**< Jumbo Frame Support disabled */
+ .hw_strip_crc = 0, /**< CRC stripped by hardware */
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ },
+};
+
+struct rte_bbdev_op_turbo_enc def_op_enc = {
+ /* These values are arbitrarily chosen, and do not map to the real
+ * values for the data received from ethdev ports
+ */
+ .rv_index = 0,
+ .code_block_mode = 1,
+ .cb_params = {
+ .k = K,
+ },
+ .op_flags = RTE_BBDEV_TURBO_CRC_24A_ATTACH
+};
+
+struct rte_bbdev_op_turbo_dec def_op_dec = {
+ /* These values are arbitrarily chosen, and do not map to the real
+ * values for the data received from ethdev ports
+ */
+ .code_block_mode = 1,
+ .cb_params = {
+ .k = K,
+ },
+ .rv_index = 0,
+ .iter_max = 8,
+ .iter_min = 4,
+ .ext_scale = 15,
+ .num_maps = 0,
+ .op_flags = RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN
+};
+
+struct app_config_params {
+ /* Placeholders for app params */
+ uint16_t port_id;
+ uint16_t bbdev_id;
+ uint64_t enc_core_mask;
+ uint64_t dec_core_mask;
+
+ /* Values filled during init time */
+ uint16_t enc_queue_ids[RTE_MAX_LCORE];
+ uint16_t dec_queue_ids[RTE_MAX_LCORE];
+ uint16_t num_enc_cores;
+ uint16_t num_dec_cores;
+};
+
+struct lcore_statistics {
+ unsigned int enqueued;
+ unsigned int dequeued;
+ unsigned int rx_lost_packets;
+ unsigned int enc_to_dec_lost_packets;
+ unsigned int tx_lost_packets;
+} __rte_cache_aligned;
+
+/** each lcore configuration */
+struct lcore_conf {
+ uint64_t core_type;
+
+ unsigned int port_id;
+ unsigned int rx_queue_id;
+ unsigned int tx_queue_id;
+
+ unsigned int bbdev_id;
+ unsigned int enc_queue_id;
+ unsigned int dec_queue_id;
+
+ uint8_t llr_temp_buf[NCB];
+
+ struct rte_mempool *bbdev_dec_op_pool;
+ struct rte_mempool *bbdev_enc_op_pool;
+ struct rte_mempool *enc_out_pool;
+ struct rte_ring *enc_to_dec_ring;
+
+ struct lcore_statistics *lcore_stats;
+} __rte_cache_aligned;
+
+struct stats_lcore_params {
+ struct lcore_conf *lconf;
+ struct app_config_params *app_params;
+};
+
+
+static const struct app_config_params def_app_config = {
+ .port_id = 0,
+ .bbdev_id = 0,
+ .enc_core_mask = 0x2,
+ .dec_core_mask = 0x4,
+ .num_enc_cores = 1,
+ .num_dec_cores = 1,
+};
+
+static rte_atomic16_t global_exit_flag;
+
+/* display usage */
+static inline void
+usage(const char *prgname)
+{
+ printf("%s [EAL options] "
+ " --\n"
+ " --enc_core_mask - hexmask for encoding lcores (default = 0x2)\n"
+ " --dec_core_mask - hexmask for decoding lcores (default = 0x4)\n"
+ " --port_id - Ethernet port ID (default = 0)\n"
+ " --bbdev_id - BBDev ID (default = 0)\n"
+ "\n", prgname);
+}
+
+/* parse core mask */
+static inline
+uint16_t bbdev_parse_mask(const char *mask)
+{
+ char *end = NULL;
+ unsigned long pm;
+
+ /* parse hexadecimal string */
+ pm = strtoul(mask, &end, 16);
+ if ((mask[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return 0;
+
+ return pm;
+}
+
+/* parse decimal number */
+static inline
+uint16_t bbdev_parse_number(const char *mask)
+{
+ char *end = NULL;
+ unsigned long pm;
+
+ /* parse decimal string */
+ pm = strtoul(mask, &end, 10);
+ if ((mask[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return 0;
+
+ return pm;
+}
+
+static int
+bbdev_parse_args(int argc, char **argv,
+ struct app_config_params *app_params)
+{
+ int optind = 0;
+ int opt;
+ int opt_indx = 0;
+ char *prgname = argv[0];
+
+ static struct option lgopts[] = {
+ { "enc_core_mask", required_argument, 0, 'e' },
+ { "dec_core_mask", required_argument, 0, 'd' },
+ { "port_id", required_argument, 0, 'p' },
+ { "bbdev_id", required_argument, 0, 'b' },
+ { NULL, 0, 0, 0 }
+ };
+
+ BBDEV_ASSERT(argc != 0);
+ BBDEV_ASSERT(argv != NULL);
+ BBDEV_ASSERT(app_params != NULL);
+
+ while ((opt = getopt_long(argc, argv, "e:d:p:b:", lgopts, &opt_indx)) !=
+ EOF) {
+ switch (opt) {
+ case 'e':
+ app_params->enc_core_mask =
+ bbdev_parse_mask(optarg);
+ if (app_params->enc_core_mask == 0) {
+ usage(prgname);
+ return -1;
+ }
+ app_params->num_enc_cores =
+ __builtin_popcount(app_params->enc_core_mask);
+ break;
+
+ case 'd':
+ app_params->dec_core_mask =
+ bbdev_parse_mask(optarg);
+ if (app_params->dec_core_mask == 0) {
+ usage(prgname);
+ return -1;
+ }
+ app_params->num_dec_cores =
+ __builtin_popcount(app_params->dec_core_mask);
+ break;
+
+ case 'p':
+ app_params->port_id = bbdev_parse_number(optarg);
+ break;
+
+ case 'b':
+ app_params->bbdev_id = bbdev_parse_number(optarg);
+ break;
+
+ default:
+ usage(prgname);
+ return -1;
+ }
+ }
+ optind = 0;
+ return optind;
+}
+
+static void
+signal_handler(int signum)
+{
+ printf("\nSignal %d received\n", signum);
+ rte_atomic16_set(&global_exit_flag, 1);
+}
+
+static void
+print_mac(unsigned int portid, struct ether_addr *bbdev_ports_eth_address)
+{
+ printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+ (unsigned int) portid,
+ bbdev_ports_eth_address[portid].addr_bytes[0],
+ bbdev_ports_eth_address[portid].addr_bytes[1],
+ bbdev_ports_eth_address[portid].addr_bytes[2],
+ bbdev_ports_eth_address[portid].addr_bytes[3],
+ bbdev_ports_eth_address[portid].addr_bytes[4],
+ bbdev_ports_eth_address[portid].addr_bytes[5]);
+}
+
+static inline void
+pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int nb_to_free)
+{
+ unsigned int i;
+ for (i = 0; i < nb_to_free; ++i)
+ rte_pktmbuf_free(mbufs[i]);
+}
+
+static inline void
+pktmbuf_userdata_free_bulk(struct rte_mbuf **mbufs, unsigned int nb_to_free)
+{
+ unsigned int i;
+ for (i = 0; i < nb_to_free; ++i) {
+ struct rte_mbuf *rx_pkt = mbufs[i]->userdata;
+ rte_pktmbuf_free(rx_pkt);
+ rte_pktmbuf_free(mbufs[i]);
+ }
+}
+
+/* Check the link status of the port for up to 9s, then print the result */
+static int
+check_port_link_status(uint16_t port_id)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+ uint8_t count;
+ struct rte_eth_link link;
+
+ printf("\nChecking link status.");
+ fflush(stdout);
+
+ for (count = 0; count <= MAX_CHECK_TIME &&
+ !rte_atomic16_read(&global_exit_flag); count++) {
+ memset(&link, 0, sizeof(link));
+ rte_eth_link_get_nowait(port_id, &link);
+
+ if (link.link_status) {
+ const char *dp = (link.link_duplex ==
+ ETH_LINK_FULL_DUPLEX) ?
+ "full-duplex" : "half-duplex";
+ printf("\nPort %u Link Up - speed %u Mbps - %s\n",
+ port_id, link.link_speed, dp);
+ return 0;
+ }
+ printf(".");
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ printf("\nPort %d Link Down\n", port_id);
+ return 0;
+}
+
+static inline void
+add_ether_hdr(struct rte_mbuf *pkt_src, struct rte_mbuf *pkt_dst)
+{
+ struct ether_hdr *eth_from;
+ struct ether_hdr *eth_to;
+
+ eth_from = rte_pktmbuf_mtod(pkt_src, struct ether_hdr *);
+ eth_to = rte_pktmbuf_mtod(pkt_dst, struct ether_hdr *);
+
+ /* copy header */
+ rte_memcpy(eth_to, eth_from, sizeof(struct ether_hdr));
+}
+
+static inline void
+add_awgn(struct rte_mbuf **mbufs, uint16_t num_pkts)
+{
+ RTE_SET_USED(mbufs);
+ RTE_SET_USED(num_pkts);
+}
+
+/* Encoder output to Decoder input adapter. The Decoder accepts only soft
+ * input, so each bit of the encoder output must be translated into one LLR
+ * byte. As the Sub-block Deinterleaver is bypassed here, padding bytes must
+ * additionally be inserted at the end of each sub-block.
+ */
+static inline void
+transform_enc_out_dec_in(struct rte_mbuf **mbufs, uint8_t *temp_buf,
+ uint16_t num_pkts, uint16_t k)
+{
+ uint16_t i, l, j;
+ uint16_t start_bit_idx;
+ uint16_t out_idx;
+ uint16_t d = k + 4;
+ uint16_t kpi = RTE_ALIGN_CEIL(d, 32);
+ uint16_t nd = kpi - d;
+ uint16_t ncb = 3 * kpi;
+
+ for (i = 0; i < num_pkts; ++i) {
+ uint16_t pkt_data_len = rte_pktmbuf_data_len(mbufs[i]) -
+ sizeof(struct ether_hdr);
+
+ /* Resize the packet if needed */
+ if (pkt_data_len < ncb) {
+ char *data = rte_pktmbuf_append(mbufs[i],
+ ncb - pkt_data_len);
+ if (data == NULL)
+ printf(
+ "Not enough space in decoder input packet\n");
+ }
+
+ /* Translate each bit into 1 LLR byte. */
+ start_bit_idx = 0;
+ out_idx = 0;
+ for (j = 0; j < 3; ++j) {
+ for (l = start_bit_idx; l < start_bit_idx + d; ++l) {
+ uint8_t *data = rte_pktmbuf_mtod_offset(
+ mbufs[i], uint8_t *,
+ sizeof(struct ether_hdr) + (l >> 3));
+ if (*data & (0x80 >> (l & 7)))
+ temp_buf[out_idx] = LLR_1_BIT;
+ else
+ temp_buf[out_idx] = LLR_0_BIT;
+ ++out_idx;
+ }
+ /* Padding bytes go at the end of the sub-block. */
+ memset(&temp_buf[out_idx], 0, nd);
+ out_idx += nd;
+ start_bit_idx += d;
+ }
+
+ rte_memcpy(rte_pktmbuf_mtod_offset(mbufs[i], uint8_t *,
+ sizeof(struct ether_hdr)), temp_buf, ncb);
+ }
+}
+
+static inline void
+verify_data(struct rte_mbuf **mbufs, uint16_t num_pkts)
+{
+ uint16_t i;
+ for (i = 0; i < num_pkts; ++i) {
+ struct rte_mbuf *out = mbufs[i];
+ struct rte_mbuf *in = out->userdata;
+
+ if (memcmp(rte_pktmbuf_mtod_offset(in, uint8_t *,
+ sizeof(struct ether_hdr)),
+ rte_pktmbuf_mtod_offset(out, uint8_t *,
+ sizeof(struct ether_hdr)),
+ K / 8 - CRC_24B_LEN))
+ printf("Input and output buffers are not equal!\n");
+ }
+}
+
+static int
+initialize_ports(struct app_config_params *app_params,
+ struct rte_mempool *ethdev_mbuf_mempool)
+{
+ int ret;
+ uint16_t port_id = app_params->port_id;
+ uint16_t q;
+ /* ethernet addresses of ports */
+ struct ether_addr bbdev_port_eth_addr;
+
+ /* initialize ports */
+ printf("\nInitializing port %u...\n", app_params->port_id);
+ ret = rte_eth_dev_configure(port_id, app_params->num_enc_cores,
+ app_params->num_dec_cores, &port_conf);
+
+ if (ret < 0) {
+ printf("Cannot configure device: err=%d, port=%u\n",
+ ret, port_id);
+ return -1;
+ }
+
+ /* initialize RX queues for encoder */
+ for (q = 0; q < app_params->num_enc_cores; q++) {
+ ret = rte_eth_rx_queue_setup(port_id, q,
+ RTE_TEST_RX_DESC_DEFAULT,
+ rte_eth_dev_socket_id(port_id),
+ NULL, ethdev_mbuf_mempool);
+ if (ret < 0) {
+ printf("rte_eth_rx_queue_setup: err=%d, queue=%u\n",
+ ret, q);
+ return -1;
+ }
+ }
+ /* initialize TX queues for decoder */
+ for (q = 0; q < app_params->num_dec_cores; q++) {
+ ret = rte_eth_tx_queue_setup(port_id, q,
+ RTE_TEST_TX_DESC_DEFAULT,
+ rte_eth_dev_socket_id(port_id), NULL);
+ if (ret < 0) {
+ printf("rte_eth_tx_queue_setup: err=%d, queue=%u\n",
+ ret, q);
+ return -1;
+ }
+ }
+
+ rte_eth_promiscuous_enable(port_id);
+
+ rte_eth_macaddr_get(port_id, &bbdev_port_eth_addr);
+ print_mac(port_id, &bbdev_port_eth_addr);
+
+ return 0;
+}
+
+static void
+lcore_conf_init(struct app_config_params *app_params,
+ struct lcore_conf *lcore_conf,
+ struct rte_mempool **bbdev_op_pools,
+ struct rte_mempool *bbdev_mbuf_mempool,
+ struct rte_ring *enc_to_dec_ring,
+ struct lcore_statistics *lcore_stats)
+{
+ unsigned int lcore_id;
+ struct lcore_conf *lconf;
+ uint16_t rx_queue_id = 0;
+ uint16_t tx_queue_id = 0;
+ uint16_t enc_q_id = 0;
+ uint16_t dec_q_id = 0;
+
+ /* Configure lcores */
+ for (lcore_id = 0; lcore_id < 8 * sizeof(uint64_t); ++lcore_id) {
+ lconf = &lcore_conf[lcore_id];
+ lconf->core_type = 0;
+
+ if ((1ULL << lcore_id) & app_params->enc_core_mask) {
+ lconf->core_type |= (1 << RTE_BBDEV_OP_TURBO_ENC);
+ lconf->rx_queue_id = rx_queue_id++;
+ lconf->enc_queue_id =
+ app_params->enc_queue_ids[enc_q_id++];
+ }
+
+ if ((1ULL << lcore_id) & app_params->dec_core_mask) {
+ lconf->core_type |= (1 << RTE_BBDEV_OP_TURBO_DEC);
+ lconf->tx_queue_id = tx_queue_id++;
+ lconf->dec_queue_id =
+ app_params->dec_queue_ids[dec_q_id++];
+ }
+
+ lconf->bbdev_enc_op_pool =
+ bbdev_op_pools[RTE_BBDEV_OP_TURBO_ENC];
+ lconf->bbdev_dec_op_pool =
+ bbdev_op_pools[RTE_BBDEV_OP_TURBO_DEC];
+ lconf->bbdev_id = app_params->bbdev_id;
+ lconf->port_id = app_params->port_id;
+ lconf->enc_out_pool = bbdev_mbuf_mempool;
+ lconf->enc_to_dec_ring = enc_to_dec_ring;
+ lconf->lcore_stats = &lcore_stats[lcore_id];
+ }
+}
+
+static void
+print_lcore_stats(struct lcore_statistics *lstats, unsigned int lcore_id)
+{
+ static const char *stats_border = "_______";
+
+ printf("\nLcore %d: %s enqueued count:\t\t%u\n",
+ lcore_id, stats_border, lstats->enqueued);
+ printf("Lcore %d: %s dequeued count:\t\t%u\n",
+ lcore_id, stats_border, lstats->dequeued);
+ printf("Lcore %d: %s RX lost packets count:\t\t%u\n",
+ lcore_id, stats_border, lstats->rx_lost_packets);
+ printf("Lcore %d: %s encoder-to-decoder lost count:\t%u\n",
+ lcore_id, stats_border,
+ lstats->enc_to_dec_lost_packets);
+ printf("Lcore %d: %s TX lost packets count:\t\t%u\n",
+ lcore_id, stats_border, lstats->tx_lost_packets);
+}
+
+static void
+print_stats(struct stats_lcore_params *stats_lcore)
+{
+ unsigned int l_id;
+ unsigned int bbdev_id = stats_lcore->app_params->bbdev_id;
+ unsigned int port_id = stats_lcore->app_params->port_id;
+ int len, ret, i;
+
+ struct rte_eth_xstat *xstats;
+ struct rte_eth_xstat_name *xstats_names;
+ struct rte_bbdev_stats bbstats;
+ static const char *stats_border = "_______";
+
+ const char clr[] = { 27, '[', '2', 'J', '\0' };
+ const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+ /* Clear screen and move to top left */
+ printf("%s%s", clr, topLeft);
+
+ printf("PORT STATISTICS:\n================\n");
+ len = rte_eth_xstats_get(port_id, NULL, 0);
+ if (len < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_xstats_get(%u) failed: %d", port_id,
+ len);
+
+ xstats = calloc(len, sizeof(*xstats));
+ if (xstats == NULL)
+ rte_exit(EXIT_FAILURE,
+ "Failed to calloc memory for xstats");
+
+ ret = rte_eth_xstats_get(port_id, xstats, len);
+ if (ret < 0 || ret > len)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_xstats_get(%u) len%i failed: %d",
+ port_id, len, ret);
+
+ xstats_names = calloc(len, sizeof(*xstats_names));
+ if (xstats_names == NULL)
+ rte_exit(EXIT_FAILURE,
+ "Failed to calloc memory for xstats_names");
+
+ ret = rte_eth_xstats_get_names(port_id, xstats_names, len);
+ if (ret < 0 || ret > len)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_xstats_get_names(%u) len%i failed: %d",
+ port_id, len, ret);
+
+ for (i = 0; i < len; i++) {
+ if (xstats[i].value > 0)
+ printf("Port %u: %s %s:\t\t%"PRIu64"\n",
+ port_id, stats_border,
+ xstats_names[i].name,
+ xstats[i].value);
+ }
+
+ printf("\nBBDEV STATISTICS:\n=================\n");
+ rte_bbdev_stats_get(bbdev_id, &bbstats);
+ printf("BBDEV %u: %s enqueue count:\t\t%"PRIu64"\n",
+ bbdev_id, stats_border,
+ bbstats.enqueued_count);
+ printf("BBDEV %u: %s dequeue count:\t\t%"PRIu64"\n",
+ bbdev_id, stats_border,
+ bbstats.dequeued_count);
+ printf("BBDEV %u: %s enqueue error count:\t\t%"PRIu64"\n",
+ bbdev_id, stats_border,
+ bbstats.enqueue_err_count);
+ printf("BBDEV %u: %s dequeue error count:\t\t%"PRIu64"\n\n",
+ bbdev_id, stats_border,
+ bbstats.dequeue_err_count);
+
+ printf("LCORE STATISTICS:\n=================\n");
+ for (l_id = 0; l_id < RTE_MAX_LCORE; ++l_id) {
+ if (stats_lcore->lconf[l_id].core_type == 0)
+ continue;
+ print_lcore_stats(stats_lcore->lconf[l_id].lcore_stats, l_id);
+ }
+}
+
+static int
+stats_loop(void *arg)
+{
+ struct stats_lcore_params *stats_lcore = arg;
+
+ while (!rte_atomic16_read(&global_exit_flag)) {
+ print_stats(stats_lcore);
+ rte_delay_ms(500);
+ }
+
+ return 0;
+}
+
+static inline void
+run_encoding(struct lcore_conf *lcore_conf)
+{
+ uint16_t i;
+ uint16_t port_id, rx_queue_id;
+ uint16_t bbdev_id, enc_queue_id;
+ uint16_t nb_rx, nb_enq, nb_deq, nb_sent;
+ struct rte_mbuf *rx_pkts_burst[MAX_PKT_BURST];
+ struct rte_mbuf *enc_out_pkts[MAX_PKT_BURST];
+ struct rte_bbdev_enc_op *bbdev_ops_burst[MAX_PKT_BURST];
+ struct lcore_statistics *lcore_stats;
+ struct rte_mempool *bbdev_op_pool, *enc_out_pool;
+ struct rte_ring *enc_to_dec_ring;
+ const int in_data_len = (def_op_enc.cb_params.k / 8) - CRC_24B_LEN;
+
+ lcore_stats = lcore_conf->lcore_stats;
+ port_id = lcore_conf->port_id;
+ rx_queue_id = lcore_conf->rx_queue_id;
+ bbdev_id = lcore_conf->bbdev_id;
+ enc_queue_id = lcore_conf->enc_queue_id;
+ bbdev_op_pool = lcore_conf->bbdev_enc_op_pool;
+ enc_out_pool = lcore_conf->enc_out_pool;
+ enc_to_dec_ring = lcore_conf->enc_to_dec_ring;
+
+ /* Read packets from the RX queue */
+ nb_rx = rte_eth_rx_burst(port_id, rx_queue_id, rx_pkts_burst,
+ MAX_PKT_BURST);
+ if (!nb_rx)
+ return;
+
+ if (unlikely(rte_mempool_get_bulk(enc_out_pool, (void **)enc_out_pkts,
+ nb_rx) != 0)) {
+ pktmbuf_free_bulk(rx_pkts_burst, nb_rx);
+ lcore_stats->rx_lost_packets += nb_rx;
+ return;
+ }
+
+ if (unlikely(rte_bbdev_enc_op_alloc_bulk(bbdev_op_pool, bbdev_ops_burst,
+ nb_rx) != 0)) {
+ pktmbuf_free_bulk(enc_out_pkts, nb_rx);
+ pktmbuf_free_bulk(rx_pkts_burst, nb_rx);
+ lcore_stats->rx_lost_packets += nb_rx;
+ return;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ char *data;
+ const uint16_t pkt_data_len =
+ rte_pktmbuf_data_len(rx_pkts_burst[i]) -
+ sizeof(struct ether_hdr);
+ /* save input mbuf pointer for later comparison */
+ enc_out_pkts[i]->userdata = rx_pkts_burst[i];
+
+ /* copy ethernet header */
+ rte_pktmbuf_reset(enc_out_pkts[i]);
+ data = rte_pktmbuf_append(enc_out_pkts[i],
+ sizeof(struct ether_hdr));
+ if (data == NULL) {
+ printf(
+ "Not enough space for ethernet header in encoder output mbuf\n");
+ continue;
+ }
+ add_ether_hdr(rx_pkts_burst[i], enc_out_pkts[i]);
+
+ /* set op */
+ bbdev_ops_burst[i]->turbo_enc = def_op_enc;
+
+ bbdev_ops_burst[i]->turbo_enc.input.data =
+ rx_pkts_burst[i];
+ bbdev_ops_burst[i]->turbo_enc.input.offset =
+ sizeof(struct ether_hdr);
+ /* Encoder will attach the CRC24B, adjust the length */
+ bbdev_ops_burst[i]->turbo_enc.input.length = in_data_len;
+
+ if (in_data_len < pkt_data_len)
+ rte_pktmbuf_trim(rx_pkts_burst[i], pkt_data_len -
+ in_data_len);
+ else if (in_data_len > pkt_data_len) {
+ data = rte_pktmbuf_append(rx_pkts_burst[i],
+ in_data_len - pkt_data_len);
+ if (data == NULL)
+ printf(
+ "Not enough storage in mbuf to perform the encoding\n");
+ }
+
+ bbdev_ops_burst[i]->turbo_enc.output.data =
+ enc_out_pkts[i];
+ bbdev_ops_burst[i]->turbo_enc.output.offset =
+ sizeof(struct ether_hdr);
+ }
+
+ /* Enqueue packets on BBDevice */
+ nb_enq = rte_bbdev_enqueue_enc_ops(bbdev_id, enc_queue_id,
+ bbdev_ops_burst, nb_rx);
+ if (unlikely(nb_enq < nb_rx)) {
+ pktmbuf_userdata_free_bulk(&enc_out_pkts[nb_enq],
+ nb_rx - nb_enq);
+ rte_bbdev_enc_op_free_bulk(&bbdev_ops_burst[nb_enq],
+ nb_rx - nb_enq);
+ lcore_stats->rx_lost_packets += nb_rx - nb_enq;
+
+ if (!nb_enq)
+ return;
+ }
+
+ lcore_stats->enqueued += nb_enq;
+
+ /* Dequeue packets from the bbdev device */
+ nb_deq = 0;
+ do {
+ nb_deq += rte_bbdev_dequeue_enc_ops(bbdev_id, enc_queue_id,
+ &bbdev_ops_burst[nb_deq], nb_enq - nb_deq);
+ } while (unlikely(nb_deq < nb_enq));
+
+ lcore_stats->dequeued += nb_deq;
+
+ /* Generate and add AWGN */
+ add_awgn(enc_out_pkts, nb_deq);
+
+ rte_bbdev_enc_op_free_bulk(bbdev_ops_burst, nb_deq);
+
+ /* Enqueue packets to encoder-to-decoder ring */
+ nb_sent = rte_ring_enqueue_burst(enc_to_dec_ring, (void **)enc_out_pkts,
+ nb_deq, NULL);
+ if (unlikely(nb_sent < nb_deq)) {
+ pktmbuf_userdata_free_bulk(&enc_out_pkts[nb_sent],
+ nb_deq - nb_sent);
+ lcore_stats->enc_to_dec_lost_packets += nb_deq - nb_sent;
+ }
+}
+
+static void
+run_decoding(struct lcore_conf *lcore_conf)
+{
+ uint16_t i;
+ uint16_t port_id, tx_queue_id;
+ uint16_t bbdev_id, bbdev_queue_id;
+ uint16_t nb_recv, nb_enq, nb_deq, nb_tx;
+ uint8_t *llr_temp_buf;
+ struct rte_mbuf *recv_pkts_burst[MAX_PKT_BURST];
+ struct rte_bbdev_dec_op *bbdev_ops_burst[MAX_PKT_BURST];
+ struct lcore_statistics *lcore_stats;
+ struct rte_mempool *bbdev_op_pool;
+ struct rte_ring *enc_to_dec_ring;
+
+ lcore_stats = lcore_conf->lcore_stats;
+ port_id = lcore_conf->port_id;
+ tx_queue_id = lcore_conf->tx_queue_id;
+ bbdev_id = lcore_conf->bbdev_id;
+ bbdev_queue_id = lcore_conf->dec_queue_id;
+ bbdev_op_pool = lcore_conf->bbdev_dec_op_pool;
+ enc_to_dec_ring = lcore_conf->enc_to_dec_ring;
+ llr_temp_buf = lcore_conf->llr_temp_buf;
+
+ /* Dequeue packets from the ring */
+ nb_recv = rte_ring_dequeue_burst(enc_to_dec_ring,
+ (void **)recv_pkts_burst, MAX_PKT_BURST, NULL);
+ if (!nb_recv)
+ return;
+
+ if (unlikely(rte_bbdev_dec_op_alloc_bulk(bbdev_op_pool, bbdev_ops_burst,
+ nb_recv) != 0)) {
+ pktmbuf_userdata_free_bulk(recv_pkts_burst, nb_recv);
+ lcore_stats->rx_lost_packets += nb_recv;
+ return;
+ }
+
+ transform_enc_out_dec_in(recv_pkts_burst, llr_temp_buf, nb_recv,
+ def_op_dec.cb_params.k);
+
+ for (i = 0; i < nb_recv; i++) {
+ /* set op */
+ bbdev_ops_burst[i]->turbo_dec = def_op_dec;
+
+ bbdev_ops_burst[i]->turbo_dec.input.data = recv_pkts_burst[i];
+ bbdev_ops_burst[i]->turbo_dec.input.offset =
+ sizeof(struct ether_hdr);
+ bbdev_ops_burst[i]->turbo_dec.input.length =
+ rte_pktmbuf_data_len(recv_pkts_burst[i])
+ - sizeof(struct ether_hdr);
+
+ bbdev_ops_burst[i]->turbo_dec.hard_output.data =
+ recv_pkts_burst[i];
+ bbdev_ops_burst[i]->turbo_dec.hard_output.offset =
+ sizeof(struct ether_hdr);
+ }
+
+ /* Enqueue packets on BBDevice */
+ nb_enq = rte_bbdev_enqueue_dec_ops(bbdev_id, bbdev_queue_id,
+ bbdev_ops_burst, nb_recv);
+ if (unlikely(nb_enq < nb_recv)) {
+ pktmbuf_userdata_free_bulk(&recv_pkts_burst[nb_enq],
+ nb_recv - nb_enq);
+ rte_bbdev_dec_op_free_bulk(&bbdev_ops_burst[nb_enq],
+ nb_recv - nb_enq);
+ lcore_stats->rx_lost_packets += nb_recv - nb_enq;
+
+ if (!nb_enq)
+ return;
+ }
+
+ lcore_stats->enqueued += nb_enq;
+
+ /* Dequeue packets from BBDevice */
+ nb_deq = 0;
+ do {
+ nb_deq += rte_bbdev_dequeue_dec_ops(bbdev_id, bbdev_queue_id,
+ &bbdev_ops_burst[nb_deq], nb_enq - nb_deq);
+ } while (unlikely(nb_deq < nb_enq));
+
+ lcore_stats->dequeued += nb_deq;
+
+ rte_bbdev_dec_op_free_bulk(bbdev_ops_burst, nb_deq);
+
+ verify_data(recv_pkts_burst, nb_deq);
+
+ /* Free the RX mbufs after verification */
+ for (i = 0; i < nb_deq; ++i)
+ rte_pktmbuf_free(recv_pkts_burst[i]->userdata);
+
+ /* Transmit the packets */
+ nb_tx = rte_eth_tx_burst(port_id, tx_queue_id, recv_pkts_burst, nb_deq);
+ if (unlikely(nb_tx < nb_deq)) {
+ pktmbuf_userdata_free_bulk(&recv_pkts_burst[nb_tx],
+ nb_deq - nb_tx);
+ lcore_stats->tx_lost_packets += nb_deq - nb_tx;
+ }
+}
+
+static int
+processing_loop(void *arg)
+{
+ struct lcore_conf *lcore_conf = arg;
+ const bool run_encoder = (lcore_conf->core_type &
+ (1 << RTE_BBDEV_OP_TURBO_ENC));
+ const bool run_decoder = (lcore_conf->core_type &
+ (1 << RTE_BBDEV_OP_TURBO_DEC));
+
+ while (!rte_atomic16_read(&global_exit_flag)) {
+ if (run_encoder)
+ run_encoding(lcore_conf);
+ if (run_decoder)
+ run_decoding(lcore_conf);
+ }
+
+ return 0;
+}
+
+static int
+prepare_bbdev_device(unsigned int dev_id, struct rte_bbdev_info *info,
+ struct app_config_params *app_params)
+{
+ int ret;
+ unsigned int q_id, dec_q_id, enc_q_id;
+ struct rte_bbdev_queue_conf qconf = {0};
+ uint16_t dec_qs_nb = app_params->num_dec_cores;
+ uint16_t enc_qs_nb = app_params->num_enc_cores;
+ uint16_t tot_qs = dec_qs_nb + enc_qs_nb;
+
+ ret = rte_bbdev_setup_queues(dev_id, tot_qs, info->socket_id);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "ERROR(%d): BBDEV %u not configured properly\n",
+ ret, dev_id);
+
+ /* setup device DEC queues */
+ qconf.socket = info->socket_id;
+ qconf.queue_size = info->drv.queue_size_lim;
+ qconf.op_type = RTE_BBDEV_OP_TURBO_DEC;
+
+ for (q_id = 0, dec_q_id = 0; q_id < dec_qs_nb; q_id++) {
+ ret = rte_bbdev_queue_configure(dev_id, q_id, &qconf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "ERROR(%d): BBDEV %u DEC queue %u not configured properly\n",
+ ret, dev_id, q_id);
+ app_params->dec_queue_ids[dec_q_id++] = q_id;
+ }
+
+ /* setup device ENC queues */
+ qconf.op_type = RTE_BBDEV_OP_TURBO_ENC;
+
+ for (q_id = dec_qs_nb, enc_q_id = 0; q_id < tot_qs; q_id++) {
+ ret = rte_bbdev_queue_configure(dev_id, q_id, &qconf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "ERROR(%d): BBDEV %u ENC queue %u not configured properly\n",
+ ret, dev_id, q_id);
+ app_params->enc_queue_ids[enc_q_id++] = q_id;
+ }
+
+ ret = rte_bbdev_start(dev_id);
+
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE, "ERROR(%d): BBDEV %u not started\n",
+ ret, dev_id);
+
+ printf("BBdev %u started\n", dev_id);
+
+ return 0;
+}
+
+static inline bool
+check_matching_capabilities(uint64_t mask, uint64_t required_mask)
+{
+ return (mask & required_mask) == required_mask;
+}
+
+static void
+enable_bbdev(struct app_config_params *app_params)
+{
+ struct rte_bbdev_info dev_info;
+ const struct rte_bbdev_op_cap *op_cap;
+ uint16_t bbdev_id = app_params->bbdev_id;
+ bool encoder_capable = false;
+ bool decoder_capable = false;
+
+ rte_bbdev_info_get(bbdev_id, &dev_info);
+ op_cap = dev_info.drv.capabilities;
+
+ while (op_cap->type != RTE_BBDEV_OP_NONE) {
+ if (op_cap->type == RTE_BBDEV_OP_TURBO_ENC) {
+ if (check_matching_capabilities(
+ op_cap->cap.turbo_enc.capability_flags,
+ def_op_enc.op_flags))
+ encoder_capable = true;
+ }
+
+ if (op_cap->type == RTE_BBDEV_OP_TURBO_DEC) {
+ if (check_matching_capabilities(
+ op_cap->cap.turbo_dec.capability_flags,
+ def_op_dec.op_flags))
+ decoder_capable = true;
+ }
+
+ op_cap++;
+ }
+
+ if (encoder_capable == false)
+ rte_exit(EXIT_FAILURE,
+ "The specified BBDev %u doesn't have required encoder capabilities!\n",
+ bbdev_id);
+ if (decoder_capable == false)
+ rte_exit(EXIT_FAILURE,
+ "The specified BBDev %u doesn't have required decoder capabilities!\n",
+ bbdev_id);
+
+ prepare_bbdev_device(bbdev_id, &dev_info, app_params);
+}
+
+int
+main(int argc, char **argv)
+{
+ int ret;
+ unsigned int nb_bbdevs, nb_ports, flags, lcore_id;
+ void *sigret;
+ struct app_config_params app_params = def_app_config;
+ struct rte_mempool *ethdev_mbuf_mempool, *bbdev_mbuf_mempool;
+ struct rte_mempool *bbdev_op_pools[RTE_BBDEV_OP_TYPE_COUNT];
+ struct lcore_conf lcore_conf[RTE_MAX_LCORE] = { {0} };
+ struct lcore_statistics lcore_stats[RTE_MAX_LCORE] = { {0} };
+ struct stats_lcore_params stats_lcore;
+ struct rte_ring *enc_to_dec_ring;
+ bool stats_thread_started = false;
+ unsigned int master_lcore_id = rte_get_master_lcore();
+
+ rte_atomic16_init(&global_exit_flag);
+
+ sigret = signal(SIGTERM, signal_handler);
+ if (sigret == SIG_ERR)
+ rte_exit(EXIT_FAILURE, "signal(%d, ...) failed", SIGTERM);
+
+ sigret = signal(SIGINT, signal_handler);
+ if (sigret == SIG_ERR)
+ rte_exit(EXIT_FAILURE, "signal(%d, ...) failed", SIGINT);
+
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+
+ argc -= ret;
+ argv += ret;
+
+ /* parse application arguments (after the EAL ones) */
+ ret = bbdev_parse_args(argc, argv, &app_params);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid BBDEV arguments\n");
+
+ /* create bbdev op pools */
+ bbdev_op_pools[RTE_BBDEV_OP_TURBO_DEC] =
+ rte_bbdev_op_pool_create("bbdev_op_pool_dec",
+ RTE_BBDEV_OP_TURBO_DEC, NB_MBUF, 128, rte_socket_id());
+ bbdev_op_pools[RTE_BBDEV_OP_TURBO_ENC] =
+ rte_bbdev_op_pool_create("bbdev_op_pool_enc",
+ RTE_BBDEV_OP_TURBO_ENC, NB_MBUF, 128, rte_socket_id());
+
+ if ((bbdev_op_pools[RTE_BBDEV_OP_TURBO_DEC] == NULL) ||
+ (bbdev_op_pools[RTE_BBDEV_OP_TURBO_ENC] == NULL))
+ rte_exit(EXIT_FAILURE, "Cannot create bbdev op pools\n");
+
+ /* Create encoder to decoder ring */
+ flags = (app_params.num_enc_cores == 1) ? RING_F_SP_ENQ : 0;
+ if (app_params.num_dec_cores == 1)
+ flags |= RING_F_SC_DEQ;
+
+ enc_to_dec_ring = rte_ring_create("enc_to_dec_ring",
+ rte_align32pow2(NB_MBUF), rte_socket_id(), flags);
+ if (enc_to_dec_ring == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot create enc_to_dec ring\n");
+
+ /* Get the number of available bbdev devices */
+ nb_bbdevs = rte_bbdev_count();
+ if (nb_bbdevs <= app_params.bbdev_id)
+ rte_exit(EXIT_FAILURE,
+ "%u BBDevs detected, cannot use BBDev with ID %u!\n",
+ nb_bbdevs, app_params.bbdev_id);
+ printf("Number of bbdevs detected: %d\n", nb_bbdevs);
+
+ /* Get the number of available ethdev devices */
+ nb_ports = rte_eth_dev_count();
+ if (nb_ports <= app_params.port_id)
+ rte_exit(EXIT_FAILURE,
+ "%u ports detected, cannot use port with ID %u!\n",
+ nb_ports, app_params.port_id);
+
+ /* create the mbuf mempool for ethdev pkts */
+ ethdev_mbuf_mempool = rte_pktmbuf_pool_create("ethdev_mbuf_pool",
+ NB_MBUF, MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+ if (ethdev_mbuf_mempool == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot create ethdev mbuf mempool\n");
+
+ /* create the mbuf mempool for encoder output */
+ bbdev_mbuf_mempool = rte_pktmbuf_pool_create("bbdev_mbuf_pool",
+ NB_MBUF, MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+ if (bbdev_mbuf_mempool == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot create bbdev mbuf mempool\n");
+
+ /* initialize ports */
+ ret = initialize_ports(&app_params, ethdev_mbuf_mempool);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Cannot initialize ports\n");
+
+ /* Check if all requested lcores are available */
+ for (lcore_id = 0; lcore_id < 8 * sizeof(uint64_t); ++lcore_id)
+ if (((1ULL << lcore_id) & app_params.enc_core_mask) ||
+ ((1ULL << lcore_id) & app_params.dec_core_mask))
+ if (!rte_lcore_is_enabled(lcore_id))
+ rte_exit(EXIT_FAILURE,
+ "Requested lcore_id %u is not enabled!\n",
+ lcore_id);
+
+ /* Start ethernet port */
+ ret = rte_eth_dev_start(app_params.port_id);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_dev_start:err=%d, port=%u\n",
+ ret, app_params.port_id);
+
+ ret = check_port_link_status(app_params.port_id);
+ if (ret < 0)
+ exit(EXIT_FAILURE);
+
+ /* start BBDevice and save BBDev queue IDs */
+ enable_bbdev(&app_params);
+
+ /* Initialize the port/queue configuration of each logical core */
+ lcore_conf_init(&app_params, lcore_conf, bbdev_op_pools,
+ bbdev_mbuf_mempool, enc_to_dec_ring, lcore_stats);
+
+ stats_lcore.app_params = &app_params;
+ stats_lcore.lconf = lcore_conf;
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (lcore_conf[lcore_id].core_type != 0)
+ /* launch per-lcore processing loop on slave lcores */
+ rte_eal_remote_launch(processing_loop,
+ &lcore_conf[lcore_id], lcore_id);
+ else if (!stats_thread_started) {
+ /* launch statistics printing loop */
+ rte_eal_remote_launch(stats_loop, &stats_lcore,
+ lcore_id);
+ stats_thread_started = true;
+ }
+ }
+
+ if (!stats_thread_started &&
+ lcore_conf[master_lcore_id].core_type != 0)
+ rte_exit(EXIT_FAILURE,
+ "Not enough lcores to run the statistics printing loop!");
+ else if (lcore_conf[master_lcore_id].core_type != 0)
+ processing_loop(&lcore_conf[master_lcore_id]);
+ else if (!stats_thread_started)
+ stats_loop(&stats_lcore);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ ret |= rte_eal_wait_lcore(lcore_id);
+ }
+
+ return ret;
+}
--
2.7.4
^ permalink raw reply [flat|nested] 11+ messages in thread
* [dpdk-dev] [PATCH v5 0/5] Introducing Wireless Base Band Device (bbdev) abstraction library
2018-01-11 19:23 [dpdk-dev] [PATCH v5 1/5] bbdev: introducing wireless base band device abstraction lib Amr Mokhtar
` (3 preceding siblings ...)
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 5/5] bbdev: sample app Amr Mokhtar
@ 2018-01-11 19:23 ` Amr Mokhtar
2018-01-19 0:45 ` Thomas Monjalon
4 siblings, 1 reply; 11+ messages in thread
From: Amr Mokhtar @ 2018-01-11 19:23 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, anatoly.burakov, pablo.de.lara.guarch,
niall.power, chris.macnamara, Amr Mokhtar
Hello,
Latest update for Wireless Device abstraction library.
v5:
* Fixed patch titles
* Fixed build error in bbdev test app
* Few editorials
v4:
* Organized patch set
* Enhanced bbdev sample app
* Turbo operation application interface is better documented
http://dpdk.org/dev/patchwork/patch/32653/
http://dpdk.org/dev/patchwork/patch/32654/
http://dpdk.org/dev/patchwork/patch/32655/
http://dpdk.org/dev/patchwork/patch/32657/
http://dpdk.org/dev/patchwork/patch/32656/
v3:
* Cleaner Turbo Code operation interface
* Enhanced SW Turbo PMD (turbo_sw)
* Removed pci & vdev dependency from bbdev library interface
* Updated download instructions for Intel FlexRAN SDK library
http://dpdk.org/dev/patchwork/patch/31990/
http://dpdk.org/dev/patchwork/patch/31991/
http://dpdk.org/dev/patchwork/patch/31992/
http://dpdk.org/dev/patchwork/patch/31993/
http://dpdk.org/dev/patchwork/patch/31994/
v2:
* Split the functionality of rte_bbdev_configure() into smaller portions ->
rte_bbdev_setup_queues() and rte_bbdev_enable_intr()
* Split rte_bbdev_enqueue() -> rte_bbdev_enc_enqueue() and rte_bbdev_dec_enqueue()
* Split rte_bbdev_dequeue() -> rte_bbdev_enc_dequeue() and rte_bbdev_dec_dequeue()
* Removed attached flag until hotplug is properly supported in DPDK
* More details on the installation of FlexRAN SDK libraries in accordance with Turbo_sw PMD
* Minor build fixes for other targets: bsdapp-gcc, bsdapp-clang and linuxapp-clang.
* Better-organized patchwork
http://dpdk.org/dev/patchwork/patch/30498/
http://dpdk.org/dev/patchwork/patch/30499/
http://dpdk.org/dev/patchwork/patch/30500/
http://dpdk.org/dev/patchwork/patch/30501/
http://dpdk.org/dev/patchwork/patch/30502/
v1:
* Initial release of BBDEV library.
* Support Turbo Code FEC with two virtual devices (vdev):
- Null Turbo PMD
- Turbo_sw PMD
* A complete Test suite for Turbo Encode/Decode and None operations
* Test Vectors parsing and testing functionality
* Sample App for a looped-back bbdev with ethdev
* Documentation in rst format for all new components
http://dpdk.org/dev/patchwork/patch/29447/
http://dpdk.org/dev/patchwork/patch/29448/
http://dpdk.org/dev/patchwork/patch/29450/
http://dpdk.org/dev/patchwork/patch/29449/
http://dpdk.org/dev/patchwork/patch/29452/
http://dpdk.org/dev/patchwork/patch/29451/
RFC:
http://dpdk.org/dev/patchwork/patch/27984/
Amr Mokhtar (5):
bbdev: introducing wireless base band device abstraction lib
bbdev: null device driver
bbdev: software turbo driver
bbdev: test applications
bbdev: sample app
MAINTAINERS | 11 +
app/Makefile | 4 +
app/test-bbdev/Makefile | 22 +
app/test-bbdev/main.c | 325 +++
app/test-bbdev/main.h | 120 ++
app/test-bbdev/test-bbdev.py | 111 +
app/test-bbdev/test_bbdev.c | 1378 +++++++++++++
app/test-bbdev/test_bbdev_perf.c | 2136 ++++++++++++++++++++
app/test-bbdev/test_bbdev_vector.c | 937 +++++++++
app/test-bbdev/test_bbdev_vector.h | 71 +
app/test-bbdev/test_vectors/bbdev_vector_null.data | 5 +
.../test_vectors/bbdev_vector_td_default.data | 54 +
.../test_vectors/bbdev_vector_te_default.data | 33 +
config/common_base | 21 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf | 1 +
doc/guides/bbdevs/index.rst | 12 +
doc/guides/bbdevs/null.rst | 49 +
doc/guides/bbdevs/turbo_sw.rst | 147 ++
doc/guides/index.rst | 1 +
doc/guides/prog_guide/bbdev.rst | 585 ++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/rel_notes/release_18_02.rst | 12 +
doc/guides/sample_app_ug/bbdev_app.rst | 132 ++
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/tools/index.rst | 1 +
doc/guides/tools/testbbdev.rst | 538 +++++
drivers/Makefile | 2 +
drivers/bbdev/Makefile | 14 +
drivers/bbdev/null/Makefile | 24 +
drivers/bbdev/null/bbdev_null.c | 346 ++++
drivers/bbdev/null/rte_pmd_bbdev_null_version.map | 3 +
drivers/bbdev/turbo_sw/Makefile | 41 +
drivers/bbdev/turbo_sw/bbdev_turbo_software.c | 1206 +++++++++++
.../turbo_sw/rte_pmd_bbdev_turbo_sw_version.map | 3 +
examples/Makefile | 1 +
examples/bbdev_app/Makefile | 22 +
examples/bbdev_app/main.c | 1144 +++++++++++
lib/Makefile | 2 +
lib/librte_bbdev/Makefile | 28 +
lib/librte_bbdev/rte_bbdev.c | 1117 ++++++++++
lib/librte_bbdev/rte_bbdev.h | 715 +++++++
lib/librte_bbdev/rte_bbdev_op.h | 638 ++++++
lib/librte_bbdev/rte_bbdev_pmd.h | 198 ++
lib/librte_bbdev/rte_bbdev_version.map | 37 +
mk/rte.app.mk | 13 +
46 files changed, 12263 insertions(+)
create mode 100644 app/test-bbdev/Makefile
create mode 100644 app/test-bbdev/main.c
create mode 100644 app/test-bbdev/main.h
create mode 100755 app/test-bbdev/test-bbdev.py
create mode 100644 app/test-bbdev/test_bbdev.c
create mode 100644 app/test-bbdev/test_bbdev_perf.c
create mode 100644 app/test-bbdev/test_bbdev_vector.c
create mode 100644 app/test-bbdev/test_bbdev_vector.h
create mode 100644 app/test-bbdev/test_vectors/bbdev_vector_null.data
create mode 100644 app/test-bbdev/test_vectors/bbdev_vector_td_default.data
create mode 100644 app/test-bbdev/test_vectors/bbdev_vector_te_default.data
create mode 100644 doc/guides/bbdevs/index.rst
create mode 100644 doc/guides/bbdevs/null.rst
create mode 100644 doc/guides/bbdevs/turbo_sw.rst
create mode 100644 doc/guides/prog_guide/bbdev.rst
create mode 100644 doc/guides/sample_app_ug/bbdev_app.rst
create mode 100644 doc/guides/tools/testbbdev.rst
create mode 100644 drivers/bbdev/Makefile
create mode 100644 drivers/bbdev/null/Makefile
create mode 100644 drivers/bbdev/null/bbdev_null.c
create mode 100644 drivers/bbdev/null/rte_pmd_bbdev_null_version.map
create mode 100644 drivers/bbdev/turbo_sw/Makefile
create mode 100644 drivers/bbdev/turbo_sw/bbdev_turbo_software.c
create mode 100644 drivers/bbdev/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
create mode 100644 examples/bbdev_app/Makefile
create mode 100644 examples/bbdev_app/main.c
create mode 100644 lib/librte_bbdev/Makefile
create mode 100644 lib/librte_bbdev/rte_bbdev.c
create mode 100644 lib/librte_bbdev/rte_bbdev.h
create mode 100644 lib/librte_bbdev/rte_bbdev_op.h
create mode 100644 lib/librte_bbdev/rte_bbdev_pmd.h
create mode 100644 lib/librte_bbdev/rte_bbdev_version.map
--
2.7.4
* Re: [dpdk-dev] [PATCH v5 3/5] bbdev: software turbo driver
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 3/5] bbdev: software turbo driver Amr Mokhtar
@ 2018-01-19 0:23 ` Thomas Monjalon
2018-01-31 18:20 ` Mokhtar, Amr
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2018-01-19 0:23 UTC (permalink / raw)
To: Amr Mokhtar
Cc: dev, ferruh.yigit, anatoly.burakov, pablo.de.lara.guarch,
niall.power, chris.macnamara
11/01/2018 20:23, Amr Mokhtar:
> + export DIR_WIRELESS_SDK=<path-to-workspace>/FlexRAN-1.3.0/SDK-R1.3.0/sdk/
This variable looks to be unused.
Adding this in test-build.sh (while applying):
test -z "$FLEXRAN_SDK" || \
sed -ri 's,(BBDEV_TURBO_SW=)n,\1y,' $1/.config
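The sed expression above flips the turbo_sw build option from `n` to `y` in the generated `.config`. A stand-alone illustration of the rewrite (the sample option name below is an assumption for demonstration; only the `BBDEV_TURBO_SW=` suffix matched by the sed matters):

```shell
# Illustrative only: apply the same sed rewrite to a sample .config line.
# The full option name is assumed; the pattern matches on "BBDEV_TURBO_SW=".
printf 'CONFIG_RTE_LIBRTE_BBDEV_TURBO_SW=n\n' > /tmp/sample.config
sed -ri 's,(BBDEV_TURBO_SW=)n,\1y,' /tmp/sample.config
cat /tmp/sample.config
```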
* Re: [dpdk-dev] [PATCH v5 0/5] Introducing Wireless Base Band Device (bbdev) abstraction library
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 0/5] Introducing Wireless Base Band Device (bbdev) abstraction library Amr Mokhtar
@ 2018-01-19 0:45 ` Thomas Monjalon
2018-01-31 20:29 ` Mokhtar, Amr
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2018-01-19 0:45 UTC (permalink / raw)
To: Amr Mokhtar
Cc: dev, ferruh.yigit, anatoly.burakov, pablo.de.lara.guarch,
niall.power, chris.macnamara
11/01/2018 20:23, Amr Mokhtar:
> Hello,
> Latest update for Wireless Device abstraction library.
Applied with a few changes:
- fixed title formats
- moved bbdev between ethdev and cryptodev in lists
- added in test-build.sh
- fixed compilation as shared library (must be better fixed in RC2)
undefined reference to `bbdev_logtype', added to .map
Thanks for the good work, and welcome in DPDK!
* Re: [dpdk-dev] [PATCH v5 3/5] bbdev: software turbo driver
2018-01-19 0:23 ` Thomas Monjalon
@ 2018-01-31 18:20 ` Mokhtar, Amr
0 siblings, 0 replies; 11+ messages in thread
From: Mokhtar, Amr @ 2018-01-31 18:20 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Yigit, Ferruh, Burakov, Anatoly, De Lara Guarch, Pablo,
Power, Niall, Macnamara, Chris
Hi Thomas,
Thanks for adding that.
The env variable (DIR_WIRELESS_SDK) is needed by the FlexRAN SDK itself.
Best regards,
Amr
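For context, a hypothetical shell setup for building the turbo_sw PMD against the FlexRAN SDK could look as follows. Both install paths are illustrative assumptions, not values taken from this thread; only the variable names (DIR_WIRELESS_SDK, FLEXRAN_SDK) appear in the discussion above:

```shell
# Illustrative only: point the DPDK build at a local FlexRAN SDK install.
# Both paths are assumptions; substitute the real workspace location.
export DIR_WIRELESS_SDK=/opt/FlexRAN-1.3.0/SDK-R1.3.0/sdk/
export FLEXRAN_SDK=/opt/FlexRAN-1.3.0/SDK-R1.3.0/sdk/build-avx2-icc/install
```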
-----Original Message-----
From: Thomas Monjalon [mailto:thomas@monjalon.net]
Sent: Friday 19 January 2018 00:24
To: Mokhtar, Amr <amr.mokhtar@intel.com>
Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Burakov, Anatoly <anatoly.burakov@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Power, Niall <niall.power@intel.com>; Macnamara, Chris <chris.macnamara@intel.com>
Subject: Re: [dpdk-dev] [PATCH v5 3/5] bbdev: software turbo driver
11/01/2018 20:23, Amr Mokhtar:
> + export DIR_WIRELESS_SDK=<path-to-workspace>/FlexRAN-1.3.0/SDK-R1.3.0/sdk/
This variable looks to be unused.
Adding this in test-build.sh (while applying):
test -z "$FLEXRAN_SDK" || \
sed -ri 's,(BBDEV_TURBO_SW=)n,\1y,' $1/.config
* Re: [dpdk-dev] [PATCH v5 0/5] Introducing Wireless Base Band Device (bbdev) abstraction library
2018-01-19 0:45 ` Thomas Monjalon
@ 2018-01-31 20:29 ` Mokhtar, Amr
2018-01-31 20:41 ` Thomas Monjalon
0 siblings, 1 reply; 11+ messages in thread
From: Mokhtar, Amr @ 2018-01-31 20:29 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Yigit, Ferruh, Burakov, Anatoly, De Lara Guarch, Pablo,
Power, Niall, Macnamara, Chris
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday 19 January 2018 00:45
> To: Mokhtar, Amr <amr.mokhtar@intel.com>
> Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Burakov, Anatoly
> <anatoly.burakov@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Power, Niall <niall.power@intel.com>;
> Macnamara, Chris <chris.macnamara@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 0/5] Introducing Wireless Base Band Device
> (bbdev) abstraction library
>
> 11/01/2018 20:23, Amr Mokhtar:
> > Hello,
> > Latest update for Wireless Device abstraction library.
>
> Applied with a few changes:
> - fixed title formats
> - moved bbdev between ethdev and cryptodev in lists
> - added in test-build.sh
> - fixed compilation as shared library (must be better fixed in RC2)
> undefined reference to `bbdev_logtype', added to .map
Can you explain in more detail what shared-library compilation problem
was experienced? Was it the missing 'bbdev_logtype'?
>
> Thanks for the good work, and welcome in DPDK!
* Re: [dpdk-dev] [PATCH v5 0/5] Introducing Wireless Base Band Device (bbdev) abstraction library
2018-01-31 20:29 ` Mokhtar, Amr
@ 2018-01-31 20:41 ` Thomas Monjalon
0 siblings, 0 replies; 11+ messages in thread
From: Thomas Monjalon @ 2018-01-31 20:41 UTC (permalink / raw)
To: Mokhtar, Amr
Cc: dev, Yigit, Ferruh, Burakov, Anatoly, De Lara Guarch, Pablo,
Power, Niall, Macnamara, Chris
31/01/2018 21:29, Mokhtar, Amr:
> From: Thomas Monjalon
> > 11/01/2018 20:23, Amr Mokhtar:
> > > Hello,
> > > Latest update for Wireless Device abstraction library.
> >
> > Applied with a few changes:
> > - fixed title formats
> > - moved bbdev between ethdev and cryptodev in lists
> > - added in test-build.sh
> > - fixed compilation as shared library (must be better fixed in RC2)
> > undefined reference to `bbdev_logtype', added to .map
>
> Can you explain in more detail what shared-library compilation problem
> was experienced? Was it the missing 'bbdev_logtype'?
Yes
Please avoid exporting this variable.
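For reference, the interim fix described earlier in the thread (adding the undefined 'bbdev_logtype' symbol to the .map file) would take the shape of a linker version-script entry like the sketch below. The section name and layout are assumptions based on typical DPDK version.map conventions; the actual contents of rte_bbdev_version.map may differ, and, as noted above, the longer-term fix is to stop exporting the variable altogether:

```
EXPERIMENTAL {
	global:

	bbdev_logtype;	/* interim export to fix the shared-library link */

	local: *;
};
```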
Thread overview: 11+ messages
2018-01-11 19:23 [dpdk-dev] [PATCH v5 1/5] bbdev: introducing wireless base band device abstraction lib Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 2/5] bbdev: null device driver Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 3/5] bbdev: software turbo driver Amr Mokhtar
2018-01-19 0:23 ` Thomas Monjalon
2018-01-31 18:20 ` Mokhtar, Amr
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 4/5] bbdev: test applications Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 5/5] bbdev: sample app Amr Mokhtar
2018-01-11 19:23 ` [dpdk-dev] [PATCH v5 0/5] Introducing Wireless Base Band Device (bbdev) abstraction library Amr Mokhtar
2018-01-19 0:45 ` Thomas Monjalon
2018-01-31 20:29 ` Mokhtar, Amr
2018-01-31 20:41 ` Thomas Monjalon