DPDK patches and discussions
* [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device
@ 2022-04-04 21:13 Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 1/9] baseband/acc101: introduce PMD for ACC101 Nicolas Chautru
                   ` (9 more replies)
  0 siblings, 10 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

This series introduces the PMD for the new bbdev device ACC101
(aka Mount Cirrus). This is a derivative of the previous Mount Bryce
(ACC100) which includes silicon improvements, bug fixes, 5GNR capacity
improvements and feature enhancements.


Nicolas Chautru (9):
  baseband/acc101: introduce PMD for ACC101
  baseband/acc101: add HW register definition
  baseband/acc101: add info get function
  baseband/acc101: add queue configuration
  baseband/acc101: add LDPC processing
  baseband/acc101: support HARQ loopback
  baseband/acc101: support 4G processing
  baseband/acc101: support MSI interrupt
  baseband/acc101: add device configure function

 MAINTAINERS                              |    3 +
 app/test-bbdev/meson.build               |    3 +
 app/test-bbdev/test_bbdev_perf.c         |   69 +
 doc/guides/bbdevs/acc101.rst             |  237 ++
 doc/guides/bbdevs/features/acc101.ini    |   13 +
 doc/guides/bbdevs/index.rst              |    1 +
 doc/guides/rel_notes/release_22_07.rst   |    4 +
 drivers/baseband/acc101/acc101_pf_enum.h | 1128 ++++++++
 drivers/baseband/acc101/acc101_vf_enum.h |   79 +
 drivers/baseband/acc101/meson.build      |    8 +
 drivers/baseband/acc101/rte_acc101_cfg.h |  113 +
 drivers/baseband/acc101/rte_acc101_pmd.c | 4457 ++++++++++++++++++++++++++++++
 drivers/baseband/acc101/rte_acc101_pmd.h |  631 +++++
 drivers/baseband/acc101/version.map      |   10 +
 drivers/baseband/meson.build             |    1 +
 15 files changed, 6757 insertions(+)
 create mode 100644 doc/guides/bbdevs/acc101.rst
 create mode 100644 doc/guides/bbdevs/features/acc101.ini
 create mode 100644 drivers/baseband/acc101/acc101_pf_enum.h
 create mode 100644 drivers/baseband/acc101/acc101_vf_enum.h
 create mode 100644 drivers/baseband/acc101/meson.build
 create mode 100644 drivers/baseband/acc101/rte_acc101_cfg.h
 create mode 100644 drivers/baseband/acc101/rte_acc101_pmd.c
 create mode 100644 drivers/baseband/acc101/rte_acc101_pmd.h
 create mode 100644 drivers/baseband/acc101/version.map

-- 
1.8.3.1



* [PATCH v1 1/9] baseband/acc101: introduce PMD for ACC101
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
@ 2022-04-04 21:13 ` Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 2/9] baseband/acc101: add HW register definition Nicolas Chautru
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

Skeleton code and documentation for the ACC101
bbdev PMD.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 MAINTAINERS                              |   3 +
 doc/guides/bbdevs/acc101.rst             | 193 +++++++++++++++++++++++++++++++
 doc/guides/bbdevs/features/acc101.ini    |  13 +++
 doc/guides/bbdevs/index.rst              |   1 +
 drivers/baseband/acc101/meson.build      |   6 +
 drivers/baseband/acc101/rte_acc101_pmd.c | 178 ++++++++++++++++++++++++++++
 drivers/baseband/acc101/rte_acc101_pmd.h |  53 +++++++++
 drivers/baseband/acc101/version.map      |   3 +
 drivers/baseband/meson.build             |   1 +
 9 files changed, 451 insertions(+)
 create mode 100644 doc/guides/bbdevs/acc101.rst
 create mode 100644 doc/guides/bbdevs/features/acc101.ini
 create mode 100644 drivers/baseband/acc101/meson.build
 create mode 100644 drivers/baseband/acc101/rte_acc101_pmd.c
 create mode 100644 drivers/baseband/acc101/rte_acc101_pmd.h
 create mode 100644 drivers/baseband/acc101/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 15008c0..c4a8047 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1313,6 +1313,9 @@ F: doc/guides/bbdevs/features/fpga_5gnr_fec.ini
 F: drivers/baseband/acc100/
 F: doc/guides/bbdevs/acc100.rst
 F: doc/guides/bbdevs/features/acc100.ini
+F: drivers/baseband/acc101/
+F: doc/guides/bbdevs/acc101.rst
+F: doc/guides/bbdevs/features/acc101.ini
 
 Null baseband
 M: Nicolas Chautru <nicolas.chautru@intel.com>
diff --git a/doc/guides/bbdevs/acc101.rst b/doc/guides/bbdevs/acc101.rst
new file mode 100644
index 0000000..fb5f9f1
--- /dev/null
+++ b/doc/guides/bbdevs/acc101.rst
@@ -0,0 +1,193 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Intel Corporation
+
+Intel(R) ACC101 5G/4G FEC Poll Mode Driver
+==========================================
+
+The BBDEV ACC101 5G/4G FEC poll mode driver (PMD) supports an
+implementation of a vRAN FEC wireless acceleration function.
+This device is also known as Mount Cirrus.
+This is a follow-up to Mount Bryce (ACC100) and includes bug fixes, an
+improved feature set for error scenarios, and higher performance.
+
+Features
+--------
+
+ACC101 5G/4G FEC PMD supports the following features:
+
+- 16 VFs per PF (physical device)
+- Maximum of 128 queues per VF
+- PCIe Gen-3 x16 Interface
+- MSI
+- SR-IOV
+
+Installation
+------------
+
+Section 3 of the DPDK manual provides instructions on installing and compiling DPDK.
+
+DPDK requires hugepages to be configured as detailed in section 2 of the DPDK manual.
+The bbdev test application has been tested with a configuration of 40 x 1GB hugepages. The
+hugepage configuration of a server may be examined using:
+
+.. code-block:: console
+
+   grep Huge* /proc/meminfo
+
+
+Initialization
+--------------
+
+When the device first powers up, its PCI Physical Functions (PFs) can be listed with this command:
+
+.. code-block:: console
+
+  sudo lspci -vd8086:57c4
+
+The physical and virtual functions are compatible with Linux UIO drivers:
+``vfio`` and ``igb_uio``. However, in order to work, the ACC101 5G/4G
+FEC device first needs to be bound to one of these Linux drivers through DPDK.
+
+
+Bind PF UIO driver(s)
+~~~~~~~~~~~~~~~~~~~~~
+
+Install the DPDK igb_uio driver, bind it with the PF PCI device ID and use
+``lspci`` to confirm the PF device is in use by the ``igb_uio`` DPDK UIO driver.
+
+The igb_uio driver may be bound to the PF PCI device using one of two methods:
+
+
+1. PCI functions (physical or virtual, depending on the use case) can be bound to
+the UIO driver by repeating this command for every function.
+
+.. code-block:: console
+
+  cd <dpdk-top-level-directory>
+  insmod ./build/kmod/igb_uio.ko
+  echo "8086 57c4" > /sys/bus/pci/drivers/igb_uio/new_id
+  lspci -vd8086:57c4
+
+
+2. Another way to bind the PF with the DPDK UIO driver is by using the ``dpdk-devbind.py`` tool:
+
+.. code-block:: console
+
+  cd <dpdk-top-level-directory>
+  ./usertools/dpdk-devbind.py -b igb_uio 0000:06:00.0
+
+where the PCI device ID (example: 0000:06:00.0) is obtained using ``lspci -vd8086:57c4``.
+
+
+In a similar way, the ACC101 5G/4G FEC PF may be bound with vfio-pci, as with any PCIe device.
+
+
+Enable Virtual Functions
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Now, it should be visible in the printouts that the PCI PF is under igb_uio control
+"``Kernel driver in use: igb_uio``".
+
+To show the number of available VFs on the device, read the ``sriov_totalvfs`` file:
+
+.. code-block:: console
+
+  cat /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_totalvfs
+
+where 0000\:<b>\:<d>.<f> is the PCI device ID.
+
+
+To enable VFs via igb_uio, echo the number of virtual functions to enable
+to the ``max_vfs`` file:
+
+.. code-block:: console
+
+  echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/max_vfs
+
+
+Afterwards, all VFs must be bound to the appropriate UIO drivers as required,
+in the same way as was done for the physical function previously.
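+
+For example, a VF may be bound to ``vfio-pci`` with the same
+``dpdk-devbind.py`` tool used for the PF; the VF PCI address shown is
+illustrative only:
+
+.. code-block:: console
+
+  ./usertools/dpdk-devbind.py -b vfio-pci 0000:06:00.1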
+
+Enabling SR-IOV via the vfio driver is done in much the same way, except that
+the file name is different:
+
+.. code-block:: console
+
+  echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_numvfs
+
+
+Configure the VFs through PF
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The PCI virtual functions must be configured before use or before being assigned
+to VMs/containers. The configuration involves allocating the number of hardware
+queues, priorities, load balance, bandwidth and other settings necessary for the
+device to perform FEC functions.
+
+This configuration needs to be executed at least once after reboot or PCI FLR and can
+be achieved by using the function ``acc101_configure()``, which sets up the
+parameters defined in the ``acc101_conf`` structure.
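+
+Shown below is a minimal sketch of such a call. The exact prototype and field
+names are assumptions modelled on the equivalent ACC100 API
+(``rte_acc100_configure()`` and ``struct rte_acc100_conf``); refer to
+``rte_acc101_cfg.h`` in this series for the authoritative definitions.
+
+.. code-block:: c
+
+   #include <rte_acc101_cfg.h>
+
+   int
+   configure_acc101_pf(const char *dev_name)
+   {
+       /* Hypothetical configuration; all field names are illustrative. */
+       struct acc101_conf conf = {0};
+
+       conf.pf_mode_en = 1;     /* assumed: process queues from the PF */
+       conf.num_vf_bundles = 1; /* assumed: number of VF bundles to enable */
+
+       /* Run once after reboot or PCI FLR, before using the device. */
+       return acc101_configure(dev_name, &conf);
+   }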
+
+Test Application
+----------------
+
+BBDEV provides a test application, ``test-bbdev.py``, and a range of test data for testing
+the functionality of ACC101 5G/4G FEC encode and decode, depending on the device's
+capabilities. The test application is located under the app/test-bbdev folder and has the
+following options:
+
+.. code-block:: console
+
+  "-p", "--testapp-path": specifies path to the bbdev test app.
+  "-e", "--eal-params"	: EAL arguments which are passed to the test app.
+  "-t", "--timeout"	: Timeout in seconds (default=300).
+  "-c", "--test-cases"	: Defines test cases to run. Run all if not specified.
+  "-v", "--test-vector"	: Test vector path (default=dpdk_path+/app/test-bbdev/test_vectors/bbdev_null.data).
+  "-n", "--num-ops"	: Number of operations to process on device (default=32).
+  "-b", "--burst-size"	: Operations enqueue/dequeue burst size (default=32).
+  "-s", "--snr"		: SNR in dB used when generating LLRs for bler tests.
+  "-s", "--iter_max"	: Number of iterations for LDPC decoder.
+  "-l", "--num-lcores"	: Number of lcores to run (default=16).
+  "-i", "--init-device" : Initialise PF device with default values.
+
+
+To execute the test application tool using simple decode or encode data,
+type one of the following:
+
+.. code-block:: console
+
+  ./test-bbdev.py -c validation -n 64 -b 1 -v ./ldpc_dec_default.data
+  ./test-bbdev.py -c validation -n 64 -b 1 -v ./ldpc_enc_default.data
+
+
+The test application ``test-bbdev.py`` supports the ability to configure the PF device with
+a default set of values, if the "-i" or "--init-device" option is included. The default values
+are defined in test_bbdev_perf.c.
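+
+For example (the EAL arguments and PF PCI address below are placeholders):
+
+.. code-block:: console
+
+  ./test-bbdev.py -e="-a${PF_PCI_ADDR}" -i -c validation -v ./ldpc_enc_default.data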
+
+
+Test Vectors
+~~~~~~~~~~~~
+
+In addition to the simple LDPC decoder and LDPC encoder tests, bbdev also provides
+a range of additional tests under the test_vectors folder, which may be useful. The results
+of these tests will depend on the ACC101 5G/4G FEC capabilities, which may cause some
+test cases to be skipped, but no failure should be reported.
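+
+For example, any vector under that folder can be passed directly, here using the
+null vector which is expected to be supported on any device:
+
+.. code-block:: console
+
+  ./test-bbdev.py -c validation -v ./test_vectors/bbdev_null.data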
+
+
+Alternate Baseband Device configuration tool
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On top of the embedded configuration feature supported in test-bbdev using the "--init-device"
+option mentioned above, there is also a tool available to perform that device configuration
+using a companion application.
+Notably, the ``pf_bb_config`` application enables running bbdev-test from the VF,
+rather than only from the PF as described above.
+
+For more details see: https://github.com/intel/pf-bb-config
+
+Specifically for the BBDEV ACC101 PMD, the command below can be used:
+
+.. code-block:: console
+
+  ./pf_bb_config ACC101 -c acc101/acc101_config_4vf_4g5g.cfg
+  ./test-bbdev.py -e="-c 0xff0 -a${VF_PCI_ADDR}" -c validation -l 1 -v ./ldpc_dec_default.data
diff --git a/doc/guides/bbdevs/features/acc101.ini b/doc/guides/bbdevs/features/acc101.ini
new file mode 100644
index 0000000..1a62d13
--- /dev/null
+++ b/doc/guides/bbdevs/features/acc101.ini
@@ -0,0 +1,13 @@
+;
+; Supported features of the 'acc101' bbdev driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Turbo Decoder (4G)     = N
+Turbo Encoder (4G)     = N
+LDPC Decoder (5G)      = N
+LDPC Encoder (5G)      = N
+LLR/HARQ Compression   = N
+External DDR Access    = N
+HW Accelerated         = Y
diff --git a/doc/guides/bbdevs/index.rst b/doc/guides/bbdevs/index.rst
index cedd706..e76883c 100644
--- a/doc/guides/bbdevs/index.rst
+++ b/doc/guides/bbdevs/index.rst
@@ -14,4 +14,5 @@ Baseband Device Drivers
     fpga_lte_fec
     fpga_5gnr_fec
     acc100
+    acc101
     la12xx
diff --git a/drivers/baseband/acc101/meson.build b/drivers/baseband/acc101/meson.build
new file mode 100644
index 0000000..e94eb2b
--- /dev/null
+++ b/drivers/baseband/acc101/meson.build
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+deps += ['bbdev', 'bus_vdev', 'ring', 'pci', 'bus_pci']
+
+sources = files('rte_acc101_pmd.c')
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
new file mode 100644
index 0000000..dff3834
--- /dev/null
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_byteorder.h>
+#include <rte_errno.h>
+#include <rte_branch_prediction.h>
+#include <rte_hexdump.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#ifdef RTE_BBDEV_OFFLOAD_COST
+#include <rte_cycles.h>
+#endif
+
+#include <rte_bbdev.h>
+#include <rte_bbdev_pmd.h>
+#include "rte_acc101_pmd.h"
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+RTE_LOG_REGISTER_DEFAULT(acc101_logtype, DEBUG);
+#else
+RTE_LOG_REGISTER_DEFAULT(acc101_logtype, NOTICE);
+#endif
+
+/* Free memory used for software rings */
+static int
+acc101_dev_close(struct rte_bbdev *dev)
+{
+	RTE_SET_USED(dev);
+	return 0;
+}
+
+static const struct rte_bbdev_ops acc101_bbdev_ops = {
+	.close = acc101_dev_close,
+};
+
+/* ACC101 PCI PF address map */
+static struct rte_pci_id pci_id_acc101_pf_map[] = {
+	{
+		RTE_PCI_DEVICE(RTE_ACC101_VENDOR_ID, RTE_ACC101_PF_DEVICE_ID)
+	},
+	{.device_id = 0},
+};
+
+/* ACC101 PCI VF address map */
+static struct rte_pci_id pci_id_acc101_vf_map[] = {
+	{
+		RTE_PCI_DEVICE(RTE_ACC101_VENDOR_ID, RTE_ACC101_VF_DEVICE_ID)
+	},
+	{.device_id = 0},
+};
+
+/* Initialization Function */
+static void
+acc101_bbdev_init(struct rte_bbdev *dev, struct rte_pci_driver *drv)
+{
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	dev->dev_ops = &acc101_bbdev_ops;
+
+	((struct acc101_device *) dev->data->dev_private)->pf_device =
+			!strcmp(drv->driver.name,
+					RTE_STR(ACC101PF_DRIVER_NAME));
+	((struct acc101_device *) dev->data->dev_private)->mmio_base =
+			pci_dev->mem_resource[0].addr;
+
+	rte_bbdev_log_debug("Init device %s [%s] @ vaddr %p paddr %#"PRIx64"",
+			drv->driver.name, dev->data->name,
+			(void *)pci_dev->mem_resource[0].addr,
+			pci_dev->mem_resource[0].phys_addr);
+}
+
+static int acc101_pci_probe(struct rte_pci_driver *pci_drv,
+	struct rte_pci_device *pci_dev)
+{
+	struct rte_bbdev *bbdev = NULL;
+	char dev_name[RTE_BBDEV_NAME_MAX_LEN];
+
+	if (pci_dev == NULL) {
+		rte_bbdev_log(ERR, "NULL PCI device");
+		return -EINVAL;
+	}
+
+	rte_pci_device_name(&pci_dev->addr, dev_name, sizeof(dev_name));
+
+	/* Allocate memory to be used privately by drivers */
+	bbdev = rte_bbdev_allocate(pci_dev->device.name);
+	if (bbdev == NULL)
+		return -ENODEV;
+
+	/* allocate device private memory */
+	bbdev->data->dev_private = rte_zmalloc_socket(dev_name,
+			sizeof(struct acc101_device), RTE_CACHE_LINE_SIZE,
+			pci_dev->device.numa_node);
+
+	if (bbdev->data->dev_private == NULL) {
+		rte_bbdev_log(CRIT,
+				"Allocation of %zu bytes for device \"%s\" failed",
+				sizeof(struct acc101_device), dev_name);
+		rte_bbdev_release(bbdev);
+		return -ENOMEM;
+	}
+
+	/* Fill HW specific part of device structure */
+	bbdev->device = &pci_dev->device;
+	bbdev->intr_handle = pci_dev->intr_handle;
+	bbdev->data->socket_id = pci_dev->device.numa_node;
+
+	/* Invoke ACC101 device initialization function */
+	acc101_bbdev_init(bbdev, pci_drv);
+
+	rte_bbdev_log_debug("Initialised bbdev %s (id = %u)",
+			dev_name, bbdev->data->dev_id);
+	return 0;
+}
+
+static int acc101_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct rte_bbdev *bbdev;
+	int ret;
+	uint8_t dev_id;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	/* Find device */
+	bbdev = rte_bbdev_get_named_dev(pci_dev->device.name);
+	if (bbdev == NULL) {
+		rte_bbdev_log(CRIT,
+				"Couldn't find HW dev \"%s\" to uninitialise it",
+				pci_dev->device.name);
+		return -ENODEV;
+	}
+	dev_id = bbdev->data->dev_id;
+
+	/* free device private memory before close */
+	rte_free(bbdev->data->dev_private);
+
+	/* Close device */
+	ret = rte_bbdev_close(dev_id);
+	if (ret < 0)
+		rte_bbdev_log(ERR,
+				"Device %i failed to close during uninit: %i",
+				dev_id, ret);
+
+	/* release bbdev from library */
+	rte_bbdev_release(bbdev);
+
+	rte_bbdev_log_debug("Destroyed bbdev = %u", dev_id);
+
+	return 0;
+}
+
+static struct rte_pci_driver acc101_pci_pf_driver = {
+		.probe = acc101_pci_probe,
+		.remove = acc101_pci_remove,
+		.id_table = pci_id_acc101_pf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING
+};
+
+static struct rte_pci_driver acc101_pci_vf_driver = {
+		.probe = acc101_pci_probe,
+		.remove = acc101_pci_remove,
+		.id_table = pci_id_acc101_vf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING
+};
+
+RTE_PMD_REGISTER_PCI(ACC101PF_DRIVER_NAME, acc101_pci_pf_driver);
+RTE_PMD_REGISTER_PCI_TABLE(ACC101PF_DRIVER_NAME, pci_id_acc101_pf_map);
+RTE_PMD_REGISTER_PCI(ACC101VF_DRIVER_NAME, acc101_pci_vf_driver);
+RTE_PMD_REGISTER_PCI_TABLE(ACC101VF_DRIVER_NAME, pci_id_acc101_vf_map);
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.h b/drivers/baseband/acc101/rte_acc101_pmd.h
new file mode 100644
index 0000000..499c341
--- /dev/null
+++ b/drivers/baseband/acc101/rte_acc101_pmd.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _RTE_ACC101_PMD_H_
+#define _RTE_ACC101_PMD_H_
+
+/* Helper macro for logging */
+#define rte_bbdev_log(level, fmt, ...) \
+	rte_log(RTE_LOG_ ## level, acc101_logtype, fmt "\n", \
+		##__VA_ARGS__)
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+#define rte_bbdev_log_debug(fmt, ...) \
+		rte_bbdev_log(DEBUG, "acc101_pmd: " fmt, \
+		##__VA_ARGS__)
+#else
+#define rte_bbdev_log_debug(fmt, ...)
+#endif
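+
+/*
+ * Example usage of the helpers above (illustrative only):
+ *   rte_bbdev_log(ERR, "Invalid device \"%s\"", dev_name);
+ *   rte_bbdev_log_debug("Init device %s", dev_name);
+ */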
+
+/* ACC101 PF and VF driver names */
+#define ACC101PF_DRIVER_NAME           intel_acc101_pf
+#define ACC101VF_DRIVER_NAME           intel_acc101_vf
+
+/* ACC101 PCI vendor & device IDs */
+#define RTE_ACC101_VENDOR_ID           (0x8086)
+#define RTE_ACC101_PF_DEVICE_ID        (0x57c4)
+#define RTE_ACC101_VF_DEVICE_ID        (0x57c5)
+
+/* Private data structure for each ACC101 device */
+struct acc101_device {
+	void *mmio_base;  /**< Base address of MMIO registers (BAR0) */
+	void *sw_rings_base;  /* Base addr of un-aligned memory for sw rings */
+	void *sw_rings;  /* 64MBs of 64MB aligned memory for sw rings */
+	rte_iova_t sw_rings_iova;  /* IOVA address of sw_rings */
+
+	union acc101_harq_layout_data *harq_layout;
+	/* Number of bytes available for each queue in device, depending on
+	 * how many queues are enabled with configure()
+	 */
+	uint32_t sw_ring_size;
+	uint32_t ddr_size; /* Size in kB */
+	uint32_t *tail_ptrs; /* Base address of response tail pointer buffer */
+	rte_iova_t tail_ptr_iova; /* IOVA address of tail pointers */
+	/* Max number of entries available for each queue in device, depending
+	 * on how many queues are enabled with configure()
+	 */
+	uint32_t sw_ring_max_depth;
+	bool pf_device; /**< True if this is a PF ACC101 device */
+	bool configured; /**< True if this ACC101 device is configured */
+};
+
+#endif /* _RTE_ACC101_PMD_H_ */
diff --git a/drivers/baseband/acc101/version.map b/drivers/baseband/acc101/version.map
new file mode 100644
index 0000000..c2e0723
--- /dev/null
+++ b/drivers/baseband/acc101/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+	local: *;
+};
diff --git a/drivers/baseband/meson.build b/drivers/baseband/meson.build
index 686e98b..67c7d22 100644
--- a/drivers/baseband/meson.build
+++ b/drivers/baseband/meson.build
@@ -7,6 +7,7 @@ endif
 
 drivers = [
         'acc100',
+        'acc101',
         'fpga_5gnr_fec',
         'fpga_lte_fec',
         'la12xx',
-- 
1.8.3.1



* [PATCH v1 2/9] baseband/acc101: add HW register definition
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 1/9] baseband/acc101: introduce PMD for ACC101 Nicolas Chautru
@ 2022-04-04 21:13 ` Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 3/9] baseband/acc101: add info get function Nicolas Chautru
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

Add the list of registers for the device and related HW
specification definitions.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 drivers/baseband/acc101/acc101_pf_enum.h | 1128 ++++++++++++++++++++++++++++++
 drivers/baseband/acc101/acc101_vf_enum.h |   79 +++
 drivers/baseband/acc101/rte_acc101_pmd.h |  453 ++++++++++++
 3 files changed, 1660 insertions(+)
 create mode 100644 drivers/baseband/acc101/acc101_pf_enum.h
 create mode 100644 drivers/baseband/acc101/acc101_vf_enum.h

diff --git a/drivers/baseband/acc101/acc101_pf_enum.h b/drivers/baseband/acc101/acc101_pf_enum.h
new file mode 100644
index 0000000..55958e2
--- /dev/null
+++ b/drivers/baseband/acc101/acc101_pf_enum.h
@@ -0,0 +1,1128 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef ACC101_PF_ENUM_H
+#define ACC101_PF_ENUM_H
+
+/*
+ * ACC101 register mapping on PF BAR0.
+ * This is automatically generated from RDL; the format may change with a new
+ * RDL release.
+ * Variable names are kept as-is.
+ */
+enum {
+	HWPfQmgrEgressQueuesTemplate          =  0x0007FE00,
+	HWPfQmgrIngressAq                     =  0x00080000,
+	HWPfAramStatus                        =  0x00810000,
+	HWPfQmgrArbQAvail                     =  0x00A00010,
+	HWPfQmgrArbQBlock                     =  0x00A00014,
+	HWPfQmgrAqueueDropNotifEn             =  0x00A00024,
+	HWPfQmgrAqueueDisableNotifEn          =  0x00A00028,
+	HWPfQmgrSoftReset                     =  0x00A00038,
+	HWPfQmgrInitStatus                    =  0x00A0003C,
+	HWPfQmgrAramWatchdogCount             =  0x00A00040,
+	HWPfQmgrAramWatchdogCounterEn         =  0x00A00044,
+	HWPfQmgrAxiWatchdogCount              =  0x00A00048,
+	HWPfQmgrAxiWatchdogCounterEn          =  0x00A0004C,
+	HWPfQmgrProcessWatchdogCount          =  0x00A00050,
+	HWPfQmgrProcessWatchdogCounterEn      =  0x00A00054,
+	HWPfQmgrProcessUl4GWatchdogCounter    =  0x00A00058,
+	HWPfQmgrProcessDl4GWatchdogCounter    =  0x00A0005C,
+	HWPfQmgrProcessUl5GWatchdogCounter    =  0x00A00060,
+	HWPfQmgrProcessDl5GWatchdogCounter    =  0x00A00064,
+	HWPfQmgrProcessMldWatchdogCounter     =  0x00A00068,
+	HWPfQmgrMsiOverflowUpperVf            =  0x00A00070,
+	HWPfQmgrMsiOverflowLowerVf            =  0x00A00074,
+	HWPfQmgrMsiWatchdogOverflow           =  0x00A00078,
+	HWPfQmgrMsiOverflowEnable             =  0x00A0007C,
+	HWPfQmgrDebugAqPointerMemGrp          =  0x00A00100,
+	HWPfQmgrDebugOutputArbQFifoGrp        =  0x00A00140,
+	HWPfQmgrDebugMsiFifoGrp               =  0x00A00180,
+	HWPfQmgrDebugAxiWdTimeoutMsiFifo      =  0x00A001C0,
+	HWPfQmgrDebugProcessWdTimeoutMsiFifo  =  0x00A001C4,
+	HWPfQmgrDepthLog2Grp                  =  0x00A00200,
+	HWPfQmgrTholdGrp                      =  0x00A00300,
+	HWPfQmgrGrpTmplateReg0Indx            =  0x00A00600,
+	HWPfQmgrGrpTmplateReg1Indx            =  0x00A00680,
+	HWPfQmgrGrpTmplateReg2indx            =  0x00A00700,
+	HWPfQmgrGrpTmplateReg3Indx            =  0x00A00780,
+	HWPfQmgrGrpTmplateReg4Indx            =  0x00A00800,
+	HWPfQmgrVfBaseAddr                    =  0x00A01000,
+	HWPfQmgrUl4GWeightRrVf                =  0x00A02000,
+	HWPfQmgrDl4GWeightRrVf                =  0x00A02100,
+	HWPfQmgrUl5GWeightRrVf                =  0x00A02200,
+	HWPfQmgrDl5GWeightRrVf                =  0x00A02300,
+	HWPfQmgrMldWeightRrVf                 =  0x00A02400,
+	HWPfQmgrArbQDepthGrp                  =  0x00A02F00,
+	HWPfQmgrGrpFunction0                  =  0x00A02F40,
+	HWPfQmgrGrpFunction1                  =  0x00A02F44,
+	HWPfQmgrGrpPriority                   =  0x00A02F48,
+	HWPfQmgrWeightSync                    =  0x00A03000,
+	HWPfQmgrAqEnableVf                    =  0x00A10000,
+	HWPfQmgrAqResetVf                     =  0x00A20000,
+	HWPfQmgrRingSizeVf                    =  0x00A20004,
+	HWPfQmgrGrpDepthLog20Vf               =  0x00A20008,
+	HWPfQmgrGrpDepthLog21Vf               =  0x00A2000C,
+	HWPfQmgrGrpFunction0Vf                =  0x00A20010,
+	HWPfQmgrGrpFunction1Vf                =  0x00A20014,
+	HWPfDmaConfig0Reg                     =  0x00B80000,
+	HWPfDmaConfig1Reg                     =  0x00B80004,
+	HWPfDmaQmgrAddrReg                    =  0x00B80008,
+	HWPfDmaSoftResetReg                   =  0x00B8000C,
+	HWPfDmaAxcacheReg                     =  0x00B80010,
+	HWPfDmaVersionReg                     =  0x00B80014,
+	HWPfDmaFrameThreshold                 =  0x00B80018,
+	HWPfDmaTimestampLo                    =  0x00B8001C,
+	HWPfDmaTimestampHi                    =  0x00B80020,
+	HWPfDmaAxiStatus                      =  0x00B80028,
+	HWPfDmaAxiControl                     =  0x00B8002C,
+	HWPfDmaNoQmgr                         =  0x00B80030,
+	HWPfDmaQosScale                       =  0x00B80034,
+	HWPfDmaQmanen                         =  0x00B80040,
+	HWPfDmaQmgrQosBase                    =  0x00B80060,
+	HWPfDmaFecClkGatingEnable             =  0x00B80080,
+	HWPfDmaPmEnable                       =  0x00B80084,
+	HWPfDmaQosEnable                      =  0x00B80088,
+	HWPfDmaHarqWeightedRrFrameThreshold   =  0x00B800B0,
+	HWPfDmaDataSmallWeightedRrFrameThresh  = 0x00B800B4,
+	HWPfDmaDataLargeWeightedRrFrameThresh  = 0x00B800B8,
+	HWPfDmaInboundCbMaxSize               =  0x00B800BC,
+	HWPfDmaInboundDrainDataSize           =  0x00B800C0,
+	HWPfDmaVfDdrBaseRw                    =  0x00B80400,
+	HWPfDmaCmplTmOutCnt                   =  0x00B80800,
+	HWPfDmaProcTmOutCnt                   =  0x00B80804,
+	HWPfDmaStatusRrespBresp               =  0x00B80810,
+	HWPfDmaCfgRrespBresp                  =  0x00B80814,
+	HWPfDmaStatusMemParErr                =  0x00B80818,
+	HWPfDmaCfgMemParErrEn                 =  0x00B8081C,
+	HWPfDmaStatusDmaHwErr                 =  0x00B80820,
+	HWPfDmaCfgDmaHwErrEn                  =  0x00B80824,
+	HWPfDmaStatusFecCoreErr               =  0x00B80828,
+	HWPfDmaCfgFecCoreErrEn                =  0x00B8082C,
+	HWPfDmaStatusFcwDescrErr              =  0x00B80830,
+	HWPfDmaCfgFcwDescrErrEn               =  0x00B80834,
+	HWPfDmaStatusBlockTransmit            =  0x00B80838,
+	HWPfDmaBlockOnErrEn                   =  0x00B8083C,
+	HWPfDmaStatusFlushDma                 =  0x00B80840,
+	HWPfDmaFlushDmaOnErrEn                =  0x00B80844,
+	HWPfDmaStatusSdoneFifoFull            =  0x00B80848,
+	HWPfDmaStatusDescriptorErrLoVf        =  0x00B8084C,
+	HWPfDmaStatusDescriptorErrHiVf        =  0x00B80850,
+	HWPfDmaStatusFcwErrLoVf               =  0x00B80854,
+	HWPfDmaStatusFcwErrHiVf               =  0x00B80858,
+	HWPfDmaStatusDataErrLoVf              =  0x00B8085C,
+	HWPfDmaStatusDataErrHiVf              =  0x00B80860,
+	HWPfDmaCfgMsiEnSoftwareErr            =  0x00B80864,
+	HWPfDmaDescriptorSignatuture          =  0x00B80868,
+	HWPfDmaFcwSignature                   =  0x00B8086C,
+	HWPfDmaErrorDetectionEn               =  0x00B80870,
+	HWPfDmaErrCntrlFifoDebug              =  0x00B8087C,
+	HWPfDmaStatusToutData                 =  0x00B80880,
+	HWPfDmaStatusToutDesc                 =  0x00B80884,
+	HWPfDmaStatusToutUnexpData            =  0x00B80888,
+	HWPfDmaStatusToutUnexpDesc            =  0x00B8088C,
+	HWPfDmaStatusToutProcess              =  0x00B80890,
+	HWPfDmaConfigCtoutOutDataEn           =  0x00B808A0,
+	HWPfDmaConfigCtoutOutDescrEn          =  0x00B808A4,
+	HWPfDmaConfigUnexpComplDataEn         =  0x00B808A8,
+	HWPfDmaConfigUnexpComplDescrEn        =  0x00B808AC,
+	HWPfDmaConfigPtoutOutEn               =  0x00B808B0,
+	HWPfDmaClusterHangCtrl                =  0x00B80E00,
+	HWPfDmaClusterHangThld                =  0x00B80E04,
+	HWPfDmaStopAxiThreshold               =  0x00B80F3C,
+	HWPfDmaFec5GulDescBaseLoRegVf         =  0x00B88020,
+	HWPfDmaFec5GulDescBaseHiRegVf         =  0x00B88024,
+	HWPfDmaFec5GulRespPtrLoRegVf          =  0x00B88028,
+	HWPfDmaFec5GulRespPtrHiRegVf          =  0x00B8802C,
+	HWPfDmaFec5GdlDescBaseLoRegVf         =  0x00B88040,
+	HWPfDmaFec5GdlDescBaseHiRegVf         =  0x00B88044,
+	HWPfDmaFec5GdlRespPtrLoRegVf          =  0x00B88048,
+	HWPfDmaFec5GdlRespPtrHiRegVf          =  0x00B8804C,
+	HWPfDmaFec4GulDescBaseLoRegVf         =  0x00B88060,
+	HWPfDmaFec4GulDescBaseHiRegVf         =  0x00B88064,
+	HWPfDmaFec4GulRespPtrLoRegVf          =  0x00B88068,
+	HWPfDmaFec4GulRespPtrHiRegVf          =  0x00B8806C,
+	HWPfDmaFec4GdlDescBaseLoRegVf         =  0x00B88080,
+	HWPfDmaFec4GdlDescBaseHiRegVf         =  0x00B88084,
+	HWPfDmaFec4GdlRespPtrLoRegVf          =  0x00B88088,
+	HWPfDmaFec4GdlRespPtrHiRegVf          =  0x00B8808C,
+	HWPfDmaVfDdrBaseRangeRo               =  0x00B880A0,
+	HWPfQosmonACntrlReg                   =  0x00B90000,
+	HWPfQosmonAEvalOverflow0              =  0x00B90008,
+	HWPfQosmonAEvalOverflow1              =  0x00B9000C,
+	HWPfQosmonADivTerm                    =  0x00B90010,
+	HWPfQosmonATickTerm                   =  0x00B90014,
+	HWPfQosmonAEvalTerm                   =  0x00B90018,
+	HWPfQosmonAAveTerm                    =  0x00B9001C,
+	HWPfQosmonAForceEccErr                =  0x00B90020,
+	HWPfQosmonAEccErrDetect               =  0x00B90024,
+	HWPfQosmonAIterationConfig0Low        =  0x00B90060,
+	HWPfQosmonAIterationConfig0High       =  0x00B90064,
+	HWPfQosmonAIterationConfig1Low        =  0x00B90068,
+	HWPfQosmonAIterationConfig1High       =  0x00B9006C,
+	HWPfQosmonAIterationConfig2Low        =  0x00B90070,
+	HWPfQosmonAIterationConfig2High       =  0x00B90074,
+	HWPfQosmonAIterationConfig3Low        =  0x00B90078,
+	HWPfQosmonAIterationConfig3High       =  0x00B9007C,
+	HWPfQosmonAEvalMemAddr                =  0x00B90080,
+	HWPfQosmonAEvalMemData                =  0x00B90084,
+	HWPfQosmonAXaction                    =  0x00B900C0,
+	HWPfQosmonARemThres1Vf                =  0x00B90400,
+	HWPfQosmonAThres2Vf                   =  0x00B90404,
+	HWPfQosmonAWeiFracVf                  =  0x00B90408,
+	HWPfQosmonARrWeiVf                    =  0x00B9040C,
+	HWPfPermonACntrlRegVf                 =  0x00B98000,
+	HWPfPermonACountVf                    =  0x00B98008,
+	HWPfPermonAKCntLoVf                   =  0x00B98010,
+	HWPfPermonAKCntHiVf                   =  0x00B98014,
+	HWPfPermonADeltaCntLoVf               =  0x00B98020,
+	HWPfPermonADeltaCntHiVf               =  0x00B98024,
+	HWPfPermonAVersionReg                 =  0x00B9C000,
+	HWPfPermonACbControlFec               =  0x00B9C0F0,
+	HWPfPermonADltTimerLoFec              =  0x00B9C0F4,
+	HWPfPermonADltTimerHiFec              =  0x00B9C0F8,
+	HWPfPermonACbCountFec                 =  0x00B9C100,
+	HWPfPermonAAccExecTimerLoFec          =  0x00B9C104,
+	HWPfPermonAAccExecTimerHiFec          =  0x00B9C108,
+	HWPfPermonAExecTimerMinFec            =  0x00B9C200,
+	HWPfPermonAExecTimerMaxFec            =  0x00B9C204,
+	HWPfPermonAControlBusMon              =  0x00B9C400,
+	HWPfPermonAConfigBusMon               =  0x00B9C404,
+	HWPfPermonASkipCountBusMon            =  0x00B9C408,
+	HWPfPermonAMinLatBusMon               =  0x00B9C40C,
+	HWPfPermonAMaxLatBusMon               =  0x00B9C500,
+	HWPfPermonATotalLatLowBusMon          =  0x00B9C504,
+	HWPfPermonATotalLatUpperBusMon        =  0x00B9C508,
+	HWPfPermonATotalReqCntBusMon          =  0x00B9C50C,
+	HWPfQosmonBCntrlReg                   =  0x00BA0000,
+	HWPfQosmonBEvalOverflow0              =  0x00BA0008,
+	HWPfQosmonBEvalOverflow1              =  0x00BA000C,
+	HWPfQosmonBDivTerm                    =  0x00BA0010,
+	HWPfQosmonBTickTerm                   =  0x00BA0014,
+	HWPfQosmonBEvalTerm                   =  0x00BA0018,
+	HWPfQosmonBAveTerm                    =  0x00BA001C,
+	HWPfQosmonBForceEccErr                =  0x00BA0020,
+	HWPfQosmonBEccErrDetect               =  0x00BA0024,
+	HWPfQosmonBIterationConfig0Low        =  0x00BA0060,
+	HWPfQosmonBIterationConfig0High       =  0x00BA0064,
+	HWPfQosmonBIterationConfig1Low        =  0x00BA0068,
+	HWPfQosmonBIterationConfig1High       =  0x00BA006C,
+	HWPfQosmonBIterationConfig2Low        =  0x00BA0070,
+	HWPfQosmonBIterationConfig2High       =  0x00BA0074,
+	HWPfQosmonBIterationConfig3Low        =  0x00BA0078,
+	HWPfQosmonBIterationConfig3High       =  0x00BA007C,
+	HWPfQosmonBEvalMemAddr                =  0x00BA0080,
+	HWPfQosmonBEvalMemData                =  0x00BA0084,
+	HWPfQosmonBXaction                    =  0x00BA00C0,
+	HWPfQosmonBRemThres1Vf                =  0x00BA0400,
+	HWPfQosmonBThres2Vf                   =  0x00BA0404,
+	HWPfQosmonBWeiFracVf                  =  0x00BA0408,
+	HWPfQosmonBRrWeiVf                    =  0x00BA040C,
+	HWPfPermonBCntrlRegVf                 =  0x00BA8000,
+	HWPfPermonBCountVf                    =  0x00BA8008,
+	HWPfPermonBKCntLoVf                   =  0x00BA8010,
+	HWPfPermonBKCntHiVf                   =  0x00BA8014,
+	HWPfPermonBDeltaCntLoVf               =  0x00BA8020,
+	HWPfPermonBDeltaCntHiVf               =  0x00BA8024,
+	HWPfPermonBVersionReg                 =  0x00BAC000,
+	HWPfPermonBCbControlFec               =  0x00BAC0F0,
+	HWPfPermonBDltTimerLoFec              =  0x00BAC0F4,
+	HWPfPermonBDltTimerHiFec              =  0x00BAC0F8,
+	HWPfPermonBCbCountFec                 =  0x00BAC100,
+	HWPfPermonBAccExecTimerLoFec          =  0x00BAC104,
+	HWPfPermonBAccExecTimerHiFec          =  0x00BAC108,
+	HWPfPermonBExecTimerMinFec            =  0x00BAC200,
+	HWPfPermonBExecTimerMaxFec            =  0x00BAC204,
+	HWPfPermonBControlBusMon              =  0x00BAC400,
+	HWPfPermonBConfigBusMon               =  0x00BAC404,
+	HWPfPermonBSkipCountBusMon            =  0x00BAC408,
+	HWPfPermonBMinLatBusMon               =  0x00BAC40C,
+	HWPfPermonBMaxLatBusMon               =  0x00BAC500,
+	HWPfPermonBTotalLatLowBusMon          =  0x00BAC504,
+	HWPfPermonBTotalLatUpperBusMon        =  0x00BAC508,
+	HWPfPermonBTotalReqCntBusMon          =  0x00BAC50C,
+	HwPfFabI2MArbCntrlReg                 =  0x00BB0000,
+	HWPfFabricMode                        =  0x00BB1000,
+	HwPfFabI2MGrp0DebugReg                =  0x00BBF000,
+	HwPfFabI2MGrp1DebugReg                =  0x00BBF004,
+	HwPfFabI2MGrp2DebugReg                =  0x00BBF008,
+	HwPfFabI2MGrp3DebugReg                =  0x00BBF00C,
+	HwPfFabI2MBuf0DebugReg                =  0x00BBF010,
+	HwPfFabI2MBuf1DebugReg                =  0x00BBF014,
+	HwPfFabI2MBuf2DebugReg                =  0x00BBF018,
+	HwPfFabI2MBuf3DebugReg                =  0x00BBF01C,
+	HwPfFabM2IBuf0Grp0DebugReg            =  0x00BBF020,
+	HwPfFabM2IBuf1Grp0DebugReg            =  0x00BBF024,
+	HwPfFabM2IBuf0Grp1DebugReg            =  0x00BBF028,
+	HwPfFabM2IBuf1Grp1DebugReg            =  0x00BBF02C,
+	HwPfFabM2IBuf0Grp2DebugReg            =  0x00BBF030,
+	HwPfFabM2IBuf1Grp2DebugReg            =  0x00BBF034,
+	HwPfFabM2IBuf0Grp3DebugReg            =  0x00BBF038,
+	HwPfFabM2IBuf1Grp3DebugReg            =  0x00BBF03C,
+	HWPfFecUl5gCntrlReg                   =  0x00BC0000,
+	HWPfFecUl5gI2MThreshReg               =  0x00BC0004,
+	HWPfFecUl5gVersionReg                 =  0x00BC0100,
+	HWPfFecUl5gFcwStatusReg               =  0x00BC0104,
+	HWPfFecUl5gWarnReg                    =  0x00BC0108,
+	HwPfFecUl5gIbDebugReg                 =  0x00BC0200,
+	HwPfFecUl5gObLlrDebugReg              =  0x00BC0204,
+	HwPfFecUl5gObHarqDebugReg             =  0x00BC0208,
+	HwPfFecUl5g1CntrlReg                  =  0x00BC1000,
+	HwPfFecUl5g1I2MThreshReg              =  0x00BC1004,
+	HwPfFecUl5g1VersionReg                =  0x00BC1100,
+	HwPfFecUl5g1FcwStatusReg              =  0x00BC1104,
+	HwPfFecUl5g1WarnReg                   =  0x00BC1108,
+	HwPfFecUl5g1IbDebugReg                =  0x00BC1200,
+	HwPfFecUl5g1ObLlrDebugReg             =  0x00BC1204,
+	HwPfFecUl5g1ObHarqDebugReg            =  0x00BC1208,
+	HwPfFecUl5g2CntrlReg                  =  0x00BC2000,
+	HwPfFecUl5g2I2MThreshReg              =  0x00BC2004,
+	HwPfFecUl5g2VersionReg                =  0x00BC2100,
+	HwPfFecUl5g2FcwStatusReg              =  0x00BC2104,
+	HwPfFecUl5g2WarnReg                   =  0x00BC2108,
+	HwPfFecUl5g2IbDebugReg                =  0x00BC2200,
+	HwPfFecUl5g2ObLlrDebugReg             =  0x00BC2204,
+	HwPfFecUl5g2ObHarqDebugReg            =  0x00BC2208,
+	HwPfFecUl5g3CntrlReg                  =  0x00BC3000,
+	HwPfFecUl5g3I2MThreshReg              =  0x00BC3004,
+	HwPfFecUl5g3VersionReg                =  0x00BC3100,
+	HwPfFecUl5g3FcwStatusReg              =  0x00BC3104,
+	HwPfFecUl5g3WarnReg                   =  0x00BC3108,
+	HwPfFecUl5g3IbDebugReg                =  0x00BC3200,
+	HwPfFecUl5g3ObLlrDebugReg             =  0x00BC3204,
+	HwPfFecUl5g3ObHarqDebugReg            =  0x00BC3208,
+	HwPfFecUl5g4CntrlReg                  =  0x00BC4000,
+	HwPfFecUl5g4I2MThreshReg              =  0x00BC4004,
+	HwPfFecUl5g4VersionReg                =  0x00BC4100,
+	HwPfFecUl5g4FcwStatusReg              =  0x00BC4104,
+	HwPfFecUl5g4WarnReg                   =  0x00BC4108,
+	HwPfFecUl5g4IbDebugReg                =  0x00BC4200,
+	HwPfFecUl5g4ObLlrDebugReg             =  0x00BC4204,
+	HwPfFecUl5g4ObHarqDebugReg            =  0x00BC4208,
+	HwPfFecUl5g5CntrlReg                  =  0x00BC5000,
+	HwPfFecUl5g5I2MThreshReg              =  0x00BC5004,
+	HwPfFecUl5g5VersionReg                =  0x00BC5100,
+	HwPfFecUl5g5FcwStatusReg              =  0x00BC5104,
+	HwPfFecUl5g5WarnReg                   =  0x00BC5108,
+	HwPfFecUl5g5IbDebugReg                =  0x00BC5200,
+	HwPfFecUl5g5ObLlrDebugReg             =  0x00BC5204,
+	HwPfFecUl5g5ObHarqDebugReg            =  0x00BC5208,
+	HwPfFecUl5g6CntrlReg                  =  0x00BC6000,
+	HwPfFecUl5g6I2MThreshReg              =  0x00BC6004,
+	HwPfFecUl5g6VersionReg                =  0x00BC6100,
+	HwPfFecUl5g6FcwStatusReg              =  0x00BC6104,
+	HwPfFecUl5g6WarnReg                   =  0x00BC6108,
+	HwPfFecUl5g6IbDebugReg                =  0x00BC6200,
+	HwPfFecUl5g6ObLlrDebugReg             =  0x00BC6204,
+	HwPfFecUl5g6ObHarqDebugReg            =  0x00BC6208,
+	HwPfFecUl5g7CntrlReg                  =  0x00BC7000,
+	HwPfFecUl5g7I2MThreshReg              =  0x00BC7004,
+	HwPfFecUl5g7VersionReg                =  0x00BC7100,
+	HwPfFecUl5g7FcwStatusReg              =  0x00BC7104,
+	HwPfFecUl5g7WarnReg                   =  0x00BC7108,
+	HwPfFecUl5g7IbDebugReg                =  0x00BC7200,
+	HwPfFecUl5g7ObLlrDebugReg             =  0x00BC7204,
+	HwPfFecUl5g7ObHarqDebugReg            =  0x00BC7208,
+	HwPfFecUl5g8CntrlReg                  =  0x00BC8000,
+	HwPfFecUl5g8I2MThreshReg              =  0x00BC8004,
+	HwPfFecUl5g8VersionReg                =  0x00BC8100,
+	HwPfFecUl5g8FcwStatusReg              =  0x00BC8104,
+	HwPfFecUl5g8WarnReg                   =  0x00BC8108,
+	HwPfFecUl5g8IbDebugReg                =  0x00BC8200,
+	HwPfFecUl5g8ObLlrDebugReg             =  0x00BC8204,
+	HwPfFecUl5g8ObHarqDebugReg            =  0x00BC8208,
+	HwPfFecDl5g0CntrlReg                  =  0x00BCD000,
+	HwPfFecDl5g0I2MThreshReg              =  0x00BCD004,
+	HwPfFecDl5g0VersionReg                =  0x00BCD100,
+	HwPfFecDl5g0FcwStatusReg              =  0x00BCD104,
+	HwPfFecDl5g0WarnReg                   =  0x00BCD108,
+	HwPfFecDl5g0IbDebugReg                =  0x00BCD200,
+	HwPfFecDl5g0ObDebugReg                =  0x00BCD204,
+	HwPfFecDl5g1CntrlReg                  =  0x00BCE000,
+	HwPfFecDl5g1I2MThreshReg              =  0x00BCE004,
+	HwPfFecDl5g1VersionReg                =  0x00BCE100,
+	HwPfFecDl5g1FcwStatusReg              =  0x00BCE104,
+	HwPfFecDl5g1WarnReg                   =  0x00BCE108,
+	HwPfFecDl5g1IbDebugReg                =  0x00BCE200,
+	HwPfFecDl5g1ObDebugReg                =  0x00BCE204,
+	HwPfFecDl5g2CntrlReg                  =  0x00BCF000,
+	HwPfFecDl5g2I2MThreshReg              =  0x00BCF004,
+	HwPfFecDl5g2VersionReg                =  0x00BCF100,
+	HwPfFecDl5g2FcwStatusReg              =  0x00BCF104,
+	HwPfFecDl5g2WarnReg                   =  0x00BCF108,
+	HwPfFecDl5g2IbDebugReg                =  0x00BCF200,
+	HwPfFecDl5g2ObDebugReg                =  0x00BCF204,
+	HWPfFecUlVersionReg                   =  0x00BD0000,
+	HWPfFecUlControlReg                   =  0x00BD0004,
+	HWPfFecUlStatusReg                    =  0x00BD0008,
+	HWPfFecDlVersionReg                   =  0x00BDF000,
+	HWPfFecDlClusterConfigReg             =  0x00BDF004,
+	HWPfFecDlBurstThres                   =  0x00BDF00C,
+	HWPfFecDlClusterStatusReg0            =  0x00BDF040,
+	HWPfFecDlClusterStatusReg1            =  0x00BDF044,
+	HWPfFecDlClusterStatusReg2            =  0x00BDF048,
+	HWPfFecDlClusterStatusReg3            =  0x00BDF04C,
+	HWPfFecDlClusterStatusReg4            =  0x00BDF050,
+	HWPfFecDlClusterStatusReg5            =  0x00BDF054,
+	HwPfWbbThreshold                      =  0x00C20000,
+	HwPfWbbSpare                          =  0x00C20004,
+	HwPfWbbDebugCtl                       =  0x00C20010,
+	HwPfWbbDebug                          =  0x00C20014,
+	HwPfWbbError                          =  0x00C20020,
+	HwPfWbbErrorInjecti                   =  0x00C20024,
+	HWPfChaFabPllPllrst                   =  0x00C40000,
+	HWPfChaFabPllClk0                     =  0x00C40004,
+	HWPfChaFabPllClk1                     =  0x00C40008,
+	HWPfChaFabPllBwadj                    =  0x00C4000C,
+	HWPfChaFabPllLbw                      =  0x00C40010,
+	HWPfChaFabPllResetq                   =  0x00C40014,
+	HWPfChaFabPllPhshft0                  =  0x00C40018,
+	HWPfChaFabPllPhshft1                  =  0x00C4001C,
+	HWPfChaFabPllDivq0                    =  0x00C40020,
+	HWPfChaFabPllDivq1                    =  0x00C40024,
+	HWPfChaFabPllDivq2                    =  0x00C40028,
+	HWPfChaFabPllDivq3                    =  0x00C4002C,
+	HWPfChaFabPllDivq4                    =  0x00C40030,
+	HWPfChaFabPllDivq5                    =  0x00C40034,
+	HWPfChaFabPllDivq6                    =  0x00C40038,
+	HWPfChaFabPllDivq7                    =  0x00C4003C,
+	HWPfChaDl5gPllPllrst                  =  0x00C40080,
+	HWPfChaDl5gPllClk0                    =  0x00C40084,
+	HWPfChaDl5gPllClk1                    =  0x00C40088,
+	HWPfChaDl5gPllBwadj                   =  0x00C4008C,
+	HWPfChaDl5gPllLbw                     =  0x00C40090,
+	HWPfChaDl5gPllResetq                  =  0x00C40094,
+	HWPfChaDl5gPllPhshft0                 =  0x00C40098,
+	HWPfChaDl5gPllPhshft1                 =  0x00C4009C,
+	HWPfChaDl5gPllDivq0                   =  0x00C400A0,
+	HWPfChaDl5gPllDivq1                   =  0x00C400A4,
+	HWPfChaDl5gPllDivq2                   =  0x00C400A8,
+	HWPfChaDl5gPllDivq3                   =  0x00C400AC,
+	HWPfChaDl5gPllDivq4                   =  0x00C400B0,
+	HWPfChaDl5gPllDivq5                   =  0x00C400B4,
+	HWPfChaDl5gPllDivq6                   =  0x00C400B8,
+	HWPfChaDl5gPllDivq7                   =  0x00C400BC,
+	HWPfChaDl4gPllPllrst                  =  0x00C40100,
+	HWPfChaDl4gPllClk0                    =  0x00C40104,
+	HWPfChaDl4gPllClk1                    =  0x00C40108,
+	HWPfChaDl4gPllBwadj                   =  0x00C4010C,
+	HWPfChaDl4gPllLbw                     =  0x00C40110,
+	HWPfChaDl4gPllResetq                  =  0x00C40114,
+	HWPfChaDl4gPllPhshft0                 =  0x00C40118,
+	HWPfChaDl4gPllPhshft1                 =  0x00C4011C,
+	HWPfChaDl4gPllDivq0                   =  0x00C40120,
+	HWPfChaDl4gPllDivq1                   =  0x00C40124,
+	HWPfChaDl4gPllDivq2                   =  0x00C40128,
+	HWPfChaDl4gPllDivq3                   =  0x00C4012C,
+	HWPfChaDl4gPllDivq4                   =  0x00C40130,
+	HWPfChaDl4gPllDivq5                   =  0x00C40134,
+	HWPfChaDl4gPllDivq6                   =  0x00C40138,
+	HWPfChaDl4gPllDivq7                   =  0x00C4013C,
+	HWPfChaUl5gPllPllrst                  =  0x00C40180,
+	HWPfChaUl5gPllClk0                    =  0x00C40184,
+	HWPfChaUl5gPllClk1                    =  0x00C40188,
+	HWPfChaUl5gPllBwadj                   =  0x00C4018C,
+	HWPfChaUl5gPllLbw                     =  0x00C40190,
+	HWPfChaUl5gPllResetq                  =  0x00C40194,
+	HWPfChaUl5gPllPhshft0                 =  0x00C40198,
+	HWPfChaUl5gPllPhshft1                 =  0x00C4019C,
+	HWPfChaUl5gPllDivq0                   =  0x00C401A0,
+	HWPfChaUl5gPllDivq1                   =  0x00C401A4,
+	HWPfChaUl5gPllDivq2                   =  0x00C401A8,
+	HWPfChaUl5gPllDivq3                   =  0x00C401AC,
+	HWPfChaUl5gPllDivq4                   =  0x00C401B0,
+	HWPfChaUl5gPllDivq5                   =  0x00C401B4,
+	HWPfChaUl5gPllDivq6                   =  0x00C401B8,
+	HWPfChaUl5gPllDivq7                   =  0x00C401BC,
+	HWPfChaUl4gPllPllrst                  =  0x00C40200,
+	HWPfChaUl4gPllClk0                    =  0x00C40204,
+	HWPfChaUl4gPllClk1                    =  0x00C40208,
+	HWPfChaUl4gPllBwadj                   =  0x00C4020C,
+	HWPfChaUl4gPllLbw                     =  0x00C40210,
+	HWPfChaUl4gPllResetq                  =  0x00C40214,
+	HWPfChaUl4gPllPhshft0                 =  0x00C40218,
+	HWPfChaUl4gPllPhshft1                 =  0x00C4021C,
+	HWPfChaUl4gPllDivq0                   =  0x00C40220,
+	HWPfChaUl4gPllDivq1                   =  0x00C40224,
+	HWPfChaUl4gPllDivq2                   =  0x00C40228,
+	HWPfChaUl4gPllDivq3                   =  0x00C4022C,
+	HWPfChaUl4gPllDivq4                   =  0x00C40230,
+	HWPfChaUl4gPllDivq5                   =  0x00C40234,
+	HWPfChaUl4gPllDivq6                   =  0x00C40238,
+	HWPfChaUl4gPllDivq7                   =  0x00C4023C,
+	HWPfChaDdrPllPllrst                   =  0x00C40280,
+	HWPfChaDdrPllClk0                     =  0x00C40284,
+	HWPfChaDdrPllClk1                     =  0x00C40288,
+	HWPfChaDdrPllBwadj                    =  0x00C4028C,
+	HWPfChaDdrPllLbw                      =  0x00C40290,
+	HWPfChaDdrPllResetq                   =  0x00C40294,
+	HWPfChaDdrPllPhshft0                  =  0x00C40298,
+	HWPfChaDdrPllPhshft1                  =  0x00C4029C,
+	HWPfChaDdrPllDivq0                    =  0x00C402A0,
+	HWPfChaDdrPllDivq1                    =  0x00C402A4,
+	HWPfChaDdrPllDivq2                    =  0x00C402A8,
+	HWPfChaDdrPllDivq3                    =  0x00C402AC,
+	HWPfChaDdrPllDivq4                    =  0x00C402B0,
+	HWPfChaDdrPllDivq5                    =  0x00C402B4,
+	HWPfChaDdrPllDivq6                    =  0x00C402B8,
+	HWPfChaDdrPllDivq7                    =  0x00C402BC,
+	HWPfChaErrStatus                      =  0x00C40400,
+	HWPfChaErrMask                        =  0x00C40404,
+	HWPfChaDebugPcieMsiFifo               =  0x00C40410,
+	HWPfChaDebugDdrMsiFifo                =  0x00C40414,
+	HWPfChaDebugMiscMsiFifo               =  0x00C40418,
+	HWPfChaPwmSet                         =  0x00C40420,
+	HWPfChaDdrRstStatus                   =  0x00C40430,
+	HWPfChaDdrStDoneStatus                =  0x00C40434,
+	HWPfChaDdrWbRstCfg                    =  0x00C40438,
+	HWPfChaDdrApbRstCfg                   =  0x00C4043C,
+	HWPfChaDdrPhyRstCfg                   =  0x00C40440,
+	HWPfChaDdrCpuRstCfg                   =  0x00C40444,
+	HWPfChaDdrSifRstCfg                   =  0x00C40448,
+	HWPfChaPadcfgPcomp0                   =  0x00C41000,
+	HWPfChaPadcfgNcomp0                   =  0x00C41004,
+	HWPfChaPadcfgOdt0                     =  0x00C41008,
+	HWPfChaPadcfgProtect0                 =  0x00C4100C,
+	HWPfChaPreemphasisProtect0            =  0x00C41010,
+	HWPfChaPreemphasisCompen0             =  0x00C41040,
+	HWPfChaPreemphasisOdten0              =  0x00C41044,
+	HWPfChaPadcfgPcomp1                   =  0x00C41100,
+	HWPfChaPadcfgNcomp1                   =  0x00C41104,
+	HWPfChaPadcfgOdt1                     =  0x00C41108,
+	HWPfChaPadcfgProtect1                 =  0x00C4110C,
+	HWPfChaPreemphasisProtect1            =  0x00C41110,
+	HWPfChaPreemphasisCompen1             =  0x00C41140,
+	HWPfChaPreemphasisOdten1              =  0x00C41144,
+	HWPfChaPadcfgPcomp2                   =  0x00C41200,
+	HWPfChaPadcfgNcomp2                   =  0x00C41204,
+	HWPfChaPadcfgOdt2                     =  0x00C41208,
+	HWPfChaPadcfgProtect2                 =  0x00C4120C,
+	HWPfChaPreemphasisProtect2            =  0x00C41210,
+	HWPfChaPreemphasisCompen2             =  0x00C41240,
+	HWPfChaPreemphasisOdten4              =  0x00C41444,
+	HWPfChaPreemphasisOdten2              =  0x00C41244,
+	HWPfChaPadcfgPcomp3                   =  0x00C41300,
+	HWPfChaPadcfgNcomp3                   =  0x00C41304,
+	HWPfChaPadcfgOdt3                     =  0x00C41308,
+	HWPfChaPadcfgProtect3                 =  0x00C4130C,
+	HWPfChaPreemphasisProtect3            =  0x00C41310,
+	HWPfChaPreemphasisCompen3             =  0x00C41340,
+	HWPfChaPreemphasisOdten3              =  0x00C41344,
+	HWPfChaPadcfgPcomp4                   =  0x00C41400,
+	HWPfChaPadcfgNcomp4                   =  0x00C41404,
+	HWPfChaPadcfgOdt4                     =  0x00C41408,
+	HWPfChaPadcfgProtect4                 =  0x00C4140C,
+	HWPfChaPreemphasisProtect4            =  0x00C41410,
+	HWPfChaPreemphasisCompen4             =  0x00C41440,
+	HWPfHiVfToPfDbellVf                   =  0x00C80000,
+	HWPfHiPfToVfDbellVf                   =  0x00C80008,
+	HWPfHiInfoRingBaseLoVf                =  0x00C80010,
+	HWPfHiInfoRingBaseHiVf                =  0x00C80014,
+	HWPfHiInfoRingPointerVf               =  0x00C80018,
+	HWPfHiInfoRingIntWrEnVf               =  0x00C80020,
+	HWPfHiInfoRingPf2VfWrEnVf             =  0x00C80024,
+	HWPfHiMsixVectorMapperVf              =  0x00C80060,
+	HWPfHiModuleVersionReg                =  0x00C84000,
+	HWPfHiIosf2axiErrLogReg               =  0x00C84004,
+	HWPfHiHardResetReg                    =  0x00C84008,
+	HWPfHi5GHardResetReg                  =  0x00C8400C,
+	HWPfHiInfoRingBaseLoRegPf             =  0x00C84010,
+	HWPfHiInfoRingBaseHiRegPf             =  0x00C84014,
+	HWPfHiInfoRingPointerRegPf            =  0x00C84018,
+	HWPfHiInfoRingIntWrEnRegPf            =  0x00C84020,
+	HWPfHiInfoRingVf2pfLoWrEnReg          =  0x00C84024,
+	HWPfHiInfoRingVf2pfHiWrEnReg          =  0x00C84028,
+	HWPfHiLogParityErrStatusReg           =  0x00C8402C,
+	HWPfHiLogDataParityErrorVfStatusLo    =  0x00C84030,
+	HWPfHiLogDataParityErrorVfStatusHi    =  0x00C84034,
+	HWPfHiBlockTransmitOnErrorEn          =  0x00C84038,
+	HWPfHiCfgMsiIntWrEnRegPf              =  0x00C84040,
+	HWPfHiCfgMsiVf2pfLoWrEnReg            =  0x00C84044,
+	HWPfHiCfgMsiVf2pfHighWrEnReg          =  0x00C84048,
+	HWPfHiMsixVectorMapperPf              =  0x00C84060,
+	HWPfHiApbWrWaitTime                   =  0x00C84100,
+	HWPfHiXCounterMaxValue                =  0x00C84104,
+	HWPfHiPfMode                          =  0x00C84108,
+	HWPfHiClkGateHystReg                  =  0x00C8410C,
+	HWPfHiSnoopBitsReg                    =  0x00C84110,
+	HWPfHiMsiDropEnableReg                =  0x00C84114,
+	HWPfHiMsiStatReg                      =  0x00C84120,
+	HWPfHiFifoOflStatReg                  =  0x00C84124,
+	HWPfHiHiDebugReg                      =  0x00C841F4,
+	HWPfHiDebugMemSnoopMsiFifo            =  0x00C841F8,
+	HWPfHiDebugMemSnoopInputFifo          =  0x00C841FC,
+	HWPfHiMsixMappingConfig               =  0x00C84200,
+	HWPfHiErrInjectReg                    =  0x00C84204,
+	HWPfHiErrStatusReg                    =  0x00C84208,
+	HWPfHiErrMaskReg                      =  0x00C8420C,
+	HWPfHiErrFatalReg                     =  0x00C84210,
+	HWPfHiJunkReg                         =  0x00C8FF00,
+	HWPfDdrUmmcVer                        =  0x00D00000,
+	HWPfDdrUmmcCap                        =  0x00D00010,
+	HWPfDdrUmmcCtrl                       =  0x00D00020,
+	HWPfDdrMpcPe                          =  0x00D00080,
+	HWPfDdrMpcPpri3                       =  0x00D00090,
+	HWPfDdrMpcPpri2                       =  0x00D000A0,
+	HWPfDdrMpcPpri1                       =  0x00D000B0,
+	HWPfDdrMpcPpri0                       =  0x00D000C0,
+	HWPfDdrMpcPrwgrpCtrl                  =  0x00D000D0,
+	HWPfDdrMpcPbw7                        =  0x00D000E0,
+	HWPfDdrMpcPbw6                        =  0x00D000F0,
+	HWPfDdrMpcPbw5                        =  0x00D00100,
+	HWPfDdrMpcPbw4                        =  0x00D00110,
+	HWPfDdrMpcPbw3                        =  0x00D00120,
+	HWPfDdrMpcPbw2                        =  0x00D00130,
+	HWPfDdrMpcPbw1                        =  0x00D00140,
+	HWPfDdrMpcPbw0                        =  0x00D00150,
+	HwPfDdrUmmcEccErrInj                  =  0x00D00190,
+	HWPfDdrMemoryInit                     =  0x00D00200,
+	HWPfDdrMemoryInitDone                 =  0x00D00210,
+	HWPfDdrMemInitPhyTrng0                =  0x00D00240,
+	HWPfDdrMemInitPhyTrng1                =  0x00D00250,
+	HWPfDdrMemInitPhyTrng2                =  0x00D00260,
+	HWPfDdrMemInitPhyTrng3                =  0x00D00270,
+	HWPfDdrBcDram                         =  0x00D003C0,
+	HWPfDdrBcAddrMap                      =  0x00D003D0,
+	HWPfDdrBcRef                          =  0x00D003E0,
+	HWPfDdrBcTim0                         =  0x00D00400,
+	HWPfDdrBcTim1                         =  0x00D00410,
+	HWPfDdrBcTim2                         =  0x00D00420,
+	HWPfDdrBcTim3                         =  0x00D00430,
+	HWPfDdrBcTim4                         =  0x00D00440,
+	HWPfDdrBcTim5                         =  0x00D00450,
+	HWPfDdrBcTim6                         =  0x00D00460,
+	HWPfDdrBcTim7                         =  0x00D00470,
+	HWPfDdrBcTim8                         =  0x00D00480,
+	HWPfDdrBcTim9                         =  0x00D00490,
+	HWPfDdrBcTim10                        =  0x00D004A0,
+	HWPfDdrBcTim12                        =  0x00D004C0,
+	HWPfDdrDfiInit                        =  0x00D004D0,
+	HWPfDdrDfiInitComplete                =  0x00D004E0,
+	HWPfDdrDfiTim0                        =  0x00D004F0,
+	HWPfDdrDfiTim1                        =  0x00D00500,
+	HWPfDdrDfiPhyUpdEn                    =  0x00D00530,
+	HWPfDdrMemStatus                      =  0x00D00540,
+	HWPfDdrUmmcErrStatus                  =  0x00D00550,
+	HWPfDdrUmmcIntStatus                  =  0x00D00560,
+	HWPfDdrUmmcIntEn                      =  0x00D00570,
+	HWPfDdrPhyRdLatency                   =  0x00D48400,
+	HWPfDdrPhyRdLatencyDbi                =  0x00D48410,
+	HWPfDdrPhyWrLatency                   =  0x00D48420,
+	HWPfDdrPhyTrngType                    =  0x00D48430,
+	HWPfDdrPhyMrsTiming2                  =  0x00D48440,
+	HWPfDdrPhyMrsTiming0                  =  0x00D48450,
+	HWPfDdrPhyMrsTiming1                  =  0x00D48460,
+	HWPfDdrPhyDramTmrd                    =  0x00D48470,
+	HWPfDdrPhyDramTmod                    =  0x00D48480,
+	HWPfDdrPhyDramTwpre                   =  0x00D48490,
+	HWPfDdrPhyDramTrfc                    =  0x00D484A0,
+	HWPfDdrPhyDramTrwtp                   =  0x00D484B0,
+	HWPfDdrPhyMr01Dimm                    =  0x00D484C0,
+	HWPfDdrPhyMr01DimmDbi                 =  0x00D484D0,
+	HWPfDdrPhyMr23Dimm                    =  0x00D484E0,
+	HWPfDdrPhyMr45Dimm                    =  0x00D484F0,
+	HWPfDdrPhyMr67Dimm                    =  0x00D48500,
+	HWPfDdrPhyWrlvlWwRdlvlRr              =  0x00D48510,
+	HWPfDdrPhyOdtEn                       =  0x00D48520,
+	HWPfDdrPhyFastTrng                    =  0x00D48530,
+	HWPfDdrPhyDynTrngGap                  =  0x00D48540,
+	HWPfDdrPhyDynRcalGap                  =  0x00D48550,
+	HWPfDdrPhyIdletimeout                 =  0x00D48560,
+	HWPfDdrPhyRstCkeGap                   =  0x00D48570,
+	HWPfDdrPhyCkeMrsGap                   =  0x00D48580,
+	HWPfDdrPhyMemVrefMidVal               =  0x00D48590,
+	HWPfDdrPhyVrefStep                    =  0x00D485A0,
+	HWPfDdrPhyVrefThreshold               =  0x00D485B0,
+	HWPfDdrPhyPhyVrefMidVal               =  0x00D485C0,
+	HWPfDdrPhyDqsCountMax                 =  0x00D485D0,
+	HWPfDdrPhyDqsCountNum                 =  0x00D485E0,
+	HWPfDdrPhyDramRow                     =  0x00D485F0,
+	HWPfDdrPhyDramCol                     =  0x00D48600,
+	HWPfDdrPhyDramBgBa                    =  0x00D48610,
+	HWPfDdrPhyDynamicUpdreqrel            =  0x00D48620,
+	HWPfDdrPhyVrefLimits                  =  0x00D48630,
+	HWPfDdrPhyIdtmTcStatus                =  0x00D6C020,
+	HWPfDdrPhyIdtmFwVersion               =  0x00D6C410,
+	HWPfDdrPhyRdlvlGateInitDelay          =  0x00D70000,
+	HWPfDdrPhyRdenSmplabc                 =  0x00D70008,
+	HWPfDdrPhyVrefNibble0                 =  0x00D7000C,
+	HWPfDdrPhyVrefNibble1                 =  0x00D70010,
+	HWPfDdrPhyRdlvlGateDqsSmpl0           =  0x00D70014,
+	HWPfDdrPhyRdlvlGateDqsSmpl1           =  0x00D70018,
+	HWPfDdrPhyRdlvlGateDqsSmpl2           =  0x00D7001C,
+	HWPfDdrPhyDqsCount                    =  0x00D70020,
+	HWPfDdrPhyWrlvlRdlvlGateStatus        =  0x00D70024,
+	HWPfDdrPhyErrorFlags                  =  0x00D70028,
+	HWPfDdrPhyPowerDown                   =  0x00D70030,
+	HWPfDdrPhyPrbsSeedByte0               =  0x00D70034,
+	HWPfDdrPhyPrbsSeedByte1               =  0x00D70038,
+	HWPfDdrPhyPcompDq                     =  0x00D70040,
+	HWPfDdrPhyNcompDq                     =  0x00D70044,
+	HWPfDdrPhyPcompDqs                    =  0x00D70048,
+	HWPfDdrPhyNcompDqs                    =  0x00D7004C,
+	HWPfDdrPhyPcompCmd                    =  0x00D70050,
+	HWPfDdrPhyNcompCmd                    =  0x00D70054,
+	HWPfDdrPhyPcompCk                     =  0x00D70058,
+	HWPfDdrPhyNcompCk                     =  0x00D7005C,
+	HWPfDdrPhyRcalOdtDq                   =  0x00D70060,
+	HWPfDdrPhyRcalOdtDqs                  =  0x00D70064,
+	HWPfDdrPhyRcalMask1                   =  0x00D70068,
+	HWPfDdrPhyRcalMask2                   =  0x00D7006C,
+	HWPfDdrPhyRcalCtrl                    =  0x00D70070,
+	HWPfDdrPhyRcalCnt                     =  0x00D70074,
+	HWPfDdrPhyRcalOverride                =  0x00D70078,
+	HWPfDdrPhyRcalGateen                  =  0x00D7007C,
+	HWPfDdrPhyCtrl                        =  0x00D70080,
+	HWPfDdrPhyWrlvlAlg                    =  0x00D70084,
+	HWPfDdrPhyRcalVreftTxcmdOdt           =  0x00D70088,
+	HWPfDdrPhyRdlvlGateParam              =  0x00D7008C,
+	HWPfDdrPhyRdlvlGateParam2             =  0x00D70090,
+	HWPfDdrPhyRcalVreftTxdata             =  0x00D70094,
+	HWPfDdrPhyCmdIntDelay                 =  0x00D700A4,
+	HWPfDdrPhyAlertN                      =  0x00D700A8,
+	HWPfDdrPhyTrngReqWpre2tck             =  0x00D700AC,
+	HWPfDdrPhyCmdPhaseSel                 =  0x00D700B4,
+	HWPfDdrPhyCmdDcdl                     =  0x00D700B8,
+	HWPfDdrPhyCkDcdl                      =  0x00D700BC,
+	HWPfDdrPhySwTrngCtrl1                 =  0x00D700C0,
+	HWPfDdrPhySwTrngCtrl2                 =  0x00D700C4,
+	HWPfDdrPhyRcalPcompRden               =  0x00D700C8,
+	HWPfDdrPhyRcalNcompRden               =  0x00D700CC,
+	HWPfDdrPhyRcalCompen                  =  0x00D700D0,
+	HWPfDdrPhySwTrngRdqs                  =  0x00D700D4,
+	HWPfDdrPhySwTrngWdqs                  =  0x00D700D8,
+	HWPfDdrPhySwTrngRdena                 =  0x00D700DC,
+	HWPfDdrPhySwTrngRdenb                 =  0x00D700E0,
+	HWPfDdrPhySwTrngRdenc                 =  0x00D700E4,
+	HWPfDdrPhySwTrngWdq                   =  0x00D700E8,
+	HWPfDdrPhySwTrngRdq                   =  0x00D700EC,
+	HWPfDdrPhyPcfgHmValue                 =  0x00D700F0,
+	HWPfDdrPhyPcfgTimerValue              =  0x00D700F4,
+	HWPfDdrPhyPcfgSoftwareTraining        =  0x00D700F8,
+	HWPfDdrPhyPcfgMcStatus                =  0x00D700FC,
+	HWPfDdrPhyWrlvlPhRank0                =  0x00D70100,
+	HWPfDdrPhyRdenPhRank0                 =  0x00D70104,
+	HWPfDdrPhyRdenIntRank0                =  0x00D70108,
+	HWPfDdrPhyRdqsDcdlRank0               =  0x00D7010C,
+	HWPfDdrPhyRdqsShadowDcdlRank0         =  0x00D70110,
+	HWPfDdrPhyWdqsDcdlRank0               =  0x00D70114,
+	HWPfDdrPhyWdmDcdlShadowRank0          =  0x00D70118,
+	HWPfDdrPhyWdmDcdlRank0                =  0x00D7011C,
+	HWPfDdrPhyDbiDcdlRank0                =  0x00D70120,
+	HWPfDdrPhyRdenDcdlaRank0              =  0x00D70124,
+	HWPfDdrPhyDbiDcdlShadowRank0          =  0x00D70128,
+	HWPfDdrPhyRdenDcdlbRank0              =  0x00D7012C,
+	HWPfDdrPhyWdqsShadowDcdlRank0         =  0x00D70130,
+	HWPfDdrPhyRdenDcdlcRank0              =  0x00D70134,
+	HWPfDdrPhyRdenShadowDcdlaRank0        =  0x00D70138,
+	HWPfDdrPhyWrlvlIntRank0               =  0x00D7013C,
+	HWPfDdrPhyRdqDcdlBit0Rank0            =  0x00D70200,
+	HWPfDdrPhyRdqDcdlShadowBit0Rank0      =  0x00D70204,
+	HWPfDdrPhyWdqDcdlBit0Rank0            =  0x00D70208,
+	HWPfDdrPhyWdqDcdlShadowBit0Rank0      =  0x00D7020C,
+	HWPfDdrPhyRdqDcdlBit1Rank0            =  0x00D70240,
+	HWPfDdrPhyRdqDcdlShadowBit1Rank0      =  0x00D70244,
+	HWPfDdrPhyWdqDcdlBit1Rank0            =  0x00D70248,
+	HWPfDdrPhyWdqDcdlShadowBit1Rank0      =  0x00D7024C,
+	HWPfDdrPhyRdqDcdlBit2Rank0            =  0x00D70280,
+	HWPfDdrPhyRdqDcdlShadowBit2Rank0      =  0x00D70284,
+	HWPfDdrPhyWdqDcdlBit2Rank0            =  0x00D70288,
+	HWPfDdrPhyWdqDcdlShadowBit2Rank0      =  0x00D7028C,
+	HWPfDdrPhyRdqDcdlBit3Rank0            =  0x00D702C0,
+	HWPfDdrPhyRdqDcdlShadowBit3Rank0      =  0x00D702C4,
+	HWPfDdrPhyWdqDcdlBit3Rank0            =  0x00D702C8,
+	HWPfDdrPhyWdqDcdlShadowBit3Rank0      =  0x00D702CC,
+	HWPfDdrPhyRdqDcdlBit4Rank0            =  0x00D70300,
+	HWPfDdrPhyRdqDcdlShadowBit4Rank0      =  0x00D70304,
+	HWPfDdrPhyWdqDcdlBit4Rank0            =  0x00D70308,
+	HWPfDdrPhyWdqDcdlShadowBit4Rank0      =  0x00D7030C,
+	HWPfDdrPhyRdqDcdlBit5Rank0            =  0x00D70340,
+	HWPfDdrPhyRdqDcdlShadowBit5Rank0      =  0x00D70344,
+	HWPfDdrPhyWdqDcdlBit5Rank0            =  0x00D70348,
+	HWPfDdrPhyWdqDcdlShadowBit5Rank0      =  0x00D7034C,
+	HWPfDdrPhyRdqDcdlBit6Rank0            =  0x00D70380,
+	HWPfDdrPhyRdqDcdlShadowBit6Rank0      =  0x00D70384,
+	HWPfDdrPhyWdqDcdlBit6Rank0            =  0x00D70388,
+	HWPfDdrPhyWdqDcdlShadowBit6Rank0      =  0x00D7038C,
+	HWPfDdrPhyRdqDcdlBit7Rank0            =  0x00D703C0,
+	HWPfDdrPhyRdqDcdlShadowBit7Rank0      =  0x00D703C4,
+	HWPfDdrPhyWdqDcdlBit7Rank0            =  0x00D703C8,
+	HWPfDdrPhyWdqDcdlShadowBit7Rank0      =  0x00D703CC,
+	HWPfDdrPhyIdtmStatus                  =  0x00D740D0,
+	HWPfDdrPhyIdtmError                   =  0x00D74110,
+	HWPfDdrPhyIdtmDebug                   =  0x00D74120,
+	HWPfDdrPhyIdtmDebugInt                =  0x00D74130,
+	HwPfPcieLnAsicCfgovr                  =  0x00D80000,
+	HwPfPcieLnAclkmixer                   =  0x00D80004,
+	HwPfPcieLnTxrampfreq                  =  0x00D80008,
+	HwPfPcieLnLanetest                    =  0x00D8000C,
+	HwPfPcieLnDcctrl                      =  0x00D80010,
+	HwPfPcieLnDccmeas                     =  0x00D80014,
+	HwPfPcieLnDccovrAclk                  =  0x00D80018,
+	HwPfPcieLnDccovrTxa                   =  0x00D8001C,
+	HwPfPcieLnDccovrTxk                   =  0x00D80020,
+	HwPfPcieLnDccovrDclk                  =  0x00D80024,
+	HwPfPcieLnDccovrEclk                  =  0x00D80028,
+	HwPfPcieLnDcctrimAclk                 =  0x00D8002C,
+	HwPfPcieLnDcctrimTx                   =  0x00D80030,
+	HwPfPcieLnDcctrimDclk                 =  0x00D80034,
+	HwPfPcieLnDcctrimEclk                 =  0x00D80038,
+	HwPfPcieLnQuadCtrl                    =  0x00D8003C,
+	HwPfPcieLnQuadCorrIndex               =  0x00D80040,
+	HwPfPcieLnQuadCorrStatus              =  0x00D80044,
+	HwPfPcieLnAsicRxovr1                  =  0x00D80048,
+	HwPfPcieLnAsicRxovr2                  =  0x00D8004C,
+	HwPfPcieLnAsicEqinfovr                =  0x00D80050,
+	HwPfPcieLnRxcsr                       =  0x00D80054,
+	HwPfPcieLnRxfectrl                    =  0x00D80058,
+	HwPfPcieLnRxtest                      =  0x00D8005C,
+	HwPfPcieLnEscount                     =  0x00D80060,
+	HwPfPcieLnCdrctrl                     =  0x00D80064,
+	HwPfPcieLnCdrctrl2                    =  0x00D80068,
+	HwPfPcieLnCdrcfg0Ctrl0                =  0x00D8006C,
+	HwPfPcieLnCdrcfg0Ctrl1                =  0x00D80070,
+	HwPfPcieLnCdrcfg0Ctrl2                =  0x00D80074,
+	HwPfPcieLnCdrcfg1Ctrl0                =  0x00D80078,
+	HwPfPcieLnCdrcfg1Ctrl1                =  0x00D8007C,
+	HwPfPcieLnCdrcfg1Ctrl2                =  0x00D80080,
+	HwPfPcieLnCdrcfg2Ctrl0                =  0x00D80084,
+	HwPfPcieLnCdrcfg2Ctrl1                =  0x00D80088,
+	HwPfPcieLnCdrcfg2Ctrl2                =  0x00D8008C,
+	HwPfPcieLnCdrcfg3Ctrl0                =  0x00D80090,
+	HwPfPcieLnCdrcfg3Ctrl1                =  0x00D80094,
+	HwPfPcieLnCdrcfg3Ctrl2                =  0x00D80098,
+	HwPfPcieLnCdrphase                    =  0x00D8009C,
+	HwPfPcieLnCdrfreq                     =  0x00D800A0,
+	HwPfPcieLnCdrstatusPhase              =  0x00D800A4,
+	HwPfPcieLnCdrstatusFreq               =  0x00D800A8,
+	HwPfPcieLnCdroffset                   =  0x00D800AC,
+	HwPfPcieLnRxvosctl                    =  0x00D800B0,
+	HwPfPcieLnRxvosctl2                   =  0x00D800B4,
+	HwPfPcieLnRxlosctl                    =  0x00D800B8,
+	HwPfPcieLnRxlos                       =  0x00D800BC,
+	HwPfPcieLnRxlosvval                   =  0x00D800C0,
+	HwPfPcieLnRxvosd0                     =  0x00D800C4,
+	HwPfPcieLnRxvosd1                     =  0x00D800C8,
+	HwPfPcieLnRxvosep0                    =  0x00D800CC,
+	HwPfPcieLnRxvosep1                    =  0x00D800D0,
+	HwPfPcieLnRxvosen0                    =  0x00D800D4,
+	HwPfPcieLnRxvosen1                    =  0x00D800D8,
+	HwPfPcieLnRxvosafe                    =  0x00D800DC,
+	HwPfPcieLnRxvosa0                     =  0x00D800E0,
+	HwPfPcieLnRxvosa0Out                  =  0x00D800E4,
+	HwPfPcieLnRxvosa1                     =  0x00D800E8,
+	HwPfPcieLnRxvosa1Out                  =  0x00D800EC,
+	HwPfPcieLnRxmisc                      =  0x00D800F0,
+	HwPfPcieLnRxbeacon                    =  0x00D800F4,
+	HwPfPcieLnRxdssout                    =  0x00D800F8,
+	HwPfPcieLnRxdssout2                   =  0x00D800FC,
+	HwPfPcieLnAlphapctrl                  =  0x00D80100,
+	HwPfPcieLnAlphanctrl                  =  0x00D80104,
+	HwPfPcieLnAdaptctrl                   =  0x00D80108,
+	HwPfPcieLnAdaptctrl1                  =  0x00D8010C,
+	HwPfPcieLnAdaptstatus                 =  0x00D80110,
+	HwPfPcieLnAdaptvga1                   =  0x00D80114,
+	HwPfPcieLnAdaptvga2                   =  0x00D80118,
+	HwPfPcieLnAdaptvga3                   =  0x00D8011C,
+	HwPfPcieLnAdaptvga4                   =  0x00D80120,
+	HwPfPcieLnAdaptboost1                 =  0x00D80124,
+	HwPfPcieLnAdaptboost2                 =  0x00D80128,
+	HwPfPcieLnAdaptboost3                 =  0x00D8012C,
+	HwPfPcieLnAdaptboost4                 =  0x00D80130,
+	HwPfPcieLnAdaptsslms1                 =  0x00D80134,
+	HwPfPcieLnAdaptsslms2                 =  0x00D80138,
+	HwPfPcieLnAdaptvgaStatus              =  0x00D8013C,
+	HwPfPcieLnAdaptboostStatus            =  0x00D80140,
+	HwPfPcieLnAdaptsslmsStatus1           =  0x00D80144,
+	HwPfPcieLnAdaptsslmsStatus2           =  0x00D80148,
+	HwPfPcieLnAfectrl1                    =  0x00D8014C,
+	HwPfPcieLnAfectrl2                    =  0x00D80150,
+	HwPfPcieLnAfectrl3                    =  0x00D80154,
+	HwPfPcieLnAfedefault1                 =  0x00D80158,
+	HwPfPcieLnAfedefault2                 =  0x00D8015C,
+	HwPfPcieLnDfectrl1                    =  0x00D80160,
+	HwPfPcieLnDfectrl2                    =  0x00D80164,
+	HwPfPcieLnDfectrl3                    =  0x00D80168,
+	HwPfPcieLnDfectrl4                    =  0x00D8016C,
+	HwPfPcieLnDfectrl5                    =  0x00D80170,
+	HwPfPcieLnDfectrl6                    =  0x00D80174,
+	HwPfPcieLnAfestatus1                  =  0x00D80178,
+	HwPfPcieLnAfestatus2                  =  0x00D8017C,
+	HwPfPcieLnDfestatus1                  =  0x00D80180,
+	HwPfPcieLnDfestatus2                  =  0x00D80184,
+	HwPfPcieLnDfestatus3                  =  0x00D80188,
+	HwPfPcieLnDfestatus4                  =  0x00D8018C,
+	HwPfPcieLnDfestatus5                  =  0x00D80190,
+	HwPfPcieLnAlphastatus                 =  0x00D80194,
+	HwPfPcieLnFomctrl1                    =  0x00D80198,
+	HwPfPcieLnFomctrl2                    =  0x00D8019C,
+	HwPfPcieLnFomctrl3                    =  0x00D801A0,
+	HwPfPcieLnAclkcalStatus               =  0x00D801A4,
+	HwPfPcieLnOffscorrStatus              =  0x00D801A8,
+	HwPfPcieLnEyewidthStatus              =  0x00D801AC,
+	HwPfPcieLnEyeheightStatus             =  0x00D801B0,
+	HwPfPcieLnAsicTxovr1                  =  0x00D801B4,
+	HwPfPcieLnAsicTxovr2                  =  0x00D801B8,
+	HwPfPcieLnAsicTxovr3                  =  0x00D801BC,
+	HwPfPcieLnTxbiasadjOvr                =  0x00D801C0,
+	HwPfPcieLnTxcsr                       =  0x00D801C4,
+	HwPfPcieLnTxtest                      =  0x00D801C8,
+	HwPfPcieLnTxtestword                  =  0x00D801CC,
+	HwPfPcieLnTxtestwordHigh              =  0x00D801D0,
+	HwPfPcieLnTxdrive                     =  0x00D801D4,
+	HwPfPcieLnMtcsLn                      =  0x00D801D8,
+	HwPfPcieLnStatsumLn                   =  0x00D801DC,
+	HwPfPcieLnRcbusScratch                =  0x00D801E0,
+	HwPfPcieLnRcbusMinorrev               =  0x00D801F0,
+	HwPfPcieLnRcbusMajorrev               =  0x00D801F4,
+	HwPfPcieLnRcbusBlocktype              =  0x00D801F8,
+	HwPfPcieSupPllcsr                     =  0x00D80800,
+	HwPfPcieSupPlldiv                     =  0x00D80804,
+	HwPfPcieSupPllcal                     =  0x00D80808,
+	HwPfPcieSupPllcalsts                  =  0x00D8080C,
+	HwPfPcieSupPllmeas                    =  0x00D80810,
+	HwPfPcieSupPlldactrim                 =  0x00D80814,
+	HwPfPcieSupPllbiastrim                =  0x00D80818,
+	HwPfPcieSupPllbwtrim                  =  0x00D8081C,
+	HwPfPcieSupPllcaldly                  =  0x00D80820,
+	HwPfPcieSupRefclkonpclkctrl           =  0x00D80824,
+	HwPfPcieSupPclkdelay                  =  0x00D80828,
+	HwPfPcieSupPhyconfig                  =  0x00D8082C,
+	HwPfPcieSupRcalIntf                   =  0x00D80830,
+	HwPfPcieSupAuxcsr                     =  0x00D80834,
+	HwPfPcieSupVref                       =  0x00D80838,
+	HwPfPcieSupLinkmode                   =  0x00D8083C,
+	HwPfPcieSupRrefcalctl                 =  0x00D80840,
+	HwPfPcieSupRrefcal                    =  0x00D80844,
+	HwPfPcieSupRrefcaldly                 =  0x00D80848,
+	HwPfPcieSupTximpcalctl                =  0x00D8084C,
+	HwPfPcieSupTximpcal                   =  0x00D80850,
+	HwPfPcieSupTximpoffset                =  0x00D80854,
+	HwPfPcieSupTximpcaldly                =  0x00D80858,
+	HwPfPcieSupRximpcalctl                =  0x00D8085C,
+	HwPfPcieSupRximpcal                   =  0x00D80860,
+	HwPfPcieSupRximpoffset                =  0x00D80864,
+	HwPfPcieSupRximpcaldly                =  0x00D80868,
+	HwPfPcieSupFence                      =  0x00D8086C,
+	HwPfPcieSupMtcs                       =  0x00D80870,
+	HwPfPcieSupStatsum                    =  0x00D809B8,
+	HwPfPcieRomVersion                    =  0x00D80B0C,
+	HwPfPciePcsDpStatus0                  =  0x00D81000,
+	HwPfPciePcsDpControl0                 =  0x00D81004,
+	HwPfPciePcsPmaStatusLane0             =  0x00D81008,
+	HwPfPciePcsPipeStatusLane0            =  0x00D8100C,
+	HwPfPciePcsTxdeemph0Lane0             =  0x00D81010,
+	HwPfPciePcsTxdeemph1Lane0             =  0x00D81014,
+	HwPfPciePcsInternalStatusLane0        =  0x00D81018,
+	HwPfPciePcsDpStatus1                  =  0x00D8101C,
+	HwPfPciePcsDpControl1                 =  0x00D81020,
+	HwPfPciePcsPmaStatusLane1             =  0x00D81024,
+	HwPfPciePcsPipeStatusLane1            =  0x00D81028,
+	HwPfPciePcsTxdeemph0Lane1             =  0x00D8102C,
+	HwPfPciePcsTxdeemph1Lane1             =  0x00D81030,
+	HwPfPciePcsInternalStatusLane1        =  0x00D81034,
+	HwPfPciePcsDpStatus2                  =  0x00D81038,
+	HwPfPciePcsDpControl2                 =  0x00D8103C,
+	HwPfPciePcsPmaStatusLane2             =  0x00D81040,
+	HwPfPciePcsPipeStatusLane2            =  0x00D81044,
+	HwPfPciePcsTxdeemph0Lane2             =  0x00D81048,
+	HwPfPciePcsTxdeemph1Lane2             =  0x00D8104C,
+	HwPfPciePcsInternalStatusLane2        =  0x00D81050,
+	HwPfPciePcsDpStatus3                  =  0x00D81054,
+	HwPfPciePcsDpControl3                 =  0x00D81058,
+	HwPfPciePcsPmaStatusLane3             =  0x00D8105C,
+	HwPfPciePcsPipeStatusLane3            =  0x00D81060,
+	HwPfPciePcsTxdeemph0Lane3             =  0x00D81064,
+	HwPfPciePcsTxdeemph1Lane3             =  0x00D81068,
+	HwPfPciePcsInternalStatusLane3        =  0x00D8106C,
+	HwPfPciePcsEbStatus0                  =  0x00D81070,
+	HwPfPciePcsEbStatus1                  =  0x00D81074,
+	HwPfPciePcsEbStatus2                  =  0x00D81078,
+	HwPfPciePcsEbStatus3                  =  0x00D8107C,
+	HwPfPciePcsPllSettingPcieG1           =  0x00D81088,
+	HwPfPciePcsPllSettingPcieG2           =  0x00D8108C,
+	HwPfPciePcsPllSettingPcieG3           =  0x00D81090,
+	HwPfPciePcsControl                    =  0x00D81094,
+	HwPfPciePcsEqControl                  =  0x00D81098,
+	HwPfPciePcsEqTimer                    =  0x00D8109C,
+	HwPfPciePcsEqErrStatus                =  0x00D810A0,
+	HwPfPciePcsEqErrCount                 =  0x00D810A4,
+	HwPfPciePcsStatus                     =  0x00D810A8,
+	HwPfPciePcsMiscRegister               =  0x00D810AC,
+	HwPfPciePcsObsControl                 =  0x00D810B0,
+	HwPfPciePcsPrbsCount0                 =  0x00D81200,
+	HwPfPciePcsBistControl0               =  0x00D81204,
+	HwPfPciePcsBistStaticWord00           =  0x00D81208,
+	HwPfPciePcsBistStaticWord10           =  0x00D8120C,
+	HwPfPciePcsBistStaticWord20           =  0x00D81210,
+	HwPfPciePcsBistStaticWord30           =  0x00D81214,
+	HwPfPciePcsPrbsCount1                 =  0x00D81220,
+	HwPfPciePcsBistControl1               =  0x00D81224,
+	HwPfPciePcsBistStaticWord01           =  0x00D81228,
+	HwPfPciePcsBistStaticWord11           =  0x00D8122C,
+	HwPfPciePcsBistStaticWord21           =  0x00D81230,
+	HwPfPciePcsBistStaticWord31           =  0x00D81234,
+	HwPfPciePcsPrbsCount2                 =  0x00D81240,
+	HwPfPciePcsBistControl2               =  0x00D81244,
+	HwPfPciePcsBistStaticWord02           =  0x00D81248,
+	HwPfPciePcsBistStaticWord12           =  0x00D8124C,
+	HwPfPciePcsBistStaticWord22           =  0x00D81250,
+	HwPfPciePcsBistStaticWord32           =  0x00D81254,
+	HwPfPciePcsPrbsCount3                 =  0x00D81260,
+	HwPfPciePcsBistControl3               =  0x00D81264,
+	HwPfPciePcsBistStaticWord03           =  0x00D81268,
+	HwPfPciePcsBistStaticWord13           =  0x00D8126C,
+	HwPfPciePcsBistStaticWord23           =  0x00D81270,
+	HwPfPciePcsBistStaticWord33           =  0x00D81274,
+	HwPfPcieGpexLtssmStateCntrl           =  0x00D90400,
+	HwPfPcieGpexLtssmStateStatus          =  0x00D90404,
+	HwPfPcieGpexSkipFreqTimer             =  0x00D90408,
+	HwPfPcieGpexLaneSelect                =  0x00D9040C,
+	HwPfPcieGpexLaneDeskew                =  0x00D90410,
+	HwPfPcieGpexRxErrorStatus             =  0x00D90414,
+	HwPfPcieGpexLaneNumControl            =  0x00D90418,
+	HwPfPcieGpexNFstControl               =  0x00D9041C,
+	HwPfPcieGpexLinkStatus                =  0x00D90420,
+	HwPfPcieGpexAckReplayTimeout          =  0x00D90438,
+	HwPfPcieGpexSeqNumberStatus           =  0x00D9043C,
+	HwPfPcieGpexCoreClkRatio              =  0x00D90440,
+	HwPfPcieGpexDllTholdControl           =  0x00D90448,
+	HwPfPcieGpexPmTimer                   =  0x00D90450,
+	HwPfPcieGpexPmeTimeout                =  0x00D90454,
+	HwPfPcieGpexAspmL1Timer               =  0x00D90458,
+	HwPfPcieGpexAspmReqTimer              =  0x00D9045C,
+	HwPfPcieGpexAspmL1Dis                 =  0x00D90460,
+	HwPfPcieGpexAdvisoryErrorControl      =  0x00D90468,
+	HwPfPcieGpexId                        =  0x00D90470,
+	HwPfPcieGpexClasscode                 =  0x00D90474,
+	HwPfPcieGpexSubsystemId               =  0x00D90478,
+	HwPfPcieGpexDeviceCapabilities        =  0x00D9047C,
+	HwPfPcieGpexLinkCapabilities          =  0x00D90480,
+	HwPfPcieGpexFunctionNumber            =  0x00D90484,
+	HwPfPcieGpexPmCapabilities            =  0x00D90488,
+	HwPfPcieGpexFunctionSelect            =  0x00D9048C,
+	HwPfPcieGpexErrorCounter              =  0x00D904AC,
+	HwPfPcieGpexConfigReady               =  0x00D904B0,
+	HwPfPcieGpexFcUpdateTimeout           =  0x00D904B8,
+	HwPfPcieGpexFcUpdateTimer             =  0x00D904BC,
+	HwPfPcieGpexVcBufferLoad              =  0x00D904C8,
+	HwPfPcieGpexVcBufferSizeThold         =  0x00D904CC,
+	HwPfPcieGpexVcBufferSelect            =  0x00D904D0,
+	HwPfPcieGpexBarEnable                 =  0x00D904D4,
+	HwPfPcieGpexBarDwordLower             =  0x00D904D8,
+	HwPfPcieGpexBarDwordUpper             =  0x00D904DC,
+	HwPfPcieGpexBarSelect                 =  0x00D904E0,
+	HwPfPcieGpexCreditCounterSelect       =  0x00D904E4,
+	HwPfPcieGpexCreditCounterStatus       =  0x00D904E8,
+	HwPfPcieGpexTlpHeaderSelect           =  0x00D904EC,
+	HwPfPcieGpexTlpHeaderDword0           =  0x00D904F0,
+	HwPfPcieGpexTlpHeaderDword1           =  0x00D904F4,
+	HwPfPcieGpexTlpHeaderDword2           =  0x00D904F8,
+	HwPfPcieGpexTlpHeaderDword3           =  0x00D904FC,
+	HwPfPcieGpexRelaxOrderControl         =  0x00D90500,
+	HwPfPcieGpexBarPrefetch               =  0x00D90504,
+	HwPfPcieGpexFcCheckControl            =  0x00D90508,
+	HwPfPcieGpexFcUpdateTimerTraffic      =  0x00D90518,
+	HwPfPcieGpexPhyControl0               =  0x00D9053C,
+	HwPfPcieGpexPhyControl1               =  0x00D90544,
+	HwPfPcieGpexPhyControl2               =  0x00D9054C,
+	HwPfPcieGpexUserControl0              =  0x00D9055C,
+	HwPfPcieGpexUncorrErrorStatus         =  0x00D905F0,
+	HwPfPcieGpexRxCplError                =  0x00D90620,
+	HwPfPcieGpexRxCplErrorDword0          =  0x00D90624,
+	HwPfPcieGpexRxCplErrorDword1          =  0x00D90628,
+	HwPfPcieGpexRxCplErrorDword2          =  0x00D9062C,
+	HwPfPcieGpexPabSwResetEn              =  0x00D90630,
+	HwPfPcieGpexGen3Control0              =  0x00D90634,
+	HwPfPcieGpexGen3Control1              =  0x00D90638,
+	HwPfPcieGpexGen3Control2              =  0x00D9063C,
+	HwPfPcieGpexGen2ControlCsr            =  0x00D90640,
+	HwPfPcieGpexTotalVfInitialVf0         =  0x00D90644,
+	HwPfPcieGpexTotalVfInitialVf1         =  0x00D90648,
+	HwPfPcieGpexSriovLinkDevId0           =  0x00D90684,
+	HwPfPcieGpexSriovLinkDevId1           =  0x00D90688,
+	HwPfPcieGpexSriovPageSize0            =  0x00D906C4,
+	HwPfPcieGpexSriovPageSize1            =  0x00D906C8,
+	HwPfPcieGpexIdVersion                 =  0x00D906FC,
+	HwPfPcieGpexSriovVfOffsetStride0      =  0x00D90704,
+	HwPfPcieGpexSriovVfOffsetStride1      =  0x00D90708,
+	HwPfPcieGpexGen3DeskewControl         =  0x00D907B4,
+	HwPfPcieGpexGen3EqControl             =  0x00D907B8,
+	HwPfPcieGpexBridgeVersion             =  0x00D90800,
+	HwPfPcieGpexBridgeCapability          =  0x00D90804,
+	HwPfPcieGpexBridgeControl             =  0x00D90808,
+	HwPfPcieGpexBridgeStatus              =  0x00D9080C,
+	HwPfPcieGpexEngineActivityStatus      =  0x00D9081C,
+	HwPfPcieGpexEngineResetControl        =  0x00D90820,
+	HwPfPcieGpexAxiPioControl             =  0x00D90840,
+	HwPfPcieGpexAxiPioStatus              =  0x00D90844,
+	HwPfPcieGpexAmbaSlaveCmdStatus        =  0x00D90848,
+	HwPfPcieGpexPexPioControl             =  0x00D908C0,
+	HwPfPcieGpexPexPioStatus              =  0x00D908C4,
+	HwPfPcieGpexAmbaMasterStatus          =  0x00D908C8,
+	HwPfPcieGpexCsrSlaveCmdStatus         =  0x00D90920,
+	HwPfPcieGpexMailboxAxiControl         =  0x00D90A50,
+	HwPfPcieGpexMailboxAxiData            =  0x00D90A54,
+	HwPfPcieGpexMailboxPexControl         =  0x00D90A90,
+	HwPfPcieGpexMailboxPexData            =  0x00D90A94,
+	HwPfPcieGpexPexInterruptEnable        =  0x00D90AD0,
+	HwPfPcieGpexPexInterruptStatus        =  0x00D90AD4,
+	HwPfPcieGpexPexInterruptAxiPioVector  =  0x00D90AD8,
+	HwPfPcieGpexPexInterruptPexPioVector  =  0x00D90AE0,
+	HwPfPcieGpexPexInterruptMiscVector    =  0x00D90AF8,
+	HwPfPcieGpexAmbaInterruptPioEnable    =  0x00D90B00,
+	HwPfPcieGpexAmbaInterruptMiscEnable   =  0x00D90B0C,
+	HwPfPcieGpexAmbaInterruptPioStatus    =  0x00D90B10,
+	HwPfPcieGpexAmbaInterruptMiscStatus   =  0x00D90B1C,
+	HwPfPcieGpexPexPmControl              =  0x00D90B80,
+	HwPfPcieGpexSlotMisc                  =  0x00D90B88,
+	HwPfPcieGpexAxiAddrMappingControl     =  0x00D90BA0,
+	HwPfPcieGpexAxiAddrMappingWindowAxiBase     =  0x00D90BA4,
+	HwPfPcieGpexAxiAddrMappingWindowPexBaseLow  =  0x00D90BA8,
+	HwPfPcieGpexAxiAddrMappingWindowPexBaseHigh =  0x00D90BAC,
+	HwPfPcieGpexPexBarAddrFunc0Bar0       =  0x00D91BA0,
+	HwPfPcieGpexPexBarAddrFunc0Bar1       =  0x00D91BA4,
+	HwPfPcieGpexAxiAddrMappingPcieHdrParam =  0x00D95BA0,
+	HwPfPcieGpexExtAxiAddrMappingAxiBase  =  0x00D980A0,
+	HwPfPcieGpexPexExtBarAddrFunc0Bar0    =  0x00D984A0,
+	HwPfPcieGpexPexExtBarAddrFunc0Bar1    =  0x00D984A4,
+	HwPfPcieGpexAmbaInterruptFlrEnable    =  0x00D9B960,
+	HwPfPcieGpexAmbaInterruptFlrStatus    =  0x00D9B9A0,
+	HwPfPcieGpexExtAxiAddrMappingSize     =  0x00D9BAF0,
+	HwPfPcieGpexPexPioAwcacheControl      =  0x00D9C300,
+	HwPfPcieGpexPexPioArcacheControl      =  0x00D9C304,
+	HwPfPcieGpexPabObSizeControlVc0       =  0x00D9C310
+};
+
+/* TIP PF Interrupt numbers */
+enum {
+	ACC101_PF_INT_QMGR_AQ_OVERFLOW = 0,
+	ACC101_PF_INT_DOORBELL_VF_2_PF = 1,
+	ACC101_PF_INT_DMA_DL_DESC_IRQ = 2,
+	ACC101_PF_INT_DMA_UL_DESC_IRQ = 3,
+	ACC101_PF_INT_DMA_MLD_DESC_IRQ = 4,
+	ACC101_PF_INT_DMA_UL5G_DESC_IRQ = 5,
+	ACC101_PF_INT_DMA_DL5G_DESC_IRQ = 6,
+	ACC101_PF_INT_ILLEGAL_FORMAT = 7,
+	ACC101_PF_INT_QMGR_DISABLED_ACCESS = 8,
+	ACC101_PF_INT_QMGR_AQ_OVERTHRESHOLD = 9,
+	ACC101_PF_INT_ARAM_ACCESS_ERR = 10,
+	ACC101_PF_INT_ARAM_ECC_1BIT_ERR = 11,
+	ACC101_PF_INT_PARITY_ERR = 12,
+	ACC101_PF_INT_QMGR_ERR = 13,
+	ACC101_PF_INT_INT_REQ_OVERFLOW = 14,
+	ACC101_PF_INT_APB_TIMEOUT = 15,
+	ACC101_PF_INT_CORE_HANG = 16,
+	ACC101_PF_INT_CLUSTER_HANG = 17,
+	ACC101_PF_INT_WBB_ERROR = 18,
+	ACC101_PF_INT_5G_EXTRAREAD = 24,
+	ACC101_PF_INT_5G_READ_TIMEOUT = 25,
+	ACC101_PF_INT_5G_ERROR = 26,
+	ACC101_PF_INT_PCIE_ERROR = 27,
+	ACC101_PF_INT_DDR_ERROR = 28,
+	ACC101_PF_INT_MISC_ERROR = 29,
+	ACC101_PF_INT_I2C = 30,
+};
+
+#endif /* ACC101_PF_ENUM_H */
diff --git a/drivers/baseband/acc101/acc101_vf_enum.h b/drivers/baseband/acc101/acc101_vf_enum.h
new file mode 100644
index 0000000..87d24d4
--- /dev/null
+++ b/drivers/baseband/acc101/acc101_vf_enum.h
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef ACC101_VF_ENUM_H
+#define ACC101_VF_ENUM_H
+
+/*
+ * ACC101 Register mapping on VF BAR0
+ * This is automatically generated from RDL, format may change with new RDL
+ */
+enum {
+	HWVfQmgrIngressAq             =  0x00000000,
+	HWVfHiVfToPfDbellVf           =  0x00000800,
+	HWVfHiPfToVfDbellVf           =  0x00000808,
+	HWVfHiInfoRingBaseLoVf        =  0x00000810,
+	HWVfHiInfoRingBaseHiVf        =  0x00000814,
+	HWVfHiInfoRingPointerVf       =  0x00000818,
+	HWVfHiInfoRingIntWrEnVf       =  0x00000820,
+	HWVfHiInfoRingPf2VfWrEnVf     =  0x00000824,
+	HWVfHiMsixVectorMapperVf      =  0x00000860,
+	HWVfDmaFec5GulDescBaseLoRegVf =  0x00000920,
+	HWVfDmaFec5GulDescBaseHiRegVf =  0x00000924,
+	HWVfDmaFec5GulRespPtrLoRegVf  =  0x00000928,
+	HWVfDmaFec5GulRespPtrHiRegVf  =  0x0000092C,
+	HWVfDmaFec5GdlDescBaseLoRegVf =  0x00000940,
+	HWVfDmaFec5GdlDescBaseHiRegVf =  0x00000944,
+	HWVfDmaFec5GdlRespPtrLoRegVf  =  0x00000948,
+	HWVfDmaFec5GdlRespPtrHiRegVf  =  0x0000094C,
+	HWVfDmaFec4GulDescBaseLoRegVf =  0x00000960,
+	HWVfDmaFec4GulDescBaseHiRegVf =  0x00000964,
+	HWVfDmaFec4GulRespPtrLoRegVf  =  0x00000968,
+	HWVfDmaFec4GulRespPtrHiRegVf  =  0x0000096C,
+	HWVfDmaFec4GdlDescBaseLoRegVf =  0x00000980,
+	HWVfDmaFec4GdlDescBaseHiRegVf =  0x00000984,
+	HWVfDmaFec4GdlRespPtrLoRegVf  =  0x00000988,
+	HWVfDmaFec4GdlRespPtrHiRegVf  =  0x0000098C,
+	HWVfDmaDdrBaseRangeRoVf       =  0x000009A0,
+	HWVfQmgrAqResetVf             =  0x00000E00,
+	HWVfQmgrRingSizeVf            =  0x00000E04,
+	HWVfQmgrGrpDepthLog20Vf       =  0x00000E08,
+	HWVfQmgrGrpDepthLog21Vf       =  0x00000E0C,
+	HWVfQmgrGrpFunction0Vf        =  0x00000E10,
+	HWVfQmgrGrpFunction1Vf        =  0x00000E14,
+	HWVfPmACntrlRegVf             =  0x00000F40,
+	HWVfPmACountVf                =  0x00000F48,
+	HWVfPmAKCntLoVf               =  0x00000F50,
+	HWVfPmAKCntHiVf               =  0x00000F54,
+	HWVfPmADeltaCntLoVf           =  0x00000F60,
+	HWVfPmADeltaCntHiVf           =  0x00000F64,
+	HWVfPmBCntrlRegVf             =  0x00000F80,
+	HWVfPmBCountVf                =  0x00000F88,
+	HWVfPmBKCntLoVf               =  0x00000F90,
+	HWVfPmBKCntHiVf               =  0x00000F94,
+	HWVfPmBDeltaCntLoVf           =  0x00000FA0,
+	HWVfPmBDeltaCntHiVf           =  0x00000FA4
+};
+
+/* TIP VF Interrupt numbers */
+enum {
+	ACC101_VF_INT_QMGR_AQ_OVERFLOW = 0,
+	ACC101_VF_INT_DOORBELL_VF_2_PF = 1,
+	ACC101_VF_INT_DMA_DL_DESC_IRQ = 2,
+	ACC101_VF_INT_DMA_UL_DESC_IRQ = 3,
+	ACC101_VF_INT_DMA_MLD_DESC_IRQ = 4,
+	ACC101_VF_INT_DMA_UL5G_DESC_IRQ = 5,
+	ACC101_VF_INT_DMA_DL5G_DESC_IRQ = 6,
+	ACC101_VF_INT_ILLEGAL_FORMAT = 7,
+	ACC101_VF_INT_QMGR_DISABLED_ACCESS = 8,
+	ACC101_VF_INT_QMGR_AQ_OVERTHRESHOLD = 9,
+};
+
+/* TIP PF2VF Comms */
+enum {
+	ACC101_VF2PF_STATUS_REQUEST = 0,
+	ACC101_VF2PF_USING_VF = 1,
+};
+
+#endif /* ACC101_VF_ENUM_H */
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.h b/drivers/baseband/acc101/rte_acc101_pmd.h
index 499c341..66641f3 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.h
+++ b/drivers/baseband/acc101/rte_acc101_pmd.h
@@ -5,6 +5,9 @@
 #ifndef _RTE_ACC101_PMD_H_
 #define _RTE_ACC101_PMD_H_
 
+#include "acc101_pf_enum.h"
+#include "acc101_vf_enum.h"
+
 /* Helper macro for logging */
 #define rte_bbdev_log(level, fmt, ...) \
 	rte_log(RTE_LOG_ ## level, acc101_logtype, fmt "\n", \
@@ -27,6 +30,454 @@
 #define RTE_ACC101_PF_DEVICE_ID        (0x57c4)
 #define RTE_ACC101_VF_DEVICE_ID        (0x57c5)
 
+/* Values used in filling in descriptors */
+#define ACC101_DMA_DESC_TYPE           2
+#define ACC101_DMA_CODE_BLK_MODE       0
+#define ACC101_DMA_BLKID_FCW           1
+#define ACC101_DMA_BLKID_IN            2
+#define ACC101_DMA_BLKID_OUT_ENC       1
+#define ACC101_DMA_BLKID_OUT_HARD      1
+#define ACC101_DMA_BLKID_OUT_SOFT      2
+#define ACC101_DMA_BLKID_OUT_HARQ      3
+#define ACC101_DMA_BLKID_IN_HARQ       3
+
+/* Values used in filling in decode FCWs */
+#define ACC101_FCW_TD_VER              1
+#define ACC101_FCW_TD_EXT_COLD_REG_EN  1
+#define ACC101_FCW_TD_AUTOMAP          0x0f
+#define ACC101_FCW_TD_RVIDX_0          2
+#define ACC101_FCW_TD_RVIDX_1          26
+#define ACC101_FCW_TD_RVIDX_2          50
+#define ACC101_FCW_TD_RVIDX_3          74
+
+/* Values used in writing to the registers */
+#define ACC101_REG_IRQ_EN_ALL          0x1FF83FF  /* Enable all interrupts */
+
+/* Number of elements in HARQ layout memory
+ * 128M x 32B = 4GB addressable memory
+ */
+#define ACC101_HARQ_LAYOUT             (128 * 1024 * 1024)
+/* Mask used to calculate an index in an Info Ring array (not a byte offset) */
+#define ACC101_INFO_RING_MASK          (ACC101_INFO_RING_NUM_ENTRIES-1)
+/* Number of Virtual Functions ACC101 supports */
+#define ACC101_NUM_VFS                  16
+#define ACC101_NUM_QGRPS                8
+#define ACC101_NUM_QGRPS_PER_WORD       8
+#define ACC101_NUM_AQS                  16
+#define MAX_ENQ_BATCH_SIZE              255
+/* All ACC101 registers are 32 bits (4 bytes) wide */
+#define ACC101_BYTES_IN_WORD                 4
+
+#define ACC101_GRP_ID_SHIFT    10 /* Queue Index Hierarchy */
+#define ACC101_VF_ID_SHIFT     4  /* Queue Index Hierarchy */
+#define ACC101_QUEUE_ENABLE    0x80000000  /* Bit to mark Queue as Enabled */
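+/* A queue index is assumed to combine the fields above as
+ * (qgrp_id << ACC101_GRP_ID_SHIFT) + (vf_id << ACC101_VF_ID_SHIFT) + aq_id
+ * (illustrative sketch only; the PMD queue setup code is authoritative)
+ */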
+
+#define ACC101_NUM_ACCS       5
+#define ACC101_ACCMAP_0       0
+#define ACC101_ACCMAP_1       2
+#define ACC101_ACCMAP_2       1
+#define ACC101_ACCMAP_3       3
+#define ACC101_ACCMAP_4       4
+#define ACC101_PF_VAL         2
+
+/* Maximum number of attempts to allocate the memory block for all rings */
+#define ACC101_SW_RING_MEM_ALLOC_ATTEMPTS 5
+#define ACC101_MAX_QUEUE_DEPTH            1024
+#define ACC101_DMA_MAX_NUM_POINTERS       14
+#define ACC101_DMA_MAX_NUM_POINTERS_IN    7
+#define ACC101_DMA_DESC_PADDING           8
+#define ACC101_FCW_PADDING                12
+#define ACC101_COMPANION_PTRS             8
+
+#define ACC101_FCW_VER         2
+
+#define ACC101_LONG_WAIT        1000
+
+/* ACC101 DMA Descriptor triplet */
+struct acc101_dma_triplet {
+	uint64_t address;
+	uint32_t blen:20,
+		res0:4,
+		last:1,
+		dma_ext:1,
+		res1:2,
+		blkid:4;
+} __rte_packed;
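+
+/* Illustrative use of the triplet for an input buffer (hypothetical
+ * values, shown for documentation only):
+ * { .address = <iova>, .blen = <length in bytes>, .last = 1,
+ *   .dma_ext = 0, .blkid = ACC101_DMA_BLKID_IN }
+ */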
+
+/* ACC101 DMA Response Descriptor */
+union acc101_dma_rsp_desc {
+	uint32_t val;
+	struct {
+		uint32_t crc_status:1,
+			synd_ok:1,
+			dma_err:1,
+			neg_stop:1,
+			fcw_err:1,
+			output_err:1,
+			input_err:1,
+			timestampEn:1,
+			iterCountFrac:8,
+			iter_cnt:8,
+			harq_failure:1,
+			rsrvd3:5,
+			sdone:1,
+			fdone:1;
+		uint32_t add_info_0;
+		uint32_t add_info_1;
+	};
+};
+
+
+/* ACC101 Queue Manager Enqueue PCI Register */
+union acc101_enqueue_reg_fmt {
+	uint32_t val;
+	struct {
+		uint32_t num_elem:8,
+			addr_offset:3,
+			rsrvd:1,
+			req_elem_addr:20;
+	};
+};
+
+/* FEC 4G Uplink Frame Control Word */
+struct __rte_packed acc101_fcw_td {
+	uint8_t fcw_ver:4,
+		num_maps:4; /* Unused */
+	uint8_t filler:6, /* Unused */
+		rsrvd0:1,
+		bypass_sb_deint:1;
+	uint16_t k_pos;
+	uint16_t k_neg; /* Unused */
+	uint8_t c_neg; /* Unused */
+	uint8_t c; /* Unused */
+	uint32_t ea; /* Unused */
+	uint32_t eb; /* Unused */
+	uint8_t cab; /* Unused */
+	uint8_t k0_start_col; /* Unused */
+	uint8_t rsrvd1;
+	uint8_t code_block_mode:1, /* Unused */
+		turbo_crc_type:1,
+		rsrvd2:3,
+		bypass_teq:1, /* Unused */
+		soft_output_en:1, /* Unused */
+		ext_td_cold_reg_en:1;
+	union { /* External Cold register */
+		uint32_t ext_td_cold_reg;
+		struct {
+			uint32_t min_iter:4, /* Unused */
+				max_iter:4,
+				ext_scale:5, /* Unused */
+				rsrvd3:3,
+				early_stop_en:1, /* Unused */
+				sw_soft_out_dis:1, /* Unused */
+				sw_et_cont:1, /* Unused */
+				sw_soft_out_saturation:1, /* Unused */
+				half_iter_on:1, /* Unused */
+				raw_decoder_input_on:1, /* Unused */
+				rsrvd4:10;
+		};
+	};
+};
+
+/* FEC 5GNR Uplink Frame Control Word */
+struct __rte_packed acc101_fcw_ld {
+	uint32_t FCWversion:4,
+		qm:4,
+		nfiller:11,
+		BG:1,
+		Zc:9,
+		res0:1,
+		synd_precoder:1,
+		synd_post:1;
+	uint32_t ncb:16,
+		k0:16;
+	uint32_t rm_e:24,
+		hcin_en:1,
+		hcout_en:1,
+		crc_select:1,
+		bypass_dec:1,
+		bypass_intlv:1,
+		so_en:1,
+		so_bypass_rm:1,
+		so_bypass_intlv:1;
+	uint32_t hcin_offset:16,
+		hcin_size0:16;
+	uint32_t hcin_size1:16,
+		hcin_decomp_mode:3,
+		llr_pack_mode:1,
+		hcout_comp_mode:3,
+		res2:1,
+		dec_convllr:4,
+		hcout_convllr:4;
+	uint32_t itmax:7,
+		itstop:1,
+		so_it:7,
+		res3:1,
+		hcout_offset:16;
+	uint32_t hcout_size0:16,
+		hcout_size1:16;
+	uint32_t gain_i:8,
+		gain_h:8,
+		negstop_th:16;
+	uint32_t negstop_it:7,
+		negstop_en:1,
+		res4:24;
+};
+
+/* FEC 4G Downlink Frame Control Word */
+struct __rte_packed acc101_fcw_te {
+	uint16_t k_neg;
+	uint16_t k_pos;
+	uint8_t c_neg;
+	uint8_t c;
+	uint8_t filler;
+	uint8_t cab;
+	uint32_t ea:17,
+		rsrvd0:15;
+	uint32_t eb:17,
+		rsrvd1:15;
+	uint16_t ncb_neg;
+	uint16_t ncb_pos;
+	uint8_t rv_idx0:2,
+		rsrvd2:2,
+		rv_idx1:2,
+		rsrvd3:2;
+	uint8_t bypass_rv_idx0:1,
+		bypass_rv_idx1:1,
+		bypass_rm:1,
+		rsrvd4:5;
+	uint8_t rsrvd5:1,
+		rsrvd6:3,
+		code_block_crc:1,
+		rsrvd7:3;
+	uint8_t code_block_mode:1,
+		rsrvd8:7;
+	uint64_t rsrvd9;
+};
+
+/* FEC 5GNR Downlink Frame Control Word */
+struct __rte_packed acc101_fcw_le {
+	uint32_t FCWversion:4,
+		qm:4,
+		nfiller:11,
+		BG:1,
+		Zc:9,
+		res0:3;
+	uint32_t ncb:16,
+		k0:16;
+	uint32_t rm_e:24,
+		res1:2,
+		crc_select:1,
+		res2:1,
+		bypass_intlv:1,
+		res3:3;
+	uint32_t res4_a:12,
+		mcb_count:3,
+		res4_b:17;
+	uint32_t res5;
+	uint32_t res6;
+	uint32_t res7;
+	uint32_t res8;
+};
+
+struct __rte_packed acc101_pad_ptr {
+	void *op_addr;
+	uint64_t pad1;  /* pad to 64 bits */
+};
+
+struct __rte_packed acc101_ptrs {
+	struct acc101_pad_ptr ptr[ACC101_COMPANION_PTRS];
+};
+
+/* ACC101 DMA Request Descriptor */
+struct __rte_packed acc101_dma_req_desc {
+	union {
+		struct {
+			uint32_t type:4,
+				rsrvd0:26,
+				sdone:1,
+				fdone:1;
+			uint32_t rsrvd1;
+			uint32_t rsrvd2;
+			uint32_t pass_param:8,
+				sdone_enable:1,
+				irq_enable:1,
+				timeStampEn:1,
+				res0:5,
+				numCBs:4,
+				res1:4,
+				m2dlen:4,
+				d2mlen:4;
+		};
+		struct {
+			uint32_t word0;
+			uint32_t word1;
+			uint32_t word2;
+			uint32_t word3;
+		};
+	};
+	struct acc101_dma_triplet data_ptrs[ACC101_DMA_MAX_NUM_POINTERS];
+
+	/* Virtual addresses used to retrieve SW context info */
+	union {
+		void *op_addr;
+		uint64_t pad1;  /* pad to 64 bits */
+	};
+	/*
+	 * Stores additional information needed for driver processing:
+	 * - last_desc_in_batch - flag used to mark last descriptor (CB)
+	 *                        in batch
+	 * - cbs_in_tb - stores information about total number of Code Blocks
+	 *               in currently processed Transport Block
+	 */
+	union {
+		struct {
+			union {
+				struct acc101_fcw_ld fcw_ld;
+				struct acc101_fcw_td fcw_td;
+				struct acc101_fcw_le fcw_le;
+				struct acc101_fcw_te fcw_te;
+				uint32_t pad2[ACC101_FCW_PADDING];
+			};
+			uint32_t last_desc_in_batch:8,
+				cbs_in_tb:8,
+				pad4:16;
+		};
+		uint64_t pad3[ACC101_DMA_DESC_PADDING]; /* pad to 64 bits */
+	};
+};
+
+/* ACC101 DMA Descriptor */
+union acc101_dma_desc {
+	struct acc101_dma_req_desc req;
+	union acc101_dma_rsp_desc rsp;
+	uint64_t atom_hdr;
+};
+
+
+/* Union describing HARQ layout memory entry */
+union acc101_harq_layout_data {
+	uint32_t val;
+	struct {
+		uint16_t offset;
+		uint16_t size0;
+	};
+} __rte_packed;
+
+
+/* Union describing Info Ring entry */
+union acc101_info_ring_data {
+	uint32_t val;
+	struct {
+		union {
+			uint16_t detailed_info;
+			struct {
+				uint16_t aq_id: 4;
+				uint16_t qg_id: 4;
+				uint16_t vf_id: 6;
+				uint16_t reserved: 2;
+			};
+		};
+		uint16_t int_nb: 7;
+		uint16_t msi_0: 1;
+		uint16_t vf2pf: 6;
+		uint16_t loop: 1;
+		uint16_t valid: 1;
+	};
+} __rte_packed;
+
+struct acc101_registry_addr {
+	unsigned int dma_ring_dl5g_hi;
+	unsigned int dma_ring_dl5g_lo;
+	unsigned int dma_ring_ul5g_hi;
+	unsigned int dma_ring_ul5g_lo;
+	unsigned int dma_ring_dl4g_hi;
+	unsigned int dma_ring_dl4g_lo;
+	unsigned int dma_ring_ul4g_hi;
+	unsigned int dma_ring_ul4g_lo;
+	unsigned int ring_size;
+	unsigned int info_ring_hi;
+	unsigned int info_ring_lo;
+	unsigned int info_ring_en;
+	unsigned int info_ring_ptr;
+	unsigned int tail_ptrs_dl5g_hi;
+	unsigned int tail_ptrs_dl5g_lo;
+	unsigned int tail_ptrs_ul5g_hi;
+	unsigned int tail_ptrs_ul5g_lo;
+	unsigned int tail_ptrs_dl4g_hi;
+	unsigned int tail_ptrs_dl4g_lo;
+	unsigned int tail_ptrs_ul4g_hi;
+	unsigned int tail_ptrs_ul4g_lo;
+	unsigned int depth_log0_offset;
+	unsigned int depth_log1_offset;
+	unsigned int qman_group_func;
+	unsigned int ddr_range;
+	unsigned int pmon_ctrl_a;
+	unsigned int pmon_ctrl_b;
+};
+
+/* Structure holding registry addresses for PF */
+static const struct acc101_registry_addr pf_reg_addr = {
+	.dma_ring_dl5g_hi = HWPfDmaFec5GdlDescBaseHiRegVf,
+	.dma_ring_dl5g_lo = HWPfDmaFec5GdlDescBaseLoRegVf,
+	.dma_ring_ul5g_hi = HWPfDmaFec5GulDescBaseHiRegVf,
+	.dma_ring_ul5g_lo = HWPfDmaFec5GulDescBaseLoRegVf,
+	.dma_ring_dl4g_hi = HWPfDmaFec4GdlDescBaseHiRegVf,
+	.dma_ring_dl4g_lo = HWPfDmaFec4GdlDescBaseLoRegVf,
+	.dma_ring_ul4g_hi = HWPfDmaFec4GulDescBaseHiRegVf,
+	.dma_ring_ul4g_lo = HWPfDmaFec4GulDescBaseLoRegVf,
+	.ring_size = HWPfQmgrRingSizeVf,
+	.info_ring_hi = HWPfHiInfoRingBaseHiRegPf,
+	.info_ring_lo = HWPfHiInfoRingBaseLoRegPf,
+	.info_ring_en = HWPfHiInfoRingIntWrEnRegPf,
+	.info_ring_ptr = HWPfHiInfoRingPointerRegPf,
+	.tail_ptrs_dl5g_hi = HWPfDmaFec5GdlRespPtrHiRegVf,
+	.tail_ptrs_dl5g_lo = HWPfDmaFec5GdlRespPtrLoRegVf,
+	.tail_ptrs_ul5g_hi = HWPfDmaFec5GulRespPtrHiRegVf,
+	.tail_ptrs_ul5g_lo = HWPfDmaFec5GulRespPtrLoRegVf,
+	.tail_ptrs_dl4g_hi = HWPfDmaFec4GdlRespPtrHiRegVf,
+	.tail_ptrs_dl4g_lo = HWPfDmaFec4GdlRespPtrLoRegVf,
+	.tail_ptrs_ul4g_hi = HWPfDmaFec4GulRespPtrHiRegVf,
+	.tail_ptrs_ul4g_lo = HWPfDmaFec4GulRespPtrLoRegVf,
+	.depth_log0_offset = HWPfQmgrGrpDepthLog20Vf,
+	.depth_log1_offset = HWPfQmgrGrpDepthLog21Vf,
+	.qman_group_func = HWPfQmgrGrpFunction0,
+	.ddr_range = HWPfDmaVfDdrBaseRw,
+	.pmon_ctrl_a = HWPfPermonACntrlRegVf,
+	.pmon_ctrl_b = HWPfPermonBCntrlRegVf,
+};
+
+/* Structure holding registry addresses for VF */
+static const struct acc101_registry_addr vf_reg_addr = {
+	.dma_ring_dl5g_hi = HWVfDmaFec5GdlDescBaseHiRegVf,
+	.dma_ring_dl5g_lo = HWVfDmaFec5GdlDescBaseLoRegVf,
+	.dma_ring_ul5g_hi = HWVfDmaFec5GulDescBaseHiRegVf,
+	.dma_ring_ul5g_lo = HWVfDmaFec5GulDescBaseLoRegVf,
+	.dma_ring_dl4g_hi = HWVfDmaFec4GdlDescBaseHiRegVf,
+	.dma_ring_dl4g_lo = HWVfDmaFec4GdlDescBaseLoRegVf,
+	.dma_ring_ul4g_hi = HWVfDmaFec4GulDescBaseHiRegVf,
+	.dma_ring_ul4g_lo = HWVfDmaFec4GulDescBaseLoRegVf,
+	.ring_size = HWVfQmgrRingSizeVf,
+	.info_ring_hi = HWVfHiInfoRingBaseHiVf,
+	.info_ring_lo = HWVfHiInfoRingBaseLoVf,
+	.info_ring_en = HWVfHiInfoRingIntWrEnVf,
+	.info_ring_ptr = HWVfHiInfoRingPointerVf,
+	.tail_ptrs_dl5g_hi = HWVfDmaFec5GdlRespPtrHiRegVf,
+	.tail_ptrs_dl5g_lo = HWVfDmaFec5GdlRespPtrLoRegVf,
+	.tail_ptrs_ul5g_hi = HWVfDmaFec5GulRespPtrHiRegVf,
+	.tail_ptrs_ul5g_lo = HWVfDmaFec5GulRespPtrLoRegVf,
+	.tail_ptrs_dl4g_hi = HWVfDmaFec4GdlRespPtrHiRegVf,
+	.tail_ptrs_dl4g_lo = HWVfDmaFec4GdlRespPtrLoRegVf,
+	.tail_ptrs_ul4g_hi = HWVfDmaFec4GulRespPtrHiRegVf,
+	.tail_ptrs_ul4g_lo = HWVfDmaFec4GulRespPtrLoRegVf,
+	.depth_log0_offset = HWVfQmgrGrpDepthLog20Vf,
+	.depth_log1_offset = HWVfQmgrGrpDepthLog21Vf,
+	.qman_group_func = HWVfQmgrGrpFunction0Vf,
+	.ddr_range = HWVfDmaDdrBaseRangeRoVf,
+	.pmon_ctrl_a = HWVfPmACntrlRegVf,
+	.pmon_ctrl_b = HWVfPmBCntrlRegVf,
+};
+
 /* Private data structure for each ACC101 device */
 struct acc101_device {
 	void *mmio_base;  /**< Base address of MMIO registers (BAR0) */
@@ -46,6 +497,8 @@ struct acc101_device {
 	 * on how many queues are enabled with configure()
 	 */
 	uint32_t sw_ring_max_depth;
+	/* Bitmap capturing which Queues have already been assigned */
+	uint16_t q_assigned_bit_map[ACC101_NUM_QGRPS];
 	bool pf_device; /**< True if this is a PF ACC101 device */
 	bool configured; /**< True if this ACC101 device is configured */
 };
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v1 3/9] baseband/acc101: add info get function
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 1/9] baseband/acc101: introduce PMD for ACC101 Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 2/9] baseband/acc101: add HW register definition Nicolas Chautru
@ 2022-04-04 21:13 ` Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 4/9] baseband/acc101: add queue configuration Nicolas Chautru
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

Add the info get function so that the device can be queried;
the capabilities list is still null at this point.
Link bbdev-test to the new driver.
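
As an illustration, once this patch is in place an application can
already query the device through the generic bbdev API. Minimal
sketch only, assuming a probed device id dev_id; error handling
omitted:

	#include <stdio.h>
	#include <rte_bbdev.h>

	struct rte_bbdev_info info;

	if (rte_bbdev_info_get(dev_id, &info) == 0)
		printf("%s: max queues %u, queue size limit %u\n",
				info.dev_name, info.drv.max_num_queues,
				info.drv.queue_size_lim);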

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 app/test-bbdev/meson.build               |   3 +
 drivers/baseband/acc101/rte_acc101_cfg.h |  96 +++++++++++++
 drivers/baseband/acc101/rte_acc101_pmd.c | 226 +++++++++++++++++++++++++++++++
 drivers/baseband/acc101/rte_acc101_pmd.h |   2 +
 4 files changed, 327 insertions(+)
 create mode 100644 drivers/baseband/acc101/rte_acc101_cfg.h

diff --git a/app/test-bbdev/meson.build b/app/test-bbdev/meson.build
index 76d4c26..9cbee5a 100644
--- a/app/test-bbdev/meson.build
+++ b/app/test-bbdev/meson.build
@@ -23,6 +23,9 @@ endif
 if dpdk_conf.has('RTE_BASEBAND_ACC100')
     deps += ['baseband_acc100']
 endif
+if dpdk_conf.has('RTE_BASEBAND_ACC101')
+    deps += ['baseband_acc101']
+endif
 if dpdk_conf.has('RTE_LIBRTE_PMD_BBDEV_LA12XX')
     deps += ['baseband_la12xx']
 endif
diff --git a/drivers/baseband/acc101/rte_acc101_cfg.h b/drivers/baseband/acc101/rte_acc101_cfg.h
new file mode 100644
index 0000000..4881cd6
--- /dev/null
+++ b/drivers/baseband/acc101/rte_acc101_cfg.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _RTE_ACC101_CFG_H_
+#define _RTE_ACC101_CFG_H_
+
+/**
+ * @file rte_acc101_cfg.h
+ *
+ * Functions for configuring ACC101 HW, exposed directly to applications.
+ * Configuration related to encoding/decoding is done through the
+ * librte_bbdev library.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ */
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+/** Number of Virtual Functions ACC101 supports */
+#define RTE_ACC101_NUM_VFS 16
+
+/**
+ * Definition of Queue Topology for ACC101 Configuration
+ * Some level of detail is abstracted out to expose a clean interface,
+ * given that comprehensive flexibility is not required
+ */
+struct rte_acc101_queue_topology {
+	/** Number of QGroups in incremental order of priority */
+	uint16_t num_qgroups;
+	/**
+	 * All QGroups have the same number of AQs here.
+	 * Note: could be made a 16-element array if more flexibility is
+	 * really required
+	 */
+	uint16_t num_aqs_per_groups;
+	/**
+	 * Depth of the AQs is the same for all QGroups here, expressed as
+	 * a log2 value (2^N).
+	 * Note: could be made a 16-element array if more flexibility is
+	 * really required
+	 */
+	uint16_t aq_depth_log2;
+	/**
+	 * Index of the first Queue Group - assuming contiguity.
+	 * Initialized as -1
+	 */
+	int8_t first_qgroup_index;
+};
+
+/**
+ * Definition of Arbitration related parameters for ACC101 Configuration
+ */
+struct rte_acc101_arbitration {
+	/** Default Weight for VF Fairness Arbitration */
+	uint16_t round_robin_weight;
+	uint32_t gbr_threshold1; /**< Guaranteed Bitrate Threshold 1 */
+	uint32_t gbr_threshold2; /**< Guaranteed Bitrate Threshold 2 */
+};
+
+/**
+ * Structure to pass ACC101 configuration.
+ * Note: all VF Bundles will have the same configuration.
+ */
+struct rte_acc101_conf {
+	bool pf_mode_en; /**< 1 if PF is used for dataplane, 0 for VFs */
+	/** 1 if input '1' bit is represented by a positive LLR value, 0 if '1'
+	 * bit is represented by a negative value.
+	 */
+	bool input_pos_llr_1_bit;
+	/** 1 if output '1' bit is represented by a positive value, 0 if '1'
+	 * bit is represented by a negative value.
+	 */
+	bool output_pos_llr_1_bit;
+	uint16_t num_vf_bundles; /**< Number of VF bundles to setup */
+	/** Queue topology for each operation type */
+	struct rte_acc101_queue_topology q_ul_4g;
+	struct rte_acc101_queue_topology q_dl_4g;
+	struct rte_acc101_queue_topology q_ul_5g;
+	struct rte_acc101_queue_topology q_dl_5g;
+	/** Arbitration configuration for each operation type */
+	struct rte_acc101_arbitration arb_ul_4g[RTE_ACC101_NUM_VFS];
+	struct rte_acc101_arbitration arb_dl_4g[RTE_ACC101_NUM_VFS];
+	struct rte_acc101_arbitration arb_ul_5g[RTE_ACC101_NUM_VFS];
+	struct rte_acc101_arbitration arb_dl_5g[RTE_ACC101_NUM_VFS];
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_ACC101_CFG_H_ */
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
index dff3834..9518a9e 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.c
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -29,6 +29,187 @@
 RTE_LOG_REGISTER_DEFAULT(acc101_logtype, NOTICE);
 #endif
 
+/* Read a register of an ACC101 device */
+static inline uint32_t
+acc101_reg_read(struct acc101_device *d, uint32_t offset)
+{
+	void *reg_addr = RTE_PTR_ADD(d->mmio_base, offset);
+	uint32_t ret = *((volatile uint32_t *)(reg_addr));
+	return rte_le_to_cpu_32(ret);
+}
+
+/* Calculate the offset of the enqueue register */
+static inline uint32_t
+queue_offset(bool pf_device, uint8_t vf_id, uint8_t qgrp_id, uint16_t aq_id)
+{
+	if (pf_device)
+		return ((vf_id << 12) + (qgrp_id << 7) + (aq_id << 3) +
+				HWPfQmgrIngressAq);
+	else
+		return ((qgrp_id << 7) + (aq_id << 3) +
+				HWVfQmgrIngressAq);
+}
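+
+/* For example, on the PF the enqueue register of VF 1, QGroup 2, AQ 3
+ * resolves to HWPfQmgrIngressAq + (1 << 12) + (2 << 7) + (3 << 3)
+ */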
+
+enum {UL_4G = 0, UL_5G, DL_4G, DL_5G, NUM_ACC};
+
+/* Return the queue topology for a Queue Group Index */
+static inline void
+qtopFromAcc(struct rte_acc101_queue_topology **qtop, int acc_enum,
+		struct rte_acc101_conf *acc101_conf)
+{
+	struct rte_acc101_queue_topology *p_qtop;
+	p_qtop = NULL;
+	switch (acc_enum) {
+	case UL_4G:
+		p_qtop = &(acc101_conf->q_ul_4g);
+		break;
+	case UL_5G:
+		p_qtop = &(acc101_conf->q_ul_5g);
+		break;
+	case DL_4G:
+		p_qtop = &(acc101_conf->q_dl_4g);
+		break;
+	case DL_5G:
+		p_qtop = &(acc101_conf->q_dl_5g);
+		break;
+	default:
+		/* NOTREACHED */
+		rte_bbdev_log(ERR, "Unexpected error evaluating qtopFromAcc");
+		break;
+	}
+	*qtop = p_qtop;
+}
+
+static void
+initQTop(struct rte_acc101_conf *acc101_conf)
+{
+	acc101_conf->q_ul_4g.num_aqs_per_groups = 0;
+	acc101_conf->q_ul_4g.num_qgroups = 0;
+	acc101_conf->q_ul_4g.first_qgroup_index = -1;
+	acc101_conf->q_ul_5g.num_aqs_per_groups = 0;
+	acc101_conf->q_ul_5g.num_qgroups = 0;
+	acc101_conf->q_ul_5g.first_qgroup_index = -1;
+	acc101_conf->q_dl_4g.num_aqs_per_groups = 0;
+	acc101_conf->q_dl_4g.num_qgroups = 0;
+	acc101_conf->q_dl_4g.first_qgroup_index = -1;
+	acc101_conf->q_dl_5g.num_aqs_per_groups = 0;
+	acc101_conf->q_dl_5g.num_qgroups = 0;
+	acc101_conf->q_dl_5g.first_qgroup_index = -1;
+}
+
+static inline void
+updateQtop(uint8_t acc, uint8_t qg, struct rte_acc101_conf *acc101_conf,
+		struct acc101_device *d)
+{
+	uint32_t reg;
+	struct rte_acc101_queue_topology *q_top = NULL;
+	qtopFromAcc(&q_top, acc, acc101_conf);
+	if (unlikely(q_top == NULL))
+		return;
+	uint16_t aq;
+	q_top->num_qgroups++;
+	if (q_top->first_qgroup_index == -1) {
+		q_top->first_qgroup_index = qg;
+		/* Can be optimized to assume all are enabled by default */
+		reg = acc101_reg_read(d, queue_offset(d->pf_device,
+				0, qg, ACC101_NUM_AQS - 1));
+		if (reg & ACC101_QUEUE_ENABLE) {
+			q_top->num_aqs_per_groups = ACC101_NUM_AQS;
+			return;
+		}
+		q_top->num_aqs_per_groups = 0;
+		for (aq = 0; aq < ACC101_NUM_AQS; aq++) {
+			reg = acc101_reg_read(d, queue_offset(d->pf_device,
+					0, qg, aq));
+			if (reg & ACC101_QUEUE_ENABLE)
+				q_top->num_aqs_per_groups++;
+		}
+	}
+}
+
+/* Fetch configuration enabled for the PF/VF using MMIO Read (slow) */
+static inline void
+fetch_acc101_config(struct rte_bbdev *dev)
+{
+	struct acc101_device *d = dev->data->dev_private;
+	struct rte_acc101_conf *acc101_conf = &d->acc101_conf;
+	const struct acc101_registry_addr *reg_addr;
+	uint8_t acc, qg;
+	uint32_t reg, reg_aq, reg_len0, reg_len1;
+	uint32_t reg_mode;
+
+	/* Nothing to do if the configuration was already retrieved */
+	if (d->configured)
+		return;
+
+	/* Choose correct registry addresses for the device type */
+	if (d->pf_device)
+		reg_addr = &pf_reg_addr;
+	else
+		reg_addr = &vf_reg_addr;
+
+	d->ddr_size = (1 + acc101_reg_read(d, reg_addr->ddr_range)) << 10;
+
+	/* Single VF Bundle per VF */
+	acc101_conf->num_vf_bundles = 1;
+	initQTop(acc101_conf);
+
+	struct rte_acc101_queue_topology *q_top = NULL;
+	int qman_func_id[ACC101_NUM_ACCS] = {ACC101_ACCMAP_0, ACC101_ACCMAP_1,
+			ACC101_ACCMAP_2, ACC101_ACCMAP_3, ACC101_ACCMAP_4};
+	reg = acc101_reg_read(d, reg_addr->qman_group_func);
+	for (qg = 0; qg < ACC101_NUM_QGRPS_PER_WORD; qg++) {
+		reg_aq = acc101_reg_read(d,
+				queue_offset(d->pf_device, 0, qg, 0));
+		if (reg_aq & ACC101_QUEUE_ENABLE) {
+			uint32_t idx = (reg >> (qg * 4)) & 0x7;
+			if (idx < ACC101_NUM_ACCS) {
+				acc = qman_func_id[idx];
+				updateQtop(acc, qg, acc101_conf, d);
+			}
+		}
+	}
+
+	/* Check the depth of the AQs */
+	reg_len0 = acc101_reg_read(d, reg_addr->depth_log0_offset);
+	reg_len1 = acc101_reg_read(d, reg_addr->depth_log1_offset);
+	for (acc = 0; acc < NUM_ACC; acc++) {
+		qtopFromAcc(&q_top, acc, acc101_conf);
+		if (q_top->first_qgroup_index < ACC101_NUM_QGRPS_PER_WORD)
+			q_top->aq_depth_log2 = (reg_len0 >>
+					(q_top->first_qgroup_index * 4))
+					& 0xF;
+		else
+			q_top->aq_depth_log2 = (reg_len1 >>
+					((q_top->first_qgroup_index -
+					ACC101_NUM_QGRPS_PER_WORD) * 4))
+					& 0xF;
+	}
+
+	/* Read PF mode */
+	if (d->pf_device) {
+		reg_mode = acc101_reg_read(d, HWPfHiPfMode);
+		acc101_conf->pf_mode_en = (reg_mode == ACC101_PF_VAL) ? 1 : 0;
+	}
+
+	rte_bbdev_log_debug(
+			"%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u AQ %u %u %u %u Len %u %u %u %u\n",
+			(d->pf_device) ? "PF" : "VF",
+			(acc101_conf->input_pos_llr_1_bit) ? "POS" : "NEG",
+			(acc101_conf->output_pos_llr_1_bit) ? "POS" : "NEG",
+			acc101_conf->q_ul_4g.num_qgroups,
+			acc101_conf->q_dl_4g.num_qgroups,
+			acc101_conf->q_ul_5g.num_qgroups,
+			acc101_conf->q_dl_5g.num_qgroups,
+			acc101_conf->q_ul_4g.num_aqs_per_groups,
+			acc101_conf->q_dl_4g.num_aqs_per_groups,
+			acc101_conf->q_ul_5g.num_aqs_per_groups,
+			acc101_conf->q_dl_5g.num_aqs_per_groups,
+			acc101_conf->q_ul_4g.aq_depth_log2,
+			acc101_conf->q_dl_4g.aq_depth_log2,
+			acc101_conf->q_ul_5g.aq_depth_log2,
+			acc101_conf->q_dl_5g.aq_depth_log2);
+}
+
 /* Free memory used for software rings */
 static int
 acc101_dev_close(struct rte_bbdev *dev)
@@ -37,8 +218,53 @@
 	return 0;
 }
 
+/* Get ACC101 device info */
+static void
+acc101_dev_info_get(struct rte_bbdev *dev,
+		struct rte_bbdev_driver_info *dev_info)
+{
+	struct acc101_device *d = dev->data->dev_private;
+	static const struct rte_bbdev_op_cap bbdev_capabilities[] = {
+		RTE_BBDEV_END_OF_CAPABILITIES_LIST()
+	};
+
+	static struct rte_bbdev_queue_conf default_queue_conf;
+	default_queue_conf.socket = dev->data->socket_id;
+	default_queue_conf.queue_size = ACC101_MAX_QUEUE_DEPTH;
+
+	dev_info->driver_name = dev->device->driver->name;
+
+	/* Read and save the populated config from ACC101 registers */
+	fetch_acc101_config(dev);
+	/* This isn't ideal as it reports the maximum number of queues but
+	 * does not provide information on how many of them can be uplink or
+	 * downlink, or at which priority levels
+	 */
+	dev_info->max_num_queues =
+			d->acc101_conf.q_dl_5g.num_aqs_per_groups *
+			d->acc101_conf.q_dl_5g.num_qgroups +
+			d->acc101_conf.q_ul_5g.num_aqs_per_groups *
+			d->acc101_conf.q_ul_5g.num_qgroups +
+			d->acc101_conf.q_dl_4g.num_aqs_per_groups *
+			d->acc101_conf.q_dl_4g.num_qgroups +
+			d->acc101_conf.q_ul_4g.num_aqs_per_groups *
+			d->acc101_conf.q_ul_4g.num_qgroups;
+	dev_info->queue_size_lim = ACC101_MAX_QUEUE_DEPTH;
+	dev_info->hardware_accelerated = true;
+	dev_info->max_dl_queue_priority =
+			d->acc101_conf.q_dl_4g.num_qgroups - 1;
+	dev_info->max_ul_queue_priority =
+			d->acc101_conf.q_ul_4g.num_qgroups - 1;
+	dev_info->default_queue_conf = default_queue_conf;
+	dev_info->cpu_flag_reqs = NULL;
+	dev_info->min_alignment = 64;
+	dev_info->capabilities = bbdev_capabilities;
+	dev_info->harq_buffer_size = 0;
+}
+
 static const struct rte_bbdev_ops acc101_bbdev_ops = {
 	.close = acc101_dev_close,
+	.info_get = acc101_dev_info_get,
 };
 
 /* ACC101 PCI PF address map */
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.h b/drivers/baseband/acc101/rte_acc101_pmd.h
index 66641f3..9c0e711 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.h
+++ b/drivers/baseband/acc101/rte_acc101_pmd.h
@@ -7,6 +7,7 @@
 
 #include "acc101_pf_enum.h"
 #include "acc101_vf_enum.h"
+#include "rte_acc101_cfg.h"
 
 /* Helper macro for logging */
 #define rte_bbdev_log(level, fmt, ...) \
@@ -497,6 +498,7 @@ struct acc101_device {
 	 * on how many queues are enabled with configure()
 	 */
 	uint32_t sw_ring_max_depth;
+	struct rte_acc101_conf acc101_conf; /* ACC101 Initial configuration */
 	/* Bitmap capturing which Queues have already been assigned */
 	uint16_t q_assigned_bit_map[ACC101_NUM_QGRPS];
 	bool pf_device; /**< True if this is a PF ACC101 device */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v1 4/9] baseband/acc101: add queue configuration
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
                   ` (2 preceding siblings ...)
  2022-04-04 21:13 ` [PATCH v1 3/9] baseband/acc101: add info get function Nicolas Chautru
@ 2022-04-04 21:13 ` Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 5/9] baseband/acc101: add LDPC processing Nicolas Chautru
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

Add functions to create and configure queues.
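
For reference, the expected calling sequence from the application
side is the generic bbdev one. Minimal sketch only, assuming a
probed device id dev_id and a single 5G LDPC decode queue; error
handling omitted:

	#include <rte_bbdev.h>
	#include <rte_lcore.h>

	struct rte_bbdev_queue_conf qconf = {
		.socket = (int)rte_socket_id(),
		.queue_size = 1024, /* up to the depth exposed in info_get */
		.priority = 0,
		.op_type = RTE_BBDEV_OP_LDPC_DEC,
	};

	rte_bbdev_setup_queues(dev_id, 1, (int)rte_socket_id());
	rte_bbdev_queue_configure(dev_id, 0, &qconf);
	rte_bbdev_start(dev_id);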

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 drivers/baseband/acc101/rte_acc101_pmd.c | 543 ++++++++++++++++++++++++++++++-
 drivers/baseband/acc101/rte_acc101_pmd.h |  46 +++
 2 files changed, 588 insertions(+), 1 deletion(-)

diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
index 9518a9e..e490e97 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.c
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -29,6 +29,22 @@
 RTE_LOG_REGISTER_DEFAULT(acc101_logtype, NOTICE);
 #endif
 
+/* Write to MMIO register address */
+static inline void
+mmio_write(void *addr, uint32_t value)
+{
+	*((volatile uint32_t *)(addr)) = rte_cpu_to_le_32(value);
+}
+
+/* Write a register of an ACC101 device */
+static inline void
+acc101_reg_write(struct acc101_device *d, uint32_t offset, uint32_t value)
+{
+	void *reg_addr = RTE_PTR_ADD(d->mmio_base, offset);
+	mmio_write(reg_addr, value);
+	usleep(ACC101_LONG_WAIT);
+}
+
 /* Read a register of an ACC101 device */
 static inline uint32_t
 acc101_reg_read(struct acc101_device *d, uint32_t offset)
@@ -38,6 +54,22 @@
 	return rte_le_to_cpu_32(ret);
 }
 
+/* Basic implementation of log2 for an exact power of two (2^N) */
+static inline uint32_t
+log2_basic(uint32_t value)
+{
+	return (value == 0) ? 0 : rte_bsf32(value);
+}
+
+/* Calculate memory alignment offset assuming alignment is 2^N */
+static inline uint32_t
+calc_mem_alignment_offset(void *unaligned_virt_mem, uint32_t alignment)
+{
+	rte_iova_t unaligned_phy_mem = rte_malloc_virt2iova(unaligned_virt_mem);
+	return (uint32_t)(alignment -
+			(unaligned_phy_mem & (alignment-1)));
+}
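+
+/* Example: with alignment = 64MB, an IOVA of 0x03001000 gives an offset of
+ * 0x04000000 - 0x03001000 = 0x00FFF000, i.e. up to the next 64MB boundary.
+ * Note that an already-aligned address yields an offset of one full
+ * alignment rather than 0.
+ */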
+
 /* Calculate the offset of the enqueue register */
 static inline uint32_t
 queue_offset(bool pf_device, uint8_t vf_id, uint8_t qgrp_id, uint16_t aq_id)
@@ -184,6 +216,9 @@
 					ACC101_NUM_QGRPS_PER_WORD) * 4))
 					& 0xF;
 	}
+	/* Start Pmon */
+	acc101_reg_write(d, reg_addr->pmon_ctrl_a, 0x2);
+	acc101_reg_write(d, reg_addr->pmon_ctrl_b, 0x2);
 
 	/* Read PF mode */
 	if (d->pf_device) {
@@ -210,11 +245,513 @@
 			acc101_conf->q_dl_5g.aq_depth_log2);
 }
 
+static inline void
+acc101_vf2pf(struct acc101_device *d, unsigned int payload)
+{
+	acc101_reg_write(d, HWVfHiVfToPfDbellVf, payload);
+}
+
+static void
+free_base_addresses(void **base_addrs, int size)
+{
+	int i;
+	for (i = 0; i < size; i++)
+		rte_free(base_addrs[i]);
+}
+
+static inline uint32_t
+get_desc_len(void)
+{
+	return sizeof(union acc101_dma_desc);
+}
+
+/* Allocate the 2 * 64MB block for the sw rings; twice the size guarantees
+ * that a 64MB-aligned 64MB window exists within the allocation.
+ */
+static int
+alloc_2x64mb_sw_rings_mem(struct rte_bbdev *dev, struct acc101_device *d,
+		int socket)
+{
+	uint32_t sw_ring_size = ACC101_SIZE_64MBYTE;
+	d->sw_rings_base = rte_zmalloc_socket(dev->device->driver->name,
+			2 * sw_ring_size, RTE_CACHE_LINE_SIZE, socket);
+	if (d->sw_rings_base == NULL) {
+		rte_bbdev_log(ERR, "Failed to allocate memory for %s:%u",
+				dev->device->driver->name,
+				dev->data->dev_id);
+		return -ENOMEM;
+	}
+	uint32_t next_64mb_align_offset = calc_mem_alignment_offset(
+			d->sw_rings_base, ACC101_SIZE_64MBYTE);
+	d->sw_rings = RTE_PTR_ADD(d->sw_rings_base, next_64mb_align_offset);
+	d->sw_rings_iova = rte_malloc_virt2iova(d->sw_rings_base) +
+			next_64mb_align_offset;
+	d->sw_ring_size = ACC101_MAX_QUEUE_DEPTH * get_desc_len();
+	d->sw_ring_max_depth = ACC101_MAX_QUEUE_DEPTH;
+
+	return 0;
+}
+
+/* Attempt to allocate minimised memory space for sw rings */
+static void
+alloc_sw_rings_min_mem(struct rte_bbdev *dev, struct acc101_device *d,
+		uint16_t num_queues, int socket)
+{
+	rte_iova_t sw_rings_base_iova, next_64mb_align_addr_iova;
+	uint32_t next_64mb_align_offset;
+	rte_iova_t sw_ring_iova_end_addr;
+	void *base_addrs[ACC101_SW_RING_MEM_ALLOC_ATTEMPTS];
+	void *sw_rings_base;
+	int i = 0;
+	uint32_t q_sw_ring_size = ACC101_MAX_QUEUE_DEPTH * get_desc_len();
+	uint32_t dev_sw_ring_size = q_sw_ring_size * num_queues;
+	/* Free first in case this is a reconfiguration */
+	rte_free(d->sw_rings_base);
+
+	/* Find an aligned block of memory to store sw rings */
+	while (i < ACC101_SW_RING_MEM_ALLOC_ATTEMPTS) {
+		/*
+		 * sw_ring allocated memory is guaranteed to be aligned to
+		 * q_sw_ring_size, provided that the requested size is
+		 * less than the page size
+		 */
+		sw_rings_base = rte_zmalloc_socket(
+				dev->device->driver->name,
+				dev_sw_ring_size, q_sw_ring_size, socket);
+
+		if (sw_rings_base == NULL) {
+			rte_bbdev_log(ERR,
+					"Failed to allocate memory for %s:%u",
+					dev->device->driver->name,
+					dev->data->dev_id);
+			break;
+		}
+
+		sw_rings_base_iova = rte_malloc_virt2iova(sw_rings_base);
+		next_64mb_align_offset = calc_mem_alignment_offset(
+				sw_rings_base, ACC101_SIZE_64MBYTE);
+		next_64mb_align_addr_iova = sw_rings_base_iova +
+				next_64mb_align_offset;
+		sw_ring_iova_end_addr = sw_rings_base_iova + dev_sw_ring_size;
+
+		/* Check if the end of the sw ring memory block is before the
+		 * start of next 64MB aligned mem address
+		 */
+		if (sw_ring_iova_end_addr < next_64mb_align_addr_iova) {
+			d->sw_rings_iova = sw_rings_base_iova;
+			d->sw_rings = sw_rings_base;
+			d->sw_rings_base = sw_rings_base;
+			d->sw_ring_size = q_sw_ring_size;
+			d->sw_ring_max_depth = ACC101_MAX_QUEUE_DEPTH;
+			break;
+		}
+		/* Store the address of the unaligned mem block */
+		base_addrs[i] = sw_rings_base;
+		i++;
+	}
+
+	/* Free all unaligned blocks of mem allocated in the loop */
+	free_base_addresses(base_addrs, i);
+}
+
+/* Allocate 64MB memory used for all software rings */
+static int
+acc101_setup_queues(struct rte_bbdev *dev, uint16_t num_queues, int socket_id)
+{
+	uint32_t phys_low, phys_high, value;
+	struct acc101_device *d = dev->data->dev_private;
+	const struct acc101_registry_addr *reg_addr;
+
+	if (d->pf_device && !d->acc101_conf.pf_mode_en) {
+		rte_bbdev_log(NOTICE,
+				"%s has PF mode disabled. This PF can't be used.",
+				dev->data->name);
+		return -ENODEV;
+	}
+
+	alloc_sw_rings_min_mem(dev, d, num_queues, socket_id);
+
+	/* If minimal memory space approach failed, then allocate
+	 * the 2 * 64MB block for the sw rings
+	 */
+	if (d->sw_rings == NULL)
+		alloc_2x64mb_sw_rings_mem(dev, d, socket_id);
+
+	if (d->sw_rings == NULL) {
+		rte_bbdev_log(NOTICE,
+				"Failure allocating sw_rings memory");
+		return -ENODEV;
+	}
+
+	/* Configure ACC101 with the base address for DMA descriptor rings
+	 * Same descriptor rings used for UL and DL DMA Engines
+	 * Note : Assuming only VF0 bundle is used for PF mode
+	 */
+	phys_high = (uint32_t)(d->sw_rings_iova >> 32);
+	phys_low  = (uint32_t)(d->sw_rings_iova & ~(ACC101_SIZE_64MBYTE-1));
+
+	/* Choose correct registry addresses for the device type */
+	if (d->pf_device)
+		reg_addr = &pf_reg_addr;
+	else
+		reg_addr = &vf_reg_addr;
+
+	/* Read the populated cfg from ACC101 registers */
+	fetch_acc101_config(dev);
+
+	/* Release AXI from PF with 2 ms threshold */
+	if (d->pf_device) {
+		usleep(2000);
+		acc101_reg_write(d, HWPfDmaAxiControl, 1);
+	}
+
+	acc101_reg_write(d, reg_addr->dma_ring_ul5g_hi, phys_high);
+	acc101_reg_write(d, reg_addr->dma_ring_ul5g_lo, phys_low);
+	acc101_reg_write(d, reg_addr->dma_ring_dl5g_hi, phys_high);
+	acc101_reg_write(d, reg_addr->dma_ring_dl5g_lo, phys_low);
+	acc101_reg_write(d, reg_addr->dma_ring_ul4g_hi, phys_high);
+	acc101_reg_write(d, reg_addr->dma_ring_ul4g_lo, phys_low);
+	acc101_reg_write(d, reg_addr->dma_ring_dl4g_hi, phys_high);
+	acc101_reg_write(d, reg_addr->dma_ring_dl4g_lo, phys_low);
+
+	/*
+	 * Configure Ring Size to the max queue ring size
+	 * (used for wrapping purposes)
+	 */
+	value = log2_basic(d->sw_ring_size / 64);
+	acc101_reg_write(d, reg_addr->ring_size, value);
+
+	/* Configure tail pointer for use when SDONE enabled */
+	if (d->tail_ptrs == NULL)
+		d->tail_ptrs = rte_zmalloc_socket(
+				dev->device->driver->name,
+				ACC101_NUM_QGRPS * ACC101_NUM_AQS * sizeof(uint32_t),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (d->tail_ptrs == NULL) {
+		rte_bbdev_log(ERR, "Failed to allocate tail ptr for %s:%u",
+				dev->device->driver->name,
+				dev->data->dev_id);
+		rte_free(d->sw_rings);
+		return -ENOMEM;
+	}
+	d->tail_ptr_iova = rte_malloc_virt2iova(d->tail_ptrs);
+
+	phys_high = (uint32_t)(d->tail_ptr_iova >> 32);
+	phys_low  = (uint32_t)(d->tail_ptr_iova);
+	acc101_reg_write(d, reg_addr->tail_ptrs_ul5g_hi, phys_high);
+	acc101_reg_write(d, reg_addr->tail_ptrs_ul5g_lo, phys_low);
+	acc101_reg_write(d, reg_addr->tail_ptrs_dl5g_hi, phys_high);
+	acc101_reg_write(d, reg_addr->tail_ptrs_dl5g_lo, phys_low);
+	acc101_reg_write(d, reg_addr->tail_ptrs_ul4g_hi, phys_high);
+	acc101_reg_write(d, reg_addr->tail_ptrs_ul4g_lo, phys_low);
+	acc101_reg_write(d, reg_addr->tail_ptrs_dl4g_hi, phys_high);
+	acc101_reg_write(d, reg_addr->tail_ptrs_dl4g_lo, phys_low);
+
+	if (d->harq_layout == NULL)
+		d->harq_layout = rte_zmalloc_socket("HARQ Layout",
+				ACC101_HARQ_LAYOUT * sizeof(*d->harq_layout),
+				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+	if (d->harq_layout == NULL) {
+		rte_bbdev_log(ERR, "Failed to allocate harq_layout for %s:%u",
+				dev->device->driver->name,
+				dev->data->dev_id);
+		rte_free(d->sw_rings);
+		return -ENOMEM;
+	}
+
+	/* Mark as configured properly */
+	d->configured = true;
+	acc101_vf2pf(d, ACC101_VF2PF_USING_VF);
+
+	rte_bbdev_log_debug(
+			"ACC101 (%s) configured  sw_rings = %p, sw_rings_iova = %#"
+			PRIx64, dev->data->name, d->sw_rings, d->sw_rings_iova);
+
+	return 0;
+}
+
 /* Free memory used for software rings */
 static int
 acc101_dev_close(struct rte_bbdev *dev)
 {
-	RTE_SET_USED(dev);
+	struct acc101_device *d = dev->data->dev_private;
+	if (d->sw_rings_base != NULL) {
+		rte_free(d->tail_ptrs);
+		rte_free(d->sw_rings_base);
+		rte_free(d->harq_layout);
+		d->sw_rings_base = NULL;
+	}
+	return 0;
+}
+
+/**
+ * Report an ACC101 queue index which is free
+ * Return 0 to 16k for a valid queue_idx or -1 when no queue is available
+ * Note : Only supporting VF0 Bundle for PF mode
+ */
+static int
+acc101_find_free_queue_idx(struct rte_bbdev *dev,
+		const struct rte_bbdev_queue_conf *conf)
+{
+	struct acc101_device *d = dev->data->dev_private;
+	int op_2_acc[5] = {0, UL_4G, DL_4G, UL_5G, DL_5G};
+	int acc = op_2_acc[conf->op_type];
+	struct rte_acc101_queue_topology *qtop = NULL;
+
+	qtopFromAcc(&qtop, acc, &(d->acc101_conf));
+	if (qtop == NULL)
+		return -1;
+	/* Identify matching QGroup Index which are sorted in priority order */
+	uint16_t group_idx = qtop->first_qgroup_index;
+	group_idx += conf->priority;
+	if (group_idx >= ACC101_NUM_QGRPS ||
+			conf->priority >= qtop->num_qgroups) {
+		rte_bbdev_log(INFO, "Invalid Priority on %s, priority %u",
+				dev->data->name, conf->priority);
+		return -1;
+	}
+	/* Find a free AQ_idx  */
+	uint16_t aq_idx;
+	for (aq_idx = 0; aq_idx < qtop->num_aqs_per_groups; aq_idx++) {
+		if (((d->q_assigned_bit_map[group_idx] >> aq_idx) & 0x1) == 0) {
+			/* Mark the Queue as assigned */
+			d->q_assigned_bit_map[group_idx] |= (1 << aq_idx);
+			/* Report the AQ Index */
+			return (group_idx << ACC101_GRP_ID_SHIFT) + aq_idx;
+		}
+	}
+	rte_bbdev_log(INFO, "Failed to find free queue on %s, priority %u",
+			dev->data->name, conf->priority);
+	return -1;
+}
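+
+/* Illustration: the returned q_idx packs the queue hierarchy into a single
+ * index (queue group above VF above AQ); e.g. group_idx = 2 and aq_idx = 3
+ * give q_idx = (2 << ACC101_GRP_ID_SHIFT) + 3, with the VF field left at
+ * zero since only the VF0 bundle is supported in PF mode. The fields are
+ * unpacked again in acc101_queue_setup() below.
+ */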
+
+/* Setup ACC101 queue */
+static int
+acc101_queue_setup(struct rte_bbdev *dev, uint16_t queue_id,
+		const struct rte_bbdev_queue_conf *conf)
+{
+	struct acc101_device *d = dev->data->dev_private;
+	struct acc101_queue *q;
+	int16_t q_idx;
+
+	if (d == NULL) {
+		rte_bbdev_log(ERR, "Undefined device");
+		return -ENODEV;
+	}
+	/* Allocate the queue data structure. */
+	q = rte_zmalloc_socket(dev->device->driver->name, sizeof(*q),
+			RTE_CACHE_LINE_SIZE, conf->socket);
+	if (q == NULL) {
+		rte_bbdev_log(ERR, "Failed to allocate queue memory");
+		return -ENOMEM;
+	}
+
+	q->d = d;
+	q->ring_addr = RTE_PTR_ADD(d->sw_rings, (d->sw_ring_size * queue_id));
+	q->ring_addr_iova = d->sw_rings_iova + (d->sw_ring_size * queue_id);
+
+	/* Prepare the Ring with default descriptor format */
+	union acc101_dma_desc *desc = NULL;
+	unsigned int desc_idx, b_idx;
+	int fcw_len = (conf->op_type == RTE_BBDEV_OP_LDPC_ENC ?
+		ACC101_FCW_LE_BLEN : (conf->op_type == RTE_BBDEV_OP_TURBO_DEC ?
+		ACC101_FCW_TD_BLEN : ACC101_FCW_LD_BLEN));
+
+	for (desc_idx = 0; desc_idx < d->sw_ring_max_depth; desc_idx++) {
+		desc = q->ring_addr + desc_idx;
+		desc->req.word0 = ACC101_DMA_DESC_TYPE;
+		desc->req.word1 = 0; /**< Timestamp */
+		desc->req.word2 = 0;
+		desc->req.word3 = 0;
+		uint64_t fcw_offset = (desc_idx << 8) + ACC101_DESC_FCW_OFFSET;
+		desc->req.data_ptrs[0].address = q->ring_addr_iova + fcw_offset;
+		desc->req.data_ptrs[0].blen = fcw_len;
+		desc->req.data_ptrs[0].blkid = ACC101_DMA_BLKID_FCW;
+		desc->req.data_ptrs[0].last = 0;
+		desc->req.data_ptrs[0].dma_ext = 0;
+		for (b_idx = 1; b_idx < ACC101_DMA_MAX_NUM_POINTERS - 1;
+				b_idx++) {
+			desc->req.data_ptrs[b_idx].blkid = ACC101_DMA_BLKID_IN;
+			desc->req.data_ptrs[b_idx].last = 1;
+			desc->req.data_ptrs[b_idx].dma_ext = 0;
+			b_idx++;
+			desc->req.data_ptrs[b_idx].blkid =
+					ACC101_DMA_BLKID_OUT_ENC;
+			desc->req.data_ptrs[b_idx].last = 1;
+			desc->req.data_ptrs[b_idx].dma_ext = 0;
+		}
+		/* Preset some fields of LDPC FCW */
+		desc->req.fcw_ld.FCWversion = ACC101_FCW_VER;
+		desc->req.fcw_ld.gain_i = 1;
+		desc->req.fcw_ld.gain_h = 1;
+	}
+
+	q->lb_in = rte_zmalloc_socket(dev->device->driver->name,
+			RTE_CACHE_LINE_SIZE,
+			RTE_CACHE_LINE_SIZE, conf->socket);
+	if (q->lb_in == NULL) {
+		rte_bbdev_log(ERR, "Failed to allocate lb_in memory");
+		rte_free(q);
+		return -ENOMEM;
+	}
+	q->lb_in_addr_iova = rte_malloc_virt2iova(q->lb_in);
+	q->lb_out = rte_zmalloc_socket(dev->device->driver->name,
+			RTE_CACHE_LINE_SIZE,
+			RTE_CACHE_LINE_SIZE, conf->socket);
+	if (q->lb_out == NULL) {
+		rte_bbdev_log(ERR, "Failed to allocate lb_out memory");
+		rte_free(q->lb_in);
+		rte_free(q);
+		return -ENOMEM;
+	}
+	q->derm_buffer = rte_zmalloc_socket(dev->device->driver->name,
+			RTE_BBDEV_TURBO_MAX_CB_SIZE * 10,
+			RTE_CACHE_LINE_SIZE, conf->socket);
+	if (q->derm_buffer == NULL) {
+		rte_bbdev_log(ERR, "Failed to allocate derm_buffer memory");
+		rte_free(q->lb_in);
+		rte_free(q->lb_out);
+		rte_free(q);
+		return -ENOMEM;
+	}
+	q->lb_out_addr_iova = rte_malloc_virt2iova(q->lb_out);
+	q->companion_ring_addr = rte_zmalloc_socket(dev->device->driver->name,
+			d->sw_ring_max_depth * sizeof(*q->companion_ring_addr),
+			RTE_CACHE_LINE_SIZE, conf->socket);
+	if (q->companion_ring_addr == NULL) {
+		rte_bbdev_log(ERR, "Failed to allocate companion_ring memory");
+		rte_free(q->derm_buffer);
+		rte_free(q->lb_in);
+		rte_free(q->lb_out);
+		rte_free(q);
+		return -ENOMEM;
+	}
+
+	/*
+	 * Software queue ring wraps synchronously with the HW when it reaches
+	 * the boundary of the maximum allocated queue size, no matter what the
+	 * sw queue size is. This wrapping is guarded by setting the wrap_mask
+	 * to represent the maximum queue size as allocated at the time when
+	 * the device has been setup (in configure()).
+	 *
+	 * The queue depth is set to the queue size value (conf->queue_size).
+	 * This limits the occupancy of the queue at any point of time, so that
+	 * the queue does not get swamped with enqueue requests.
+	 */
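+	/* Example with hypothetical sizes: sw_ring_max_depth = 1024 gives a
+	 * wrap mask of 0x3FF, while conf->queue_size = 128 caps occupancy.
+	 */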
+	q->sw_ring_depth = conf->queue_size;
+	q->sw_ring_wrap_mask = d->sw_ring_max_depth - 1;
+
+	q->op_type = conf->op_type;
+
+	q_idx = acc101_find_free_queue_idx(dev, conf);
+	if (q_idx == -1) {
+		rte_free(q->companion_ring_addr);
+		rte_free(q->derm_buffer);
+		rte_free(q->lb_in);
+		rte_free(q->lb_out);
+		rte_free(q);
+		return -1;
+	}
+
+	q->qgrp_id = (q_idx >> ACC101_GRP_ID_SHIFT) & 0xF;
+	q->vf_id = (q_idx >> ACC101_VF_ID_SHIFT)  & 0x3F;
+	q->aq_id = q_idx & 0xF;
+	q->aq_depth = 0;
+	if (conf->op_type ==  RTE_BBDEV_OP_TURBO_DEC)
+		q->aq_depth = (1 << d->acc101_conf.q_ul_4g.aq_depth_log2);
+	else if (conf->op_type ==  RTE_BBDEV_OP_TURBO_ENC)
+		q->aq_depth = (1 << d->acc101_conf.q_dl_4g.aq_depth_log2);
+	else if (conf->op_type ==  RTE_BBDEV_OP_LDPC_DEC)
+		q->aq_depth = (1 << d->acc101_conf.q_ul_5g.aq_depth_log2);
+	else if (conf->op_type ==  RTE_BBDEV_OP_LDPC_ENC)
+		q->aq_depth = (1 << d->acc101_conf.q_dl_5g.aq_depth_log2);
+
+	q->mmio_reg_enqueue = RTE_PTR_ADD(d->mmio_base,
+			queue_offset(d->pf_device,
+					q->vf_id, q->qgrp_id, q->aq_id));
+
+	rte_bbdev_log_debug(
+			"Setup dev%u q%u: qgrp_id=%u, vf_id=%u, aq_id=%u, aq_depth=%u, mmio_reg_enqueue=%p",
+			dev->data->dev_id, queue_id, q->qgrp_id, q->vf_id,
+			q->aq_id, q->aq_depth, q->mmio_reg_enqueue);
+
+	dev->data->queues[queue_id].queue_private = q;
+
+	return 0;
+}
+
+static inline void
+acc101_print_op(struct rte_bbdev_dec_op *op, enum rte_bbdev_op_type op_type,
+		uint16_t index)
+{
+	if (op == NULL)
+		return;
+	if (op_type == RTE_BBDEV_OP_LDPC_DEC)
+		rte_bbdev_log(INFO,
+			"  Op 5GUL %d %d %d %d %d %d %d %d %d %d %d %d",
+			index,
+			op->ldpc_dec.basegraph, op->ldpc_dec.z_c,
+			op->ldpc_dec.n_cb, op->ldpc_dec.q_m,
+			op->ldpc_dec.n_filler, op->ldpc_dec.cb_params.e,
+			op->ldpc_dec.op_flags, op->ldpc_dec.rv_index,
+			op->ldpc_dec.iter_max, op->ldpc_dec.iter_count,
+			op->ldpc_dec.harq_combined_input.length
+			);
+	else if (op_type == RTE_BBDEV_OP_LDPC_ENC) {
+		struct rte_bbdev_enc_op *op_dl = (struct rte_bbdev_enc_op *) op;
+		rte_bbdev_log(INFO,
+			"  Op 5GDL %d %d %d %d %d %d %d %d %d",
+			index,
+			op_dl->ldpc_enc.basegraph, op_dl->ldpc_enc.z_c,
+			op_dl->ldpc_enc.n_cb, op_dl->ldpc_enc.q_m,
+			op_dl->ldpc_enc.n_filler, op_dl->ldpc_enc.cb_params.e,
+			op_dl->ldpc_enc.op_flags, op_dl->ldpc_enc.rv_index
+			);
+	}
+}
+
+static int
+acc101_queue_stop(struct rte_bbdev *dev, uint16_t queue_id)
+{
+	struct acc101_queue *q;
+	struct rte_bbdev_dec_op *op;
+	uint16_t i;
+	q = dev->data->queues[queue_id].queue_private;
+	rte_bbdev_log(INFO, "Queue Stop %d H/T/D %d %d %x OpType %d",
+			queue_id, q->sw_ring_head, q->sw_ring_tail,
+			q->sw_ring_depth, q->op_type);
+	for (i = 0; i < q->sw_ring_depth; ++i) {
+		op = (q->ring_addr + i)->req.op_addr;
+		acc101_print_op(op, q->op_type, i);
+	}
+	/* ignore all operations in flight and clear counters */
+	q->sw_ring_tail = q->sw_ring_head;
+	q->aq_enqueued = 0;
+	q->aq_dequeued = 0;
+	dev->data->queues[queue_id].queue_stats.enqueued_count = 0;
+	dev->data->queues[queue_id].queue_stats.dequeued_count = 0;
+	dev->data->queues[queue_id].queue_stats.enqueue_err_count = 0;
+	dev->data->queues[queue_id].queue_stats.dequeue_err_count = 0;
+	return 0;
+}
+
+/* Release ACC101 queue */
+static int
+acc101_queue_release(struct rte_bbdev *dev, uint16_t q_id)
+{
+	struct acc101_device *d = dev->data->dev_private;
+	struct acc101_queue *q = dev->data->queues[q_id].queue_private;
+
+	if (q != NULL) {
+		/* Mark the Queue as un-assigned */
+		d->q_assigned_bit_map[q->qgrp_id] &= (0xFFFFFFFF -
+				(1 << q->aq_id));
+		rte_free(q->companion_ring_addr);
+		rte_free(q->derm_buffer);
+		rte_free(q->lb_in);
+		rte_free(q->lb_out);
+		rte_free(q);
+		dev->data->queues[q_id].queue_private = NULL;
+	}
+
 	return 0;
 }
 
@@ -263,8 +800,12 @@
 }
 
 static const struct rte_bbdev_ops acc101_bbdev_ops = {
+	.setup_queues = acc101_setup_queues,
 	.close = acc101_dev_close,
 	.info_get = acc101_dev_info_get,
+	.queue_setup = acc101_queue_setup,
+	.queue_release = acc101_queue_release,
+	.queue_stop = acc101_queue_stop,
 };
 
 /* ACC101 PCI PF address map */
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.h b/drivers/baseband/acc101/rte_acc101_pmd.h
index 9c0e711..65cab8a 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.h
+++ b/drivers/baseband/acc101/rte_acc101_pmd.h
@@ -54,6 +54,10 @@
 /* Values used in writing to the registers */
 #define ACC101_REG_IRQ_EN_ALL          0x1FF83FF  /* Enable all interrupts */
 
+/* ACC101 Specific Dimensioning */
+#define ACC101_SIZE_64MBYTE            (64*1024*1024)
+/* Number of elements in an Info Ring */
+#define ACC101_INFO_RING_NUM_ENTRIES   1024
 /* Number of elements in HARQ layout memory
  * 128M x 32kB = 4GB addressable memory
  */
@@ -88,6 +92,16 @@
 #define ACC101_DMA_MAX_NUM_POINTERS_IN    7
 #define ACC101_DMA_DESC_PADDING           8
 #define ACC101_FCW_PADDING                12
+#define ACC101_DESC_FCW_OFFSET            192
+#define ACC101_DESC_SIZE                  256
+#define ACC101_DESC_OFFSET                (ACC101_DESC_SIZE / 64)
+#define ACC101_FCW_TE_BLEN                32
+#define ACC101_FCW_TD_BLEN                24
+#define ACC101_FCW_LE_BLEN                32
+#define ACC101_FCW_LD_BLEN                36
+#define ACC101_5GUL_SIZE_0                16
+#define ACC101_5GUL_SIZE_1                40
+#define ACC101_5GUL_OFFSET_0              36
 #define ACC101_COMPANION_PTRS             8
 
 #define ACC101_FCW_VER         2
@@ -479,6 +493,38 @@ struct acc101_registry_addr {
 	.pmon_ctrl_b = HWVfPmBCntrlRegVf,
 };
 
+/* Structure associated with each queue. */
+struct __rte_cache_aligned acc101_queue {
+	union acc101_dma_desc *ring_addr;  /* Virtual address of sw ring */
+	rte_iova_t ring_addr_iova;  /* IOVA address of software ring */
+	uint32_t sw_ring_head;  /* software ring head */
+	uint32_t sw_ring_tail;  /* software ring tail */
+	/* software ring size (descriptors, not bytes) */
+	uint32_t sw_ring_depth;
+	/* mask used to wrap enqueued descriptors on the sw ring */
+	uint32_t sw_ring_wrap_mask;
+	/* Virtual address of companion ring */
+	struct acc101_ptrs *companion_ring_addr;
+	/* MMIO register used to enqueue descriptors */
+	void *mmio_reg_enqueue;
+	uint8_t vf_id;  /* VF ID (max = 63) */
+	uint8_t qgrp_id;  /* Queue Group ID */
+	uint16_t aq_id;  /* Atomic Queue ID */
+	uint16_t aq_depth;  /* Depth of atomic queue */
+	uint32_t aq_enqueued;  /* Count how many "batches" have been enqueued */
+	uint32_t aq_dequeued;  /* Count how many "batches" have been dequeued */
+	uint32_t irq_enable;  /* Enable ops dequeue interrupts if set to 1 */
+	struct rte_mempool *fcw_mempool;  /* FCW mempool */
+	enum rte_bbdev_op_type op_type;  /* Type of this Queue: TE or TD */
+	/* Internal Buffers for loopback input */
+	uint8_t *lb_in;
+	uint8_t *lb_out;
+	rte_iova_t lb_in_addr_iova;
+	rte_iova_t lb_out_addr_iova;
+	int8_t *derm_buffer; /* interim buffer for de-rm in SDK */
+	struct acc101_device *d;
+};
+
 /* Private data structure for each ACC101 device */
 struct acc101_device {
 	void *mmio_base;  /**< Base address of MMIO registers (BAR0) */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v1 5/9] baseband/acc101: add LDPC processing
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
                   ` (3 preceding siblings ...)
  2022-04-04 21:13 ` [PATCH v1 4/9] baseband/acc101: add queue configuration Nicolas Chautru
@ 2022-04-04 21:13 ` Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 6/9] baseband/acc101: support HARQ loopback Nicolas Chautru
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

Add capabilities and functions to perform the actual
LDPC encode and decode processing operations.
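
For context, these operations are driven through the generic bbdev API;
a minimal decode loop could look as follows (sketch only: burst size,
dev_id, queue_id and op_pool are placeholders, op field setup and error
handling are omitted):

    struct rte_bbdev_dec_op *ops[32];
    uint16_t enq, deq = 0;

    rte_bbdev_dec_op_alloc_bulk(op_pool, ops, 32);
    /* ... fill ops[i]->ldpc_dec parameters and mbufs ... */
    enq = rte_bbdev_enqueue_ldpc_dec_ops(dev_id, queue_id, ops, 32);
    while (deq < enq)
        deq += rte_bbdev_dequeue_ldpc_dec_ops(dev_id, queue_id,
                &ops[deq], enq - deq);
    rte_bbdev_dec_op_free_bulk(ops, deq);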

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 doc/guides/bbdevs/acc101.rst             |   22 +
 doc/guides/bbdevs/features/acc101.ini    |    8 +-
 drivers/baseband/acc101/rte_acc101_pmd.c | 1737 ++++++++++++++++++++++++++++++
 drivers/baseband/acc101/rte_acc101_pmd.h |   26 +
 4 files changed, 1789 insertions(+), 4 deletions(-)

diff --git a/doc/guides/bbdevs/acc101.rst b/doc/guides/bbdevs/acc101.rst
index fb5f9f1..ae27bf3 100644
--- a/doc/guides/bbdevs/acc101.rst
+++ b/doc/guides/bbdevs/acc101.rst
@@ -15,12 +15,34 @@ Features
 
 ACC101 5G/4G FEC PMD supports the following features:
 
+- LDPC Encode in the DL (5GNR)
+- LDPC Decode in the UL (5GNR)
 - 16 VFs per PF (physical device)
 - Maximum of 128 queues per VF
 - PCIe Gen-3 x16 Interface
 - MSI
 - SR-IOV
 
+ACC101 5G/4G FEC PMD supports the following BBDEV capabilities:
+
+* For the LDPC encode operation:
+   - ``RTE_BBDEV_LDPC_CRC_24B_ATTACH`` :  set to attach CRC24B to CB(s)
+   - ``RTE_BBDEV_LDPC_RATE_MATCH`` :  if set, rate matching is performed (not bypassed)
+   - ``RTE_BBDEV_LDPC_INTERLEAVER_BYPASS`` : if set then bypass interleaver
+
+* For the LDPC decode operation:
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK`` :  check CRC24B from CB(s)
+   - ``RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE`` :  set to enable early termination of decode iterations
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP`` :  drops CRC24B bits appended while decoding
+   - ``RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE`` :  provides an input for HARQ combining
+   - ``RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE`` :  provides an output for HARQ combining
+   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE`` :  HARQ memory input is internal
+   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE`` :  HARQ memory output is internal
+   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS`` :  HARQ memory includes the filler bits
+   - ``RTE_BBDEV_LDPC_DEC_SCATTER_GATHER`` :  supports scatter-gather for input/output data
+   - ``RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION`` :  supports compression of the HARQ input/output
+   - ``RTE_BBDEV_LDPC_LLR_COMPRESSION`` :  supports LLR input compression
+
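+For example, an application enabling HARQ combining through the device
+internal memory could combine these flags as below (illustrative snippet,
+not part of the driver code):
+
+.. code-block:: c
+
+   /* hypothetical op setup; flag names are from the bbdev API */
+   op->ldpc_dec.op_flags = RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE |
+           RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE |
+           RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE |
+           RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE;
+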
 Installation
 ------------
 
diff --git a/doc/guides/bbdevs/features/acc101.ini b/doc/guides/bbdevs/features/acc101.ini
index 1a62d13..4a88932 100644
--- a/doc/guides/bbdevs/features/acc101.ini
+++ b/doc/guides/bbdevs/features/acc101.ini
@@ -6,8 +6,8 @@
 [Features]
 Turbo Decoder (4G)     = N
 Turbo Encoder (4G)     = N
-LDPC Decoder (5G)      = N
-LDPC Encoder (5G)      = N
-LLR/HARQ Compression   = N
-External DDR Access    = N
+LDPC Decoder (5G)      = Y
+LDPC Encoder (5G)      = Y
+LLR/HARQ Compression   = Y
+External DDR Access    = Y
 HW Accelerated         = Y
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
index e490e97..2861907 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.c
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -762,6 +762,46 @@
 {
 	struct acc101_device *d = dev->data->dev_private;
 	static const struct rte_bbdev_op_cap bbdev_capabilities[] = {
+		{
+			.type   = RTE_BBDEV_OP_LDPC_ENC,
+			.cap.ldpc_enc = {
+				.capability_flags =
+					RTE_BBDEV_LDPC_RATE_MATCH |
+					RTE_BBDEV_LDPC_CRC_24B_ATTACH |
+					RTE_BBDEV_LDPC_INTERLEAVER_BYPASS,
+				.num_buffers_src =
+						RTE_BBDEV_LDPC_MAX_CODE_BLOCKS,
+				.num_buffers_dst =
+						RTE_BBDEV_LDPC_MAX_CODE_BLOCKS,
+			}
+		},
+		{
+			.type   = RTE_BBDEV_OP_LDPC_DEC,
+			.cap.ldpc_dec = {
+			.capability_flags =
+				RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK |
+				RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP |
+				RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE |
+				RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE |
+#ifdef ACC101_EXT_MEM
+				RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE |
+				RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE |
+#endif
+				RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE |
+				RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS |
+				RTE_BBDEV_LDPC_DECODE_BYPASS |
+				RTE_BBDEV_LDPC_DEC_SCATTER_GATHER |
+				RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION |
+				RTE_BBDEV_LDPC_LLR_COMPRESSION,
+			.llr_size = 8,
+			.llr_decimals = 1,
+			.num_buffers_src =
+					RTE_BBDEV_LDPC_MAX_CODE_BLOCKS,
+			.num_buffers_hard_out =
+					RTE_BBDEV_LDPC_MAX_CODE_BLOCKS,
+			.num_buffers_soft_out = 0,
+			}
+		},
 		RTE_BBDEV_END_OF_CAPABILITIES_LIST()
 	};
 
@@ -796,7 +836,11 @@
 	dev_info->cpu_flag_reqs = NULL;
 	dev_info->min_alignment = 64;
 	dev_info->capabilities = bbdev_capabilities;
+#ifdef ACC101_EXT_MEM
+	dev_info->harq_buffer_size = d->ddr_size;
+#else
 	dev_info->harq_buffer_size = 0;
+#endif
 }
 
 static const struct rte_bbdev_ops acc101_bbdev_ops = {
@@ -824,6 +868,1695 @@
 	{.device_id = 0},
 };
 
+/* Read flag value 0/1 from bitmap */
+static inline bool
+check_bit(uint32_t bitmap, uint32_t bitmask)
+{
+	return bitmap & bitmask;
+}
+
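+/* Append len bytes to the tail segment m of a chain headed by m_head,
+ * extending both m's data_len and m_head's pkt_len. Returns a pointer to
+ * the start of the appended region, or NULL when m lacks tailroom.
+ */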
+static inline char *
+mbuf_append(struct rte_mbuf *m_head, struct rte_mbuf *m, uint16_t len)
+{
+	if (unlikely(len > rte_pktmbuf_tailroom(m)))
+		return NULL;
+
+	char *tail = (char *)m->buf_addr + m->data_off + m->data_len;
+	m->data_len = (uint16_t)(m->data_len + len);
+	m_head->pkt_len  = (m_head->pkt_len + len);
+	return tail;
+}
+
+/* Compute value of k0.
+ * Based on 3GPP 38.212 Table 5.4.2.1-2
+ * Starting position of different redundancy versions, k0
+ */
+static inline uint16_t
+get_k0(uint16_t n_cb, uint16_t z_c, uint8_t bg, uint8_t rv_index)
+{
+	if (rv_index == 0)
+		return 0;
+	uint16_t n = (bg == 1 ? ACC101_N_ZC_1 : ACC101_N_ZC_2) * z_c;
+	if (n_cb == n) {
+		if (rv_index == 1)
+			return (bg == 1 ? ACC101_K0_1_1 : ACC101_K0_1_2) * z_c;
+		else if (rv_index == 2)
+			return (bg == 1 ? ACC101_K0_2_1 : ACC101_K0_2_2) * z_c;
+		else
+			return (bg == 1 ? ACC101_K0_3_1 : ACC101_K0_3_2) * z_c;
+	}
+	/* LBRM case - includes a division by N */
+	if (unlikely(z_c == 0))
+		return 0;
+	if (rv_index == 1)
+		return (((bg == 1 ? ACC101_K0_1_1 : ACC101_K0_1_2) * n_cb)
+				/ n) * z_c;
+	else if (rv_index == 2)
+		return (((bg == 1 ? ACC101_K0_2_1 : ACC101_K0_2_2) * n_cb)
+				/ n) * z_c;
+	else
+		return (((bg == 1 ? ACC101_K0_3_1 : ACC101_K0_3_2) * n_cb)
+				/ n) * z_c;
+}
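+
+/* Worked example for the full-buffer branch above, assuming the 3GPP TS
+ * 38.212 Table 5.4.2.1-2 coefficients behind these macros (N_ZC_1 = 66,
+ * K0_2_1 = 33): BG1, Zc = 384 and n_cb = 66 * 384 = 25344 with
+ * rv_index = 2 give k0 = 33 * 384 = 12672.
+ */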
+
+/* Fill in a frame control word for LDPC encoding. */
+static inline void
+acc101_fcw_le_fill(const struct rte_bbdev_enc_op *op,
+		struct acc101_fcw_le *fcw, int num_cb, uint32_t default_e)
+{
+	fcw->qm = op->ldpc_enc.q_m;
+	fcw->nfiller = op->ldpc_enc.n_filler;
+	fcw->BG = (op->ldpc_enc.basegraph - 1);
+	fcw->Zc = op->ldpc_enc.z_c;
+	fcw->ncb = op->ldpc_enc.n_cb;
+	fcw->k0 = get_k0(fcw->ncb, fcw->Zc, op->ldpc_enc.basegraph,
+			op->ldpc_enc.rv_index);
+	fcw->rm_e = (default_e == 0) ? op->ldpc_enc.cb_params.e : default_e;
+	fcw->crc_select = check_bit(op->ldpc_enc.op_flags,
+			RTE_BBDEV_LDPC_CRC_24B_ATTACH);
+	fcw->bypass_intlv = check_bit(op->ldpc_enc.op_flags,
+			RTE_BBDEV_LDPC_INTERLEAVER_BYPASS);
+	fcw->mcb_count = num_cb;
+}
+
+/* Convert offset to harq index for harq_layout structure */
+static inline uint32_t hq_index(uint32_t offset)
+{
+	return (offset >> ACC101_HARQ_OFFSET_SHIFT) & ACC101_HARQ_OFFSET_MASK;
+}
+
+/* Fill in a frame control word for LDPC decoding. */
+static inline void
+acc101_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc101_fcw_ld *fcw,
+		union acc101_harq_layout_data *harq_layout)
+{
+	uint16_t harq_out_length, harq_in_length, ncb_p, k0_p, parity_offset;
+	uint32_t harq_index;
+	uint32_t l;
+
+	fcw->qm = op->ldpc_dec.q_m;
+	fcw->nfiller = op->ldpc_dec.n_filler;
+	fcw->BG = (op->ldpc_dec.basegraph - 1);
+	fcw->Zc = op->ldpc_dec.z_c;
+	fcw->ncb = op->ldpc_dec.n_cb;
+	fcw->k0 = get_k0(fcw->ncb, fcw->Zc, op->ldpc_dec.basegraph,
+			op->ldpc_dec.rv_index);
+	if (op->ldpc_dec.code_block_mode == RTE_BBDEV_CODE_BLOCK)
+		fcw->rm_e = op->ldpc_dec.cb_params.e;
+	else
+		fcw->rm_e = (op->ldpc_dec.tb_params.r <
+				op->ldpc_dec.tb_params.cab) ?
+						op->ldpc_dec.tb_params.ea :
+						op->ldpc_dec.tb_params.eb;
+
+	if (unlikely(check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE) &&
+			(op->ldpc_dec.harq_combined_input.length == 0))) {
+		rte_bbdev_log(WARNING, "Null HARQ input size provided");
+		/* Disable HARQ input in that case to carry forward */
+		op->ldpc_dec.op_flags ^= RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE;
+	}
+
+	fcw->hcin_en = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE);
+	fcw->hcout_en = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE);
+	fcw->crc_select = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK);
+	fcw->bypass_dec = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_DECODE_BYPASS);
+	fcw->bypass_intlv = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS);
+	if (op->ldpc_dec.q_m == 1) {
+		fcw->bypass_intlv = 1;
+		fcw->qm = 2;
+	}
+	fcw->hcin_decomp_mode = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION);
+	fcw->hcout_comp_mode = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION);
+	fcw->llr_pack_mode = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_LLR_COMPRESSION);
+	harq_index = hq_index(op->ldpc_dec.harq_combined_output.offset);
+	if (fcw->hcin_en > 0) {
+		harq_in_length = op->ldpc_dec.harq_combined_input.length;
+		if (fcw->hcin_decomp_mode > 0)
+			harq_in_length = harq_in_length * 8 / 6;
+		harq_in_length = RTE_MIN(harq_in_length, op->ldpc_dec.n_cb
+				- op->ldpc_dec.n_filler);
+		/* Alignment on next 64B - Already enforced from HC output */
+		harq_in_length = RTE_ALIGN_FLOOR(harq_in_length, 64);
+		fcw->hcin_size0 = harq_in_length;
+		fcw->hcin_offset = 0;
+		fcw->hcin_size1 = 0;
+	} else {
+		fcw->hcin_size0 = 0;
+		fcw->hcin_offset = 0;
+		fcw->hcin_size1 = 0;
+	}
+
+	fcw->itmax = op->ldpc_dec.iter_max;
+	fcw->itstop = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE);
+	fcw->synd_precoder = fcw->itstop;
+	/*
+	 * These are all implicitly set
+	 * fcw->synd_post = 0;
+	 * fcw->so_en = 0;
+	 * fcw->so_bypass_rm = 0;
+	 * fcw->so_bypass_intlv = 0;
+	 * fcw->dec_convllr = 0;
+	 * fcw->hcout_convllr = 0;
+	 * fcw->hcout_size1 = 0;
+	 * fcw->so_it = 0;
+	 * fcw->hcout_offset = 0;
+	 * fcw->negstop_th = 0;
+	 * fcw->negstop_it = 0;
+	 * fcw->negstop_en = 0;
+	 * fcw->gain_i = 1;
+	 * fcw->gain_h = 1;
+	 */
+	if (fcw->hcout_en > 0) {
+		parity_offset = (op->ldpc_dec.basegraph == 1 ? 20 : 8)
+			* op->ldpc_dec.z_c - op->ldpc_dec.n_filler;
+		k0_p = (fcw->k0 > parity_offset) ?
+				fcw->k0 - op->ldpc_dec.n_filler : fcw->k0;
+		ncb_p = fcw->ncb - op->ldpc_dec.n_filler;
+		l = RTE_MIN(k0_p + fcw->rm_e, INT16_MAX);
+		harq_out_length = (uint16_t) fcw->hcin_size0;
+		harq_out_length = RTE_MAX(harq_out_length, l);
+		/* Cannot exceed the pruned Ncb circular buffer */
+		harq_out_length = RTE_MIN(harq_out_length, ncb_p);
+		/* Alignment on next 64B */
+		harq_out_length = RTE_ALIGN_CEIL(harq_out_length, 64);
+		fcw->hcout_size0 = harq_out_length;
+		fcw->hcout_size1 = 0;
+		fcw->hcout_offset = 0;
+		harq_layout[harq_index].offset = fcw->hcout_offset;
+		harq_layout[harq_index].size0 = fcw->hcout_size0;
+	} else {
+		fcw->hcout_size0 = 0;
+		fcw->hcout_size1 = 0;
+		fcw->hcout_offset = 0;
+	}
+}
+
+/**
+ * Fills descriptor with data pointers of one block type.
+ *
+ * @param desc
+ *   Pointer to DMA descriptor.
+ * @param input
+ *   Pointer to pointer to input data which will be encoded. It can be changed
+ *   and points to next segment in scatter-gather case.
+ * @param offset
+ *   Input offset in rte_mbuf structure. It is used for calculating the point
+ *   where data is starting.
+ * @param cb_len
+ *   Length of currently processed Code Block
+ * @param seg_total_left
+ *   It indicates how many bytes still left in segment (mbuf) for further
+ *   processing.
+ * @param op_flags
+ *   Store information about device capabilities
+ * @param next_triplet
+ *   Index for ACC101 DMA Descriptor triplet
+ * @param scattergather
+ *   Flag to support scatter-gather for the mbuf
+ *
+ * @return
+ *   Returns index of next triplet on success, other value if lengths of
+ *   pkt and processed cb do not match.
+ *
+ */
+static inline int
+acc101_dma_fill_blk_type_in(struct acc101_dma_req_desc *desc,
+		struct rte_mbuf **input, uint32_t *offset, uint32_t cb_len,
+		uint32_t *seg_total_left, int next_triplet,
+		bool scattergather)
+{
+	uint32_t part_len;
+	struct rte_mbuf *m = *input;
+
+	if (scattergather)
+		part_len = (*seg_total_left < cb_len) ?
+				*seg_total_left : cb_len;
+	else
+		part_len = cb_len;
+	cb_len -= part_len;
+	*seg_total_left -= part_len;
+
+	desc->data_ptrs[next_triplet].address =
+			rte_pktmbuf_iova_offset(m, *offset);
+	desc->data_ptrs[next_triplet].blen = part_len;
+	desc->data_ptrs[next_triplet].blkid = ACC101_DMA_BLKID_IN;
+	desc->data_ptrs[next_triplet].last = 0;
+	desc->data_ptrs[next_triplet].dma_ext = 0;
+	*offset += part_len;
+	next_triplet++;
+
+	while (cb_len > 0) {
+		if (next_triplet < ACC101_DMA_MAX_NUM_POINTERS_IN && m->next != NULL) {
+
+			m = m->next;
+			*seg_total_left = rte_pktmbuf_data_len(m);
+			part_len = (*seg_total_left < cb_len) ?
+					*seg_total_left :
+					cb_len;
+			desc->data_ptrs[next_triplet].address =
+					rte_pktmbuf_iova_offset(m, 0);
+			desc->data_ptrs[next_triplet].blen = part_len;
+			desc->data_ptrs[next_triplet].blkid =
+					ACC101_DMA_BLKID_IN;
+			desc->data_ptrs[next_triplet].last = 0;
+			desc->data_ptrs[next_triplet].dma_ext = 0;
+			cb_len -= part_len;
+			*seg_total_left -= part_len;
+			/* Initializing offset for next segment (mbuf) */
+			*offset = part_len;
+			next_triplet++;
+		} else {
+			rte_bbdev_log(ERR,
+				"Some data still left for processing: "
+				"data_left: %u, next_triplet: %u, next_mbuf: %p",
+				cb_len, next_triplet, m->next);
+			return -EINVAL;
+		}
+	}
+	/* Store the new mbuf as it may have changed in the scatter-gather case */
+	*input = m;
+
+	return next_triplet;
+}
+
+/* Fills descriptor with data pointers of one block type.
+ * Returns index of next triplet on success, other value if lengths of
+ * output data and processed mbuf do not match.
+ */
+static inline int
+acc101_dma_fill_blk_type_out(struct acc101_dma_req_desc *desc,
+		struct rte_mbuf *output, uint32_t out_offset,
+		uint32_t output_len, int next_triplet, int blk_id)
+{
+	desc->data_ptrs[next_triplet].address =
+			rte_pktmbuf_iova_offset(output, out_offset);
+	desc->data_ptrs[next_triplet].blen = output_len;
+	desc->data_ptrs[next_triplet].blkid = blk_id;
+	desc->data_ptrs[next_triplet].last = 0;
+	desc->data_ptrs[next_triplet].dma_ext = 0;
+	next_triplet++;
+
+	return next_triplet;
+}
+
+static inline void
+acc101_header_init(struct acc101_dma_req_desc *desc)
+{
+	desc->word0 = ACC101_DMA_DESC_TYPE;
+	desc->word1 = 0; /**< Timestamp could be disabled */
+	desc->word2 = 0;
+	desc->word3 = 0;
+	desc->numCBs = 1;
+}
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+/* Check if any input data is unexpectedly left for processing */
+static inline int
+check_mbuf_total_left(uint32_t mbuf_total_left)
+{
+	if (mbuf_total_left == 0)
+		return 0;
+	rte_bbdev_log(ERR,
+		"Some date still left for processing: mbuf_total_left = %u",
+		mbuf_total_left);
+	return -EINVAL;
+}
+#endif
+
+static inline int
+acc101_dma_desc_le_fill(struct rte_bbdev_enc_op *op,
+		struct acc101_dma_req_desc *desc, struct rte_mbuf **input,
+		struct rte_mbuf *output, uint32_t *in_offset,
+		uint32_t *out_offset, uint32_t *out_length,
+		uint32_t *mbuf_total_left, uint32_t *seg_total_left)
+{
+	int next_triplet = 1; /* FCW already done */
+	uint16_t K, in_length_in_bits, in_length_in_bytes;
+	struct rte_bbdev_op_ldpc_enc *enc = &op->ldpc_enc;
+
+	acc101_header_init(desc);
+
+	K = (enc->basegraph == 1 ? 22 : 10) * enc->z_c;
+	in_length_in_bits = K - enc->n_filler;
+	if (enc->op_flags & RTE_BBDEV_LDPC_CRC_24B_ATTACH)
+		in_length_in_bits -= 24;
+	in_length_in_bytes = in_length_in_bits >> 3;
+
+	if (unlikely((*mbuf_total_left == 0) ||
+			(*mbuf_total_left < in_length_in_bytes))) {
+		rte_bbdev_log(ERR,
+				"Mismatch between mbuf length and included CB sizes: mbuf len %u, cb len %u",
+				*mbuf_total_left, in_length_in_bytes);
+		return -1;
+	}
+
+	next_triplet = acc101_dma_fill_blk_type_in(desc, input, in_offset,
+			in_length_in_bytes,
+			seg_total_left, next_triplet,
+			false);
+	if (unlikely(next_triplet < 0)) {
+		rte_bbdev_log(ERR,
+				"Mismatch between data to process and mbuf data length in bbdev_op: %p",
+				op);
+		return -1;
+	}
+	desc->data_ptrs[next_triplet - 1].last = 1;
+	desc->m2dlen = next_triplet;
+	*mbuf_total_left -= in_length_in_bytes;
+
+	/* Set output length */
+	/* Integer round up division by 8 */
+	*out_length = (enc->cb_params.e + 7) >> 3;
+
+	next_triplet = acc101_dma_fill_blk_type_out(desc, output, *out_offset,
+			*out_length, next_triplet, ACC101_DMA_BLKID_OUT_ENC);
+	op->ldpc_enc.output.length += *out_length;
+	*out_offset += *out_length;
+	desc->data_ptrs[next_triplet - 1].last = 1;
+	desc->data_ptrs[next_triplet - 1].dma_ext = 0;
+	desc->d2mlen = next_triplet - desc->m2dlen;
+
+	desc->op_addr = op;
+
+	return 0;
+}
+
+static inline int
+acc101_dma_desc_ld_fill(struct rte_bbdev_dec_op *op,
+		struct acc101_dma_req_desc *desc,
+		struct rte_mbuf **input, struct rte_mbuf *h_output,
+		uint32_t *in_offset, uint32_t *h_out_offset,
+		uint32_t *h_out_length, uint32_t *mbuf_total_left,
+		uint32_t *seg_total_left,
+		struct acc101_fcw_ld *fcw)
+{
+	struct rte_bbdev_op_ldpc_dec *dec = &op->ldpc_dec;
+	int next_triplet = 1; /* FCW already done */
+	uint32_t input_length;
+	uint16_t output_length, crc24_overlap = 0;
+	uint16_t sys_cols, K, h_p_size, h_np_size;
+	bool h_comp = check_bit(dec->op_flags,
+			RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION);
+
+	acc101_header_init(desc);
+
+	if (check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP))
+		crc24_overlap = 24;
+
+	/* Compute some LDPC BG lengths */
+	input_length = fcw->rm_e;
+	if (check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_LLR_COMPRESSION))
+		input_length = (input_length * 3 + 3) / 4;
+	sys_cols = (dec->basegraph == 1) ? 22 : 10;
+	K = sys_cols * dec->z_c;
+	output_length = K - dec->n_filler - crc24_overlap;
+
+	if (unlikely((*mbuf_total_left == 0) ||
+			(*mbuf_total_left < input_length))) {
+		rte_bbdev_log(ERR,
+				"Mismatch between mbuf length and included CB sizes: mbuf len %u, cb len %u",
+				*mbuf_total_left, input_length);
+		return -1;
+	}
+
+	next_triplet = acc101_dma_fill_blk_type_in(desc, input,
+			in_offset, input_length,
+			seg_total_left, next_triplet,
+			check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_DEC_SCATTER_GATHER));
+
+	if (unlikely(next_triplet < 0)) {
+		rte_bbdev_log(ERR,
+				"Mismatch between data to process and mbuf data length in bbdev_op: %p",
+				op);
+		return -1;
+	}
+
+	if (check_bit(op->ldpc_dec.op_flags,
+				RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE)) {
+		h_p_size = fcw->hcin_size0 + fcw->hcin_size1;
+		if (h_comp)
+			h_p_size = (h_p_size * 3 + 3) / 4;
+		desc->data_ptrs[next_triplet].address =
+				dec->harq_combined_input.offset;
+		desc->data_ptrs[next_triplet].blen = h_p_size;
+		desc->data_ptrs[next_triplet].blkid = ACC101_DMA_BLKID_IN_HARQ;
+		desc->data_ptrs[next_triplet].dma_ext = 1;
+#ifndef ACC101_EXT_MEM
+		acc101_dma_fill_blk_type_out(
+				desc,
+				op->ldpc_dec.harq_combined_input.data,
+				op->ldpc_dec.harq_combined_input.offset,
+				h_p_size,
+				next_triplet,
+				ACC101_DMA_BLKID_IN_HARQ);
+#endif
+		next_triplet++;
+	}
+
+	desc->data_ptrs[next_triplet - 1].last = 1;
+	desc->m2dlen = next_triplet;
+	*mbuf_total_left -= input_length;
+
+	next_triplet = acc101_dma_fill_blk_type_out(desc, h_output,
+			*h_out_offset, output_length >> 3, next_triplet,
+			ACC101_DMA_BLKID_OUT_HARD);
+
+	if (check_bit(op->ldpc_dec.op_flags,
+				RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE)) {
+		/* Pruned size of the HARQ */
+		h_p_size = fcw->hcout_size0 + fcw->hcout_size1;
+		/* Non-Pruned size of the HARQ */
+		h_np_size = fcw->hcout_offset > 0 ?
+				fcw->hcout_offset + fcw->hcout_size1 :
+				h_p_size;
+		if (h_comp) {
+			h_np_size = (h_np_size * 3 + 3) / 4;
+			h_p_size = (h_p_size * 3 + 3) / 4;
+		}
+		dec->harq_combined_output.length = h_np_size;
+		desc->data_ptrs[next_triplet].address =
+				dec->harq_combined_output.offset;
+		desc->data_ptrs[next_triplet].blen = h_p_size;
+		desc->data_ptrs[next_triplet].blkid = ACC101_DMA_BLKID_OUT_HARQ;
+		desc->data_ptrs[next_triplet].dma_ext = 1;
+#ifndef ACC101_EXT_MEM
+		acc101_dma_fill_blk_type_out(
+				desc,
+				dec->harq_combined_output.data,
+				dec->harq_combined_output.offset,
+				h_p_size,
+				next_triplet,
+				ACC101_DMA_BLKID_OUT_HARQ);
+#endif
+		next_triplet++;
+	}
+
+	*h_out_length = output_length >> 3;
+	dec->hard_output.length += *h_out_length;
+	*h_out_offset += *h_out_length;
+	desc->data_ptrs[next_triplet - 1].last = 1;
+	desc->d2mlen = next_triplet - desc->m2dlen;
+
+	desc->op_addr = op;
+
+	return 0;
+}
+
+static inline void
+acc101_dma_desc_ld_update(struct rte_bbdev_dec_op *op,
+		struct acc101_dma_req_desc *desc,
+		struct rte_mbuf *input, struct rte_mbuf *h_output,
+		uint32_t *in_offset, uint32_t *h_out_offset,
+		uint32_t *h_out_length,
+		union acc101_harq_layout_data *harq_layout)
+{
+	int next_triplet = 1; /* FCW already done */
+	desc->data_ptrs[next_triplet].address =
+			rte_pktmbuf_iova_offset(input, *in_offset);
+	next_triplet++;
+
+	if (check_bit(op->ldpc_dec.op_flags,
+				RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE)) {
+		struct rte_bbdev_op_data hi = op->ldpc_dec.harq_combined_input;
+		desc->data_ptrs[next_triplet].address = hi.offset;
+#ifndef ACC101_EXT_MEM
+		desc->data_ptrs[next_triplet].address =
+				rte_pktmbuf_iova_offset(hi.data, hi.offset);
+#endif
+		next_triplet++;
+	}
+
+	desc->data_ptrs[next_triplet].address =
+			rte_pktmbuf_iova_offset(h_output, *h_out_offset);
+	*h_out_length = desc->data_ptrs[next_triplet].blen;
+	next_triplet++;
+
+	if (check_bit(op->ldpc_dec.op_flags,
+				RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE)) {
+		desc->data_ptrs[next_triplet].address =
+				op->ldpc_dec.harq_combined_output.offset;
+		/* Adjust based on previous operation */
+		struct rte_bbdev_dec_op *prev_op = desc->op_addr;
+		op->ldpc_dec.harq_combined_output.length =
+				prev_op->ldpc_dec.harq_combined_output.length;
+		uint32_t harq_idx = hq_index(
+				op->ldpc_dec.harq_combined_output.offset);
+		uint32_t prev_harq_idx = hq_index(
+				prev_op->ldpc_dec.harq_combined_output.offset);
+		harq_layout[harq_idx].val = harq_layout[prev_harq_idx].val;
+#ifndef ACC101_EXT_MEM
+		struct rte_bbdev_op_data ho =
+				op->ldpc_dec.harq_combined_output;
+		desc->data_ptrs[next_triplet].address =
+				rte_pktmbuf_iova_offset(ho.data, ho.offset);
+#endif
+		next_triplet++;
+	}
+
+	op->ldpc_dec.hard_output.length += *h_out_length;
+	desc->op_addr = op;
+}
+
+
+/* Enqueue a number of operations to HW and update software rings */
+static inline void
+acc101_dma_enqueue(struct acc101_queue *q, uint16_t n,
+		struct rte_bbdev_stats *queue_stats)
+{
+	union acc101_enqueue_reg_fmt enq_req;
+#ifdef RTE_BBDEV_OFFLOAD_COST
+	uint64_t start_time = 0;
+	queue_stats->acc_offload_cycles = 0;
+#else
+	RTE_SET_USED(queue_stats);
+#endif
+
+	enq_req.val = 0;
+	/* Set the address offset: 0b100, i.e. 256B descriptor size in 64B units */
+	enq_req.addr_offset = ACC101_DESC_OFFSET;
+
+	/* Split ops into batches */
+	do {
+		union acc101_dma_desc *desc;
+		uint16_t enq_batch_size;
+		uint64_t offset;
+		rte_iova_t req_elem_addr;
+
+		enq_batch_size = RTE_MIN(n, MAX_ENQ_BATCH_SIZE);
+
+		/* Set flag on last descriptor in a batch */
+		desc = q->ring_addr + ((q->sw_ring_head + enq_batch_size - 1) &
+				q->sw_ring_wrap_mask);
+		desc->req.last_desc_in_batch = 1;
+
+		/* Calculate the 1st descriptor's address */
+		offset = ((q->sw_ring_head & q->sw_ring_wrap_mask) *
+				sizeof(union acc101_dma_desc));
+		req_elem_addr = q->ring_addr_iova + offset;
+
+		/* Fill enqueue struct */
+		enq_req.num_elem = enq_batch_size;
+		/* low 6 bits are not needed */
+		enq_req.req_elem_addr = (uint32_t)(req_elem_addr >> 6);
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+		rte_memdump(stderr, "Req sdone", desc, sizeof(*desc));
+#endif
+		rte_bbdev_log_debug(
+				"Enqueue %u reqs (phys %#"PRIx64") to reg %p",
+				enq_batch_size,
+				req_elem_addr,
+				(void *)q->mmio_reg_enqueue);
+
+		rte_wmb();
+
+#ifdef RTE_BBDEV_OFFLOAD_COST
+		/* Start time measurement for enqueue function offload. */
+		start_time = rte_rdtsc_precise();
+#endif
+		rte_bbdev_log(DEBUG, "Debug : MMIO Enqueue");
+		mmio_write(q->mmio_reg_enqueue, enq_req.val);
+
+#ifdef RTE_BBDEV_OFFLOAD_COST
+		queue_stats->acc_offload_cycles +=
+				rte_rdtsc_precise() - start_time;
+#endif
+
+		q->aq_enqueued++;
+		q->sw_ring_head += enq_batch_size;
+		n -= enq_batch_size;
+
+	} while (n);
+
+
+}
+
+/* Enqueue a number of encode operations for the ACC101 device in CB mode,
+ * multiplexed on the same descriptor
+ */
+static inline int
+enqueue_ldpc_enc_n_op_cb(struct acc101_queue *q, struct rte_bbdev_enc_op **ops,
+		uint16_t total_enqueued_descs, int16_t num)
+{
+	union acc101_dma_desc *desc = NULL;
+	uint32_t out_length;
+	struct rte_mbuf *output_head, *output;
+	int i, next_triplet;
+	uint16_t  in_length_in_bytes;
+	struct rte_bbdev_op_ldpc_enc *enc = &ops[0]->ldpc_enc;
+
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_descs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	acc101_fcw_le_fill(ops[0], &desc->req.fcw_le, num, 0);
+
+	/* Header init could alternatively be deferred to dequeue (polling) time */
+	acc101_header_init(&desc->req);
+	desc->req.numCBs = num;
+
+	in_length_in_bytes = ops[0]->ldpc_enc.input.data->data_len;
+	out_length = (enc->cb_params.e + 7) >> 3;
+	desc->req.m2dlen = 1 + num;
+	desc->req.d2mlen = num;
+	next_triplet = 1;
+
+	for (i = 0; i < num; i++) {
+		desc->req.data_ptrs[next_triplet].address =
+			rte_pktmbuf_iova_offset(ops[i]->ldpc_enc.input.data, 0);
+		desc->req.data_ptrs[next_triplet].blen = in_length_in_bytes;
+		next_triplet++;
+		desc->req.data_ptrs[next_triplet].address =
+				rte_pktmbuf_iova_offset(
+				ops[i]->ldpc_enc.output.data, 0);
+		desc->req.data_ptrs[next_triplet].blen = out_length;
+		next_triplet++;
+		ops[i]->ldpc_enc.output.length = out_length;
+		output_head = output = ops[i]->ldpc_enc.output.data;
+		mbuf_append(output_head, output, out_length);
+		output->data_len = out_length;
+	}
+
+	desc->req.op_addr = ops[0];
+	/* Keep track of pointers even when multiplexed in single descriptor */
+	struct acc101_ptrs *context_ptrs = q->companion_ring_addr + desc_idx;
+	for (i = 0; i < num; i++)
+		context_ptrs->ptr[i].op_addr = ops[i];
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	rte_memdump(stderr, "FCW", &desc->req.fcw_le,
+			sizeof(desc->req.fcw_le) - 8);
+	rte_memdump(stderr, "Req Desc.", desc, sizeof(*desc));
+#endif
+
+	/* The multiplexed CBs (ops) were successfully prepared to enqueue */
+	return num;
+}
+
+/* Enqueue one encode operation for the ACC101 device for a partial TB;
+ * all code blocks have the same configuration, multiplexed on the same
+ * descriptor
+ */
+static inline void
+enqueue_ldpc_enc_part_tb(struct acc101_queue *q, struct rte_bbdev_enc_op *op,
+		uint16_t total_enqueued_descs, int16_t num_cbs, uint32_t e,
+		uint16_t in_len_B, uint32_t out_len_B, uint32_t *in_offset,
+		uint32_t *out_offset)
+{
+
+	union acc101_dma_desc *desc = NULL;
+	struct rte_mbuf *output_head, *output;
+	int i, next_triplet;
+	struct rte_bbdev_op_ldpc_enc *enc = &op->ldpc_enc;
+
+
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_descs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	acc101_fcw_le_fill(op, &desc->req.fcw_le, num_cbs, e);
+
+	/* Header init could alternatively be deferred to dequeue (polling) time */
+	acc101_header_init(&desc->req);
+	desc->req.numCBs = num_cbs;
+
+	desc->req.m2dlen = 1 + num_cbs;
+	desc->req.d2mlen = num_cbs;
+	next_triplet = 1;
+
+	for (i = 0; i < num_cbs; i++) {
+		desc->req.data_ptrs[next_triplet].address =
+			rte_pktmbuf_iova_offset(enc->input.data,
+					*in_offset);
+		*in_offset += in_len_B;
+		desc->req.data_ptrs[next_triplet].blen = in_len_B;
+		next_triplet++;
+		desc->req.data_ptrs[next_triplet].address =
+				rte_pktmbuf_iova_offset(
+						enc->output.data, *out_offset);
+		*out_offset += out_len_B;
+		desc->req.data_ptrs[next_triplet].blen = out_len_B;
+		next_triplet++;
+		enc->output.length += out_len_B;
+		output_head = output = enc->output.data;
+		mbuf_append(output_head, output, out_len_B);
+	}
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	rte_memdump(stderr, "FCW", &desc->req.fcw_le,
+			sizeof(desc->req.fcw_le) - 8);
+	rte_memdump(stderr, "Req Desc.", desc, sizeof(*desc));
+#endif
+
+}
+
+/* Enqueue one encode operation for the ACC101 device in CB mode */
+static inline int
+enqueue_ldpc_enc_one_op_cb(struct acc101_queue *q, struct rte_bbdev_enc_op *op,
+		uint16_t total_enqueued_cbs)
+{
+	union acc101_dma_desc *desc = NULL;
+	int ret;
+	uint32_t in_offset, out_offset, out_length, mbuf_total_left,
+		seg_total_left;
+	struct rte_mbuf *input, *output_head, *output;
+
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	acc101_fcw_le_fill(op, &desc->req.fcw_le, 1, 0);
+
+	input = op->ldpc_enc.input.data;
+	output_head = output = op->ldpc_enc.output.data;
+	in_offset = op->ldpc_enc.input.offset;
+	out_offset = op->ldpc_enc.output.offset;
+	out_length = 0;
+	mbuf_total_left = op->ldpc_enc.input.length;
+	seg_total_left = rte_pktmbuf_data_len(op->ldpc_enc.input.data)
+			- in_offset;
+
+	ret = acc101_dma_desc_le_fill(op, &desc->req, &input, output,
+			&in_offset, &out_offset, &out_length, &mbuf_total_left,
+			&seg_total_left);
+
+	if (unlikely(ret < 0))
+		return ret;
+
+	mbuf_append(output_head, output, out_length);
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	rte_memdump(stderr, "FCW", &desc->req.fcw_le,
+			sizeof(desc->req.fcw_le) - 8);
+	rte_memdump(stderr, "Req Desc.", desc, sizeof(*desc));
+
+	if (check_mbuf_total_left(mbuf_total_left) != 0)
+		return -EINVAL;
+#endif
+	/* One CB (one op) was successfully prepared to enqueue */
+	return 1;
+}
+
+/* Enqueue one encode operation for the ACC101 device in TB mode.
+ * Returns the number of descriptors used.
+ */
+static inline int
+enqueue_ldpc_enc_one_op_tb(struct acc101_queue *q, struct rte_bbdev_enc_op *op,
+		uint16_t enq_descs, uint8_t cbs_in_tb)
+{
+	uint8_t num_a, num_b;
+	uint16_t desc_idx;
+	uint8_t r = op->ldpc_enc.tb_params.r;
+	uint8_t cab =  op->ldpc_enc.tb_params.cab;
+	union acc101_dma_desc *desc;
+	uint16_t init_enq_descs = enq_descs;
+	uint16_t input_len_B = ((op->ldpc_enc.basegraph == 1 ? 22 : 10) *
+			op->ldpc_enc.z_c - op->ldpc_enc.n_filler) >> 3;
+	if (check_bit(op->ldpc_enc.op_flags, RTE_BBDEV_LDPC_CRC_24B_ATTACH))
+		input_len_B -= 3;
+
+	if (r < cab) {
+		num_a = cab - r;
+		num_b = cbs_in_tb - cab;
+	} else {
+		num_a = 0;
+		num_b = cbs_in_tb - r;
+	}
+	uint32_t in_offset = 0, out_offset = 0;
+
+	while (num_a > 0) {
+		uint32_t e = op->ldpc_enc.tb_params.ea;
+		uint32_t out_len_B = (e + 7) >> 3;
+		uint8_t enq = RTE_MIN(num_a, ACC101_MUX_5GDL_DESC);
+		num_a -= enq;
+		enqueue_ldpc_enc_part_tb(q, op, enq_descs, enq, e, input_len_B,
+				out_len_B, &in_offset, &out_offset);
+		enq_descs++;
+	}
+	while (num_b > 0) {
+		uint32_t e = op->ldpc_enc.tb_params.eb;
+		uint32_t out_len_B = (e + 7) >> 3;
+		uint8_t enq = RTE_MIN(num_b, ACC101_MUX_5GDL_DESC);
+		num_b -= enq;
+		enqueue_ldpc_enc_part_tb(q, op, enq_descs, enq, e, input_len_B,
+				out_len_B, &in_offset, &out_offset);
+		enq_descs++;
+	}
+
+	uint16_t return_descs = enq_descs - init_enq_descs;
+	/* Keep total number of CBs in first TB */
+	desc_idx = ((q->sw_ring_head + init_enq_descs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	desc->req.cbs_in_tb = return_descs; /** Actual number of descriptors */
+	desc->req.op_addr = op;
+
+	/* Set SDone on last CB descriptor for TB mode. */
+	desc_idx = ((q->sw_ring_head + enq_descs - 1)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	desc->req.sdone_enable = 1;
+	desc->req.irq_enable = q->irq_enable;
+	desc->req.op_addr = op;
+	return return_descs;
+}
+
+/** Enqueue one decode operation for the ACC101 device in CB mode */
+static inline int
+enqueue_ldpc_dec_one_op_cb(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
+		uint16_t total_enqueued_cbs, bool same_op,
+		struct rte_bbdev_queue_data *q_data)
+{
+	RTE_SET_USED(q_data);
+	int ret;
+
+	union acc101_dma_desc *desc;
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	struct rte_mbuf *input, *h_output_head, *h_output;
+	uint32_t in_offset, h_out_offset, mbuf_total_left, h_out_length = 0;
+	input = op->ldpc_dec.input.data;
+	h_output_head = h_output = op->ldpc_dec.hard_output.data;
+	in_offset = op->ldpc_dec.input.offset;
+	h_out_offset = op->ldpc_dec.hard_output.offset;
+	mbuf_total_left = op->ldpc_dec.input.length;
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (unlikely(input == NULL)) {
+		rte_bbdev_log(ERR, "Invalid mbuf pointer");
+		return -EFAULT;
+	}
+#endif
+	union acc101_harq_layout_data *harq_layout = q->d->harq_layout;
+
+	if (same_op) {
+		union acc101_dma_desc *prev_desc;
+		desc_idx = ((q->sw_ring_head + total_enqueued_cbs - 1)
+				& q->sw_ring_wrap_mask);
+		prev_desc = q->ring_addr + desc_idx;
+		uint8_t *prev_ptr = (uint8_t *) prev_desc;
+		uint8_t *new_ptr = (uint8_t *) desc;
+		/* Copy first 4 words and BDESCs */
+		rte_memcpy(new_ptr, prev_ptr, ACC101_5GUL_SIZE_0);
+		rte_memcpy(new_ptr + ACC101_5GUL_OFFSET_0,
+				prev_ptr + ACC101_5GUL_OFFSET_0,
+				ACC101_5GUL_SIZE_1);
+		desc->req.op_addr = prev_desc->req.op_addr;
+		/* Copy FCW */
+		rte_memcpy(new_ptr + ACC101_DESC_FCW_OFFSET,
+				prev_ptr + ACC101_DESC_FCW_OFFSET,
+				ACC101_FCW_LD_BLEN);
+		acc101_dma_desc_ld_update(op, &desc->req, input, h_output,
+				&in_offset, &h_out_offset,
+				&h_out_length, harq_layout);
+	} else {
+		struct acc101_fcw_ld *fcw;
+		uint32_t seg_total_left;
+		fcw = &desc->req.fcw_ld;
+		acc101_fcw_ld_fill(op, fcw, harq_layout);
+
+		/* Special handling when using mbuf or not */
+		if (check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_DEC_SCATTER_GATHER))
+			seg_total_left = rte_pktmbuf_data_len(input)
+					- in_offset;
+		else
+			seg_total_left = fcw->rm_e;
+
+		ret = acc101_dma_desc_ld_fill(op, &desc->req, &input, h_output,
+				&in_offset, &h_out_offset,
+				&h_out_length, &mbuf_total_left,
+				&seg_total_left, fcw);
+		if (unlikely(ret < 0))
+			return ret;
+	}
+
+	/* Hard output */
+	mbuf_append(h_output_head, h_output, h_out_length);
+#ifndef ACC101_EXT_MEM
+	if (op->ldpc_dec.harq_combined_output.length > 0) {
+		/* Push the HARQ output into host memory */
+		struct rte_mbuf *hq_output_head, *hq_output;
+		hq_output_head = op->ldpc_dec.harq_combined_output.data;
+		hq_output = op->ldpc_dec.harq_combined_output.data;
+		mbuf_append(hq_output_head, hq_output,
+				op->ldpc_dec.harq_combined_output.length);
+	}
+#endif
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	rte_memdump(stderr, "FCW", &desc->req.fcw_ld,
+			sizeof(desc->req.fcw_ld));
+	rte_memdump(stderr, "Req Desc.", desc, sizeof(*desc));
+#endif
+
+	/* One CB (one op) was successfully prepared to enqueue */
+	return 1;
+}
+
+/* Enqueue one decode operation for ACC101 device in TB mode */
+static inline int
+enqueue_ldpc_dec_one_op_tb(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
+		uint16_t total_enqueued_cbs, uint8_t cbs_in_tb)
+{
+	union acc101_dma_desc *desc = NULL;
+	union acc101_dma_desc *desc_first = NULL;
+	int ret;
+	uint8_t r, c;
+	uint32_t in_offset, h_out_offset,
+		h_out_length, mbuf_total_left, seg_total_left;
+	struct rte_mbuf *input, *h_output_head, *h_output;
+	uint16_t current_enqueued_cbs = 0;
+
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	desc_first = desc;
+	uint64_t fcw_offset = (desc_idx << 8) + ACC101_DESC_FCW_OFFSET;
+	union acc101_harq_layout_data *harq_layout = q->d->harq_layout;
+	acc101_fcw_ld_fill(op, &desc->req.fcw_ld, harq_layout);
+
+	input = op->ldpc_dec.input.data;
+	h_output_head = h_output = op->ldpc_dec.hard_output.data;
+	in_offset = op->ldpc_dec.input.offset;
+	h_out_offset = op->ldpc_dec.hard_output.offset;
+	h_out_length = 0;
+	mbuf_total_left = op->ldpc_dec.input.length;
+	c = op->ldpc_dec.tb_params.c;
+	r = op->ldpc_dec.tb_params.r;
+
+	while (mbuf_total_left > 0 && r < c) {
+		if (check_bit(op->ldpc_dec.op_flags,
+				RTE_BBDEV_LDPC_DEC_SCATTER_GATHER))
+			seg_total_left = rte_pktmbuf_data_len(input)
+					- in_offset;
+		else
+			seg_total_left = op->ldpc_dec.input.length;
+		/* Set up DMA descriptor */
+		desc = q->ring_addr + ((q->sw_ring_head + total_enqueued_cbs)
+				& q->sw_ring_wrap_mask);
+		desc->req.data_ptrs[0].address = q->ring_addr_iova + fcw_offset;
+		desc->req.data_ptrs[0].blen = ACC101_FCW_LD_BLEN;
+		rte_memcpy(&desc->req.fcw_ld, &desc_first->req.fcw_ld,
+				ACC101_FCW_LD_BLEN);
+		ret = acc101_dma_desc_ld_fill(op, &desc->req, &input,
+				h_output, &in_offset, &h_out_offset,
+				&h_out_length,
+				&mbuf_total_left, &seg_total_left,
+				&desc->req.fcw_ld);
+
+		if (unlikely(ret < 0))
+			return ret;
+
+		/* Hard output */
+		mbuf_append(h_output_head, h_output, h_out_length);
+
+		/* Set total number of CBs in TB */
+		desc->req.cbs_in_tb = cbs_in_tb;
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+		rte_memdump(stderr, "FCW", &desc->req.fcw_td,
+				sizeof(desc->req.fcw_td) - 8);
+		rte_memdump(stderr, "Req Desc.", desc, sizeof(*desc));
+#endif
+		if (check_bit(op->ldpc_dec.op_flags,
+				RTE_BBDEV_LDPC_DEC_SCATTER_GATHER)
+				&& (seg_total_left == 0)) {
+			/* Go to the next mbuf */
+			input = input->next;
+			in_offset = 0;
+			h_output = h_output->next;
+			h_out_offset = 0;
+		}
+		total_enqueued_cbs++;
+		current_enqueued_cbs++;
+		r++;
+	}
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (check_mbuf_total_left(mbuf_total_left) != 0)
+		return -EINVAL;
+#endif
+	/* Set SDone on last CB descriptor for TB mode */
+	desc->req.sdone_enable = 1;
+	desc->req.irq_enable = q->irq_enable;
+
+	return current_enqueued_cbs;
+}
+
+/* Calculates number of CBs in processed encoder TB based on 'r' and input
+ * length.
+ */
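+/* For example (illustrative values): with BG 1, Zc = 384 and no filler
+ * bits, k = 22 * 384 = 8448, so each CB consumes (8448 - 24) / 8 = 1053
+ * input bytes when CRC24B is attached.
+ */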
+static inline uint8_t
+get_num_cbs_in_tb_ldpc_enc(struct rte_bbdev_op_ldpc_enc *ldpc_enc)
+{
+	uint8_t c, r, crc24_bits = 0;
+	uint16_t k = (ldpc_enc->basegraph == 1 ? 22 : 10) * ldpc_enc->z_c
+		- ldpc_enc->n_filler;
+	uint8_t cbs_in_tb = 0;
+	int32_t length;
+
+	length = ldpc_enc->input.length;
+	r = ldpc_enc->tb_params.r;
+	c = ldpc_enc->tb_params.c;
+	crc24_bits = 0;
+	if (check_bit(ldpc_enc->op_flags, RTE_BBDEV_LDPC_CRC_24B_ATTACH))
+		crc24_bits = 24;
+	while (length > 0 && r < c) {
+		length -= (k - crc24_bits) >> 3;
+		r++;
+		cbs_in_tb++;
+	}
+	return cbs_in_tb;
+}
+
+/* Calculates number of CBs in processed decoder TB based on 'r' and input
+ * length.
+ */
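+/* For example (illustrative values): with ea = eb = 5160,
+ * input.length = 30960, r = 0 and c >= 6, the loop counts
+ * 30960 / 5160 = 6 code blocks in the TB.
+ */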
+static inline uint16_t
+get_num_cbs_in_tb_ldpc_dec(struct rte_bbdev_op_ldpc_dec *ldpc_dec)
+{
+	uint16_t r, cbs_in_tb = 0;
+	int32_t length = ldpc_dec->input.length;
+	r = ldpc_dec->tb_params.r;
+	while (length > 0 && r < ldpc_dec->tb_params.c) {
+		length -=  (r < ldpc_dec->tb_params.cab) ?
+				ldpc_dec->tb_params.ea :
+				ldpc_dec->tb_params.eb;
+		r++;
+		cbs_in_tb++;
+	}
+	return cbs_in_tb;
+}
+
+/* Number of descriptors available in the ring to enqueue */
+static uint32_t
+acc101_ring_avail_enq(struct acc101_queue *q)
+{
+	return (q->sw_ring_depth - 1 + q->sw_ring_tail - q->sw_ring_head) % q->sw_ring_depth;
+}
+
+/* Number of descriptors in the ring available to dequeue */
+static uint32_t
+acc101_ring_avail_deq(struct acc101_queue *q)
+{
+	return (q->sw_ring_depth + q->sw_ring_head - q->sw_ring_tail) % q->sw_ring_depth;
+}
+
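+/* Worked example with illustrative values: for sw_ring_depth = 16,
+ * head = 5 and tail = 3, the enqueue side sees (16 - 1 + 3 - 5) % 16 = 13
+ * free descriptors and the dequeue side sees (16 + 5 - 3) % 16 = 2
+ * descriptors pending completion.
+ */
+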
+/* Check how many encode operations can be multiplexed under a common FCW.
+ * Returns the number of operations that can share a single descriptor.
+ */
+static inline int16_t
+check_mux(struct rte_bbdev_enc_op **ops, uint16_t num)
+{
+	uint16_t i;
+	if (num <= 1)
+		return 1;
+	for (i = 1; i < num; ++i) {
+		/* Only mux compatible code blocks */
+		if (memcmp((uint8_t *)(&ops[i]->ldpc_enc) + ACC101_ENC_OFFSET,
+				(uint8_t *)(&ops[0]->ldpc_enc) +
+				ACC101_ENC_OFFSET,
+				ACC101_CMP_ENC_SIZE) != 0)
+			return i;
+	}
+	/* Avoid multiplexing frames with a small input size */
+	int Kp = (ops[0]->ldpc_enc.basegraph == 1 ? 22 : 10) *
+			ops[0]->ldpc_enc.z_c - ops[0]->ldpc_enc.n_filler;
+	if (Kp <= ACC101_LIMIT_DL_MUX_BITS)
+		return 1;
+	return num;
+}
+
+/* Enqueue encode operations for ACC101 device in CB mode. */
+static inline uint16_t
+acc101_enqueue_ldpc_enc_cb(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	int32_t avail = acc101_ring_avail_enq(q);
+	uint16_t i = 0;
+	union acc101_dma_desc *desc;
+	int ret, desc_idx = 0;
+	int16_t enq, left = num;
+
+	while (left > 0) {
+		if (unlikely(avail < 1))
+			break;
+		avail--;
+		enq = RTE_MIN(left, ACC101_MUX_5GDL_DESC);
+		enq = check_mux(&ops[i], enq);
+		if (enq > 1) {
+			ret = enqueue_ldpc_enc_n_op_cb(q, &ops[i],
+					desc_idx, enq);
+			if (ret < 0)
+				break;
+			i += enq;
+		} else {
+			ret = enqueue_ldpc_enc_one_op_cb(q, ops[i], desc_idx);
+			if (ret < 0)
+				break;
+			i++;
+		}
+		desc_idx++;
+		left = num - i;
+	}
+
+	if (unlikely(i == 0))
+		return 0; /* Nothing to enqueue */
+
+	/* Set SDone in last CB in enqueued ops for CB mode */
+	desc = q->ring_addr + ((q->sw_ring_head + desc_idx - 1)
+			& q->sw_ring_wrap_mask);
+	desc->req.sdone_enable = 1;
+	desc->req.irq_enable = q->irq_enable;
+
+	acc101_dma_enqueue(q, desc_idx, &q_data->queue_stats);
+
+	/* Update stats */
+	q_data->queue_stats.enqueued_count += i;
+	q_data->queue_stats.enqueue_err_count += num - i;
+
+	return i;
+}
+
+/* Enqueue LDPC encode operations for ACC101 device in TB mode. */
+static uint16_t
+acc101_enqueue_ldpc_enc_tb(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	int32_t avail = acc101_ring_avail_enq(q);
+	uint16_t i, enqueued_descs = 0;
+	uint8_t cbs_in_tb;
+	int descs_used;
+
+	for (i = 0; i < num; ++i) {
+		cbs_in_tb = get_num_cbs_in_tb_ldpc_enc(&ops[i]->ldpc_enc);
+		/* Check if there is available space for further processing */
+		if (unlikely(avail - cbs_in_tb < 0))
+			break;
+
+		descs_used = enqueue_ldpc_enc_one_op_tb(q, ops[i],
+				enqueued_descs, cbs_in_tb);
+		if (descs_used < 0)
+			break;
+		enqueued_descs += descs_used;
+		avail -= descs_used;
+	}
+	if (unlikely(enqueued_descs == 0))
+		return 0; /* Nothing to enqueue */
+
+	acc101_dma_enqueue(q, enqueued_descs, &q_data->queue_stats);
+
+	/* Update stats */
+	q_data->queue_stats.enqueued_count += i;
+	q_data->queue_stats.enqueue_err_count += num - i;
+
+	return i;
+}
+
+/* Check room in the AQ for batches enqueued into the Qmgr */
+static int32_t
+acc101_aq_avail(struct rte_bbdev_queue_data *q_data, uint16_t num_ops)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	int32_t aq_avail = q->aq_depth -
+			((q->aq_enqueued - q->aq_dequeued +
+			ACC101_MAX_QUEUE_DEPTH) % ACC101_MAX_QUEUE_DEPTH)
+			- (num_ops >> 7);
+	return aq_avail;
+}
+
+/* Enqueue encode operations for ACC101 device. */
+static uint16_t
+acc101_enqueue_ldpc_enc(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t num)
+{
+	uint16_t ret;
+	int32_t aq_avail = acc101_aq_avail(q_data, num);
+	if (unlikely((aq_avail <= 0) || (num == 0)))
+		return 0;
+	if (ops[0]->ldpc_enc.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
+		ret = acc101_enqueue_ldpc_enc_tb(q_data, ops, num);
+	else
+		ret = acc101_enqueue_ldpc_enc_cb(q_data, ops, num);
+	return ret;
+}
+
+/* Enqueue decode operations for ACC101 device in TB mode */
+static uint16_t
+acc101_enqueue_ldpc_dec_tb(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	int32_t avail = acc101_ring_avail_enq(q);
+	uint16_t i, enqueued_cbs = 0;
+	uint8_t cbs_in_tb;
+	int ret;
+
+	for (i = 0; i < num; ++i) {
+		cbs_in_tb = get_num_cbs_in_tb_ldpc_dec(&ops[i]->ldpc_dec);
+		/* Check if there is available space for further processing */
+		if (unlikely(avail - cbs_in_tb < 0))
+			break;
+		avail -= cbs_in_tb;
+
+		ret = enqueue_ldpc_dec_one_op_tb(q, ops[i],
+				enqueued_cbs, cbs_in_tb);
+		if (ret < 0)
+			break;
+		enqueued_cbs += ret;
+	}
+	if (unlikely(enqueued_cbs == 0))
+		return 0; /* Nothing to enqueue */
+
+	acc101_dma_enqueue(q, enqueued_cbs, &q_data->queue_stats);
+
+	/* Update stats */
+	q_data->queue_stats.enqueued_count += i;
+	q_data->queue_stats.enqueue_err_count += num - i;
+	return i;
+}
+
+/* Enqueue decode operations for ACC101 device in CB mode */
+static uint16_t
+acc101_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	int32_t avail = acc101_ring_avail_enq(q);
+	uint16_t i;
+	union acc101_dma_desc *desc;
+	int ret;
+	bool same_op = false;
+	for (i = 0; i < num; ++i) {
+		/* Check if there is available space for further processing */
+		if (unlikely(avail < 1))
+			break;
+		avail -= 1;
+		rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
+			i, ops[i]->ldpc_dec.op_flags, ops[i]->ldpc_dec.rv_index,
+			ops[i]->ldpc_dec.iter_max, ops[i]->ldpc_dec.iter_count,
+			ops[i]->ldpc_dec.basegraph, ops[i]->ldpc_dec.z_c,
+			ops[i]->ldpc_dec.n_cb, ops[i]->ldpc_dec.q_m,
+			ops[i]->ldpc_dec.n_filler, ops[i]->ldpc_dec.cb_params.e,
+			same_op);
+		ret = enqueue_ldpc_dec_one_op_cb(q, ops[i], i, same_op, q_data);
+		if (ret < 0)
+			break;
+	}
+
+	if (unlikely(i == 0))
+		return 0; /* Nothing to enqueue */
+
+	/* Set SDone in last CB in enqueued ops for CB mode */
+	desc = q->ring_addr + ((q->sw_ring_head + i - 1)
+			& q->sw_ring_wrap_mask);
+
+	desc->req.sdone_enable = 1;
+	desc->req.irq_enable = q->irq_enable;
+
+	acc101_dma_enqueue(q, i, &q_data->queue_stats);
+
+	/* Update stats */
+	q_data->queue_stats.enqueued_count += i;
+	q_data->queue_stats.enqueue_err_count += num - i;
+	return i;
+}
+
+/* Enqueue decode operations for ACC101 device. */
+static uint16_t
+acc101_enqueue_ldpc_dec(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t num)
+{
+	uint16_t ret;
+	int32_t aq_avail = acc101_aq_avail(q_data, num);
+	if (unlikely((aq_avail <= 0) || (num == 0)))
+		return 0;
+	if (ops[0]->ldpc_dec.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
+		ret = acc101_enqueue_ldpc_dec_tb(q_data, ops, num);
+	else
+		ret = acc101_enqueue_ldpc_dec_cb(q_data, ops, num);
+	return ret;
+}
+
+/* Dequeue one encode operation from ACC101 device in CB mode */
+static inline int
+dequeue_enc_one_op_cb(struct acc101_queue *q, struct rte_bbdev_enc_op **ref_op,
+		uint16_t *dequeued_ops, uint32_t *aq_dequeued,
+		uint16_t *dequeued_descs)
+{
+	union acc101_dma_desc *desc, atom_desc;
+	union acc101_dma_rsp_desc rsp;
+	struct rte_bbdev_enc_op *op;
+	int i;
+	int desc_idx = ((q->sw_ring_tail + *dequeued_descs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+			__ATOMIC_RELAXED);
+
+	/* Check fdone bit */
+	if (!(atom_desc.rsp.val & ACC101_FDONE))
+		return -1;
+
+	rsp.val = atom_desc.rsp.val;
+	rte_bbdev_log_debug("Resp. desc %p: %x num %d\n",
+			desc, rsp.val, desc->req.numCBs);
+
+	/* Dequeue */
+	op = desc->req.op_addr;
+
+	/* Clear the status; it will be set based on the response */
+	op->status = 0;
+	op->status |= ((rsp.dma_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
+	op->status |= ((rsp.fcw_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
+
+	if (desc->req.last_desc_in_batch) {
+		(*aq_dequeued)++;
+		desc->req.last_desc_in_batch = 0;
+	}
+	desc->rsp.val = ACC101_DMA_DESC_TYPE;
+	desc->rsp.add_info_0 = 0; /* Reserved bits */
+	desc->rsp.add_info_1 = 0; /* Reserved bits */
+
+	ref_op[0] = op;
+	struct acc101_ptrs *context_ptrs = q->companion_ring_addr + desc_idx;
+	for (i = 1 ; i < desc->req.numCBs; i++)
+		ref_op[i] = context_ptrs->ptr[i].op_addr;
+
+	/* One descriptor was successfully dequeued, covering numCBs muxed ops */
+	(*dequeued_descs)++;
+	*dequeued_ops += desc->req.numCBs;
+	return desc->req.numCBs;
+}
+
+/* Dequeue one LDPC encode operation from ACC101 device in TB mode.
+ * That operation may cover multiple descriptors.
+ */
+static inline int
+dequeue_enc_one_op_tb(struct acc101_queue *q, struct rte_bbdev_enc_op **ref_op,
+		uint16_t *dequeued_ops, uint32_t *aq_dequeued,
+		uint16_t *dequeued_descs)
+{
+	union acc101_dma_desc *desc, *last_desc, atom_desc;
+	union acc101_dma_rsp_desc rsp;
+	struct rte_bbdev_enc_op *op;
+	uint8_t i = 0;
+	uint16_t current_dequeued_descs = 0, descs_in_tb;
+
+	desc = q->ring_addr + ((q->sw_ring_tail + *dequeued_descs)
+			& q->sw_ring_wrap_mask);
+	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+			__ATOMIC_RELAXED);
+
+	/* Check fdone bit */
+	if (!(atom_desc.rsp.val & ACC101_FDONE))
+		return -1;
+
+	/* Get number of CBs in dequeued TB */
+	descs_in_tb = desc->req.cbs_in_tb;
+	/* Get last CB */
+	last_desc = q->ring_addr + ((q->sw_ring_tail
+			+ *dequeued_descs + descs_in_tb - 1)
+			& q->sw_ring_wrap_mask);
+	/* Check if the last CB in the TB is ready to dequeue (and thus the
+	 * whole TB is), by checking the SDone bit. If not, return.
+	 */
+	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)last_desc,
+			__ATOMIC_RELAXED);
+	if (!(atom_desc.rsp.val & ACC101_SDONE))
+		return -1;
+
+	/* Dequeue */
+	op = desc->req.op_addr;
+
+	/* Clear the status; it will be set based on the response */
+	op->status = 0;
+
+	while (i < descs_in_tb) {
+		desc = q->ring_addr + ((q->sw_ring_tail
+				+ *dequeued_descs)
+				& q->sw_ring_wrap_mask);
+		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+				__ATOMIC_RELAXED);
+		rsp.val = atom_desc.rsp.val;
+		rte_bbdev_log_debug("Resp. desc %p: %x descs %d cbs %d\n",
+				desc,
+				rsp.val, descs_in_tb,
+				desc->req.numCBs);
+
+		op->status |= ((rsp.dma_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
+		op->status |= ((rsp.fcw_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
+
+		if (desc->req.last_desc_in_batch) {
+			(*aq_dequeued)++;
+			desc->req.last_desc_in_batch = 0;
+		}
+		desc->rsp.val = ACC101_DMA_DESC_TYPE;
+		desc->rsp.add_info_0 = 0;
+		desc->rsp.add_info_1 = 0;
+		(*dequeued_descs)++;
+		current_dequeued_descs++;
+		i++;
+	}
+
+	*ref_op = op;
+	(*dequeued_ops)++;
+	return current_dequeued_descs;
+}
+
+/* Dequeue one decode operation from ACC101 device in CB mode */
+static inline int
+dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
+		struct acc101_queue *q, struct rte_bbdev_dec_op **ref_op,
+		uint16_t dequeued_cbs, uint32_t *aq_dequeued)
+{
+	union acc101_dma_desc *desc, atom_desc;
+	union acc101_dma_rsp_desc rsp;
+	struct rte_bbdev_dec_op *op;
+
+	desc = q->ring_addr + ((q->sw_ring_tail + dequeued_cbs)
+			& q->sw_ring_wrap_mask);
+	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+			__ATOMIC_RELAXED);
+
+	/* Check fdone bit */
+	if (!(atom_desc.rsp.val & ACC101_FDONE))
+		return -1;
+
+	rsp.val = atom_desc.rsp.val;
+	rte_bbdev_log_debug("Resp. desc %p: %x\n", desc, rsp.val);
+
+	/* Dequeue */
+	op = desc->req.op_addr;
+
+	/* Clear the status; it will be set based on the response */
+	op->status = 0;
+	op->status |= rsp.input_err << RTE_BBDEV_DATA_ERROR;
+	op->status |= rsp.dma_err << RTE_BBDEV_DRV_ERROR;
+	op->status |= rsp.fcw_err << RTE_BBDEV_DRV_ERROR;
+	if (op->status != 0)
+		q_data->queue_stats.dequeue_err_count++;
+
+	op->status |= rsp.crc_status << RTE_BBDEV_CRC_ERROR;
+	if (op->ldpc_dec.hard_output.length > 0 && !rsp.synd_ok)
+		op->status |= 1 << RTE_BBDEV_SYNDROME_ERROR;
+	op->ldpc_dec.iter_count = (uint8_t) rsp.iter_cnt;
+
+	/* Check if this is the last desc in batch (Atomic Queue) */
+	if (desc->req.last_desc_in_batch) {
+		(*aq_dequeued)++;
+		desc->req.last_desc_in_batch = 0;
+	}
+
+	desc->rsp.val = ACC101_DMA_DESC_TYPE;
+	desc->rsp.add_info_0 = 0;
+	desc->rsp.add_info_1 = 0;
+
+	*ref_op = op;
+
+	/* One CB (op) was successfully dequeued */
+	return 1;
+}
+
+/* Dequeue one decode operation from ACC101 device in TB mode. */
+static inline int
+dequeue_dec_one_op_tb(struct acc101_queue *q, struct rte_bbdev_dec_op **ref_op,
+		uint16_t dequeued_cbs, uint32_t *aq_dequeued)
+{
+	union acc101_dma_desc *desc, *last_desc, atom_desc;
+	union acc101_dma_rsp_desc rsp;
+	struct rte_bbdev_dec_op *op;
+	uint8_t cbs_in_tb = 1, cb_idx = 0;
+
+	desc = q->ring_addr + ((q->sw_ring_tail + dequeued_cbs)
+			& q->sw_ring_wrap_mask);
+	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+			__ATOMIC_RELAXED);
+
+	/* Check fdone bit */
+	if (!(atom_desc.rsp.val & ACC101_FDONE))
+		return -1;
+
+	/* Dequeue */
+	op = desc->req.op_addr;
+
+	/* Get number of CBs in dequeued TB */
+	cbs_in_tb = desc->req.cbs_in_tb;
+	/* Get last CB */
+	last_desc = q->ring_addr + ((q->sw_ring_tail
+			+ dequeued_cbs + cbs_in_tb - 1)
+			& q->sw_ring_wrap_mask);
+	/* Check if the last CB in the TB is ready to dequeue (and thus the
+	 * whole TB is), by checking the SDone bit. If not, return.
+	 */
+	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)last_desc,
+			__ATOMIC_RELAXED);
+	if (!(atom_desc.rsp.val & ACC101_SDONE))
+		return -1;
+
+	/* Clear the status; it will be set based on the response */
+	op->status = 0;
+
+	/* Read the remaining CBs, if any exist */
+	while (cb_idx < cbs_in_tb) {
+		desc = q->ring_addr + ((q->sw_ring_tail + dequeued_cbs)
+				& q->sw_ring_wrap_mask);
+		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+				__ATOMIC_RELAXED);
+		rsp.val = atom_desc.rsp.val;
+		rte_bbdev_log_debug("Resp. desc %p: %x r %d c %d\n",
+				desc, rsp.val,
+				cb_idx, cbs_in_tb);
+
+		op->status |= ((rsp.input_err)
+				? (1 << RTE_BBDEV_DATA_ERROR) : 0);
+		op->status |= ((rsp.dma_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
+		op->status |= ((rsp.fcw_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
+
+		/* Report CRC status only if no other error is set */
+		if (!op->status)
+			op->status |= rsp.crc_status << RTE_BBDEV_CRC_ERROR;
+		op->turbo_dec.iter_count = RTE_MAX((uint8_t) rsp.iter_cnt,
+				op->turbo_dec.iter_count);
+
+		/* Check if this is the last desc in batch (Atomic Queue) */
+		if (desc->req.last_desc_in_batch) {
+			(*aq_dequeued)++;
+			desc->req.last_desc_in_batch = 0;
+		}
+		desc->rsp.val = ACC101_DMA_DESC_TYPE;
+		desc->rsp.add_info_0 = 0;
+		desc->rsp.add_info_1 = 0;
+		dequeued_cbs++;
+		cb_idx++;
+	}
+
+	*ref_op = op;
+
+	return cb_idx;
+}
+
+/* Dequeue LDPC encode operations from ACC101 device. */
+static uint16_t
+acc101_dequeue_ldpc_enc(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	uint32_t avail = acc101_ring_avail_deq(q);
+	uint32_t aq_dequeued = 0;
+	uint16_t i, dequeued_ops = 0, dequeued_descs = 0;
+	int ret;
+	struct rte_bbdev_enc_op *op;
+	if (avail == 0)
+		return 0;
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (unlikely(ops == NULL || q == NULL))
+		return 0;
+#endif
+	op = (q->ring_addr + (q->sw_ring_tail &
+			q->sw_ring_wrap_mask))->req.op_addr;
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (unlikely(op == NULL))
+		return 0;
+#endif
+	int cbm = op->ldpc_enc.code_block_mode;
+
+	for (i = 0; i < num; i++) {
+		if (cbm == RTE_BBDEV_TRANSPORT_BLOCK)
+			ret = dequeue_enc_one_op_tb(q, &ops[dequeued_ops],
+					&dequeued_ops, &aq_dequeued,
+					&dequeued_descs);
+		else
+			ret = dequeue_enc_one_op_cb(q, &ops[dequeued_ops],
+					&dequeued_ops, &aq_dequeued,
+					&dequeued_descs);
+		if (ret < 0)
+			break;
+		if (dequeued_ops >= num)
+			break;
+	}
+
+	q->aq_dequeued += aq_dequeued;
+	q->sw_ring_tail += dequeued_descs;
+
+	/* Update dequeue stats */
+	q_data->queue_stats.dequeued_count += dequeued_ops;
+	return dequeued_ops;
+}
+
+/* Dequeue decode operations from ACC101 device. */
+static uint16_t
+acc101_dequeue_ldpc_dec(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	uint16_t dequeue_num;
+	uint32_t avail = acc101_ring_avail_deq(q);
+	uint32_t aq_dequeued = 0;
+	uint16_t i;
+	uint16_t dequeued_cbs = 0;
+	struct rte_bbdev_dec_op *op;
+	int ret;
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (unlikely(ops == NULL || q == NULL))
+		return 0;
+#endif
+
+	dequeue_num = RTE_MIN(avail, num);
+
+	for (i = 0; i < dequeue_num; ++i) {
+		op = (q->ring_addr + ((q->sw_ring_tail + dequeued_cbs)
+			& q->sw_ring_wrap_mask))->req.op_addr;
+		if (op->ldpc_dec.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
+			ret = dequeue_dec_one_op_tb(q, &ops[i], dequeued_cbs,
+					&aq_dequeued);
+		else
+			ret = dequeue_ldpc_dec_one_op_cb(
+					q_data, q, &ops[i], dequeued_cbs,
+					&aq_dequeued);
+
+		if (ret < 0)
+			break;
+		dequeued_cbs += ret;
+	}
+
+	q->aq_dequeued += aq_dequeued;
+	q->sw_ring_tail += dequeued_cbs;
+
+	/* Update dequeue stats */
+	q_data->queue_stats.dequeued_count += i;
+	return i;
+}
+
 /* Initialization Function */
 static void
 acc101_bbdev_init(struct rte_bbdev *dev, struct rte_pci_driver *drv)
@@ -831,6 +2564,10 @@
 	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
 
 	dev->dev_ops = &acc101_bbdev_ops;
+	dev->enqueue_ldpc_enc_ops = acc101_enqueue_ldpc_enc;
+	dev->enqueue_ldpc_dec_ops = acc101_enqueue_ldpc_dec;
+	dev->dequeue_ldpc_enc_ops = acc101_dequeue_ldpc_enc;
+	dev->dequeue_ldpc_dec_ops = acc101_dequeue_ldpc_dec;
 
 	((struct acc101_device *) dev->data->dev_private)->pf_device =
 			!strcmp(drv->driver.name,
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.h b/drivers/baseband/acc101/rte_acc101_pmd.h
index 65cab8a..1cbbb1f 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.h
+++ b/drivers/baseband/acc101/rte_acc101_pmd.h
@@ -62,6 +62,10 @@
  * 128M x 32kB = 4GB addressable memory
  */
 #define ACC101_HARQ_LAYOUT             (128 * 1024 * 1024)
+/* Assumed offset unit for HARQ in memory */
+#define ACC101_HARQ_OFFSET             (32 * 1024)
+#define ACC101_HARQ_OFFSET_SHIFT       15
+#define ACC101_HARQ_OFFSET_MASK        0x7ffffff
 /* Mask used to calculate an index in an Info Ring array (not a byte offset) */
 #define ACC101_INFO_RING_MASK          (ACC101_INFO_RING_NUM_ENTRIES-1)
 /* Number of Virtual Functions ACC101 supports */
@@ -76,6 +80,8 @@
 #define ACC101_GRP_ID_SHIFT    10 /* Queue Index Hierarchy */
 #define ACC101_VF_ID_SHIFT     4  /* Queue Index Hierarchy */
 #define ACC101_QUEUE_ENABLE    0x80000000  /* Bit to mark Queue as Enabled */
+#define ACC101_FDONE    0x80000000
+#define ACC101_SDONE    0x40000000
 
 #define ACC101_NUM_ACCS       5
 #define ACC101_ACCMAP_0       0
@@ -105,6 +111,26 @@
 #define ACC101_COMPANION_PTRS             8
 
 #define ACC101_FCW_VER         2
+#define ACC101_MUX_5GDL_DESC   6
+#define ACC101_CMP_ENC_SIZE    20
+#define ACC101_CMP_DEC_SIZE    24
+#define ACC101_ENC_OFFSET     (32)
+#define ACC101_DEC_OFFSET     (80)
+#define ACC101_EXT_MEM /* Default option with memory external to CPU */
+#define ACC101_HARQ_OFFSET_THRESHOLD 1024
+#define ACC101_LIMIT_DL_MUX_BITS 534
+
+/* Constants from K0 computation from 3GPP 38.212 Table 5.4.2.1-2 */
+#define ACC101_N_ZC_1 66 /* N = 66 Zc for BG 1 */
+#define ACC101_N_ZC_2 50 /* N = 50 Zc for BG 2 */
+#define ACC101_K_ZC_1 22 /* K = 22 Zc for BG 1 */
+#define ACC101_K_ZC_2 10 /* K = 10 Zc for BG 2 */
+#define ACC101_K0_1_1 17 /* K0 fraction numerator for rv 1 and BG 1 */
+#define ACC101_K0_1_2 13 /* K0 fraction numerator for rv 1 and BG 2 */
+#define ACC101_K0_2_1 33 /* K0 fraction numerator for rv 2 and BG 1 */
+#define ACC101_K0_2_2 25 /* K0 fraction numerator for rv 2 and BG 2 */
+#define ACC101_K0_3_1 56 /* K0 fraction numerator for rv 3 and BG 1 */
+#define ACC101_K0_3_2 43 /* K0 fraction numerator for rv 3 and BG 2 */
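+/* For instance, for BG 1 and rv_index 2 the starting position is
+ * k0 = floor((ACC101_K0_2_1 * Ncb) / (ACC101_N_ZC_1 * Zc)) * Zc,
+ * i.e. floor(33 * Ncb / (66 * Zc)) * Zc.
+ */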
 
 #define ACC101_LONG_WAIT        1000
 
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v1 6/9] baseband/acc101: support HARQ loopback
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
                   ` (4 preceding siblings ...)
  2022-04-04 21:13 ` [PATCH v1 5/9] baseband/acc101: add LDPC processing Nicolas Chautru
@ 2022-04-04 21:13 ` Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 7/9] baseband/acc101: support 4G processing Nicolas Chautru
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

Add a function to perform HARQ loopback on top of the
default 5G UL processing.
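
A minimal usage sketch (illustrative only: dev_id, queue_id and the op
mempool op_mp are assumed to be configured beforehand, and the op's
harq_combined_input/output fields populated):

	struct rte_bbdev_dec_op *op;
	rte_bbdev_dec_op_alloc_bulk(op_mp, &op, 1);
	/* Request loopback through HARQ memory instead of decoding */
	op->ldpc_dec.op_flags |= RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK;
	rte_bbdev_enqueue_ldpc_dec_ops(dev_id, queue_id, &op, 1);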

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 doc/guides/bbdevs/acc101.rst             |   1 +
 drivers/baseband/acc101/rte_acc101_pmd.c | 157 +++++++++++++++++++++++++++++++
 2 files changed, 158 insertions(+)

diff --git a/doc/guides/bbdevs/acc101.rst b/doc/guides/bbdevs/acc101.rst
index ae27bf3..49f6c74 100644
--- a/doc/guides/bbdevs/acc101.rst
+++ b/doc/guides/bbdevs/acc101.rst
@@ -38,6 +38,7 @@ ACC101 5G/4G FEC PMD supports the following BBDEV capabilities:
    - ``RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE`` :  provides an input for HARQ combining
    - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE`` :  HARQ memory input is internal
    - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE`` :  HARQ memory output is internal
+   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK`` :  loopback data to/from HARQ memory
    - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS`` :  HARQ memory includes the fillers bits
    - ``RTE_BBDEV_LDPC_DEC_SCATTER_GATHER`` :  supports scatter-gather for input/output data
    - ``RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION`` :  supports compression of the HARQ input/output
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
index 2861907..2009c1a 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.c
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -784,6 +784,7 @@
 				RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE |
 				RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE |
 #ifdef ACC101_EXT_MEM
+				RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK |
 				RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE |
 				RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE |
 #endif
@@ -1730,6 +1731,157 @@ static inline uint32_t hq_index(uint32_t offset)
 	return return_descs;
 }
 
+static inline int
+harq_loopback(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
+		uint16_t total_enqueued_cbs) {
+	struct acc101_fcw_ld *fcw;
+	union acc101_dma_desc *desc;
+	int next_triplet = 1;
+	struct rte_mbuf *hq_output_head, *hq_output;
+	uint16_t harq_dma_length_in, harq_dma_length_out;
+	uint16_t harq_in_length = op->ldpc_dec.harq_combined_input.length;
+	if (harq_in_length == 0) {
+		rte_bbdev_log(ERR, "Loopback of invalid null size\n");
+		return -EINVAL;
+	}
+
+	int h_comp = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION
+			) ? 1 : 0;
+	if (h_comp == 1) {
+		harq_in_length = harq_in_length * 8 / 6;
+		harq_in_length = RTE_ALIGN(harq_in_length, 64);
+		harq_dma_length_in = harq_in_length * 6 / 8;
+	} else {
+		harq_in_length = RTE_ALIGN(harq_in_length, 64);
+		harq_dma_length_in = harq_in_length;
+	}
+	harq_dma_length_out = harq_dma_length_in;
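+	/* For example (illustrative values): with 6-bit compression a 96-byte
+	 * HARQ input maps to 96 * 8 / 6 = 128 LLRs (already 64-aligned) and is
+	 * moved as 128 * 6 / 8 = 96 bytes over DMA; without compression both
+	 * sizes simply match the 64-aligned length.
+	 */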
+
+	bool ddr_mem_in = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE);
+	union acc101_harq_layout_data *harq_layout = q->d->harq_layout;
+	uint32_t harq_index = hq_index(ddr_mem_in ?
+			op->ldpc_dec.harq_combined_input.offset :
+			op->ldpc_dec.harq_combined_output.offset);
+
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	fcw = &desc->req.fcw_ld;
+	/* Set up the FCW for loopback into DDR */
+	memset(fcw, 0, sizeof(struct acc101_fcw_ld));
+	fcw->FCWversion = ACC101_FCW_VER;
+	fcw->qm = 2;
+	fcw->Zc = 384;
+	if (harq_in_length < 16 * ACC101_N_ZC_1)
+		fcw->Zc = 16;
+	fcw->ncb = fcw->Zc * ACC101_N_ZC_1;
+	fcw->rm_e = 2;
+	fcw->hcin_en = 1;
+	fcw->hcout_en = 1;
+
+	rte_bbdev_log(DEBUG, "Loopback IN %d Index %d offset %d length %d %d\n",
+			ddr_mem_in, harq_index,
+			harq_layout[harq_index].offset, harq_in_length,
+			harq_dma_length_in);
+
+	if (ddr_mem_in && (harq_layout[harq_index].offset > 0)) {
+		fcw->hcin_size0 = harq_layout[harq_index].size0;
+		fcw->hcin_offset = harq_layout[harq_index].offset;
+		fcw->hcin_size1 = harq_in_length - fcw->hcin_offset;
+		harq_dma_length_in = (fcw->hcin_size0 + fcw->hcin_size1);
+		if (h_comp == 1)
+			harq_dma_length_in = harq_dma_length_in * 6 / 8;
+	} else {
+		fcw->hcin_size0 = harq_in_length;
+	}
+	harq_layout[harq_index].val = 0;
+	rte_bbdev_log(DEBUG, "Loopback FCW Config %d %d %d\n",
+			fcw->hcin_size0, fcw->hcin_offset, fcw->hcin_size1);
+	fcw->hcout_size0 = harq_in_length;
+	fcw->hcin_decomp_mode = h_comp;
+	fcw->hcout_comp_mode = h_comp;
+	fcw->gain_i = 1;
+	fcw->gain_h = 1;
+
+	/* Set the prefix of the descriptor. This could be done at polling time. */
+	acc101_header_init(&desc->req);
+
+	/* Null LLR input for Decoder */
+	desc->req.data_ptrs[next_triplet].address =
+			q->lb_in_addr_iova;
+	desc->req.data_ptrs[next_triplet].blen = 2;
+	desc->req.data_ptrs[next_triplet].blkid = ACC101_DMA_BLKID_IN;
+	desc->req.data_ptrs[next_triplet].last = 0;
+	desc->req.data_ptrs[next_triplet].dma_ext = 0;
+	next_triplet++;
+
+	/* HARQ Combine input from either Memory interface */
+	if (!ddr_mem_in) {
+		next_triplet = acc101_dma_fill_blk_type_out(&desc->req,
+				op->ldpc_dec.harq_combined_input.data,
+				op->ldpc_dec.harq_combined_input.offset,
+				harq_dma_length_in,
+				next_triplet,
+				ACC101_DMA_BLKID_IN_HARQ);
+	} else {
+		desc->req.data_ptrs[next_triplet].address =
+				op->ldpc_dec.harq_combined_input.offset;
+		desc->req.data_ptrs[next_triplet].blen =
+				harq_dma_length_in;
+		desc->req.data_ptrs[next_triplet].blkid =
+				ACC101_DMA_BLKID_IN_HARQ;
+		desc->req.data_ptrs[next_triplet].dma_ext = 1;
+		next_triplet++;
+	}
+	desc->req.data_ptrs[next_triplet - 1].last = 1;
+	desc->req.m2dlen = next_triplet;
+
+	/* Dropped decoder hard output */
+	desc->req.data_ptrs[next_triplet].address =
+			q->lb_out_addr_iova;
+	desc->req.data_ptrs[next_triplet].blen = ACC101_BYTES_IN_WORD;
+	desc->req.data_ptrs[next_triplet].blkid = ACC101_DMA_BLKID_OUT_HARD;
+	desc->req.data_ptrs[next_triplet].last = 0;
+	desc->req.data_ptrs[next_triplet].dma_ext = 0;
+	next_triplet++;
+
+	/* HARQ Combine output to either Memory interface */
+	if (check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE
+			)) {
+		desc->req.data_ptrs[next_triplet].address =
+				op->ldpc_dec.harq_combined_output.offset;
+		desc->req.data_ptrs[next_triplet].blen =
+				harq_dma_length_out;
+		desc->req.data_ptrs[next_triplet].blkid =
+				ACC101_DMA_BLKID_OUT_HARQ;
+		desc->req.data_ptrs[next_triplet].dma_ext = 1;
+		next_triplet++;
+	} else {
+		hq_output_head = op->ldpc_dec.harq_combined_output.data;
+		hq_output = op->ldpc_dec.harq_combined_output.data;
+		next_triplet = acc101_dma_fill_blk_type_out(
+				&desc->req,
+				op->ldpc_dec.harq_combined_output.data,
+				op->ldpc_dec.harq_combined_output.offset,
+				harq_dma_length_out,
+				next_triplet,
+				ACC101_DMA_BLKID_OUT_HARQ);
+		/* HARQ output */
+		mbuf_append(hq_output_head, hq_output, harq_dma_length_out);
+		op->ldpc_dec.harq_combined_output.length =
+				harq_dma_length_out;
+	}
+	desc->req.data_ptrs[next_triplet - 1].last = 1;
+	desc->req.d2mlen = next_triplet - desc->req.m2dlen;
+	desc->req.op_addr = op;
+
+	/* One CB (one op) was successfully prepared to enqueue */
+	return 1;
+}
+
 /* Enqueue one decode operation for ACC101 device in CB mode */
 static inline int
 enqueue_ldpc_dec_one_op_cb(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
@@ -1738,6 +1890,11 @@ static inline uint32_t hq_index(uint32_t offset)
 {
 	RTE_SET_USED(q_data);
 	int ret;
+	if (unlikely(check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK))) {
+		ret = harq_loopback(q, op, total_enqueued_cbs);
+		return ret;
+	}
 
 	union acc101_dma_desc *desc;
 	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v1 7/9] baseband/acc101: support 4G processing
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
                   ` (5 preceding siblings ...)
  2022-04-04 21:13 ` [PATCH v1 6/9] baseband/acc101: support HARQ loopback Nicolas Chautru
@ 2022-04-04 21:13 ` Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 8/9] baseband/acc101: support MSI interrupt Nicolas Chautru
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

Add capabilities and functions to support the LTE
encode and decode processing operations.
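
The 4G path is driven through the generic bbdev turbo API. A minimal
sketch of the encode flow (illustrative only: the device, queue and op
mempool op_mp are assumed to be configured for RTE_BBDEV_OP_TURBO_ENC):

	struct rte_bbdev_enc_op *ops[1];
	rte_bbdev_enc_op_alloc_bulk(op_mp, ops, 1);
	/* ... fill ops[0]->turbo_enc parameters and attach mbufs ... */
	rte_bbdev_enqueue_enc_ops(dev_id, queue_id, ops, 1);
	while (rte_bbdev_dequeue_enc_ops(dev_id, queue_id, ops, 1) == 0)
		;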

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 doc/guides/bbdevs/acc101.rst             |   21 +
 doc/guides/bbdevs/features/acc101.ini    |    4 +-
 drivers/baseband/acc101/rte_acc101_pmd.c | 1012 +++++++++++++++++++++++++++++-
 3 files changed, 1019 insertions(+), 18 deletions(-)

diff --git a/doc/guides/bbdevs/acc101.rst b/doc/guides/bbdevs/acc101.rst
index 49f6c74..3daf399 100644
--- a/doc/guides/bbdevs/acc101.rst
+++ b/doc/guides/bbdevs/acc101.rst
@@ -17,6 +17,8 @@ ACC101 5G/4G FEC PMD supports the following features:
 
 - LDPC Encode in the DL (5GNR)
 - LDPC Decode in the UL (5GNR)
+- Turbo Encode in the DL (4G)
+- Turbo Decode in the UL (4G)
 - 16 VFs per PF (physical device)
 - Maximum of 128 queues per VF
 - PCIe Gen-3 x16 Interface
@@ -44,6 +46,25 @@ ACC101 5G/4G FEC PMD supports the following BBDEV capabilities:
    - ``RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION`` :  supports compression of the HARQ input/output
    - ``RTE_BBDEV_LDPC_LLR_COMPRESSION`` :  supports LLR input compression
 
+* For the turbo encode operation:
+   - ``RTE_BBDEV_TURBO_CRC_24B_ATTACH`` :  set to attach CRC24B to CB(s)
+   - ``RTE_BBDEV_TURBO_RATE_MATCH`` :  if set, rate matching is performed rather than bypassed
+   - ``RTE_BBDEV_TURBO_ENC_INTERRUPTS`` :  set for encoder dequeue interrupts
+   - ``RTE_BBDEV_TURBO_RV_INDEX_BYPASS`` :  set to bypass RV index
+   - ``RTE_BBDEV_TURBO_ENC_SCATTER_GATHER`` :  supports scatter-gather for input/output data
+
+* For the turbo decode operation:
+   - ``RTE_BBDEV_TURBO_CRC_TYPE_24B`` :  check CRC24B from CB(s)
+   - ``RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE`` :  perform subblock de-interleave
+   - ``RTE_BBDEV_TURBO_DEC_INTERRUPTS`` :  set for decoder dequeue interrupts
+   - ``RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN`` :  set if negative LLR decoder input is supported
+   - ``RTE_BBDEV_TURBO_POS_LLR_1_BIT_IN`` :  set if positive LLR decoder input is supported
+   - ``RTE_BBDEV_TURBO_DEC_TB_CRC_24B_KEEP`` :  keep CRC24B bits appended while decoding
+   - ``RTE_BBDEV_TURBO_DEC_CRC_24B_DROP`` : option to drop the code block CRC after decoding
+   - ``RTE_BBDEV_TURBO_EARLY_TERMINATION`` :  set early termination feature
+   - ``RTE_BBDEV_TURBO_DEC_SCATTER_GATHER`` :  supports scatter-gather for input/output data
+   - ``RTE_BBDEV_TURBO_HALF_ITERATION_EVEN`` :  set half iteration granularity
+
 Installation
 ------------
 
diff --git a/doc/guides/bbdevs/features/acc101.ini b/doc/guides/bbdevs/features/acc101.ini
index 4a88932..0e2c21a 100644
--- a/doc/guides/bbdevs/features/acc101.ini
+++ b/doc/guides/bbdevs/features/acc101.ini
@@ -4,8 +4,8 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
-Turbo Decoder (4G)     = N
-Turbo Encoder (4G)     = N
+Turbo Decoder (4G)     = Y
+Turbo Encoder (4G)     = Y
 LDPC Decoder (5G)      = Y
 LDPC Encoder (5G)      = Y
 LLR/HARQ Compression   = Y
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
index 2009c1a..2bb98b5 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.c
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -763,6 +763,41 @@
 	struct acc101_device *d = dev->data->dev_private;
 	static const struct rte_bbdev_op_cap bbdev_capabilities[] = {
 		{
+			.type = RTE_BBDEV_OP_TURBO_DEC,
+			.cap.turbo_dec = {
+				.capability_flags =
+					RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE |
+					RTE_BBDEV_TURBO_CRC_TYPE_24B |
+					RTE_BBDEV_TURBO_HALF_ITERATION_EVEN |
+					RTE_BBDEV_TURBO_EARLY_TERMINATION |
+					RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN |
+					RTE_BBDEV_TURBO_DEC_TB_CRC_24B_KEEP |
+					RTE_BBDEV_TURBO_DEC_CRC_24B_DROP |
+					RTE_BBDEV_TURBO_DEC_SCATTER_GATHER,
+				.max_llr_modulus = INT8_MAX,
+				.num_buffers_src =
+						RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+				.num_buffers_hard_out =
+						RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+				.num_buffers_soft_out =
+						RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+			}
+		},
+		{
+			.type = RTE_BBDEV_OP_TURBO_ENC,
+			.cap.turbo_enc = {
+				.capability_flags =
+					RTE_BBDEV_TURBO_CRC_24B_ATTACH |
+					RTE_BBDEV_TURBO_RV_INDEX_BYPASS |
+					RTE_BBDEV_TURBO_RATE_MATCH |
+					RTE_BBDEV_TURBO_ENC_SCATTER_GATHER,
+				.num_buffers_src =
+						RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+				.num_buffers_dst =
+						RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+			}
+		},
+		{
 			.type   = RTE_BBDEV_OP_LDPC_ENC,
 			.cap.ldpc_enc = {
 				.capability_flags =
@@ -888,6 +923,58 @@
 	return tail;
 }
 
+/* Fill in a frame control word for turbo encoding. */
+static inline void
+acc101_fcw_te_fill(const struct rte_bbdev_enc_op *op, struct acc101_fcw_te *fcw)
+{
+	fcw->code_block_mode = op->turbo_enc.code_block_mode;
+	if (fcw->code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK) {
+		fcw->k_neg = op->turbo_enc.tb_params.k_neg;
+		fcw->k_pos = op->turbo_enc.tb_params.k_pos;
+		fcw->c_neg = op->turbo_enc.tb_params.c_neg;
+		fcw->c = op->turbo_enc.tb_params.c;
+		fcw->ncb_neg = op->turbo_enc.tb_params.ncb_neg;
+		fcw->ncb_pos = op->turbo_enc.tb_params.ncb_pos;
+
+		if (check_bit(op->turbo_enc.op_flags,
+				RTE_BBDEV_TURBO_RATE_MATCH)) {
+			fcw->bypass_rm = 0;
+			fcw->cab = op->turbo_enc.tb_params.cab;
+			fcw->ea = op->turbo_enc.tb_params.ea;
+			fcw->eb = op->turbo_enc.tb_params.eb;
+		} else {
+			/* E is set to the encoding output size when RM is
+			 * bypassed.
+			 */
+			fcw->bypass_rm = 1;
+			fcw->cab = fcw->c_neg;
+			fcw->ea = 3 * fcw->k_neg + 12;
+			fcw->eb = 3 * fcw->k_pos + 12;
+		}
+	} else { /* For CB mode */
+		fcw->k_pos = op->turbo_enc.cb_params.k;
+		fcw->ncb_pos = op->turbo_enc.cb_params.ncb;
+
+		if (check_bit(op->turbo_enc.op_flags,
+				RTE_BBDEV_TURBO_RATE_MATCH)) {
+			fcw->bypass_rm = 0;
+			fcw->eb = op->turbo_enc.cb_params.e;
+		} else {
+			/* E is set to the encoding output size when RM is
+			 * bypassed.
+			 */
+			fcw->bypass_rm = 1;
+			fcw->eb = 3 * fcw->k_pos + 12;
+		}
+	}
+
+	fcw->bypass_rv_idx1 = check_bit(op->turbo_enc.op_flags,
+			RTE_BBDEV_TURBO_RV_INDEX_BYPASS);
+	fcw->code_block_crc = check_bit(op->turbo_enc.op_flags,
+			RTE_BBDEV_TURBO_CRC_24B_ATTACH);
+	fcw->rv_idx1 = op->turbo_enc.rv_index;
+}
+
 /* Compute value of k0.
  * Based on 3GPP 38.212 Table 5.4.2.1-2
  * Starting position of different redundancy versions, k0
@@ -940,6 +1027,25 @@
 	fcw->mcb_count = num_cb;
 }
 
+/* Fill in a frame control word for turbo decoding. */
+static inline void
+acc101_fcw_td_fill(const struct rte_bbdev_dec_op *op, struct acc101_fcw_td *fcw)
+{
+	/* Note: early termination is always enabled for 4G UL */
+	fcw->fcw_ver = 1;
+	if (op->turbo_dec.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
+		fcw->k_pos = op->turbo_dec.tb_params.k_pos;
+	else
+		fcw->k_pos = op->turbo_dec.cb_params.k;
+	fcw->turbo_crc_type = check_bit(op->turbo_dec.op_flags,
+			RTE_BBDEV_TURBO_CRC_TYPE_24B);
+	fcw->bypass_sb_deint = 0;
+	fcw->raw_decoder_input_on = 0;
+	fcw->max_iter = op->turbo_dec.iter_max;
+	fcw->half_iter_on = !check_bit(op->turbo_dec.op_flags,
+			RTE_BBDEV_TURBO_HALF_ITERATION_EVEN);
+}
+
 /* Convert offset to harq index for harq_layout structure */
 static inline uint32_t hq_index(uint32_t offset)
 {
@@ -1195,6 +1301,89 @@ static inline uint32_t hq_index(uint32_t offset)
 #endif
 
 static inline int
+acc101_dma_desc_te_fill(struct rte_bbdev_enc_op *op,
+		struct acc101_dma_req_desc *desc, struct rte_mbuf **input,
+		struct rte_mbuf *output, uint32_t *in_offset,
+		uint32_t *out_offset, uint32_t *out_length,
+		uint32_t *mbuf_total_left, uint32_t *seg_total_left, uint8_t r)
+{
+	int next_triplet = 1; /* FCW already done */
+	uint32_t e, ea, eb, length;
+	uint16_t k, k_neg, k_pos;
+	uint8_t cab, c_neg;
+
+	desc->word0 = ACC101_DMA_DESC_TYPE;
+	desc->word1 = 0; /**< Timestamp could be disabled */
+	desc->word2 = 0;
+	desc->word3 = 0;
+	desc->numCBs = 1;
+
+	if (op->turbo_enc.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK) {
+		ea = op->turbo_enc.tb_params.ea;
+		eb = op->turbo_enc.tb_params.eb;
+		cab = op->turbo_enc.tb_params.cab;
+		k_neg = op->turbo_enc.tb_params.k_neg;
+		k_pos = op->turbo_enc.tb_params.k_pos;
+		c_neg = op->turbo_enc.tb_params.c_neg;
+		e = (r < cab) ? ea : eb;
+		k = (r < c_neg) ? k_neg : k_pos;
+	} else {
+		e = op->turbo_enc.cb_params.e;
+		k = op->turbo_enc.cb_params.k;
+	}
+
+	if (check_bit(op->turbo_enc.op_flags, RTE_BBDEV_TURBO_CRC_24B_ATTACH))
+		length = (k - 24) >> 3;
+	else
+		length = k >> 3;
+
+	if (unlikely((*mbuf_total_left == 0) || (*mbuf_total_left < length))) {
+		rte_bbdev_log(ERR,
+				"Mismatch between mbuf length and included CB sizes: mbuf len %u, cb len %u",
+				*mbuf_total_left, length);
+		return -1;
+	}
+
+	next_triplet = acc101_dma_fill_blk_type_in(desc, input, in_offset,
+			length, seg_total_left, next_triplet,
+			check_bit(op->turbo_enc.op_flags,
+			RTE_BBDEV_TURBO_ENC_SCATTER_GATHER));
+	if (unlikely(next_triplet < 0)) {
+		rte_bbdev_log(ERR,
+				"Mismatch between data to process and mbuf data length in bbdev_op: %p",
+				op);
+		return -1;
+	}
+	desc->data_ptrs[next_triplet - 1].last = 1;
+	desc->m2dlen = next_triplet;
+	*mbuf_total_left -= length;
+
+	/* Set output length */
+	if (check_bit(op->turbo_enc.op_flags, RTE_BBDEV_TURBO_RATE_MATCH))
+		/* Integer round up division by 8 */
+		*out_length = (e + 7) >> 3;
+	else
+		*out_length = (k >> 3) * 3 + 2;
+
+	next_triplet = acc101_dma_fill_blk_type_out(desc, output, *out_offset,
+			*out_length, next_triplet, ACC101_DMA_BLKID_OUT_ENC);
+	if (unlikely(next_triplet < 0)) {
+		rte_bbdev_log(ERR,
+				"Mismatch between data to process and mbuf data length in bbdev_op: %p",
+				op);
+		return -1;
+	}
+	op->turbo_enc.output.length += *out_length;
+	*out_offset += *out_length;
+	desc->data_ptrs[next_triplet - 1].last = 1;
+	desc->d2mlen = next_triplet - desc->m2dlen;
+
+	desc->op_addr = op;
+
+	return 0;
+}
+
+static inline int
 acc101_dma_desc_le_fill(struct rte_bbdev_enc_op *op,
 		struct acc101_dma_req_desc *desc, struct rte_mbuf **input,
 		struct rte_mbuf *output, uint32_t *in_offset,
@@ -1253,6 +1442,128 @@ static inline uint32_t hq_index(uint32_t offset)
 }
 
 static inline int
+acc101_dma_desc_td_fill(struct rte_bbdev_dec_op *op,
+		struct acc101_dma_req_desc *desc, struct rte_mbuf **input,
+		struct rte_mbuf *h_output, struct rte_mbuf *s_output,
+		uint32_t *in_offset, uint32_t *h_out_offset,
+		uint32_t *s_out_offset, uint32_t *h_out_length,
+		uint32_t *s_out_length, uint32_t *mbuf_total_left,
+		uint32_t *seg_total_left, uint8_t r)
+{
+	int next_triplet = 1; /* FCW already done */
+	uint16_t k;
+	uint16_t crc24_overlap = 0;
+	uint32_t e, kw;
+
+	desc->word0 = ACC101_DMA_DESC_TYPE;
+	desc->word1 = 0; /**< Timestamp could be disabled */
+	desc->word2 = 0;
+	desc->word3 = 0;
+	desc->numCBs = 1;
+
+	if (op->turbo_dec.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK) {
+		k = (r < op->turbo_dec.tb_params.c_neg)
+			? op->turbo_dec.tb_params.k_neg
+			: op->turbo_dec.tb_params.k_pos;
+		e = (r < op->turbo_dec.tb_params.cab)
+			? op->turbo_dec.tb_params.ea
+			: op->turbo_dec.tb_params.eb;
+	} else {
+		k = op->turbo_dec.cb_params.k;
+		e = op->turbo_dec.cb_params.e;
+	}
+
+	if ((op->turbo_dec.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
+			&& !check_bit(op->turbo_dec.op_flags,
+			RTE_BBDEV_TURBO_DEC_TB_CRC_24B_KEEP))
+		crc24_overlap = 24;
+	if ((op->turbo_dec.code_block_mode == RTE_BBDEV_CODE_BLOCK)
+			&& check_bit(op->turbo_dec.op_flags,
+			RTE_BBDEV_TURBO_DEC_CRC_24B_DROP))
+		crc24_overlap = 24;
+
+	/* Calculate the circular buffer size.
+	 * According to 3GPP 36.212 section 5.1.4.2:
+	 *   Kw = 3 * Kpi,
+	 * where:
+	 *   Kpi = nCol * nRow
+	 * where nCol is 32 and nRow can be calculated from:
+	 *   D <= nCol * nRow
+	 * where D is the size of each output from turbo encoder block (k + 4).
+	 */
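+	/* For example, k = 6144 gives kw = RTE_ALIGN_CEIL(6148, 32) * 3
+	 * = 18528 (illustrative value).
+	 */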
+	kw = RTE_ALIGN_CEIL(k + 4, 32) * 3;
+
+	if (unlikely((*mbuf_total_left == 0) || (*mbuf_total_left < kw))) {
+		rte_bbdev_log(ERR,
+				"Mismatch between mbuf length and included CB sizes: mbuf len %u, cb len %u",
+				*mbuf_total_left, kw);
+		return -1;
+	}
+
+	next_triplet = acc101_dma_fill_blk_type_in(desc, input, in_offset, kw,
+			seg_total_left, next_triplet,
+			check_bit(op->turbo_dec.op_flags,
+			RTE_BBDEV_TURBO_DEC_SCATTER_GATHER));
+	if (unlikely(next_triplet < 0)) {
+		rte_bbdev_log(ERR,
+				"Mismatch between data to process and mbuf data length in bbdev_op: %p",
+				op);
+		return -1;
+	}
+	desc->data_ptrs[next_triplet - 1].last = 1;
+	desc->m2dlen = next_triplet;
+	*mbuf_total_left -= kw;
+
+	next_triplet = acc101_dma_fill_blk_type_out(
+			desc, h_output, *h_out_offset,
+			(k - crc24_overlap) >> 3, next_triplet,
+			ACC101_DMA_BLKID_OUT_HARD);
+	if (unlikely(next_triplet < 0)) {
+		rte_bbdev_log(ERR,
+				"Mismatch between data to process and mbuf data length in bbdev_op: %p",
+				op);
+		return -1;
+	}
+
+	*h_out_length = ((k - crc24_overlap) >> 3);
+	op->turbo_dec.hard_output.length += *h_out_length;
+	*h_out_offset += *h_out_length;
+
+	/* Soft output */
+	if (check_bit(op->turbo_dec.op_flags, RTE_BBDEV_TURBO_SOFT_OUTPUT)) {
+		if (op->turbo_dec.soft_output.data == NULL) {
+			rte_bbdev_log(ERR, "Soft output is not defined");
+			return -1;
+		}
+		if (check_bit(op->turbo_dec.op_flags,
+				RTE_BBDEV_TURBO_EQUALIZER))
+			*s_out_length = e;
+		else
+			*s_out_length = (k * 3) + 12;
+
+		next_triplet = acc101_dma_fill_blk_type_out(desc, s_output,
+				*s_out_offset, *s_out_length, next_triplet,
+				ACC101_DMA_BLKID_OUT_SOFT);
+		if (unlikely(next_triplet < 0)) {
+			rte_bbdev_log(ERR,
+					"Mismatch between data to process and mbuf data length in bbdev_op: %p",
+					op);
+			return -1;
+		}
+
+		op->turbo_dec.soft_output.length += *s_out_length;
+		*s_out_offset += *s_out_length;
+	}
+
+	desc->data_ptrs[next_triplet - 1].last = 1;
+	desc->d2mlen = next_triplet - desc->m2dlen;
+
+	desc->op_addr = op;
+
+	return 0;
+}
+
+static inline int
 acc101_dma_desc_ld_fill(struct rte_bbdev_dec_op *op,
 		struct acc101_dma_req_desc *desc,
 		struct rte_mbuf **input, struct rte_mbuf *h_output,
@@ -1505,6 +1816,52 @@ static inline uint32_t hq_index(uint32_t offset)
 
 }
 
+/* Enqueue one encode operation for ACC101 device in CB mode */
+static inline int
+enqueue_enc_one_op_cb(struct acc101_queue *q, struct rte_bbdev_enc_op *op,
+		uint16_t total_enqueued_cbs)
+{
+	union acc101_dma_desc *desc = NULL;
+	int ret;
+	uint32_t in_offset, out_offset, out_length, mbuf_total_left,
+		seg_total_left;
+	struct rte_mbuf *input, *output_head, *output;
+
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	acc101_fcw_te_fill(op, &desc->req.fcw_te);
+
+	input = op->turbo_enc.input.data;
+	output_head = output = op->turbo_enc.output.data;
+	in_offset = op->turbo_enc.input.offset;
+	out_offset = op->turbo_enc.output.offset;
+	out_length = 0;
+	mbuf_total_left = op->turbo_enc.input.length;
+	seg_total_left = rte_pktmbuf_data_len(op->turbo_enc.input.data)
+			- in_offset;
+
+	ret = acc101_dma_desc_te_fill(op, &desc->req, &input, output,
+			&in_offset, &out_offset, &out_length, &mbuf_total_left,
+			&seg_total_left, 0);
+
+	if (unlikely(ret < 0))
+		return ret;
+
+	mbuf_append(output_head, output, out_length);
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	rte_memdump(stderr, "FCW", &desc->req.fcw_te,
+			sizeof(desc->req.fcw_te) - 8);
+	rte_memdump(stderr, "Req Desc.", desc, sizeof(*desc));
+	if (check_mbuf_total_left(mbuf_total_left) != 0)
+		return -EINVAL;
+#endif
+	/* One CB (one op) was successfully prepared to enqueue */
+	return 1;
+}
+
 /* Enqueue one encode operations for ACC101 device in CB mode
  * multiplexed on the same descriptor
  */
@@ -1667,6 +2024,88 @@ static inline uint32_t hq_index(uint32_t offset)
 	return 1;
 }
 
+/* Enqueue one encode operation for ACC101 device in TB mode. */
+static inline int
+enqueue_enc_one_op_tb(struct acc101_queue *q, struct rte_bbdev_enc_op *op,
+		uint16_t total_enqueued_cbs, uint8_t cbs_in_tb)
+{
+	union acc101_dma_desc *desc = NULL;
+	int ret;
+	uint8_t r, c;
+	uint32_t in_offset, out_offset, out_length, mbuf_total_left,
+		seg_total_left;
+	struct rte_mbuf *input, *output_head, *output;
+	uint16_t current_enqueued_cbs = 0;
+
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	uint64_t fcw_offset = (desc_idx << 8) + ACC101_DESC_FCW_OFFSET;
+	acc101_fcw_te_fill(op, &desc->req.fcw_te);
+
+	input = op->turbo_enc.input.data;
+	output_head = output = op->turbo_enc.output.data;
+	in_offset = op->turbo_enc.input.offset;
+	out_offset = op->turbo_enc.output.offset;
+	out_length = 0;
+	mbuf_total_left = op->turbo_enc.input.length;
+
+	c = op->turbo_enc.tb_params.c;
+	r = op->turbo_enc.tb_params.r;
+
+	while (mbuf_total_left > 0 && r < c) {
+		if (unlikely(input == NULL)) {
+			rte_bbdev_log(ERR, "Not enough input segments");
+			return -EINVAL;
+		}
+		seg_total_left = rte_pktmbuf_data_len(input) - in_offset;
+		/* Set up DMA descriptor */
+		desc = q->ring_addr + ((q->sw_ring_head + total_enqueued_cbs)
+				& q->sw_ring_wrap_mask);
+		desc->req.data_ptrs[0].address = q->ring_addr_iova + fcw_offset;
+		desc->req.data_ptrs[0].blen = ACC101_FCW_TE_BLEN;
+
+		ret = acc101_dma_desc_te_fill(op, &desc->req, &input, output,
+				&in_offset, &out_offset, &out_length,
+				&mbuf_total_left, &seg_total_left, r);
+		if (unlikely(ret < 0))
+			return ret;
+		mbuf_append(output_head, output, out_length);
+
+		/* Set total number of CBs in TB */
+		desc->req.cbs_in_tb = cbs_in_tb;
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+		rte_memdump(stderr, "FCW", &desc->req.fcw_te,
+				sizeof(desc->req.fcw_te) - 8);
+		rte_memdump(stderr, "Req Desc.", desc, sizeof(*desc));
+#endif
+
+		if (seg_total_left == 0) {
+			/* Go to the next mbuf */
+			input = input->next;
+			in_offset = 0;
+			output = output->next;
+			out_offset = 0;
+		}
+
+		total_enqueued_cbs++;
+		current_enqueued_cbs++;
+		r++;
+	}
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (check_mbuf_total_left(mbuf_total_left) != 0)
+		return -EINVAL;
+#endif
+
+	/* Set SDone on last CB descriptor for TB mode. */
+	desc->req.sdone_enable = 1;
+	desc->req.irq_enable = q->irq_enable;
+
+	return current_enqueued_cbs;
+}
+
 /* Enqueue one encode operations for ACC101 device in TB mode.
  * returns the number of descs used
  */
@@ -1731,24 +2170,89 @@ static inline uint32_t hq_index(uint32_t offset)
 	return return_descs;
 }
 
+/* Enqueue one decode operation for ACC101 device in CB mode */
 static inline int
-harq_loopback(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
-		uint16_t total_enqueued_cbs) {
-	struct acc101_fcw_ld *fcw;
-	union acc101_dma_desc *desc;
-	int next_triplet = 1;
-	struct rte_mbuf *hq_output_head, *hq_output;
-	uint16_t harq_dma_length_in, harq_dma_length_out;
-	uint16_t harq_in_length = op->ldpc_dec.harq_combined_input.length;
-	if (harq_in_length == 0) {
-		rte_bbdev_log(ERR, "Loopback of invalid null size\n");
-		return -EINVAL;
-	}
+enqueue_dec_one_op_cb(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
+		uint16_t total_enqueued_cbs)
+{
+	union acc101_dma_desc *desc = NULL;
+	int ret;
+	uint32_t in_offset, h_out_offset, s_out_offset, s_out_length,
+		h_out_length, mbuf_total_left, seg_total_left;
+	struct rte_mbuf *input, *h_output_head, *h_output,
+		*s_output_head, *s_output;
 
-	int h_comp = check_bit(op->ldpc_dec.op_flags,
-			RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION
-			) ? 1 : 0;
-	if (h_comp == 1) {
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	acc101_fcw_td_fill(op, &desc->req.fcw_td);
+
+	input = op->turbo_dec.input.data;
+	h_output_head = h_output = op->turbo_dec.hard_output.data;
+	s_output_head = s_output = op->turbo_dec.soft_output.data;
+	in_offset = op->turbo_dec.input.offset;
+	h_out_offset = op->turbo_dec.hard_output.offset;
+	s_out_offset = op->turbo_dec.soft_output.offset;
+	h_out_length = s_out_length = 0;
+	mbuf_total_left = op->turbo_dec.input.length;
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (unlikely(input == NULL)) {
+		rte_bbdev_log(ERR, "Invalid mbuf pointer");
+		return -EFAULT;
+	}
+#endif
+	seg_total_left = rte_pktmbuf_data_len(input) - in_offset;
+
+	/* Set up DMA descriptor */
+	desc = q->ring_addr + ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+
+	ret = acc101_dma_desc_td_fill(op, &desc->req, &input, h_output,
+			s_output, &in_offset, &h_out_offset, &s_out_offset,
+			&h_out_length, &s_out_length, &mbuf_total_left,
+			&seg_total_left, 0);
+
+	if (unlikely(ret < 0))
+		return ret;
+
+	/* Hard output */
+	mbuf_append(h_output_head, h_output, h_out_length);
+
+	/* Soft output */
+	if (check_bit(op->turbo_dec.op_flags, RTE_BBDEV_TURBO_SOFT_OUTPUT))
+		mbuf_append(s_output_head, s_output, s_out_length);
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	rte_memdump(stderr, "FCW", &desc->req.fcw_td,
+			sizeof(desc->req.fcw_td) - 8);
+	rte_memdump(stderr, "Req Desc.", desc, sizeof(*desc));
+	if (check_mbuf_total_left(mbuf_total_left) != 0)
+		return -EINVAL;
+#endif
+
+	/* One CB (one op) was successfully prepared to enqueue */
+	return 1;
+}
+
+static inline int
+harq_loopback(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
+		uint16_t total_enqueued_cbs) {
+	struct acc101_fcw_ld *fcw;
+	union acc101_dma_desc *desc;
+	int next_triplet = 1;
+	struct rte_mbuf *hq_output_head, *hq_output;
+	uint16_t harq_dma_length_in, harq_dma_length_out;
+	uint16_t harq_in_length = op->ldpc_dec.harq_combined_input.length;
+	if (harq_in_length == 0) {
+		rte_bbdev_log(ERR, "Loopback of invalid null size\n");
+		return -EINVAL;
+	}
+
+	int h_comp = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION
+			) ? 1 : 0;
+	if (h_comp == 1) {
 		harq_in_length = harq_in_length * 8 / 6;
 		harq_in_length = RTE_ALIGN(harq_in_length, 64);
 		harq_dma_length_in = harq_in_length * 6 / 8;
@@ -2069,6 +2573,100 @@ static inline uint32_t hq_index(uint32_t offset)
 	return current_enqueued_cbs;
 }
 
+/* Enqueue one decode operation for ACC101 device in TB mode */
+static inline int
+enqueue_dec_one_op_tb(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
+		uint16_t total_enqueued_cbs, uint8_t cbs_in_tb)
+{
+	union acc101_dma_desc *desc = NULL;
+	int ret;
+	uint8_t r, c;
+	uint32_t in_offset, h_out_offset, s_out_offset, s_out_length,
+		h_out_length, mbuf_total_left, seg_total_left;
+	struct rte_mbuf *input, *h_output_head, *h_output,
+		*s_output_head, *s_output;
+	uint16_t current_enqueued_cbs = 0;
+
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	uint64_t fcw_offset = (desc_idx << 8) + ACC101_DESC_FCW_OFFSET;
+	acc101_fcw_td_fill(op, &desc->req.fcw_td);
+
+	input = op->turbo_dec.input.data;
+	h_output_head = h_output = op->turbo_dec.hard_output.data;
+	s_output_head = s_output = op->turbo_dec.soft_output.data;
+	in_offset = op->turbo_dec.input.offset;
+	h_out_offset = op->turbo_dec.hard_output.offset;
+	s_out_offset = op->turbo_dec.soft_output.offset;
+	h_out_length = s_out_length = 0;
+	mbuf_total_left = op->turbo_dec.input.length;
+	c = op->turbo_dec.tb_params.c;
+	r = op->turbo_dec.tb_params.r;
+
+	while (mbuf_total_left > 0 && r < c) {
+
+		seg_total_left = rte_pktmbuf_data_len(input) - in_offset;
+
+		/* Set up DMA descriptor */
+		desc = q->ring_addr + ((q->sw_ring_head + total_enqueued_cbs)
+				& q->sw_ring_wrap_mask);
+		desc->req.data_ptrs[0].address = q->ring_addr_iova + fcw_offset;
+		desc->req.data_ptrs[0].blen = ACC101_FCW_TD_BLEN;
+		ret = acc101_dma_desc_td_fill(op, &desc->req, &input,
+				h_output, s_output, &in_offset, &h_out_offset,
+				&s_out_offset, &h_out_length, &s_out_length,
+				&mbuf_total_left, &seg_total_left, r);
+
+		if (unlikely(ret < 0))
+			return ret;
+
+		/* Hard output */
+		mbuf_append(h_output_head, h_output, h_out_length);
+
+		/* Soft output */
+		if (check_bit(op->turbo_dec.op_flags,
+				RTE_BBDEV_TURBO_SOFT_OUTPUT))
+			mbuf_append(s_output_head, s_output, s_out_length);
+
+		/* Set total number of CBs in TB */
+		desc->req.cbs_in_tb = cbs_in_tb;
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+		rte_memdump(stderr, "FCW", &desc->req.fcw_td,
+				sizeof(desc->req.fcw_td) - 8);
+		rte_memdump(stderr, "Req Desc.", desc, sizeof(*desc));
+#endif
+
+		if (seg_total_left == 0) {
+			/* Go to the next mbuf */
+			input = input->next;
+			in_offset = 0;
+			h_output = h_output->next;
+			h_out_offset = 0;
+
+			if (check_bit(op->turbo_dec.op_flags,
+					RTE_BBDEV_TURBO_SOFT_OUTPUT)) {
+				s_output = s_output->next;
+				s_out_offset = 0;
+			}
+		}
+
+		total_enqueued_cbs++;
+		current_enqueued_cbs++;
+		r++;
+	}
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (check_mbuf_total_left(mbuf_total_left) != 0)
+		return -EINVAL;
+#endif
+	/* Set SDone on last CB descriptor for TB mode */
+	desc->req.sdone_enable = 1;
+	desc->req.irq_enable = q->irq_enable;
+
+	return current_enqueued_cbs;
+}
+
 /* Calculates number of CBs in processed encoder TB based on 'r' and input
  * length.
  */
@@ -2095,6 +2693,63 @@ static inline uint32_t hq_index(uint32_t offset)
 	return cbs_in_tb;
 }
 
+/* Calculates number of CBs in processed encoder TB based on 'r' and input
+ * length.
+ */
+static inline uint8_t
+get_num_cbs_in_tb_enc(struct rte_bbdev_op_turbo_enc *turbo_enc)
+{
+	uint8_t c, c_neg, r, crc24_bits = 0;
+	uint16_t k, k_neg, k_pos;
+	uint8_t cbs_in_tb = 0;
+	int32_t length;
+
+	length = turbo_enc->input.length;
+	r = turbo_enc->tb_params.r;
+	c = turbo_enc->tb_params.c;
+	c_neg = turbo_enc->tb_params.c_neg;
+	k_neg = turbo_enc->tb_params.k_neg;
+	k_pos = turbo_enc->tb_params.k_pos;
+	crc24_bits = 0;
+	if (check_bit(turbo_enc->op_flags, RTE_BBDEV_TURBO_CRC_24B_ATTACH))
+		crc24_bits = 24;
+	while (length > 0 && r < c) {
+		k = (r < c_neg) ? k_neg : k_pos;
+		length -= (k - crc24_bits) >> 3;
+		r++;
+		cbs_in_tb++;
+	}
+
+	return cbs_in_tb;
+}
+
+/* Calculates number of CBs in processed decoder TB based on 'r' and input
+ * length.
+ */
+static inline uint16_t
+get_num_cbs_in_tb_dec(struct rte_bbdev_op_turbo_dec *turbo_dec)
+{
+	uint8_t c, c_neg, r = 0;
+	uint16_t kw, k, k_neg, k_pos, cbs_in_tb = 0;
+	int32_t length;
+
+	length = turbo_dec->input.length;
+	r = turbo_dec->tb_params.r;
+	c = turbo_dec->tb_params.c;
+	c_neg = turbo_dec->tb_params.c_neg;
+	k_neg = turbo_dec->tb_params.k_neg;
+	k_pos = turbo_dec->tb_params.k_pos;
+	while (length > 0 && r < c) {
+		k = (r < c_neg) ? k_neg : k_pos;
+		kw = RTE_ALIGN_CEIL(k + 4, 32) * 3;
+		length -= kw;
+		r++;
+		cbs_in_tb++;
+	}
+
+	return cbs_in_tb;
+}
+
 /* Calculates number of CBs in processed decoder TB based on 'r' and input
  * length.
  */
@@ -2128,6 +2783,45 @@ static inline uint32_t hq_index(uint32_t offset)
 	return (q->sw_ring_depth + q->sw_ring_head - q->sw_ring_tail) % q->sw_ring_depth;
 }
 
+/* Enqueue encode operations for ACC101 device in CB mode. */
+static uint16_t
+acc101_enqueue_enc_cb(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	int32_t avail = acc101_ring_avail_enq(q);
+	uint16_t i;
+	union acc101_dma_desc *desc;
+	int ret;
+
+	for (i = 0; i < num; ++i) {
+		/* Check if there is available space for further processing */
+		if (unlikely(avail - 1 < 0))
+			break;
+		avail -= 1;
+
+		ret = enqueue_enc_one_op_cb(q, ops[i], i);
+		if (ret < 0)
+			break;
+	}
+
+	if (unlikely(i == 0))
+		return 0; /* Nothing to enqueue */
+
+	/* Set SDone in last CB in enqueued ops for CB mode */
+	desc = q->ring_addr + ((q->sw_ring_head + i - 1)
+			& q->sw_ring_wrap_mask);
+	desc->req.sdone_enable = 1;
+	desc->req.irq_enable = q->irq_enable;
+
+	acc101_dma_enqueue(q, i, &q_data->queue_stats);
+
+	/* Update stats */
+	q_data->queue_stats.enqueued_count += i;
+	q_data->queue_stats.enqueue_err_count += num - i;
+	return i;
+}
+
 /* Check we can mux encode operations with common FCW */
 static inline int16_t
 check_mux(struct rte_bbdev_enc_op **ops, uint16_t num) {
@@ -2202,6 +2896,41 @@ static inline uint32_t hq_index(uint32_t offset)
 	return i;
 }
 
+/* Enqueue encode operations for ACC101 device in TB mode. */
+static uint16_t
+acc101_enqueue_enc_tb(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	int32_t avail = acc101_ring_avail_enq(q);
+	uint16_t i, enqueued_cbs = 0;
+	uint8_t cbs_in_tb;
+	int ret;
+
+	for (i = 0; i < num; ++i) {
+		cbs_in_tb = get_num_cbs_in_tb_enc(&ops[i]->turbo_enc);
+		/* Check if there is available space for further processing */
+		if (unlikely(avail - cbs_in_tb < 0))
+			break;
+		avail -= cbs_in_tb;
+
+		ret = enqueue_enc_one_op_tb(q, ops[i], enqueued_cbs, cbs_in_tb);
+		if (ret < 0)
+			break;
+		enqueued_cbs += ret;
+	}
+	if (unlikely(enqueued_cbs == 0))
+		return 0; /* Nothing to enqueue */
+
+	acc101_dma_enqueue(q, enqueued_cbs, &q_data->queue_stats);
+
+	/* Update stats */
+	q_data->queue_stats.enqueued_count += i;
+	q_data->queue_stats.enqueue_err_count += num - i;
+
+	return i;
+}
+
 /* Enqueue LDPC encode operations for ACC101 device in TB mode. */
 static uint16_t
 acc101_enqueue_ldpc_enc_tb(struct rte_bbdev_queue_data *q_data,
@@ -2253,6 +2982,22 @@ static inline uint32_t hq_index(uint32_t offset)
 
 /* Enqueue encode operations for ACC101 device. */
 static uint16_t
+acc101_enqueue_enc(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t num)
+{
+	uint16_t ret;
+	int32_t aq_avail = acc101_aq_avail(q_data, num);
+	if (unlikely((aq_avail <= 0) || (num == 0)))
+		return 0;
+	if (ops[0]->turbo_enc.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
+		ret = acc101_enqueue_enc_tb(q_data, ops, num);
+	else
+		ret = acc101_enqueue_enc_cb(q_data, ops, num);
+	return ret;
+}
+
+/* Enqueue LDPC encode operations for ACC101 device. */
+static uint16_t
 acc101_enqueue_ldpc_enc(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_enc_op **ops, uint16_t num)
 {
@@ -2267,6 +3012,47 @@ static inline uint32_t hq_index(uint32_t offset)
 	return ret;
 }
 
+
+/* Enqueue decode operations for ACC101 device in CB mode */
+static uint16_t
+acc101_enqueue_dec_cb(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	int32_t avail = acc101_ring_avail_enq(q);
+	uint16_t i;
+	union acc101_dma_desc *desc;
+	int ret;
+
+	for (i = 0; i < num; ++i) {
+		/* Check if there is available space for further processing */
+		if (unlikely(avail - 1 < 0))
+			break;
+		avail -= 1;
+
+		ret = enqueue_dec_one_op_cb(q, ops[i], i);
+		if (ret < 0)
+			break;
+	}
+
+	if (unlikely(i == 0))
+		return 0; /* Nothing to enqueue */
+
+	/* Set SDone in last CB in enqueued ops for CB mode */
+	desc = q->ring_addr + ((q->sw_ring_head + i - 1)
+			& q->sw_ring_wrap_mask);
+	desc->req.sdone_enable = 1;
+	desc->req.irq_enable = q->irq_enable;
+
+	acc101_dma_enqueue(q, i, &q_data->queue_stats);
+
+	/* Update stats */
+	q_data->queue_stats.enqueued_count += i;
+	q_data->queue_stats.enqueue_err_count += num - i;
+
+	return i;
+}
+
 /* Enqueue decode operations for ACC101 device in TB mode */
 static uint16_t
 acc101_enqueue_ldpc_dec_tb(struct rte_bbdev_queue_data *q_data,
@@ -2348,6 +3134,56 @@ static inline uint32_t hq_index(uint32_t offset)
 	return i;
 }
 
+
+/* Enqueue decode operations for ACC101 device in TB mode */
+static uint16_t
+acc101_enqueue_dec_tb(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	int32_t avail = acc101_ring_avail_enq(q);
+	uint16_t i, enqueued_cbs = 0;
+	uint8_t cbs_in_tb;
+	int ret;
+
+	for (i = 0; i < num; ++i) {
+		cbs_in_tb = get_num_cbs_in_tb_dec(&ops[i]->turbo_dec);
+		/* Check if there is available space for further processing */
+		if (unlikely(avail - cbs_in_tb < 0))
+			break;
+		avail -= cbs_in_tb;
+
+		ret = enqueue_dec_one_op_tb(q, ops[i], enqueued_cbs, cbs_in_tb);
+		if (ret < 0)
+			break;
+		enqueued_cbs += ret;
+	}
+
+	acc101_dma_enqueue(q, enqueued_cbs, &q_data->queue_stats);
+
+	/* Update stats */
+	q_data->queue_stats.enqueued_count += i;
+	q_data->queue_stats.enqueue_err_count += num - i;
+
+	return i;
+}
+
+/* Enqueue decode operations for ACC101 device. */
+static uint16_t
+acc101_enqueue_dec(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t num)
+{
+	uint16_t ret;
+	int32_t aq_avail = acc101_aq_avail(q_data, num);
+	if (unlikely((aq_avail <= 0) || (num == 0)))
+		return 0;
+	if (ops[0]->turbo_dec.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
+		ret = acc101_enqueue_dec_tb(q_data, ops, num);
+	else
+		ret = acc101_enqueue_dec_cb(q_data, ops, num);
+	return ret;
+}
+
 /* Enqueue decode operations for ACC101 device. */
 static uint16_t
 acc101_enqueue_ldpc_dec(struct rte_bbdev_queue_data *q_data,
@@ -2493,6 +3329,58 @@ static inline uint32_t hq_index(uint32_t offset)
 }
 
 
+/* Dequeue one decode operation from ACC101 device in CB mode */
+static inline int
+dequeue_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
+		struct acc101_queue *q, struct rte_bbdev_dec_op **ref_op,
+		uint16_t dequeued_cbs, uint32_t *aq_dequeued)
+{
+	union acc101_dma_desc *desc, atom_desc;
+	union acc101_dma_rsp_desc rsp;
+	struct rte_bbdev_dec_op *op;
+
+	desc = q->ring_addr + ((q->sw_ring_tail + dequeued_cbs)
+			& q->sw_ring_wrap_mask);
+	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+			__ATOMIC_RELAXED);
+
+	/* Check fdone bit */
+	if (!(atom_desc.rsp.val & ACC101_FDONE))
+		return -1;
+
+	rsp.val = atom_desc.rsp.val;
+	rte_bbdev_log_debug("Resp. desc %p: %x", desc, rsp.val);
+
+	/* Dequeue */
+	op = desc->req.op_addr;
+
+	/* Clearing status, it will be set based on response */
+	op->status = 0;
+	op->status |= ((rsp.input_err)
+			? (1 << RTE_BBDEV_DATA_ERROR) : 0);
+	op->status |= ((rsp.dma_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
+	op->status |= ((rsp.fcw_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
+	if (op->status != 0)
+		q_data->queue_stats.dequeue_err_count++;
+
+	/* CRC invalid if error exists */
+	if (!op->status)
+		op->status |= rsp.crc_status << RTE_BBDEV_CRC_ERROR;
+	op->turbo_dec.iter_count = (uint8_t) rsp.iter_cnt / 2;
+	/* Check if this is the last desc in batch (Atomic Queue) */
+	if (desc->req.last_desc_in_batch) {
+		(*aq_dequeued)++;
+		desc->req.last_desc_in_batch = 0;
+	}
+	desc->rsp.val = ACC101_DMA_DESC_TYPE;
+	desc->rsp.add_info_0 = 0;
+	desc->rsp.add_info_1 = 0;
+	*ref_op = op;
+
+	/* One CB (op) was successfully dequeued */
+	return 1;
+}
+
 /* Dequeue one decode operation from ACC101 device in CB mode */
 static inline int
 dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
@@ -2625,6 +3513,50 @@ static inline uint32_t hq_index(uint32_t offset)
 	return cb_idx;
 }
 
+/* Dequeue encode operations from ACC101 device. */
+static uint16_t
+acc101_dequeue_enc(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	uint32_t avail = acc101_ring_avail_deq(q);
+	uint32_t aq_dequeued = 0;
+	uint16_t i, dequeued_ops = 0, dequeued_descs = 0;
+	int ret;
+	struct rte_bbdev_enc_op *op;
+	if (avail == 0)
+		return 0;
+	op = (q->ring_addr + (q->sw_ring_tail &
+			q->sw_ring_wrap_mask))->req.op_addr;
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (unlikely(ops == NULL || q == NULL || op == NULL))
+		return 0;
+#endif
+	int cbm = op->turbo_enc.code_block_mode;
+
+	for (i = 0; i < num; i++) {
+		if (cbm == RTE_BBDEV_TRANSPORT_BLOCK)
+			ret = dequeue_enc_one_op_tb(q, &ops[dequeued_ops],
+					&dequeued_ops, &aq_dequeued,
+					&dequeued_descs);
+		else
+			ret = dequeue_enc_one_op_cb(q, &ops[dequeued_ops],
+					&dequeued_ops, &aq_dequeued,
+					&dequeued_descs);
+		if (ret < 0)
+			break;
+		if (dequeued_ops >= num)
+			break;
+	}
+
+	q->aq_dequeued += aq_dequeued;
+	q->sw_ring_tail += dequeued_descs;
+
+	/* Update dequeue stats */
+	q_data->queue_stats.dequeued_count += dequeued_ops;
+	return dequeued_ops;
+}
+
 /* Dequeue LDPC encode operations from ACC101 device. */
 static uint16_t
 acc101_dequeue_ldpc_enc(struct rte_bbdev_queue_data *q_data,
@@ -2671,6 +3603,50 @@ static inline uint32_t hq_index(uint32_t offset)
 
 /* Dequeue decode operations from ACC101 device. */
 static uint16_t
+acc101_dequeue_dec(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t num)
+{
+	struct acc101_queue *q = q_data->queue_private;
+	uint16_t dequeue_num;
+	uint32_t avail = acc101_ring_avail_deq(q);
+	uint32_t aq_dequeued = 0;
+	uint16_t i;
+	uint16_t dequeued_cbs = 0;
+	struct rte_bbdev_dec_op *op;
+	int ret;
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+	if (unlikely(ops == NULL || q == NULL))
+		return 0;
+#endif
+
+	dequeue_num = (avail < num) ? avail : num;
+
+	for (i = 0; i < dequeue_num; ++i) {
+		op = (q->ring_addr + ((q->sw_ring_tail + dequeued_cbs)
+			& q->sw_ring_wrap_mask))->req.op_addr;
+		if (op->turbo_dec.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
+			ret = dequeue_dec_one_op_tb(q, &ops[i], dequeued_cbs,
+					&aq_dequeued);
+		else
+			ret = dequeue_dec_one_op_cb(q_data, q, &ops[i],
+					dequeued_cbs, &aq_dequeued);
+
+		if (ret < 0)
+			break;
+		dequeued_cbs += ret;
+	}
+
+	q->aq_dequeued += aq_dequeued;
+	q->sw_ring_tail += dequeued_cbs;
+
+	/* Update dequeue stats */
+	q_data->queue_stats.dequeued_count += i;
+	return i;
+}
+
+/* Dequeue LDPC decode operations from ACC101 device. */
+static uint16_t
 acc101_dequeue_ldpc_dec(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_dec_op **ops, uint16_t num)
 {
@@ -2721,6 +3697,10 @@ static inline uint32_t hq_index(uint32_t offset)
 	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
 
 	dev->dev_ops = &acc101_bbdev_ops;
+	dev->enqueue_enc_ops = acc101_enqueue_enc;
+	dev->enqueue_dec_ops = acc101_enqueue_dec;
+	dev->dequeue_enc_ops = acc101_dequeue_enc;
+	dev->dequeue_dec_ops = acc101_dequeue_dec;
 	dev->enqueue_ldpc_enc_ops = acc101_enqueue_ldpc_enc;
 	dev->enqueue_ldpc_dec_ops = acc101_enqueue_ldpc_dec;
 	dev->dequeue_ldpc_enc_ops = acc101_dequeue_ldpc_enc;
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v1 8/9] baseband/acc101: support MSI interrupt
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
                   ` (6 preceding siblings ...)
  2022-04-04 21:13 ` [PATCH v1 7/9] baseband/acc101: support 4G processing Nicolas Chautru
@ 2022-04-04 21:13 ` Nicolas Chautru
  2022-04-04 21:13 ` [PATCH v1 9/9] baseband/acc101: add device configure function Nicolas Chautru
  2022-04-05  6:48 ` [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Thomas Monjalon
  9 siblings, 0 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

Add capability and functions to support MSI
interrupts, the interrupt handler and the info ring.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 drivers/baseband/acc101/rte_acc101_pmd.c | 309 ++++++++++++++++++++++++++++++-
 drivers/baseband/acc101/rte_acc101_pmd.h |  15 ++
 2 files changed, 321 insertions(+), 3 deletions(-)

diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
index 2bb98b5..ac3d56d 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.c
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -352,6 +352,215 @@
 	free_base_addresses(base_addrs, i);
 }
 
+/*
+ * Find queue_id of a device queue based on details from the Info Ring.
+ * If a queue isn't found, UINT16_MAX is returned.
+ */
+static inline uint16_t
+get_queue_id_from_ring_info(struct rte_bbdev_data *data,
+		const union acc101_info_ring_data ring_data)
+{
+	uint16_t queue_id;
+
+	for (queue_id = 0; queue_id < data->num_queues; ++queue_id) {
+		struct acc101_queue *acc101_q =
+				data->queues[queue_id].queue_private;
+		if (acc101_q != NULL && acc101_q->aq_id == ring_data.aq_id &&
+				acc101_q->qgrp_id == ring_data.qg_id &&
+				acc101_q->vf_id == ring_data.vf_id)
+			return queue_id;
+	}
+
+	return UINT16_MAX;
+}
+
+/* Check the Info Ring and log any unexpected interrupt cause */
+static inline void
+acc101_check_ir(struct acc101_device *acc101_dev)
+{
+	volatile union acc101_info_ring_data *ring_data;
+	uint16_t info_ring_head = acc101_dev->info_ring_head;
+	if (acc101_dev->info_ring == NULL)
+		return;
+
+	ring_data = acc101_dev->info_ring + (acc101_dev->info_ring_head &
+			ACC101_INFO_RING_MASK);
+
+	while (ring_data->valid) {
+		if ((ring_data->int_nb < ACC101_PF_INT_DMA_DL_DESC_IRQ) || (
+				ring_data->int_nb >
+				ACC101_PF_INT_DMA_DL5G_DESC_IRQ)) {
+			rte_bbdev_log(WARNING, "InfoRing: ITR:%d Info:0x%x",
+				ring_data->int_nb, ring_data->detailed_info);
+			/* Initialize Info Ring entry and move forward */
+			ring_data->val = 0;
+		}
+		info_ring_head++;
+		ring_data = acc101_dev->info_ring +
+				(info_ring_head & ACC101_INFO_RING_MASK);
+	}
+}
+
+/* Checks PF Info Ring to find the interrupt cause and handles it accordingly */
+static inline void
+acc101_pf_interrupt_handler(struct rte_bbdev *dev)
+{
+	struct acc101_device *acc101_dev = dev->data->dev_private;
+	volatile union acc101_info_ring_data *ring_data;
+	struct acc101_deq_intr_details deq_intr_det;
+
+	ring_data = acc101_dev->info_ring + (acc101_dev->info_ring_head &
+			ACC101_INFO_RING_MASK);
+
+	while (ring_data->valid) {
+
+		rte_bbdev_log_debug(
+				"ACC101 PF Interrupt received, Info Ring data: 0x%x",
+				ring_data->val);
+
+		switch (ring_data->int_nb) {
+		case ACC101_PF_INT_DMA_DL_DESC_IRQ:
+		case ACC101_PF_INT_DMA_UL_DESC_IRQ:
+		case ACC101_PF_INT_DMA_UL5G_DESC_IRQ:
+		case ACC101_PF_INT_DMA_DL5G_DESC_IRQ:
+			deq_intr_det.queue_id = get_queue_id_from_ring_info(
+					dev->data, *ring_data);
+			if (deq_intr_det.queue_id == UINT16_MAX) {
+				rte_bbdev_log(ERR,
+						"Couldn't find queue: aq_id: %u, qg_id: %u, vf_id: %u",
+						ring_data->aq_id,
+						ring_data->qg_id,
+						ring_data->vf_id);
+				return;
+			}
+			rte_bbdev_pmd_callback_process(dev,
+					RTE_BBDEV_EVENT_DEQUEUE, &deq_intr_det);
+			break;
+		default:
+			rte_bbdev_pmd_callback_process(dev,
+					RTE_BBDEV_EVENT_ERROR, NULL);
+			break;
+		}
+
+		/* Initialize Info Ring entry and move forward */
+		ring_data->val = 0;
+		++acc101_dev->info_ring_head;
+		ring_data = acc101_dev->info_ring +
+				(acc101_dev->info_ring_head &
+				ACC101_INFO_RING_MASK);
+	}
+}
+
+/* Checks VF Info Ring to find the interrupt cause and handles it accordingly */
+static inline void
+acc101_vf_interrupt_handler(struct rte_bbdev *dev)
+{
+	struct acc101_device *acc101_dev = dev->data->dev_private;
+	volatile union acc101_info_ring_data *ring_data;
+	struct acc101_deq_intr_details deq_intr_det;
+
+	ring_data = acc101_dev->info_ring + (acc101_dev->info_ring_head &
+			ACC101_INFO_RING_MASK);
+
+	while (ring_data->valid) {
+
+		rte_bbdev_log_debug(
+				"ACC101 VF Interrupt received, Info Ring data: 0x%x",
+				ring_data->val);
+
+		switch (ring_data->int_nb) {
+		case ACC101_VF_INT_DMA_DL_DESC_IRQ:
+		case ACC101_VF_INT_DMA_UL_DESC_IRQ:
+		case ACC101_VF_INT_DMA_UL5G_DESC_IRQ:
+		case ACC101_VF_INT_DMA_DL5G_DESC_IRQ:
+			/* VFs are not aware of their vf_id - it's set to 0 in
+			 * queue structures.
+			 */
+			ring_data->vf_id = 0;
+			deq_intr_det.queue_id = get_queue_id_from_ring_info(
+					dev->data, *ring_data);
+			if (deq_intr_det.queue_id == UINT16_MAX) {
+				rte_bbdev_log(ERR,
+						"Couldn't find queue: aq_id: %u, qg_id: %u",
+						ring_data->aq_id,
+						ring_data->qg_id);
+				return;
+			}
+			rte_bbdev_pmd_callback_process(dev,
+					RTE_BBDEV_EVENT_DEQUEUE, &deq_intr_det);
+			break;
+		default:
+			rte_bbdev_pmd_callback_process(dev,
+					RTE_BBDEV_EVENT_ERROR, NULL);
+			break;
+		}
+
+		/* Initialize Info Ring entry and move forward */
+		ring_data->valid = 0;
+		++acc101_dev->info_ring_head;
+		ring_data = acc101_dev->info_ring + (acc101_dev->info_ring_head
+				& ACC101_INFO_RING_MASK);
+	}
+}
+
+/* Interrupt handler triggered by ACC101 dev for handling specific interrupt */
+static void
+acc101_dev_interrupt_handler(void *cb_arg)
+{
+	struct rte_bbdev *dev = cb_arg;
+	struct acc101_device *acc101_dev = dev->data->dev_private;
+
+	/* Read info ring */
+	if (acc101_dev->pf_device)
+		acc101_pf_interrupt_handler(dev);
+	else
+		acc101_vf_interrupt_handler(dev);
+}
+
+/* Allocate and set up the Info Ring */
+static int
+allocate_info_ring(struct rte_bbdev *dev)
+{
+	struct acc101_device *d = dev->data->dev_private;
+	const struct acc101_registry_addr *reg_addr;
+	rte_iova_t info_ring_iova;
+	uint32_t phys_low, phys_high;
+
+	if (d->info_ring != NULL)
+		return 0; /* Already configured */
+
+	/* Choose correct registry addresses for the device type */
+	if (d->pf_device)
+		reg_addr = &pf_reg_addr;
+	else
+		reg_addr = &vf_reg_addr;
+	/* Allocate InfoRing */
+	if (d->info_ring == NULL)
+		d->info_ring = rte_zmalloc_socket("Info Ring",
+				ACC101_INFO_RING_NUM_ENTRIES *
+				sizeof(*d->info_ring), RTE_CACHE_LINE_SIZE,
+				dev->data->socket_id);
+	if (d->info_ring == NULL) {
+		rte_bbdev_log(ERR,
+				"Failed to allocate Info Ring for %s:%u",
+				dev->device->driver->name,
+				dev->data->dev_id);
+		return -ENOMEM;
+	}
+	info_ring_iova = rte_malloc_virt2iova(d->info_ring);
+
+	/* Set up the Info Ring */
+	phys_high = (uint32_t)(info_ring_iova >> 32);
+	phys_low  = (uint32_t)(info_ring_iova);
+	acc101_reg_write(d, reg_addr->info_ring_hi, phys_high);
+	acc101_reg_write(d, reg_addr->info_ring_lo, phys_low);
+	acc101_reg_write(d, reg_addr->info_ring_en, ACC101_REG_IRQ_EN_ALL);
+	d->info_ring_head = (acc101_reg_read(d, reg_addr->info_ring_ptr) &
+			0xFFF) / sizeof(union acc101_info_ring_data);
+	return 0;
+}
+
 /* Allocate 64MB memory used for all software rings */
 static int
 acc101_setup_queues(struct rte_bbdev *dev, uint16_t num_queues, int socket_id)
@@ -359,6 +568,7 @@
 	uint32_t phys_low, phys_high, value;
 	struct acc101_device *d = dev->data->dev_private;
 	const struct acc101_registry_addr *reg_addr;
+	int ret;
 
 	if (d->pf_device && !d->acc101_conf.pf_mode_en) {
 		rte_bbdev_log(NOTICE,
@@ -445,6 +655,14 @@
 	acc101_reg_write(d, reg_addr->tail_ptrs_dl4g_hi, phys_high);
 	acc101_reg_write(d, reg_addr->tail_ptrs_dl4g_lo, phys_low);
 
+	ret = allocate_info_ring(dev);
+	if (ret < 0) {
+		rte_bbdev_log(ERR, "Failed to allocate info_ring for %s:%u",
+				dev->device->driver->name,
+				dev->data->dev_id);
+		/* Continue */
+	}
+
 	if (d->harq_layout == NULL)
 		d->harq_layout = rte_zmalloc_socket("HARQ Layout",
 				ACC101_HARQ_LAYOUT * sizeof(*d->harq_layout),
@@ -468,13 +686,59 @@
 	return 0;
 }
 
+static int
+acc101_intr_enable(struct rte_bbdev *dev)
+{
+	int ret;
+	struct acc101_device *d = dev->data->dev_private;
+
+	/* Only MSI interrupts are currently supported */
+	if (rte_intr_type_get(dev->intr_handle) == RTE_INTR_HANDLE_VFIO_MSI ||
+			rte_intr_type_get(dev->intr_handle) == RTE_INTR_HANDLE_UIO) {
+
+		ret = allocate_info_ring(dev);
+		if (ret < 0) {
+			rte_bbdev_log(ERR,
+					"Couldn't allocate info ring for device: %s",
+					dev->data->name);
+			return ret;
+		}
+
+		ret = rte_intr_enable(dev->intr_handle);
+		if (ret < 0) {
+			rte_bbdev_log(ERR,
+					"Couldn't enable interrupts for device: %s",
+					dev->data->name);
+			rte_free(d->info_ring);
+			return ret;
+		}
+		ret = rte_intr_callback_register(dev->intr_handle,
+				acc101_dev_interrupt_handler, dev);
+		if (ret < 0) {
+			rte_bbdev_log(ERR,
+					"Couldn't register interrupt callback for device: %s",
+					dev->data->name);
+			rte_free(d->info_ring);
+			return ret;
+		}
+
+		return 0;
+	}
+
+	rte_bbdev_log(ERR, "ACC101 (%s) supports only VFIO MSI interrupts",
+			dev->data->name);
+	return -ENOTSUP;
+}
+
 /* Free memory used for software rings */
 static int
 acc101_dev_close(struct rte_bbdev *dev)
 {
 	struct acc101_device *d = dev->data->dev_private;
+	acc101_check_ir(d);
 	if (d->sw_rings_base != NULL) {
 		rte_free(d->tail_ptrs);
+		rte_free(d->info_ring);
 		rte_free(d->sw_rings_base);
 		rte_free(d->harq_layout);
 		d->sw_rings_base = NULL;
@@ -770,6 +1034,7 @@
 					RTE_BBDEV_TURBO_CRC_TYPE_24B |
 					RTE_BBDEV_TURBO_HALF_ITERATION_EVEN |
 					RTE_BBDEV_TURBO_EARLY_TERMINATION |
+					RTE_BBDEV_TURBO_DEC_INTERRUPTS |
 					RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN |
 					RTE_BBDEV_TURBO_DEC_TB_CRC_24B_KEEP |
 					RTE_BBDEV_TURBO_DEC_CRC_24B_DROP |
@@ -790,6 +1055,7 @@
 					RTE_BBDEV_TURBO_CRC_24B_ATTACH |
 					RTE_BBDEV_TURBO_RV_INDEX_BYPASS |
 					RTE_BBDEV_TURBO_RATE_MATCH |
+					RTE_BBDEV_TURBO_ENC_INTERRUPTS |
 					RTE_BBDEV_TURBO_ENC_SCATTER_GATHER,
 				.num_buffers_src =
 						RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
@@ -803,7 +1069,8 @@
 				.capability_flags =
 					RTE_BBDEV_LDPC_RATE_MATCH |
 					RTE_BBDEV_LDPC_CRC_24B_ATTACH |
-					RTE_BBDEV_LDPC_INTERLEAVER_BYPASS,
+					RTE_BBDEV_LDPC_INTERLEAVER_BYPASS |
+					RTE_BBDEV_LDPC_ENC_INTERRUPTS,
 				.num_buffers_src =
 						RTE_BBDEV_LDPC_MAX_CODE_BLOCKS,
 				.num_buffers_dst =
@@ -828,7 +1095,8 @@
 				RTE_BBDEV_LDPC_DECODE_BYPASS |
 				RTE_BBDEV_LDPC_DEC_SCATTER_GATHER |
 				RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION |
-				RTE_BBDEV_LDPC_LLR_COMPRESSION,
+				RTE_BBDEV_LDPC_LLR_COMPRESSION |
+				RTE_BBDEV_LDPC_DEC_INTERRUPTS,
 			.llr_size = 8,
 			.llr_decimals = 1,
 			.num_buffers_src =
@@ -877,15 +1145,45 @@
 #else
 	dev_info->harq_buffer_size = 0;
 #endif
+	acc101_check_ir(d);
+}
+
+static int
+acc101_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
+{
+	struct acc101_queue *q = dev->data->queues[queue_id].queue_private;
+
+	if (rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_VFIO_MSI &&
+			rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
+		return -ENOTSUP;
+
+	q->irq_enable = 1;
+	return 0;
+}
+
+static int
+acc101_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
+{
+	struct acc101_queue *q = dev->data->queues[queue_id].queue_private;
+
+	if (rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_VFIO_MSI &&
+			rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
+		return -ENOTSUP;
+
+	q->irq_enable = 0;
+	return 0;
 }
 
 static const struct rte_bbdev_ops acc101_bbdev_ops = {
 	.setup_queues = acc101_setup_queues,
+	.intr_enable = acc101_intr_enable,
 	.close = acc101_dev_close,
 	.info_get = acc101_dev_info_get,
 	.queue_setup = acc101_queue_setup,
 	.queue_release = acc101_queue_release,
 	.queue_stop = acc101_queue_stop,
+	.queue_intr_enable = acc101_queue_intr_enable,
+	.queue_intr_disable = acc101_queue_intr_disable
 };
 
 /* ACC101 PCI PF address map */
@@ -3360,8 +3658,10 @@ static inline uint32_t hq_index(uint32_t offset)
 			? (1 << RTE_BBDEV_DATA_ERROR) : 0);
 	op->status |= ((rsp.dma_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
 	op->status |= ((rsp.fcw_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
-	if (op->status != 0)
+	if (op->status != 0) {
 		q_data->queue_stats.dequeue_err_count++;
+		acc101_check_ir(q->d);
+	}
 
 	/* CRC invalid if error exists */
 	if (!op->status)
@@ -3419,6 +3719,9 @@ static inline uint32_t hq_index(uint32_t offset)
 		op->status |= 1 << RTE_BBDEV_SYNDROME_ERROR;
 	op->ldpc_dec.iter_count = (uint8_t) rsp.iter_cnt;
 
+	if (op->status & (1 << RTE_BBDEV_DRV_ERROR))
+		acc101_check_ir(q->d);
+
 	/* Check if this is the last desc in batch (Atomic Queue) */
 	if (desc->req.last_desc_in_batch) {
 		(*aq_dequeued)++;
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.h b/drivers/baseband/acc101/rte_acc101_pmd.h
index 1cbbb1f..afa83a6 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.h
+++ b/drivers/baseband/acc101/rte_acc101_pmd.h
@@ -557,8 +557,15 @@ struct acc101_device {
 	void *sw_rings_base;  /* Base addr of un-aligned memory for sw rings */
 	void *sw_rings;  /* 64MBs of 64MB aligned memory for sw rings */
 	rte_iova_t sw_rings_iova;  /* IOVA address of sw_rings */
+	/* Virtual address of the info memory routed to this function while
+	 * in operation, whether it is PF or VF.
+	 * HW may DMA information data to this location asynchronously.
+	 */
+	union acc101_info_ring_data *info_ring;
 
 	union acc101_harq_layout_data *harq_layout;
+	/* Virtual Info Ring head */
+	uint16_t info_ring_head;
 	/* Number of bytes available for each queue in device, depending on
 	 * how many queues are enabled with configure()
 	 */
@@ -577,4 +584,12 @@ struct acc101_device {
 	bool configured; /**< True if this ACC101 device is configured */
 };
 
+/**
+ * Structure with details about RTE_BBDEV_EVENT_DEQUEUE event. It's passed to
+ * the callback function.
+ */
+struct acc101_deq_intr_details {
+	uint16_t queue_id;
+};
+
 #endif /* _RTE_ACC101_PMD_H_ */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v1 9/9] baseband/acc101: add device configure function
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
                   ` (7 preceding siblings ...)
  2022-04-04 21:13 ` [PATCH v1 8/9] baseband/acc101: support MSI interrupt Nicolas Chautru
@ 2022-04-04 21:13 ` Nicolas Chautru
  2022-04-05  6:48 ` [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Thomas Monjalon
  9 siblings, 0 replies; 12+ messages in thread
From: Nicolas Chautru @ 2022-04-04 21:13 UTC (permalink / raw)
  To: dev, gakhil
  Cc: trix, thomas, ray.kinsella, bruce.richardson, hemant.agrawal,
	mingshan.zhang, david.marchand, Nicolas Chautru

Add a configure function so the device can be configured from the PF
within bbdev-test, without a dependency on pf_bb_config.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 app/test-bbdev/test_bbdev_perf.c         |  69 +++++++
 doc/guides/rel_notes/release_22_07.rst   |   4 +
 drivers/baseband/acc101/meson.build      |   2 +
 drivers/baseband/acc101/rte_acc101_cfg.h |  17 ++
 drivers/baseband/acc101/rte_acc101_pmd.c | 335 +++++++++++++++++++++++++++++++
 drivers/baseband/acc101/rte_acc101_pmd.h |  36 ++++
 drivers/baseband/acc101/version.map      |   7 +
 7 files changed, 470 insertions(+)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 0fa119a..6bb4222 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -65,6 +65,18 @@
 #define ACC100_QOS_GBR 0
 #endif
 
+#ifdef RTE_BASEBAND_ACC101
+#include <rte_acc101_cfg.h>
+#define ACC101PF_DRIVER_NAME   ("intel_acc101_pf")
+#define ACC101VF_DRIVER_NAME   ("intel_acc101_vf")
+#define ACC101_QMGR_NUM_AQS 16
+#define ACC101_QMGR_NUM_QGS 2
+#define ACC101_QMGR_AQ_DEPTH 5
+#define ACC101_QMGR_INVALID_IDX -1
+#define ACC101_QMGR_RR 1
+#define ACC101_QOS_GBR 0
+#endif
+
 #define OPS_CACHE_SIZE 256U
 #define OPS_POOL_SIZE_MIN 511U /* 0.5K per queue */
 
@@ -766,6 +778,63 @@ typedef int (test_case_function)(struct active_device *ad,
 				info->dev_name);
 	}
 #endif
+#ifdef RTE_BASEBAND_ACC101
+	if ((get_init_device() == true) &&
+		(!strcmp(info->drv.driver_name, ACC101PF_DRIVER_NAME))) {
+		struct rte_acc101_conf conf;
+		unsigned int i;
+
+		printf("Configure ACC101 FEC Driver %s with default values\n",
+				info->drv.driver_name);
+
+		/* clear default configuration before initialization */
+		memset(&conf, 0, sizeof(struct rte_acc101_conf));
+
+		/* Always set in PF mode for built-in configuration */
+		conf.pf_mode_en = true;
+		for (i = 0; i < RTE_ACC101_NUM_VFS; ++i) {
+			conf.arb_dl_4g[i].gbr_threshold1 = ACC101_QOS_GBR;
+			conf.arb_dl_4g[i].gbr_threshold2 = ACC101_QOS_GBR;
+			conf.arb_dl_4g[i].round_robin_weight = ACC101_QMGR_RR;
+			conf.arb_ul_4g[i].gbr_threshold1 = ACC101_QOS_GBR;
+			conf.arb_ul_4g[i].gbr_threshold2 = ACC101_QOS_GBR;
+			conf.arb_ul_4g[i].round_robin_weight = ACC101_QMGR_RR;
+			conf.arb_dl_5g[i].gbr_threshold1 = ACC101_QOS_GBR;
+			conf.arb_dl_5g[i].gbr_threshold2 = ACC101_QOS_GBR;
+			conf.arb_dl_5g[i].round_robin_weight = ACC101_QMGR_RR;
+			conf.arb_ul_5g[i].gbr_threshold1 = ACC101_QOS_GBR;
+			conf.arb_ul_5g[i].gbr_threshold2 = ACC101_QOS_GBR;
+			conf.arb_ul_5g[i].round_robin_weight = ACC101_QMGR_RR;
+		}
+
+		conf.input_pos_llr_1_bit = true;
+		conf.output_pos_llr_1_bit = true;
+		conf.num_vf_bundles = 1; /**< Number of VF bundles to set up */
+
+		conf.q_ul_4g.num_qgroups = ACC101_QMGR_NUM_QGS;
+		conf.q_ul_4g.first_qgroup_index = ACC101_QMGR_INVALID_IDX;
+		conf.q_ul_4g.num_aqs_per_groups = ACC101_QMGR_NUM_AQS;
+		conf.q_ul_4g.aq_depth_log2 = ACC101_QMGR_AQ_DEPTH;
+		conf.q_dl_4g.num_qgroups = ACC101_QMGR_NUM_QGS;
+		conf.q_dl_4g.first_qgroup_index = ACC101_QMGR_INVALID_IDX;
+		conf.q_dl_4g.num_aqs_per_groups = ACC101_QMGR_NUM_AQS;
+		conf.q_dl_4g.aq_depth_log2 = ACC101_QMGR_AQ_DEPTH;
+		conf.q_ul_5g.num_qgroups = ACC101_QMGR_NUM_QGS;
+		conf.q_ul_5g.first_qgroup_index = ACC101_QMGR_INVALID_IDX;
+		conf.q_ul_5g.num_aqs_per_groups = ACC101_QMGR_NUM_AQS;
+		conf.q_ul_5g.aq_depth_log2 = ACC101_QMGR_AQ_DEPTH;
+		conf.q_dl_5g.num_qgroups = ACC101_QMGR_NUM_QGS;
+		conf.q_dl_5g.first_qgroup_index = ACC101_QMGR_INVALID_IDX;
+		conf.q_dl_5g.num_aqs_per_groups = ACC101_QMGR_NUM_AQS;
+		conf.q_dl_5g.aq_depth_log2 = ACC101_QMGR_AQ_DEPTH;
+
+		/* Set up PF with configuration information */
+		ret = rte_acc101_configure(info->dev_name, &conf);
+		TEST_ASSERT_SUCCESS(ret,
+				"Failed to configure ACC101 PF for bbdev %s",
+				info->dev_name);
+	}
+#endif
 	/* Let's refresh this now this is configured */
 	rte_bbdev_info_get(dev_id, info);
 	nb_queues = RTE_MIN(rte_lcore_count(), info->drv.max_num_queues);
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d..ef9906b 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -55,6 +55,10 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added Intel ACC101 baseband PMD.**
+
+  * Added a new baseband PMD for Intel ACC101 device (Mount Cirrus).
+  * See the :doc:`../bbdevs/acc101` for more details.
 
 Removed Items
 -------------
diff --git a/drivers/baseband/acc101/meson.build b/drivers/baseband/acc101/meson.build
index e94eb2b..e92667a 100644
--- a/drivers/baseband/acc101/meson.build
+++ b/drivers/baseband/acc101/meson.build
@@ -4,3 +4,5 @@
 deps += ['bbdev', 'bus_vdev', 'ring', 'pci', 'bus_pci']
 
 sources = files('rte_acc101_pmd.c')
+
+headers = files('rte_acc101_cfg.h')
diff --git a/drivers/baseband/acc101/rte_acc101_cfg.h b/drivers/baseband/acc101/rte_acc101_cfg.h
index 4881cd6..8892f2f 100644
--- a/drivers/baseband/acc101/rte_acc101_cfg.h
+++ b/drivers/baseband/acc101/rte_acc101_cfg.h
@@ -89,6 +89,23 @@ struct rte_acc101_conf {
 	struct rte_acc101_arbitration arb_dl_5g[RTE_ACC101_NUM_VFS];
 };
 
+/**
+ * Configure an ACC101 device
+ *
+ * @param dev_name
+ *   The name of the device. This is the short form of PCI BDF, e.g. 00:01.0.
+ *   It can also be retrieved for a bbdev device from the dev_name field in the
+ *   rte_bbdev_info structure returned by rte_bbdev_info_get().
+ * @param conf
+ *   Configuration to apply to ACC101 HW.
+ *
+ * @return
+ *   Zero on success, negative value on failure.
+ */
+__rte_experimental
+int
+rte_acc101_configure(const char *dev_name, struct rte_acc101_conf *conf);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
index ac3d56d..b31a7ff 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.c
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -84,6 +84,26 @@
 
 enum {UL_4G = 0, UL_5G, DL_4G, DL_5G, NUM_ACC};
 
+/* Return the accelerator enum for a Queue Group Index */
+static inline int
+accFromQgid(int qg_idx, const struct rte_acc101_conf *acc101_conf)
+{
+	int accQg[ACC101_NUM_QGRPS];
+	int NumQGroupsPerFn[NUM_ACC];
+	int acc, qgIdx, qgIndex = 0;
+	for (qgIdx = 0; qgIdx < ACC101_NUM_QGRPS; qgIdx++)
+		accQg[qgIdx] = 0;
+	NumQGroupsPerFn[UL_4G] = acc101_conf->q_ul_4g.num_qgroups;
+	NumQGroupsPerFn[UL_5G] = acc101_conf->q_ul_5g.num_qgroups;
+	NumQGroupsPerFn[DL_4G] = acc101_conf->q_dl_4g.num_qgroups;
+	NumQGroupsPerFn[DL_5G] = acc101_conf->q_dl_5g.num_qgroups;
+	for (acc = UL_4G;  acc < NUM_ACC; acc++)
+		for (qgIdx = 0; qgIdx < NumQGroupsPerFn[acc]; qgIdx++)
+			accQg[qgIndex++] = acc;
+	acc = accQg[qg_idx];
+	return acc;
+}
+
 /* Return the queue topology for a Queue Group Index */
 static inline void
 qtopFromAcc(struct rte_acc101_queue_topology **qtop, int acc_enum,
@@ -112,6 +132,30 @@
 	*qtop = p_qtop;
 }
 
+/* Return the AQ depth for a Queue Group Index */
+static inline int
+aqDepth(int qg_idx, struct rte_acc101_conf *acc101_conf)
+{
+	struct rte_acc101_queue_topology *q_top = NULL;
+	int acc_enum = accFromQgid(qg_idx, acc101_conf);
+	qtopFromAcc(&q_top, acc_enum, acc101_conf);
+	if (unlikely(q_top == NULL))
+		return 1;
+	return q_top->aq_depth_log2;
+}
+
+/* Return the AQ depth for a Queue Group Index */
+static inline int
+aqNum(int qg_idx, struct rte_acc101_conf *acc101_conf)
+{
+	struct rte_acc101_queue_topology *q_top = NULL;
+	int acc_enum = accFromQgid(qg_idx, acc101_conf);
+	qtopFromAcc(&q_top, acc_enum, acc101_conf);
+	if (unlikely(q_top == NULL))
+		return 0;
+	return q_top->num_aqs_per_groups;
+}
+
 static void
 initQTop(struct rte_acc101_conf *acc101_conf)
 {
@@ -4120,3 +4164,294 @@ static int acc101_pci_remove(struct rte_pci_device *pci_dev)
 RTE_PMD_REGISTER_PCI_TABLE(ACC101PF_DRIVER_NAME, pci_id_acc101_pf_map);
 RTE_PMD_REGISTER_PCI(ACC101VF_DRIVER_NAME, acc101_pci_vf_driver);
 RTE_PMD_REGISTER_PCI_TABLE(ACC101VF_DRIVER_NAME, pci_id_acc101_vf_map);
+
+/* Initial configuration of an ACC101 device prior to running configure() */
+int
+rte_acc101_configure(const char *dev_name, struct rte_acc101_conf *conf)
+{
+	rte_bbdev_log(INFO, "rte_acc101_configure");
+	uint32_t value, address, status;
+	int qg_idx, template_idx, vf_idx, acc, i;
+	struct rte_bbdev *bbdev = rte_bbdev_get_named_dev(dev_name);
+
+	/* Compile time checks */
+	RTE_BUILD_BUG_ON(sizeof(struct acc101_dma_req_desc) != 256);
+	RTE_BUILD_BUG_ON(sizeof(union acc101_dma_desc) != 256);
+	RTE_BUILD_BUG_ON(sizeof(struct acc101_fcw_td) != 24);
+	RTE_BUILD_BUG_ON(sizeof(struct acc101_fcw_te) != 32);
+
+	if (bbdev == NULL) {
+		rte_bbdev_log(ERR,
+		"Invalid dev_name (%s), or device is not yet initialised",
+		dev_name);
+		return -ENODEV;
+	}
+	struct acc101_device *d = bbdev->data->dev_private;
+
+	/* Store configuration */
+	rte_memcpy(&d->acc101_conf, conf, sizeof(d->acc101_conf));
+
+	/* PCIe Bridge configuration */
+	acc101_reg_write(d, HwPfPcieGpexBridgeControl, ACC101_CFG_PCI_BRIDGE);
+	for (i = 1; i < ACC101_GPEX_AXIMAP_NUM; i++)
+		acc101_reg_write(d, HwPfPcieGpexAxiAddrMappingWindowPexBaseHigh + i * 16, 0);
+
+	/* Prevent blocking AXI read on BRESP for AXI Write */
+	address = HwPfPcieGpexAxiPioControl;
+	value = ACC101_CFG_PCI_AXI;
+	acc101_reg_write(d, address, value);
+
+	/* Explicitly releasing AXI as this may be stopped after PF FLR/BME */
+	usleep(2000);
+	acc101_reg_write(d, HWPfDmaAxiControl, 1);
+
+	/* Set the default 5GDL DMA configuration */
+	acc101_reg_write(d, HWPfDmaInboundDrainDataSize, ACC101_DMA_INBOUND);
+
+	/* Enable granular dynamic clock gating */
+	address = HWPfHiClkGateHystReg;
+	value = ACC101_CLOCK_GATING_EN;
+	acc101_reg_write(d, address, value);
+
+	/* Set default descriptor signature */
+	address = HWPfDmaDescriptorSignatuture;
+	value = 0;
+	acc101_reg_write(d, address, value);
+
+	/* Enable the Error Detection in DMA */
+	value = ACC101_CFG_DMA_ERROR;
+	address = HWPfDmaErrorDetectionEn;
+	acc101_reg_write(d, address, value);
+
+	/* AXI Cache configuration */
+	value = ACC101_CFG_AXI_CACHE;
+	address = HWPfDmaAxcacheReg;
+	acc101_reg_write(d, address, value);
+
+	/* Default DMA Configuration (Qmgr Enabled) */
+	address = HWPfDmaConfig0Reg;
+	value = 0;
+	acc101_reg_write(d, address, value);
+	address = HWPfDmaQmanen;
+	value = 0;
+	acc101_reg_write(d, address, value);
+
+	/* Default RLIM/ALEN configuration */
+	address = HWPfDmaConfig1Reg;
+	int alen_r = 0xF;
+	int alen_w = 0x7;
+	value = (1 << 31) + (alen_w << 20)  + (1 << 6) + alen_r;
+	acc101_reg_write(d, address, value);
+
+	/* Configure DMA Qmanager addresses */
+	address = HWPfDmaQmgrAddrReg;
+	value = HWPfQmgrEgressQueuesTemplate;
+	acc101_reg_write(d, address, value);
+
+	/* ===== Qmgr Configuration ===== */
+	/* Configuration of the AQueue Depth QMGR_GRP_0_DEPTH_LOG2 for UL */
+	int totalQgs = conf->q_ul_4g.num_qgroups +
+			conf->q_ul_5g.num_qgroups +
+			conf->q_dl_4g.num_qgroups +
+			conf->q_dl_5g.num_qgroups;
+	for (qg_idx = 0; qg_idx < totalQgs; qg_idx++) {
+		address = HWPfQmgrDepthLog2Grp +
+		ACC101_BYTES_IN_WORD * qg_idx;
+		value = aqDepth(qg_idx, conf);
+		acc101_reg_write(d, address, value);
+		address = HWPfQmgrTholdGrp +
+		ACC101_BYTES_IN_WORD * qg_idx;
+		value = (1 << 16) + (1 << (aqDepth(qg_idx, conf) - 1));
+		acc101_reg_write(d, address, value);
+	}
+
+	/* Template Priority in incremental order */
+	for (template_idx = 0; template_idx < ACC101_NUM_TMPL;
+			template_idx++) {
+		address = HWPfQmgrGrpTmplateReg0Indx + ACC101_BYTES_IN_WORD * template_idx;
+		value = ACC101_TMPL_PRI_0;
+		acc101_reg_write(d, address, value);
+		address = HWPfQmgrGrpTmplateReg1Indx + ACC101_BYTES_IN_WORD * template_idx;
+		value = ACC101_TMPL_PRI_1;
+		acc101_reg_write(d, address, value);
+		address = HWPfQmgrGrpTmplateReg2indx + ACC101_BYTES_IN_WORD * template_idx;
+		value = ACC101_TMPL_PRI_2;
+		acc101_reg_write(d, address, value);
+		address = HWPfQmgrGrpTmplateReg3Indx + ACC101_BYTES_IN_WORD * template_idx;
+		value = ACC101_TMPL_PRI_3;
+		acc101_reg_write(d, address, value);
+	}
+
+	address = HWPfQmgrGrpPriority;
+	value = ACC101_CFG_QMGR_HI_P;
+	acc101_reg_write(d, address, value);
+
+	/* Template Configuration */
+	for (template_idx = 0; template_idx < ACC101_NUM_TMPL;
+			template_idx++) {
+		value = 0;
+		address = HWPfQmgrGrpTmplateReg4Indx
+				+ ACC101_BYTES_IN_WORD * template_idx;
+		acc101_reg_write(d, address, value);
+	}
+	/* 4GUL */
+	int numQgs = conf->q_ul_4g.num_qgroups;
+	int numQqsAcc = 0;
+	value = 0;
+	for (qg_idx = numQqsAcc; qg_idx < (numQgs + numQqsAcc); qg_idx++)
+		value |= (1 << qg_idx);
+	for (template_idx = ACC101_SIG_UL_4G;
+			template_idx <= ACC101_SIG_UL_4G_LAST;
+			template_idx++) {
+		address = HWPfQmgrGrpTmplateReg4Indx
+				+ ACC101_BYTES_IN_WORD * template_idx;
+		acc101_reg_write(d, address, value);
+	}
+	/* 5GUL */
+	numQqsAcc += numQgs;
+	numQgs	= conf->q_ul_5g.num_qgroups;
+	value = 0;
+	int numEngines = 0;
+	for (qg_idx = numQqsAcc; qg_idx < (numQgs + numQqsAcc); qg_idx++)
+		value |= (1 << qg_idx);
+	for (template_idx = ACC101_SIG_UL_5G;
+			template_idx <= ACC101_SIG_UL_5G_LAST;
+			template_idx++) {
+		/* Check engine power-on status */
+		address = HwPfFecUl5gIbDebugReg +
+				ACC101_ENGINE_OFFSET * template_idx;
+		status = (acc101_reg_read(d, address) >> 4) & 0xF;
+		address = HWPfQmgrGrpTmplateReg4Indx
+				+ ACC101_BYTES_IN_WORD * template_idx;
+		if (status == 1) {
+			acc101_reg_write(d, address, value);
+			numEngines++;
+		} else
+			acc101_reg_write(d, address, 0);
+	}
+	/* 4GDL */
+	numQqsAcc += numQgs;
+	numQgs	= conf->q_dl_4g.num_qgroups;
+	value = 0;
+	for (qg_idx = numQqsAcc; qg_idx < (numQgs + numQqsAcc); qg_idx++)
+		value |= (1 << qg_idx);
+	for (template_idx = ACC101_SIG_DL_4G;
+			template_idx <= ACC101_SIG_DL_4G_LAST;
+			template_idx++) {
+		address = HWPfQmgrGrpTmplateReg4Indx
+				+ ACC101_BYTES_IN_WORD * template_idx;
+		acc101_reg_write(d, address, value);
+	}
+	/* 5GDL */
+	numQqsAcc += numQgs;
+	numQgs	= conf->q_dl_5g.num_qgroups;
+	value = 0;
+	for (qg_idx = numQqsAcc; qg_idx < (numQgs + numQqsAcc); qg_idx++)
+		value |= (1 << qg_idx);
+	for (template_idx = ACC101_SIG_DL_5G;
+			template_idx <= ACC101_SIG_DL_5G_LAST;
+			template_idx++) {
+		address = HWPfQmgrGrpTmplateReg4Indx
+				+ ACC101_BYTES_IN_WORD * template_idx;
+		acc101_reg_write(d, address, value);
+	}
+
+	/* Queue Group Function mapping */
+	int qman_func_id[8] = {0, 2, 1, 3, 4, 0, 0, 0};
+	address = HWPfQmgrGrpFunction0;
+	value = 0;
+	for (qg_idx = 0; qg_idx < 8; qg_idx++) {
+		acc = accFromQgid(qg_idx, conf);
+		value |= qman_func_id[acc]<<(qg_idx * 4);
+	}
+	acc101_reg_write(d, address, value);
+
+	/* Configuration of the Arbitration QGroup depth to 1 */
+	for (qg_idx = 0; qg_idx < totalQgs; qg_idx++) {
+		address = HWPfQmgrArbQDepthGrp +
+		ACC101_BYTES_IN_WORD * qg_idx;
+		value = 0;
+		acc101_reg_write(d, address, value);
+	}
+
+	/* Enabling AQueues through the Queue hierarchy */
+	for (vf_idx = 0; vf_idx < ACC101_NUM_VFS; vf_idx++) {
+		for (qg_idx = 0; qg_idx < ACC101_NUM_QGRPS; qg_idx++) {
+			value = 0;
+			if (vf_idx < conf->num_vf_bundles &&
+					qg_idx < totalQgs)
+				value = (1 << aqNum(qg_idx, conf)) - 1;
+			address = HWPfQmgrAqEnableVf
+					+ vf_idx * ACC101_BYTES_IN_WORD;
+			value += (qg_idx << 16);
+			acc101_reg_write(d, address, value);
+		}
+	}
+
+	/* This pointer to ARAM (128kB) is shifted by 2 (4B per register) */
+	uint32_t aram_address = 0;
+	for (qg_idx = 0; qg_idx < totalQgs; qg_idx++) {
+		for (vf_idx = 0; vf_idx < conf->num_vf_bundles; vf_idx++) {
+			address = HWPfQmgrVfBaseAddr + vf_idx
+					* ACC101_BYTES_IN_WORD + qg_idx
+					* ACC101_BYTES_IN_WORD * 64;
+			value = aram_address;
+			acc101_reg_write(d, address, value);
+			/* Offset ARAM Address for next memory bank
+			 * - increment of 4B
+			 */
+			aram_address += aqNum(qg_idx, conf) *
+					(1 << aqDepth(qg_idx, conf));
+		}
+	}
+
+	if (aram_address > ACC101_WORDS_IN_ARAM_SIZE) {
+		rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+				aram_address, ACC101_WORDS_IN_ARAM_SIZE);
+		return -EINVAL;
+	}
+
+	/* ==== HI Configuration ==== */
+
+	/* No Info Ring/MSI by default */
+	acc101_reg_write(d, HWPfHiInfoRingIntWrEnRegPf, 0);
+	acc101_reg_write(d, HWPfHiInfoRingVf2pfLoWrEnReg, 0);
+	acc101_reg_write(d, HWPfHiCfgMsiIntWrEnRegPf, 0xFFFFFFFF);
+	acc101_reg_write(d, HWPfHiCfgMsiVf2pfLoWrEnReg, 0xFFFFFFFF);
+	/* Prevent Block on Transmit Error */
+	address = HWPfHiBlockTransmitOnErrorEn;
+	value = 0;
+	acc101_reg_write(d, address, value);
+	/* Prevent MSI from being dropped */
+	address = HWPfHiMsiDropEnableReg;
+	value = 0;
+	acc101_reg_write(d, address, value);
+	/* Set the PF Mode register */
+	address = HWPfHiPfMode;
+	value = (conf->pf_mode_en) ? ACC101_PF_VAL : 0;
+	acc101_reg_write(d, address, value);
+	/* Explicitly releasing AXI after PF Mode and 2 ms */
+	usleep(2000);
+	acc101_reg_write(d, HWPfDmaAxiControl, 1);
+
+	/* QoS overflow init */
+	value = 1;
+	address = HWPfQosmonAEvalOverflow0;
+	acc101_reg_write(d, address, value);
+	address = HWPfQosmonBEvalOverflow0;
+	acc101_reg_write(d, address, value);
+
+	/* HARQ DDR Configuration */
+	unsigned int ddrSizeInMb = ACC101_HARQ_DDR;
+	for (vf_idx = 0; vf_idx < conf->num_vf_bundles; vf_idx++) {
+		address = HWPfDmaVfDdrBaseRw + vf_idx
+				* 0x10;
+		value = ((vf_idx * (ddrSizeInMb / 64)) << 16) +
+				(ddrSizeInMb - 1);
+		acc101_reg_write(d, address, value);
+	}
+	usleep(ACC101_LONG_WAIT);
+
+	rte_bbdev_log_debug("PF ACC101 configuration complete for %s", dev_name);
+	return 0;
+}
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.h b/drivers/baseband/acc101/rte_acc101_pmd.h
index afa83a6..d050e26 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.h
+++ b/drivers/baseband/acc101/rte_acc101_pmd.h
@@ -31,6 +31,11 @@
 #define RTE_ACC101_PF_DEVICE_ID        (0x57c4)
 #define RTE_ACC101_VF_DEVICE_ID        (0x57c5)
 
+/* Define as 1 to use only a single FEC engine */
+#ifndef RTE_ACC101_SINGLE_FEC
+#define RTE_ACC101_SINGLE_FEC 0
+#endif
+
 /* Values used in filling in descriptors */
 #define ACC101_DMA_DESC_TYPE           2
 #define ACC101_DMA_CODE_BLK_MODE       0
@@ -79,10 +84,25 @@
 
 #define ACC101_GRP_ID_SHIFT    10 /* Queue Index Hierarchy */
 #define ACC101_VF_ID_SHIFT     4  /* Queue Index Hierarchy */
+#define ACC101_TMPL_PRI_0      0x03020100
+#define ACC101_TMPL_PRI_1      0x07060504
+#define ACC101_TMPL_PRI_2      0x0b0a0908
+#define ACC101_TMPL_PRI_3      0x0f0e0d0c
 #define ACC101_QUEUE_ENABLE    0x80000000  /* Bit to mark Queue as Enabled */
+#define ACC101_WORDS_IN_ARAM_SIZE (128 * 1024 / 4)
 #define ACC101_FDONE    0x80000000
 #define ACC101_SDONE    0x40000000
 
+#define ACC101_NUM_TMPL       32
+/* Mapping of signals for the available engines */
+#define ACC101_SIG_UL_5G      0
+#define ACC101_SIG_UL_5G_LAST 8
+#define ACC101_SIG_DL_5G      13
+#define ACC101_SIG_DL_5G_LAST 15
+#define ACC101_SIG_UL_4G      16
+#define ACC101_SIG_UL_4G_LAST 19
+#define ACC101_SIG_DL_4G      27
+#define ACC101_SIG_DL_4G_LAST 31
 #define ACC101_NUM_ACCS       5
 #define ACC101_ACCMAP_0       0
 #define ACC101_ACCMAP_1       2
@@ -132,7 +152,23 @@
 #define ACC101_K0_3_1 56 /* K0 fraction numerator for rv 3 and BG 1 */
 #define ACC101_K0_3_2 43 /* K0 fraction numerator for rv 3 and BG 2 */
 
+/* ACC101 Configuration */
+#define ACC101_CFG_DMA_ERROR    0x3D7
+#define ACC101_CFG_AXI_CACHE    0x11
+#define ACC101_CFG_QMGR_HI_P    0x0F0F
+#define ACC101_CFG_PCI_AXI      0xC003
+#define ACC101_CFG_PCI_BRIDGE   0x40006033
+#define ACC101_ENGINE_OFFSET    0x1000
+#define ACC101_ENGINES_MAX      9
 #define ACC101_LONG_WAIT        1000
+#define ACC101_GPEX_AXIMAP_NUM  17
+#define ACC101_CLOCK_GATING_EN  0x30000
+#define ACC101_DMA_INBOUND      0x104
+/* DDR Size per VF - 512MB by default
+ * Can be increased up to 4 GB with a single PF/VF
+ */
+#define ACC101_HARQ_DDR         (512 * 1)
+#define ACC101_MS_IN_US         (1000)
 
 /* ACC101 DMA Descriptor triplet */
 struct acc101_dma_triplet {
diff --git a/drivers/baseband/acc101/version.map b/drivers/baseband/acc101/version.map
index c2e0723..bd3c4b5 100644
--- a/drivers/baseband/acc101/version.map
+++ b/drivers/baseband/acc101/version.map
@@ -1,3 +1,10 @@
 DPDK_22 {
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_acc101_configure;
+
+};
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device
  2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
                   ` (8 preceding siblings ...)
  2022-04-04 21:13 ` [PATCH v1 9/9] baseband/acc101: add device configure function Nicolas Chautru
@ 2022-04-05  6:48 ` Thomas Monjalon
  2022-04-05  6:57   ` David Marchand
  9 siblings, 1 reply; 12+ messages in thread
From: Thomas Monjalon @ 2022-04-05  6:48 UTC (permalink / raw)
  To: Nicolas Chautru
  Cc: dev, gakhil, trix, ray.kinsella, bruce.richardson,
	hemant.agrawal, mingshan.zhang, david.marchand, john.mcnamara

04/04/2022 23:13, Nicolas Chautru:
> This serie introduces the PMD for the new bbdev device ACC101
> (aka Mount Cirrus). This is a derivative from previous Mount Bryce
> which includes silicon improvement, bug fixes, capacity improvement
> for 5GNR and feature improvement.

Nack for adding this driver.
After comparing with the ACC100 driver, it is clear that it can be merged
in the same driver.
Bug fixes and improvements are not justification to create a new driver.



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device
  2022-04-05  6:48 ` [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Thomas Monjalon
@ 2022-04-05  6:57   ` David Marchand
  0 siblings, 0 replies; 12+ messages in thread
From: David Marchand @ 2022-04-05  6:57 UTC (permalink / raw)
  To: Thomas Monjalon, Nicolas Chautru
  Cc: dev, Akhil Goyal, Tom Rix, Kinsella, Ray, Bruce Richardson,
	Hemant Agrawal, mingshan.zhang, Mcnamara, John

On Tue, Apr 5, 2022 at 8:48 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 04/04/2022 23:13, Nicolas Chautru:
> > This serie introduces the PMD for the new bbdev device ACC101
> > (aka Mount Cirrus). This is a derivative from previous Mount Bryce
> > which includes silicon improvement, bug fixes, capacity improvement
> > for 5GNR and feature improvement.
>
> Nack for adding this driver.
> Afer comparing with A100 driver, it is clear that it can be merged
> in the same driver.
> Bug fixes and improvements are not justification to create a new driver.

I was about to send a similar mail.
+1 on this nack.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 12+ messages in thread
