* [dpdk-dev] [PATCH] net/liquidio: removed LiquidIO ethdev driver
@ 2023-04-28 10:31 jerinj
2023-05-02 14:18 ` Ferruh Yigit
2023-05-08 13:44 ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
0 siblings, 2 replies; 4+ messages in thread
From: jerinj @ 2023-04-28 10:31 UTC
To: dev, Thomas Monjalon, Shijith Thotton,
Srisivasubramanian Srinivasan, Anatoly Burakov
Cc: ferruh.yigit, Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
The LiquidIO product line has been superseded by the CN9K/CN10K
OCTEON product line smart NICs, supported at drivers/net/octeon_ep/.
The LiquidIO driver was categorized as UNMAINTAINED in DPDK 20.08
because of the absence of updates to the driver.
Due to the above reasons, the driver is removed from DPDK 23.07.
Also remove the deprecation notice entry for this removal from
doc/guides/rel_notes/deprecation.rst.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
MAINTAINERS | 8 -
doc/guides/nics/features/liquidio.ini | 29 -
doc/guides/nics/index.rst | 1 -
doc/guides/nics/liquidio.rst | 169 --
doc/guides/rel_notes/deprecation.rst | 7 -
doc/guides/rel_notes/release_23_07.rst | 9 +-
drivers/net/liquidio/base/lio_23xx_reg.h | 165 --
drivers/net/liquidio/base/lio_23xx_vf.c | 513 ------
drivers/net/liquidio/base/lio_23xx_vf.h | 63 -
drivers/net/liquidio/base/lio_hw_defs.h | 239 ---
drivers/net/liquidio/base/lio_mbox.c | 246 ---
drivers/net/liquidio/base/lio_mbox.h | 102 -
drivers/net/liquidio/lio_ethdev.c | 2147 ----------------------
drivers/net/liquidio/lio_ethdev.h | 179 --
drivers/net/liquidio/lio_logs.h | 58 -
drivers/net/liquidio/lio_rxtx.c | 1804 ------------------
drivers/net/liquidio/lio_rxtx.h | 740 --------
drivers/net/liquidio/lio_struct.h | 661 -------
drivers/net/liquidio/meson.build | 16 -
drivers/net/meson.build | 1 -
20 files changed, 1 insertion(+), 7156 deletions(-)
delete mode 100644 doc/guides/nics/features/liquidio.ini
delete mode 100644 doc/guides/nics/liquidio.rst
delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
delete mode 100644 drivers/net/liquidio/lio_ethdev.c
delete mode 100644 drivers/net/liquidio/lio_ethdev.h
delete mode 100644 drivers/net/liquidio/lio_logs.h
delete mode 100644 drivers/net/liquidio/lio_rxtx.c
delete mode 100644 drivers/net/liquidio/lio_rxtx.h
delete mode 100644 drivers/net/liquidio/lio_struct.h
delete mode 100644 drivers/net/liquidio/meson.build
diff --git a/MAINTAINERS b/MAINTAINERS
index 8df23e5099..0157c26dd2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -681,14 +681,6 @@ F: drivers/net/thunderx/
F: doc/guides/nics/thunderx.rst
F: doc/guides/nics/features/thunderx.ini
-Cavium LiquidIO - UNMAINTAINED
-M: Shijith Thotton <sthotton@marvell.com>
-M: Srisivasubramanian Srinivasan <srinivasan@marvell.com>
-T: git://dpdk.org/next/dpdk-next-net-mrvl
-F: drivers/net/liquidio/
-F: doc/guides/nics/liquidio.rst
-F: doc/guides/nics/features/liquidio.ini
-
Cavium OCTEON TX
M: Harman Kalra <hkalra@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
diff --git a/doc/guides/nics/features/liquidio.ini b/doc/guides/nics/features/liquidio.ini
deleted file mode 100644
index a8bde282e0..0000000000
--- a/doc/guides/nics/features/liquidio.ini
+++ /dev/null
@@ -1,29 +0,0 @@
-;
-; Supported features of the 'LiquidIO' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Link status = Y
-Link status event = Y
-MTU update = Y
-Scattered Rx = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-VLAN filter = Y
-CRC offload = Y
-VLAN offload = P
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Basic stats = Y
-Extended stats = Y
-Multiprocess aware = Y
-Linux = Y
-x86-64 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 5c9d1edf5e..31296822e5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -44,7 +44,6 @@ Network Interface Controller Drivers
ipn3ke
ixgbe
kni
- liquidio
mana
memif
mlx4
diff --git a/doc/guides/nics/liquidio.rst b/doc/guides/nics/liquidio.rst
deleted file mode 100644
index f893b3b539..0000000000
--- a/doc/guides/nics/liquidio.rst
+++ /dev/null
@@ -1,169 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2017 Cavium, Inc
-
-LiquidIO VF Poll Mode Driver
-============================
-
-The LiquidIO VF PMD library (**librte_net_liquidio**) provides poll mode driver support for
-Cavium LiquidIO® II server adapter VFs. PF management and VF creation can be
-done using kernel driver.
-
-More information can be found at `Cavium Official Website
-<http://cavium.com/LiquidIO_Adapters.html>`_.
-
-Supported LiquidIO Adapters
------------------------------
-
-- LiquidIO II CN2350 210SV/225SV
-- LiquidIO II CN2350 210SVPT
-- LiquidIO II CN2360 210SV/225SV
-- LiquidIO II CN2360 210SVPT
-
-
-SR-IOV: Prerequisites and Sample Application Notes
---------------------------------------------------
-
-This section provides instructions to configure SR-IOV with Linux OS.
-
-#. Verify SR-IOV and ARI capabilities are enabled on the adapter using ``lspci``:
-
- .. code-block:: console
-
- lspci -s <slot> -vvv
-
- Example output:
-
- .. code-block:: console
-
- [...]
- Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
- [...]
- Capabilities: [178 v1] Single Root I/O Virtualization (SR-IOV)
- [...]
- Kernel driver in use: LiquidIO
-
-#. Load the kernel module:
-
- .. code-block:: console
-
- modprobe liquidio
-
-#. Bring up the PF ports:
-
- .. code-block:: console
-
- ifconfig p4p1 up
- ifconfig p4p2 up
-
-#. Change PF MTU if required:
-
- .. code-block:: console
-
- ifconfig p4p1 mtu 9000
- ifconfig p4p2 mtu 9000
-
-#. Create VF device(s):
-
- Echo number of VFs to be created into ``"sriov_numvfs"`` sysfs entry
- of the parent PF.
-
- .. code-block:: console
-
- echo 1 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
- echo 1 > /sys/bus/pci/devices/0000:03:00.1/sriov_numvfs
-
-#. Assign VF MAC address:
-
- Assign MAC address to the VF using iproute2 utility. The syntax is::
-
- ip link set <PF iface> vf <VF id> mac <macaddr>
-
- Example output:
-
- .. code-block:: console
-
- ip link set p4p1 vf 0 mac F2:A8:1B:5E:B4:66
-
-#. Assign VF(s) to VM.
-
- The VF devices may be passed through to the guest VM using qemu or
- virt-manager or virsh etc.
-
- Example qemu guest launch command:
-
- .. code-block:: console
-
- ./qemu-system-x86_64 -name lio-vm -machine accel=kvm \
- -cpu host -m 4096 -smp 4 \
- -drive file=<disk_file>,if=none,id=disk1,format=<type> \
- -device virtio-blk-pci,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
- -device vfio-pci,host=03:00.3 -device vfio-pci,host=03:08.3
-
-#. Running testpmd
-
- Refer to the document
- :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to run
- ``testpmd`` application.
-
- .. note::
-
- Use ``igb_uio`` instead of ``vfio-pci`` in VM.
-
- Example output:
-
- .. code-block:: console
-
- [...]
- EAL: PCI device 0000:03:00.3 on NUMA socket 0
- EAL: probe driver: 177d:9712 net_liovf
- EAL: using IOMMU type 1 (Type 1)
- PMD: net_liovf[03:00.3]INFO: DEVICE : CN23XX VF
- EAL: PCI device 0000:03:08.3 on NUMA socket 0
- EAL: probe driver: 177d:9712 net_liovf
- PMD: net_liovf[03:08.3]INFO: DEVICE : CN23XX VF
- Interactive-mode selected
- USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
- Configuring Port 0 (socket 0)
- PMD: net_liovf[03:00.3]INFO: Starting port 0
- Port 0: F2:A8:1B:5E:B4:66
- Configuring Port 1 (socket 0)
- PMD: net_liovf[03:08.3]INFO: Starting port 1
- Port 1: 32:76:CC:EE:56:D7
- Checking link statuses...
- Port 0 Link Up - speed 10000 Mbps - full-duplex
- Port 1 Link Up - speed 10000 Mbps - full-duplex
- Done
- testpmd>
-
-#. Enabling VF promiscuous mode
-
- One VF per PF can be marked as trusted for promiscuous mode.
-
- .. code-block:: console
-
- ip link set dev <PF iface> vf <VF id> trust on
-
-
-Limitations
------------
-
-VF MTU
-~~~~~~
-
-VF MTU is limited by PF MTU. Raise PF value before configuring VF for larger packet size.
-
-VLAN offload
-~~~~~~~~~~~~
-
-Tx VLAN insertion is not supported and consequently VLAN offload feature is
-marked partial.
-
-Ring size
-~~~~~~~~~
-
-Number of descriptors for Rx/Tx ring should be in the range 128 to 512.
-
-CRC stripping
-~~~~~~~~~~~~~
-
-LiquidIO adapters strip ethernet FCS of every packet coming to the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..8e1cdd677a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -121,13 +121,6 @@ Deprecation Notices
* net/bnx2x: Starting from DPDK 23.07, the Marvell QLogic bnx2x driver will be removed.
This decision has been made to alleviate the burden of maintaining a discontinued product.
-* net/liquidio: Remove LiquidIO ethdev driver.
- The LiquidIO product line has been substituted
- with CN9K/CN10K OCTEON product line smart NICs located in ``drivers/net/octeon_ep/``.
- DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
- because of the absence of updates in the driver.
- Due to the above reasons, the driver will be unavailable from DPDK 23.07.
-
* cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
to have another parameter ``qp_id`` to return the queue pair ID
which got error interrupt to the application,
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..4d505b607a 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -59,14 +59,7 @@ New Features
Removed Items
-------------
-.. This section should contain removed items in this release. Sample format:
-
- * Add a short 1-2 sentence description of the removed item
- in the past tense.
-
- This section is a comment. Do not overwrite or remove it.
- Also, make sure to start the actual text at the margin.
- =======================================================
+* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``
API Changes
diff --git a/drivers/net/liquidio/base/lio_23xx_reg.h b/drivers/net/liquidio/base/lio_23xx_reg.h
deleted file mode 100644
index 9f28504b53..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_reg.h
+++ /dev/null
@@ -1,165 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_23XX_REG_H_
-#define _LIO_23XX_REG_H_
-
-/* ###################### REQUEST QUEUE ######################### */
-
-/* 64 registers for Input Queues Start Addr - SLI_PKT(0..63)_INSTR_BADDR */
-#define CN23XX_SLI_PKT_INSTR_BADDR_START64 0x10010
-
-/* 64 registers for Input Doorbell - SLI_PKT(0..63)_INSTR_BAOFF_DBELL */
-#define CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START 0x10020
-
-/* 64 registers for Input Queue size - SLI_PKT(0..63)_INSTR_FIFO_RSIZE */
-#define CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START 0x10030
-
-/* 64 registers for Input Queue Instr Count - SLI_PKT_IN_DONE(0..63)_CNTS */
-#define CN23XX_SLI_PKT_IN_DONE_CNTS_START64 0x10040
-
-/* 64 registers (64-bit) - ES, RO, NS, Arbitration for Input Queue Data &
- * gather list fetches. SLI_PKT(0..63)_INPUT_CONTROL.
- */
-#define CN23XX_SLI_PKT_INPUT_CONTROL_START64 0x10000
-
-/* ------- Request Queue Macros --------- */
-
-/* Each Input Queue register is at a 16-byte Offset in BAR0 */
-#define CN23XX_IQ_OFFSET 0x20000
-
-#define CN23XX_SLI_IQ_PKT_CONTROL64(iq) \
- (CN23XX_SLI_PKT_INPUT_CONTROL_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_BASE_ADDR64(iq) \
- (CN23XX_SLI_PKT_INSTR_BADDR_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_SIZE(iq) \
- (CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_DOORBELL(iq) \
- (CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_INSTR_COUNT64(iq) \
- (CN23XX_SLI_PKT_IN_DONE_CNTS_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-/* Number of instructions to be read in one MAC read request.
- * setting to Max value(4)
- */
-#define CN23XX_PKT_INPUT_CTL_RDSIZE (3 << 25)
-#define CN23XX_PKT_INPUT_CTL_IS_64B (1 << 24)
-#define CN23XX_PKT_INPUT_CTL_RST (1 << 23)
-#define CN23XX_PKT_INPUT_CTL_QUIET (1 << 28)
-#define CN23XX_PKT_INPUT_CTL_RING_ENB (1 << 22)
-#define CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP (1 << 6)
-#define CN23XX_PKT_INPUT_CTL_USE_CSR (1 << 4)
-#define CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP (2)
-
-/* These bits[47:44] select the Physical function number within the MAC */
-#define CN23XX_PKT_INPUT_CTL_PF_NUM_POS 45
-/* These bits[43:32] select the function number within the PF */
-#define CN23XX_PKT_INPUT_CTL_VF_NUM_POS 32
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-#define CN23XX_PKT_INPUT_CTL_MASK \
- (CN23XX_PKT_INPUT_CTL_RDSIZE | \
- CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
- CN23XX_PKT_INPUT_CTL_USE_CSR)
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-#define CN23XX_PKT_INPUT_CTL_MASK \
- (CN23XX_PKT_INPUT_CTL_RDSIZE | \
- CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
- CN23XX_PKT_INPUT_CTL_USE_CSR | \
- CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP)
-#endif
-
-/* ############################ OUTPUT QUEUE ######################### */
-
-/* 64 registers for Output queue control - SLI_PKT(0..63)_OUTPUT_CONTROL */
-#define CN23XX_SLI_PKT_OUTPUT_CONTROL_START 0x10050
-
-/* 64 registers for Output queue buffer and info size
- * SLI_PKT(0..63)_OUT_SIZE
- */
-#define CN23XX_SLI_PKT_OUT_SIZE 0x10060
-
-/* 64 registers for Output Queue Start Addr - SLI_PKT(0..63)_SLIST_BADDR */
-#define CN23XX_SLI_SLIST_BADDR_START64 0x10070
-
-/* 64 registers for Output Queue Packet Credits
- * SLI_PKT(0..63)_SLIST_BAOFF_DBELL
- */
-#define CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START 0x10080
-
-/* 64 registers for Output Queue size - SLI_PKT(0..63)_SLIST_FIFO_RSIZE */
-#define CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START 0x10090
-
-/* 64 registers for Output Queue Packet Count - SLI_PKT(0..63)_CNTS */
-#define CN23XX_SLI_PKT_CNTS_START 0x100B0
-
-/* Each Output Queue register is at a 16-byte Offset in BAR0 */
-#define CN23XX_OQ_OFFSET 0x20000
-
-/* ------- Output Queue Macros --------- */
-
-#define CN23XX_SLI_OQ_PKT_CONTROL(oq) \
- (CN23XX_SLI_PKT_OUTPUT_CONTROL_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_BASE_ADDR64(oq) \
- (CN23XX_SLI_SLIST_BADDR_START64 + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_SIZE(oq) \
- (CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq) \
- (CN23XX_SLI_PKT_OUT_SIZE + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_PKTS_SENT(oq) \
- (CN23XX_SLI_PKT_CNTS_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_PKTS_CREDIT(oq) \
- (CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START + ((oq) * CN23XX_OQ_OFFSET))
-
-/* ------------------ Masks ---------------- */
-#define CN23XX_PKT_OUTPUT_CTL_IPTR (1 << 11)
-#define CN23XX_PKT_OUTPUT_CTL_ES (1 << 9)
-#define CN23XX_PKT_OUTPUT_CTL_NSR (1 << 8)
-#define CN23XX_PKT_OUTPUT_CTL_ROR (1 << 7)
-#define CN23XX_PKT_OUTPUT_CTL_DPTR (1 << 6)
-#define CN23XX_PKT_OUTPUT_CTL_BMODE (1 << 5)
-#define CN23XX_PKT_OUTPUT_CTL_ES_P (1 << 3)
-#define CN23XX_PKT_OUTPUT_CTL_NSR_P (1 << 2)
-#define CN23XX_PKT_OUTPUT_CTL_ROR_P (1 << 1)
-#define CN23XX_PKT_OUTPUT_CTL_RING_ENB (1 << 0)
-
-/* Rings per Virtual Function [RO] */
-#define CN23XX_PKT_INPUT_CTL_RPVF_MASK 0x3F
-#define CN23XX_PKT_INPUT_CTL_RPVF_POS 48
-
-/* These bits[47:44][RO] give the Physical function
- * number info within the MAC
- */
-#define CN23XX_PKT_INPUT_CTL_PF_NUM_MASK 0x7
-
-/* These bits[43:32][RO] give the virtual function
- * number info within the PF
- */
-#define CN23XX_PKT_INPUT_CTL_VF_NUM_MASK 0x1FFF
-
-/* ######################### Mailbox Reg Macros ######################## */
-#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START 0x10200
-#define CN23XX_VF_SLI_PKT_MBOX_INT_START 0x10210
-
-#define CN23XX_SLI_MBOX_OFFSET 0x20000
-#define CN23XX_SLI_MBOX_SIG_IDX_OFFSET 0x8
-
-#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG(q, idx) \
- (CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START + \
- ((q) * CN23XX_SLI_MBOX_OFFSET + \
- (idx) * CN23XX_SLI_MBOX_SIG_IDX_OFFSET))
-
-#define CN23XX_VF_SLI_PKT_MBOX_INT(q) \
- (CN23XX_VF_SLI_PKT_MBOX_INT_START + ((q) * CN23XX_SLI_MBOX_OFFSET))
-
-#endif /* _LIO_23XX_REG_H_ */
diff --git a/drivers/net/liquidio/base/lio_23xx_vf.c b/drivers/net/liquidio/base/lio_23xx_vf.c
deleted file mode 100644
index c6b8310b71..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_vf.c
+++ /dev/null
@@ -1,513 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <string.h>
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "lio_logs.h"
-#include "lio_23xx_vf.h"
-#include "lio_23xx_reg.h"
-#include "lio_mbox.h"
-
-static int
-cn23xx_vf_reset_io_queues(struct lio_device *lio_dev, uint32_t num_queues)
-{
- uint32_t loop = CN23XX_VF_BUSY_READING_REG_LOOP_COUNT;
- uint64_t d64, q_no;
- int ret_val = 0;
-
- PMD_INIT_FUNC_TRACE();
-
- for (q_no = 0; q_no < num_queues; q_no++) {
- /* set RST bit to 1. This bit applies to both IQ and OQ */
- d64 = lio_read_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- d64 = d64 | CN23XX_PKT_INPUT_CTL_RST;
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- d64);
- }
-
- /* wait until the RST bit is clear or the RST and QUIET bits are set */
- for (q_no = 0; q_no < num_queues; q_no++) {
- volatile uint64_t reg_val;
-
- reg_val = lio_read_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- while ((reg_val & CN23XX_PKT_INPUT_CTL_RST) &&
- !(reg_val & CN23XX_PKT_INPUT_CTL_QUIET) &&
- loop) {
- reg_val = lio_read_csr64(
- lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- loop = loop - 1;
- }
-
- if (loop == 0) {
- lio_dev_err(lio_dev,
- "clearing the reset reg failed or setting the quiet reg failed for qno: %lu\n",
- (unsigned long)q_no);
- return -1;
- }
-
- reg_val = reg_val & ~CN23XX_PKT_INPUT_CTL_RST;
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- reg_val);
-
- reg_val = lio_read_csr64(
- lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- if (reg_val & CN23XX_PKT_INPUT_CTL_RST) {
- lio_dev_err(lio_dev,
- "clearing the reset failed for qno: %lu\n",
- (unsigned long)q_no);
- ret_val = -1;
- }
- }
-
- return ret_val;
-}
-
-static int
-cn23xx_vf_setup_global_input_regs(struct lio_device *lio_dev)
-{
- uint64_t q_no;
- uint64_t d64;
-
- PMD_INIT_FUNC_TRACE();
-
- if (cn23xx_vf_reset_io_queues(lio_dev,
- lio_dev->sriov_info.rings_per_vf))
- return -1;
-
- for (q_no = 0; q_no < (lio_dev->sriov_info.rings_per_vf); q_no++) {
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_DOORBELL(q_no),
- 0xFFFFFFFF);
-
- d64 = lio_read_csr64(lio_dev,
- CN23XX_SLI_IQ_INSTR_COUNT64(q_no));
-
- d64 &= 0xEFFFFFFFFFFFFFFFL;
-
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_INSTR_COUNT64(q_no),
- d64);
-
- /* Select ES, RO, NS, RDSIZE,DPTR Fomat#0 for
- * the Input Queues
- */
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- CN23XX_PKT_INPUT_CTL_MASK);
- }
-
- return 0;
-}
-
-static void
-cn23xx_vf_setup_global_output_regs(struct lio_device *lio_dev)
-{
- uint32_t reg_val;
- uint32_t q_no;
-
- PMD_INIT_FUNC_TRACE();
-
- for (q_no = 0; q_no < lio_dev->sriov_info.rings_per_vf; q_no++) {
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_CREDIT(q_no),
- 0xFFFFFFFF);
-
- reg_val =
- lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no));
-
- reg_val &= 0xEFFFFFFFFFFFFFFFL;
-
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no), reg_val);
-
- reg_val =
- lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
-
- /* set IPTR & DPTR */
- reg_val |=
- (CN23XX_PKT_OUTPUT_CTL_IPTR | CN23XX_PKT_OUTPUT_CTL_DPTR);
-
- /* reset BMODE */
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_BMODE);
-
- /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
- * for Output Queue Scatter List
- * reset ROR_P, NSR_P
- */
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR_P);
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR_P);
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ES_P);
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES_P);
-#endif
- /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
- * for Output Queue Data
- * reset ROR, NSR
- */
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR);
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR);
- /* set the ES bit */
- reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES);
-
- /* write all the selected settings */
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no),
- reg_val);
- }
-}
-
-static int
-cn23xx_vf_setup_device_regs(struct lio_device *lio_dev)
-{
- PMD_INIT_FUNC_TRACE();
-
- if (cn23xx_vf_setup_global_input_regs(lio_dev))
- return -1;
-
- cn23xx_vf_setup_global_output_regs(lio_dev);
-
- return 0;
-}
-
-static void
-cn23xx_vf_setup_iq_regs(struct lio_device *lio_dev, uint32_t iq_no)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
- uint64_t pkt_in_done = 0;
-
- PMD_INIT_FUNC_TRACE();
-
- /* Write the start of the input queue's ring and its size */
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_BASE_ADDR64(iq_no),
- iq->base_addr_dma);
- lio_write_csr(lio_dev, CN23XX_SLI_IQ_SIZE(iq_no), iq->nb_desc);
-
- /* Remember the doorbell & instruction count register addr
- * for this queue
- */
- iq->doorbell_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_IQ_DOORBELL(iq_no);
- iq->inst_cnt_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_IQ_INSTR_COUNT64(iq_no);
- lio_dev_dbg(lio_dev, "InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n",
- iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
-
- /* Store the current instruction counter (used in flush_iq
- * calculation)
- */
- pkt_in_done = rte_read64(iq->inst_cnt_reg);
-
- /* Clear the count by writing back what we read, but don't
- * enable data traffic here
- */
- rte_write64(pkt_in_done, iq->inst_cnt_reg);
-}
-
-static void
-cn23xx_vf_setup_oq_regs(struct lio_device *lio_dev, uint32_t oq_no)
-{
- struct lio_droq *droq = lio_dev->droq[oq_no];
-
- PMD_INIT_FUNC_TRACE();
-
- lio_write_csr64(lio_dev, CN23XX_SLI_OQ_BASE_ADDR64(oq_no),
- droq->desc_ring_dma);
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_SIZE(oq_no), droq->nb_desc);
-
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq_no),
- (droq->buffer_size | (OCTEON_RH_SIZE << 16)));
-
- /* Get the mapped address of the pkt_sent and pkts_credit regs */
- droq->pkts_sent_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_OQ_PKTS_SENT(oq_no);
- droq->pkts_credit_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_OQ_PKTS_CREDIT(oq_no);
-}
-
-static void
-cn23xx_vf_free_mbox(struct lio_device *lio_dev)
-{
- PMD_INIT_FUNC_TRACE();
-
- rte_free(lio_dev->mbox[0]);
- lio_dev->mbox[0] = NULL;
-
- rte_free(lio_dev->mbox);
- lio_dev->mbox = NULL;
-}
-
-static int
-cn23xx_vf_setup_mbox(struct lio_device *lio_dev)
-{
- struct lio_mbox *mbox;
-
- PMD_INIT_FUNC_TRACE();
-
- if (lio_dev->mbox == NULL) {
- lio_dev->mbox = rte_zmalloc(NULL, sizeof(void *), 0);
- if (lio_dev->mbox == NULL)
- return -ENOMEM;
- }
-
- mbox = rte_zmalloc(NULL, sizeof(struct lio_mbox), 0);
- if (mbox == NULL) {
- rte_free(lio_dev->mbox);
- lio_dev->mbox = NULL;
- return -ENOMEM;
- }
-
- rte_spinlock_init(&mbox->lock);
-
- mbox->lio_dev = lio_dev;
-
- mbox->q_no = 0;
-
- mbox->state = LIO_MBOX_STATE_IDLE;
-
- /* VF mbox interrupt reg */
- mbox->mbox_int_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_VF_SLI_PKT_MBOX_INT(0);
- /* VF reads from SIG0 reg */
- mbox->mbox_read_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 0);
- /* VF writes into SIG1 reg */
- mbox->mbox_write_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 1);
-
- lio_dev->mbox[0] = mbox;
-
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-
- return 0;
-}
-
-static int
-cn23xx_vf_enable_io_queues(struct lio_device *lio_dev)
-{
- uint32_t q_no;
-
- PMD_INIT_FUNC_TRACE();
-
- for (q_no = 0; q_no < lio_dev->num_iqs; q_no++) {
- uint64_t reg_val;
-
- /* set the corresponding IQ IS_64B bit */
- if (lio_dev->io_qmask.iq64B & (1ULL << q_no)) {
- reg_val = lio_read_csr64(
- lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- reg_val = reg_val | CN23XX_PKT_INPUT_CTL_IS_64B;
- lio_write_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- reg_val);
- }
-
- /* set the corresponding IQ ENB bit */
- if (lio_dev->io_qmask.iq & (1ULL << q_no)) {
- reg_val = lio_read_csr64(
- lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- reg_val = reg_val | CN23XX_PKT_INPUT_CTL_RING_ENB;
- lio_write_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- reg_val);
- }
- }
- for (q_no = 0; q_no < lio_dev->num_oqs; q_no++) {
- uint32_t reg_val;
-
- /* set the corresponding OQ ENB bit */
- if (lio_dev->io_qmask.oq & (1ULL << q_no)) {
- reg_val = lio_read_csr(
- lio_dev,
- CN23XX_SLI_OQ_PKT_CONTROL(q_no));
- reg_val = reg_val | CN23XX_PKT_OUTPUT_CTL_RING_ENB;
- lio_write_csr(lio_dev,
- CN23XX_SLI_OQ_PKT_CONTROL(q_no),
- reg_val);
- }
- }
-
- return 0;
-}
-
-static void
-cn23xx_vf_disable_io_queues(struct lio_device *lio_dev)
-{
- uint32_t num_queues;
-
- PMD_INIT_FUNC_TRACE();
-
- /* per HRM, rings can only be disabled via reset operation,
- * NOT via SLI_PKT()_INPUT/OUTPUT_CONTROL[ENB]
- */
- num_queues = lio_dev->num_iqs;
- if (num_queues < lio_dev->num_oqs)
- num_queues = lio_dev->num_oqs;
-
- cn23xx_vf_reset_io_queues(lio_dev, num_queues);
-}
-
-void
-cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev)
-{
- struct lio_mbox_cmd mbox_cmd;
-
- memset(&mbox_cmd, 0, sizeof(struct lio_mbox_cmd));
- mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
- mbox_cmd.msg.s.resp_needed = 0;
- mbox_cmd.msg.s.cmd = LIO_VF_FLR_REQUEST;
- mbox_cmd.msg.s.len = 1;
- mbox_cmd.q_no = 0;
- mbox_cmd.recv_len = 0;
- mbox_cmd.recv_status = 0;
- mbox_cmd.fn = NULL;
- mbox_cmd.fn_arg = 0;
-
- lio_mbox_write(lio_dev, &mbox_cmd);
-}
-
-static void
-cn23xx_pfvf_hs_callback(struct lio_device *lio_dev,
- struct lio_mbox_cmd *cmd, void *arg)
-{
- uint32_t major = 0;
-
- PMD_INIT_FUNC_TRACE();
-
- rte_memcpy((uint8_t *)&lio_dev->pfvf_hsword, cmd->msg.s.params, 6);
- if (cmd->recv_len > 1) {
- struct lio_version *lio_ver = (struct lio_version *)cmd->data;
-
- major = lio_ver->major;
- major = major << 16;
- }
-
- rte_atomic64_set((rte_atomic64_t *)arg, major | 1);
-}
-
-int
-cn23xx_pfvf_handshake(struct lio_device *lio_dev)
-{
- struct lio_mbox_cmd mbox_cmd;
- struct lio_version *lio_ver = (struct lio_version *)&mbox_cmd.data[0];
- uint32_t q_no, count = 0;
- rte_atomic64_t status;
- uint32_t pfmajor;
- uint32_t vfmajor;
- uint32_t ret;
-
- PMD_INIT_FUNC_TRACE();
-
- /* Sending VF_ACTIVE indication to the PF driver */
- lio_dev_dbg(lio_dev, "requesting info from PF\n");
-
- mbox_cmd.msg.mbox_msg64 = 0;
- mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
- mbox_cmd.msg.s.resp_needed = 1;
- mbox_cmd.msg.s.cmd = LIO_VF_ACTIVE;
- mbox_cmd.msg.s.len = 2;
- mbox_cmd.data[0] = 0;
- lio_ver->major = LIO_BASE_MAJOR_VERSION;
- lio_ver->minor = LIO_BASE_MINOR_VERSION;
- lio_ver->micro = LIO_BASE_MICRO_VERSION;
- mbox_cmd.q_no = 0;
- mbox_cmd.recv_len = 0;
- mbox_cmd.recv_status = 0;
- mbox_cmd.fn = (lio_mbox_callback)cn23xx_pfvf_hs_callback;
- mbox_cmd.fn_arg = (void *)&status;
-
- if (lio_mbox_write(lio_dev, &mbox_cmd)) {
- lio_dev_err(lio_dev, "Write to mailbox failed\n");
- return -1;
- }
-
- rte_atomic64_set(&status, 0);
-
- do {
- rte_delay_ms(1);
- } while ((rte_atomic64_read(&status) == 0) && (count++ < 10000));
-
- ret = rte_atomic64_read(&status);
- if (ret == 0) {
- lio_dev_err(lio_dev, "cn23xx_pfvf_handshake timeout\n");
- return -1;
- }
-
- for (q_no = 0; q_no < lio_dev->num_iqs; q_no++)
- lio_dev->instr_queue[q_no]->txpciq.s.pkind =
- lio_dev->pfvf_hsword.pkind;
-
- vfmajor = LIO_BASE_MAJOR_VERSION;
- pfmajor = ret >> 16;
- if (pfmajor != vfmajor) {
- lio_dev_err(lio_dev,
- "VF LiquidIO driver (major version %d) is not compatible with LiquidIO PF driver (major version %d)\n",
- vfmajor, pfmajor);
- ret = -EPERM;
- } else {
- lio_dev_dbg(lio_dev,
- "VF LiquidIO driver (major version %d), LiquidIO PF driver (major version %d)\n",
- vfmajor, pfmajor);
- ret = 0;
- }
-
- lio_dev_dbg(lio_dev, "got data from PF pkind is %d\n",
- lio_dev->pfvf_hsword.pkind);
-
- return ret;
-}
-
-void
-cn23xx_vf_handle_mbox(struct lio_device *lio_dev)
-{
- uint64_t mbox_int_val;
-
- /* read and clear by writing 1 */
- mbox_int_val = rte_read64(lio_dev->mbox[0]->mbox_int_reg);
- rte_write64(mbox_int_val, lio_dev->mbox[0]->mbox_int_reg);
- if (lio_mbox_read(lio_dev->mbox[0]))
- lio_mbox_process_message(lio_dev->mbox[0]);
-}
-
-int
-cn23xx_vf_setup_device(struct lio_device *lio_dev)
-{
- uint64_t reg_val;
-
- PMD_INIT_FUNC_TRACE();
-
- /* INPUT_CONTROL[RPVF] gives the VF IOq count */
- reg_val = lio_read_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(0));
-
- lio_dev->pf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_PF_NUM_POS) &
- CN23XX_PKT_INPUT_CTL_PF_NUM_MASK;
- lio_dev->vf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_VF_NUM_POS) &
- CN23XX_PKT_INPUT_CTL_VF_NUM_MASK;
-
- reg_val = reg_val >> CN23XX_PKT_INPUT_CTL_RPVF_POS;
-
- lio_dev->sriov_info.rings_per_vf =
- reg_val & CN23XX_PKT_INPUT_CTL_RPVF_MASK;
-
- lio_dev->default_config = lio_get_conf(lio_dev);
- if (lio_dev->default_config == NULL)
- return -1;
-
- lio_dev->fn_list.setup_iq_regs = cn23xx_vf_setup_iq_regs;
- lio_dev->fn_list.setup_oq_regs = cn23xx_vf_setup_oq_regs;
- lio_dev->fn_list.setup_mbox = cn23xx_vf_setup_mbox;
- lio_dev->fn_list.free_mbox = cn23xx_vf_free_mbox;
-
- lio_dev->fn_list.setup_device_regs = cn23xx_vf_setup_device_regs;
-
- lio_dev->fn_list.enable_io_queues = cn23xx_vf_enable_io_queues;
- lio_dev->fn_list.disable_io_queues = cn23xx_vf_disable_io_queues;
-
- return 0;
-}
-
diff --git a/drivers/net/liquidio/base/lio_23xx_vf.h b/drivers/net/liquidio/base/lio_23xx_vf.h
deleted file mode 100644
index 8e5362db15..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_vf.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_23XX_VF_H_
-#define _LIO_23XX_VF_H_
-
-#include <stdio.h>
-
-#include "lio_struct.h"
-
-static const struct lio_config default_cn23xx_conf = {
- .card_type = LIO_23XX,
- .card_name = LIO_23XX_NAME,
- /** IQ attributes */
- .iq = {
- .max_iqs = CN23XX_CFG_IO_QUEUES,
- .pending_list_size =
- (CN23XX_MAX_IQ_DESCRIPTORS * CN23XX_CFG_IO_QUEUES),
- .instr_type = OCTEON_64BYTE_INSTR,
- },
-
- /** OQ attributes */
- .oq = {
- .max_oqs = CN23XX_CFG_IO_QUEUES,
- .info_ptr = OCTEON_OQ_INFOPTR_MODE,
- .refill_threshold = CN23XX_OQ_REFIL_THRESHOLD,
- },
-
- .num_nic_ports = CN23XX_DEFAULT_NUM_PORTS,
- .num_def_rx_descs = CN23XX_MAX_OQ_DESCRIPTORS,
- .num_def_tx_descs = CN23XX_MAX_IQ_DESCRIPTORS,
- .def_rx_buf_size = CN23XX_OQ_BUF_SIZE,
-};
-
-static inline const struct lio_config *
-lio_get_conf(struct lio_device *lio_dev)
-{
- const struct lio_config *default_lio_conf = NULL;
-
- /* check the LIO Device model & return the corresponding lio
- * configuration
- */
- default_lio_conf = &default_cn23xx_conf;
-
- if (default_lio_conf == NULL) {
- lio_dev_err(lio_dev, "Configuration verification failed\n");
- return NULL;
- }
-
- return default_lio_conf;
-}
-
-#define CN23XX_VF_BUSY_READING_REG_LOOP_COUNT 100000
-
-void cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev);
-
-int cn23xx_pfvf_handshake(struct lio_device *lio_dev);
-
-int cn23xx_vf_setup_device(struct lio_device *lio_dev);
-
-void cn23xx_vf_handle_mbox(struct lio_device *lio_dev);
-#endif /* _LIO_23XX_VF_H_ */
diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
deleted file mode 100644
index 5e119c1241..0000000000
--- a/drivers/net/liquidio/base/lio_hw_defs.h
+++ /dev/null
@@ -1,239 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_HW_DEFS_H_
-#define _LIO_HW_DEFS_H_
-
-#include <rte_io.h>
-
-#ifndef PCI_VENDOR_ID_CAVIUM
-#define PCI_VENDOR_ID_CAVIUM 0x177D
-#endif
-
-#define LIO_CN23XX_VF_VID 0x9712
-
-/* CN23xx subsystem device ids */
-#define PCI_SUBSYS_DEV_ID_CN2350_210 0x0004
-#define PCI_SUBSYS_DEV_ID_CN2360_210 0x0005
-#define PCI_SUBSYS_DEV_ID_CN2360_225 0x0006
-#define PCI_SUBSYS_DEV_ID_CN2350_225 0x0007
-#define PCI_SUBSYS_DEV_ID_CN2350_210SVPN3 0x0008
-#define PCI_SUBSYS_DEV_ID_CN2360_210SVPN3 0x0009
-#define PCI_SUBSYS_DEV_ID_CN2350_210SVPT 0x000a
-#define PCI_SUBSYS_DEV_ID_CN2360_210SVPT 0x000b
-
-/* --------------------------CONFIG VALUES------------------------ */
-
-/* CN23xx IQ configuration macros */
-#define CN23XX_MAX_RINGS_PER_PF 64
-#define CN23XX_MAX_RINGS_PER_VF 8
-
-#define CN23XX_MAX_INPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
-#define CN23XX_MAX_IQ_DESCRIPTORS 512
-#define CN23XX_MIN_IQ_DESCRIPTORS 128
-
-#define CN23XX_MAX_OUTPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
-#define CN23XX_MAX_OQ_DESCRIPTORS 512
-#define CN23XX_MIN_OQ_DESCRIPTORS 128
-#define CN23XX_OQ_BUF_SIZE 1536
-
-#define CN23XX_OQ_REFIL_THRESHOLD 16
-
-#define CN23XX_DEFAULT_NUM_PORTS 1
-
-#define CN23XX_CFG_IO_QUEUES CN23XX_MAX_RINGS_PER_PF
-
-/* common OCTEON configuration macros */
-#define OCTEON_64BYTE_INSTR 64
-#define OCTEON_OQ_INFOPTR_MODE 1
-
-/* Max IOQs per LIO Link */
-#define LIO_MAX_IOQS_PER_IF 64
-
-/* Wait time in milliseconds for FLR */
-#define LIO_PCI_FLR_WAIT 100
-
-enum lio_card_type {
- LIO_23XX /* 23xx */
-};
-
-#define LIO_23XX_NAME "23xx"
-
-#define LIO_DEV_RUNNING 0xc
-
-#define LIO_OQ_REFILL_THRESHOLD_CFG(cfg) \
- ((cfg)->default_config->oq.refill_threshold)
-#define LIO_NUM_DEF_TX_DESCS_CFG(cfg) \
- ((cfg)->default_config->num_def_tx_descs)
-
-#define LIO_IQ_INSTR_TYPE(cfg) ((cfg)->default_config->iq.instr_type)
-
-/* The following config values are fixed and should not be modified. */
-
-/* Maximum number of Instruction queues */
-#define LIO_MAX_INSTR_QUEUES(lio_dev) CN23XX_MAX_RINGS_PER_VF
-
-#define LIO_MAX_POSSIBLE_INSTR_QUEUES CN23XX_MAX_INPUT_QUEUES
-#define LIO_MAX_POSSIBLE_OUTPUT_QUEUES CN23XX_MAX_OUTPUT_QUEUES
-
-#define LIO_DEVICE_NAME_LEN 32
-#define LIO_BASE_MAJOR_VERSION 1
-#define LIO_BASE_MINOR_VERSION 5
-#define LIO_BASE_MICRO_VERSION 1
-
-#define LIO_FW_VERSION_LENGTH 32
-
-#define LIO_Q_RECONF_MIN_VERSION "1.7.0"
-#define LIO_VF_TRUST_MIN_VERSION "1.7.1"
-
-/** Tag types used by Octeon cores in its work. */
-enum octeon_tag_type {
- OCTEON_ORDERED_TAG = 0,
- OCTEON_ATOMIC_TAG = 1,
-};
-
-/* pre-defined host->NIC tag values */
-#define LIO_CONTROL (0x11111110)
-#define LIO_DATA(i) (0x11111111 + (i))
-
-/* used for NIC operations */
-#define LIO_OPCODE 1
-
-/* Subcodes are used by host driver/apps to identify the sub-operation
- * for the core. They only need to by unique for a given subsystem.
- */
-#define LIO_OPCODE_SUBCODE(op, sub) \
- ((((op) & 0x0f) << 8) | ((sub) & 0x7f))
-
-/** LIO_OPCODE subcodes */
-/* This subcode is sent by core PCI driver to indicate cores are ready. */
-#define LIO_OPCODE_NW_DATA 0x02 /* network packet data */
-#define LIO_OPCODE_CMD 0x03
-#define LIO_OPCODE_INFO 0x04
-#define LIO_OPCODE_PORT_STATS 0x05
-#define LIO_OPCODE_IF_CFG 0x09
-
-#define LIO_MIN_RX_BUF_SIZE 64
-#define LIO_MAX_RX_PKTLEN (64 * 1024)
-
-/* NIC Command types */
-#define LIO_CMD_CHANGE_MTU 0x1
-#define LIO_CMD_CHANGE_DEVFLAGS 0x3
-#define LIO_CMD_RX_CTL 0x4
-#define LIO_CMD_CLEAR_STATS 0x6
-#define LIO_CMD_SET_RSS 0xD
-#define LIO_CMD_TNL_RX_CSUM_CTL 0x10
-#define LIO_CMD_TNL_TX_CSUM_CTL 0x11
-#define LIO_CMD_ADD_VLAN_FILTER 0x17
-#define LIO_CMD_DEL_VLAN_FILTER 0x18
-#define LIO_CMD_VXLAN_PORT_CONFIG 0x19
-#define LIO_CMD_QUEUE_COUNT_CTL 0x1f
-
-#define LIO_CMD_VXLAN_PORT_ADD 0x0
-#define LIO_CMD_VXLAN_PORT_DEL 0x1
-#define LIO_CMD_RXCSUM_ENABLE 0x0
-#define LIO_CMD_TXCSUM_ENABLE 0x0
-
-/* RX(packets coming from wire) Checksum verification flags */
-/* TCP/UDP csum */
-#define LIO_L4_CSUM_VERIFIED 0x1
-#define LIO_IP_CSUM_VERIFIED 0x2
-
-/* RSS */
-#define LIO_RSS_PARAM_DISABLE_RSS 0x10
-#define LIO_RSS_PARAM_HASH_KEY_UNCHANGED 0x08
-#define LIO_RSS_PARAM_ITABLE_UNCHANGED 0x04
-#define LIO_RSS_PARAM_HASH_INFO_UNCHANGED 0x02
-
-#define LIO_RSS_HASH_IPV4 0x100
-#define LIO_RSS_HASH_TCP_IPV4 0x200
-#define LIO_RSS_HASH_IPV6 0x400
-#define LIO_RSS_HASH_TCP_IPV6 0x1000
-#define LIO_RSS_HASH_IPV6_EX 0x800
-#define LIO_RSS_HASH_TCP_IPV6_EX 0x2000
-
-#define LIO_RSS_OFFLOAD_ALL ( \
- LIO_RSS_HASH_IPV4 | \
- LIO_RSS_HASH_TCP_IPV4 | \
- LIO_RSS_HASH_IPV6 | \
- LIO_RSS_HASH_TCP_IPV6 | \
- LIO_RSS_HASH_IPV6_EX | \
- LIO_RSS_HASH_TCP_IPV6_EX)
-
-#define LIO_RSS_MAX_TABLE_SZ 128
-#define LIO_RSS_MAX_KEY_SZ 40
-#define LIO_RSS_PARAM_SIZE 16
-
-/* Interface flags communicated between host driver and core app. */
-enum lio_ifflags {
- LIO_IFFLAG_PROMISC = 0x01,
- LIO_IFFLAG_ALLMULTI = 0x02,
- LIO_IFFLAG_UNICAST = 0x10
-};
-
-/* Routines for reading and writing CSRs */
-#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
-#define lio_write_csr(lio_dev, reg_off, value) \
- do { \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- typeof(value) _value = value; \
- PMD_REGS_LOG(_dev, \
- "Write32: Reg: 0x%08lx Val: 0x%08lx\n", \
- (unsigned long)_reg_off, \
- (unsigned long)_value); \
- rte_write32(_value, _dev->hw_addr + _reg_off); \
- } while (0)
-
-#define lio_write_csr64(lio_dev, reg_off, val64) \
- do { \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- typeof(val64) _val64 = val64; \
- PMD_REGS_LOG( \
- _dev, \
- "Write64: Reg: 0x%08lx Val: 0x%016llx\n", \
- (unsigned long)_reg_off, \
- (unsigned long long)_val64); \
- rte_write64(_val64, _dev->hw_addr + _reg_off); \
- } while (0)
-
-#define lio_read_csr(lio_dev, reg_off) \
- ({ \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- uint32_t val = rte_read32(_dev->hw_addr + _reg_off); \
- PMD_REGS_LOG(_dev, \
- "Read32: Reg: 0x%08lx Val: 0x%08lx\n", \
- (unsigned long)_reg_off, \
- (unsigned long)val); \
- val; \
- })
-
-#define lio_read_csr64(lio_dev, reg_off) \
- ({ \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- uint64_t val64 = rte_read64(_dev->hw_addr + _reg_off); \
- PMD_REGS_LOG( \
- _dev, \
- "Read64: Reg: 0x%08lx Val: 0x%016llx\n", \
- (unsigned long)_reg_off, \
- (unsigned long long)val64); \
- val64; \
- })
-#else
-#define lio_write_csr(lio_dev, reg_off, value) \
- rte_write32(value, (lio_dev)->hw_addr + (reg_off))
-
-#define lio_write_csr64(lio_dev, reg_off, val64) \
- rte_write64(val64, (lio_dev)->hw_addr + (reg_off))
-
-#define lio_read_csr(lio_dev, reg_off) \
- rte_read32((lio_dev)->hw_addr + (reg_off))
-
-#define lio_read_csr64(lio_dev, reg_off) \
- rte_read64((lio_dev)->hw_addr + (reg_off))
-#endif
-#endif /* _LIO_HW_DEFS_H_ */
diff --git a/drivers/net/liquidio/base/lio_mbox.c b/drivers/net/liquidio/base/lio_mbox.c
deleted file mode 100644
index 2ac2b1b334..0000000000
--- a/drivers/net/liquidio/base/lio_mbox.c
+++ /dev/null
@@ -1,246 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-
-#include "lio_logs.h"
-#include "lio_struct.h"
-#include "lio_mbox.h"
-
-/**
- * lio_mbox_read:
- * @mbox: Pointer mailbox
- *
- * Reads the 8-bytes of data from the mbox register
- * Writes back the acknowledgment indicating completion of read
- */
-int
-lio_mbox_read(struct lio_mbox *mbox)
-{
- union lio_mbox_message msg;
- int ret = 0;
-
- msg.mbox_msg64 = rte_read64(mbox->mbox_read_reg);
-
- if ((msg.mbox_msg64 == LIO_PFVFACK) || (msg.mbox_msg64 == LIO_PFVFSIG))
- return 0;
-
- if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
- mbox->mbox_req.data[mbox->mbox_req.recv_len - 1] =
- msg.mbox_msg64;
- mbox->mbox_req.recv_len++;
- } else {
- if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
- mbox->mbox_resp.data[mbox->mbox_resp.recv_len - 1] =
- msg.mbox_msg64;
- mbox->mbox_resp.recv_len++;
- } else {
- if ((mbox->state & LIO_MBOX_STATE_IDLE) &&
- (msg.s.type == LIO_MBOX_REQUEST)) {
- mbox->state &= ~LIO_MBOX_STATE_IDLE;
- mbox->state |= LIO_MBOX_STATE_REQ_RECEIVING;
- mbox->mbox_req.msg.mbox_msg64 = msg.mbox_msg64;
- mbox->mbox_req.q_no = mbox->q_no;
- mbox->mbox_req.recv_len = 1;
- } else {
- if ((mbox->state &
- LIO_MBOX_STATE_RES_PENDING) &&
- (msg.s.type == LIO_MBOX_RESPONSE)) {
- mbox->state &=
- ~LIO_MBOX_STATE_RES_PENDING;
- mbox->state |=
- LIO_MBOX_STATE_RES_RECEIVING;
- mbox->mbox_resp.msg.mbox_msg64 =
- msg.mbox_msg64;
- mbox->mbox_resp.q_no = mbox->q_no;
- mbox->mbox_resp.recv_len = 1;
- } else {
- rte_write64(LIO_PFVFERR,
- mbox->mbox_read_reg);
- mbox->state |= LIO_MBOX_STATE_ERROR;
- return -1;
- }
- }
- }
- }
-
- if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
- if (mbox->mbox_req.recv_len < msg.s.len) {
- ret = 0;
- } else {
- mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVING;
- mbox->state |= LIO_MBOX_STATE_REQ_RECEIVED;
- ret = 1;
- }
- } else {
- if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
- if (mbox->mbox_resp.recv_len < msg.s.len) {
- ret = 0;
- } else {
- mbox->state &= ~LIO_MBOX_STATE_RES_RECEIVING;
- mbox->state |= LIO_MBOX_STATE_RES_RECEIVED;
- ret = 1;
- }
- } else {
- RTE_ASSERT(0);
- }
- }
-
- rte_write64(LIO_PFVFACK, mbox->mbox_read_reg);
-
- return ret;
-}
-
-/**
- * lio_mbox_write:
- * @lio_dev: Pointer lio device
- * @mbox_cmd: Cmd to send to mailbox.
- *
- * Populates the queue specific mbox structure
- * with cmd information.
- * Write the cmd to mbox register
- */
-int
-lio_mbox_write(struct lio_device *lio_dev,
- struct lio_mbox_cmd *mbox_cmd)
-{
- struct lio_mbox *mbox = lio_dev->mbox[mbox_cmd->q_no];
- uint32_t count, i, ret = LIO_MBOX_STATUS_SUCCESS;
-
- if ((mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) &&
- !(mbox->state & LIO_MBOX_STATE_REQ_RECEIVED))
- return LIO_MBOX_STATUS_FAILED;
-
- if ((mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) &&
- !(mbox->state & LIO_MBOX_STATE_IDLE))
- return LIO_MBOX_STATUS_BUSY;
-
- if (mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) {
- rte_memcpy(&mbox->mbox_resp, mbox_cmd,
- sizeof(struct lio_mbox_cmd));
- mbox->state = LIO_MBOX_STATE_RES_PENDING;
- }
-
- count = 0;
-
- while (rte_read64(mbox->mbox_write_reg) != LIO_PFVFSIG) {
- rte_delay_ms(1);
- if (count++ == 1000) {
- ret = LIO_MBOX_STATUS_FAILED;
- break;
- }
- }
-
- if (ret == LIO_MBOX_STATUS_SUCCESS) {
- rte_write64(mbox_cmd->msg.mbox_msg64, mbox->mbox_write_reg);
- for (i = 0; i < (uint32_t)(mbox_cmd->msg.s.len - 1); i++) {
- count = 0;
- while (rte_read64(mbox->mbox_write_reg) !=
- LIO_PFVFACK) {
- rte_delay_ms(1);
- if (count++ == 1000) {
- ret = LIO_MBOX_STATUS_FAILED;
- break;
- }
- }
- rte_write64(mbox_cmd->data[i], mbox->mbox_write_reg);
- }
- }
-
- if (mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) {
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- } else {
- if ((!mbox_cmd->msg.s.resp_needed) ||
- (ret == LIO_MBOX_STATUS_FAILED)) {
- mbox->state &= ~LIO_MBOX_STATE_RES_PENDING;
- if (!(mbox->state & (LIO_MBOX_STATE_REQ_RECEIVING |
- LIO_MBOX_STATE_REQ_RECEIVED)))
- mbox->state = LIO_MBOX_STATE_IDLE;
- }
- }
-
- return ret;
-}
-
-/**
- * lio_mbox_process_cmd:
- * @mbox: Pointer mailbox
- * @mbox_cmd: Pointer to command received
- *
- * Process the cmd received in mbox
- */
-static int
-lio_mbox_process_cmd(struct lio_mbox *mbox,
- struct lio_mbox_cmd *mbox_cmd)
-{
- struct lio_device *lio_dev = mbox->lio_dev;
-
- if (mbox_cmd->msg.s.cmd == LIO_CORES_CRASHED)
- lio_dev_err(lio_dev, "Octeon core(s) crashed or got stuck!\n");
-
- return 0;
-}
-
-/**
- * Process the received mbox message.
- */
-int
-lio_mbox_process_message(struct lio_mbox *mbox)
-{
- struct lio_mbox_cmd mbox_cmd;
-
- if (mbox->state & LIO_MBOX_STATE_ERROR) {
- if (mbox->state & (LIO_MBOX_STATE_RES_PENDING |
- LIO_MBOX_STATE_RES_RECEIVING)) {
- rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
- sizeof(struct lio_mbox_cmd));
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- mbox_cmd.recv_status = 1;
- if (mbox_cmd.fn)
- mbox_cmd.fn(mbox->lio_dev, &mbox_cmd,
- mbox_cmd.fn_arg);
-
- return 0;
- }
-
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-
- return 0;
- }
-
- if (mbox->state & LIO_MBOX_STATE_RES_RECEIVED) {
- rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
- sizeof(struct lio_mbox_cmd));
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- mbox_cmd.recv_status = 0;
- if (mbox_cmd.fn)
- mbox_cmd.fn(mbox->lio_dev, &mbox_cmd, mbox_cmd.fn_arg);
-
- return 0;
- }
-
- if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVED) {
- rte_memcpy(&mbox_cmd, &mbox->mbox_req,
- sizeof(struct lio_mbox_cmd));
- if (!mbox_cmd.msg.s.resp_needed) {
- mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVED;
- if (!(mbox->state & LIO_MBOX_STATE_RES_PENDING))
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- }
-
- lio_mbox_process_cmd(mbox, &mbox_cmd);
-
- return 0;
- }
-
- RTE_ASSERT(0);
-
- return 0;
-}
diff --git a/drivers/net/liquidio/base/lio_mbox.h b/drivers/net/liquidio/base/lio_mbox.h
deleted file mode 100644
index 457917e91f..0000000000
--- a/drivers/net/liquidio/base/lio_mbox.h
+++ /dev/null
@@ -1,102 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_MBOX_H_
-#define _LIO_MBOX_H_
-
-#include <stdint.h>
-
-#include <rte_spinlock.h>
-
-/* Macros for Mail Box Communication */
-
-#define LIO_MBOX_DATA_MAX 32
-
-#define LIO_VF_ACTIVE 0x1
-#define LIO_VF_FLR_REQUEST 0x2
-#define LIO_CORES_CRASHED 0x3
-
-/* Macro for Read acknowledgment */
-#define LIO_PFVFACK 0xffffffffffffffff
-#define LIO_PFVFSIG 0x1122334455667788
-#define LIO_PFVFERR 0xDEADDEADDEADDEAD
-
-enum lio_mbox_cmd_status {
- LIO_MBOX_STATUS_SUCCESS = 0,
- LIO_MBOX_STATUS_FAILED = 1,
- LIO_MBOX_STATUS_BUSY = 2
-};
-
-enum lio_mbox_message_type {
- LIO_MBOX_REQUEST = 0,
- LIO_MBOX_RESPONSE = 1
-};
-
-union lio_mbox_message {
- uint64_t mbox_msg64;
- struct {
- uint16_t type : 1;
- uint16_t resp_needed : 1;
- uint16_t cmd : 6;
- uint16_t len : 8;
- uint8_t params[6];
- } s;
-};
-
-typedef void (*lio_mbox_callback)(void *, void *, void *);
-
-struct lio_mbox_cmd {
- union lio_mbox_message msg;
- uint64_t data[LIO_MBOX_DATA_MAX];
- uint32_t q_no;
- uint32_t recv_len;
- uint32_t recv_status;
- lio_mbox_callback fn;
- void *fn_arg;
-};
-
-enum lio_mbox_state {
- LIO_MBOX_STATE_IDLE = 1,
- LIO_MBOX_STATE_REQ_RECEIVING = 2,
- LIO_MBOX_STATE_REQ_RECEIVED = 4,
- LIO_MBOX_STATE_RES_PENDING = 8,
- LIO_MBOX_STATE_RES_RECEIVING = 16,
- LIO_MBOX_STATE_RES_RECEIVED = 16,
- LIO_MBOX_STATE_ERROR = 32
-};
-
-struct lio_mbox {
- /* A spinlock to protect access to this q_mbox. */
- rte_spinlock_t lock;
-
- struct lio_device *lio_dev;
-
- uint32_t q_no;
-
- enum lio_mbox_state state;
-
- /* SLI_MAC_PF_MBOX_INT for PF, SLI_PKT_MBOX_INT for VF. */
- void *mbox_int_reg;
-
- /* SLI_PKT_PF_VF_MBOX_SIG(0) for PF,
- * SLI_PKT_PF_VF_MBOX_SIG(1) for VF.
- */
- void *mbox_write_reg;
-
- /* SLI_PKT_PF_VF_MBOX_SIG(1) for PF,
- * SLI_PKT_PF_VF_MBOX_SIG(0) for VF.
- */
- void *mbox_read_reg;
-
- struct lio_mbox_cmd mbox_req;
-
- struct lio_mbox_cmd mbox_resp;
-
-};
-
-int lio_mbox_read(struct lio_mbox *mbox);
-int lio_mbox_write(struct lio_device *lio_dev,
- struct lio_mbox_cmd *mbox_cmd);
-int lio_mbox_process_message(struct lio_mbox *mbox);
-#endif /* _LIO_MBOX_H_ */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
deleted file mode 100644
index ebcfbb1a5c..0000000000
--- a/drivers/net/liquidio/lio_ethdev.c
+++ /dev/null
@@ -1,2147 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <rte_string_fns.h>
-#include <ethdev_driver.h>
-#include <ethdev_pci.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-#include <rte_alarm.h>
-#include <rte_ether.h>
-
-#include "lio_logs.h"
-#include "lio_23xx_vf.h"
-#include "lio_ethdev.h"
-#include "lio_rxtx.h"
-
-/* Default RSS key in use */
-static uint8_t lio_rss_key[40] = {
- 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
- 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
- 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
- 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
- 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
-};
-
-static const struct rte_eth_desc_lim lio_rx_desc_lim = {
- .nb_max = CN23XX_MAX_OQ_DESCRIPTORS,
- .nb_min = CN23XX_MIN_OQ_DESCRIPTORS,
- .nb_align = 1,
-};
-
-static const struct rte_eth_desc_lim lio_tx_desc_lim = {
- .nb_max = CN23XX_MAX_IQ_DESCRIPTORS,
- .nb_min = CN23XX_MIN_IQ_DESCRIPTORS,
- .nb_align = 1,
-};
-
-/* Wait for control command to reach nic. */
-static uint16_t
-lio_wait_for_ctrl_cmd(struct lio_device *lio_dev,
- struct lio_dev_ctrl_cmd *ctrl_cmd)
-{
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
-
- while ((ctrl_cmd->cond == 0) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
- rte_delay_ms(1);
- }
-
- return !timeout;
-}
-
-/**
- * \brief Send Rx control command
- * @param eth_dev Pointer to the structure rte_eth_dev
- * @param start_stop whether to start or stop
- */
-static int
-lio_send_rx_ctrl_cmd(struct rte_eth_dev *eth_dev, int start_stop)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_RX_CTL;
- ctrl_pkt.ncmd.s.param1 = start_stop;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send RX Control message\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "RX Control command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-/* store statistics names and its offset in stats structure */
-struct rte_lio_xstats_name_off {
- char name[RTE_ETH_XSTATS_NAME_SIZE];
- unsigned int offset;
-};
-
-static const struct rte_lio_xstats_name_off rte_lio_stats_strings[] = {
- {"rx_pkts", offsetof(struct octeon_rx_stats, total_rcvd)},
- {"rx_bytes", offsetof(struct octeon_rx_stats, bytes_rcvd)},
- {"rx_broadcast_pkts", offsetof(struct octeon_rx_stats, total_bcst)},
- {"rx_multicast_pkts", offsetof(struct octeon_rx_stats, total_mcst)},
- {"rx_flow_ctrl_pkts", offsetof(struct octeon_rx_stats, ctl_rcvd)},
- {"rx_fifo_err", offsetof(struct octeon_rx_stats, fifo_err)},
- {"rx_dmac_drop", offsetof(struct octeon_rx_stats, dmac_drop)},
- {"rx_fcs_err", offsetof(struct octeon_rx_stats, fcs_err)},
- {"rx_jabber_err", offsetof(struct octeon_rx_stats, jabber_err)},
- {"rx_l2_err", offsetof(struct octeon_rx_stats, l2_err)},
- {"rx_vxlan_pkts", offsetof(struct octeon_rx_stats, fw_rx_vxlan)},
- {"rx_vxlan_err", offsetof(struct octeon_rx_stats, fw_rx_vxlan_err)},
- {"rx_lro_pkts", offsetof(struct octeon_rx_stats, fw_lro_pkts)},
- {"tx_pkts", (offsetof(struct octeon_tx_stats, total_pkts_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_bytes", (offsetof(struct octeon_tx_stats, total_bytes_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_broadcast_pkts",
- (offsetof(struct octeon_tx_stats, bcast_pkts_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_multicast_pkts",
- (offsetof(struct octeon_tx_stats, mcast_pkts_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_flow_ctrl_pkts", (offsetof(struct octeon_tx_stats, ctl_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_fifo_err", (offsetof(struct octeon_tx_stats, fifo_err)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_total_collisions", (offsetof(struct octeon_tx_stats,
- total_collisions)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_tso", (offsetof(struct octeon_tx_stats, fw_tso)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_vxlan_pkts", (offsetof(struct octeon_tx_stats, fw_tx_vxlan)) +
- sizeof(struct octeon_rx_stats)},
-};
-
-#define LIO_NB_XSTATS RTE_DIM(rte_lio_stats_strings)
-
-/* Get hw stats of the port */
-static int
-lio_dev_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
- unsigned int n)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- struct octeon_link_stats *hw_stats;
- struct lio_link_stats_resp *resp;
- struct lio_soft_command *sc;
- uint32_t resp_size;
- unsigned int i;
- int retval;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- if (n < LIO_NB_XSTATS)
- return LIO_NB_XSTATS;
-
- resp_size = sizeof(struct lio_link_stats_resp);
- sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
- if (sc == NULL)
- return -ENOMEM;
-
- resp = (struct lio_link_stats_resp *)sc->virtrptr;
- lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
- LIO_OPCODE_PORT_STATS, 0, 0, 0);
-
- /* Setting wait time in seconds */
- sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
- retval = lio_send_soft_command(lio_dev, sc);
- if (retval == LIO_IQ_SEND_FAILED) {
- lio_dev_err(lio_dev, "failed to get port stats from firmware. status: %x\n",
- retval);
- goto get_stats_fail;
- }
-
- while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
- lio_process_ordered_list(lio_dev);
- rte_delay_ms(1);
- }
-
- retval = resp->status;
- if (retval) {
- lio_dev_err(lio_dev, "failed to get port stats from firmware\n");
- goto get_stats_fail;
- }
-
- lio_swap_8B_data((uint64_t *)(&resp->link_stats),
- sizeof(struct octeon_link_stats) >> 3);
-
- hw_stats = &resp->link_stats;
-
- for (i = 0; i < LIO_NB_XSTATS; i++) {
- xstats[i].id = i;
- xstats[i].value =
- *(uint64_t *)(((char *)hw_stats) +
- rte_lio_stats_strings[i].offset);
- }
-
- lio_free_soft_command(sc);
-
- return LIO_NB_XSTATS;
-
-get_stats_fail:
- lio_free_soft_command(sc);
-
- return -1;
-}
-
-static int
-lio_dev_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned limit __rte_unused)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- unsigned int i;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- if (xstats_names == NULL)
- return LIO_NB_XSTATS;
-
- /* Note: limit checked in rte_eth_xstats_names() */
-
- for (i = 0; i < LIO_NB_XSTATS; i++) {
- snprintf(xstats_names[i].name, sizeof(xstats_names[i].name),
- "%s", rte_lio_stats_strings[i].name);
- }
-
- return LIO_NB_XSTATS;
-}
-
-/* Reset hw stats for the port */
-static int
-lio_dev_xstats_reset(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
- int ret;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_CLEAR_STATS;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- ret = lio_send_ctrl_pkt(lio_dev, &ctrl_pkt);
- if (ret != 0) {
- lio_dev_err(lio_dev, "Failed to send clear stats command\n");
- return ret;
- }
-
- ret = lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd);
- if (ret != 0) {
- lio_dev_err(lio_dev, "Clear stats command timed out\n");
- return ret;
- }
-
- /* clear stored per queue stats */
- if (*eth_dev->dev_ops->stats_reset == NULL)
- return 0;
- return (*eth_dev->dev_ops->stats_reset)(eth_dev);
-}
-
-/* Retrieve the device statistics (# packets in/out, # bytes in/out, etc */
-static int
-lio_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_droq_stats *oq_stats;
- struct lio_iq_stats *iq_stats;
- struct lio_instr_queue *txq;
- struct lio_droq *droq;
- int i, iq_no, oq_no;
- uint64_t bytes = 0;
- uint64_t pkts = 0;
- uint64_t drop = 0;
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- iq_no = lio_dev->linfo.txpciq[i].s.q_no;
- txq = lio_dev->instr_queue[iq_no];
- if (txq != NULL) {
- iq_stats = &txq->stats;
- pkts += iq_stats->tx_done;
- drop += iq_stats->tx_dropped;
- bytes += iq_stats->tx_tot_bytes;
- }
- }
-
- stats->opackets = pkts;
- stats->obytes = bytes;
- stats->oerrors = drop;
-
- pkts = 0;
- drop = 0;
- bytes = 0;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
- droq = lio_dev->droq[oq_no];
- if (droq != NULL) {
- oq_stats = &droq->stats;
- pkts += oq_stats->rx_pkts_received;
- drop += (oq_stats->rx_dropped +
- oq_stats->dropped_toomany +
- oq_stats->dropped_nomem);
- bytes += oq_stats->rx_bytes_received;
- }
- }
- stats->ibytes = bytes;
- stats->ipackets = pkts;
- stats->ierrors = drop;
-
- return 0;
-}
-
-static int
-lio_dev_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_droq_stats *oq_stats;
- struct lio_iq_stats *iq_stats;
- struct lio_instr_queue *txq;
- struct lio_droq *droq;
- int i, iq_no, oq_no;
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- iq_no = lio_dev->linfo.txpciq[i].s.q_no;
- txq = lio_dev->instr_queue[iq_no];
- if (txq != NULL) {
- iq_stats = &txq->stats;
- memset(iq_stats, 0, sizeof(struct lio_iq_stats));
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
- droq = lio_dev->droq[oq_no];
- if (droq != NULL) {
- oq_stats = &droq->stats;
- memset(oq_stats, 0, sizeof(struct lio_droq_stats));
- }
- }
-
- return 0;
-}
-
-static int
-lio_dev_info_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_info *devinfo)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- switch (pci_dev->id.subsystem_device_id) {
- /* CN23xx 10G cards */
- case PCI_SUBSYS_DEV_ID_CN2350_210:
- case PCI_SUBSYS_DEV_ID_CN2360_210:
- case PCI_SUBSYS_DEV_ID_CN2350_210SVPN3:
- case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
- case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
- case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
- break;
- /* CN23xx 25G cards */
- case PCI_SUBSYS_DEV_ID_CN2350_225:
- case PCI_SUBSYS_DEV_ID_CN2360_225:
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
- break;
- default:
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
- lio_dev_err(lio_dev,
- "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
- return -EINVAL;
- }
-
- devinfo->max_rx_queues = lio_dev->max_rx_queues;
- devinfo->max_tx_queues = lio_dev->max_tx_queues;
-
- devinfo->min_rx_bufsize = LIO_MIN_RX_BUF_SIZE;
- devinfo->max_rx_pktlen = LIO_MAX_RX_PKTLEN;
-
- devinfo->max_mac_addrs = 1;
-
- devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
- RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_RSS_HASH);
- devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
- RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
-
- devinfo->rx_desc_lim = lio_rx_desc_lim;
- devinfo->tx_desc_lim = lio_tx_desc_lim;
-
- devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
- devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
- RTE_ETH_RSS_NONFRAG_IPV4_TCP |
- RTE_ETH_RSS_IPV6 |
- RTE_ETH_RSS_NONFRAG_IPV6_TCP |
- RTE_ETH_RSS_IPV6_EX |
- RTE_ETH_RSS_IPV6_TCP_EX);
- return 0;
-}
-
-static int
-lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- PMD_INIT_FUNC_TRACE();
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't set MTU\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_MTU;
- ctrl_pkt.ncmd.s.param1 = mtu;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send command to change MTU\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Command to change MTU timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct lio_rss_set *rss_param;
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
- int i, j, index;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't update reta\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
- lio_dev_err(lio_dev,
-			    "The size of hash lookup table configured (%d) doesn't match the size supported by hardware (%d)\n",
- reta_size, LIO_RSS_MAX_TABLE_SZ);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
- ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- rss_param->param.flags = 0xF;
- rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
- rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
-
- for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
- if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
- index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
- rss_state->itable[index] = reta_conf[i].reta[j];
- }
- }
- }
-
- rss_state->itable_size = LIO_RSS_MAX_TABLE_SZ;
- memcpy(rss_param->itable, rss_state->itable, rss_state->itable_size);
-
- lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to set rss hash\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Set rss hash timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- int i, num;
-
- if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
- lio_dev_err(lio_dev,
-			    "The size of hash lookup table configured (%d) doesn't match the size supported by hardware (%d)\n",
- reta_size, LIO_RSS_MAX_TABLE_SZ);
- return -EINVAL;
- }
-
- num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
-
- for (i = 0; i < num; i++) {
- memcpy(reta_conf->reta,
- &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
- RTE_ETH_RETA_GROUP_SIZE);
- reta_conf++;
- }
-
- return 0;
-}
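/* The reta_conf layout consumed by the two callbacks above follows the
 * generic ethdev convention: flat table entry i lives in group
 * i / RTE_ETH_RETA_GROUP_SIZE at slot i % RTE_ETH_RETA_GROUP_SIZE, and it
 * is applied only when the matching mask bit is set. A sketch of the
 * application side (spread_reta(), port_id and nb_rx_queues are
 * placeholders) that spreads the table round-robin across the RX queues:
 */
#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t nb_rx_queues)
{
	/* Eight groups of 64 entries cover tables up to 512 entries. */
	struct rte_eth_rss_reta_entry64 reta_conf[8];
	struct rte_eth_dev_info dev_info;
	uint16_t i;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;
	if (dev_info.reta_size > RTE_DIM(reta_conf) * RTE_ETH_RETA_GROUP_SIZE)
		return -1;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < dev_info.reta_size; i++) {
		uint16_t grp = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t slot = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[grp].mask |= (1ULL << slot);
		reta_conf[grp].reta[slot] = i % nb_rx_queues;
	}

	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
					   dev_info.reta_size);
}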
-
-static int
-lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- uint8_t *hash_key = NULL;
- uint64_t rss_hf = 0;
-
- if (rss_state->hash_disable) {
- lio_dev_info(lio_dev, "RSS disabled in nic\n");
- rss_conf->rss_hf = 0;
- return 0;
- }
-
- /* Get key value */
- hash_key = rss_conf->rss_key;
- if (hash_key != NULL)
- memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
-
- if (rss_state->ip)
- rss_hf |= RTE_ETH_RSS_IPV4;
- if (rss_state->tcp_hash)
- rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
- if (rss_state->ipv6)
- rss_hf |= RTE_ETH_RSS_IPV6;
- if (rss_state->ipv6_tcp_hash)
- rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
- if (rss_state->ipv6_ex)
- rss_hf |= RTE_ETH_RSS_IPV6_EX;
- if (rss_state->ipv6_tcp_ex_hash)
- rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
-
- rss_conf->rss_hf = rss_hf;
-
- return 0;
-}
-
-static int
-lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct lio_rss_set *rss_param;
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't update hash\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
- ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- rss_param->param.flags = 0xF;
-
- if (rss_conf->rss_key) {
- rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_KEY_UNCHANGED;
- rss_state->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- rss_param->param.hashkeysize = LIO_RSS_MAX_KEY_SZ;
- memcpy(rss_state->hash_key, rss_conf->rss_key,
- rss_state->hash_key_size);
- memcpy(rss_param->key, rss_state->hash_key,
- rss_state->hash_key_size);
- }
-
- if ((rss_conf->rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
- /* Can't disable rss through hash flags,
- * if it is enabled by default during init
- */
- if (!rss_state->hash_disable)
- return -EINVAL;
-
- /* This is for --disable-rss during testpmd launch */
- rss_param->param.flags |= LIO_RSS_PARAM_DISABLE_RSS;
- } else {
- uint32_t hashinfo = 0;
-
- /* Can't enable rss if disabled by default during init */
- if (rss_state->hash_disable)
- return -EINVAL;
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
- hashinfo |= LIO_RSS_HASH_IPV4;
- rss_state->ip = 1;
- } else {
- rss_state->ip = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
- hashinfo |= LIO_RSS_HASH_TCP_IPV4;
- rss_state->tcp_hash = 1;
- } else {
- rss_state->tcp_hash = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
- hashinfo |= LIO_RSS_HASH_IPV6;
- rss_state->ipv6 = 1;
- } else {
- rss_state->ipv6 = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
- hashinfo |= LIO_RSS_HASH_TCP_IPV6;
- rss_state->ipv6_tcp_hash = 1;
- } else {
- rss_state->ipv6_tcp_hash = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
- hashinfo |= LIO_RSS_HASH_IPV6_EX;
- rss_state->ipv6_ex = 1;
- } else {
- rss_state->ipv6_ex = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
- hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
- rss_state->ipv6_tcp_ex_hash = 1;
- } else {
- rss_state->ipv6_tcp_ex_hash = 0;
- }
-
- rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_INFO_UNCHANGED;
- rss_param->param.hashinfo = hashinfo;
- }
-
- lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to set rss hash\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Set rss hash timed out\n");
- return -1;
- }
-
- return 0;
-}
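/* The callback above only honours flow types within LIO_RSS_OFFLOAD_ALL.
 * An application-side sketch (set_rss_ipv4_tcp() and its arguments are
 * placeholders, not part of this driver) narrowing the hash to IPv4 and
 * IPv4/TCP flows:
 */
#include <rte_ethdev.h>

static int
set_rss_ipv4_tcp(uint16_t port_id, uint8_t *key, uint8_t key_len)
{
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = key,		/* NULL keeps the current key */
		.rss_key_len = key_len,
		.rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
	};

	/* Dispatches to the PMD's rss_hash_update callback shown above. */
	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}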
-
-/**
- * Add VXLAN destination UDP port for an interface.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- * @param udp_tnl
- * udp tunnel conf
- *
- * @return
- * On success return 0
- * On failure return -1
- */
-static int
-lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *udp_tnl)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (udp_tnl == NULL)
- return -EINVAL;
-
- if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
- lio_dev_err(lio_dev, "Unsupported tunnel type\n");
- return -1;
- }
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
- ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
- ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_ADD;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_ADD command\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "VXLAN_PORT_ADD command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-/**
- * Remove VXLAN destination UDP port for an interface.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- * @param udp_tnl
- * udp tunnel conf
- *
- * @return
- * On success return 0
- * On failure return -1
- */
-static int
-lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *udp_tnl)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (udp_tnl == NULL)
- return -EINVAL;
-
- if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
- lio_dev_err(lio_dev, "Unsupported tunnel type\n");
- return -1;
- }
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
- ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
- ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_DEL;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_DEL command\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "VXLAN_PORT_DEL command timed out\n");
- return -1;
- }
-
- return 0;
-}
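/* Both tunnel-port callbacks above are reached through the generic ethdev
 * API. A small sketch (add_vxlan_port() and port_id are placeholders)
 * registering the IANA-assigned VXLAN destination port:
 */
#include <rte_ethdev.h>

static int
add_vxlan_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = 4789,	/* IANA-assigned VXLAN port */
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};

	/* Dispatches to the PMD's udp_tunnel_port_add callback. */
	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}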
-
-static int
-lio_dev_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, int on)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (lio_dev->linfo.vlan_is_admin_assigned)
- return -EPERM;
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = on ?
- LIO_CMD_ADD_VLAN_FILTER : LIO_CMD_DEL_VLAN_FILTER;
- ctrl_pkt.ncmd.s.param1 = vlan_id;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to %s VLAN port\n",
- on ? "add" : "remove");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Command to %s VLAN port timed out\n",
- on ? "add" : "remove");
- return -1;
- }
-
- return 0;
-}
-
-static uint64_t
-lio_hweight64(uint64_t w)
-{
- uint64_t res = w - ((w >> 1) & 0x5555555555555555ul);
-
- res =
- (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
- res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
- res = res + (res >> 8);
- res = res + (res >> 16);
-
- return (res + (res >> 32)) & 0x00000000000000FFul;
-}
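/* lio_hweight64() above is the classic SWAR population count; it is used
 * later in lio_dev_configure() to count the queues enabled in the
 * firmware's iq/oq masks. Assuming a GCC/Clang toolchain, an equivalent
 * helper (lio_hweight64_builtin() is a hypothetical name, not part of
 * this driver) would be:
 */
#include <stdint.h>

static inline uint64_t
lio_hweight64_builtin(uint64_t w)
{
	/* __builtin_popcountll() counts set bits, matching the SWAR result. */
	return (uint64_t)__builtin_popcountll(w);
}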
-
-static int
-lio_dev_link_update(struct rte_eth_dev *eth_dev,
- int wait_to_complete __rte_unused)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct rte_eth_link link;
-
- /* Initialize */
- memset(&link, 0, sizeof(link));
- link.link_status = RTE_ETH_LINK_DOWN;
- link.link_speed = RTE_ETH_SPEED_NUM_NONE;
- link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = RTE_ETH_LINK_AUTONEG;
-
- /* Return what we found */
- if (lio_dev->linfo.link.s.link_up == 0) {
- /* Interface is down */
- return rte_eth_linkstatus_set(eth_dev, &link);
- }
-
- link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
- link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
- switch (lio_dev->linfo.link.s.speed) {
- case LIO_LINK_SPEED_10000:
- link.link_speed = RTE_ETH_SPEED_NUM_10G;
- break;
- case LIO_LINK_SPEED_25000:
- link.link_speed = RTE_ETH_SPEED_NUM_25G;
- break;
- default:
- link.link_speed = RTE_ETH_SPEED_NUM_NONE;
- link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
- }
-
- return rte_eth_linkstatus_set(eth_dev, &link);
-}
-
-/**
- * \brief Send updated interface flags (promiscuous/allmulticast) to firmware
- * @param eth_dev Pointer to the structure rte_eth_dev
- *
- * @return
- * On success return 0
- * On failure return negative errno
- */
-static int
-lio_change_dev_flag(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- /* Create a ctrl pkt command to be sent to core app. */
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_DEVFLAGS;
- ctrl_pkt.ncmd.s.param1 = lio_dev->ifflags;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send change flag message\n");
- return -EAGAIN;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Change dev flag command timed out\n");
- return -ETIMEDOUT;
- }
-
- return 0;
-}
-
-static int
-lio_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
- lio_dev_err(lio_dev, "Require firmware version >= %s\n",
- LIO_VF_TRUST_MIN_VERSION);
- return -EAGAIN;
- }
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't enable promiscuous\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags |= LIO_IFFLAG_PROMISC;
- return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
- lio_dev_err(lio_dev, "Require firmware version >= %s\n",
- LIO_VF_TRUST_MIN_VERSION);
- return -EAGAIN;
- }
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't disable promiscuous\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags &= ~LIO_IFFLAG_PROMISC;
- return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_allmulticast_enable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't enable multicast\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags |= LIO_IFFLAG_ALLMULTI;
- return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_allmulticast_disable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't disable multicast\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags &= ~LIO_IFFLAG_ALLMULTI;
- return lio_change_dev_flag(eth_dev);
-}
-
-static void
-lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct rte_eth_rss_reta_entry64 reta_conf[8];
- struct rte_eth_rss_conf rss_conf;
- uint16_t i;
-
- /* Configure the RSS key and the RSS protocols used to compute
- * the RSS hash of input packets.
- */
- rss_conf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
- if ((rss_conf.rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
- rss_state->hash_disable = 1;
- lio_dev_rss_hash_update(eth_dev, &rss_conf);
- return;
- }
-
- if (rss_conf.rss_key == NULL)
- rss_conf.rss_key = lio_rss_key; /* Default hash key */
-
- lio_dev_rss_hash_update(eth_dev, &rss_conf);
-
- memset(reta_conf, 0, sizeof(reta_conf));
- for (i = 0; i < LIO_RSS_MAX_TABLE_SZ; i++) {
- uint8_t q_idx, conf_idx, reta_idx;
-
- q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
- i % eth_dev->data->nb_rx_queues : 0);
- conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
- reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
- reta_conf[conf_idx].reta[reta_idx] = q_idx;
- reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
- }
-
- lio_dev_rss_reta_update(eth_dev, reta_conf, LIO_RSS_MAX_TABLE_SZ);
-}
-
-static void
-lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct rte_eth_rss_conf rss_conf;
-
- switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
- case RTE_ETH_MQ_RX_RSS:
- lio_dev_rss_configure(eth_dev);
- break;
- case RTE_ETH_MQ_RX_NONE:
- /* if mq_mode is none, disable rss mode. */
- default:
- memset(&rss_conf, 0, sizeof(rss_conf));
- rss_state->hash_disable = 1;
- lio_dev_rss_hash_update(eth_dev, &rss_conf);
- }
-}
-
-/**
- * Setup our receive queue/ringbuffer. This is the
- * queue the Octeon uses to send us packets and
- * responses. We are given a memory pool for our
- * packet buffers that are used to populate the receive
- * queue.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- * @param q_no
- * Queue number
- * @param num_rx_descs
- * Number of entries in the queue
- * @param socket_id
- * Where to allocate memory
- * @param rx_conf
- *   Pointer to the structure rte_eth_rxconf
- * @param mp
- * Pointer to the packet pool
- *
- * @return
- * - On success, return 0
- * - On failure, return -1
- */
-static int
-lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
- uint16_t num_rx_descs, unsigned int socket_id,
- const struct rte_eth_rxconf *rx_conf __rte_unused,
- struct rte_mempool *mp)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct rte_pktmbuf_pool_private *mbp_priv;
- uint32_t fw_mapped_oq;
- uint16_t buf_size;
-
- if (q_no >= lio_dev->nb_rx_queues) {
- lio_dev_err(lio_dev, "Invalid rx queue number %u\n", q_no);
- return -EINVAL;
- }
-
- lio_dev_dbg(lio_dev, "setting up rx queue %u\n", q_no);
-
- fw_mapped_oq = lio_dev->linfo.rxpciq[q_no].s.q_no;
-
- /* Free previous allocation if any */
- if (eth_dev->data->rx_queues[q_no] != NULL) {
- lio_dev_rx_queue_release(eth_dev, q_no);
- eth_dev->data->rx_queues[q_no] = NULL;
- }
-
- mbp_priv = rte_mempool_get_priv(mp);
- buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
-
- if (lio_setup_droq(lio_dev, fw_mapped_oq, num_rx_descs, buf_size, mp,
- socket_id)) {
- lio_dev_err(lio_dev, "droq allocation failed\n");
- return -1;
- }
-
- eth_dev->data->rx_queues[q_no] = lio_dev->droq[fw_mapped_oq];
-
- return 0;
-}
-
-/**
- * Release the receive queue/ringbuffer. Called by
- * the upper layers.
- *
- * @param eth_dev
- * Pointer to Ethernet device structure.
- * @param q_no
- * Receive queue index.
- *
- * @return
- * - nothing
- */
-void
-lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
-{
- struct lio_droq *droq = dev->data->rx_queues[q_no];
- int oq_no;
-
- if (droq) {
- oq_no = droq->q_no;
- lio_delete_droq_queue(droq->lio_dev, oq_no);
- }
-}
-
-/**
- * Allocate and initialize SW ring. Initialize associated HW registers.
- *
- * @param eth_dev
- * Pointer to structure rte_eth_dev
- *
- * @param q_no
- * Queue number
- *
- * @param num_tx_descs
- * Number of ringbuffer descriptors
- *
- * @param socket_id
- * NUMA socket id, used for memory allocations
- *
- * @param tx_conf
- * Pointer to the structure rte_eth_txconf
- *
- * @return
- * - On success, return 0
- * - On failure, return -errno value
- */
-static int
-lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
- uint16_t num_tx_descs, unsigned int socket_id,
- const struct rte_eth_txconf *tx_conf __rte_unused)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
- int retval;
-
- if (q_no >= lio_dev->nb_tx_queues) {
- lio_dev_err(lio_dev, "Invalid tx queue number %u\n", q_no);
- return -EINVAL;
- }
-
- lio_dev_dbg(lio_dev, "setting up tx queue %u\n", q_no);
-
- /* Free previous allocation if any */
- if (eth_dev->data->tx_queues[q_no] != NULL) {
- lio_dev_tx_queue_release(eth_dev, q_no);
- eth_dev->data->tx_queues[q_no] = NULL;
- }
-
- retval = lio_setup_iq(lio_dev, q_no, lio_dev->linfo.txpciq[q_no],
- num_tx_descs, lio_dev, socket_id);
-
- if (retval) {
- lio_dev_err(lio_dev, "Runtime IQ(TxQ) creation failed.\n");
- return retval;
- }
-
- retval = lio_setup_sglists(lio_dev, q_no, fw_mapped_iq,
- lio_dev->instr_queue[fw_mapped_iq]->nb_desc,
- socket_id);
-
- if (retval) {
- lio_delete_instruction_queue(lio_dev, fw_mapped_iq);
- return retval;
- }
-
- eth_dev->data->tx_queues[q_no] = lio_dev->instr_queue[fw_mapped_iq];
-
- return 0;
-}
-
-/**
- * Release the transmit queue/ringbuffer. Called by
- * the upper layers.
- *
- * @param eth_dev
- * Pointer to Ethernet device structure.
- * @param q_no
- * Transmit queue index.
- *
- * @return
- * - nothing
- */
-void
-lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
-{
- struct lio_instr_queue *tq = dev->data->tx_queues[q_no];
- uint32_t fw_mapped_iq_no;
-
- if (tq) {
- /* Free sg_list */
- lio_delete_sglist(tq);
-
- fw_mapped_iq_no = tq->txpciq.s.q_no;
- lio_delete_instruction_queue(tq->lio_dev, fw_mapped_iq_no);
- }
-}
-
-/**
- * API to check link state.
- */
-static void
-lio_dev_get_link_status(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- struct lio_link_status_resp *resp;
- union octeon_link_status *ls;
- struct lio_soft_command *sc;
- uint32_t resp_size;
-
- if (!lio_dev->intf_open)
- return;
-
- resp_size = sizeof(struct lio_link_status_resp);
- sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
- if (sc == NULL)
- return;
-
- resp = (struct lio_link_status_resp *)sc->virtrptr;
- lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
- LIO_OPCODE_INFO, 0, 0, 0);
-
- /* Setting wait time in seconds */
- sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
- if (lio_send_soft_command(lio_dev, sc) == LIO_IQ_SEND_FAILED)
- goto get_status_fail;
-
- while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
- rte_delay_ms(1);
- }
-
- if (resp->status)
- goto get_status_fail;
-
- ls = &resp->link_info.link;
-
- lio_swap_8B_data((uint64_t *)ls, sizeof(union octeon_link_status) >> 3);
-
- if (lio_dev->linfo.link.link_status64 != ls->link_status64) {
- if (ls->s.mtu < eth_dev->data->mtu) {
- lio_dev_info(lio_dev, "Lowered VF MTU to %d as PF MTU dropped\n",
- ls->s.mtu);
- eth_dev->data->mtu = ls->s.mtu;
- }
- lio_dev->linfo.link.link_status64 = ls->link_status64;
- lio_dev_link_update(eth_dev, 0);
- }
-
- lio_free_soft_command(sc);
-
- return;
-
-get_status_fail:
- lio_free_soft_command(sc);
-}
-
-/* This function will be invoked every LIO_LSC_TIMEOUT us (100 ms)
- * and will update link state if it changes.
- */
-static void
-lio_sync_link_state_check(void *eth_dev)
-{
- struct lio_device *lio_dev =
- (((struct rte_eth_dev *)eth_dev)->data->dev_private);
-
- if (lio_dev->port_configured)
- lio_dev_get_link_status(eth_dev);
-
- /* Schedule periodic link status check.
-	 * Stop the check when the interface is closed and start it again on open.
- */
- if (lio_dev->intf_open)
- rte_eal_alarm_set(LIO_LSC_TIMEOUT, lio_sync_link_state_check,
- eth_dev);
-}
-
-static int
-lio_dev_start(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- int ret = 0;
-
- lio_dev_info(lio_dev, "Starting port %d\n", eth_dev->data->port_id);
-
- if (lio_dev->fn_list.enable_io_queues(lio_dev))
- return -1;
-
- if (lio_send_rx_ctrl_cmd(eth_dev, 1))
- return -1;
-
- /* Ready for link status updates */
- lio_dev->intf_open = 1;
- rte_mb();
-
- /* Configure RSS if device configured with multiple RX queues. */
- lio_dev_mq_rx_configure(eth_dev);
-
-	/* Before updating the link info,
-	 * linfo.link.link_status64 must be set to 0.
- */
- lio_dev->linfo.link.link_status64 = 0;
-
- /* start polling for lsc */
- ret = rte_eal_alarm_set(LIO_LSC_TIMEOUT,
- lio_sync_link_state_check,
- eth_dev);
- if (ret) {
- lio_dev_err(lio_dev,
- "link state check handler creation failed\n");
- goto dev_lsc_handle_error;
- }
-
- while ((lio_dev->linfo.link.link_status64 == 0) && (--timeout))
- rte_delay_ms(1);
-
- if (lio_dev->linfo.link.link_status64 == 0) {
- ret = -1;
- goto dev_mtu_set_error;
- }
-
- ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
- if (ret != 0)
- goto dev_mtu_set_error;
-
- return 0;
-
-dev_mtu_set_error:
- rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
-
-dev_lsc_handle_error:
- lio_dev->intf_open = 0;
- lio_send_rx_ctrl_cmd(eth_dev, 0);
-
- return ret;
-}
-
-/* Stop device and disable input/output functions */
-static int
-lio_dev_stop(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- lio_dev_info(lio_dev, "Stopping port %d\n", eth_dev->data->port_id);
- eth_dev->data->dev_started = 0;
- lio_dev->intf_open = 0;
- rte_mb();
-
- /* Cancel callback if still running. */
- rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
-
- lio_send_rx_ctrl_cmd(eth_dev, 0);
-
- lio_wait_for_instr_fetch(lio_dev);
-
- /* Clear recorded link status */
- lio_dev->linfo.link.link_status64 = 0;
-
- return 0;
-}
-
-static int
-lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
-		lio_dev_info(lio_dev, "Port is stopped, start the port first\n");
- return 0;
- }
-
- if (lio_dev->linfo.link.s.link_up) {
- lio_dev_info(lio_dev, "Link is already UP\n");
- return 0;
- }
-
- if (lio_send_rx_ctrl_cmd(eth_dev, 1)) {
- lio_dev_err(lio_dev, "Unable to set Link UP\n");
- return -1;
- }
-
- lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
-
- return 0;
-}
-
-static int
-lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
-		lio_dev_info(lio_dev, "Port is stopped, start the port first\n");
- return 0;
- }
-
- if (!lio_dev->linfo.link.s.link_up) {
- lio_dev_info(lio_dev, "Link is already DOWN\n");
- return 0;
- }
-
- lio_dev->linfo.link.s.link_up = 0;
- eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
-
- if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
- lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
- lio_dev_err(lio_dev, "Unable to set Link Down\n");
- return -1;
- }
-
- return 0;
-}
-
-/**
- * Reset and stop the device. This occurs on the first
- * call to this routine. Subsequent calls will simply
- * return. NB: This will require the NIC to be rebooted.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- *
- * @return
- *  - 0 on success, error code otherwise
- */
-static int
-lio_dev_close(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- int ret = 0;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- lio_dev_info(lio_dev, "closing port %d\n", eth_dev->data->port_id);
-
- if (lio_dev->intf_open)
- ret = lio_dev_stop(eth_dev);
-
- /* Reset ioq regs */
- lio_dev->fn_list.setup_device_regs(lio_dev);
-
- if (lio_dev->pci_dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
- cn23xx_vf_ask_pf_to_do_flr(lio_dev);
- rte_delay_ms(LIO_PCI_FLR_WAIT);
- }
-
- /* lio_free_mbox */
- lio_dev->fn_list.free_mbox(lio_dev);
-
- /* Free glist resources */
- rte_free(lio_dev->glist_head);
- rte_free(lio_dev->glist_lock);
- lio_dev->glist_head = NULL;
- lio_dev->glist_lock = NULL;
-
- lio_dev->port_configured = 0;
-
- /* Delete all queues */
- lio_dev_clear_queues(eth_dev);
-
- return ret;
-}
-
-/**
- * Enable tunnel rx checksum verification from firmware.
- */
-static void
-lio_enable_hw_tunnel_rx_checksum(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_RX_CSUM_CTL;
- ctrl_pkt.ncmd.s.param1 = LIO_CMD_RXCSUM_ENABLE;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send TNL_RX_CSUM command\n");
- return;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
- lio_dev_err(lio_dev, "TNL_RX_CSUM command timed out\n");
-}
-
-/**
- * Enable checksum calculation for inner packet in a tunnel.
- */
-static void
-lio_enable_hw_tunnel_tx_checksum(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_TX_CSUM_CTL;
- ctrl_pkt.ncmd.s.param1 = LIO_CMD_TXCSUM_ENABLE;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send TNL_TX_CSUM command\n");
- return;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
- lio_dev_err(lio_dev, "TNL_TX_CSUM command timed out\n");
-}
-
-static int
-lio_send_queue_count_update(struct rte_eth_dev *eth_dev, int num_txq,
- int num_rxq)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (strcmp(lio_dev->firmware_version, LIO_Q_RECONF_MIN_VERSION) < 0) {
- lio_dev_err(lio_dev, "Require firmware version >= %s\n",
- LIO_Q_RECONF_MIN_VERSION);
- return -ENOTSUP;
- }
-
- /* flush added to prevent cmd failure
-	 * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_QUEUE_COUNT_CTL;
- ctrl_pkt.ncmd.s.param1 = num_txq;
- ctrl_pkt.ncmd.s.param2 = num_rxq;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send queue count control command\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Queue count control command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_reconf_queues(struct rte_eth_dev *eth_dev, int num_txq, int num_rxq)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- int ret;
-
- if (lio_dev->nb_rx_queues != num_rxq ||
- lio_dev->nb_tx_queues != num_txq) {
- if (lio_send_queue_count_update(eth_dev, num_txq, num_rxq))
- return -1;
- lio_dev->nb_rx_queues = num_rxq;
- lio_dev->nb_tx_queues = num_txq;
- }
-
- if (lio_dev->intf_open) {
- ret = lio_dev_stop(eth_dev);
- if (ret != 0)
- return ret;
- }
-
- /* Reset ioq registers */
- if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
- lio_dev_err(lio_dev, "Failed to configure device registers\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_configure(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- int retval, num_iqueues, num_oqueues;
- uint8_t mac[RTE_ETHER_ADDR_LEN], i;
- struct lio_if_cfg_resp *resp;
- struct lio_soft_command *sc;
- union lio_if_cfg if_cfg;
- uint32_t resp_size;
-
- PMD_INIT_FUNC_TRACE();
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
- eth_dev->data->dev_conf.rxmode.offloads |=
- RTE_ETH_RX_OFFLOAD_RSS_HASH;
-
- /* Inform firmware about change in number of queues to use.
- * Disable IO queues and reset registers for re-configuration.
- */
- if (lio_dev->port_configured)
- return lio_reconf_queues(eth_dev,
- eth_dev->data->nb_tx_queues,
- eth_dev->data->nb_rx_queues);
-
- lio_dev->nb_rx_queues = eth_dev->data->nb_rx_queues;
- lio_dev->nb_tx_queues = eth_dev->data->nb_tx_queues;
-
- /* Set max number of queues which can be re-configured. */
- lio_dev->max_rx_queues = eth_dev->data->nb_rx_queues;
- lio_dev->max_tx_queues = eth_dev->data->nb_tx_queues;
-
- resp_size = sizeof(struct lio_if_cfg_resp);
- sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
- if (sc == NULL)
- return -ENOMEM;
-
- resp = (struct lio_if_cfg_resp *)sc->virtrptr;
-
-	/* Firmware doesn't have the capability to reconfigure the queues.
-	 * Claim all queues, and use as many as required.
- */
- if_cfg.if_cfg64 = 0;
- if_cfg.s.num_iqueues = lio_dev->nb_tx_queues;
- if_cfg.s.num_oqueues = lio_dev->nb_rx_queues;
- if_cfg.s.base_queue = 0;
-
- if_cfg.s.gmx_port_id = lio_dev->pf_num;
-
- lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
- LIO_OPCODE_IF_CFG, 0,
- if_cfg.if_cfg64, 0);
-
- /* Setting wait time in seconds */
- sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
- retval = lio_send_soft_command(lio_dev, sc);
- if (retval == LIO_IQ_SEND_FAILED) {
- lio_dev_err(lio_dev, "iq/oq config failed status: %x\n",
- retval);
- /* Soft instr is freed by driver in case of failure. */
- goto nic_config_fail;
- }
-
-	/* Poll until the completion word indicates that the
-	 * response arrived or the command timed out.
- */
- while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
- lio_process_ordered_list(lio_dev);
- rte_delay_ms(1);
- }
-
- retval = resp->status;
- if (retval) {
- lio_dev_err(lio_dev, "iq/oq config failed\n");
- goto nic_config_fail;
- }
-
- strlcpy(lio_dev->firmware_version,
- resp->cfg_info.lio_firmware_version, LIO_FW_VERSION_LENGTH);
-
- lio_swap_8B_data((uint64_t *)(&resp->cfg_info),
- sizeof(struct octeon_if_cfg_info) >> 3);
-
- num_iqueues = lio_hweight64(resp->cfg_info.iqmask);
- num_oqueues = lio_hweight64(resp->cfg_info.oqmask);
-
- if (!(num_iqueues) || !(num_oqueues)) {
- lio_dev_err(lio_dev,
- "Got bad iqueues (%016lx) or oqueues (%016lx) from firmware.\n",
- (unsigned long)resp->cfg_info.iqmask,
- (unsigned long)resp->cfg_info.oqmask);
- goto nic_config_fail;
- }
-
- lio_dev_dbg(lio_dev,
- "interface %d, iqmask %016lx, oqmask %016lx, numiqueues %d, numoqueues %d\n",
- eth_dev->data->port_id,
- (unsigned long)resp->cfg_info.iqmask,
- (unsigned long)resp->cfg_info.oqmask,
- num_iqueues, num_oqueues);
-
- lio_dev->linfo.num_rxpciq = num_oqueues;
- lio_dev->linfo.num_txpciq = num_iqueues;
-
- for (i = 0; i < num_oqueues; i++) {
- lio_dev->linfo.rxpciq[i].rxpciq64 =
- resp->cfg_info.linfo.rxpciq[i].rxpciq64;
- lio_dev_dbg(lio_dev, "index %d OQ %d\n",
- i, lio_dev->linfo.rxpciq[i].s.q_no);
- }
-
- for (i = 0; i < num_iqueues; i++) {
- lio_dev->linfo.txpciq[i].txpciq64 =
- resp->cfg_info.linfo.txpciq[i].txpciq64;
- lio_dev_dbg(lio_dev, "index %d IQ %d\n",
- i, lio_dev->linfo.txpciq[i].s.q_no);
- }
-
- lio_dev->linfo.hw_addr = resp->cfg_info.linfo.hw_addr;
- lio_dev->linfo.gmxport = resp->cfg_info.linfo.gmxport;
- lio_dev->linfo.link.link_status64 =
- resp->cfg_info.linfo.link.link_status64;
-
- /* 64-bit swap required on LE machines */
- lio_swap_8B_data(&lio_dev->linfo.hw_addr, 1);
- for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
- mac[i] = *((uint8_t *)(((uint8_t *)&lio_dev->linfo.hw_addr) +
- 2 + i));
-
- /* Copy the permanent MAC address */
- rte_ether_addr_copy((struct rte_ether_addr *)mac,
-			    &eth_dev->data->mac_addrs[0]);
-
- /* enable firmware checksum support for tunnel packets */
- lio_enable_hw_tunnel_rx_checksum(eth_dev);
- lio_enable_hw_tunnel_tx_checksum(eth_dev);
-
- lio_dev->glist_lock =
- rte_zmalloc(NULL, sizeof(*lio_dev->glist_lock) * num_iqueues, 0);
- if (lio_dev->glist_lock == NULL)
- return -ENOMEM;
-
- lio_dev->glist_head =
- rte_zmalloc(NULL, sizeof(*lio_dev->glist_head) * num_iqueues,
- 0);
- if (lio_dev->glist_head == NULL) {
- rte_free(lio_dev->glist_lock);
- lio_dev->glist_lock = NULL;
- return -ENOMEM;
- }
-
- lio_dev_link_update(eth_dev, 0);
-
- lio_dev->port_configured = 1;
-
- lio_free_soft_command(sc);
-
- /* Reset ioq regs */
- lio_dev->fn_list.setup_device_regs(lio_dev);
-
- /* Free iq_0 used during init */
- lio_free_instr_queue0(lio_dev);
-
- return 0;
-
-nic_config_fail:
- lio_dev_err(lio_dev, "Failed retval %d\n", retval);
- lio_free_soft_command(sc);
- lio_free_instr_queue0(lio_dev);
-
- return -ENODEV;
-}
-
-/* Ethernet device operations for the LiquidIO VF PMD */
-static const struct eth_dev_ops liovf_eth_dev_ops = {
- .dev_configure = lio_dev_configure,
- .dev_start = lio_dev_start,
- .dev_stop = lio_dev_stop,
- .dev_set_link_up = lio_dev_set_link_up,
- .dev_set_link_down = lio_dev_set_link_down,
- .dev_close = lio_dev_close,
- .promiscuous_enable = lio_dev_promiscuous_enable,
- .promiscuous_disable = lio_dev_promiscuous_disable,
- .allmulticast_enable = lio_dev_allmulticast_enable,
- .allmulticast_disable = lio_dev_allmulticast_disable,
- .link_update = lio_dev_link_update,
- .stats_get = lio_dev_stats_get,
- .xstats_get = lio_dev_xstats_get,
- .xstats_get_names = lio_dev_xstats_get_names,
- .stats_reset = lio_dev_stats_reset,
- .xstats_reset = lio_dev_xstats_reset,
- .dev_infos_get = lio_dev_info_get,
- .vlan_filter_set = lio_dev_vlan_filter_set,
- .rx_queue_setup = lio_dev_rx_queue_setup,
- .rx_queue_release = lio_dev_rx_queue_release,
- .tx_queue_setup = lio_dev_tx_queue_setup,
- .tx_queue_release = lio_dev_tx_queue_release,
- .reta_update = lio_dev_rss_reta_update,
- .reta_query = lio_dev_rss_reta_query,
- .rss_hash_conf_get = lio_dev_rss_hash_conf_get,
- .rss_hash_update = lio_dev_rss_hash_update,
- .udp_tunnel_port_add = lio_dev_udp_tunnel_add,
- .udp_tunnel_port_del = lio_dev_udp_tunnel_del,
- .mtu_set = lio_dev_mtu_set,
-};
-
-static void
-lio_check_pf_hs_response(void *lio_dev)
-{
- struct lio_device *dev = lio_dev;
-
- /* check till response arrives */
- if (dev->pfvf_hsword.coproc_tics_per_us)
- return;
-
- cn23xx_vf_handle_mbox(dev);
-
- rte_eal_alarm_set(1, lio_check_pf_hs_response, lio_dev);
-}
-
-/**
- * \brief Identify the LIO device and map the BAR address space
- * @param lio_dev lio device
- */
-static int
-lio_chip_specific_setup(struct lio_device *lio_dev)
-{
- struct rte_pci_device *pdev = lio_dev->pci_dev;
- uint32_t dev_id = pdev->id.device_id;
- const char *s;
- int ret = 1;
-
- switch (dev_id) {
- case LIO_CN23XX_VF_VID:
- lio_dev->chip_id = LIO_CN23XX_VF_VID;
- ret = cn23xx_vf_setup_device(lio_dev);
- s = "CN23XX VF";
- break;
- default:
- s = "?";
- lio_dev_err(lio_dev, "Unsupported Chip\n");
- }
-
- if (!ret)
- lio_dev_info(lio_dev, "DEVICE : %s\n", s);
-
- return ret;
-}
-
-static int
-lio_first_time_init(struct lio_device *lio_dev,
- struct rte_pci_device *pdev)
-{
- int dpdk_queues;
-
- PMD_INIT_FUNC_TRACE();
-
- /* set dpdk specific pci device pointer */
- lio_dev->pci_dev = pdev;
-
- /* Identify the LIO type and set device ops */
- if (lio_chip_specific_setup(lio_dev)) {
- lio_dev_err(lio_dev, "Chip specific setup failed\n");
- return -1;
- }
-
- /* Initialize soft command buffer pool */
- if (lio_setup_sc_buffer_pool(lio_dev)) {
- lio_dev_err(lio_dev, "sc buffer pool allocation failed\n");
- return -1;
- }
-
- /* Initialize lists to manage the requests of different types that
- * arrive from applications for this lio device.
- */
- lio_setup_response_list(lio_dev);
-
- if (lio_dev->fn_list.setup_mbox(lio_dev)) {
- lio_dev_err(lio_dev, "Mailbox setup failed\n");
- goto error;
- }
-
- /* Check PF response */
- lio_check_pf_hs_response((void *)lio_dev);
-
- /* Do handshake and exit if incompatible PF driver */
- if (cn23xx_pfvf_handshake(lio_dev))
- goto error;
-
- /* Request and wait for device reset. */
- if (pdev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
- cn23xx_vf_ask_pf_to_do_flr(lio_dev);
- /* FLR wait time doubled as a precaution. */
- rte_delay_ms(LIO_PCI_FLR_WAIT * 2);
- }
-
- if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
- lio_dev_err(lio_dev, "Failed to configure device registers\n");
- goto error;
- }
-
- if (lio_setup_instr_queue0(lio_dev)) {
- lio_dev_err(lio_dev, "Failed to setup instruction queue 0\n");
- goto error;
- }
-
- dpdk_queues = (int)lio_dev->sriov_info.rings_per_vf;
-
- lio_dev->max_tx_queues = dpdk_queues;
- lio_dev->max_rx_queues = dpdk_queues;
-
- /* Enable input and output queues for this device */
- if (lio_dev->fn_list.enable_io_queues(lio_dev))
- goto error;
-
- return 0;
-
-error:
- lio_free_sc_buffer_pool(lio_dev);
- if (lio_dev->mbox[0])
- lio_dev->fn_list.free_mbox(lio_dev);
- if (lio_dev->instr_queue[0])
- lio_free_instr_queue0(lio_dev);
-
- return -1;
-}
-
-static int
-lio_eth_dev_uninit(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- PMD_INIT_FUNC_TRACE();
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* lio_free_sc_buffer_pool */
- lio_free_sc_buffer_pool(lio_dev);
-
- return 0;
-}
-
-static int
-lio_eth_dev_init(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- PMD_INIT_FUNC_TRACE();
-
- eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
- eth_dev->tx_pkt_burst = &lio_dev_xmit_pkts;
-
- /* Primary does the initialization. */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- rte_eth_copy_pci_info(eth_dev, pdev);
-
- if (pdev->mem_resource[0].addr) {
- lio_dev->hw_addr = pdev->mem_resource[0].addr;
- } else {
- PMD_INIT_LOG(ERR, "ERROR: Failed to map BAR0\n");
- return -ENODEV;
- }
-
- lio_dev->eth_dev = eth_dev;
- /* set lio device print string */
- snprintf(lio_dev->dev_string, sizeof(lio_dev->dev_string),
- "%s[%02x:%02x.%x]", pdev->driver->driver.name,
- pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
-
- lio_dev->port_id = eth_dev->data->port_id;
-
- if (lio_first_time_init(lio_dev, pdev)) {
- lio_dev_err(lio_dev, "Device init failed\n");
- return -EINVAL;
- }
-
- eth_dev->dev_ops = &liovf_eth_dev_ops;
- eth_dev->data->mac_addrs = rte_zmalloc("lio", RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL) {
- lio_dev_err(lio_dev,
- "MAC addresses memory allocation failed\n");
- eth_dev->dev_ops = NULL;
- eth_dev->rx_pkt_burst = NULL;
- eth_dev->tx_pkt_burst = NULL;
- return -ENOMEM;
- }
-
- rte_atomic64_set(&lio_dev->status, LIO_DEV_RUNNING);
- rte_wmb();
-
- lio_dev->port_configured = 0;
- /* Always allow unicast packets */
- lio_dev->ifflags |= LIO_IFFLAG_UNICAST;
-
- return 0;
-}
-
-static int
-lio_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
- struct rte_pci_device *pci_dev)
-{
- return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct lio_device),
- lio_eth_dev_init);
-}
-
-static int
-lio_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
-{
- return rte_eth_dev_pci_generic_remove(pci_dev,
- lio_eth_dev_uninit);
-}
-
-/* Set of PCI devices this driver supports */
-static const struct rte_pci_id pci_id_liovf_map[] = {
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, LIO_CN23XX_VF_VID) },
- { .vendor_id = 0, /* sentinel */ }
-};
-
-static struct rte_pci_driver rte_liovf_pmd = {
- .id_table = pci_id_liovf_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = lio_eth_dev_pci_probe,
- .remove = lio_eth_dev_pci_remove,
-};
-
-RTE_PMD_REGISTER_PCI(net_liovf, rte_liovf_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(net_liovf, pci_id_liovf_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_liovf, "* igb_uio | vfio-pci");
-RTE_LOG_REGISTER_SUFFIX(lio_logtype_init, init, NOTICE);
-RTE_LOG_REGISTER_SUFFIX(lio_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/liquidio/lio_ethdev.h b/drivers/net/liquidio/lio_ethdev.h
deleted file mode 100644
index ece2b03858..0000000000
--- a/drivers/net/liquidio/lio_ethdev.h
+++ /dev/null
@@ -1,179 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_ETHDEV_H_
-#define _LIO_ETHDEV_H_
-
-#include <stdint.h>
-
-#include "lio_struct.h"
-
-/* timeout to check link state updates from firmware in us */
-#define LIO_LSC_TIMEOUT 100000 /* 100000us (100ms) */
-#define LIO_MAX_CMD_TIMEOUT 10000 /* 10000ms (10s) */
-
-/* The max frame size with default MTU */
-#define LIO_ETH_MAX_LEN (RTE_ETHER_MTU + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
-
-#define LIO_DEV(_eth_dev) ((_eth_dev)->data->dev_private)
-
-/* LIO Response condition variable */
-struct lio_dev_ctrl_cmd {
- struct rte_eth_dev *eth_dev;
- uint64_t cond;
-};
-
-enum lio_bus_speed {
- LIO_LINK_SPEED_UNKNOWN = 0,
- LIO_LINK_SPEED_10000 = 10000,
- LIO_LINK_SPEED_25000 = 25000
-};
-
-struct octeon_if_cfg_info {
- uint64_t iqmask; /** mask for IQs enabled for the port */
- uint64_t oqmask; /** mask for OQs enabled for the port */
- struct octeon_link_info linfo; /** initial link information */
- char lio_firmware_version[LIO_FW_VERSION_LENGTH];
-};
-
-/** Stats for each NIC port in RX direction. */
-struct octeon_rx_stats {
- /* link-level stats */
- uint64_t total_rcvd;
- uint64_t bytes_rcvd;
- uint64_t total_bcst;
- uint64_t total_mcst;
- uint64_t runts;
- uint64_t ctl_rcvd;
- uint64_t fifo_err; /* Accounts for over/under-run of buffers */
- uint64_t dmac_drop;
- uint64_t fcs_err;
- uint64_t jabber_err;
- uint64_t l2_err;
- uint64_t frame_err;
-
- /* firmware stats */
- uint64_t fw_total_rcvd;
- uint64_t fw_total_fwd;
- uint64_t fw_total_fwd_bytes;
- uint64_t fw_err_pko;
- uint64_t fw_err_link;
- uint64_t fw_err_drop;
- uint64_t fw_rx_vxlan;
- uint64_t fw_rx_vxlan_err;
-
- /* LRO */
- uint64_t fw_lro_pkts; /* Number of packets that are LROed */
- uint64_t fw_lro_octs; /* Number of octets that are LROed */
- uint64_t fw_total_lro; /* Number of LRO packets formed */
-	uint64_t fw_lro_aborts; /* Number of times LRO of packet aborted */
- uint64_t fw_lro_aborts_port;
- uint64_t fw_lro_aborts_seq;
- uint64_t fw_lro_aborts_tsval;
- uint64_t fw_lro_aborts_timer;
- /* intrmod: packet forward rate */
- uint64_t fwd_rate;
-};
-
-/** Stats for each NIC port in TX direction. */
-struct octeon_tx_stats {
- /* link-level stats */
- uint64_t total_pkts_sent;
- uint64_t total_bytes_sent;
- uint64_t mcast_pkts_sent;
- uint64_t bcast_pkts_sent;
- uint64_t ctl_sent;
- uint64_t one_collision_sent; /* Packets sent after one collision */
- /* Packets sent after multiple collision */
- uint64_t multi_collision_sent;
- /* Packets not sent due to max collisions */
- uint64_t max_collision_fail;
- /* Packets not sent due to max deferrals */
- uint64_t max_deferral_fail;
- /* Accounts for over/under-run of buffers */
- uint64_t fifo_err;
- uint64_t runts;
- uint64_t total_collisions; /* Total number of collisions detected */
-
- /* firmware stats */
- uint64_t fw_total_sent;
- uint64_t fw_total_fwd;
- uint64_t fw_total_fwd_bytes;
- uint64_t fw_err_pko;
- uint64_t fw_err_link;
- uint64_t fw_err_drop;
- uint64_t fw_err_tso;
- uint64_t fw_tso; /* number of tso requests */
- uint64_t fw_tso_fwd; /* number of packets segmented in tso */
- uint64_t fw_tx_vxlan;
-};
-
-struct octeon_link_stats {
- struct octeon_rx_stats fromwire;
- struct octeon_tx_stats fromhost;
-};
-
-union lio_if_cfg {
- uint64_t if_cfg64;
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t base_queue : 16;
- uint64_t num_iqueues : 16;
- uint64_t num_oqueues : 16;
- uint64_t gmx_port_id : 8;
- uint64_t vf_id : 8;
-#else
- uint64_t vf_id : 8;
- uint64_t gmx_port_id : 8;
- uint64_t num_oqueues : 16;
- uint64_t num_iqueues : 16;
- uint64_t base_queue : 16;
-#endif
- } s;
-};
-
-struct lio_if_cfg_resp {
- uint64_t rh;
- struct octeon_if_cfg_info cfg_info;
- uint64_t status;
-};
-
-struct lio_link_stats_resp {
- uint64_t rh;
- struct octeon_link_stats link_stats;
- uint64_t status;
-};
-
-struct lio_link_status_resp {
- uint64_t rh;
- struct octeon_link_info link_info;
- uint64_t status;
-};
-
-struct lio_rss_set {
- struct param {
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- uint64_t flags : 16;
- uint64_t hashinfo : 32;
- uint64_t itablesize : 16;
- uint64_t hashkeysize : 16;
- uint64_t reserved : 48;
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t itablesize : 16;
- uint64_t hashinfo : 32;
- uint64_t flags : 16;
- uint64_t reserved : 48;
- uint64_t hashkeysize : 16;
-#endif
- } param;
-
- uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
- uint8_t key[LIO_RSS_MAX_KEY_SZ];
-};
-
-void lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
-
-void lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
-
-#endif /* _LIO_ETHDEV_H_ */
diff --git a/drivers/net/liquidio/lio_logs.h b/drivers/net/liquidio/lio_logs.h
deleted file mode 100644
index f227827081..0000000000
--- a/drivers/net/liquidio/lio_logs.h
+++ /dev/null
@@ -1,58 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_LOGS_H_
-#define _LIO_LOGS_H_
-
-extern int lio_logtype_driver;
-#define lio_dev_printf(lio_dev, level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, lio_logtype_driver, \
- "%s" fmt, (lio_dev)->dev_string, ##args)
-
-#define lio_dev_info(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, INFO, "INFO: " fmt, ##args)
-
-#define lio_dev_err(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, ERR, "ERROR: %s() " fmt, __func__, ##args)
-
-extern int lio_logtype_init;
-#define PMD_INIT_LOG(level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, lio_logtype_init, \
- fmt, ## args)
-
-/* Enable these through config options */
-#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, "%s() >>\n", __func__)
-
-#define lio_dev_dbg(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, DEBUG, "DEBUG: %s() " fmt, __func__, ##args)
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_RX
-#define PMD_RX_LOG(lio_dev, level, fmt, args...) \
- lio_dev_printf(lio_dev, level, "RX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_RX */
-#define PMD_RX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_RX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_TX
-#define PMD_TX_LOG(lio_dev, level, fmt, args...) \
- lio_dev_printf(lio_dev, level, "TX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_TX */
-#define PMD_TX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_TX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_MBOX
-#define PMD_MBOX_LOG(lio_dev, level, fmt, args...) \
- lio_dev_printf(lio_dev, level, "MBOX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_MBOX */
-#define PMD_MBOX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_MBOX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
-#define PMD_REGS_LOG(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, DEBUG, "REGS: " fmt, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_REGS */
-#define PMD_REGS_LOG(lio_dev, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_REGS */
-
-#endif /* _LIO_LOGS_H_ */
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
deleted file mode 100644
index e09798ddd7..0000000000
--- a/drivers/net/liquidio/lio_rxtx.c
+++ /dev/null
@@ -1,1804 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "lio_logs.h"
-#include "lio_struct.h"
-#include "lio_ethdev.h"
-#include "lio_rxtx.h"
-
-#define LIO_MAX_SG 12
-/* Flush iq if available tx_desc fall below LIO_FLUSH_WM */
-#define LIO_FLUSH_WM(_iq) ((_iq)->nb_desc / 2)
-#define LIO_PKT_IN_DONE_CNT_MASK 0x00000000FFFFFFFFULL
-
-static void
-lio_droq_compute_max_packet_bufs(struct lio_droq *droq)
-{
- uint32_t count = 0;
-
- do {
- count += droq->buffer_size;
- } while (count < LIO_MAX_RX_PKTLEN);
-}
-
-static void
-lio_droq_reset_indices(struct lio_droq *droq)
-{
- droq->read_idx = 0;
- droq->write_idx = 0;
- droq->refill_idx = 0;
- droq->refill_count = 0;
- rte_atomic64_set(&droq->pkts_pending, 0);
-}
-
-static void
-lio_droq_destroy_ring_buffers(struct lio_droq *droq)
-{
- uint32_t i;
-
- for (i = 0; i < droq->nb_desc; i++) {
- if (droq->recv_buf_list[i].buffer) {
- rte_pktmbuf_free((struct rte_mbuf *)
- droq->recv_buf_list[i].buffer);
- droq->recv_buf_list[i].buffer = NULL;
- }
- }
-
- lio_droq_reset_indices(droq);
-}
-
-static int
-lio_droq_setup_ring_buffers(struct lio_device *lio_dev,
- struct lio_droq *droq)
-{
- struct lio_droq_desc *desc_ring = droq->desc_ring;
- uint32_t i;
- void *buf;
-
- for (i = 0; i < droq->nb_desc; i++) {
- buf = rte_pktmbuf_alloc(droq->mpool);
- if (buf == NULL) {
- lio_dev_err(lio_dev, "buffer alloc failed\n");
- droq->stats.rx_alloc_failure++;
- lio_droq_destroy_ring_buffers(droq);
- return -ENOMEM;
- }
-
- droq->recv_buf_list[i].buffer = buf;
- droq->info_list[i].length = 0;
-
- /* map ring buffers into memory */
- desc_ring[i].info_ptr = lio_map_ring_info(droq, i);
- desc_ring[i].buffer_ptr =
- lio_map_ring(droq->recv_buf_list[i].buffer);
- }
-
- lio_droq_reset_indices(droq);
-
- lio_droq_compute_max_packet_bufs(droq);
-
- return 0;
-}
-
-static void
-lio_dma_zone_free(struct lio_device *lio_dev, const struct rte_memzone *mz)
-{
- const struct rte_memzone *mz_tmp;
- int ret = 0;
-
- if (mz == NULL) {
- lio_dev_err(lio_dev, "Memzone NULL\n");
- return;
- }
-
- mz_tmp = rte_memzone_lookup(mz->name);
- if (mz_tmp == NULL) {
- lio_dev_err(lio_dev, "Memzone %s Not Found\n", mz->name);
- return;
- }
-
- ret = rte_memzone_free(mz);
- if (ret)
- lio_dev_err(lio_dev, "Memzone free Failed ret %d\n", ret);
-}
-
-/**
- * Frees the space for descriptor ring for the droq.
- *
- * @param lio_dev - pointer to the lio device structure
- * @param q_no - droq no.
- */
-static void
-lio_delete_droq(struct lio_device *lio_dev, uint32_t q_no)
-{
- struct lio_droq *droq = lio_dev->droq[q_no];
-
- lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
-
- lio_droq_destroy_ring_buffers(droq);
- rte_free(droq->recv_buf_list);
- droq->recv_buf_list = NULL;
- lio_dma_zone_free(lio_dev, droq->info_mz);
- lio_dma_zone_free(lio_dev, droq->desc_ring_mz);
-
- memset(droq, 0, LIO_DROQ_SIZE);
-}
-
-static void *
-lio_alloc_info_buffer(struct lio_device *lio_dev,
- struct lio_droq *droq, unsigned int socket_id)
-{
- droq->info_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
- "info_list", droq->q_no,
- (droq->nb_desc *
- LIO_DROQ_INFO_SIZE),
- RTE_CACHE_LINE_SIZE,
- socket_id);
-
- if (droq->info_mz == NULL)
- return NULL;
-
- droq->info_list_dma = droq->info_mz->iova;
- droq->info_alloc_size = droq->info_mz->len;
- droq->info_base_addr = (size_t)droq->info_mz->addr;
-
- return droq->info_mz->addr;
-}
-
-/**
- * Allocates space for the descriptor ring for the droq and
- * sets the base addr, num desc etc in Octeon registers.
- *
- * @param lio_dev - pointer to the lio device structure
- * @param q_no - droq no.
- * @param app_ctx - pointer to application context
- * @return Success: 0 Failure: -1
- */
-static int
-lio_init_droq(struct lio_device *lio_dev, uint32_t q_no,
- uint32_t num_descs, uint32_t desc_size,
- struct rte_mempool *mpool, unsigned int socket_id)
-{
- uint32_t c_refill_threshold;
- uint32_t desc_ring_size;
- struct lio_droq *droq;
-
- lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
-
- droq = lio_dev->droq[q_no];
- droq->lio_dev = lio_dev;
- droq->q_no = q_no;
- droq->mpool = mpool;
-
- c_refill_threshold = LIO_OQ_REFILL_THRESHOLD_CFG(lio_dev);
-
- droq->nb_desc = num_descs;
- droq->buffer_size = desc_size;
-
- desc_ring_size = droq->nb_desc * LIO_DROQ_DESC_SIZE;
- droq->desc_ring_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
- "droq", q_no,
- desc_ring_size,
- RTE_CACHE_LINE_SIZE,
- socket_id);
-
- if (droq->desc_ring_mz == NULL) {
- lio_dev_err(lio_dev,
- "Output queue %d ring alloc failed\n", q_no);
- return -1;
- }
-
- droq->desc_ring_dma = droq->desc_ring_mz->iova;
- droq->desc_ring = (struct lio_droq_desc *)droq->desc_ring_mz->addr;
-
- lio_dev_dbg(lio_dev, "droq[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
- q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
- lio_dev_dbg(lio_dev, "droq[%d]: num_desc: %d\n", q_no,
- droq->nb_desc);
-
- droq->info_list = lio_alloc_info_buffer(lio_dev, droq, socket_id);
- if (droq->info_list == NULL) {
- lio_dev_err(lio_dev, "Cannot allocate memory for info list.\n");
- goto init_droq_fail;
- }
-
- droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
- (droq->nb_desc *
- LIO_DROQ_RECVBUF_SIZE),
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (droq->recv_buf_list == NULL) {
- lio_dev_err(lio_dev,
- "Output queue recv buf list alloc failed\n");
- goto init_droq_fail;
- }
-
- if (lio_droq_setup_ring_buffers(lio_dev, droq))
- goto init_droq_fail;
-
- droq->refill_threshold = c_refill_threshold;
-
- rte_spinlock_init(&droq->lock);
-
- lio_dev->fn_list.setup_oq_regs(lio_dev, q_no);
-
- lio_dev->io_qmask.oq |= (1ULL << q_no);
-
- return 0;
-
-init_droq_fail:
- lio_delete_droq(lio_dev, q_no);
-
- return -1;
-}
-
-int
-lio_setup_droq(struct lio_device *lio_dev, int oq_no, int num_descs,
- int desc_size, struct rte_mempool *mpool, unsigned int socket_id)
-{
- struct lio_droq *droq;
-
- PMD_INIT_FUNC_TRACE();
-
- /* Allocate the DS for the new droq. */
- droq = rte_zmalloc_socket("ethdev RX queue", sizeof(*droq),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (droq == NULL)
- return -ENOMEM;
-
- lio_dev->droq[oq_no] = droq;
-
- /* Initialize the Droq */
- if (lio_init_droq(lio_dev, oq_no, num_descs, desc_size, mpool,
- socket_id)) {
- lio_dev_err(lio_dev, "Droq[%u] Initialization Failed\n", oq_no);
- rte_free(lio_dev->droq[oq_no]);
- lio_dev->droq[oq_no] = NULL;
- return -ENOMEM;
- }
-
- lio_dev->num_oqs++;
-
- lio_dev_dbg(lio_dev, "Total number of OQ: %d\n", lio_dev->num_oqs);
-
- /* Send credit for octeon output queues. credits are always
- * sent after the output queue is enabled.
- */
- rte_write32(lio_dev->droq[oq_no]->nb_desc,
- lio_dev->droq[oq_no]->pkts_credit_reg);
- rte_wmb();
-
- return 0;
-}
-
-static inline uint32_t
-lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
-{
- uint32_t buf_cnt = 0;
-
- while (total_len > (buf_size * buf_cnt))
- buf_cnt++;
-
- return buf_cnt;
-}
-
-/* If we were not able to refill all buffers, try to move around
- * the buffers that were not dispatched.
- */
-static inline uint32_t
-lio_droq_refill_pullup_descs(struct lio_droq *droq,
- struct lio_droq_desc *desc_ring)
-{
- uint32_t refill_index = droq->refill_idx;
- uint32_t desc_refilled = 0;
-
- while (refill_index != droq->read_idx) {
- if (droq->recv_buf_list[refill_index].buffer) {
- droq->recv_buf_list[droq->refill_idx].buffer =
- droq->recv_buf_list[refill_index].buffer;
- desc_ring[droq->refill_idx].buffer_ptr =
- desc_ring[refill_index].buffer_ptr;
- droq->recv_buf_list[refill_index].buffer = NULL;
- desc_ring[refill_index].buffer_ptr = 0;
- do {
- droq->refill_idx = lio_incr_index(
- droq->refill_idx, 1,
- droq->nb_desc);
- desc_refilled++;
- droq->refill_count--;
- } while (droq->recv_buf_list[droq->refill_idx].buffer);
- }
- refill_index = lio_incr_index(refill_index, 1,
- droq->nb_desc);
- } /* while */
-
- return desc_refilled;
-}
-
-/* lio_droq_refill
- *
- * @param droq - droq in which descriptors require new buffers.
- *
- * Description:
- * Called during normal DROQ processing in interrupt mode or by the poll
- * thread to refill the descriptors from which buffers were dispatched
- * to upper layers. Attempts to allocate new buffers. If that fails, moves
- * up buffers (that were not dispatched) to form a contiguous ring.
- *
- * Returns:
- * No of descriptors refilled.
- *
- * Locks:
- * This routine is called with droq->lock held.
- */
-static uint32_t
-lio_droq_refill(struct lio_droq *droq)
-{
- struct lio_droq_desc *desc_ring;
- uint32_t desc_refilled = 0;
- void *buf = NULL;
-
- desc_ring = droq->desc_ring;
-
- while (droq->refill_count && (desc_refilled < droq->nb_desc)) {
- /* If a valid buffer exists (happens if there is no dispatch),
- * reuse the buffer, else allocate.
- */
- if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
- buf = rte_pktmbuf_alloc(droq->mpool);
- /* If a buffer could not be allocated, no point in
- * continuing
- */
- if (buf == NULL) {
- droq->stats.rx_alloc_failure++;
- break;
- }
-
- droq->recv_buf_list[droq->refill_idx].buffer = buf;
- }
-
- desc_ring[droq->refill_idx].buffer_ptr =
- lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
- /* Reset any previous values in the length field. */
- droq->info_list[droq->refill_idx].length = 0;
-
- droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
- droq->nb_desc);
- desc_refilled++;
- droq->refill_count--;
- }
-
- if (droq->refill_count)
- desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
-
- /* If droq->refill_count is still non-zero here:
- * the refill count does not change in pass two. We only moved buffers
- * to close the gap in the ring, so the same number of buffers still
- * needs to be refilled.
- */
- return desc_refilled;
-}
-
-static int
-lio_droq_fast_process_packet(struct lio_device *lio_dev,
- struct lio_droq *droq,
- struct rte_mbuf **rx_pkts)
-{
- struct rte_mbuf *nicbuf = NULL;
- struct lio_droq_info *info;
- uint32_t total_len = 0;
- int data_total_len = 0;
- uint32_t pkt_len = 0;
- union octeon_rh *rh;
- int data_pkts = 0;
-
- info = &droq->info_list[droq->read_idx];
- lio_swap_8B_data((uint64_t *)info, 2);
-
- if (!info->length)
- return -1;
-
- /* Len of resp hdr is included in the received data len. */
- info->length -= OCTEON_RH_SIZE;
- rh = &info->rh;
-
- total_len += (uint32_t)info->length;
-
- if (lio_opcode_slow_path(rh)) {
- uint32_t buf_cnt;
-
- buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
- (uint32_t)info->length);
- droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
- droq->nb_desc);
- droq->refill_count += buf_cnt;
- } else {
- if (info->length <= droq->buffer_size) {
- if (rh->r_dh.has_hash)
- pkt_len = (uint32_t)(info->length - 8);
- else
- pkt_len = (uint32_t)info->length;
-
- nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
- droq->recv_buf_list[droq->read_idx].buffer = NULL;
- droq->read_idx = lio_incr_index(
- droq->read_idx, 1,
- droq->nb_desc);
- droq->refill_count++;
-
- if (likely(nicbuf != NULL)) {
- /* We don't have a way to pass flags yet */
- nicbuf->ol_flags = 0;
- if (rh->r_dh.has_hash) {
- uint64_t *hash_ptr;
-
- nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
- hash_ptr = rte_pktmbuf_mtod(nicbuf,
- uint64_t *);
- lio_swap_8B_data(hash_ptr, 1);
- nicbuf->hash.rss = (uint32_t)*hash_ptr;
- nicbuf->data_off += 8;
- }
-
- nicbuf->pkt_len = pkt_len;
- nicbuf->data_len = pkt_len;
- nicbuf->port = lio_dev->port_id;
- /* Store the mbuf */
- rx_pkts[data_pkts++] = nicbuf;
- data_total_len += pkt_len;
- }
-
- /* Prefetch buffer pointers when on a cache line
- * boundary
- */
- if ((droq->read_idx & 3) == 0) {
- rte_prefetch0(
- &droq->recv_buf_list[droq->read_idx]);
- rte_prefetch0(
- &droq->info_list[droq->read_idx]);
- }
- } else {
- struct rte_mbuf *first_buf = NULL;
- struct rte_mbuf *last_buf = NULL;
-
- while (pkt_len < info->length) {
- int cpy_len = 0;
-
- cpy_len = ((pkt_len + droq->buffer_size) >
- info->length)
- ? ((uint32_t)info->length -
- pkt_len)
- : droq->buffer_size;
-
- nicbuf =
- droq->recv_buf_list[droq->read_idx].buffer;
- droq->recv_buf_list[droq->read_idx].buffer =
- NULL;
-
- if (likely(nicbuf != NULL)) {
- /* Note the first seg */
- if (!pkt_len)
- first_buf = nicbuf;
-
- nicbuf->port = lio_dev->port_id;
- /* We don't have a way to pass
- * flags yet
- */
- nicbuf->ol_flags = 0;
- if ((!pkt_len) && (rh->r_dh.has_hash)) {
- uint64_t *hash_ptr;
-
- nicbuf->ol_flags |=
- RTE_MBUF_F_RX_RSS_HASH;
- hash_ptr = rte_pktmbuf_mtod(
- nicbuf, uint64_t *);
- lio_swap_8B_data(hash_ptr, 1);
- nicbuf->hash.rss =
- (uint32_t)*hash_ptr;
- nicbuf->data_off += 8;
- nicbuf->pkt_len = cpy_len - 8;
- nicbuf->data_len = cpy_len - 8;
- } else {
- nicbuf->pkt_len = cpy_len;
- nicbuf->data_len = cpy_len;
- }
-
- if (pkt_len)
- first_buf->nb_segs++;
-
- if (last_buf)
- last_buf->next = nicbuf;
-
- last_buf = nicbuf;
- } else {
- PMD_RX_LOG(lio_dev, ERR, "no buf\n");
- }
-
- pkt_len += cpy_len;
- droq->read_idx = lio_incr_index(
- droq->read_idx,
- 1, droq->nb_desc);
- droq->refill_count++;
-
- /* Prefetch buffer pointers when on a
- * cache line boundary
- */
- if ((droq->read_idx & 3) == 0) {
- rte_prefetch0(&droq->recv_buf_list
- [droq->read_idx]);
-
- rte_prefetch0(
- &droq->info_list[droq->read_idx]);
- }
- }
- rx_pkts[data_pkts++] = first_buf;
- if (rh->r_dh.has_hash)
- data_total_len += (pkt_len - 8);
- else
- data_total_len += pkt_len;
- }
-
- /* Inform upper layer about packet checksum verification */
- struct rte_mbuf *m = rx_pkts[data_pkts - 1];
-
- if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
- m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
- if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
- m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
-
- if (droq->refill_count >= droq->refill_threshold) {
- int desc_refilled = lio_droq_refill(droq);
-
- /* Flush the droq descriptor data to memory to be sure
- * that when we update the credits the data in memory is
- * accurate.
- */
- rte_wmb();
- rte_write32(desc_refilled, droq->pkts_credit_reg);
- /* make sure mmio write completes */
- rte_wmb();
- }
-
- info->length = 0;
- info->rh.rh64 = 0;
-
- droq->stats.pkts_received++;
- droq->stats.rx_pkts_received += data_pkts;
- droq->stats.rx_bytes_received += data_total_len;
- droq->stats.bytes_received += total_len;
-
- return data_pkts;
-}
-
-static uint32_t
-lio_droq_fast_process_packets(struct lio_device *lio_dev,
- struct lio_droq *droq,
- struct rte_mbuf **rx_pkts,
- uint32_t pkts_to_process)
-{
- int ret, data_pkts = 0;
- uint32_t pkt;
-
- for (pkt = 0; pkt < pkts_to_process; pkt++) {
- ret = lio_droq_fast_process_packet(lio_dev, droq,
- &rx_pkts[data_pkts]);
- if (ret < 0) {
- lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
- lio_dev->port_id, droq->q_no,
- droq->read_idx, pkts_to_process);
- break;
- }
- data_pkts += ret;
- }
-
- rte_atomic64_sub(&droq->pkts_pending, pkt);
-
- return data_pkts;
-}
-
-static inline uint32_t
-lio_droq_check_hw_for_pkts(struct lio_droq *droq)
-{
- uint32_t last_count;
- uint32_t pkt_count;
-
- pkt_count = rte_read32(droq->pkts_sent_reg);
-
- last_count = pkt_count - droq->pkt_count;
- droq->pkt_count = pkt_count;
-
- if (last_count)
- rte_atomic64_add(&droq->pkts_pending, last_count);
-
- return last_count;
-}
-
-uint16_t
-lio_dev_recv_pkts(void *rx_queue,
- struct rte_mbuf **rx_pkts,
- uint16_t budget)
-{
- struct lio_droq *droq = rx_queue;
- struct lio_device *lio_dev = droq->lio_dev;
- uint32_t pkts_processed = 0;
- uint32_t pkt_count = 0;
-
- lio_droq_check_hw_for_pkts(droq);
-
- pkt_count = rte_atomic64_read(&droq->pkts_pending);
- if (!pkt_count)
- return 0;
-
- if (pkt_count > budget)
- pkt_count = budget;
-
- /* Grab the lock */
- rte_spinlock_lock(&droq->lock);
- pkts_processed = lio_droq_fast_process_packets(lio_dev,
- droq, rx_pkts,
- pkt_count);
-
- if (droq->pkt_count) {
- rte_write32(droq->pkt_count, droq->pkts_sent_reg);
- droq->pkt_count = 0;
- }
-
- /* Release the spin lock */
- rte_spinlock_unlock(&droq->lock);
-
- return pkts_processed;
-}
-
-void
-lio_delete_droq_queue(struct lio_device *lio_dev,
- int oq_no)
-{
- lio_delete_droq(lio_dev, oq_no);
- lio_dev->num_oqs--;
- rte_free(lio_dev->droq[oq_no]);
- lio_dev->droq[oq_no] = NULL;
-}
-
-/**
- * lio_init_instr_queue()
- * @param lio_dev - pointer to the lio device structure.
- * @param txpciq - queue to be initialized.
- *
- * Called at driver init time for each input queue. The queue configuration
- * parameters are taken from the lio device configuration.
- *
- * @return Success: 0 Failure: -1
- */
-static int
-lio_init_instr_queue(struct lio_device *lio_dev,
- union octeon_txpciq txpciq,
- uint32_t num_descs, unsigned int socket_id)
-{
- uint32_t iq_no = (uint32_t)txpciq.s.q_no;
- struct lio_instr_queue *iq;
- uint32_t instr_type;
- uint32_t q_size;
-
- instr_type = LIO_IQ_INSTR_TYPE(lio_dev);
-
- q_size = instr_type * num_descs;
- iq = lio_dev->instr_queue[iq_no];
- iq->iq_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
- "instr_queue", iq_no, q_size,
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (iq->iq_mz == NULL) {
- lio_dev_err(lio_dev, "Cannot allocate memory for instr queue %d\n",
- iq_no);
- return -1;
- }
-
- iq->base_addr_dma = iq->iq_mz->iova;
- iq->base_addr = (uint8_t *)iq->iq_mz->addr;
-
- iq->nb_desc = num_descs;
-
- /* Initialize a list to hold requests that have been posted to Octeon
- * but have yet to be fetched by Octeon
- */
- iq->request_list = rte_zmalloc_socket("request_list",
- sizeof(*iq->request_list) *
- num_descs,
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (iq->request_list == NULL) {
- lio_dev_err(lio_dev, "Alloc failed for IQ[%d] nr free list\n",
- iq_no);
- lio_dma_zone_free(lio_dev, iq->iq_mz);
- return -1;
- }
-
- lio_dev_dbg(lio_dev, "IQ[%d]: base: %p basedma: %lx count: %d\n",
- iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
- iq->nb_desc);
-
- iq->lio_dev = lio_dev;
- iq->txpciq.txpciq64 = txpciq.txpciq64;
- iq->fill_cnt = 0;
- iq->host_write_index = 0;
- iq->lio_read_index = 0;
- iq->flush_index = 0;
-
- rte_atomic64_set(&iq->instr_pending, 0);
-
- /* Initialize the spinlock for this instruction queue */
- rte_spinlock_init(&iq->lock);
- rte_spinlock_init(&iq->post_lock);
-
- rte_atomic64_clear(&iq->iq_flush_running);
-
- lio_dev->io_qmask.iq |= (1ULL << iq_no);
-
- /* Set the 32B/64B mode for each input queue */
- lio_dev->io_qmask.iq64B |= ((instr_type == 64) << iq_no);
- iq->iqcmd_64B = (instr_type == 64);
-
- lio_dev->fn_list.setup_iq_regs(lio_dev, iq_no);
-
- return 0;
-}
-
-int
-lio_setup_instr_queue0(struct lio_device *lio_dev)
-{
- union octeon_txpciq txpciq;
- uint32_t num_descs = 0;
- uint32_t iq_no = 0;
-
- num_descs = LIO_NUM_DEF_TX_DESCS_CFG(lio_dev);
-
- lio_dev->num_iqs = 0;
-
- lio_dev->instr_queue[0] = rte_zmalloc(NULL,
- sizeof(struct lio_instr_queue), 0);
- if (lio_dev->instr_queue[0] == NULL)
- return -ENOMEM;
-
- lio_dev->instr_queue[0]->q_index = 0;
- lio_dev->instr_queue[0]->app_ctx = (void *)(size_t)0;
- txpciq.txpciq64 = 0;
- txpciq.s.q_no = iq_no;
- txpciq.s.pkind = lio_dev->pfvf_hsword.pkind;
- txpciq.s.use_qpg = 0;
- txpciq.s.qpg = 0;
- if (lio_init_instr_queue(lio_dev, txpciq, num_descs, SOCKET_ID_ANY)) {
- rte_free(lio_dev->instr_queue[0]);
- lio_dev->instr_queue[0] = NULL;
- return -1;
- }
-
- lio_dev->num_iqs++;
-
- return 0;
-}
-
-/**
- * lio_delete_instr_queue()
- * @param lio_dev - pointer to the lio device structure.
- * @param iq_no - queue to be deleted.
- *
- * Called at driver unload time for each input queue. Deletes all
- * allocated resources for the input queue.
- */
-static void
-lio_delete_instr_queue(struct lio_device *lio_dev, uint32_t iq_no)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
-
- rte_free(iq->request_list);
- iq->request_list = NULL;
- lio_dma_zone_free(lio_dev, iq->iq_mz);
-}
-
-void
-lio_free_instr_queue0(struct lio_device *lio_dev)
-{
- lio_delete_instr_queue(lio_dev, 0);
- rte_free(lio_dev->instr_queue[0]);
- lio_dev->instr_queue[0] = NULL;
- lio_dev->num_iqs--;
-}
-
-/* Return 0 on success, -1 on failure */
-int
-lio_setup_iq(struct lio_device *lio_dev, int q_index,
- union octeon_txpciq txpciq, uint32_t num_descs, void *app_ctx,
- unsigned int socket_id)
-{
- uint32_t iq_no = (uint32_t)txpciq.s.q_no;
-
- lio_dev->instr_queue[iq_no] = rte_zmalloc_socket("ethdev TX queue",
- sizeof(struct lio_instr_queue),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (lio_dev->instr_queue[iq_no] == NULL)
- return -1;
-
- lio_dev->instr_queue[iq_no]->q_index = q_index;
- lio_dev->instr_queue[iq_no]->app_ctx = app_ctx;
-
- if (lio_init_instr_queue(lio_dev, txpciq, num_descs, socket_id)) {
- rte_free(lio_dev->instr_queue[iq_no]);
- lio_dev->instr_queue[iq_no] = NULL;
- return -1;
- }
-
- lio_dev->num_iqs++;
-
- return 0;
-}
-
-int
-lio_wait_for_instr_fetch(struct lio_device *lio_dev)
-{
- int pending, instr_cnt;
- int i, retry = 1000;
-
- do {
- instr_cnt = 0;
-
- for (i = 0; i < LIO_MAX_INSTR_QUEUES(lio_dev); i++) {
- if (!(lio_dev->io_qmask.iq & (1ULL << i)))
- continue;
-
- if (lio_dev->instr_queue[i] == NULL)
- break;
-
- pending = rte_atomic64_read(
- &lio_dev->instr_queue[i]->instr_pending);
- if (pending)
- lio_flush_iq(lio_dev, lio_dev->instr_queue[i]);
-
- instr_cnt += pending;
- }
-
- if (instr_cnt == 0)
- break;
-
- rte_delay_ms(1);
-
- } while (retry-- && instr_cnt);
-
- return instr_cnt;
-}
-
-static inline void
-lio_ring_doorbell(struct lio_device *lio_dev,
- struct lio_instr_queue *iq)
-{
- if (rte_atomic64_read(&lio_dev->status) == LIO_DEV_RUNNING) {
- rte_write32(iq->fill_cnt, iq->doorbell_reg);
- /* make sure doorbell write goes through */
- rte_wmb();
- iq->fill_cnt = 0;
- }
-}
-
-static inline void
-copy_cmd_into_iq(struct lio_instr_queue *iq, uint8_t *cmd)
-{
- uint8_t *iqptr, cmdsize;
-
- cmdsize = ((iq->iqcmd_64B) ? 64 : 32);
- iqptr = iq->base_addr + (cmdsize * iq->host_write_index);
-
- rte_memcpy(iqptr, cmd, cmdsize);
-}
-
-static inline struct lio_iq_post_status
-post_command2(struct lio_instr_queue *iq, uint8_t *cmd)
-{
- struct lio_iq_post_status st;
-
- st.status = LIO_IQ_SEND_OK;
-
- /* This ensures that the read index does not wrap around to the same
- * position if queue gets full before Octeon could fetch any instr.
- */
- if (rte_atomic64_read(&iq->instr_pending) >=
- (int32_t)(iq->nb_desc - 1)) {
- st.status = LIO_IQ_SEND_FAILED;
- st.index = -1;
- return st;
- }
-
- if (rte_atomic64_read(&iq->instr_pending) >=
- (int32_t)(iq->nb_desc - 2))
- st.status = LIO_IQ_SEND_STOP;
-
- copy_cmd_into_iq(iq, cmd);
-
- /* "index" is returned, host_write_index is modified. */
- st.index = iq->host_write_index;
- iq->host_write_index = lio_incr_index(iq->host_write_index, 1,
- iq->nb_desc);
- iq->fill_cnt++;
-
- /* Flush the command into memory. We need to be sure the data is in
- * memory before indicating that the instruction is pending.
- */
- rte_wmb();
-
- rte_atomic64_inc(&iq->instr_pending);
-
- return st;
-}
-
-static inline void
-lio_add_to_request_list(struct lio_instr_queue *iq,
- int idx, void *buf, int reqtype)
-{
- iq->request_list[idx].buf = buf;
- iq->request_list[idx].reqtype = reqtype;
-}
-
-static inline void
-lio_free_netsgbuf(void *buf)
-{
- struct lio_buf_free_info *finfo = buf;
- struct lio_device *lio_dev = finfo->lio_dev;
- struct rte_mbuf *m = finfo->mbuf;
- struct lio_gather *g = finfo->g;
- uint8_t iq = finfo->iq_no;
-
- /* This will take care of multiple segments also */
- rte_pktmbuf_free(m);
-
- rte_spinlock_lock(&lio_dev->glist_lock[iq]);
- STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq], &g->list, entries);
- rte_spinlock_unlock(&lio_dev->glist_lock[iq]);
- rte_free(finfo);
-}
-
-/* Can only run in process context */
-static int
-lio_process_iq_request_list(struct lio_device *lio_dev,
- struct lio_instr_queue *iq)
-{
- struct octeon_instr_irh *irh = NULL;
- uint32_t old = iq->flush_index;
- struct lio_soft_command *sc;
- uint32_t inst_count = 0;
- int reqtype;
- void *buf;
-
- while (old != iq->lio_read_index) {
- reqtype = iq->request_list[old].reqtype;
- buf = iq->request_list[old].buf;
-
- if (reqtype == LIO_REQTYPE_NONE)
- goto skip_this;
-
- switch (reqtype) {
- case LIO_REQTYPE_NORESP_NET:
- rte_pktmbuf_free((struct rte_mbuf *)buf);
- break;
- case LIO_REQTYPE_NORESP_NET_SG:
- lio_free_netsgbuf(buf);
- break;
- case LIO_REQTYPE_SOFT_COMMAND:
- sc = buf;
- irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
- if (irh->rflag) {
- /* We're expecting a response from Octeon.
- * It's up to lio_process_ordered_list() to
- * process sc. Add sc to the ordered soft
- * command response list because we expect
- * a response from Octeon.
- */
- rte_spinlock_lock(&lio_dev->response_list.lock);
- rte_atomic64_inc(
- &lio_dev->response_list.pending_req_count);
- STAILQ_INSERT_TAIL(
- &lio_dev->response_list.head,
- &sc->node, entries);
- rte_spinlock_unlock(
- &lio_dev->response_list.lock);
- } else {
- if (sc->callback) {
- /* This callback must not sleep */
- sc->callback(LIO_REQUEST_DONE,
- sc->callback_arg);
- }
- }
- break;
- default:
- lio_dev_err(lio_dev,
- "Unknown reqtype: %d buf: %p at idx %d\n",
- reqtype, buf, old);
- }
-
- iq->request_list[old].buf = NULL;
- iq->request_list[old].reqtype = 0;
-
-skip_this:
- inst_count++;
- old = lio_incr_index(old, 1, iq->nb_desc);
- }
-
- iq->flush_index = old;
-
- return inst_count;
-}
-
-static void
-lio_update_read_index(struct lio_instr_queue *iq)
-{
- uint32_t pkt_in_done = rte_read32(iq->inst_cnt_reg);
- uint32_t last_done;
-
- last_done = pkt_in_done - iq->pkt_in_done;
- iq->pkt_in_done = pkt_in_done;
-
- /* Add last_done and modulo with the IQ size to get new index */
- iq->lio_read_index = (iq->lio_read_index +
- (uint32_t)(last_done & LIO_PKT_IN_DONE_CNT_MASK)) %
- iq->nb_desc;
-}
-
-int
-lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq)
-{
- uint32_t inst_processed = 0;
- int tx_done = 1;
-
- if (rte_atomic64_test_and_set(&iq->iq_flush_running) == 0)
- return tx_done;
-
- rte_spinlock_lock(&iq->lock);
-
- lio_update_read_index(iq);
-
- do {
- /* Process any outstanding IQ packets. */
- if (iq->flush_index == iq->lio_read_index)
- break;
-
- inst_processed = lio_process_iq_request_list(lio_dev, iq);
-
- if (inst_processed) {
- rte_atomic64_sub(&iq->instr_pending, inst_processed);
- iq->stats.instr_processed += inst_processed;
- }
-
- inst_processed = 0;
-
- } while (1);
-
- rte_spinlock_unlock(&iq->lock);
-
- rte_atomic64_clear(&iq->iq_flush_running);
-
- return tx_done;
-}
-
-static int
-lio_send_command(struct lio_device *lio_dev, uint32_t iq_no, void *cmd,
- void *buf, uint32_t datasize, uint32_t reqtype)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
- struct lio_iq_post_status st;
-
- rte_spinlock_lock(&iq->post_lock);
-
- st = post_command2(iq, cmd);
-
- if (st.status != LIO_IQ_SEND_FAILED) {
- lio_add_to_request_list(iq, st.index, buf, reqtype);
- LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, bytes_sent,
- datasize);
- LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_posted, 1);
-
- lio_ring_doorbell(lio_dev, iq);
- } else {
- LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_dropped, 1);
- }
-
- rte_spinlock_unlock(&iq->post_lock);
-
- return st.status;
-}
-
-void
-lio_prepare_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc, uint8_t opcode,
- uint8_t subcode, uint32_t irh_ossp, uint64_t ossp0,
- uint64_t ossp1)
-{
- struct octeon_instr_pki_ih3 *pki_ih3;
- struct octeon_instr_ih3 *ih3;
- struct octeon_instr_irh *irh;
- struct octeon_instr_rdp *rdp;
-
- RTE_ASSERT(opcode <= 15);
- RTE_ASSERT(subcode <= 127);
-
- ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
-
- ih3->pkind = lio_dev->instr_queue[sc->iq_no]->txpciq.s.pkind;
-
- pki_ih3 = (struct octeon_instr_pki_ih3 *)&sc->cmd.cmd3.pki_ih3;
-
- pki_ih3->w = 1;
- pki_ih3->raw = 1;
- pki_ih3->utag = 1;
- pki_ih3->uqpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.use_qpg;
- pki_ih3->utt = 1;
-
- pki_ih3->tag = LIO_CONTROL;
- pki_ih3->tagtype = OCTEON_ATOMIC_TAG;
- pki_ih3->qpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.qpg;
- pki_ih3->pm = 0x7;
- pki_ih3->sl = 8;
-
- if (sc->datasize)
- ih3->dlengsz = sc->datasize;
-
- irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
- irh->opcode = opcode;
- irh->subcode = subcode;
-
- /* opcode/subcode specific parameters (ossp) */
- irh->ossp = irh_ossp;
- sc->cmd.cmd3.ossp[0] = ossp0;
- sc->cmd.cmd3.ossp[1] = ossp1;
-
- if (sc->rdatasize) {
- rdp = (struct octeon_instr_rdp *)&sc->cmd.cmd3.rdp;
- rdp->pcie_port = lio_dev->pcie_port;
- rdp->rlen = sc->rdatasize;
- irh->rflag = 1;
- /* PKI IH3 */
- ih3->fsz = OCTEON_SOFT_CMD_RESP_IH3;
- } else {
- irh->rflag = 0;
- /* PKI IH3 */
- ih3->fsz = OCTEON_PCI_CMD_O3;
- }
-}
-
-int
-lio_send_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc)
-{
- struct octeon_instr_ih3 *ih3;
- struct octeon_instr_irh *irh;
- uint32_t len = 0;
-
- ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
- if (ih3->dlengsz) {
- RTE_ASSERT(sc->dmadptr);
- sc->cmd.cmd3.dptr = sc->dmadptr;
- }
-
- irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
- if (irh->rflag) {
- RTE_ASSERT(sc->dmarptr);
- RTE_ASSERT(sc->status_word != NULL);
- *sc->status_word = LIO_COMPLETION_WORD_INIT;
- sc->cmd.cmd3.rptr = sc->dmarptr;
- }
-
- len = (uint32_t)ih3->dlengsz;
-
- if (sc->wait_time)
- sc->timeout = lio_uptime + sc->wait_time;
-
- return lio_send_command(lio_dev, sc->iq_no, &sc->cmd, sc, len,
- LIO_REQTYPE_SOFT_COMMAND);
-}
-
-int
-lio_setup_sc_buffer_pool(struct lio_device *lio_dev)
-{
- char sc_pool_name[RTE_MEMPOOL_NAMESIZE];
- uint16_t buf_size;
-
- buf_size = LIO_SOFT_COMMAND_BUFFER_SIZE + RTE_PKTMBUF_HEADROOM;
- snprintf(sc_pool_name, sizeof(sc_pool_name),
- "lio_sc_pool_%u", lio_dev->port_id);
- lio_dev->sc_buf_pool = rte_pktmbuf_pool_create(sc_pool_name,
- LIO_MAX_SOFT_COMMAND_BUFFERS,
- 0, 0, buf_size, SOCKET_ID_ANY);
- return 0;
-}
-
-void
-lio_free_sc_buffer_pool(struct lio_device *lio_dev)
-{
- rte_mempool_free(lio_dev->sc_buf_pool);
-}
-
-struct lio_soft_command *
-lio_alloc_soft_command(struct lio_device *lio_dev, uint32_t datasize,
- uint32_t rdatasize, uint32_t ctxsize)
-{
- uint32_t offset = sizeof(struct lio_soft_command);
- struct lio_soft_command *sc;
- struct rte_mbuf *m;
- uint64_t dma_addr;
-
- RTE_ASSERT((offset + datasize + rdatasize + ctxsize) <=
- LIO_SOFT_COMMAND_BUFFER_SIZE);
-
- m = rte_pktmbuf_alloc(lio_dev->sc_buf_pool);
- if (m == NULL) {
- lio_dev_err(lio_dev, "Cannot allocate mbuf for sc\n");
- return NULL;
- }
-
- /* set rte_mbuf data size and there is only 1 segment */
- m->pkt_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
- m->data_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
-
- /* use rte_mbuf buffer for soft command */
- sc = rte_pktmbuf_mtod(m, struct lio_soft_command *);
- memset(sc, 0, LIO_SOFT_COMMAND_BUFFER_SIZE);
- sc->size = LIO_SOFT_COMMAND_BUFFER_SIZE;
- sc->dma_addr = rte_mbuf_data_iova(m);
- sc->mbuf = m;
-
- dma_addr = sc->dma_addr;
-
- if (ctxsize) {
- sc->ctxptr = (uint8_t *)sc + offset;
- sc->ctxsize = ctxsize;
- }
-
- /* Start data at 128 byte boundary */
- offset = (offset + ctxsize + 127) & 0xffffff80;
-
- if (datasize) {
- sc->virtdptr = (uint8_t *)sc + offset;
- sc->dmadptr = dma_addr + offset;
- sc->datasize = datasize;
- }
-
- /* Start rdata at 128 byte boundary */
- offset = (offset + datasize + 127) & 0xffffff80;
-
- if (rdatasize) {
- RTE_ASSERT(rdatasize >= 16);
- sc->virtrptr = (uint8_t *)sc + offset;
- sc->dmarptr = dma_addr + offset;
- sc->rdatasize = rdatasize;
- sc->status_word = (uint64_t *)((uint8_t *)(sc->virtrptr) +
- rdatasize - 8);
- }
-
- return sc;
-}
-
-void
-lio_free_soft_command(struct lio_soft_command *sc)
-{
- rte_pktmbuf_free(sc->mbuf);
-}
-
-void
-lio_setup_response_list(struct lio_device *lio_dev)
-{
- STAILQ_INIT(&lio_dev->response_list.head);
- rte_spinlock_init(&lio_dev->response_list.lock);
- rte_atomic64_set(&lio_dev->response_list.pending_req_count, 0);
-}
-
-int
-lio_process_ordered_list(struct lio_device *lio_dev)
-{
- int resp_to_process = LIO_MAX_ORD_REQS_TO_PROCESS;
- struct lio_response_list *ordered_sc_list;
- struct lio_soft_command *sc;
- int request_complete = 0;
- uint64_t status64;
- uint32_t status;
-
- ordered_sc_list = &lio_dev->response_list;
-
- do {
- rte_spinlock_lock(&ordered_sc_list->lock);
-
- if (STAILQ_EMPTY(&ordered_sc_list->head)) {
- /* ordered_sc_list is empty; there is
- * nothing to process
- */
- rte_spinlock_unlock(&ordered_sc_list->lock);
- return -1;
- }
-
- sc = LIO_STQUEUE_FIRST_ENTRY(&ordered_sc_list->head,
- struct lio_soft_command, node);
-
- status = LIO_REQUEST_PENDING;
-
- /* check if octeon has finished DMA'ing a response
- * to where rptr is pointing to
- */
- status64 = *sc->status_word;
-
- if (status64 != LIO_COMPLETION_WORD_INIT) {
- /* This logic ensures that all 64b have been written.
- * 1. check byte 0 for non-FF
- * 2. if non-FF, then swap result from BE to host order
- * 3. check byte 7 (swapped to 0) for non-FF
- * 4. if non-FF, use the low 32-bit status code
- * 5. if either byte 0 or byte 7 is FF, don't use status
- */
- if ((status64 & 0xff) != 0xff) {
- lio_swap_8B_data(&status64, 1);
- if (((status64 & 0xff) != 0xff)) {
- /* retrieve 16-bit firmware status */
- status = (uint32_t)(status64 &
- 0xffffULL);
- if (status) {
- status =
- LIO_FIRMWARE_STATUS_CODE(
- status);
- } else {
- /* i.e. no error */
- status = LIO_REQUEST_DONE;
- }
- }
- }
- } else if ((sc->timeout && lio_check_timeout(lio_uptime,
- sc->timeout))) {
- lio_dev_err(lio_dev,
- "cmd failed, timeout (%ld, %ld)\n",
- (long)lio_uptime, (long)sc->timeout);
- status = LIO_REQUEST_TIMEOUT;
- }
-
- if (status != LIO_REQUEST_PENDING) {
- /* we have received a response or we have timed out.
- * remove node from linked list
- */
- STAILQ_REMOVE(&ordered_sc_list->head,
- &sc->node, lio_stailq_node, entries);
- rte_atomic64_dec(
- &lio_dev->response_list.pending_req_count);
- rte_spinlock_unlock(&ordered_sc_list->lock);
-
- if (sc->callback)
- sc->callback(status, sc->callback_arg);
-
- request_complete++;
- } else {
- /* no response yet */
- request_complete = 0;
- rte_spinlock_unlock(&ordered_sc_list->lock);
- }
-
- /* If we hit the Max Ordered requests to process every loop,
- * we quit and let this function be invoked the next time
- * the poll thread runs to process the remaining requests.
- * This function can take up the entire CPU if there is
- * no upper limit to the requests processed.
- */
- if (request_complete >= resp_to_process)
- break;
- } while (request_complete);
-
- return 0;
-}
-
-static inline struct lio_stailq_node *
-list_delete_first_node(struct lio_stailq_head *head)
-{
- struct lio_stailq_node *node;
-
- if (STAILQ_EMPTY(head))
- node = NULL;
- else
- node = STAILQ_FIRST(head);
-
- if (node)
- STAILQ_REMOVE(head, node, lio_stailq_node, entries);
-
- return node;
-}
-
-void
-lio_delete_sglist(struct lio_instr_queue *txq)
-{
- struct lio_device *lio_dev = txq->lio_dev;
- int iq_no = txq->q_index;
- struct lio_gather *g;
-
- if (lio_dev->glist_head == NULL)
- return;
-
- do {
- g = (struct lio_gather *)list_delete_first_node(
- &lio_dev->glist_head[iq_no]);
- if (g) {
- if (g->sg)
- rte_free(
- (void *)((unsigned long)g->sg - g->adjust));
- rte_free(g);
- }
- } while (g);
-}
-
-/**
- * Setup the gather lists used for scatter/gather transmit
- * @param lio_dev - pointer to the lio device structure
- */
-int
-lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
- int fw_mapped_iq, int num_descs, unsigned int socket_id)
-{
- struct lio_gather *g;
- int i;
-
- rte_spinlock_init(&lio_dev->glist_lock[iq_no]);
-
- STAILQ_INIT(&lio_dev->glist_head[iq_no]);
-
- for (i = 0; i < num_descs; i++) {
- g = rte_zmalloc_socket(NULL, sizeof(*g), RTE_CACHE_LINE_SIZE,
- socket_id);
- if (g == NULL) {
- lio_dev_err(lio_dev,
- "lio_gather memory allocation failed for qno %d\n",
- iq_no);
- break;
- }
-
- g->sg_size =
- ((ROUNDUP4(LIO_MAX_SG) >> 2) * LIO_SG_ENTRY_SIZE);
-
- g->sg = rte_zmalloc_socket(NULL, g->sg_size + 8,
- RTE_CACHE_LINE_SIZE, socket_id);
- if (g->sg == NULL) {
- lio_dev_err(lio_dev,
- "sg list memory allocation failed for qno %d\n",
- iq_no);
- rte_free(g);
- break;
- }
-
- /* The gather component should be aligned on 64-bit boundary */
- if (((unsigned long)g->sg) & 7) {
- g->adjust = 8 - (((unsigned long)g->sg) & 7);
- g->sg =
- (struct lio_sg_entry *)((unsigned long)g->sg +
- g->adjust);
- }
-
- STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq_no], &g->list,
- entries);
- }
-
- if (i != num_descs) {
- lio_delete_sglist(lio_dev->instr_queue[fw_mapped_iq]);
- return -ENOMEM;
- }
-
- return 0;
-}
-
-void
-lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no)
-{
- lio_delete_instr_queue(lio_dev, iq_no);
- rte_free(lio_dev->instr_queue[iq_no]);
- lio_dev->instr_queue[iq_no] = NULL;
- lio_dev->num_iqs--;
-}
-
-static inline uint32_t
-lio_iq_get_available(struct lio_device *lio_dev, uint32_t q_no)
-{
- return ((lio_dev->instr_queue[q_no]->nb_desc - 1) -
- (uint32_t)rte_atomic64_read(
- &lio_dev->instr_queue[q_no]->instr_pending));
-}
-
-static inline int
-lio_iq_is_full(struct lio_device *lio_dev, uint32_t q_no)
-{
- return ((uint32_t)rte_atomic64_read(
- &lio_dev->instr_queue[q_no]->instr_pending) >=
- (lio_dev->instr_queue[q_no]->nb_desc - 2));
-}
-
-static int
-lio_dev_cleanup_iq(struct lio_device *lio_dev, int iq_no)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
- uint32_t count = 10000;
-
- while ((lio_iq_get_available(lio_dev, iq_no) < LIO_FLUSH_WM(iq)) &&
- --count)
- lio_flush_iq(lio_dev, iq);
-
- return count ? 0 : 1;
-}
-
-static void
-lio_ctrl_cmd_callback(uint32_t status __rte_unused, void *sc_ptr)
-{
- struct lio_soft_command *sc = sc_ptr;
- struct lio_dev_ctrl_cmd *ctrl_cmd;
- struct lio_ctrl_pkt *ctrl_pkt;
-
- ctrl_pkt = (struct lio_ctrl_pkt *)sc->ctxptr;
- ctrl_cmd = ctrl_pkt->ctrl_cmd;
- ctrl_cmd->cond = 1;
-
- lio_free_soft_command(sc);
-}
-
-static inline struct lio_soft_command *
-lio_alloc_ctrl_pkt_sc(struct lio_device *lio_dev,
- struct lio_ctrl_pkt *ctrl_pkt)
-{
- struct lio_soft_command *sc = NULL;
- uint32_t uddsize, datasize;
- uint32_t rdatasize;
- uint8_t *data;
-
- uddsize = (uint32_t)(ctrl_pkt->ncmd.s.more * 8);
-
- datasize = OCTEON_CMD_SIZE + uddsize;
- rdatasize = (ctrl_pkt->wait_time) ? 16 : 0;
-
- sc = lio_alloc_soft_command(lio_dev, datasize,
- rdatasize, sizeof(struct lio_ctrl_pkt));
- if (sc == NULL)
- return NULL;
-
- rte_memcpy(sc->ctxptr, ctrl_pkt, sizeof(struct lio_ctrl_pkt));
-
- data = (uint8_t *)sc->virtdptr;
-
- rte_memcpy(data, &ctrl_pkt->ncmd, OCTEON_CMD_SIZE);
-
- lio_swap_8B_data((uint64_t *)data, OCTEON_CMD_SIZE >> 3);
-
- if (uddsize) {
- /* Endian-Swap for UDD should have been done by caller. */
- rte_memcpy(data + OCTEON_CMD_SIZE, ctrl_pkt->udd, uddsize);
- }
-
- sc->iq_no = (uint32_t)ctrl_pkt->iq_no;
-
- lio_prepare_soft_command(lio_dev, sc,
- LIO_OPCODE, LIO_OPCODE_CMD,
- 0, 0, 0);
-
- sc->callback = lio_ctrl_cmd_callback;
- sc->callback_arg = sc;
- sc->wait_time = ctrl_pkt->wait_time;
-
- return sc;
-}
-
-int
-lio_send_ctrl_pkt(struct lio_device *lio_dev, struct lio_ctrl_pkt *ctrl_pkt)
-{
- struct lio_soft_command *sc = NULL;
- int retval;
-
- sc = lio_alloc_ctrl_pkt_sc(lio_dev, ctrl_pkt);
- if (sc == NULL) {
- lio_dev_err(lio_dev, "soft command allocation failed\n");
- return -1;
- }
-
- retval = lio_send_soft_command(lio_dev, sc);
- if (retval == LIO_IQ_SEND_FAILED) {
- lio_free_soft_command(sc);
- lio_dev_err(lio_dev, "Port: %d soft command: %d send failed status: %x\n",
- lio_dev->port_id, ctrl_pkt->ncmd.s.cmd, retval);
- return -1;
- }
-
- return retval;
-}
-
-/** Send data packet to the device
- * @param lio_dev - lio device pointer
- * @param ndata - control structure with queueing, and buffer information
- *
- * @returns LIO_IQ_SEND_FAILED if it could not be added to the input queue,
- * LIO_IQ_SEND_STOP if the queue should be stopped, and LIO_IQ_SEND_OK otherwise.
- */
-static inline int
-lio_send_data_pkt(struct lio_device *lio_dev, struct lio_data_pkt *ndata)
-{
- return lio_send_command(lio_dev, ndata->q_no, &ndata->cmd,
- ndata->buf, ndata->datasize, ndata->reqtype);
-}
-
-uint16_t
-lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
-{
- struct lio_instr_queue *txq = tx_queue;
- union lio_cmd_setup cmdsetup;
- struct lio_device *lio_dev;
- struct lio_iq_stats *stats;
- struct lio_data_pkt ndata;
- int i, processed = 0;
- struct rte_mbuf *m;
- uint32_t tag = 0;
- int status = 0;
- int iq_no;
-
- lio_dev = txq->lio_dev;
- iq_no = txq->txpciq.s.q_no;
- stats = &lio_dev->instr_queue[iq_no]->stats;
-
- if (!lio_dev->intf_open || !lio_dev->linfo.link.s.link_up) {
- PMD_TX_LOG(lio_dev, ERR, "Transmit failed link_status : %d\n",
- lio_dev->linfo.link.s.link_up);
- goto xmit_failed;
- }
-
- lio_dev_cleanup_iq(lio_dev, iq_no);
-
- for (i = 0; i < nb_pkts; i++) {
- uint32_t pkt_len = 0;
-
- m = pkts[i];
-
- /* Prepare the attributes for the data to be passed to BASE. */
- memset(&ndata, 0, sizeof(struct lio_data_pkt));
-
- ndata.buf = m;
-
- ndata.q_no = iq_no;
- if (lio_iq_is_full(lio_dev, ndata.q_no)) {
- stats->tx_iq_busy++;
- if (lio_dev_cleanup_iq(lio_dev, iq_no)) {
- PMD_TX_LOG(lio_dev, ERR,
- "Transmit failed iq:%d full\n",
- ndata.q_no);
- break;
- }
- }
-
- cmdsetup.cmd_setup64 = 0;
- cmdsetup.s.iq_no = iq_no;
-
- /* check checksum offload flags to form cmd */
- if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
- cmdsetup.s.ip_csum = 1;
-
- if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
- cmdsetup.s.tnl_csum = 1;
- else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
- (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
- cmdsetup.s.transport_csum = 1;
-
- if (m->nb_segs == 1) {
- pkt_len = rte_pktmbuf_data_len(m);
- cmdsetup.s.u.datasize = pkt_len;
- lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
- &cmdsetup, tag);
- ndata.cmd.cmd3.dptr = rte_mbuf_data_iova(m);
- ndata.reqtype = LIO_REQTYPE_NORESP_NET;
- } else {
- struct lio_buf_free_info *finfo;
- struct lio_gather *g;
- rte_iova_t phyaddr;
- int i, frags;
-
- finfo = (struct lio_buf_free_info *)rte_malloc(NULL,
- sizeof(*finfo), 0);
- if (finfo == NULL) {
- PMD_TX_LOG(lio_dev, ERR,
- "free buffer alloc failed\n");
- goto xmit_failed;
- }
-
- rte_spinlock_lock(&lio_dev->glist_lock[iq_no]);
- g = (struct lio_gather *)list_delete_first_node(
- &lio_dev->glist_head[iq_no]);
- rte_spinlock_unlock(&lio_dev->glist_lock[iq_no]);
- if (g == NULL) {
- PMD_TX_LOG(lio_dev, ERR,
- "Transmit scatter gather: glist null!\n");
- goto xmit_failed;
- }
-
- cmdsetup.s.gather = 1;
- cmdsetup.s.u.gatherptrs = m->nb_segs;
- lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
- &cmdsetup, tag);
-
- memset(g->sg, 0, g->sg_size);
- g->sg[0].ptr[0] = rte_mbuf_data_iova(m);
- lio_add_sg_size(&g->sg[0], m->data_len, 0);
- pkt_len = m->data_len;
- finfo->mbuf = m;
-
- /* First seg taken care above */
- frags = m->nb_segs - 1;
- i = 1;
- m = m->next;
- while (frags--) {
- g->sg[(i >> 2)].ptr[(i & 3)] =
- rte_mbuf_data_iova(m);
- lio_add_sg_size(&g->sg[(i >> 2)],
- m->data_len, (i & 3));
- pkt_len += m->data_len;
- i++;
- m = m->next;
- }
-
- phyaddr = rte_mem_virt2iova(g->sg);
- if (phyaddr == RTE_BAD_IOVA) {
- PMD_TX_LOG(lio_dev, ERR, "bad phys addr\n");
- goto xmit_failed;
- }
-
- ndata.cmd.cmd3.dptr = phyaddr;
- ndata.reqtype = LIO_REQTYPE_NORESP_NET_SG;
-
- finfo->g = g;
- finfo->lio_dev = lio_dev;
- finfo->iq_no = (uint64_t)iq_no;
- ndata.buf = finfo;
- }
-
- ndata.datasize = pkt_len;
-
- status = lio_send_data_pkt(lio_dev, &ndata);
-
- if (unlikely(status == LIO_IQ_SEND_FAILED)) {
- PMD_TX_LOG(lio_dev, ERR, "send failed\n");
- break;
- }
-
- if (unlikely(status == LIO_IQ_SEND_STOP)) {
- PMD_TX_LOG(lio_dev, DEBUG, "iq full\n");
- /* create space as iq is full */
- lio_dev_cleanup_iq(lio_dev, iq_no);
- }
-
- stats->tx_done++;
- stats->tx_tot_bytes += pkt_len;
- processed++;
- }
-
-xmit_failed:
- stats->tx_dropped += (nb_pkts - processed);
-
- return processed;
-}
-
-void
-lio_dev_clear_queues(struct rte_eth_dev *eth_dev)
-{
- struct lio_instr_queue *txq;
- struct lio_droq *rxq;
- uint16_t i;
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- if (txq != NULL) {
- lio_dev_tx_queue_release(eth_dev, i);
- eth_dev->data->tx_queues[i] = NULL;
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
- if (rxq != NULL) {
- lio_dev_rx_queue_release(eth_dev, i);
- eth_dev->data->rx_queues[i] = NULL;
- }
- }
-}
diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
deleted file mode 100644
index d2a45104f0..0000000000
--- a/drivers/net/liquidio/lio_rxtx.h
+++ /dev/null
@@ -1,740 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_RXTX_H_
-#define _LIO_RXTX_H_
-
-#include <stdio.h>
-#include <stdint.h>
-
-#include <rte_spinlock.h>
-#include <rte_memory.h>
-
-#include "lio_struct.h"
-
-#ifndef ROUNDUP4
-#define ROUNDUP4(val) (((val) + 3) & 0xfffffffc)
-#endif
-
-#define LIO_STQUEUE_FIRST_ENTRY(ptr, type, elem) \
- (type *)((char *)((ptr)->stqh_first) - offsetof(type, elem))
-
-#define lio_check_timeout(cur_time, chk_time) ((cur_time) > (chk_time))
-
-#define lio_uptime \
- (size_t)(rte_get_timer_cycles() / rte_get_timer_hz())
-
-/** Descriptor format.
- * The descriptor ring is made of descriptors which have 2 64-bit values:
- * -# Physical (bus) address of the data buffer.
- * -# Physical (bus) address of a lio_droq_info structure.
- * The device DMAs incoming packets and their information to the addresses
- * given by these descriptor fields.
- */
-struct lio_droq_desc {
- /** The buffer pointer */
- uint64_t buffer_ptr;
-
- /** The Info pointer */
- uint64_t info_ptr;
-};
-
-#define LIO_DROQ_DESC_SIZE (sizeof(struct lio_droq_desc))
-
-/** Information about packet DMA'ed by Octeon.
- * The format of the information available at Info Pointer after Octeon
- * has posted a packet. Not all descriptors have valid information. Only
- * the Info field of the first descriptor for a packet has information
- * about the packet.
- */
-struct lio_droq_info {
- /** The Output Receive Header. */
- union octeon_rh rh;
-
- /** The Length of the packet. */
- uint64_t length;
-};
-
-#define LIO_DROQ_INFO_SIZE (sizeof(struct lio_droq_info))
-
-/** Pointer to data buffer.
- * Driver keeps a pointer to the data buffer that it made available to
- * the Octeon device. Since the descriptor ring keeps physical (bus)
- * addresses, this field is required for the driver to keep track of
- * the virtual address pointers.
- */
-struct lio_recv_buffer {
- /** Packet buffer, including meta data. */
- void *buffer;
-
- /** Data in the packet buffer. */
- uint8_t *data;
-
-};
-
-#define LIO_DROQ_RECVBUF_SIZE (sizeof(struct lio_recv_buffer))
-
-#define LIO_DROQ_SIZE (sizeof(struct lio_droq))
-
-#define LIO_IQ_SEND_OK 0
-#define LIO_IQ_SEND_STOP 1
-#define LIO_IQ_SEND_FAILED -1
-
-/* conditions */
-#define LIO_REQTYPE_NONE 0
-#define LIO_REQTYPE_NORESP_NET 1
-#define LIO_REQTYPE_NORESP_NET_SG 2
-#define LIO_REQTYPE_SOFT_COMMAND 3
-
-struct lio_request_list {
- uint32_t reqtype;
- void *buf;
-};
-
-/*---------------------- INSTRUCTION FORMAT ----------------------------*/
-
-struct lio_instr3_64B {
- /** Pointer where the input data is available. */
- uint64_t dptr;
-
- /** Instruction Header. */
- uint64_t ih3;
-
- /** Instruction Header. */
- uint64_t pki_ih3;
-
- /** Input Request Header. */
- uint64_t irh;
-
- /** opcode/subcode specific parameters */
- uint64_t ossp[2];
-
- /** Return Data Parameters */
- uint64_t rdp;
-
- /** Pointer where the response for a RAW mode packet will be written
- * by Octeon.
- */
- uint64_t rptr;
-
-};
-
-union lio_instr_64B {
- struct lio_instr3_64B cmd3;
-};
-
-/** The size of each buffer in soft command buffer pool */
-#define LIO_SOFT_COMMAND_BUFFER_SIZE 1536
-
-/** Maximum number of buffers to allocate into soft command buffer pool */
-#define LIO_MAX_SOFT_COMMAND_BUFFERS 255
-
-struct lio_soft_command {
- /** Soft command buffer info. */
- struct lio_stailq_node node;
- uint64_t dma_addr;
- uint32_t size;
-
- /** Command and return status */
- union lio_instr_64B cmd;
-
-#define LIO_COMPLETION_WORD_INIT 0xffffffffffffffffULL
- uint64_t *status_word;
-
- /** Data buffer info */
- void *virtdptr;
- uint64_t dmadptr;
- uint32_t datasize;
-
- /** Return buffer info */
- void *virtrptr;
- uint64_t dmarptr;
- uint32_t rdatasize;
-
- /** Context buffer info */
- void *ctxptr;
- uint32_t ctxsize;
-
- /** Time out and callback */
- size_t wait_time;
- size_t timeout;
- uint32_t iq_no;
- void (*callback)(uint32_t, void *);
- void *callback_arg;
- struct rte_mbuf *mbuf;
-};
-
-struct lio_iq_post_status {
- int status;
- int index;
-};
-
-/* wqe
- * --------------- 0
- * | wqe word0-3 |
- * --------------- 32
- * | PCI IH |
- * --------------- 40
- * | RPTR |
- * --------------- 48
- * | PCI IRH |
- * --------------- 56
- * | OCTEON_CMD |
- * --------------- 64
- * | Addtl 8-BData |
- * | |
- * ---------------
- */
-
-union octeon_cmd {
- uint64_t cmd64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t cmd : 5;
-
- uint64_t more : 6; /* How many udd words follow the command */
-
- uint64_t reserved : 29;
-
- uint64_t param1 : 16;
-
- uint64_t param2 : 8;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-
- uint64_t param2 : 8;
-
- uint64_t param1 : 16;
-
- uint64_t reserved : 29;
-
- uint64_t more : 6;
-
- uint64_t cmd : 5;
-
-#endif
- } s;
-};
-
-#define OCTEON_CMD_SIZE (sizeof(union octeon_cmd))
-
-/* Maximum number of 8-byte words can be
- * sent in a NIC control message.
- */
-#define LIO_MAX_NCTRL_UDD 32
-
-/* Structure of control information passed by driver to the BASE
- * layer when sending control commands to Octeon device software.
- */
-struct lio_ctrl_pkt {
- /** Command to be passed to the Octeon device software. */
- union octeon_cmd ncmd;
-
- /** Send buffer */
- void *data;
- uint64_t dmadata;
-
- /** Response buffer */
- void *rdata;
- uint64_t dmardata;
-
- /** Additional data that may be needed by some commands. */
- uint64_t udd[LIO_MAX_NCTRL_UDD];
-
- /** Input queue to use to send this command. */
- uint64_t iq_no;
-
- /** Time to wait for Octeon software to respond to this control command.
- * If wait_time is 0, BASE assumes no response is expected.
- */
- size_t wait_time;
-
- struct lio_dev_ctrl_cmd *ctrl_cmd;
-};
-
-/** Structure of data information passed by driver to the BASE
- * layer when forwarding data to Octeon device software.
- */
-struct lio_data_pkt {
- /** Pointer to information maintained by NIC module for this packet. The
- * BASE layer passes this as-is to the driver.
- */
- void *buf;
-
- /** Type of buffer passed in "buf" above. */
- uint32_t reqtype;
-
- /** Total data bytes to be transferred in this command. */
- uint32_t datasize;
-
- /** Command to be passed to the Octeon device software. */
- union lio_instr_64B cmd;
-
- /** Input queue to use to send this command. */
- uint32_t q_no;
-};
-
-/** Structure passed by driver to BASE layer to prepare a command to send
- * network data to Octeon.
- */
-union lio_cmd_setup {
- struct {
- uint32_t iq_no : 8;
- uint32_t gather : 1;
- uint32_t timestamp : 1;
- uint32_t ip_csum : 1;
- uint32_t transport_csum : 1;
- uint32_t tnl_csum : 1;
- uint32_t rsvd : 19;
-
- union {
- uint32_t datasize;
- uint32_t gatherptrs;
- } u;
- } s;
-
- uint64_t cmd_setup64;
-};
-
-/* Instruction Header */
-struct octeon_instr_ih3 {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
- /** Reserved3 */
- uint64_t reserved3 : 1;
-
- /** Gather indicator 1=gather*/
- uint64_t gather : 1;
-
- /** Data length OR no. of entries in gather list */
- uint64_t dlengsz : 14;
-
- /** Front Data size */
- uint64_t fsz : 6;
-
- /** Reserved2 */
- uint64_t reserved2 : 4;
-
- /** PKI port kind - PKIND */
- uint64_t pkind : 6;
-
- /** Reserved1 */
- uint64_t reserved1 : 32;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- /** Reserved1 */
- uint64_t reserved1 : 32;
-
- /** PKI port kind - PKIND */
- uint64_t pkind : 6;
-
- /** Reserved2 */
- uint64_t reserved2 : 4;
-
- /** Front Data size */
- uint64_t fsz : 6;
-
- /** Data length OR no. of entries in gather list */
- uint64_t dlengsz : 14;
-
- /** Gather indicator 1=gather*/
- uint64_t gather : 1;
-
- /** Reserved3 */
- uint64_t reserved3 : 1;
-
-#endif
-};
-
-/* PKI Instruction Header(PKI IH) */
-struct octeon_instr_pki_ih3 {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
- /** Wider bit */
- uint64_t w : 1;
-
- /** Raw mode indicator 1 = RAW */
- uint64_t raw : 1;
-
- /** Use Tag */
- uint64_t utag : 1;
-
- /** Use QPG */
- uint64_t uqpg : 1;
-
- /** Reserved2 */
- uint64_t reserved2 : 1;
-
- /** Parse Mode */
- uint64_t pm : 3;
-
- /** Skip Length */
- uint64_t sl : 8;
-
- /** Use Tag Type */
- uint64_t utt : 1;
-
- /** Tag type */
- uint64_t tagtype : 2;
-
- /** Reserved1 */
- uint64_t reserved1 : 2;
-
- /** QPG Value */
- uint64_t qpg : 11;
-
- /** Tag Value */
- uint64_t tag : 32;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-
- /** Tag Value */
- uint64_t tag : 32;
-
- /** QPG Value */
- uint64_t qpg : 11;
-
- /** Reserved1 */
- uint64_t reserved1 : 2;
-
- /** Tag type */
- uint64_t tagtype : 2;
-
- /** Use Tag Type */
- uint64_t utt : 1;
-
- /** Skip Length */
- uint64_t sl : 8;
-
- /** Parse Mode */
- uint64_t pm : 3;
-
- /** Reserved2 */
- uint64_t reserved2 : 1;
-
- /** Use QPG */
- uint64_t uqpg : 1;
-
- /** Use Tag */
- uint64_t utag : 1;
-
- /** Raw mode indicator 1 = RAW */
- uint64_t raw : 1;
-
- /** Wider bit */
- uint64_t w : 1;
-#endif
-};
-
-/** Input Request Header */
-struct octeon_instr_irh {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t opcode : 4;
- uint64_t rflag : 1;
- uint64_t subcode : 7;
- uint64_t vlan : 12;
- uint64_t priority : 3;
- uint64_t reserved : 5;
- uint64_t ossp : 32; /* opcode/subcode specific parameters */
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- uint64_t ossp : 32; /* opcode/subcode specific parameters */
- uint64_t reserved : 5;
- uint64_t priority : 3;
- uint64_t vlan : 12;
- uint64_t subcode : 7;
- uint64_t rflag : 1;
- uint64_t opcode : 4;
-#endif
-};
-
-/* pkiih3 + irh + ossp[0] + ossp[1] + rdp + rptr = 40 bytes */
-#define OCTEON_SOFT_CMD_RESP_IH3 (40 + 8)
-/* pki_h3 + irh + ossp[0] + ossp[1] = 32 bytes */
-#define OCTEON_PCI_CMD_O3 (24 + 8)
-
-/** Return Data Parameters */
-struct octeon_instr_rdp {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t reserved : 49;
- uint64_t pcie_port : 3;
- uint64_t rlen : 12;
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- uint64_t rlen : 12;
- uint64_t pcie_port : 3;
- uint64_t reserved : 49;
-#endif
-};
-
-union octeon_packet_params {
- uint32_t pkt_params32;
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint32_t reserved : 24;
- uint32_t ip_csum : 1; /* Perform IP header checksum(s) */
- /* Perform Outer transport header checksum */
- uint32_t transport_csum : 1;
- /* Find tunnel, and perform transport csum. */
- uint32_t tnl_csum : 1;
- uint32_t tsflag : 1; /* Timestamp this packet */
- uint32_t ipsec_ops : 4; /* IPsec operation */
-#else
- uint32_t ipsec_ops : 4;
- uint32_t tsflag : 1;
- uint32_t tnl_csum : 1;
- uint32_t transport_csum : 1;
- uint32_t ip_csum : 1;
- uint32_t reserved : 24;
-#endif
- } s;
-};
-
-/** Utility function to prepare a 64B NIC instruction based on a setup command
- * @param cmd - pointer to instruction to be filled in.
- * @param setup - pointer to the setup structure
- * @param tag - PKI tag for the packet; 0 selects the port's default data tag
- *
- * Assumes the cmd instruction is pre-allocated, but no fields are filled in.
- */
-static inline void
-lio_prepare_pci_cmd(struct lio_device *lio_dev,
- union lio_instr_64B *cmd,
- union lio_cmd_setup *setup,
- uint32_t tag)
-{
- union octeon_packet_params packet_params;
- struct octeon_instr_pki_ih3 *pki_ih3;
- struct octeon_instr_irh *irh;
- struct octeon_instr_ih3 *ih3;
- int port;
-
- memset(cmd, 0, sizeof(union lio_instr_64B));
-
- ih3 = (struct octeon_instr_ih3 *)&cmd->cmd3.ih3;
- pki_ih3 = (struct octeon_instr_pki_ih3 *)&cmd->cmd3.pki_ih3;
-
- /* assume that rflag is cleared so therefore front data will only have
- * irh and ossp[0] and ossp[1] for a total of 24 bytes
- */
- ih3->pkind = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.pkind;
- /* PKI IH */
- ih3->fsz = OCTEON_PCI_CMD_O3;
-
- if (!setup->s.gather) {
- ih3->dlengsz = setup->s.u.datasize;
- } else {
- ih3->gather = 1;
- ih3->dlengsz = setup->s.u.gatherptrs;
- }
-
- pki_ih3->w = 1;
- pki_ih3->raw = 0;
- pki_ih3->utag = 0;
- pki_ih3->utt = 1;
- pki_ih3->uqpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.use_qpg;
-
- port = (int)lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.port;
-
- if (tag)
- pki_ih3->tag = tag;
- else
- pki_ih3->tag = LIO_DATA(port);
-
- pki_ih3->tagtype = OCTEON_ORDERED_TAG;
- pki_ih3->qpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.qpg;
- pki_ih3->pm = 0x0; /* parse from L2 */
- pki_ih3->sl = 32; /* sl will be sizeof(pki_ih3) + irh + ossp0 + ossp1*/
-
- irh = (struct octeon_instr_irh *)&cmd->cmd3.irh;
-
- irh->opcode = LIO_OPCODE;
- irh->subcode = LIO_OPCODE_NW_DATA;
-
- packet_params.pkt_params32 = 0;
- packet_params.s.ip_csum = setup->s.ip_csum;
- packet_params.s.transport_csum = setup->s.transport_csum;
- packet_params.s.tnl_csum = setup->s.tnl_csum;
- packet_params.s.tsflag = setup->s.timestamp;
-
- irh->ossp = packet_params.pkt_params32;
-}
-
-int lio_setup_sc_buffer_pool(struct lio_device *lio_dev);
-void lio_free_sc_buffer_pool(struct lio_device *lio_dev);
-
-struct lio_soft_command *
-lio_alloc_soft_command(struct lio_device *lio_dev,
- uint32_t datasize, uint32_t rdatasize,
- uint32_t ctxsize);
-void lio_prepare_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc,
- uint8_t opcode, uint8_t subcode,
- uint32_t irh_ossp, uint64_t ossp0,
- uint64_t ossp1);
-int lio_send_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc);
-void lio_free_soft_command(struct lio_soft_command *sc);
-
-/** Send control packet to the device
- * @param lio_dev - lio device pointer
- * @param nctrl - control structure with command, timeout, and callback info
- *
- * @returns LIO_IQ_SEND_FAILED if it could not be added to the input queue,
- * LIO_IQ_SEND_STOP if the queue should be stopped, and LIO_IQ_SEND_OK otherwise.
- */
-int lio_send_ctrl_pkt(struct lio_device *lio_dev,
- struct lio_ctrl_pkt *ctrl_pkt);
-
-/** Maximum ordered requests to process in every invocation of
- * lio_process_ordered_list(). The function will continue to process requests
- * as long as it can find one that has finished processing. If it keeps
- * finding requests that have completed, the function can run for ever. The
- * value defined here sets an upper limit on the number of requests it can
- * process before it returns control to the poll thread.
- */
-#define LIO_MAX_ORD_REQS_TO_PROCESS 4096
-
-/** Error codes used in Octeon Host-Core communication.
- *
- * 31 16 15 0
- * ----------------------------
- * | | |
- * ----------------------------
- * Error codes are 32-bit wide. The upper 16-bits, called Major Error Number,
- * are reserved to identify the group to which the error code belongs. The
- * lower 16-bits, called Minor Error Number, carry the actual code.
- *
- * So error codes are (MAJOR NUMBER << 16)| MINOR_NUMBER.
- */
-/** Status for a request.
- * If the request is successfully queued, the driver will return
- * a LIO_REQUEST_PENDING status. LIO_REQUEST_TIMEOUT is only returned by
- * the driver if the response for request failed to arrive before a
- * time-out period or if the request processing * got interrupted due to
- * a signal respectively.
- */
-enum {
- /** A value of 0x00000000 indicates no error i.e. success */
- LIO_REQUEST_DONE = 0x00000000,
- /** (Major number: 0x0000; Minor Number: 0x0001) */
- LIO_REQUEST_PENDING = 0x00000001,
- LIO_REQUEST_TIMEOUT = 0x00000003,
-
-};
-
-/*------ Error codes used by firmware (bits 15..0 set by firmware */
-#define LIO_FIRMWARE_MAJOR_ERROR_CODE 0x0001
-#define LIO_FIRMWARE_STATUS_CODE(status) \
- ((LIO_FIRMWARE_MAJOR_ERROR_CODE << 16) | (status))
-
-/** Initialize the response lists. The number of response lists to create is
- * given by count.
- * @param lio_dev - the lio device structure.
- */
-void lio_setup_response_list(struct lio_device *lio_dev);
-
-/** Check the status of first entry in the ordered list. If the instruction at
- * that entry finished processing or has timed-out, the entry is cleaned.
- * @param lio_dev - the lio device structure.
- * @return 1 if the ordered list is empty, 0 otherwise.
- */
-int lio_process_ordered_list(struct lio_device *lio_dev);
-
-#define LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, field, count) \
- (((lio_dev)->instr_queue[iq_no]->stats.field) += count)
-
-static inline void
-lio_swap_8B_data(uint64_t *data, uint32_t blocks)
-{
- while (blocks) {
- *data = rte_cpu_to_be_64(*data);
- blocks--;
- data++;
- }
-}
-
-static inline uint64_t
-lio_map_ring(void *buf)
-{
- rte_iova_t dma_addr;
-
- dma_addr = rte_mbuf_data_iova_default(((struct rte_mbuf *)buf));
-
- return (uint64_t)dma_addr;
-}
-
-static inline uint64_t
-lio_map_ring_info(struct lio_droq *droq, uint32_t i)
-{
- rte_iova_t dma_addr;
-
- dma_addr = droq->info_list_dma + (i * LIO_DROQ_INFO_SIZE);
-
- return (uint64_t)dma_addr;
-}
-
-static inline int
-lio_opcode_slow_path(union octeon_rh *rh)
-{
- uint16_t subcode1, subcode2;
-
- subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
- subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
-
- return subcode2 != subcode1;
-}
-
-static inline void
-lio_add_sg_size(struct lio_sg_entry *sg_entry,
- uint16_t size, uint32_t pos)
-{
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- sg_entry->u.size[pos] = size;
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- sg_entry->u.size[3 - pos] = size;
-#endif
-}
-
-/* Macro to increment index.
- * Index is incremented by count; if the sum exceeds
- * max, index is wrapped-around to the start.
- */
-static inline uint32_t
-lio_incr_index(uint32_t index, uint32_t count, uint32_t max)
-{
- if ((index + count) >= max)
- index = index + count - max;
- else
- index += count;
-
- return index;
-}
-
-int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
- int desc_size, struct rte_mempool *mpool,
- unsigned int socket_id);
-uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t budget);
-void lio_delete_droq_queue(struct lio_device *lio_dev, int oq_no);
-
-void lio_delete_sglist(struct lio_instr_queue *txq);
-int lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
- int fw_mapped_iq, int num_descs, unsigned int socket_id);
-uint16_t lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts,
- uint16_t nb_pkts);
-int lio_wait_for_instr_fetch(struct lio_device *lio_dev);
-int lio_setup_iq(struct lio_device *lio_dev, int q_index,
- union octeon_txpciq iq_no, uint32_t num_descs, void *app_ctx,
- unsigned int socket_id);
-int lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq);
-void lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no);
-/** Setup instruction queue zero for the device
- * @param lio_dev which lio device to setup
- *
- * @return 0 if success. -1 if fails
- */
-int lio_setup_instr_queue0(struct lio_device *lio_dev);
-void lio_free_instr_queue0(struct lio_device *lio_dev);
-void lio_dev_clear_queues(struct rte_eth_dev *eth_dev);
-#endif /* _LIO_RXTX_H_ */
diff --git a/drivers/net/liquidio/lio_struct.h b/drivers/net/liquidio/lio_struct.h
deleted file mode 100644
index 10270c560e..0000000000
--- a/drivers/net/liquidio/lio_struct.h
+++ /dev/null
@@ -1,661 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_STRUCT_H_
-#define _LIO_STRUCT_H_
-
-#include <stdio.h>
-#include <stdint.h>
-#include <sys/queue.h>
-
-#include <rte_spinlock.h>
-#include <rte_atomic.h>
-
-#include "lio_hw_defs.h"
-
-struct lio_stailq_node {
- STAILQ_ENTRY(lio_stailq_node) entries;
-};
-
-STAILQ_HEAD(lio_stailq_head, lio_stailq_node);
-
-struct lio_version {
- uint16_t major;
- uint16_t minor;
- uint16_t micro;
- uint16_t reserved;
-};
-
-/** Input Queue statistics. Each input queue has four stats fields. */
-struct lio_iq_stats {
- uint64_t instr_posted; /**< Instructions posted to this queue. */
- uint64_t instr_processed; /**< Instructions processed in this queue. */
- uint64_t instr_dropped; /**< Instructions that could not be processed */
- uint64_t bytes_sent; /**< Bytes sent through this queue. */
- uint64_t tx_done; /**< Num of packets sent to network. */
- uint64_t tx_iq_busy; /**< Num of times this iq was found to be full. */
- uint64_t tx_dropped; /**< Num of pkts dropped due to xmitpath errors. */
- uint64_t tx_tot_bytes; /**< Total count of bytes sent to network. */
-};
-
-/** Output Queue statistics. Each output queue has four stats fields. */
-struct lio_droq_stats {
- /** Number of packets received in this queue. */
- uint64_t pkts_received;
-
- /** Bytes received by this queue. */
- uint64_t bytes_received;
-
- /** Packets dropped due to no memory available. */
- uint64_t dropped_nomem;
-
- /** Packets dropped due to large number of pkts to process. */
- uint64_t dropped_toomany;
-
- /** Number of packets sent to stack from this queue. */
- uint64_t rx_pkts_received;
-
- /** Number of Bytes sent to stack from this queue. */
- uint64_t rx_bytes_received;
-
- /** Num of Packets dropped due to receive path failures. */
- uint64_t rx_dropped;
-
- /** Num of vxlan packets received; */
- uint64_t rx_vxlan;
-
- /** Num of failures of rte_pktmbuf_alloc() */
- uint64_t rx_alloc_failure;
-
-};
-
-/** The Descriptor Ring Output Queue structure.
- * This structure has all the information required to implement a
- * DROQ.
- */
-struct lio_droq {
- /** A spinlock to protect access to this ring. */
- rte_spinlock_t lock;
-
- uint32_t q_no;
-
- uint32_t pkt_count;
-
- struct lio_device *lio_dev;
-
- /** The 8B aligned descriptor ring starts at this address. */
- struct lio_droq_desc *desc_ring;
-
- /** Index in the ring where the driver should read the next packet */
- uint32_t read_idx;
-
- /** Index in the ring where Octeon will write the next packet */
- uint32_t write_idx;
-
- /** Index in the ring where the driver will refill the descriptor's
- * buffer
- */
- uint32_t refill_idx;
-
- /** Packets pending to be processed */
- rte_atomic64_t pkts_pending;
-
- /** Number of descriptors in this ring. */
- uint32_t nb_desc;
-
- /** The number of descriptors pending refill. */
- uint32_t refill_count;
-
- uint32_t refill_threshold;
-
- /** The 8B aligned info ptrs begin from this address. */
- struct lio_droq_info *info_list;
-
- /** The receive buffer list. This list has the virtual addresses of the
- * buffers.
- */
- struct lio_recv_buffer *recv_buf_list;
-
- /** The size of each buffer pointed by the buffer pointer. */
- uint32_t buffer_size;
-
- /** Pointer to the mapped packet credit register.
- * Host writes number of info/buffer ptrs available to this register
- */
- void *pkts_credit_reg;
-
- /** Pointer to the mapped packet sent register.
- * Octeon writes the number of packets DMA'ed to host memory
- * in this register.
- */
- void *pkts_sent_reg;
-
- /** Statistics for this DROQ. */
- struct lio_droq_stats stats;
-
- /** DMA mapped address of the DROQ descriptor ring. */
- size_t desc_ring_dma;
-
- /** Info ptr list are allocated at this virtual address. */
- size_t info_base_addr;
-
- /** DMA mapped address of the info list */
- size_t info_list_dma;
-
- /** Allocated size of info list. */
- uint32_t info_alloc_size;
-
- /** Memory zone **/
- const struct rte_memzone *desc_ring_mz;
- const struct rte_memzone *info_mz;
- struct rte_mempool *mpool;
-};
-
-/** Receive Header */
-union octeon_rh {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t rh64;
- struct {
- uint64_t opcode : 4;
- uint64_t subcode : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t reserved : 17;
- uint64_t ossp : 32; /** opcode/subcode specific parameters */
- } r;
- struct {
- uint64_t opcode : 4;
- uint64_t subcode : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t extra : 28;
- uint64_t vlan : 12;
- uint64_t priority : 3;
- uint64_t csum_verified : 3; /** checksum verified. */
- uint64_t has_hwtstamp : 1; /** Has hardware timestamp.1 = yes.*/
- uint64_t encap_on : 1;
- uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
- } r_dh;
- struct {
- uint64_t opcode : 4;
- uint64_t subcode : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t reserved : 8;
- uint64_t extra : 25;
- uint64_t gmxport : 16;
- } r_nic_info;
-#else
- uint64_t rh64;
- struct {
- uint64_t ossp : 32; /** opcode/subcode specific parameters */
- uint64_t reserved : 17;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t subcode : 8;
- uint64_t opcode : 4;
- } r;
- struct {
- uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
- uint64_t encap_on : 1;
- uint64_t has_hwtstamp : 1; /** 1 = has hwtstamp */
- uint64_t csum_verified : 3; /** checksum verified. */
- uint64_t priority : 3;
- uint64_t vlan : 12;
- uint64_t extra : 28;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t subcode : 8;
- uint64_t opcode : 4;
- } r_dh;
- struct {
- uint64_t gmxport : 16;
- uint64_t extra : 25;
- uint64_t reserved : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t subcode : 8;
- uint64_t opcode : 4;
- } r_nic_info;
-#endif
-};
-
-#define OCTEON_RH_SIZE (sizeof(union octeon_rh))
-
-/** The txpciq info passed to host from the firmware */
-union octeon_txpciq {
- uint64_t txpciq64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t q_no : 8;
- uint64_t port : 8;
- uint64_t pkind : 6;
- uint64_t use_qpg : 1;
- uint64_t qpg : 11;
- uint64_t aura_num : 10;
- uint64_t reserved : 20;
-#else
- uint64_t reserved : 20;
- uint64_t aura_num : 10;
- uint64_t qpg : 11;
- uint64_t use_qpg : 1;
- uint64_t pkind : 6;
- uint64_t port : 8;
- uint64_t q_no : 8;
-#endif
- } s;
-};
-
-/** The instruction (input) queue.
- * The input queue is used to post raw (instruction) mode data or packet
- * data to Octeon device from the host. Each input queue for
- * a LIO device has one such structure to represent it.
- */
-struct lio_instr_queue {
- /** A spinlock to protect access to the input ring. */
- rte_spinlock_t lock;
-
- rte_spinlock_t post_lock;
-
- struct lio_device *lio_dev;
-
- uint32_t pkt_in_done;
-
- rte_atomic64_t iq_flush_running;
-
- /** Flag that indicates if the queue uses 64 byte commands. */
- uint32_t iqcmd_64B:1;
-
- /** Queue info. */
- union octeon_txpciq txpciq;
-
- uint32_t rsvd:17;
-
- uint32_t status:8;
-
- /** Number of descriptors in this ring. */
- uint32_t nb_desc;
-
- /** Index in input ring where the driver should write the next packet */
- uint32_t host_write_index;
-
- /** Index in input ring where Octeon is expected to read the next
- * packet.
- */
- uint32_t lio_read_index;
-
- /** This index aids in finding the window in the queue where Octeon
- * has read the commands.
- */
- uint32_t flush_index;
-
- /** This field keeps track of the instructions pending in this queue. */
- rte_atomic64_t instr_pending;
-
- /** Pointer to the Virtual Base addr of the input ring. */
- uint8_t *base_addr;
-
- struct lio_request_list *request_list;
-
- /** Octeon doorbell register for the ring. */
- void *doorbell_reg;
-
- /** Octeon instruction count register for this ring. */
- void *inst_cnt_reg;
-
- /** Number of instructions pending to be posted to Octeon. */
- uint32_t fill_cnt;
-
- /** Statistics for this input queue. */
- struct lio_iq_stats stats;
-
- /** DMA mapped base address of the input descriptor ring. */
- uint64_t base_addr_dma;
-
- /** Application context */
- void *app_ctx;
-
- /* network stack queue index */
- int q_index;
-
- /* Memory zone */
- const struct rte_memzone *iq_mz;
-};
-
-/** This structure is used by driver to store information required
- * to free the mbuff when the packet has been fetched by Octeon.
- * Bytes offset below assume worst-case of a 64-bit system.
- */
-struct lio_buf_free_info {
- /** Bytes 1-8. Pointer to network device private structure. */
- struct lio_device *lio_dev;
-
- /** Bytes 9-16. Pointer to mbuff. */
- struct rte_mbuf *mbuf;
-
- /** Bytes 17-24. Pointer to gather list. */
- struct lio_gather *g;
-
- /** Bytes 25-32. Physical address of mbuf->data or gather list. */
- uint64_t dptr;
-
- /** Bytes 33-47. Piggybacked soft command, if any */
- struct lio_soft_command *sc;
-
- /** Bytes 48-63. iq no */
- uint64_t iq_no;
-};
-
-/* The Scatter-Gather List Entry. The scatter or gather component used with
- * input instruction has this format.
- */
-struct lio_sg_entry {
- /** The first 64 bit gives the size of data in each dptr. */
- union {
- uint16_t size[4];
- uint64_t size64;
- } u;
-
- /** The 4 dptr pointers for this entry. */
- uint64_t ptr[4];
-};
-
-#define LIO_SG_ENTRY_SIZE (sizeof(struct lio_sg_entry))
-
-/** Structure of a node in list of gather components maintained by
- * driver for each network device.
- */
-struct lio_gather {
- /** List manipulation. Next and prev pointers. */
- struct lio_stailq_node list;
-
- /** Size of the gather component at sg in bytes. */
- int sg_size;
-
- /** Number of bytes that sg was adjusted to make it 8B-aligned. */
- int adjust;
-
- /** Gather component that can accommodate max sized fragment list
- * received from the IP layer.
- */
- struct lio_sg_entry *sg;
-};
-
-struct lio_rss_ctx {
- uint16_t hash_key_size;
- uint8_t hash_key[LIO_RSS_MAX_KEY_SZ];
- /* Ideally a factor of number of queues */
- uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
- uint8_t itable_size;
- uint8_t ip;
- uint8_t tcp_hash;
- uint8_t ipv6;
- uint8_t ipv6_tcp_hash;
- uint8_t ipv6_ex;
- uint8_t ipv6_tcp_ex_hash;
- uint8_t hash_disable;
-};
-
-struct lio_io_enable {
- uint64_t iq;
- uint64_t oq;
- uint64_t iq64B;
-};
-
-struct lio_fn_list {
- void (*setup_iq_regs)(struct lio_device *, uint32_t);
- void (*setup_oq_regs)(struct lio_device *, uint32_t);
-
- int (*setup_mbox)(struct lio_device *);
- void (*free_mbox)(struct lio_device *);
-
- int (*setup_device_regs)(struct lio_device *);
- int (*enable_io_queues)(struct lio_device *);
- void (*disable_io_queues)(struct lio_device *);
-};
-
-struct lio_pf_vf_hs_word {
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- /** PKIND value assigned for the DPI interface */
- uint64_t pkind : 8;
-
- /** OCTEON core clock multiplier */
- uint64_t core_tics_per_us : 16;
-
- /** OCTEON coprocessor clock multiplier */
- uint64_t coproc_tics_per_us : 16;
-
- /** app that currently running on OCTEON */
- uint64_t app_mode : 8;
-
- /** RESERVED */
- uint64_t reserved : 16;
-
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
- /** RESERVED */
- uint64_t reserved : 16;
-
- /** app that currently running on OCTEON */
- uint64_t app_mode : 8;
-
- /** OCTEON coprocessor clock multiplier */
- uint64_t coproc_tics_per_us : 16;
-
- /** OCTEON core clock multiplier */
- uint64_t core_tics_per_us : 16;
-
- /** PKIND value assigned for the DPI interface */
- uint64_t pkind : 8;
-#endif
-};
-
-struct lio_sriov_info {
- /** Number of rings assigned to VF */
- uint32_t rings_per_vf;
-
- /** Number of VF devices enabled */
- uint32_t num_vfs;
-};
-
-/* Head of a response list */
-struct lio_response_list {
- /** List structure to add delete pending entries to */
- struct lio_stailq_head head;
-
- /** A lock for this response list */
- rte_spinlock_t lock;
-
- rte_atomic64_t pending_req_count;
-};
-
-/* Structure to define the configuration attributes for each Input queue. */
-struct lio_iq_config {
- /* Max number of IQs available */
- uint8_t max_iqs;
-
- /** Pending list size (usually set to the sum of the size of all Input
- * queues)
- */
- uint32_t pending_list_size;
-
- /** Command size - 32 or 64 bytes */
- uint32_t instr_type;
-};
-
-/* Structure to define the configuration attributes for each Output queue. */
-struct lio_oq_config {
- /* Max number of OQs available */
- uint8_t max_oqs;
-
- /** If set, the Output queue uses info-pointer mode. (Default: 1 ) */
- uint32_t info_ptr;
-
- /** The number of buffers that were consumed during packet processing by
- * the driver on this Output queue before the driver attempts to
- * replenish the descriptor ring with new buffers.
- */
- uint32_t refill_threshold;
-};
-
-/* Structure to define the configuration. */
-struct lio_config {
- uint16_t card_type;
- const char *card_name;
-
- /** Input Queue attributes. */
- struct lio_iq_config iq;
-
- /** Output Queue attributes. */
- struct lio_oq_config oq;
-
- int num_nic_ports;
-
- int num_def_tx_descs;
-
- /* Num of desc for rx rings */
- int num_def_rx_descs;
-
- int def_rx_buf_size;
-};
-
-/** Status of a RGMII Link on Octeon as seen by core driver. */
-union octeon_link_status {
- uint64_t link_status64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t duplex : 8;
- uint64_t mtu : 16;
- uint64_t speed : 16;
- uint64_t link_up : 1;
- uint64_t autoneg : 1;
- uint64_t if_mode : 5;
- uint64_t pause : 1;
- uint64_t flashing : 1;
- uint64_t reserved : 15;
-#else
- uint64_t reserved : 15;
- uint64_t flashing : 1;
- uint64_t pause : 1;
- uint64_t if_mode : 5;
- uint64_t autoneg : 1;
- uint64_t link_up : 1;
- uint64_t speed : 16;
- uint64_t mtu : 16;
- uint64_t duplex : 8;
-#endif
- } s;
-};
-
-/** The rxpciq info passed to host from the firmware */
-union octeon_rxpciq {
- uint64_t rxpciq64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t q_no : 8;
- uint64_t reserved : 56;
-#else
- uint64_t reserved : 56;
- uint64_t q_no : 8;
-#endif
- } s;
-};
-
-/** Information for a OCTEON ethernet interface shared between core & host. */
-struct octeon_link_info {
- union octeon_link_status link;
- uint64_t hw_addr;
-
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t gmxport : 16;
- uint64_t macaddr_is_admin_assigned : 1;
- uint64_t vlan_is_admin_assigned : 1;
- uint64_t rsvd : 30;
- uint64_t num_txpciq : 8;
- uint64_t num_rxpciq : 8;
-#else
- uint64_t num_rxpciq : 8;
- uint64_t num_txpciq : 8;
- uint64_t rsvd : 30;
- uint64_t vlan_is_admin_assigned : 1;
- uint64_t macaddr_is_admin_assigned : 1;
- uint64_t gmxport : 16;
-#endif
-
- union octeon_txpciq txpciq[LIO_MAX_IOQS_PER_IF];
- union octeon_rxpciq rxpciq[LIO_MAX_IOQS_PER_IF];
-};
-
-/* ----------------------- THE LIO DEVICE --------------------------- */
-/** The lio device.
- * Each lio device has this structure to represent all its
- * components.
- */
-struct lio_device {
- /** PCI device pointer */
- struct rte_pci_device *pci_dev;
-
- /** Octeon Chip type */
- uint16_t chip_id;
- uint16_t pf_num;
- uint16_t vf_num;
-
- /** This device's PCIe port used for traffic. */
- uint16_t pcie_port;
-
- /** The state of this device */
- rte_atomic64_t status;
-
- uint8_t intf_open;
-
- struct octeon_link_info linfo;
-
- uint8_t *hw_addr;
-
- struct lio_fn_list fn_list;
-
- uint32_t num_iqs;
-
- /** Guards each glist */
- rte_spinlock_t *glist_lock;
- /** Array of gather component linked lists */
- struct lio_stailq_head *glist_head;
-
- /* The pool containing pre allocated buffers used for soft commands */
- struct rte_mempool *sc_buf_pool;
-
- /** The input instruction queues */
- struct lio_instr_queue *instr_queue[LIO_MAX_POSSIBLE_INSTR_QUEUES];
-
- /** The singly-linked tail queues of instruction response */
- struct lio_response_list response_list;
-
- uint32_t num_oqs;
-
- /** The DROQ output queues */
- struct lio_droq *droq[LIO_MAX_POSSIBLE_OUTPUT_QUEUES];
-
- struct lio_io_enable io_qmask;
-
- struct lio_sriov_info sriov_info;
-
- struct lio_pf_vf_hs_word pfvf_hsword;
-
- /** Mail Box details of each lio queue. */
- struct lio_mbox **mbox;
-
- char dev_string[LIO_DEVICE_NAME_LEN]; /* Device print string */
-
- const struct lio_config *default_config;
-
- struct rte_eth_dev *eth_dev;
-
- uint64_t ifflags;
- uint8_t max_rx_queues;
- uint8_t max_tx_queues;
- uint8_t nb_rx_queues;
- uint8_t nb_tx_queues;
- uint8_t port_configured;
- struct lio_rss_ctx rss_state;
- uint16_t port_id;
- char firmware_version[LIO_FW_VERSION_LENGTH];
-};
-#endif /* _LIO_STRUCT_H_ */
diff --git a/drivers/net/liquidio/meson.build b/drivers/net/liquidio/meson.build
deleted file mode 100644
index ebadbf3dea..0000000000
--- a/drivers/net/liquidio/meson.build
+++ /dev/null
@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-if is_windows
- build = false
- reason = 'not supported on Windows'
- subdir_done()
-endif
-
-sources = files(
- 'base/lio_23xx_vf.c',
- 'base/lio_mbox.c',
- 'lio_ethdev.c',
- 'lio_rxtx.c',
-)
-includes += include_directories('base')
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index b1df17ce8c..f68bbc27a7 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -36,7 +36,6 @@ drivers = [
'ipn3ke',
'ixgbe',
'kni',
- 'liquidio',
'mana',
'memif',
'mlx4',
--
2.40.1
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [dpdk-dev] [PATCH] net/liquidio: removed LiquidIO ethdev driver
2023-04-28 10:31 [dpdk-dev] [PATCH] net/liquidio: removed LiquidIO ethdev driver jerinj
@ 2023-05-02 14:18 ` Ferruh Yigit
2023-05-08 13:44 ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
1 sibling, 0 replies; 4+ messages in thread
From: Ferruh Yigit @ 2023-05-02 14:18 UTC (permalink / raw)
To: jerinj, dev, Thomas Monjalon, Shijith Thotton,
Srisivasubramanian Srinivasan, Anatoly Burakov, David Marchand
On 4/28/2023 11:31 AM, jerinj@marvell.com wrote:
> From: Jerin Jacob <jerinj@marvell.com>
>
> The LiquidIO product line has been substituted with CN9K/CN10K
> OCTEON product line smart NICs located at drivers/net/octeon_ep/.
>
> DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
> because of the absence of updates in the driver.
>
> Due to the above reasons, the driver removed from DPDK 23.07.
>
> Also removed deprecation notice entry for the removal in
> doc/guides/rel_notes/deprecation.rst.
>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> ---
> MAINTAINERS | 8 -
> doc/guides/nics/features/liquidio.ini | 29 -
> doc/guides/nics/index.rst | 1 -
> doc/guides/nics/liquidio.rst | 169 --
> doc/guides/rel_notes/deprecation.rst | 7 -
> doc/guides/rel_notes/release_23_07.rst | 9 +-
> drivers/net/liquidio/base/lio_23xx_reg.h | 165 --
> drivers/net/liquidio/base/lio_23xx_vf.c | 513 ------
> drivers/net/liquidio/base/lio_23xx_vf.h | 63 -
> drivers/net/liquidio/base/lio_hw_defs.h | 239 ---
> drivers/net/liquidio/base/lio_mbox.c | 246 ---
> drivers/net/liquidio/base/lio_mbox.h | 102 -
> drivers/net/liquidio/lio_ethdev.c | 2147 ----------------------
> drivers/net/liquidio/lio_ethdev.h | 179 --
> drivers/net/liquidio/lio_logs.h | 58 -
> drivers/net/liquidio/lio_rxtx.c | 1804 ------------------
> drivers/net/liquidio/lio_rxtx.h | 740 --------
> drivers/net/liquidio/lio_struct.h | 661 -------
> drivers/net/liquidio/meson.build | 16 -
> drivers/net/meson.build | 1 -
> 20 files changed, 1 insertion(+), 7156 deletions(-)
> delete mode 100644 doc/guides/nics/features/liquidio.ini
> delete mode 100644 doc/guides/nics/liquidio.rst
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
> delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
> delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
> delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
> delete mode 100644 drivers/net/liquidio/lio_ethdev.c
> delete mode 100644 drivers/net/liquidio/lio_ethdev.h
> delete mode 100644 drivers/net/liquidio/lio_logs.h
> delete mode 100644 drivers/net/liquidio/lio_rxtx.c
> delete mode 100644 drivers/net/liquidio/lio_rxtx.h
> delete mode 100644 drivers/net/liquidio/lio_struct.h
> delete mode 100644 drivers/net/liquidio/meson.build
>
This causes a warning in the ABI check script [1], not because there is an
actual ABI breakage, but because of how the script works; that needs to be fixed as well.
[1]
Checking ABI compatibility of build-gcc-shared
.../dpdk-next-net/devtools/../devtools/check-abi.sh
/tmp/dpdk-abiref/v22.11.1/build-gcc-shared
.../dpdk-next-net/build-gcc-shared/install
Error: cannot find librte_net_liquidio.so.23.0 in
.../dpdk-next-net/build-gcc-shared/install
<...>
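The v2 below addresses this by adding the removed library to the
SKIP_LIBRARY entries in devtools/libabigail.abignore, mirroring the
convention already used there for the mlx glue libraries. A minimal
sketch of that entry:

    ; SKIP_LIBRARY=librte_net_liquidio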
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -59,14 +59,7 @@ New Features
> Removed Items
> -------------
>
> -.. This section should contain removed items in this release. Sample format:
> -
> - * Add a short 1-2 sentence description of the removed item
> - in the past tense.
> -
> - This section is a comment. Do not overwrite or remove it.
> - Also, make sure to start the actual text at the margin.
> - =======================================================
> +* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``
>
>
No need to remove the section comment.
Rest looks good to me.
^ permalink raw reply [flat|nested] 4+ messages in thread
* [dpdk-dev] [PATCH v2] net/liquidio: remove LiquidIO ethdev driver
2023-04-28 10:31 [dpdk-dev] [PATCH] net/liquidio: removed LiquidIO ethdev driver jerinj
2023-05-02 14:18 ` Ferruh Yigit
@ 2023-05-08 13:44 ` jerinj
2023-05-17 15:47 ` Jerin Jacob
1 sibling, 1 reply; 4+ messages in thread
From: jerinj @ 2023-05-08 13:44 UTC (permalink / raw)
To: dev, Thomas Monjalon, Anatoly Burakov
Cc: david.marchand, ferruh.yigit, Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
The LiquidIO product line has been superseded by the CN9K/CN10K
OCTEON product line smart NICs located at drivers/net/octeon_ep/.
DPDK 20.08 categorized the LiquidIO driver as UNMAINTAINED
because of the absence of updates in the driver.
For the above reasons, the driver is removed in DPDK 23.07.
Also removed the deprecation notice entry for the removal in
doc/guides/rel_notes/deprecation.rst and skipped the removed
driver library in the ABI check in devtools/libabigail.abignore.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
v2:
- Skip driver ABI check (Ferruh)
- Addressed the review comments in
http://patches.dpdk.org/project/dpdk/patch/20230428103127.1059989-1-jerinj@marvell.com/ (Ferruh)
MAINTAINERS | 8 -
devtools/libabigail.abignore | 1 +
doc/guides/nics/features/liquidio.ini | 29 -
doc/guides/nics/index.rst | 1 -
doc/guides/nics/liquidio.rst | 169 --
doc/guides/rel_notes/deprecation.rst | 7 -
doc/guides/rel_notes/release_23_07.rst | 2 +
drivers/net/liquidio/base/lio_23xx_reg.h | 165 --
drivers/net/liquidio/base/lio_23xx_vf.c | 513 ------
drivers/net/liquidio/base/lio_23xx_vf.h | 63 -
drivers/net/liquidio/base/lio_hw_defs.h | 239 ---
drivers/net/liquidio/base/lio_mbox.c | 246 ---
drivers/net/liquidio/base/lio_mbox.h | 102 -
drivers/net/liquidio/lio_ethdev.c | 2147 ----------------------
drivers/net/liquidio/lio_ethdev.h | 179 --
drivers/net/liquidio/lio_logs.h | 58 -
drivers/net/liquidio/lio_rxtx.c | 1804 ------------------
drivers/net/liquidio/lio_rxtx.h | 740 --------
drivers/net/liquidio/lio_struct.h | 661 -------
drivers/net/liquidio/meson.build | 16 -
drivers/net/meson.build | 1 -
21 files changed, 3 insertions(+), 7148 deletions(-)
delete mode 100644 doc/guides/nics/features/liquidio.ini
delete mode 100644 doc/guides/nics/liquidio.rst
delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
delete mode 100644 drivers/net/liquidio/lio_ethdev.c
delete mode 100644 drivers/net/liquidio/lio_ethdev.h
delete mode 100644 drivers/net/liquidio/lio_logs.h
delete mode 100644 drivers/net/liquidio/lio_rxtx.c
delete mode 100644 drivers/net/liquidio/lio_rxtx.h
delete mode 100644 drivers/net/liquidio/lio_struct.h
delete mode 100644 drivers/net/liquidio/meson.build
diff --git a/MAINTAINERS b/MAINTAINERS
index 8df23e5099..0157c26dd2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -681,14 +681,6 @@ F: drivers/net/thunderx/
F: doc/guides/nics/thunderx.rst
F: doc/guides/nics/features/thunderx.ini
-Cavium LiquidIO - UNMAINTAINED
-M: Shijith Thotton <sthotton@marvell.com>
-M: Srisivasubramanian Srinivasan <srinivasan@marvell.com>
-T: git://dpdk.org/next/dpdk-next-net-mrvl
-F: drivers/net/liquidio/
-F: doc/guides/nics/liquidio.rst
-F: doc/guides/nics/features/liquidio.ini
-
Cavium OCTEON TX
M: Harman Kalra <hkalra@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 3ff51509de..c0361bfc7b 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -25,6 +25,7 @@
;
; SKIP_LIBRARY=librte_common_mlx5_glue
; SKIP_LIBRARY=librte_net_mlx4_glue
+; SKIP_LIBRARY=librte_net_liquidio
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Experimental APIs exceptions ;
diff --git a/doc/guides/nics/features/liquidio.ini b/doc/guides/nics/features/liquidio.ini
deleted file mode 100644
index a8bde282e0..0000000000
--- a/doc/guides/nics/features/liquidio.ini
+++ /dev/null
@@ -1,29 +0,0 @@
-;
-; Supported features of the 'LiquidIO' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Link status = Y
-Link status event = Y
-MTU update = Y
-Scattered Rx = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-VLAN filter = Y
-CRC offload = Y
-VLAN offload = P
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Basic stats = Y
-Extended stats = Y
-Multiprocess aware = Y
-Linux = Y
-x86-64 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 5c9d1edf5e..31296822e5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -44,7 +44,6 @@ Network Interface Controller Drivers
ipn3ke
ixgbe
kni
- liquidio
mana
memif
mlx4
diff --git a/doc/guides/nics/liquidio.rst b/doc/guides/nics/liquidio.rst
deleted file mode 100644
index f893b3b539..0000000000
--- a/doc/guides/nics/liquidio.rst
+++ /dev/null
@@ -1,169 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2017 Cavium, Inc
-
-LiquidIO VF Poll Mode Driver
-============================
-
-The LiquidIO VF PMD library (**librte_net_liquidio**) provides poll mode driver support for
-Cavium LiquidIO® II server adapter VFs. PF management and VF creation can be
-done using kernel driver.
-
-More information can be found at `Cavium Official Website
-<http://cavium.com/LiquidIO_Adapters.html>`_.
-
-Supported LiquidIO Adapters
------------------------------
-
-- LiquidIO II CN2350 210SV/225SV
-- LiquidIO II CN2350 210SVPT
-- LiquidIO II CN2360 210SV/225SV
-- LiquidIO II CN2360 210SVPT
-
-
-SR-IOV: Prerequisites and Sample Application Notes
---------------------------------------------------
-
-This section provides instructions to configure SR-IOV with Linux OS.
-
-#. Verify SR-IOV and ARI capabilities are enabled on the adapter using ``lspci``:
-
- .. code-block:: console
-
- lspci -s <slot> -vvv
-
- Example output:
-
- .. code-block:: console
-
- [...]
- Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
- [...]
- Capabilities: [178 v1] Single Root I/O Virtualization (SR-IOV)
- [...]
- Kernel driver in use: LiquidIO
-
-#. Load the kernel module:
-
- .. code-block:: console
-
- modprobe liquidio
-
-#. Bring up the PF ports:
-
- .. code-block:: console
-
- ifconfig p4p1 up
- ifconfig p4p2 up
-
-#. Change PF MTU if required:
-
- .. code-block:: console
-
- ifconfig p4p1 mtu 9000
- ifconfig p4p2 mtu 9000
-
-#. Create VF device(s):
-
- Echo number of VFs to be created into ``"sriov_numvfs"`` sysfs entry
- of the parent PF.
-
- .. code-block:: console
-
- echo 1 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
- echo 1 > /sys/bus/pci/devices/0000:03:00.1/sriov_numvfs
-
-#. Assign VF MAC address:
-
- Assign MAC address to the VF using iproute2 utility. The syntax is::
-
- ip link set <PF iface> vf <VF id> mac <macaddr>
-
- Example output:
-
- .. code-block:: console
-
- ip link set p4p1 vf 0 mac F2:A8:1B:5E:B4:66
-
-#. Assign VF(s) to VM.
-
- The VF devices may be passed through to the guest VM using qemu or
- virt-manager or virsh etc.
-
- Example qemu guest launch command:
-
- .. code-block:: console
-
- ./qemu-system-x86_64 -name lio-vm -machine accel=kvm \
- -cpu host -m 4096 -smp 4 \
- -drive file=<disk_file>,if=none,id=disk1,format=<type> \
- -device virtio-blk-pci,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
- -device vfio-pci,host=03:00.3 -device vfio-pci,host=03:08.3
-
-#. Running testpmd
-
- Refer to the document
- :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to run
- ``testpmd`` application.
-
- .. note::
-
- Use ``igb_uio`` instead of ``vfio-pci`` in VM.
-
- Example output:
-
- .. code-block:: console
-
- [...]
- EAL: PCI device 0000:03:00.3 on NUMA socket 0
- EAL: probe driver: 177d:9712 net_liovf
- EAL: using IOMMU type 1 (Type 1)
- PMD: net_liovf[03:00.3]INFO: DEVICE : CN23XX VF
- EAL: PCI device 0000:03:08.3 on NUMA socket 0
- EAL: probe driver: 177d:9712 net_liovf
- PMD: net_liovf[03:08.3]INFO: DEVICE : CN23XX VF
- Interactive-mode selected
- USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
- Configuring Port 0 (socket 0)
- PMD: net_liovf[03:00.3]INFO: Starting port 0
- Port 0: F2:A8:1B:5E:B4:66
- Configuring Port 1 (socket 0)
- PMD: net_liovf[03:08.3]INFO: Starting port 1
- Port 1: 32:76:CC:EE:56:D7
- Checking link statuses...
- Port 0 Link Up - speed 10000 Mbps - full-duplex
- Port 1 Link Up - speed 10000 Mbps - full-duplex
- Done
- testpmd>
-
-#. Enabling VF promiscuous mode
-
- One VF per PF can be marked as trusted for promiscuous mode.
-
- .. code-block:: console
-
- ip link set dev <PF iface> vf <VF id> trust on
-
-
-Limitations
------------
-
-VF MTU
-~~~~~~
-
-VF MTU is limited by PF MTU. Raise PF value before configuring VF for larger packet size.
-
-VLAN offload
-~~~~~~~~~~~~
-
-Tx VLAN insertion is not supported and consequently VLAN offload feature is
-marked partial.
-
-Ring size
-~~~~~~~~~
-
-Number of descriptors for Rx/Tx ring should be in the range 128 to 512.
-
-CRC stripping
-~~~~~~~~~~~~~
-
-LiquidIO adapters strip ethernet FCS of every packet coming to the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..8e1cdd677a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -121,13 +121,6 @@ Deprecation Notices
* net/bnx2x: Starting from DPDK 23.07, the Marvell QLogic bnx2x driver will be removed.
This decision has been made to alleviate the burden of maintaining a discontinued product.
-* net/liquidio: Remove LiquidIO ethdev driver.
- The LiquidIO product line has been substituted
- with CN9K/CN10K OCTEON product line smart NICs located in ``drivers/net/octeon_ep/``.
- DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
- because of the absence of updates in the driver.
- Due to the above reasons, the driver will be unavailable from DPDK 23.07.
-
* cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
to have another parameter ``qp_id`` to return the queue pair ID
which got error interrupt to the application,
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..f13a7b32b6 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -68,6 +68,8 @@ Removed Items
Also, make sure to start the actual text at the margin.
=======================================================
+* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``
+
API Changes
-----------
diff --git a/drivers/net/liquidio/base/lio_23xx_reg.h b/drivers/net/liquidio/base/lio_23xx_reg.h
deleted file mode 100644
index 9f28504b53..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_reg.h
+++ /dev/null
@@ -1,165 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_23XX_REG_H_
-#define _LIO_23XX_REG_H_
-
-/* ###################### REQUEST QUEUE ######################### */
-
-/* 64 registers for Input Queues Start Addr - SLI_PKT(0..63)_INSTR_BADDR */
-#define CN23XX_SLI_PKT_INSTR_BADDR_START64 0x10010
-
-/* 64 registers for Input Doorbell - SLI_PKT(0..63)_INSTR_BAOFF_DBELL */
-#define CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START 0x10020
-
-/* 64 registers for Input Queue size - SLI_PKT(0..63)_INSTR_FIFO_RSIZE */
-#define CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START 0x10030
-
-/* 64 registers for Input Queue Instr Count - SLI_PKT_IN_DONE(0..63)_CNTS */
-#define CN23XX_SLI_PKT_IN_DONE_CNTS_START64 0x10040
-
-/* 64 registers (64-bit) - ES, RO, NS, Arbitration for Input Queue Data &
- * gather list fetches. SLI_PKT(0..63)_INPUT_CONTROL.
- */
-#define CN23XX_SLI_PKT_INPUT_CONTROL_START64 0x10000
-
-/* ------- Request Queue Macros --------- */
-
-/* Each Input Queue register is at a 16-byte Offset in BAR0 */
-#define CN23XX_IQ_OFFSET 0x20000
-
-#define CN23XX_SLI_IQ_PKT_CONTROL64(iq) \
- (CN23XX_SLI_PKT_INPUT_CONTROL_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_BASE_ADDR64(iq) \
- (CN23XX_SLI_PKT_INSTR_BADDR_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_SIZE(iq) \
- (CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_DOORBELL(iq) \
- (CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_INSTR_COUNT64(iq) \
- (CN23XX_SLI_PKT_IN_DONE_CNTS_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-/* Number of instructions to be read in one MAC read request.
- * setting to Max value(4)
- */
-#define CN23XX_PKT_INPUT_CTL_RDSIZE (3 << 25)
-#define CN23XX_PKT_INPUT_CTL_IS_64B (1 << 24)
-#define CN23XX_PKT_INPUT_CTL_RST (1 << 23)
-#define CN23XX_PKT_INPUT_CTL_QUIET (1 << 28)
-#define CN23XX_PKT_INPUT_CTL_RING_ENB (1 << 22)
-#define CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP (1 << 6)
-#define CN23XX_PKT_INPUT_CTL_USE_CSR (1 << 4)
-#define CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP (2)
-
-/* These bits[47:44] select the Physical function number within the MAC */
-#define CN23XX_PKT_INPUT_CTL_PF_NUM_POS 45
-/* These bits[43:32] select the function number within the PF */
-#define CN23XX_PKT_INPUT_CTL_VF_NUM_POS 32
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-#define CN23XX_PKT_INPUT_CTL_MASK \
- (CN23XX_PKT_INPUT_CTL_RDSIZE | \
- CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
- CN23XX_PKT_INPUT_CTL_USE_CSR)
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-#define CN23XX_PKT_INPUT_CTL_MASK \
- (CN23XX_PKT_INPUT_CTL_RDSIZE | \
- CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
- CN23XX_PKT_INPUT_CTL_USE_CSR | \
- CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP)
-#endif
-
-/* ############################ OUTPUT QUEUE ######################### */
-
-/* 64 registers for Output queue control - SLI_PKT(0..63)_OUTPUT_CONTROL */
-#define CN23XX_SLI_PKT_OUTPUT_CONTROL_START 0x10050
-
-/* 64 registers for Output queue buffer and info size
- * SLI_PKT(0..63)_OUT_SIZE
- */
-#define CN23XX_SLI_PKT_OUT_SIZE 0x10060
-
-/* 64 registers for Output Queue Start Addr - SLI_PKT(0..63)_SLIST_BADDR */
-#define CN23XX_SLI_SLIST_BADDR_START64 0x10070
-
-/* 64 registers for Output Queue Packet Credits
- * SLI_PKT(0..63)_SLIST_BAOFF_DBELL
- */
-#define CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START 0x10080
-
-/* 64 registers for Output Queue size - SLI_PKT(0..63)_SLIST_FIFO_RSIZE */
-#define CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START 0x10090
-
-/* 64 registers for Output Queue Packet Count - SLI_PKT(0..63)_CNTS */
-#define CN23XX_SLI_PKT_CNTS_START 0x100B0
-
-/* Each Output Queue register is at a 16-byte Offset in BAR0 */
-#define CN23XX_OQ_OFFSET 0x20000
-
-/* ------- Output Queue Macros --------- */
-
-#define CN23XX_SLI_OQ_PKT_CONTROL(oq) \
- (CN23XX_SLI_PKT_OUTPUT_CONTROL_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_BASE_ADDR64(oq) \
- (CN23XX_SLI_SLIST_BADDR_START64 + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_SIZE(oq) \
- (CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq) \
- (CN23XX_SLI_PKT_OUT_SIZE + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_PKTS_SENT(oq) \
- (CN23XX_SLI_PKT_CNTS_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_PKTS_CREDIT(oq) \
- (CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START + ((oq) * CN23XX_OQ_OFFSET))
-
-/* ------------------ Masks ---------------- */
-#define CN23XX_PKT_OUTPUT_CTL_IPTR (1 << 11)
-#define CN23XX_PKT_OUTPUT_CTL_ES (1 << 9)
-#define CN23XX_PKT_OUTPUT_CTL_NSR (1 << 8)
-#define CN23XX_PKT_OUTPUT_CTL_ROR (1 << 7)
-#define CN23XX_PKT_OUTPUT_CTL_DPTR (1 << 6)
-#define CN23XX_PKT_OUTPUT_CTL_BMODE (1 << 5)
-#define CN23XX_PKT_OUTPUT_CTL_ES_P (1 << 3)
-#define CN23XX_PKT_OUTPUT_CTL_NSR_P (1 << 2)
-#define CN23XX_PKT_OUTPUT_CTL_ROR_P (1 << 1)
-#define CN23XX_PKT_OUTPUT_CTL_RING_ENB (1 << 0)
-
-/* Rings per Virtual Function [RO] */
-#define CN23XX_PKT_INPUT_CTL_RPVF_MASK 0x3F
-#define CN23XX_PKT_INPUT_CTL_RPVF_POS 48
-
-/* These bits[47:44][RO] give the Physical function
- * number info within the MAC
- */
-#define CN23XX_PKT_INPUT_CTL_PF_NUM_MASK 0x7
-
-/* These bits[43:32][RO] give the virtual function
- * number info within the PF
- */
-#define CN23XX_PKT_INPUT_CTL_VF_NUM_MASK 0x1FFF
-
-/* ######################### Mailbox Reg Macros ######################## */
-#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START 0x10200
-#define CN23XX_VF_SLI_PKT_MBOX_INT_START 0x10210
-
-#define CN23XX_SLI_MBOX_OFFSET 0x20000
-#define CN23XX_SLI_MBOX_SIG_IDX_OFFSET 0x8
-
-#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG(q, idx) \
- (CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START + \
- ((q) * CN23XX_SLI_MBOX_OFFSET + \
- (idx) * CN23XX_SLI_MBOX_SIG_IDX_OFFSET))
-
-#define CN23XX_VF_SLI_PKT_MBOX_INT(q) \
- (CN23XX_VF_SLI_PKT_MBOX_INT_START + ((q) * CN23XX_SLI_MBOX_OFFSET))
-
-#endif /* _LIO_23XX_REG_H_ */
diff --git a/drivers/net/liquidio/base/lio_23xx_vf.c b/drivers/net/liquidio/base/lio_23xx_vf.c
deleted file mode 100644
index c6b8310b71..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_vf.c
+++ /dev/null
@@ -1,513 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <string.h>
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "lio_logs.h"
-#include "lio_23xx_vf.h"
-#include "lio_23xx_reg.h"
-#include "lio_mbox.h"
-
-static int
-cn23xx_vf_reset_io_queues(struct lio_device *lio_dev, uint32_t num_queues)
-{
- uint32_t loop = CN23XX_VF_BUSY_READING_REG_LOOP_COUNT;
- uint64_t d64, q_no;
- int ret_val = 0;
-
- PMD_INIT_FUNC_TRACE();
-
- for (q_no = 0; q_no < num_queues; q_no++) {
- /* set RST bit to 1. This bit applies to both IQ and OQ */
- d64 = lio_read_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- d64 = d64 | CN23XX_PKT_INPUT_CTL_RST;
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- d64);
- }
-
- /* wait until the RST bit is clear or the RST and QUIET bits are set */
- for (q_no = 0; q_no < num_queues; q_no++) {
- volatile uint64_t reg_val;
-
- reg_val = lio_read_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- while ((reg_val & CN23XX_PKT_INPUT_CTL_RST) &&
- !(reg_val & CN23XX_PKT_INPUT_CTL_QUIET) &&
- loop) {
- reg_val = lio_read_csr64(
- lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- loop = loop - 1;
- }
-
- if (loop == 0) {
- lio_dev_err(lio_dev,
- "clearing the reset reg failed or setting the quiet reg failed for qno: %lu\n",
- (unsigned long)q_no);
- return -1;
- }
-
- reg_val = reg_val & ~CN23XX_PKT_INPUT_CTL_RST;
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- reg_val);
-
- reg_val = lio_read_csr64(
- lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- if (reg_val & CN23XX_PKT_INPUT_CTL_RST) {
- lio_dev_err(lio_dev,
- "clearing the reset failed for qno: %lu\n",
- (unsigned long)q_no);
- ret_val = -1;
- }
- }
-
- return ret_val;
-}
-
-static int
-cn23xx_vf_setup_global_input_regs(struct lio_device *lio_dev)
-{
- uint64_t q_no;
- uint64_t d64;
-
- PMD_INIT_FUNC_TRACE();
-
- if (cn23xx_vf_reset_io_queues(lio_dev,
- lio_dev->sriov_info.rings_per_vf))
- return -1;
-
- for (q_no = 0; q_no < (lio_dev->sriov_info.rings_per_vf); q_no++) {
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_DOORBELL(q_no),
- 0xFFFFFFFF);
-
- d64 = lio_read_csr64(lio_dev,
- CN23XX_SLI_IQ_INSTR_COUNT64(q_no));
-
- d64 &= 0xEFFFFFFFFFFFFFFFL;
-
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_INSTR_COUNT64(q_no),
- d64);
-
- /* Select ES, RO, NS, RDSIZE,DPTR Fomat#0 for
- * the Input Queues
- */
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- CN23XX_PKT_INPUT_CTL_MASK);
- }
-
- return 0;
-}
-
-static void
-cn23xx_vf_setup_global_output_regs(struct lio_device *lio_dev)
-{
- uint32_t reg_val;
- uint32_t q_no;
-
- PMD_INIT_FUNC_TRACE();
-
- for (q_no = 0; q_no < lio_dev->sriov_info.rings_per_vf; q_no++) {
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_CREDIT(q_no),
- 0xFFFFFFFF);
-
- reg_val =
- lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no));
-
- reg_val &= 0xEFFFFFFFFFFFFFFFL;
-
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no), reg_val);
-
- reg_val =
- lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
-
- /* set IPTR & DPTR */
- reg_val |=
- (CN23XX_PKT_OUTPUT_CTL_IPTR | CN23XX_PKT_OUTPUT_CTL_DPTR);
-
- /* reset BMODE */
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_BMODE);
-
- /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
- * for Output Queue Scatter List
- * reset ROR_P, NSR_P
- */
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR_P);
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR_P);
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ES_P);
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES_P);
-#endif
- /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
- * for Output Queue Data
- * reset ROR, NSR
- */
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR);
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR);
- /* set the ES bit */
- reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES);
-
- /* write all the selected settings */
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no),
- reg_val);
- }
-}
-
-static int
-cn23xx_vf_setup_device_regs(struct lio_device *lio_dev)
-{
- PMD_INIT_FUNC_TRACE();
-
- if (cn23xx_vf_setup_global_input_regs(lio_dev))
- return -1;
-
- cn23xx_vf_setup_global_output_regs(lio_dev);
-
- return 0;
-}
-
-static void
-cn23xx_vf_setup_iq_regs(struct lio_device *lio_dev, uint32_t iq_no)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
- uint64_t pkt_in_done = 0;
-
- PMD_INIT_FUNC_TRACE();
-
- /* Write the start of the input queue's ring and its size */
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_BASE_ADDR64(iq_no),
- iq->base_addr_dma);
- lio_write_csr(lio_dev, CN23XX_SLI_IQ_SIZE(iq_no), iq->nb_desc);
-
- /* Remember the doorbell & instruction count register addr
- * for this queue
- */
- iq->doorbell_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_IQ_DOORBELL(iq_no);
- iq->inst_cnt_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_IQ_INSTR_COUNT64(iq_no);
- lio_dev_dbg(lio_dev, "InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n",
- iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
-
- /* Store the current instruction counter (used in flush_iq
- * calculation)
- */
- pkt_in_done = rte_read64(iq->inst_cnt_reg);
-
- /* Clear the count by writing back what we read, but don't
- * enable data traffic here
- */
- rte_write64(pkt_in_done, iq->inst_cnt_reg);
-}
-
-static void
-cn23xx_vf_setup_oq_regs(struct lio_device *lio_dev, uint32_t oq_no)
-{
- struct lio_droq *droq = lio_dev->droq[oq_no];
-
- PMD_INIT_FUNC_TRACE();
-
- lio_write_csr64(lio_dev, CN23XX_SLI_OQ_BASE_ADDR64(oq_no),
- droq->desc_ring_dma);
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_SIZE(oq_no), droq->nb_desc);
-
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq_no),
- (droq->buffer_size | (OCTEON_RH_SIZE << 16)));
-
- /* Get the mapped address of the pkt_sent and pkts_credit regs */
- droq->pkts_sent_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_OQ_PKTS_SENT(oq_no);
- droq->pkts_credit_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_OQ_PKTS_CREDIT(oq_no);
-}
-
-static void
-cn23xx_vf_free_mbox(struct lio_device *lio_dev)
-{
- PMD_INIT_FUNC_TRACE();
-
- rte_free(lio_dev->mbox[0]);
- lio_dev->mbox[0] = NULL;
-
- rte_free(lio_dev->mbox);
- lio_dev->mbox = NULL;
-}
-
-static int
-cn23xx_vf_setup_mbox(struct lio_device *lio_dev)
-{
- struct lio_mbox *mbox;
-
- PMD_INIT_FUNC_TRACE();
-
- if (lio_dev->mbox == NULL) {
- lio_dev->mbox = rte_zmalloc(NULL, sizeof(void *), 0);
- if (lio_dev->mbox == NULL)
- return -ENOMEM;
- }
-
- mbox = rte_zmalloc(NULL, sizeof(struct lio_mbox), 0);
- if (mbox == NULL) {
- rte_free(lio_dev->mbox);
- lio_dev->mbox = NULL;
- return -ENOMEM;
- }
-
- rte_spinlock_init(&mbox->lock);
-
- mbox->lio_dev = lio_dev;
-
- mbox->q_no = 0;
-
- mbox->state = LIO_MBOX_STATE_IDLE;
-
- /* VF mbox interrupt reg */
- mbox->mbox_int_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_VF_SLI_PKT_MBOX_INT(0);
- /* VF reads from SIG0 reg */
- mbox->mbox_read_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 0);
- /* VF writes into SIG1 reg */
- mbox->mbox_write_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 1);
-
- lio_dev->mbox[0] = mbox;
-
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-
- return 0;
-}
-
-static int
-cn23xx_vf_enable_io_queues(struct lio_device *lio_dev)
-{
- uint32_t q_no;
-
- PMD_INIT_FUNC_TRACE();
-
- for (q_no = 0; q_no < lio_dev->num_iqs; q_no++) {
- uint64_t reg_val;
-
- /* set the corresponding IQ IS_64B bit */
- if (lio_dev->io_qmask.iq64B & (1ULL << q_no)) {
- reg_val = lio_read_csr64(
- lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- reg_val = reg_val | CN23XX_PKT_INPUT_CTL_IS_64B;
- lio_write_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- reg_val);
- }
-
- /* set the corresponding IQ ENB bit */
- if (lio_dev->io_qmask.iq & (1ULL << q_no)) {
- reg_val = lio_read_csr64(
- lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- reg_val = reg_val | CN23XX_PKT_INPUT_CTL_RING_ENB;
- lio_write_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- reg_val);
- }
- }
- for (q_no = 0; q_no < lio_dev->num_oqs; q_no++) {
- uint32_t reg_val;
-
- /* set the corresponding OQ ENB bit */
- if (lio_dev->io_qmask.oq & (1ULL << q_no)) {
- reg_val = lio_read_csr(
- lio_dev,
- CN23XX_SLI_OQ_PKT_CONTROL(q_no));
- reg_val = reg_val | CN23XX_PKT_OUTPUT_CTL_RING_ENB;
- lio_write_csr(lio_dev,
- CN23XX_SLI_OQ_PKT_CONTROL(q_no),
- reg_val);
- }
- }
-
- return 0;
-}
-
-static void
-cn23xx_vf_disable_io_queues(struct lio_device *lio_dev)
-{
- uint32_t num_queues;
-
- PMD_INIT_FUNC_TRACE();
-
- /* per HRM, rings can only be disabled via reset operation,
- * NOT via SLI_PKT()_INPUT/OUTPUT_CONTROL[ENB]
- */
- num_queues = lio_dev->num_iqs;
- if (num_queues < lio_dev->num_oqs)
- num_queues = lio_dev->num_oqs;
-
- cn23xx_vf_reset_io_queues(lio_dev, num_queues);
-}
-
-void
-cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev)
-{
- struct lio_mbox_cmd mbox_cmd;
-
- memset(&mbox_cmd, 0, sizeof(struct lio_mbox_cmd));
- mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
- mbox_cmd.msg.s.resp_needed = 0;
- mbox_cmd.msg.s.cmd = LIO_VF_FLR_REQUEST;
- mbox_cmd.msg.s.len = 1;
- mbox_cmd.q_no = 0;
- mbox_cmd.recv_len = 0;
- mbox_cmd.recv_status = 0;
- mbox_cmd.fn = NULL;
- mbox_cmd.fn_arg = 0;
-
- lio_mbox_write(lio_dev, &mbox_cmd);
-}
-
-static void
-cn23xx_pfvf_hs_callback(struct lio_device *lio_dev,
- struct lio_mbox_cmd *cmd, void *arg)
-{
- uint32_t major = 0;
-
- PMD_INIT_FUNC_TRACE();
-
- rte_memcpy((uint8_t *)&lio_dev->pfvf_hsword, cmd->msg.s.params, 6);
- if (cmd->recv_len > 1) {
- struct lio_version *lio_ver = (struct lio_version *)cmd->data;
-
- major = lio_ver->major;
- major = major << 16;
- }
-
- rte_atomic64_set((rte_atomic64_t *)arg, major | 1);
-}
-
-int
-cn23xx_pfvf_handshake(struct lio_device *lio_dev)
-{
- struct lio_mbox_cmd mbox_cmd;
- struct lio_version *lio_ver = (struct lio_version *)&mbox_cmd.data[0];
- uint32_t q_no, count = 0;
- rte_atomic64_t status;
- uint32_t pfmajor;
- uint32_t vfmajor;
- uint32_t ret;
-
- PMD_INIT_FUNC_TRACE();
-
- /* Sending VF_ACTIVE indication to the PF driver */
- lio_dev_dbg(lio_dev, "requesting info from PF\n");
-
- mbox_cmd.msg.mbox_msg64 = 0;
- mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
- mbox_cmd.msg.s.resp_needed = 1;
- mbox_cmd.msg.s.cmd = LIO_VF_ACTIVE;
- mbox_cmd.msg.s.len = 2;
- mbox_cmd.data[0] = 0;
- lio_ver->major = LIO_BASE_MAJOR_VERSION;
- lio_ver->minor = LIO_BASE_MINOR_VERSION;
- lio_ver->micro = LIO_BASE_MICRO_VERSION;
- mbox_cmd.q_no = 0;
- mbox_cmd.recv_len = 0;
- mbox_cmd.recv_status = 0;
- mbox_cmd.fn = (lio_mbox_callback)cn23xx_pfvf_hs_callback;
- mbox_cmd.fn_arg = (void *)&status;
-
- if (lio_mbox_write(lio_dev, &mbox_cmd)) {
- lio_dev_err(lio_dev, "Write to mailbox failed\n");
- return -1;
- }
-
- rte_atomic64_set(&status, 0);
-
- do {
- rte_delay_ms(1);
- } while ((rte_atomic64_read(&status) == 0) && (count++ < 10000));
-
- ret = rte_atomic64_read(&status);
- if (ret == 0) {
- lio_dev_err(lio_dev, "cn23xx_pfvf_handshake timeout\n");
- return -1;
- }
-
- for (q_no = 0; q_no < lio_dev->num_iqs; q_no++)
- lio_dev->instr_queue[q_no]->txpciq.s.pkind =
- lio_dev->pfvf_hsword.pkind;
-
- vfmajor = LIO_BASE_MAJOR_VERSION;
- pfmajor = ret >> 16;
- if (pfmajor != vfmajor) {
- lio_dev_err(lio_dev,
- "VF LiquidIO driver (major version %d) is not compatible with LiquidIO PF driver (major version %d)\n",
- vfmajor, pfmajor);
- ret = -EPERM;
- } else {
- lio_dev_dbg(lio_dev,
- "VF LiquidIO driver (major version %d), LiquidIO PF driver (major version %d)\n",
- vfmajor, pfmajor);
- ret = 0;
- }
-
- lio_dev_dbg(lio_dev, "got data from PF pkind is %d\n",
- lio_dev->pfvf_hsword.pkind);
-
- return ret;
-}
-
-void
-cn23xx_vf_handle_mbox(struct lio_device *lio_dev)
-{
- uint64_t mbox_int_val;
-
- /* read and clear by writing 1 */
- mbox_int_val = rte_read64(lio_dev->mbox[0]->mbox_int_reg);
- rte_write64(mbox_int_val, lio_dev->mbox[0]->mbox_int_reg);
- if (lio_mbox_read(lio_dev->mbox[0]))
- lio_mbox_process_message(lio_dev->mbox[0]);
-}
-
-int
-cn23xx_vf_setup_device(struct lio_device *lio_dev)
-{
- uint64_t reg_val;
-
- PMD_INIT_FUNC_TRACE();
-
- /* INPUT_CONTROL[RPVF] gives the VF IOq count */
- reg_val = lio_read_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(0));
-
- lio_dev->pf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_PF_NUM_POS) &
- CN23XX_PKT_INPUT_CTL_PF_NUM_MASK;
- lio_dev->vf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_VF_NUM_POS) &
- CN23XX_PKT_INPUT_CTL_VF_NUM_MASK;
-
- reg_val = reg_val >> CN23XX_PKT_INPUT_CTL_RPVF_POS;
-
- lio_dev->sriov_info.rings_per_vf =
- reg_val & CN23XX_PKT_INPUT_CTL_RPVF_MASK;
-
- lio_dev->default_config = lio_get_conf(lio_dev);
- if (lio_dev->default_config == NULL)
- return -1;
-
- lio_dev->fn_list.setup_iq_regs = cn23xx_vf_setup_iq_regs;
- lio_dev->fn_list.setup_oq_regs = cn23xx_vf_setup_oq_regs;
- lio_dev->fn_list.setup_mbox = cn23xx_vf_setup_mbox;
- lio_dev->fn_list.free_mbox = cn23xx_vf_free_mbox;
-
- lio_dev->fn_list.setup_device_regs = cn23xx_vf_setup_device_regs;
-
- lio_dev->fn_list.enable_io_queues = cn23xx_vf_enable_io_queues;
- lio_dev->fn_list.disable_io_queues = cn23xx_vf_disable_io_queues;
-
- return 0;
-}
-
diff --git a/drivers/net/liquidio/base/lio_23xx_vf.h b/drivers/net/liquidio/base/lio_23xx_vf.h
deleted file mode 100644
index 8e5362db15..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_vf.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_23XX_VF_H_
-#define _LIO_23XX_VF_H_
-
-#include <stdio.h>
-
-#include "lio_struct.h"
-
-static const struct lio_config default_cn23xx_conf = {
- .card_type = LIO_23XX,
- .card_name = LIO_23XX_NAME,
- /** IQ attributes */
- .iq = {
- .max_iqs = CN23XX_CFG_IO_QUEUES,
- .pending_list_size =
- (CN23XX_MAX_IQ_DESCRIPTORS * CN23XX_CFG_IO_QUEUES),
- .instr_type = OCTEON_64BYTE_INSTR,
- },
-
- /** OQ attributes */
- .oq = {
- .max_oqs = CN23XX_CFG_IO_QUEUES,
- .info_ptr = OCTEON_OQ_INFOPTR_MODE,
- .refill_threshold = CN23XX_OQ_REFIL_THRESHOLD,
- },
-
- .num_nic_ports = CN23XX_DEFAULT_NUM_PORTS,
- .num_def_rx_descs = CN23XX_MAX_OQ_DESCRIPTORS,
- .num_def_tx_descs = CN23XX_MAX_IQ_DESCRIPTORS,
- .def_rx_buf_size = CN23XX_OQ_BUF_SIZE,
-};
-
-static inline const struct lio_config *
-lio_get_conf(struct lio_device *lio_dev)
-{
- const struct lio_config *default_lio_conf = NULL;
-
- /* check the LIO Device model & return the corresponding lio
- * configuration
- */
- default_lio_conf = &default_cn23xx_conf;
-
- if (default_lio_conf == NULL) {
- lio_dev_err(lio_dev, "Configuration verification failed\n");
- return NULL;
- }
-
- return default_lio_conf;
-}
-
-#define CN23XX_VF_BUSY_READING_REG_LOOP_COUNT 100000
-
-void cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev);
-
-int cn23xx_pfvf_handshake(struct lio_device *lio_dev);
-
-int cn23xx_vf_setup_device(struct lio_device *lio_dev);
-
-void cn23xx_vf_handle_mbox(struct lio_device *lio_dev);
-#endif /* _LIO_23XX_VF_H_ */
diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
deleted file mode 100644
index 5e119c1241..0000000000
--- a/drivers/net/liquidio/base/lio_hw_defs.h
+++ /dev/null
@@ -1,239 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_HW_DEFS_H_
-#define _LIO_HW_DEFS_H_
-
-#include <rte_io.h>
-
-#ifndef PCI_VENDOR_ID_CAVIUM
-#define PCI_VENDOR_ID_CAVIUM 0x177D
-#endif
-
-#define LIO_CN23XX_VF_VID 0x9712
-
-/* CN23xx subsystem device ids */
-#define PCI_SUBSYS_DEV_ID_CN2350_210 0x0004
-#define PCI_SUBSYS_DEV_ID_CN2360_210 0x0005
-#define PCI_SUBSYS_DEV_ID_CN2360_225 0x0006
-#define PCI_SUBSYS_DEV_ID_CN2350_225 0x0007
-#define PCI_SUBSYS_DEV_ID_CN2350_210SVPN3 0x0008
-#define PCI_SUBSYS_DEV_ID_CN2360_210SVPN3 0x0009
-#define PCI_SUBSYS_DEV_ID_CN2350_210SVPT 0x000a
-#define PCI_SUBSYS_DEV_ID_CN2360_210SVPT 0x000b
-
-/* --------------------------CONFIG VALUES------------------------ */
-
-/* CN23xx IQ configuration macros */
-#define CN23XX_MAX_RINGS_PER_PF 64
-#define CN23XX_MAX_RINGS_PER_VF 8
-
-#define CN23XX_MAX_INPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
-#define CN23XX_MAX_IQ_DESCRIPTORS 512
-#define CN23XX_MIN_IQ_DESCRIPTORS 128
-
-#define CN23XX_MAX_OUTPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
-#define CN23XX_MAX_OQ_DESCRIPTORS 512
-#define CN23XX_MIN_OQ_DESCRIPTORS 128
-#define CN23XX_OQ_BUF_SIZE 1536
-
-#define CN23XX_OQ_REFIL_THRESHOLD 16
-
-#define CN23XX_DEFAULT_NUM_PORTS 1
-
-#define CN23XX_CFG_IO_QUEUES CN23XX_MAX_RINGS_PER_PF
-
-/* common OCTEON configuration macros */
-#define OCTEON_64BYTE_INSTR 64
-#define OCTEON_OQ_INFOPTR_MODE 1
-
-/* Max IOQs per LIO Link */
-#define LIO_MAX_IOQS_PER_IF 64
-
-/* Wait time in milliseconds for FLR */
-#define LIO_PCI_FLR_WAIT 100
-
-enum lio_card_type {
- LIO_23XX /* 23xx */
-};
-
-#define LIO_23XX_NAME "23xx"
-
-#define LIO_DEV_RUNNING 0xc
-
-#define LIO_OQ_REFILL_THRESHOLD_CFG(cfg) \
- ((cfg)->default_config->oq.refill_threshold)
-#define LIO_NUM_DEF_TX_DESCS_CFG(cfg) \
- ((cfg)->default_config->num_def_tx_descs)
-
-#define LIO_IQ_INSTR_TYPE(cfg) ((cfg)->default_config->iq.instr_type)
-
-/* The following config values are fixed and should not be modified. */
-
-/* Maximum number of Instruction queues */
-#define LIO_MAX_INSTR_QUEUES(lio_dev) CN23XX_MAX_RINGS_PER_VF
-
-#define LIO_MAX_POSSIBLE_INSTR_QUEUES CN23XX_MAX_INPUT_QUEUES
-#define LIO_MAX_POSSIBLE_OUTPUT_QUEUES CN23XX_MAX_OUTPUT_QUEUES
-
-#define LIO_DEVICE_NAME_LEN 32
-#define LIO_BASE_MAJOR_VERSION 1
-#define LIO_BASE_MINOR_VERSION 5
-#define LIO_BASE_MICRO_VERSION 1
-
-#define LIO_FW_VERSION_LENGTH 32
-
-#define LIO_Q_RECONF_MIN_VERSION "1.7.0"
-#define LIO_VF_TRUST_MIN_VERSION "1.7.1"
-
-/** Tag types used by Octeon cores in its work. */
-enum octeon_tag_type {
- OCTEON_ORDERED_TAG = 0,
- OCTEON_ATOMIC_TAG = 1,
-};
-
-/* pre-defined host->NIC tag values */
-#define LIO_CONTROL (0x11111110)
-#define LIO_DATA(i) (0x11111111 + (i))
-
-/* used for NIC operations */
-#define LIO_OPCODE 1
-
-/* Subcodes are used by host driver/apps to identify the sub-operation
- * for the core. They only need to by unique for a given subsystem.
- */
-#define LIO_OPCODE_SUBCODE(op, sub) \
- ((((op) & 0x0f) << 8) | ((sub) & 0x7f))
-
-/** LIO_OPCODE subcodes */
-/* This subcode is sent by core PCI driver to indicate cores are ready. */
-#define LIO_OPCODE_NW_DATA 0x02 /* network packet data */
-#define LIO_OPCODE_CMD 0x03
-#define LIO_OPCODE_INFO 0x04
-#define LIO_OPCODE_PORT_STATS 0x05
-#define LIO_OPCODE_IF_CFG 0x09
-
-#define LIO_MIN_RX_BUF_SIZE 64
-#define LIO_MAX_RX_PKTLEN (64 * 1024)
-
-/* NIC Command types */
-#define LIO_CMD_CHANGE_MTU 0x1
-#define LIO_CMD_CHANGE_DEVFLAGS 0x3
-#define LIO_CMD_RX_CTL 0x4
-#define LIO_CMD_CLEAR_STATS 0x6
-#define LIO_CMD_SET_RSS 0xD
-#define LIO_CMD_TNL_RX_CSUM_CTL 0x10
-#define LIO_CMD_TNL_TX_CSUM_CTL 0x11
-#define LIO_CMD_ADD_VLAN_FILTER 0x17
-#define LIO_CMD_DEL_VLAN_FILTER 0x18
-#define LIO_CMD_VXLAN_PORT_CONFIG 0x19
-#define LIO_CMD_QUEUE_COUNT_CTL 0x1f
-
-#define LIO_CMD_VXLAN_PORT_ADD 0x0
-#define LIO_CMD_VXLAN_PORT_DEL 0x1
-#define LIO_CMD_RXCSUM_ENABLE 0x0
-#define LIO_CMD_TXCSUM_ENABLE 0x0
-
-/* RX(packets coming from wire) Checksum verification flags */
-/* TCP/UDP csum */
-#define LIO_L4_CSUM_VERIFIED 0x1
-#define LIO_IP_CSUM_VERIFIED 0x2
-
-/* RSS */
-#define LIO_RSS_PARAM_DISABLE_RSS 0x10
-#define LIO_RSS_PARAM_HASH_KEY_UNCHANGED 0x08
-#define LIO_RSS_PARAM_ITABLE_UNCHANGED 0x04
-#define LIO_RSS_PARAM_HASH_INFO_UNCHANGED 0x02
-
-#define LIO_RSS_HASH_IPV4 0x100
-#define LIO_RSS_HASH_TCP_IPV4 0x200
-#define LIO_RSS_HASH_IPV6 0x400
-#define LIO_RSS_HASH_TCP_IPV6 0x1000
-#define LIO_RSS_HASH_IPV6_EX 0x800
-#define LIO_RSS_HASH_TCP_IPV6_EX 0x2000
-
-#define LIO_RSS_OFFLOAD_ALL ( \
- LIO_RSS_HASH_IPV4 | \
- LIO_RSS_HASH_TCP_IPV4 | \
- LIO_RSS_HASH_IPV6 | \
- LIO_RSS_HASH_TCP_IPV6 | \
- LIO_RSS_HASH_IPV6_EX | \
- LIO_RSS_HASH_TCP_IPV6_EX)
-
-#define LIO_RSS_MAX_TABLE_SZ 128
-#define LIO_RSS_MAX_KEY_SZ 40
-#define LIO_RSS_PARAM_SIZE 16
-
-/* Interface flags communicated between host driver and core app. */
-enum lio_ifflags {
- LIO_IFFLAG_PROMISC = 0x01,
- LIO_IFFLAG_ALLMULTI = 0x02,
- LIO_IFFLAG_UNICAST = 0x10
-};
-
-/* Routines for reading and writing CSRs */
-#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
-#define lio_write_csr(lio_dev, reg_off, value) \
- do { \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- typeof(value) _value = value; \
- PMD_REGS_LOG(_dev, \
- "Write32: Reg: 0x%08lx Val: 0x%08lx\n", \
- (unsigned long)_reg_off, \
- (unsigned long)_value); \
- rte_write32(_value, _dev->hw_addr + _reg_off); \
- } while (0)
-
-#define lio_write_csr64(lio_dev, reg_off, val64) \
- do { \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- typeof(val64) _val64 = val64; \
- PMD_REGS_LOG( \
- _dev, \
- "Write64: Reg: 0x%08lx Val: 0x%016llx\n", \
- (unsigned long)_reg_off, \
- (unsigned long long)_val64); \
- rte_write64(_val64, _dev->hw_addr + _reg_off); \
- } while (0)
-
-#define lio_read_csr(lio_dev, reg_off) \
- ({ \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- uint32_t val = rte_read32(_dev->hw_addr + _reg_off); \
- PMD_REGS_LOG(_dev, \
- "Read32: Reg: 0x%08lx Val: 0x%08lx\n", \
- (unsigned long)_reg_off, \
- (unsigned long)val); \
- val; \
- })
-
-#define lio_read_csr64(lio_dev, reg_off) \
- ({ \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- uint64_t val64 = rte_read64(_dev->hw_addr + _reg_off); \
- PMD_REGS_LOG( \
- _dev, \
- "Read64: Reg: 0x%08lx Val: 0x%016llx\n", \
- (unsigned long)_reg_off, \
- (unsigned long long)val64); \
- val64; \
- })
-#else
-#define lio_write_csr(lio_dev, reg_off, value) \
- rte_write32(value, (lio_dev)->hw_addr + (reg_off))
-
-#define lio_write_csr64(lio_dev, reg_off, val64) \
- rte_write64(val64, (lio_dev)->hw_addr + (reg_off))
-
-#define lio_read_csr(lio_dev, reg_off) \
- rte_read32((lio_dev)->hw_addr + (reg_off))
-
-#define lio_read_csr64(lio_dev, reg_off) \
- rte_read64((lio_dev)->hw_addr + (reg_off))
-#endif
-#endif /* _LIO_HW_DEFS_H_ */
diff --git a/drivers/net/liquidio/base/lio_mbox.c b/drivers/net/liquidio/base/lio_mbox.c
deleted file mode 100644
index 2ac2b1b334..0000000000
--- a/drivers/net/liquidio/base/lio_mbox.c
+++ /dev/null
@@ -1,246 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-
-#include "lio_logs.h"
-#include "lio_struct.h"
-#include "lio_mbox.h"
-
-/**
- * lio_mbox_read:
- * @mbox: Pointer mailbox
- *
- * Reads the 8-bytes of data from the mbox register
- * Writes back the acknowledgment indicating completion of read
- */
-int
-lio_mbox_read(struct lio_mbox *mbox)
-{
- union lio_mbox_message msg;
- int ret = 0;
-
- msg.mbox_msg64 = rte_read64(mbox->mbox_read_reg);
-
- if ((msg.mbox_msg64 == LIO_PFVFACK) || (msg.mbox_msg64 == LIO_PFVFSIG))
- return 0;
-
- if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
- mbox->mbox_req.data[mbox->mbox_req.recv_len - 1] =
- msg.mbox_msg64;
- mbox->mbox_req.recv_len++;
- } else {
- if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
- mbox->mbox_resp.data[mbox->mbox_resp.recv_len - 1] =
- msg.mbox_msg64;
- mbox->mbox_resp.recv_len++;
- } else {
- if ((mbox->state & LIO_MBOX_STATE_IDLE) &&
- (msg.s.type == LIO_MBOX_REQUEST)) {
- mbox->state &= ~LIO_MBOX_STATE_IDLE;
- mbox->state |= LIO_MBOX_STATE_REQ_RECEIVING;
- mbox->mbox_req.msg.mbox_msg64 = msg.mbox_msg64;
- mbox->mbox_req.q_no = mbox->q_no;
- mbox->mbox_req.recv_len = 1;
- } else {
- if ((mbox->state &
- LIO_MBOX_STATE_RES_PENDING) &&
- (msg.s.type == LIO_MBOX_RESPONSE)) {
- mbox->state &=
- ~LIO_MBOX_STATE_RES_PENDING;
- mbox->state |=
- LIO_MBOX_STATE_RES_RECEIVING;
- mbox->mbox_resp.msg.mbox_msg64 =
- msg.mbox_msg64;
- mbox->mbox_resp.q_no = mbox->q_no;
- mbox->mbox_resp.recv_len = 1;
- } else {
- rte_write64(LIO_PFVFERR,
- mbox->mbox_read_reg);
- mbox->state |= LIO_MBOX_STATE_ERROR;
- return -1;
- }
- }
- }
- }
-
- if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
- if (mbox->mbox_req.recv_len < msg.s.len) {
- ret = 0;
- } else {
- mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVING;
- mbox->state |= LIO_MBOX_STATE_REQ_RECEIVED;
- ret = 1;
- }
- } else {
- if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
- if (mbox->mbox_resp.recv_len < msg.s.len) {
- ret = 0;
- } else {
- mbox->state &= ~LIO_MBOX_STATE_RES_RECEIVING;
- mbox->state |= LIO_MBOX_STATE_RES_RECEIVED;
- ret = 1;
- }
- } else {
- RTE_ASSERT(0);
- }
- }
-
- rte_write64(LIO_PFVFACK, mbox->mbox_read_reg);
-
- return ret;
-}
-
-/**
- * lio_mbox_write:
- * @lio_dev: Pointer lio device
- * @mbox_cmd: Cmd to send to mailbox.
- *
- * Populates the queue specific mbox structure
- * with cmd information.
- * Write the cmd to mbox register
- */
-int
-lio_mbox_write(struct lio_device *lio_dev,
- struct lio_mbox_cmd *mbox_cmd)
-{
- struct lio_mbox *mbox = lio_dev->mbox[mbox_cmd->q_no];
- uint32_t count, i, ret = LIO_MBOX_STATUS_SUCCESS;
-
- if ((mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) &&
- !(mbox->state & LIO_MBOX_STATE_REQ_RECEIVED))
- return LIO_MBOX_STATUS_FAILED;
-
- if ((mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) &&
- !(mbox->state & LIO_MBOX_STATE_IDLE))
- return LIO_MBOX_STATUS_BUSY;
-
- if (mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) {
- rte_memcpy(&mbox->mbox_resp, mbox_cmd,
- sizeof(struct lio_mbox_cmd));
- mbox->state = LIO_MBOX_STATE_RES_PENDING;
- }
-
- count = 0;
-
- while (rte_read64(mbox->mbox_write_reg) != LIO_PFVFSIG) {
- rte_delay_ms(1);
- if (count++ == 1000) {
- ret = LIO_MBOX_STATUS_FAILED;
- break;
- }
- }
-
- if (ret == LIO_MBOX_STATUS_SUCCESS) {
- rte_write64(mbox_cmd->msg.mbox_msg64, mbox->mbox_write_reg);
- for (i = 0; i < (uint32_t)(mbox_cmd->msg.s.len - 1); i++) {
- count = 0;
- while (rte_read64(mbox->mbox_write_reg) !=
- LIO_PFVFACK) {
- rte_delay_ms(1);
- if (count++ == 1000) {
- ret = LIO_MBOX_STATUS_FAILED;
- break;
- }
- }
- rte_write64(mbox_cmd->data[i], mbox->mbox_write_reg);
- }
- }
-
- if (mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) {
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- } else {
- if ((!mbox_cmd->msg.s.resp_needed) ||
- (ret == LIO_MBOX_STATUS_FAILED)) {
- mbox->state &= ~LIO_MBOX_STATE_RES_PENDING;
- if (!(mbox->state & (LIO_MBOX_STATE_REQ_RECEIVING |
- LIO_MBOX_STATE_REQ_RECEIVED)))
- mbox->state = LIO_MBOX_STATE_IDLE;
- }
- }
-
- return ret;
-}
-
-/**
- * lio_mbox_process_cmd:
- * @mbox: Pointer mailbox
- * @mbox_cmd: Pointer to command received
- *
- * Process the cmd received in mbox
- */
-static int
-lio_mbox_process_cmd(struct lio_mbox *mbox,
- struct lio_mbox_cmd *mbox_cmd)
-{
- struct lio_device *lio_dev = mbox->lio_dev;
-
- if (mbox_cmd->msg.s.cmd == LIO_CORES_CRASHED)
- lio_dev_err(lio_dev, "Octeon core(s) crashed or got stuck!\n");
-
- return 0;
-}
-
-/**
- * Process the received mbox message.
- */
-int
-lio_mbox_process_message(struct lio_mbox *mbox)
-{
- struct lio_mbox_cmd mbox_cmd;
-
- if (mbox->state & LIO_MBOX_STATE_ERROR) {
- if (mbox->state & (LIO_MBOX_STATE_RES_PENDING |
- LIO_MBOX_STATE_RES_RECEIVING)) {
- rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
- sizeof(struct lio_mbox_cmd));
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- mbox_cmd.recv_status = 1;
- if (mbox_cmd.fn)
- mbox_cmd.fn(mbox->lio_dev, &mbox_cmd,
- mbox_cmd.fn_arg);
-
- return 0;
- }
-
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-
- return 0;
- }
-
- if (mbox->state & LIO_MBOX_STATE_RES_RECEIVED) {
- rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
- sizeof(struct lio_mbox_cmd));
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- mbox_cmd.recv_status = 0;
- if (mbox_cmd.fn)
- mbox_cmd.fn(mbox->lio_dev, &mbox_cmd, mbox_cmd.fn_arg);
-
- return 0;
- }
-
- if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVED) {
- rte_memcpy(&mbox_cmd, &mbox->mbox_req,
- sizeof(struct lio_mbox_cmd));
- if (!mbox_cmd.msg.s.resp_needed) {
- mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVED;
- if (!(mbox->state & LIO_MBOX_STATE_RES_PENDING))
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- }
-
- lio_mbox_process_cmd(mbox, &mbox_cmd);
-
- return 0;
- }
-
- RTE_ASSERT(0);
-
- return 0;
-}
diff --git a/drivers/net/liquidio/base/lio_mbox.h b/drivers/net/liquidio/base/lio_mbox.h
deleted file mode 100644
index 457917e91f..0000000000
--- a/drivers/net/liquidio/base/lio_mbox.h
+++ /dev/null
@@ -1,102 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_MBOX_H_
-#define _LIO_MBOX_H_
-
-#include <stdint.h>
-
-#include <rte_spinlock.h>
-
-/* Macros for Mail Box Communication */
-
-#define LIO_MBOX_DATA_MAX 32
-
-#define LIO_VF_ACTIVE 0x1
-#define LIO_VF_FLR_REQUEST 0x2
-#define LIO_CORES_CRASHED 0x3
-
-/* Macro for Read acknowledgment */
-#define LIO_PFVFACK 0xffffffffffffffff
-#define LIO_PFVFSIG 0x1122334455667788
-#define LIO_PFVFERR 0xDEADDEADDEADDEAD
-
-enum lio_mbox_cmd_status {
- LIO_MBOX_STATUS_SUCCESS = 0,
- LIO_MBOX_STATUS_FAILED = 1,
- LIO_MBOX_STATUS_BUSY = 2
-};
-
-enum lio_mbox_message_type {
- LIO_MBOX_REQUEST = 0,
- LIO_MBOX_RESPONSE = 1
-};
-
-union lio_mbox_message {
- uint64_t mbox_msg64;
- struct {
- uint16_t type : 1;
- uint16_t resp_needed : 1;
- uint16_t cmd : 6;
- uint16_t len : 8;
- uint8_t params[6];
- } s;
-};
-
-typedef void (*lio_mbox_callback)(void *, void *, void *);
-
-struct lio_mbox_cmd {
- union lio_mbox_message msg;
- uint64_t data[LIO_MBOX_DATA_MAX];
- uint32_t q_no;
- uint32_t recv_len;
- uint32_t recv_status;
- lio_mbox_callback fn;
- void *fn_arg;
-};
-
-enum lio_mbox_state {
- LIO_MBOX_STATE_IDLE = 1,
- LIO_MBOX_STATE_REQ_RECEIVING = 2,
- LIO_MBOX_STATE_REQ_RECEIVED = 4,
- LIO_MBOX_STATE_RES_PENDING = 8,
- LIO_MBOX_STATE_RES_RECEIVING = 16,
- LIO_MBOX_STATE_RES_RECEIVED = 16,
- LIO_MBOX_STATE_ERROR = 32
-};
-
-struct lio_mbox {
- /* A spinlock to protect access to this q_mbox. */
- rte_spinlock_t lock;
-
- struct lio_device *lio_dev;
-
- uint32_t q_no;
-
- enum lio_mbox_state state;
-
- /* SLI_MAC_PF_MBOX_INT for PF, SLI_PKT_MBOX_INT for VF. */
- void *mbox_int_reg;
-
- /* SLI_PKT_PF_VF_MBOX_SIG(0) for PF,
- * SLI_PKT_PF_VF_MBOX_SIG(1) for VF.
- */
- void *mbox_write_reg;
-
- /* SLI_PKT_PF_VF_MBOX_SIG(1) for PF,
- * SLI_PKT_PF_VF_MBOX_SIG(0) for VF.
- */
- void *mbox_read_reg;
-
- struct lio_mbox_cmd mbox_req;
-
- struct lio_mbox_cmd mbox_resp;
-
-};
-
-int lio_mbox_read(struct lio_mbox *mbox);
-int lio_mbox_write(struct lio_device *lio_dev,
- struct lio_mbox_cmd *mbox_cmd);
-int lio_mbox_process_message(struct lio_mbox *mbox);
-#endif /* _LIO_MBOX_H_ */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
deleted file mode 100644
index ebcfbb1a5c..0000000000
--- a/drivers/net/liquidio/lio_ethdev.c
+++ /dev/null
@@ -1,2147 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <rte_string_fns.h>
-#include <ethdev_driver.h>
-#include <ethdev_pci.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-#include <rte_alarm.h>
-#include <rte_ether.h>
-
-#include "lio_logs.h"
-#include "lio_23xx_vf.h"
-#include "lio_ethdev.h"
-#include "lio_rxtx.h"
-
-/* Default RSS key in use */
-static uint8_t lio_rss_key[40] = {
- 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
- 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
- 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
- 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
- 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
-};
-
-static const struct rte_eth_desc_lim lio_rx_desc_lim = {
- .nb_max = CN23XX_MAX_OQ_DESCRIPTORS,
- .nb_min = CN23XX_MIN_OQ_DESCRIPTORS,
- .nb_align = 1,
-};
-
-static const struct rte_eth_desc_lim lio_tx_desc_lim = {
- .nb_max = CN23XX_MAX_IQ_DESCRIPTORS,
- .nb_min = CN23XX_MIN_IQ_DESCRIPTORS,
- .nb_align = 1,
-};
-
-/* Wait for control command to reach nic. */
-static uint16_t
-lio_wait_for_ctrl_cmd(struct lio_device *lio_dev,
- struct lio_dev_ctrl_cmd *ctrl_cmd)
-{
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
-
- while ((ctrl_cmd->cond == 0) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
- rte_delay_ms(1);
- }
-
- return !timeout;
-}
-
-/**
- * \brief Send Rx control command
- * @param eth_dev Pointer to the structure rte_eth_dev
- * @param start_stop whether to start or stop
- */
-static int
-lio_send_rx_ctrl_cmd(struct rte_eth_dev *eth_dev, int start_stop)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_RX_CTL;
- ctrl_pkt.ncmd.s.param1 = start_stop;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send RX Control message\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "RX Control command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-/* store statistics names and its offset in stats structure */
-struct rte_lio_xstats_name_off {
- char name[RTE_ETH_XSTATS_NAME_SIZE];
- unsigned int offset;
-};
-
-static const struct rte_lio_xstats_name_off rte_lio_stats_strings[] = {
- {"rx_pkts", offsetof(struct octeon_rx_stats, total_rcvd)},
- {"rx_bytes", offsetof(struct octeon_rx_stats, bytes_rcvd)},
- {"rx_broadcast_pkts", offsetof(struct octeon_rx_stats, total_bcst)},
- {"rx_multicast_pkts", offsetof(struct octeon_rx_stats, total_mcst)},
- {"rx_flow_ctrl_pkts", offsetof(struct octeon_rx_stats, ctl_rcvd)},
- {"rx_fifo_err", offsetof(struct octeon_rx_stats, fifo_err)},
- {"rx_dmac_drop", offsetof(struct octeon_rx_stats, dmac_drop)},
- {"rx_fcs_err", offsetof(struct octeon_rx_stats, fcs_err)},
- {"rx_jabber_err", offsetof(struct octeon_rx_stats, jabber_err)},
- {"rx_l2_err", offsetof(struct octeon_rx_stats, l2_err)},
- {"rx_vxlan_pkts", offsetof(struct octeon_rx_stats, fw_rx_vxlan)},
- {"rx_vxlan_err", offsetof(struct octeon_rx_stats, fw_rx_vxlan_err)},
- {"rx_lro_pkts", offsetof(struct octeon_rx_stats, fw_lro_pkts)},
- {"tx_pkts", (offsetof(struct octeon_tx_stats, total_pkts_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_bytes", (offsetof(struct octeon_tx_stats, total_bytes_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_broadcast_pkts",
- (offsetof(struct octeon_tx_stats, bcast_pkts_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_multicast_pkts",
- (offsetof(struct octeon_tx_stats, mcast_pkts_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_flow_ctrl_pkts", (offsetof(struct octeon_tx_stats, ctl_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_fifo_err", (offsetof(struct octeon_tx_stats, fifo_err)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_total_collisions", (offsetof(struct octeon_tx_stats,
- total_collisions)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_tso", (offsetof(struct octeon_tx_stats, fw_tso)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_vxlan_pkts", (offsetof(struct octeon_tx_stats, fw_tx_vxlan)) +
- sizeof(struct octeon_rx_stats)},
-};
-
-#define LIO_NB_XSTATS RTE_DIM(rte_lio_stats_strings)
-
-/* Get hw stats of the port */
-static int
-lio_dev_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
- unsigned int n)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- struct octeon_link_stats *hw_stats;
- struct lio_link_stats_resp *resp;
- struct lio_soft_command *sc;
- uint32_t resp_size;
- unsigned int i;
- int retval;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- if (n < LIO_NB_XSTATS)
- return LIO_NB_XSTATS;
-
- resp_size = sizeof(struct lio_link_stats_resp);
- sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
- if (sc == NULL)
- return -ENOMEM;
-
- resp = (struct lio_link_stats_resp *)sc->virtrptr;
- lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
- LIO_OPCODE_PORT_STATS, 0, 0, 0);
-
- /* Setting wait time in seconds */
- sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
- retval = lio_send_soft_command(lio_dev, sc);
- if (retval == LIO_IQ_SEND_FAILED) {
- lio_dev_err(lio_dev, "failed to get port stats from firmware. status: %x\n",
- retval);
- goto get_stats_fail;
- }
-
- while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
- lio_process_ordered_list(lio_dev);
- rte_delay_ms(1);
- }
-
- retval = resp->status;
- if (retval) {
- lio_dev_err(lio_dev, "failed to get port stats from firmware\n");
- goto get_stats_fail;
- }
-
- lio_swap_8B_data((uint64_t *)(&resp->link_stats),
- sizeof(struct octeon_link_stats) >> 3);
-
- hw_stats = &resp->link_stats;
-
- for (i = 0; i < LIO_NB_XSTATS; i++) {
- xstats[i].id = i;
- xstats[i].value =
- *(uint64_t *)(((char *)hw_stats) +
- rte_lio_stats_strings[i].offset);
- }
-
- lio_free_soft_command(sc);
-
- return LIO_NB_XSTATS;
-
-get_stats_fail:
- lio_free_soft_command(sc);
-
- return -1;
-}
-
-static int
-lio_dev_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned limit __rte_unused)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- unsigned int i;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- if (xstats_names == NULL)
- return LIO_NB_XSTATS;
-
- /* Note: limit checked in rte_eth_xstats_names() */
-
- for (i = 0; i < LIO_NB_XSTATS; i++) {
- snprintf(xstats_names[i].name, sizeof(xstats_names[i].name),
- "%s", rte_lio_stats_strings[i].name);
- }
-
- return LIO_NB_XSTATS;
-}
-
-/* Reset hw stats for the port */
-static int
-lio_dev_xstats_reset(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
- int ret;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_CLEAR_STATS;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- ret = lio_send_ctrl_pkt(lio_dev, &ctrl_pkt);
- if (ret != 0) {
- lio_dev_err(lio_dev, "Failed to send clear stats command\n");
- return ret;
- }
-
- ret = lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd);
- if (ret != 0) {
- lio_dev_err(lio_dev, "Clear stats command timed out\n");
- return ret;
- }
-
- /* clear stored per queue stats */
- if (*eth_dev->dev_ops->stats_reset == NULL)
- return 0;
- return (*eth_dev->dev_ops->stats_reset)(eth_dev);
-}
-
-/* Retrieve the device statistics (# packets in/out, # bytes in/out, etc */
-static int
-lio_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_droq_stats *oq_stats;
- struct lio_iq_stats *iq_stats;
- struct lio_instr_queue *txq;
- struct lio_droq *droq;
- int i, iq_no, oq_no;
- uint64_t bytes = 0;
- uint64_t pkts = 0;
- uint64_t drop = 0;
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- iq_no = lio_dev->linfo.txpciq[i].s.q_no;
- txq = lio_dev->instr_queue[iq_no];
- if (txq != NULL) {
- iq_stats = &txq->stats;
- pkts += iq_stats->tx_done;
- drop += iq_stats->tx_dropped;
- bytes += iq_stats->tx_tot_bytes;
- }
- }
-
- stats->opackets = pkts;
- stats->obytes = bytes;
- stats->oerrors = drop;
-
- pkts = 0;
- drop = 0;
- bytes = 0;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
- droq = lio_dev->droq[oq_no];
- if (droq != NULL) {
- oq_stats = &droq->stats;
- pkts += oq_stats->rx_pkts_received;
- drop += (oq_stats->rx_dropped +
- oq_stats->dropped_toomany +
- oq_stats->dropped_nomem);
- bytes += oq_stats->rx_bytes_received;
- }
- }
- stats->ibytes = bytes;
- stats->ipackets = pkts;
- stats->ierrors = drop;
-
- return 0;
-}
-
-static int
-lio_dev_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_droq_stats *oq_stats;
- struct lio_iq_stats *iq_stats;
- struct lio_instr_queue *txq;
- struct lio_droq *droq;
- int i, iq_no, oq_no;
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- iq_no = lio_dev->linfo.txpciq[i].s.q_no;
- txq = lio_dev->instr_queue[iq_no];
- if (txq != NULL) {
- iq_stats = &txq->stats;
- memset(iq_stats, 0, sizeof(struct lio_iq_stats));
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
- droq = lio_dev->droq[oq_no];
- if (droq != NULL) {
- oq_stats = &droq->stats;
- memset(oq_stats, 0, sizeof(struct lio_droq_stats));
- }
- }
-
- return 0;
-}
-
-static int
-lio_dev_info_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_info *devinfo)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- switch (pci_dev->id.subsystem_device_id) {
- /* CN23xx 10G cards */
- case PCI_SUBSYS_DEV_ID_CN2350_210:
- case PCI_SUBSYS_DEV_ID_CN2360_210:
- case PCI_SUBSYS_DEV_ID_CN2350_210SVPN3:
- case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
- case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
- case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
- break;
- /* CN23xx 25G cards */
- case PCI_SUBSYS_DEV_ID_CN2350_225:
- case PCI_SUBSYS_DEV_ID_CN2360_225:
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
- break;
- default:
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
- lio_dev_err(lio_dev,
- "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
- return -EINVAL;
- }
-
- devinfo->max_rx_queues = lio_dev->max_rx_queues;
- devinfo->max_tx_queues = lio_dev->max_tx_queues;
-
- devinfo->min_rx_bufsize = LIO_MIN_RX_BUF_SIZE;
- devinfo->max_rx_pktlen = LIO_MAX_RX_PKTLEN;
-
- devinfo->max_mac_addrs = 1;
-
- devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
- RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_RSS_HASH);
- devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
- RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
-
- devinfo->rx_desc_lim = lio_rx_desc_lim;
- devinfo->tx_desc_lim = lio_tx_desc_lim;
-
- devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
- devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
- RTE_ETH_RSS_NONFRAG_IPV4_TCP |
- RTE_ETH_RSS_IPV6 |
- RTE_ETH_RSS_NONFRAG_IPV6_TCP |
- RTE_ETH_RSS_IPV6_EX |
- RTE_ETH_RSS_IPV6_TCP_EX);
- return 0;
-}
-
-static int
-lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- PMD_INIT_FUNC_TRACE();
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't set MTU\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_MTU;
- ctrl_pkt.ncmd.s.param1 = mtu;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send command to change MTU\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Command to change MTU timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct lio_rss_set *rss_param;
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
- int i, j, index;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't update reta\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
- lio_dev_err(lio_dev,
- "The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)\n",
- reta_size, LIO_RSS_MAX_TABLE_SZ);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
- ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- rss_param->param.flags = 0xF;
- rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
- rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
-
- for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
- if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
- index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
- rss_state->itable[index] = reta_conf[i].reta[j];
- }
- }
- }
-
- rss_state->itable_size = LIO_RSS_MAX_TABLE_SZ;
- memcpy(rss_param->itable, rss_state->itable, rss_state->itable_size);
-
- lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to set rss hash\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Set rss hash timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- int i, num;
-
- if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
- lio_dev_err(lio_dev,
- "The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)\n",
- reta_size, LIO_RSS_MAX_TABLE_SZ);
- return -EINVAL;
- }
-
- num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
-
- for (i = 0; i < num; i++) {
- memcpy(reta_conf->reta,
- &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
- RTE_ETH_RETA_GROUP_SIZE);
- reta_conf++;
- }
-
- return 0;
-}
-
-static int
-lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- uint8_t *hash_key = NULL;
- uint64_t rss_hf = 0;
-
- if (rss_state->hash_disable) {
- lio_dev_info(lio_dev, "RSS disabled in nic\n");
- rss_conf->rss_hf = 0;
- return 0;
- }
-
- /* Get key value */
- hash_key = rss_conf->rss_key;
- if (hash_key != NULL)
- memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
-
- if (rss_state->ip)
- rss_hf |= RTE_ETH_RSS_IPV4;
- if (rss_state->tcp_hash)
- rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
- if (rss_state->ipv6)
- rss_hf |= RTE_ETH_RSS_IPV6;
- if (rss_state->ipv6_tcp_hash)
- rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
- if (rss_state->ipv6_ex)
- rss_hf |= RTE_ETH_RSS_IPV6_EX;
- if (rss_state->ipv6_tcp_ex_hash)
- rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
-
- rss_conf->rss_hf = rss_hf;
-
- return 0;
-}
-
-static int
-lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct lio_rss_set *rss_param;
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't update hash\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
- ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- rss_param->param.flags = 0xF;
-
- if (rss_conf->rss_key) {
- rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_KEY_UNCHANGED;
- rss_state->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- rss_param->param.hashkeysize = LIO_RSS_MAX_KEY_SZ;
- memcpy(rss_state->hash_key, rss_conf->rss_key,
- rss_state->hash_key_size);
- memcpy(rss_param->key, rss_state->hash_key,
- rss_state->hash_key_size);
- }
-
- if ((rss_conf->rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
- /* Can't disable rss through hash flags,
- * if it is enabled by default during init
- */
- if (!rss_state->hash_disable)
- return -EINVAL;
-
- /* This is for --disable-rss during testpmd launch */
- rss_param->param.flags |= LIO_RSS_PARAM_DISABLE_RSS;
- } else {
- uint32_t hashinfo = 0;
-
- /* Can't enable rss if disabled by default during init */
- if (rss_state->hash_disable)
- return -EINVAL;
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
- hashinfo |= LIO_RSS_HASH_IPV4;
- rss_state->ip = 1;
- } else {
- rss_state->ip = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
- hashinfo |= LIO_RSS_HASH_TCP_IPV4;
- rss_state->tcp_hash = 1;
- } else {
- rss_state->tcp_hash = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
- hashinfo |= LIO_RSS_HASH_IPV6;
- rss_state->ipv6 = 1;
- } else {
- rss_state->ipv6 = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
- hashinfo |= LIO_RSS_HASH_TCP_IPV6;
- rss_state->ipv6_tcp_hash = 1;
- } else {
- rss_state->ipv6_tcp_hash = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
- hashinfo |= LIO_RSS_HASH_IPV6_EX;
- rss_state->ipv6_ex = 1;
- } else {
- rss_state->ipv6_ex = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
- hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
- rss_state->ipv6_tcp_ex_hash = 1;
- } else {
- rss_state->ipv6_tcp_ex_hash = 0;
- }
-
- rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_INFO_UNCHANGED;
- rss_param->param.hashinfo = hashinfo;
- }
-
- lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to set rss hash\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Set rss hash timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-/**
- * Add vxlan dest udp port for an interface.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- * @param udp_tnl
- * udp tunnel conf
- *
- * @return
- * On success return 0
- * On failure return -1
- */
-static int
-lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *udp_tnl)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (udp_tnl == NULL)
- return -EINVAL;
-
- if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
- lio_dev_err(lio_dev, "Unsupported tunnel type\n");
- return -1;
- }
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
- ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
- ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_ADD;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_ADD command\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "VXLAN_PORT_ADD command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-/**
- * Remove vxlan dest udp port for an interface.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- * @param udp_tnl
- * udp tunnel conf
- *
- * @return
- * On success return 0
- * On failure return -1
- */
-static int
-lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *udp_tnl)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (udp_tnl == NULL)
- return -EINVAL;
-
- if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
- lio_dev_err(lio_dev, "Unsupported tunnel type\n");
- return -1;
- }
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
- ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
- ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_DEL;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_DEL command\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "VXLAN_PORT_DEL command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, int on)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (lio_dev->linfo.vlan_is_admin_assigned)
- return -EPERM;
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = on ?
- LIO_CMD_ADD_VLAN_FILTER : LIO_CMD_DEL_VLAN_FILTER;
- ctrl_pkt.ncmd.s.param1 = vlan_id;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to %s VLAN port\n",
- on ? "add" : "remove");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Command to %s VLAN port timed out\n",
- on ? "add" : "remove");
- return -1;
- }
-
- return 0;
-}
-
-static uint64_t
-lio_hweight64(uint64_t w)
-{
- uint64_t res = w - ((w >> 1) & 0x5555555555555555ul);
-
- res =
- (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
- res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
- res = res + (res >> 8);
- res = res + (res >> 16);
-
- return (res + (res >> 32)) & 0x00000000000000FFul;
-}
-
-static int
-lio_dev_link_update(struct rte_eth_dev *eth_dev,
- int wait_to_complete __rte_unused)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct rte_eth_link link;
-
- /* Initialize */
- memset(&link, 0, sizeof(link));
- link.link_status = RTE_ETH_LINK_DOWN;
- link.link_speed = RTE_ETH_SPEED_NUM_NONE;
- link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = RTE_ETH_LINK_AUTONEG;
-
- /* Return what we found */
- if (lio_dev->linfo.link.s.link_up == 0) {
- /* Interface is down */
- return rte_eth_linkstatus_set(eth_dev, &link);
- }
-
- link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
- link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
- switch (lio_dev->linfo.link.s.speed) {
- case LIO_LINK_SPEED_10000:
- link.link_speed = RTE_ETH_SPEED_NUM_10G;
- break;
- case LIO_LINK_SPEED_25000:
- link.link_speed = RTE_ETH_SPEED_NUM_25G;
- break;
- default:
- link.link_speed = RTE_ETH_SPEED_NUM_NONE;
- link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
- }
-
- return rte_eth_linkstatus_set(eth_dev, &link);
-}
-
-/**
- * \brief Net device enable, disable allmulticast
- * @param eth_dev Pointer to the structure rte_eth_dev
- *
- * @return
- * On success return 0
- * On failure return negative errno
- */
-static int
-lio_change_dev_flag(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- /* Create a ctrl pkt command to be sent to core app. */
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_DEVFLAGS;
- ctrl_pkt.ncmd.s.param1 = lio_dev->ifflags;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send change flag message\n");
- return -EAGAIN;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Change dev flag command timed out\n");
- return -ETIMEDOUT;
- }
-
- return 0;
-}
-
-static int
-lio_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
- lio_dev_err(lio_dev, "Require firmware version >= %s\n",
- LIO_VF_TRUST_MIN_VERSION);
- return -EAGAIN;
- }
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't enable promiscuous\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags |= LIO_IFFLAG_PROMISC;
- return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
- lio_dev_err(lio_dev, "Require firmware version >= %s\n",
- LIO_VF_TRUST_MIN_VERSION);
- return -EAGAIN;
- }
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't disable promiscuous\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags &= ~LIO_IFFLAG_PROMISC;
- return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_allmulticast_enable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't enable multicast\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags |= LIO_IFFLAG_ALLMULTI;
- return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_allmulticast_disable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't disable multicast\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags &= ~LIO_IFFLAG_ALLMULTI;
- return lio_change_dev_flag(eth_dev);
-}
-
-static void
-lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct rte_eth_rss_reta_entry64 reta_conf[8];
- struct rte_eth_rss_conf rss_conf;
- uint16_t i;
-
- /* Configure the RSS key and the RSS protocols used to compute
- * the RSS hash of input packets.
- */
- rss_conf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
- if ((rss_conf.rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
- rss_state->hash_disable = 1;
- lio_dev_rss_hash_update(eth_dev, &rss_conf);
- return;
- }
-
- if (rss_conf.rss_key == NULL)
- rss_conf.rss_key = lio_rss_key; /* Default hash key */
-
- lio_dev_rss_hash_update(eth_dev, &rss_conf);
-
- memset(reta_conf, 0, sizeof(reta_conf));
- for (i = 0; i < LIO_RSS_MAX_TABLE_SZ; i++) {
- uint8_t q_idx, conf_idx, reta_idx;
-
- q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
- i % eth_dev->data->nb_rx_queues : 0);
- conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
- reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
- reta_conf[conf_idx].reta[reta_idx] = q_idx;
- reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
- }
-
- lio_dev_rss_reta_update(eth_dev, reta_conf, LIO_RSS_MAX_TABLE_SZ);
-}
-
-static void
-lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct rte_eth_rss_conf rss_conf;
-
- switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
- case RTE_ETH_MQ_RX_RSS:
- lio_dev_rss_configure(eth_dev);
- break;
- case RTE_ETH_MQ_RX_NONE:
- /* if mq_mode is none, disable rss mode. */
- default:
- memset(&rss_conf, 0, sizeof(rss_conf));
- rss_state->hash_disable = 1;
- lio_dev_rss_hash_update(eth_dev, &rss_conf);
- }
-}
-
-/**
- * Setup our receive queue/ringbuffer. This is the
- * queue the Octeon uses to send us packets and
- * responses. We are given a memory pool for our
- * packet buffers that are used to populate the receive
- * queue.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- * @param q_no
- * Queue number
- * @param num_rx_descs
- * Number of entries in the queue
- * @param socket_id
- * Where to allocate memory
- * @param rx_conf
- * Pointer to the struction rte_eth_rxconf
- * @param mp
- * Pointer to the packet pool
- *
- * @return
- * - On success, return 0
- * - On failure, return -1
- */
-static int
-lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
- uint16_t num_rx_descs, unsigned int socket_id,
- const struct rte_eth_rxconf *rx_conf __rte_unused,
- struct rte_mempool *mp)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct rte_pktmbuf_pool_private *mbp_priv;
- uint32_t fw_mapped_oq;
- uint16_t buf_size;
-
- if (q_no >= lio_dev->nb_rx_queues) {
- lio_dev_err(lio_dev, "Invalid rx queue number %u\n", q_no);
- return -EINVAL;
- }
-
- lio_dev_dbg(lio_dev, "setting up rx queue %u\n", q_no);
-
- fw_mapped_oq = lio_dev->linfo.rxpciq[q_no].s.q_no;
-
- /* Free previous allocation if any */
- if (eth_dev->data->rx_queues[q_no] != NULL) {
- lio_dev_rx_queue_release(eth_dev, q_no);
- eth_dev->data->rx_queues[q_no] = NULL;
- }
-
- mbp_priv = rte_mempool_get_priv(mp);
- buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
-
- if (lio_setup_droq(lio_dev, fw_mapped_oq, num_rx_descs, buf_size, mp,
- socket_id)) {
- lio_dev_err(lio_dev, "droq allocation failed\n");
- return -1;
- }
-
- eth_dev->data->rx_queues[q_no] = lio_dev->droq[fw_mapped_oq];
-
- return 0;
-}
-
-/**
- * Release the receive queue/ringbuffer. Called by
- * the upper layers.
- *
- * @param eth_dev
- * Pointer to Ethernet device structure.
- * @param q_no
- * Receive queue index.
- *
- * @return
- * - nothing
- */
-void
-lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
-{
- struct lio_droq *droq = dev->data->rx_queues[q_no];
- int oq_no;
-
- if (droq) {
- oq_no = droq->q_no;
- lio_delete_droq_queue(droq->lio_dev, oq_no);
- }
-}
-
-/**
- * Allocate and initialize SW ring. Initialize associated HW registers.
- *
- * @param eth_dev
- * Pointer to structure rte_eth_dev
- *
- * @param q_no
- * Queue number
- *
- * @param num_tx_descs
- * Number of ringbuffer descriptors
- *
- * @param socket_id
- * NUMA socket id, used for memory allocations
- *
- * @param tx_conf
- * Pointer to the structure rte_eth_txconf
- *
- * @return
- * - On success, return 0
- * - On failure, return -errno value
- */
-static int
-lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
- uint16_t num_tx_descs, unsigned int socket_id,
- const struct rte_eth_txconf *tx_conf __rte_unused)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
- int retval;
-
- if (q_no >= lio_dev->nb_tx_queues) {
- lio_dev_err(lio_dev, "Invalid tx queue number %u\n", q_no);
- return -EINVAL;
- }
-
- lio_dev_dbg(lio_dev, "setting up tx queue %u\n", q_no);
-
- /* Free previous allocation if any */
- if (eth_dev->data->tx_queues[q_no] != NULL) {
- lio_dev_tx_queue_release(eth_dev, q_no);
- eth_dev->data->tx_queues[q_no] = NULL;
- }
-
- retval = lio_setup_iq(lio_dev, q_no, lio_dev->linfo.txpciq[q_no],
- num_tx_descs, lio_dev, socket_id);
-
- if (retval) {
- lio_dev_err(lio_dev, "Runtime IQ(TxQ) creation failed.\n");
- return retval;
- }
-
- retval = lio_setup_sglists(lio_dev, q_no, fw_mapped_iq,
- lio_dev->instr_queue[fw_mapped_iq]->nb_desc,
- socket_id);
-
- if (retval) {
- lio_delete_instruction_queue(lio_dev, fw_mapped_iq);
- return retval;
- }
-
- eth_dev->data->tx_queues[q_no] = lio_dev->instr_queue[fw_mapped_iq];
-
- return 0;
-}
-
-/**
- * Release the transmit queue/ringbuffer. Called by
- * the upper layers.
- *
- * @param eth_dev
- * Pointer to Ethernet device structure.
- * @param q_no
- * Transmit queue index.
- *
- * @return
- * - nothing
- */
-void
-lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
-{
- struct lio_instr_queue *tq = dev->data->tx_queues[q_no];
- uint32_t fw_mapped_iq_no;
-
-
- if (tq) {
- /* Free sg_list */
- lio_delete_sglist(tq);
-
- fw_mapped_iq_no = tq->txpciq.s.q_no;
- lio_delete_instruction_queue(tq->lio_dev, fw_mapped_iq_no);
- }
-}
-
-/**
- * Api to check link state.
- */
-static void
-lio_dev_get_link_status(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- struct lio_link_status_resp *resp;
- union octeon_link_status *ls;
- struct lio_soft_command *sc;
- uint32_t resp_size;
-
- if (!lio_dev->intf_open)
- return;
-
- resp_size = sizeof(struct lio_link_status_resp);
- sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
- if (sc == NULL)
- return;
-
- resp = (struct lio_link_status_resp *)sc->virtrptr;
- lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
- LIO_OPCODE_INFO, 0, 0, 0);
-
- /* Setting wait time in seconds */
- sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
- if (lio_send_soft_command(lio_dev, sc) == LIO_IQ_SEND_FAILED)
- goto get_status_fail;
-
- while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
- rte_delay_ms(1);
- }
-
- if (resp->status)
- goto get_status_fail;
-
- ls = &resp->link_info.link;
-
- lio_swap_8B_data((uint64_t *)ls, sizeof(union octeon_link_status) >> 3);
-
- if (lio_dev->linfo.link.link_status64 != ls->link_status64) {
- if (ls->s.mtu < eth_dev->data->mtu) {
- lio_dev_info(lio_dev, "Lowered VF MTU to %d as PF MTU dropped\n",
- ls->s.mtu);
- eth_dev->data->mtu = ls->s.mtu;
- }
- lio_dev->linfo.link.link_status64 = ls->link_status64;
- lio_dev_link_update(eth_dev, 0);
- }
-
- lio_free_soft_command(sc);
-
- return;
-
-get_status_fail:
- lio_free_soft_command(sc);
-}
-
-/* This function will be invoked every LSC_TIMEOUT ns (100ms)
- * and will update link state if it changes.
- */
-static void
-lio_sync_link_state_check(void *eth_dev)
-{
- struct lio_device *lio_dev =
- (((struct rte_eth_dev *)eth_dev)->data->dev_private);
-
- if (lio_dev->port_configured)
- lio_dev_get_link_status(eth_dev);
-
- /* Schedule periodic link status check.
- * Stop check if interface is close and start again while opening.
- */
- if (lio_dev->intf_open)
- rte_eal_alarm_set(LIO_LSC_TIMEOUT, lio_sync_link_state_check,
- eth_dev);
-}
-
-static int
-lio_dev_start(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- int ret = 0;
-
- lio_dev_info(lio_dev, "Starting port %d\n", eth_dev->data->port_id);
-
- if (lio_dev->fn_list.enable_io_queues(lio_dev))
- return -1;
-
- if (lio_send_rx_ctrl_cmd(eth_dev, 1))
- return -1;
-
- /* Ready for link status updates */
- lio_dev->intf_open = 1;
- rte_mb();
-
- /* Configure RSS if device configured with multiple RX queues. */
- lio_dev_mq_rx_configure(eth_dev);
-
- /* Before update the link info,
- * must set linfo.link.link_status64 to 0.
- */
- lio_dev->linfo.link.link_status64 = 0;
-
- /* start polling for lsc */
- ret = rte_eal_alarm_set(LIO_LSC_TIMEOUT,
- lio_sync_link_state_check,
- eth_dev);
- if (ret) {
- lio_dev_err(lio_dev,
- "link state check handler creation failed\n");
- goto dev_lsc_handle_error;
- }
-
- while ((lio_dev->linfo.link.link_status64 == 0) && (--timeout))
- rte_delay_ms(1);
-
- if (lio_dev->linfo.link.link_status64 == 0) {
- ret = -1;
- goto dev_mtu_set_error;
- }
-
- ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
- if (ret != 0)
- goto dev_mtu_set_error;
-
- return 0;
-
-dev_mtu_set_error:
- rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
-
-dev_lsc_handle_error:
- lio_dev->intf_open = 0;
- lio_send_rx_ctrl_cmd(eth_dev, 0);
-
- return ret;
-}
-
-/* Stop device and disable input/output functions */
-static int
-lio_dev_stop(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- lio_dev_info(lio_dev, "Stopping port %d\n", eth_dev->data->port_id);
- eth_dev->data->dev_started = 0;
- lio_dev->intf_open = 0;
- rte_mb();
-
- /* Cancel callback if still running. */
- rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
-
- lio_send_rx_ctrl_cmd(eth_dev, 0);
-
- lio_wait_for_instr_fetch(lio_dev);
-
- /* Clear recorded link status */
- lio_dev->linfo.link.link_status64 = 0;
-
- return 0;
-}
-
-static int
-lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_info(lio_dev, "Port is stopped, Start the port first\n");
- return 0;
- }
-
- if (lio_dev->linfo.link.s.link_up) {
- lio_dev_info(lio_dev, "Link is already UP\n");
- return 0;
- }
-
- if (lio_send_rx_ctrl_cmd(eth_dev, 1)) {
- lio_dev_err(lio_dev, "Unable to set Link UP\n");
- return -1;
- }
-
- lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
-
- return 0;
-}
-
-static int
-lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_info(lio_dev, "Port is stopped, Start the port first\n");
- return 0;
- }
-
- if (!lio_dev->linfo.link.s.link_up) {
- lio_dev_info(lio_dev, "Link is already DOWN\n");
- return 0;
- }
-
- lio_dev->linfo.link.s.link_up = 0;
- eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
-
- if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
- lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
- lio_dev_err(lio_dev, "Unable to set Link Down\n");
- return -1;
- }
-
- return 0;
-}
-
-/**
- * Reset and stop the device. This occurs on the first
- * call to this routine. Subsequent calls will simply
- * return. NB: This will require the NIC to be rebooted.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- *
- * @return
- * - nothing
- */
-static int
-lio_dev_close(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- int ret = 0;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- lio_dev_info(lio_dev, "closing port %d\n", eth_dev->data->port_id);
-
- if (lio_dev->intf_open)
- ret = lio_dev_stop(eth_dev);
-
- /* Reset ioq regs */
- lio_dev->fn_list.setup_device_regs(lio_dev);
-
- if (lio_dev->pci_dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
- cn23xx_vf_ask_pf_to_do_flr(lio_dev);
- rte_delay_ms(LIO_PCI_FLR_WAIT);
- }
-
- /* lio_free_mbox */
- lio_dev->fn_list.free_mbox(lio_dev);
-
- /* Free glist resources */
- rte_free(lio_dev->glist_head);
- rte_free(lio_dev->glist_lock);
- lio_dev->glist_head = NULL;
- lio_dev->glist_lock = NULL;
-
- lio_dev->port_configured = 0;
-
- /* Delete all queues */
- lio_dev_clear_queues(eth_dev);
-
- return ret;
-}
-
-/**
- * Enable tunnel rx checksum verification from firmware.
- */
-static void
-lio_enable_hw_tunnel_rx_checksum(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_RX_CSUM_CTL;
- ctrl_pkt.ncmd.s.param1 = LIO_CMD_RXCSUM_ENABLE;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send TNL_RX_CSUM command\n");
- return;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
- lio_dev_err(lio_dev, "TNL_RX_CSUM command timed out\n");
-}
-
-/**
- * Enable checksum calculation for inner packet in a tunnel.
- */
-static void
-lio_enable_hw_tunnel_tx_checksum(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_TX_CSUM_CTL;
- ctrl_pkt.ncmd.s.param1 = LIO_CMD_TXCSUM_ENABLE;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send TNL_TX_CSUM command\n");
- return;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
- lio_dev_err(lio_dev, "TNL_TX_CSUM command timed out\n");
-}
-
-static int
-lio_send_queue_count_update(struct rte_eth_dev *eth_dev, int num_txq,
- int num_rxq)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (strcmp(lio_dev->firmware_version, LIO_Q_RECONF_MIN_VERSION) < 0) {
- lio_dev_err(lio_dev, "Require firmware version >= %s\n",
- LIO_Q_RECONF_MIN_VERSION);
- return -ENOTSUP;
- }
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_QUEUE_COUNT_CTL;
- ctrl_pkt.ncmd.s.param1 = num_txq;
- ctrl_pkt.ncmd.s.param2 = num_rxq;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send queue count control command\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Queue count control command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_reconf_queues(struct rte_eth_dev *eth_dev, int num_txq, int num_rxq)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- int ret;
-
- if (lio_dev->nb_rx_queues != num_rxq ||
- lio_dev->nb_tx_queues != num_txq) {
- if (lio_send_queue_count_update(eth_dev, num_txq, num_rxq))
- return -1;
- lio_dev->nb_rx_queues = num_rxq;
- lio_dev->nb_tx_queues = num_txq;
- }
-
- if (lio_dev->intf_open) {
- ret = lio_dev_stop(eth_dev);
- if (ret != 0)
- return ret;
- }
-
- /* Reset ioq registers */
- if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
- lio_dev_err(lio_dev, "Failed to configure device registers\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_configure(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- int retval, num_iqueues, num_oqueues;
- uint8_t mac[RTE_ETHER_ADDR_LEN], i;
- struct lio_if_cfg_resp *resp;
- struct lio_soft_command *sc;
- union lio_if_cfg if_cfg;
- uint32_t resp_size;
-
- PMD_INIT_FUNC_TRACE();
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
- eth_dev->data->dev_conf.rxmode.offloads |=
- RTE_ETH_RX_OFFLOAD_RSS_HASH;
-
- /* Inform firmware about change in number of queues to use.
- * Disable IO queues and reset registers for re-configuration.
- */
- if (lio_dev->port_configured)
- return lio_reconf_queues(eth_dev,
- eth_dev->data->nb_tx_queues,
- eth_dev->data->nb_rx_queues);
-
- lio_dev->nb_rx_queues = eth_dev->data->nb_rx_queues;
- lio_dev->nb_tx_queues = eth_dev->data->nb_tx_queues;
-
- /* Set max number of queues which can be re-configured. */
- lio_dev->max_rx_queues = eth_dev->data->nb_rx_queues;
- lio_dev->max_tx_queues = eth_dev->data->nb_tx_queues;
-
- resp_size = sizeof(struct lio_if_cfg_resp);
- sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
- if (sc == NULL)
- return -ENOMEM;
-
- resp = (struct lio_if_cfg_resp *)sc->virtrptr;
-
- /* Firmware doesn't have the capability to reconfigure the queues.
- * Claim all queues and use as many as required.
- */
- if_cfg.if_cfg64 = 0;
- if_cfg.s.num_iqueues = lio_dev->nb_tx_queues;
- if_cfg.s.num_oqueues = lio_dev->nb_rx_queues;
- if_cfg.s.base_queue = 0;
-
- if_cfg.s.gmx_port_id = lio_dev->pf_num;
-
- lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
- LIO_OPCODE_IF_CFG, 0,
- if_cfg.if_cfg64, 0);
-
- /* Setting wait time in seconds */
- sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
- retval = lio_send_soft_command(lio_dev, sc);
- if (retval == LIO_IQ_SEND_FAILED) {
- lio_dev_err(lio_dev, "iq/oq config failed status: %x\n",
- retval);
- /* Soft instr is freed by driver in case of failure. */
- goto nic_config_fail;
- }
-
- /* Poll until the completion status word indicates that the
- * response arrived, or the command times out.
- */
- while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
- lio_process_ordered_list(lio_dev);
- rte_delay_ms(1);
- }
-
- retval = resp->status;
- if (retval) {
- lio_dev_err(lio_dev, "iq/oq config failed\n");
- goto nic_config_fail;
- }
-
- strlcpy(lio_dev->firmware_version,
- resp->cfg_info.lio_firmware_version, LIO_FW_VERSION_LENGTH);
-
- lio_swap_8B_data((uint64_t *)(&resp->cfg_info),
- sizeof(struct octeon_if_cfg_info) >> 3);
-
- num_iqueues = lio_hweight64(resp->cfg_info.iqmask);
- num_oqueues = lio_hweight64(resp->cfg_info.oqmask);
-
- if (!(num_iqueues) || !(num_oqueues)) {
- lio_dev_err(lio_dev,
- "Got bad iqueues (%016lx) or oqueues (%016lx) from firmware.\n",
- (unsigned long)resp->cfg_info.iqmask,
- (unsigned long)resp->cfg_info.oqmask);
- goto nic_config_fail;
- }
-
- lio_dev_dbg(lio_dev,
- "interface %d, iqmask %016lx, oqmask %016lx, numiqueues %d, numoqueues %d\n",
- eth_dev->data->port_id,
- (unsigned long)resp->cfg_info.iqmask,
- (unsigned long)resp->cfg_info.oqmask,
- num_iqueues, num_oqueues);
-
- lio_dev->linfo.num_rxpciq = num_oqueues;
- lio_dev->linfo.num_txpciq = num_iqueues;
-
- for (i = 0; i < num_oqueues; i++) {
- lio_dev->linfo.rxpciq[i].rxpciq64 =
- resp->cfg_info.linfo.rxpciq[i].rxpciq64;
- lio_dev_dbg(lio_dev, "index %d OQ %d\n",
- i, lio_dev->linfo.rxpciq[i].s.q_no);
- }
-
- for (i = 0; i < num_iqueues; i++) {
- lio_dev->linfo.txpciq[i].txpciq64 =
- resp->cfg_info.linfo.txpciq[i].txpciq64;
- lio_dev_dbg(lio_dev, "index %d IQ %d\n",
- i, lio_dev->linfo.txpciq[i].s.q_no);
- }
-
- lio_dev->linfo.hw_addr = resp->cfg_info.linfo.hw_addr;
- lio_dev->linfo.gmxport = resp->cfg_info.linfo.gmxport;
- lio_dev->linfo.link.link_status64 =
- resp->cfg_info.linfo.link.link_status64;
-
- /* 64-bit swap required on LE machines */
- lio_swap_8B_data(&lio_dev->linfo.hw_addr, 1);
- for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
- mac[i] = *((uint8_t *)(((uint8_t *)&lio_dev->linfo.hw_addr) +
- 2 + i));
-
- /* Copy the permanent MAC address */
- rte_ether_addr_copy((struct rte_ether_addr *)mac,
- &eth_dev->data->mac_addrs[0]);
-
- /* enable firmware checksum support for tunnel packets */
- lio_enable_hw_tunnel_rx_checksum(eth_dev);
- lio_enable_hw_tunnel_tx_checksum(eth_dev);
-
- lio_dev->glist_lock =
- rte_zmalloc(NULL, sizeof(*lio_dev->glist_lock) * num_iqueues, 0);
- if (lio_dev->glist_lock == NULL)
- return -ENOMEM;
-
- lio_dev->glist_head =
- rte_zmalloc(NULL, sizeof(*lio_dev->glist_head) * num_iqueues,
- 0);
- if (lio_dev->glist_head == NULL) {
- rte_free(lio_dev->glist_lock);
- lio_dev->glist_lock = NULL;
- return -ENOMEM;
- }
-
- lio_dev_link_update(eth_dev, 0);
-
- lio_dev->port_configured = 1;
-
- lio_free_soft_command(sc);
-
- /* Reset ioq regs */
- lio_dev->fn_list.setup_device_regs(lio_dev);
-
- /* Free iq_0 used during init */
- lio_free_instr_queue0(lio_dev);
-
- return 0;
-
-nic_config_fail:
- lio_dev_err(lio_dev, "Failed retval %d\n", retval);
- lio_free_soft_command(sc);
- lio_free_instr_queue0(lio_dev);
-
- return -ENODEV;
-}
-
-/* Ethernet device operations */
-static const struct eth_dev_ops liovf_eth_dev_ops = {
- .dev_configure = lio_dev_configure,
- .dev_start = lio_dev_start,
- .dev_stop = lio_dev_stop,
- .dev_set_link_up = lio_dev_set_link_up,
- .dev_set_link_down = lio_dev_set_link_down,
- .dev_close = lio_dev_close,
- .promiscuous_enable = lio_dev_promiscuous_enable,
- .promiscuous_disable = lio_dev_promiscuous_disable,
- .allmulticast_enable = lio_dev_allmulticast_enable,
- .allmulticast_disable = lio_dev_allmulticast_disable,
- .link_update = lio_dev_link_update,
- .stats_get = lio_dev_stats_get,
- .xstats_get = lio_dev_xstats_get,
- .xstats_get_names = lio_dev_xstats_get_names,
- .stats_reset = lio_dev_stats_reset,
- .xstats_reset = lio_dev_xstats_reset,
- .dev_infos_get = lio_dev_info_get,
- .vlan_filter_set = lio_dev_vlan_filter_set,
- .rx_queue_setup = lio_dev_rx_queue_setup,
- .rx_queue_release = lio_dev_rx_queue_release,
- .tx_queue_setup = lio_dev_tx_queue_setup,
- .tx_queue_release = lio_dev_tx_queue_release,
- .reta_update = lio_dev_rss_reta_update,
- .reta_query = lio_dev_rss_reta_query,
- .rss_hash_conf_get = lio_dev_rss_hash_conf_get,
- .rss_hash_update = lio_dev_rss_hash_update,
- .udp_tunnel_port_add = lio_dev_udp_tunnel_add,
- .udp_tunnel_port_del = lio_dev_udp_tunnel_del,
- .mtu_set = lio_dev_mtu_set,
-};
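/*
 * Illustration only, not part of the patch: how the generic ethdev API calls
 * made by an application dispatch into the ops table above. The helper name
 * and the single-queue configuration are hypothetical; rx/tx queue setup
 * (which maps onto .rx_queue_setup/.tx_queue_setup) is omitted for brevity.
 * Requires <rte_ethdev.h>.
 */
static int
lio_port_bringup_sketch(uint16_t port_id, const struct rte_eth_conf *conf)
{
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, conf); /* -> .dev_configure */
	if (ret != 0)
		return ret;

	return rte_eth_dev_start(port_id);                 /* -> .dev_start */
}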
-
-static void
-lio_check_pf_hs_response(void *lio_dev)
-{
- struct lio_device *dev = lio_dev;
-
- /* check till response arrives */
- if (dev->pfvf_hsword.coproc_tics_per_us)
- return;
-
- cn23xx_vf_handle_mbox(dev);
-
- rte_eal_alarm_set(1, lio_check_pf_hs_response, lio_dev);
-}
-
-/**
- * \brief Identify the LIO device and map the BAR address space
- * @param lio_dev lio device
- */
-static int
-lio_chip_specific_setup(struct lio_device *lio_dev)
-{
- struct rte_pci_device *pdev = lio_dev->pci_dev;
- uint32_t dev_id = pdev->id.device_id;
- const char *s;
- int ret = 1;
-
- switch (dev_id) {
- case LIO_CN23XX_VF_VID:
- lio_dev->chip_id = LIO_CN23XX_VF_VID;
- ret = cn23xx_vf_setup_device(lio_dev);
- s = "CN23XX VF";
- break;
- default:
- s = "?";
- lio_dev_err(lio_dev, "Unsupported Chip\n");
- }
-
- if (!ret)
- lio_dev_info(lio_dev, "DEVICE : %s\n", s);
-
- return ret;
-}
-
-static int
-lio_first_time_init(struct lio_device *lio_dev,
- struct rte_pci_device *pdev)
-{
- int dpdk_queues;
-
- PMD_INIT_FUNC_TRACE();
-
- /* set dpdk specific pci device pointer */
- lio_dev->pci_dev = pdev;
-
- /* Identify the LIO type and set device ops */
- if (lio_chip_specific_setup(lio_dev)) {
- lio_dev_err(lio_dev, "Chip specific setup failed\n");
- return -1;
- }
-
- /* Initialize soft command buffer pool */
- if (lio_setup_sc_buffer_pool(lio_dev)) {
- lio_dev_err(lio_dev, "sc buffer pool allocation failed\n");
- return -1;
- }
-
- /* Initialize lists to manage the requests of different types that
- * arrive from applications for this lio device.
- */
- lio_setup_response_list(lio_dev);
-
- if (lio_dev->fn_list.setup_mbox(lio_dev)) {
- lio_dev_err(lio_dev, "Mailbox setup failed\n");
- goto error;
- }
-
- /* Check PF response */
- lio_check_pf_hs_response((void *)lio_dev);
-
- /* Do handshake and exit if incompatible PF driver */
- if (cn23xx_pfvf_handshake(lio_dev))
- goto error;
-
- /* Request and wait for device reset. */
- if (pdev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
- cn23xx_vf_ask_pf_to_do_flr(lio_dev);
- /* FLR wait time doubled as a precaution. */
- rte_delay_ms(LIO_PCI_FLR_WAIT * 2);
- }
-
- if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
- lio_dev_err(lio_dev, "Failed to configure device registers\n");
- goto error;
- }
-
- if (lio_setup_instr_queue0(lio_dev)) {
- lio_dev_err(lio_dev, "Failed to setup instruction queue 0\n");
- goto error;
- }
-
- dpdk_queues = (int)lio_dev->sriov_info.rings_per_vf;
-
- lio_dev->max_tx_queues = dpdk_queues;
- lio_dev->max_rx_queues = dpdk_queues;
-
- /* Enable input and output queues for this device */
- if (lio_dev->fn_list.enable_io_queues(lio_dev))
- goto error;
-
- return 0;
-
-error:
- lio_free_sc_buffer_pool(lio_dev);
- if (lio_dev->mbox[0])
- lio_dev->fn_list.free_mbox(lio_dev);
- if (lio_dev->instr_queue[0])
- lio_free_instr_queue0(lio_dev);
-
- return -1;
-}
-
-static int
-lio_eth_dev_uninit(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- PMD_INIT_FUNC_TRACE();
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* lio_free_sc_buffer_pool */
- lio_free_sc_buffer_pool(lio_dev);
-
- return 0;
-}
-
-static int
-lio_eth_dev_init(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- PMD_INIT_FUNC_TRACE();
-
- eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
- eth_dev->tx_pkt_burst = &lio_dev_xmit_pkts;
-
- /* Primary does the initialization. */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- rte_eth_copy_pci_info(eth_dev, pdev);
-
- if (pdev->mem_resource[0].addr) {
- lio_dev->hw_addr = pdev->mem_resource[0].addr;
- } else {
- PMD_INIT_LOG(ERR, "ERROR: Failed to map BAR0\n");
- return -ENODEV;
- }
-
- lio_dev->eth_dev = eth_dev;
- /* set lio device print string */
- snprintf(lio_dev->dev_string, sizeof(lio_dev->dev_string),
- "%s[%02x:%02x.%x]", pdev->driver->driver.name,
- pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
-
- lio_dev->port_id = eth_dev->data->port_id;
-
- if (lio_first_time_init(lio_dev, pdev)) {
- lio_dev_err(lio_dev, "Device init failed\n");
- return -EINVAL;
- }
-
- eth_dev->dev_ops = &liovf_eth_dev_ops;
- eth_dev->data->mac_addrs = rte_zmalloc("lio", RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL) {
- lio_dev_err(lio_dev,
- "MAC addresses memory allocation failed\n");
- eth_dev->dev_ops = NULL;
- eth_dev->rx_pkt_burst = NULL;
- eth_dev->tx_pkt_burst = NULL;
- return -ENOMEM;
- }
-
- rte_atomic64_set(&lio_dev->status, LIO_DEV_RUNNING);
- rte_wmb();
-
- lio_dev->port_configured = 0;
- /* Always allow unicast packets */
- lio_dev->ifflags |= LIO_IFFLAG_UNICAST;
-
- return 0;
-}
-
-static int
-lio_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
- struct rte_pci_device *pci_dev)
-{
- return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct lio_device),
- lio_eth_dev_init);
-}
-
-static int
-lio_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
-{
- return rte_eth_dev_pci_generic_remove(pci_dev,
- lio_eth_dev_uninit);
-}
-
-/* Set of PCI devices this driver supports */
-static const struct rte_pci_id pci_id_liovf_map[] = {
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, LIO_CN23XX_VF_VID) },
- { .vendor_id = 0, /* sentinel */ }
-};
-
-static struct rte_pci_driver rte_liovf_pmd = {
- .id_table = pci_id_liovf_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = lio_eth_dev_pci_probe,
- .remove = lio_eth_dev_pci_remove,
-};
-
-RTE_PMD_REGISTER_PCI(net_liovf, rte_liovf_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(net_liovf, pci_id_liovf_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_liovf, "* igb_uio | vfio-pci");
-RTE_LOG_REGISTER_SUFFIX(lio_logtype_init, init, NOTICE);
-RTE_LOG_REGISTER_SUFFIX(lio_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/liquidio/lio_ethdev.h b/drivers/net/liquidio/lio_ethdev.h
deleted file mode 100644
index ece2b03858..0000000000
--- a/drivers/net/liquidio/lio_ethdev.h
+++ /dev/null
@@ -1,179 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_ETHDEV_H_
-#define _LIO_ETHDEV_H_
-
-#include <stdint.h>
-
-#include "lio_struct.h"
-
-/* timeout to check link state updates from firmware in us */
-#define LIO_LSC_TIMEOUT 100000 /* 100000us (100ms) */
-#define LIO_MAX_CMD_TIMEOUT 10000 /* 10000ms (10s) */
-
-/* The max frame size with default MTU */
-#define LIO_ETH_MAX_LEN (RTE_ETHER_MTU + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
-
-#define LIO_DEV(_eth_dev) ((_eth_dev)->data->dev_private)
-
-/* LIO Response condition variable */
-struct lio_dev_ctrl_cmd {
- struct rte_eth_dev *eth_dev;
- uint64_t cond;
-};
-
-enum lio_bus_speed {
- LIO_LINK_SPEED_UNKNOWN = 0,
- LIO_LINK_SPEED_10000 = 10000,
- LIO_LINK_SPEED_25000 = 25000
-};
-
-struct octeon_if_cfg_info {
- uint64_t iqmask; /** mask for IQs enabled for the port */
- uint64_t oqmask; /** mask for OQs enabled for the port */
- struct octeon_link_info linfo; /** initial link information */
- char lio_firmware_version[LIO_FW_VERSION_LENGTH];
-};
-
-/** Stats for each NIC port in RX direction. */
-struct octeon_rx_stats {
- /* link-level stats */
- uint64_t total_rcvd;
- uint64_t bytes_rcvd;
- uint64_t total_bcst;
- uint64_t total_mcst;
- uint64_t runts;
- uint64_t ctl_rcvd;
- uint64_t fifo_err; /* Accounts for over/under-run of buffers */
- uint64_t dmac_drop;
- uint64_t fcs_err;
- uint64_t jabber_err;
- uint64_t l2_err;
- uint64_t frame_err;
-
- /* firmware stats */
- uint64_t fw_total_rcvd;
- uint64_t fw_total_fwd;
- uint64_t fw_total_fwd_bytes;
- uint64_t fw_err_pko;
- uint64_t fw_err_link;
- uint64_t fw_err_drop;
- uint64_t fw_rx_vxlan;
- uint64_t fw_rx_vxlan_err;
-
- /* LRO */
- uint64_t fw_lro_pkts; /* Number of packets that are LROed */
- uint64_t fw_lro_octs; /* Number of octets that are LROed */
- uint64_t fw_total_lro; /* Number of LRO packets formed */
- uint64_t fw_lro_aborts; /* Number of times LRO of a packet was aborted */
- uint64_t fw_lro_aborts_port;
- uint64_t fw_lro_aborts_seq;
- uint64_t fw_lro_aborts_tsval;
- uint64_t fw_lro_aborts_timer;
- /* intrmod: packet forward rate */
- uint64_t fwd_rate;
-};
-
-/** Stats for each NIC port in TX direction. */
-struct octeon_tx_stats {
- /* link-level stats */
- uint64_t total_pkts_sent;
- uint64_t total_bytes_sent;
- uint64_t mcast_pkts_sent;
- uint64_t bcast_pkts_sent;
- uint64_t ctl_sent;
- uint64_t one_collision_sent; /* Packets sent after one collision */
- /* Packets sent after multiple collision */
- uint64_t multi_collision_sent;
- /* Packets not sent due to max collisions */
- uint64_t max_collision_fail;
- /* Packets not sent due to max deferrals */
- uint64_t max_deferral_fail;
- /* Accounts for over/under-run of buffers */
- uint64_t fifo_err;
- uint64_t runts;
- uint64_t total_collisions; /* Total number of collisions detected */
-
- /* firmware stats */
- uint64_t fw_total_sent;
- uint64_t fw_total_fwd;
- uint64_t fw_total_fwd_bytes;
- uint64_t fw_err_pko;
- uint64_t fw_err_link;
- uint64_t fw_err_drop;
- uint64_t fw_err_tso;
- uint64_t fw_tso; /* number of tso requests */
- uint64_t fw_tso_fwd; /* number of packets segmented in tso */
- uint64_t fw_tx_vxlan;
-};
-
-struct octeon_link_stats {
- struct octeon_rx_stats fromwire;
- struct octeon_tx_stats fromhost;
-};
-
-union lio_if_cfg {
- uint64_t if_cfg64;
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t base_queue : 16;
- uint64_t num_iqueues : 16;
- uint64_t num_oqueues : 16;
- uint64_t gmx_port_id : 8;
- uint64_t vf_id : 8;
-#else
- uint64_t vf_id : 8;
- uint64_t gmx_port_id : 8;
- uint64_t num_oqueues : 16;
- uint64_t num_iqueues : 16;
- uint64_t base_queue : 16;
-#endif
- } s;
-};
-
-struct lio_if_cfg_resp {
- uint64_t rh;
- struct octeon_if_cfg_info cfg_info;
- uint64_t status;
-};
-
-struct lio_link_stats_resp {
- uint64_t rh;
- struct octeon_link_stats link_stats;
- uint64_t status;
-};
-
-struct lio_link_status_resp {
- uint64_t rh;
- struct octeon_link_info link_info;
- uint64_t status;
-};
-
-struct lio_rss_set {
- struct param {
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- uint64_t flags : 16;
- uint64_t hashinfo : 32;
- uint64_t itablesize : 16;
- uint64_t hashkeysize : 16;
- uint64_t reserved : 48;
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t itablesize : 16;
- uint64_t hashinfo : 32;
- uint64_t flags : 16;
- uint64_t reserved : 48;
- uint64_t hashkeysize : 16;
-#endif
- } param;
-
- uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
- uint8_t key[LIO_RSS_MAX_KEY_SZ];
-};
-
-void lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
-
-void lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
-
-#endif /* _LIO_ETHDEV_H_ */
diff --git a/drivers/net/liquidio/lio_logs.h b/drivers/net/liquidio/lio_logs.h
deleted file mode 100644
index f227827081..0000000000
--- a/drivers/net/liquidio/lio_logs.h
+++ /dev/null
@@ -1,58 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_LOGS_H_
-#define _LIO_LOGS_H_
-
-extern int lio_logtype_driver;
-#define lio_dev_printf(lio_dev, level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, lio_logtype_driver, \
- "%s" fmt, (lio_dev)->dev_string, ##args)
-
-#define lio_dev_info(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, INFO, "INFO: " fmt, ##args)
-
-#define lio_dev_err(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, ERR, "ERROR: %s() " fmt, __func__, ##args)
-
-extern int lio_logtype_init;
-#define PMD_INIT_LOG(level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, lio_logtype_init, \
- fmt, ## args)
-
-/* Enable these through config options */
-#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, "%s() >>\n", __func__)
-
-#define lio_dev_dbg(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, DEBUG, "DEBUG: %s() " fmt, __func__, ##args)
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_RX
-#define PMD_RX_LOG(lio_dev, level, fmt, args...) \
- lio_dev_printf(lio_dev, level, "RX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_RX */
-#define PMD_RX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_RX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_TX
-#define PMD_TX_LOG(lio_dev, level, fmt, args...) \
- lio_dev_printf(lio_dev, level, "TX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_TX */
-#define PMD_TX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_TX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_MBOX
-#define PMD_MBOX_LOG(lio_dev, level, fmt, args...) \
- lio_dev_printf(lio_dev, level, "MBOX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_MBOX */
-#define PMD_MBOX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_MBOX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
-#define PMD_REGS_LOG(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, DEBUG, "REGS: " fmt, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_REGS */
-#define PMD_REGS_LOG(lio_dev, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_REGS */
-
-#endif /* _LIO_LOGS_H_ */
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
deleted file mode 100644
index e09798ddd7..0000000000
--- a/drivers/net/liquidio/lio_rxtx.c
+++ /dev/null
@@ -1,1804 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "lio_logs.h"
-#include "lio_struct.h"
-#include "lio_ethdev.h"
-#include "lio_rxtx.h"
-
-#define LIO_MAX_SG 12
-/* Flush the IQ if the number of available tx descriptors falls below LIO_FLUSH_WM */
-#define LIO_FLUSH_WM(_iq) ((_iq)->nb_desc / 2)
-#define LIO_PKT_IN_DONE_CNT_MASK 0x00000000FFFFFFFFULL
-
-static void
-lio_droq_compute_max_packet_bufs(struct lio_droq *droq)
-{
- uint32_t count = 0;
-
- do {
- count += droq->buffer_size;
- } while (count < LIO_MAX_RX_PKTLEN);
-}
-
-static void
-lio_droq_reset_indices(struct lio_droq *droq)
-{
- droq->read_idx = 0;
- droq->write_idx = 0;
- droq->refill_idx = 0;
- droq->refill_count = 0;
- rte_atomic64_set(&droq->pkts_pending, 0);
-}
-
-static void
-lio_droq_destroy_ring_buffers(struct lio_droq *droq)
-{
- uint32_t i;
-
- for (i = 0; i < droq->nb_desc; i++) {
- if (droq->recv_buf_list[i].buffer) {
- rte_pktmbuf_free((struct rte_mbuf *)
- droq->recv_buf_list[i].buffer);
- droq->recv_buf_list[i].buffer = NULL;
- }
- }
-
- lio_droq_reset_indices(droq);
-}
-
-static int
-lio_droq_setup_ring_buffers(struct lio_device *lio_dev,
- struct lio_droq *droq)
-{
- struct lio_droq_desc *desc_ring = droq->desc_ring;
- uint32_t i;
- void *buf;
-
- for (i = 0; i < droq->nb_desc; i++) {
- buf = rte_pktmbuf_alloc(droq->mpool);
- if (buf == NULL) {
- lio_dev_err(lio_dev, "buffer alloc failed\n");
- droq->stats.rx_alloc_failure++;
- lio_droq_destroy_ring_buffers(droq);
- return -ENOMEM;
- }
-
- droq->recv_buf_list[i].buffer = buf;
- droq->info_list[i].length = 0;
-
- /* map ring buffers into memory */
- desc_ring[i].info_ptr = lio_map_ring_info(droq, i);
- desc_ring[i].buffer_ptr =
- lio_map_ring(droq->recv_buf_list[i].buffer);
- }
-
- lio_droq_reset_indices(droq);
-
- lio_droq_compute_max_packet_bufs(droq);
-
- return 0;
-}
-
-static void
-lio_dma_zone_free(struct lio_device *lio_dev, const struct rte_memzone *mz)
-{
- const struct rte_memzone *mz_tmp;
- int ret = 0;
-
- if (mz == NULL) {
- lio_dev_err(lio_dev, "Memzone NULL\n");
- return;
- }
-
- mz_tmp = rte_memzone_lookup(mz->name);
- if (mz_tmp == NULL) {
- lio_dev_err(lio_dev, "Memzone %s Not Found\n", mz->name);
- return;
- }
-
- ret = rte_memzone_free(mz);
- if (ret)
- lio_dev_err(lio_dev, "Memzone free Failed ret %d\n", ret);
-}
-
-/**
- * Frees the space for descriptor ring for the droq.
- *
- * @param lio_dev - pointer to the lio device structure
- * @param q_no - droq no.
- */
-static void
-lio_delete_droq(struct lio_device *lio_dev, uint32_t q_no)
-{
- struct lio_droq *droq = lio_dev->droq[q_no];
-
- lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
-
- lio_droq_destroy_ring_buffers(droq);
- rte_free(droq->recv_buf_list);
- droq->recv_buf_list = NULL;
- lio_dma_zone_free(lio_dev, droq->info_mz);
- lio_dma_zone_free(lio_dev, droq->desc_ring_mz);
-
- memset(droq, 0, LIO_DROQ_SIZE);
-}
-
-static void *
-lio_alloc_info_buffer(struct lio_device *lio_dev,
- struct lio_droq *droq, unsigned int socket_id)
-{
- droq->info_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
- "info_list", droq->q_no,
- (droq->nb_desc *
- LIO_DROQ_INFO_SIZE),
- RTE_CACHE_LINE_SIZE,
- socket_id);
-
- if (droq->info_mz == NULL)
- return NULL;
-
- droq->info_list_dma = droq->info_mz->iova;
- droq->info_alloc_size = droq->info_mz->len;
- droq->info_base_addr = (size_t)droq->info_mz->addr;
-
- return droq->info_mz->addr;
-}
-
-/**
- * Allocates space for the descriptor ring for the droq and
- * sets the base addr, num desc etc in Octeon registers.
- *
- * @param lio_dev - pointer to the lio device structure
- * @param q_no - droq no.
- * @param num_descs - number of descriptors in the ring
- * @param desc_size - size of each receive buffer
- * @param mpool - mempool to allocate receive buffers from
- * @param socket_id - NUMA socket for the allocations
- * @return Success: 0 Failure: -1
- */
-static int
-lio_init_droq(struct lio_device *lio_dev, uint32_t q_no,
- uint32_t num_descs, uint32_t desc_size,
- struct rte_mempool *mpool, unsigned int socket_id)
-{
- uint32_t c_refill_threshold;
- uint32_t desc_ring_size;
- struct lio_droq *droq;
-
- lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
-
- droq = lio_dev->droq[q_no];
- droq->lio_dev = lio_dev;
- droq->q_no = q_no;
- droq->mpool = mpool;
-
- c_refill_threshold = LIO_OQ_REFILL_THRESHOLD_CFG(lio_dev);
-
- droq->nb_desc = num_descs;
- droq->buffer_size = desc_size;
-
- desc_ring_size = droq->nb_desc * LIO_DROQ_DESC_SIZE;
- droq->desc_ring_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
- "droq", q_no,
- desc_ring_size,
- RTE_CACHE_LINE_SIZE,
- socket_id);
-
- if (droq->desc_ring_mz == NULL) {
- lio_dev_err(lio_dev,
- "Output queue %d ring alloc failed\n", q_no);
- return -1;
- }
-
- droq->desc_ring_dma = droq->desc_ring_mz->iova;
- droq->desc_ring = (struct lio_droq_desc *)droq->desc_ring_mz->addr;
-
- lio_dev_dbg(lio_dev, "droq[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
- q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
- lio_dev_dbg(lio_dev, "droq[%d]: num_desc: %d\n", q_no,
- droq->nb_desc);
-
- droq->info_list = lio_alloc_info_buffer(lio_dev, droq, socket_id);
- if (droq->info_list == NULL) {
- lio_dev_err(lio_dev, "Cannot allocate memory for info list.\n");
- goto init_droq_fail;
- }
-
- droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
- (droq->nb_desc *
- LIO_DROQ_RECVBUF_SIZE),
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (droq->recv_buf_list == NULL) {
- lio_dev_err(lio_dev,
- "Output queue recv buf list alloc failed\n");
- goto init_droq_fail;
- }
-
- if (lio_droq_setup_ring_buffers(lio_dev, droq))
- goto init_droq_fail;
-
- droq->refill_threshold = c_refill_threshold;
-
- rte_spinlock_init(&droq->lock);
-
- lio_dev->fn_list.setup_oq_regs(lio_dev, q_no);
-
- lio_dev->io_qmask.oq |= (1ULL << q_no);
-
- return 0;
-
-init_droq_fail:
- lio_delete_droq(lio_dev, q_no);
-
- return -1;
-}
-
-int
-lio_setup_droq(struct lio_device *lio_dev, int oq_no, int num_descs,
- int desc_size, struct rte_mempool *mpool, unsigned int socket_id)
-{
- struct lio_droq *droq;
-
- PMD_INIT_FUNC_TRACE();
-
- /* Allocate the DS for the new droq. */
- droq = rte_zmalloc_socket("ethdev RX queue", sizeof(*droq),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (droq == NULL)
- return -ENOMEM;
-
- lio_dev->droq[oq_no] = droq;
-
- /* Initialize the Droq */
- if (lio_init_droq(lio_dev, oq_no, num_descs, desc_size, mpool,
- socket_id)) {
- lio_dev_err(lio_dev, "Droq[%u] Initialization Failed\n", oq_no);
- rte_free(lio_dev->droq[oq_no]);
- lio_dev->droq[oq_no] = NULL;
- return -ENOMEM;
- }
-
- lio_dev->num_oqs++;
-
- lio_dev_dbg(lio_dev, "Total number of OQ: %d\n", lio_dev->num_oqs);
-
- /* Send credit for octeon output queues. credits are always
- * sent after the output queue is enabled.
- */
- rte_write32(lio_dev->droq[oq_no]->nb_desc,
- lio_dev->droq[oq_no]->pkts_credit_reg);
- rte_wmb();
-
- return 0;
-}
-
-static inline uint32_t
-lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
-{
- uint32_t buf_cnt = 0;
-
- while (total_len > (buf_size * buf_cnt))
- buf_cnt++;
-
- return buf_cnt;
-}
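/*
 * Note (illustration, not driver code): the loop above is simply a ceiling
 * division, i.e. the number of receive buffers needed to hold total_len bytes.
 */
static inline uint32_t
lio_droq_get_bufcount_sketch(uint32_t buf_size, uint32_t total_len)
{
	return (total_len + buf_size - 1) / buf_size;
}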
-
-/* If we were not able to refill all buffers, try to move around
- * the buffers that were not dispatched.
- */
-static inline uint32_t
-lio_droq_refill_pullup_descs(struct lio_droq *droq,
- struct lio_droq_desc *desc_ring)
-{
- uint32_t refill_index = droq->refill_idx;
- uint32_t desc_refilled = 0;
-
- while (refill_index != droq->read_idx) {
- if (droq->recv_buf_list[refill_index].buffer) {
- droq->recv_buf_list[droq->refill_idx].buffer =
- droq->recv_buf_list[refill_index].buffer;
- desc_ring[droq->refill_idx].buffer_ptr =
- desc_ring[refill_index].buffer_ptr;
- droq->recv_buf_list[refill_index].buffer = NULL;
- desc_ring[refill_index].buffer_ptr = 0;
- do {
- droq->refill_idx = lio_incr_index(
- droq->refill_idx, 1,
- droq->nb_desc);
- desc_refilled++;
- droq->refill_count--;
- } while (droq->recv_buf_list[droq->refill_idx].buffer);
- }
- refill_index = lio_incr_index(refill_index, 1,
- droq->nb_desc);
- } /* while */
-
- return desc_refilled;
-}
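/*
 * lio_incr_index() used above is defined in lio_rxtx.h (not shown in this
 * hunk); a minimal sketch of the wrap-around ring-index advance it is assumed
 * to perform, valid for counts no larger than the ring size. The real helper
 * may differ in detail.
 */
static inline uint32_t
lio_incr_index_sketch(uint32_t index, uint32_t count, uint32_t ring_size)
{
	return (index + count) % ring_size;
}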
-
-/* lio_droq_refill
- *
- * @param droq - droq in which descriptors require new buffers.
- *
- * Description:
- * Called during normal DROQ processing in interrupt mode or by the poll
- * thread to refill the descriptors from which buffers were dispatched
- * to upper layers. Attempts to allocate new buffers. If that fails, moves
- * up buffers (that were not dispatched) to form a contiguous ring.
- *
- * Returns:
- * No of descriptors refilled.
- *
- * Locks:
- * This routine is called with droq->lock held.
- */
-static uint32_t
-lio_droq_refill(struct lio_droq *droq)
-{
- struct lio_droq_desc *desc_ring;
- uint32_t desc_refilled = 0;
- void *buf = NULL;
-
- desc_ring = droq->desc_ring;
-
- while (droq->refill_count && (desc_refilled < droq->nb_desc)) {
- /* If a valid buffer exists (happens if there is no dispatch),
- * reuse the buffer, else allocate.
- */
- if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
- buf = rte_pktmbuf_alloc(droq->mpool);
- /* If a buffer could not be allocated, no point in
- * continuing
- */
- if (buf == NULL) {
- droq->stats.rx_alloc_failure++;
- break;
- }
-
- droq->recv_buf_list[droq->refill_idx].buffer = buf;
- }
-
- desc_ring[droq->refill_idx].buffer_ptr =
- lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
- /* Reset any previous values in the length field. */
- droq->info_list[droq->refill_idx].length = 0;
-
- droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
- droq->nb_desc);
- desc_refilled++;
- droq->refill_count--;
- }
-
- if (droq->refill_count)
- desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
-
- /* If droq->refill_count is still non-zero here, pass two could not
- * change it: we only moved buffers to close the gap in the ring, so
- * the same number of buffers remains to be refilled.
- */
- return desc_refilled;
-}
-
-static int
-lio_droq_fast_process_packet(struct lio_device *lio_dev,
- struct lio_droq *droq,
- struct rte_mbuf **rx_pkts)
-{
- struct rte_mbuf *nicbuf = NULL;
- struct lio_droq_info *info;
- uint32_t total_len = 0;
- int data_total_len = 0;
- uint32_t pkt_len = 0;
- union octeon_rh *rh;
- int data_pkts = 0;
-
- info = &droq->info_list[droq->read_idx];
- lio_swap_8B_data((uint64_t *)info, 2);
-
- if (!info->length)
- return -1;
-
- /* Len of resp hdr is included in the received data len. */
- info->length -= OCTEON_RH_SIZE;
- rh = &info->rh;
-
- total_len += (uint32_t)info->length;
-
- if (lio_opcode_slow_path(rh)) {
- uint32_t buf_cnt;
-
- buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
- (uint32_t)info->length);
- droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
- droq->nb_desc);
- droq->refill_count += buf_cnt;
- } else {
- if (info->length <= droq->buffer_size) {
- if (rh->r_dh.has_hash)
- pkt_len = (uint32_t)(info->length - 8);
- else
- pkt_len = (uint32_t)info->length;
-
- nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
- droq->recv_buf_list[droq->read_idx].buffer = NULL;
- droq->read_idx = lio_incr_index(
- droq->read_idx, 1,
- droq->nb_desc);
- droq->refill_count++;
-
- if (likely(nicbuf != NULL)) {
- /* We don't have a way to pass flags yet */
- nicbuf->ol_flags = 0;
- if (rh->r_dh.has_hash) {
- uint64_t *hash_ptr;
-
- nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
- hash_ptr = rte_pktmbuf_mtod(nicbuf,
- uint64_t *);
- lio_swap_8B_data(hash_ptr, 1);
- nicbuf->hash.rss = (uint32_t)*hash_ptr;
- nicbuf->data_off += 8;
- }
-
- nicbuf->pkt_len = pkt_len;
- nicbuf->data_len = pkt_len;
- nicbuf->port = lio_dev->port_id;
- /* Store the mbuf */
- rx_pkts[data_pkts++] = nicbuf;
- data_total_len += pkt_len;
- }
-
- /* Prefetch buffer pointers when on a cache line
- * boundary
- */
- if ((droq->read_idx & 3) == 0) {
- rte_prefetch0(
- &droq->recv_buf_list[droq->read_idx]);
- rte_prefetch0(
- &droq->info_list[droq->read_idx]);
- }
- } else {
- struct rte_mbuf *first_buf = NULL;
- struct rte_mbuf *last_buf = NULL;
-
- while (pkt_len < info->length) {
- int cpy_len = 0;
-
- cpy_len = ((pkt_len + droq->buffer_size) >
- info->length)
- ? ((uint32_t)info->length -
- pkt_len)
- : droq->buffer_size;
-
- nicbuf =
- droq->recv_buf_list[droq->read_idx].buffer;
- droq->recv_buf_list[droq->read_idx].buffer =
- NULL;
-
- if (likely(nicbuf != NULL)) {
- /* Note the first seg */
- if (!pkt_len)
- first_buf = nicbuf;
-
- nicbuf->port = lio_dev->port_id;
- /* We don't have a way to pass
- * flags yet
- */
- nicbuf->ol_flags = 0;
- if ((!pkt_len) && (rh->r_dh.has_hash)) {
- uint64_t *hash_ptr;
-
- nicbuf->ol_flags |=
- RTE_MBUF_F_RX_RSS_HASH;
- hash_ptr = rte_pktmbuf_mtod(
- nicbuf, uint64_t *);
- lio_swap_8B_data(hash_ptr, 1);
- nicbuf->hash.rss =
- (uint32_t)*hash_ptr;
- nicbuf->data_off += 8;
- nicbuf->pkt_len = cpy_len - 8;
- nicbuf->data_len = cpy_len - 8;
- } else {
- nicbuf->pkt_len = cpy_len;
- nicbuf->data_len = cpy_len;
- }
-
- if (pkt_len)
- first_buf->nb_segs++;
-
- if (last_buf)
- last_buf->next = nicbuf;
-
- last_buf = nicbuf;
- } else {
- PMD_RX_LOG(lio_dev, ERR, "no buf\n");
- }
-
- pkt_len += cpy_len;
- droq->read_idx = lio_incr_index(
- droq->read_idx,
- 1, droq->nb_desc);
- droq->refill_count++;
-
- /* Prefetch buffer pointers when on a
- * cache line boundary
- */
- if ((droq->read_idx & 3) == 0) {
- rte_prefetch0(&droq->recv_buf_list
- [droq->read_idx]);
-
- rte_prefetch0(
- &droq->info_list[droq->read_idx]);
- }
- }
- rx_pkts[data_pkts++] = first_buf;
- if (rh->r_dh.has_hash)
- data_total_len += (pkt_len - 8);
- else
- data_total_len += pkt_len;
- }
-
- /* Inform upper layer about packet checksum verification */
- struct rte_mbuf *m = rx_pkts[data_pkts - 1];
-
- if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
- m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
- if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
- m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
-
- if (droq->refill_count >= droq->refill_threshold) {
- int desc_refilled = lio_droq_refill(droq);
-
- /* Flush the droq descriptor data to memory to be sure
- * that when we update the credits the data in memory is
- * accurate.
- */
- rte_wmb();
- rte_write32(desc_refilled, droq->pkts_credit_reg);
- /* make sure mmio write completes */
- rte_wmb();
- }
-
- info->length = 0;
- info->rh.rh64 = 0;
-
- droq->stats.pkts_received++;
- droq->stats.rx_pkts_received += data_pkts;
- droq->stats.rx_bytes_received += data_total_len;
- droq->stats.bytes_received += total_len;
-
- return data_pkts;
-}
-
-static uint32_t
-lio_droq_fast_process_packets(struct lio_device *lio_dev,
- struct lio_droq *droq,
- struct rte_mbuf **rx_pkts,
- uint32_t pkts_to_process)
-{
- int ret, data_pkts = 0;
- uint32_t pkt;
-
- for (pkt = 0; pkt < pkts_to_process; pkt++) {
- ret = lio_droq_fast_process_packet(lio_dev, droq,
- &rx_pkts[data_pkts]);
- if (ret < 0) {
- lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
- lio_dev->port_id, droq->q_no,
- droq->read_idx, pkts_to_process);
- break;
- }
- data_pkts += ret;
- }
-
- rte_atomic64_sub(&droq->pkts_pending, pkt);
-
- return data_pkts;
-}
-
-static inline uint32_t
-lio_droq_check_hw_for_pkts(struct lio_droq *droq)
-{
- uint32_t last_count;
- uint32_t pkt_count;
-
- pkt_count = rte_read32(droq->pkts_sent_reg);
-
- last_count = pkt_count - droq->pkt_count;
- droq->pkt_count = pkt_count;
-
- if (last_count)
- rte_atomic64_add(&droq->pkts_pending, last_count);
-
- return last_count;
-}
-
-uint16_t
-lio_dev_recv_pkts(void *rx_queue,
- struct rte_mbuf **rx_pkts,
- uint16_t budget)
-{
- struct lio_droq *droq = rx_queue;
- struct lio_device *lio_dev = droq->lio_dev;
- uint32_t pkts_processed = 0;
- uint32_t pkt_count = 0;
-
- lio_droq_check_hw_for_pkts(droq);
-
- pkt_count = rte_atomic64_read(&droq->pkts_pending);
- if (!pkt_count)
- return 0;
-
- if (pkt_count > budget)
- pkt_count = budget;
-
- /* Grab the lock */
- rte_spinlock_lock(&droq->lock);
- pkts_processed = lio_droq_fast_process_packets(lio_dev,
- droq, rx_pkts,
- pkt_count);
-
- if (droq->pkt_count) {
- rte_write32(droq->pkt_count, droq->pkts_sent_reg);
- droq->pkt_count = 0;
- }
-
- /* Release the spin lock */
- rte_spinlock_unlock(&droq->lock);
-
- return pkts_processed;
-}
-
-void
-lio_delete_droq_queue(struct lio_device *lio_dev,
- int oq_no)
-{
- lio_delete_droq(lio_dev, oq_no);
- lio_dev->num_oqs--;
- rte_free(lio_dev->droq[oq_no]);
- lio_dev->droq[oq_no] = NULL;
-}
-
-/**
- * lio_init_instr_queue()
- * @param lio_dev - pointer to the lio device structure.
- * @param txpciq - queue to be initialized.
- *
- * Called at driver init time for each input queue. The queue
- * configuration parameters come from the device configuration.
- *
- * @return Success: 0 Failure: -1
- */
-static int
-lio_init_instr_queue(struct lio_device *lio_dev,
- union octeon_txpciq txpciq,
- uint32_t num_descs, unsigned int socket_id)
-{
- uint32_t iq_no = (uint32_t)txpciq.s.q_no;
- struct lio_instr_queue *iq;
- uint32_t instr_type;
- uint32_t q_size;
-
- instr_type = LIO_IQ_INSTR_TYPE(lio_dev);
-
- q_size = instr_type * num_descs;
- iq = lio_dev->instr_queue[iq_no];
- iq->iq_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
- "instr_queue", iq_no, q_size,
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (iq->iq_mz == NULL) {
- lio_dev_err(lio_dev, "Cannot allocate memory for instr queue %d\n",
- iq_no);
- return -1;
- }
-
- iq->base_addr_dma = iq->iq_mz->iova;
- iq->base_addr = (uint8_t *)iq->iq_mz->addr;
-
- iq->nb_desc = num_descs;
-
- /* Initialize a list to hold requests that have been posted to Octeon
- * but have not yet been fetched by it
- */
- iq->request_list = rte_zmalloc_socket("request_list",
- sizeof(*iq->request_list) *
- num_descs,
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (iq->request_list == NULL) {
- lio_dev_err(lio_dev, "Alloc failed for IQ[%d] nr free list\n",
- iq_no);
- lio_dma_zone_free(lio_dev, iq->iq_mz);
- return -1;
- }
-
- lio_dev_dbg(lio_dev, "IQ[%d]: base: %p basedma: %lx count: %d\n",
- iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
- iq->nb_desc);
-
- iq->lio_dev = lio_dev;
- iq->txpciq.txpciq64 = txpciq.txpciq64;
- iq->fill_cnt = 0;
- iq->host_write_index = 0;
- iq->lio_read_index = 0;
- iq->flush_index = 0;
-
- rte_atomic64_set(&iq->instr_pending, 0);
-
- /* Initialize the spinlock for this instruction queue */
- rte_spinlock_init(&iq->lock);
- rte_spinlock_init(&iq->post_lock);
-
- rte_atomic64_clear(&iq->iq_flush_running);
-
- lio_dev->io_qmask.iq |= (1ULL << iq_no);
-
- /* Set the 32B/64B mode for each input queue */
- lio_dev->io_qmask.iq64B |= ((instr_type == 64) << iq_no);
- iq->iqcmd_64B = (instr_type == 64);
-
- lio_dev->fn_list.setup_iq_regs(lio_dev, iq_no);
-
- return 0;
-}
-
-int
-lio_setup_instr_queue0(struct lio_device *lio_dev)
-{
- union octeon_txpciq txpciq;
- uint32_t num_descs = 0;
- uint32_t iq_no = 0;
-
- num_descs = LIO_NUM_DEF_TX_DESCS_CFG(lio_dev);
-
- lio_dev->num_iqs = 0;
-
- lio_dev->instr_queue[0] = rte_zmalloc(NULL,
- sizeof(struct lio_instr_queue), 0);
- if (lio_dev->instr_queue[0] == NULL)
- return -ENOMEM;
-
- lio_dev->instr_queue[0]->q_index = 0;
- lio_dev->instr_queue[0]->app_ctx = (void *)(size_t)0;
- txpciq.txpciq64 = 0;
- txpciq.s.q_no = iq_no;
- txpciq.s.pkind = lio_dev->pfvf_hsword.pkind;
- txpciq.s.use_qpg = 0;
- txpciq.s.qpg = 0;
- if (lio_init_instr_queue(lio_dev, txpciq, num_descs, SOCKET_ID_ANY)) {
- rte_free(lio_dev->instr_queue[0]);
- lio_dev->instr_queue[0] = NULL;
- return -1;
- }
-
- lio_dev->num_iqs++;
-
- return 0;
-}
-
-/**
- * lio_delete_instr_queue()
- * @param lio_dev - pointer to the lio device structure.
- * @param iq_no - queue to be deleted.
- *
- * Called at driver unload time for each input queue. Deletes all
- * allocated resources for the input queue.
- */
-static void
-lio_delete_instr_queue(struct lio_device *lio_dev, uint32_t iq_no)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
-
- rte_free(iq->request_list);
- iq->request_list = NULL;
- lio_dma_zone_free(lio_dev, iq->iq_mz);
-}
-
-void
-lio_free_instr_queue0(struct lio_device *lio_dev)
-{
- lio_delete_instr_queue(lio_dev, 0);
- rte_free(lio_dev->instr_queue[0]);
- lio_dev->instr_queue[0] = NULL;
- lio_dev->num_iqs--;
-}
-
-/* Return 0 on success, -1 on failure */
-int
-lio_setup_iq(struct lio_device *lio_dev, int q_index,
- union octeon_txpciq txpciq, uint32_t num_descs, void *app_ctx,
- unsigned int socket_id)
-{
- uint32_t iq_no = (uint32_t)txpciq.s.q_no;
-
- lio_dev->instr_queue[iq_no] = rte_zmalloc_socket("ethdev TX queue",
- sizeof(struct lio_instr_queue),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (lio_dev->instr_queue[iq_no] == NULL)
- return -1;
-
- lio_dev->instr_queue[iq_no]->q_index = q_index;
- lio_dev->instr_queue[iq_no]->app_ctx = app_ctx;
-
- if (lio_init_instr_queue(lio_dev, txpciq, num_descs, socket_id)) {
- rte_free(lio_dev->instr_queue[iq_no]);
- lio_dev->instr_queue[iq_no] = NULL;
- return -1;
- }
-
- lio_dev->num_iqs++;
-
- return 0;
-}
-
-int
-lio_wait_for_instr_fetch(struct lio_device *lio_dev)
-{
- int pending, instr_cnt;
- int i, retry = 1000;
-
- do {
- instr_cnt = 0;
-
- for (i = 0; i < LIO_MAX_INSTR_QUEUES(lio_dev); i++) {
- if (!(lio_dev->io_qmask.iq & (1ULL << i)))
- continue;
-
- if (lio_dev->instr_queue[i] == NULL)
- break;
-
- pending = rte_atomic64_read(
- &lio_dev->instr_queue[i]->instr_pending);
- if (pending)
- lio_flush_iq(lio_dev, lio_dev->instr_queue[i]);
-
- instr_cnt += pending;
- }
-
- if (instr_cnt == 0)
- break;
-
- rte_delay_ms(1);
-
- } while (retry-- && instr_cnt);
-
- return instr_cnt;
-}
-
-static inline void
-lio_ring_doorbell(struct lio_device *lio_dev,
- struct lio_instr_queue *iq)
-{
- if (rte_atomic64_read(&lio_dev->status) == LIO_DEV_RUNNING) {
- rte_write32(iq->fill_cnt, iq->doorbell_reg);
- /* make sure doorbell write goes through */
- rte_wmb();
- iq->fill_cnt = 0;
- }
-}
-
-static inline void
-copy_cmd_into_iq(struct lio_instr_queue *iq, uint8_t *cmd)
-{
- uint8_t *iqptr, cmdsize;
-
- cmdsize = ((iq->iqcmd_64B) ? 64 : 32);
- iqptr = iq->base_addr + (cmdsize * iq->host_write_index);
-
- rte_memcpy(iqptr, cmd, cmdsize);
-}
-
-static inline struct lio_iq_post_status
-post_command2(struct lio_instr_queue *iq, uint8_t *cmd)
-{
- struct lio_iq_post_status st;
-
- st.status = LIO_IQ_SEND_OK;
-
- /* This ensures that the read index does not wrap around to the same
- * position if the queue gets full before Octeon can fetch any instr.
- */
- if (rte_atomic64_read(&iq->instr_pending) >=
- (int32_t)(iq->nb_desc - 1)) {
- st.status = LIO_IQ_SEND_FAILED;
- st.index = -1;
- return st;
- }
-
- if (rte_atomic64_read(&iq->instr_pending) >=
- (int32_t)(iq->nb_desc - 2))
- st.status = LIO_IQ_SEND_STOP;
-
- copy_cmd_into_iq(iq, cmd);
-
- /* "index" is returned, host_write_index is modified. */
- st.index = iq->host_write_index;
- iq->host_write_index = lio_incr_index(iq->host_write_index, 1,
- iq->nb_desc);
- iq->fill_cnt++;
-
- /* Flush the command into memory. We need to be sure the data is in
- * memory before indicating that the instruction is pending.
- */
- rte_wmb();
-
- rte_atomic64_inc(&iq->instr_pending);
-
- return st;
-}
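/*
 * Illustration only: the check at the top of post_command2() is the classic
 * "leave one slot empty" ring-buffer full test, so the write index can never
 * catch up with the read index while Octeon still has instructions to fetch.
 */
static inline int
lio_iq_is_full_sketch(int64_t instr_pending, uint32_t nb_desc)
{
	return instr_pending >= (int64_t)(nb_desc - 1);
}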
-
-static inline void
-lio_add_to_request_list(struct lio_instr_queue *iq,
- int idx, void *buf, int reqtype)
-{
- iq->request_list[idx].buf = buf;
- iq->request_list[idx].reqtype = reqtype;
-}
-
-static inline void
-lio_free_netsgbuf(void *buf)
-{
- struct lio_buf_free_info *finfo = buf;
- struct lio_device *lio_dev = finfo->lio_dev;
- struct rte_mbuf *m = finfo->mbuf;
- struct lio_gather *g = finfo->g;
- uint8_t iq = finfo->iq_no;
-
- /* This will take care of multiple segments also */
- rte_pktmbuf_free(m);
-
- rte_spinlock_lock(&lio_dev->glist_lock[iq]);
- STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq], &g->list, entries);
- rte_spinlock_unlock(&lio_dev->glist_lock[iq]);
- rte_free(finfo);
-}
-
-/* Can only run in process context */
-static int
-lio_process_iq_request_list(struct lio_device *lio_dev,
- struct lio_instr_queue *iq)
-{
- struct octeon_instr_irh *irh = NULL;
- uint32_t old = iq->flush_index;
- struct lio_soft_command *sc;
- uint32_t inst_count = 0;
- int reqtype;
- void *buf;
-
- while (old != iq->lio_read_index) {
- reqtype = iq->request_list[old].reqtype;
- buf = iq->request_list[old].buf;
-
- if (reqtype == LIO_REQTYPE_NONE)
- goto skip_this;
-
- switch (reqtype) {
- case LIO_REQTYPE_NORESP_NET:
- rte_pktmbuf_free((struct rte_mbuf *)buf);
- break;
- case LIO_REQTYPE_NORESP_NET_SG:
- lio_free_netsgbuf(buf);
- break;
- case LIO_REQTYPE_SOFT_COMMAND:
- sc = buf;
- irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
- if (irh->rflag) {
- /* We're expecting a response from Octeon.
- * Add sc to the ordered soft command
- * response list; lio_process_ordered_list()
- * will process it when the response
- * arrives.
- */
- rte_spinlock_lock(&lio_dev->response_list.lock);
- rte_atomic64_inc(
- &lio_dev->response_list.pending_req_count);
- STAILQ_INSERT_TAIL(
- &lio_dev->response_list.head,
- &sc->node, entries);
- rte_spinlock_unlock(
- &lio_dev->response_list.lock);
- } else {
- if (sc->callback) {
- /* This callback must not sleep */
- sc->callback(LIO_REQUEST_DONE,
- sc->callback_arg);
- }
- }
- break;
- default:
- lio_dev_err(lio_dev,
- "Unknown reqtype: %d buf: %p at idx %d\n",
- reqtype, buf, old);
- }
-
- iq->request_list[old].buf = NULL;
- iq->request_list[old].reqtype = 0;
-
-skip_this:
- inst_count++;
- old = lio_incr_index(old, 1, iq->nb_desc);
- }
-
- iq->flush_index = old;
-
- return inst_count;
-}
-
-static void
-lio_update_read_index(struct lio_instr_queue *iq)
-{
- uint32_t pkt_in_done = rte_read32(iq->inst_cnt_reg);
- uint32_t last_done;
-
- last_done = pkt_in_done - iq->pkt_in_done;
- iq->pkt_in_done = pkt_in_done;
-
- /* Add last_done and take the result modulo the IQ size to get the new index */
- iq->lio_read_index = (iq->lio_read_index +
- (uint32_t)(last_done & LIO_PKT_IN_DONE_CNT_MASK)) %
- iq->nb_desc;
-}
-
-int
-lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq)
-{
- uint32_t inst_processed = 0;
- int tx_done = 1;
-
- if (rte_atomic64_test_and_set(&iq->iq_flush_running) == 0)
- return tx_done;
-
- rte_spinlock_lock(&iq->lock);
-
- lio_update_read_index(iq);
-
- do {
- /* Process any outstanding IQ packets. */
- if (iq->flush_index == iq->lio_read_index)
- break;
-
- inst_processed = lio_process_iq_request_list(lio_dev, iq);
-
- if (inst_processed) {
- rte_atomic64_sub(&iq->instr_pending, inst_processed);
- iq->stats.instr_processed += inst_processed;
- }
-
- inst_processed = 0;
-
- } while (1);
-
- rte_spinlock_unlock(&iq->lock);
-
- rte_atomic64_clear(&iq->iq_flush_running);
-
- return tx_done;
-}
-
-static int
-lio_send_command(struct lio_device *lio_dev, uint32_t iq_no, void *cmd,
- void *buf, uint32_t datasize, uint32_t reqtype)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
- struct lio_iq_post_status st;
-
- rte_spinlock_lock(&iq->post_lock);
-
- st = post_command2(iq, cmd);
-
- if (st.status != LIO_IQ_SEND_FAILED) {
- lio_add_to_request_list(iq, st.index, buf, reqtype);
- LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, bytes_sent,
- datasize);
- LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_posted, 1);
-
- lio_ring_doorbell(lio_dev, iq);
- } else {
- LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_dropped, 1);
- }
-
- rte_spinlock_unlock(&iq->post_lock);
-
- return st.status;
-}
-
-void
-lio_prepare_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc, uint8_t opcode,
- uint8_t subcode, uint32_t irh_ossp, uint64_t ossp0,
- uint64_t ossp1)
-{
- struct octeon_instr_pki_ih3 *pki_ih3;
- struct octeon_instr_ih3 *ih3;
- struct octeon_instr_irh *irh;
- struct octeon_instr_rdp *rdp;
-
- RTE_ASSERT(opcode <= 15);
- RTE_ASSERT(subcode <= 127);
-
- ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
-
- ih3->pkind = lio_dev->instr_queue[sc->iq_no]->txpciq.s.pkind;
-
- pki_ih3 = (struct octeon_instr_pki_ih3 *)&sc->cmd.cmd3.pki_ih3;
-
- pki_ih3->w = 1;
- pki_ih3->raw = 1;
- pki_ih3->utag = 1;
- pki_ih3->uqpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.use_qpg;
- pki_ih3->utt = 1;
-
- pki_ih3->tag = LIO_CONTROL;
- pki_ih3->tagtype = OCTEON_ATOMIC_TAG;
- pki_ih3->qpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.qpg;
- pki_ih3->pm = 0x7;
- pki_ih3->sl = 8;
-
- if (sc->datasize)
- ih3->dlengsz = sc->datasize;
-
- irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
- irh->opcode = opcode;
- irh->subcode = subcode;
-
- /* opcode/subcode specific parameters (ossp) */
- irh->ossp = irh_ossp;
- sc->cmd.cmd3.ossp[0] = ossp0;
- sc->cmd.cmd3.ossp[1] = ossp1;
-
- if (sc->rdatasize) {
- rdp = (struct octeon_instr_rdp *)&sc->cmd.cmd3.rdp;
- rdp->pcie_port = lio_dev->pcie_port;
- rdp->rlen = sc->rdatasize;
- irh->rflag = 1;
- /* PKI IH3 */
- ih3->fsz = OCTEON_SOFT_CMD_RESP_IH3;
- } else {
- irh->rflag = 0;
- /* PKI IH3 */
- ih3->fsz = OCTEON_PCI_CMD_O3;
- }
-}
-
-int
-lio_send_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc)
-{
- struct octeon_instr_ih3 *ih3;
- struct octeon_instr_irh *irh;
- uint32_t len = 0;
-
- ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
- if (ih3->dlengsz) {
- RTE_ASSERT(sc->dmadptr);
- sc->cmd.cmd3.dptr = sc->dmadptr;
- }
-
- irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
- if (irh->rflag) {
- RTE_ASSERT(sc->dmarptr);
- RTE_ASSERT(sc->status_word != NULL);
- *sc->status_word = LIO_COMPLETION_WORD_INIT;
- sc->cmd.cmd3.rptr = sc->dmarptr;
- }
-
- len = (uint32_t)ih3->dlengsz;
-
- if (sc->wait_time)
- sc->timeout = lio_uptime + sc->wait_time;
-
- return lio_send_command(lio_dev, sc->iq_no, &sc->cmd, sc, len,
- LIO_REQTYPE_SOFT_COMMAND);
-}
-
-int
-lio_setup_sc_buffer_pool(struct lio_device *lio_dev)
-{
- char sc_pool_name[RTE_MEMPOOL_NAMESIZE];
- uint16_t buf_size;
-
- buf_size = LIO_SOFT_COMMAND_BUFFER_SIZE + RTE_PKTMBUF_HEADROOM;
- snprintf(sc_pool_name, sizeof(sc_pool_name),
- "lio_sc_pool_%u", lio_dev->port_id);
- lio_dev->sc_buf_pool = rte_pktmbuf_pool_create(sc_pool_name,
- LIO_MAX_SOFT_COMMAND_BUFFERS,
- 0, 0, buf_size, SOCKET_ID_ANY);
- if (lio_dev->sc_buf_pool == NULL)
- return -ENOMEM;
-
- return 0;
-}
-
-void
-lio_free_sc_buffer_pool(struct lio_device *lio_dev)
-{
- rte_mempool_free(lio_dev->sc_buf_pool);
-}
-
-struct lio_soft_command *
-lio_alloc_soft_command(struct lio_device *lio_dev, uint32_t datasize,
- uint32_t rdatasize, uint32_t ctxsize)
-{
- uint32_t offset = sizeof(struct lio_soft_command);
- struct lio_soft_command *sc;
- struct rte_mbuf *m;
- uint64_t dma_addr;
-
- RTE_ASSERT((offset + datasize + rdatasize + ctxsize) <=
- LIO_SOFT_COMMAND_BUFFER_SIZE);
-
- m = rte_pktmbuf_alloc(lio_dev->sc_buf_pool);
- if (m == NULL) {
- lio_dev_err(lio_dev, "Cannot allocate mbuf for sc\n");
- return NULL;
- }
-
- /* set rte_mbuf data size and there is only 1 segment */
- m->pkt_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
- m->data_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
-
- /* use rte_mbuf buffer for soft command */
- sc = rte_pktmbuf_mtod(m, struct lio_soft_command *);
- memset(sc, 0, LIO_SOFT_COMMAND_BUFFER_SIZE);
- sc->size = LIO_SOFT_COMMAND_BUFFER_SIZE;
- sc->dma_addr = rte_mbuf_data_iova(m);
- sc->mbuf = m;
-
- dma_addr = sc->dma_addr;
-
- if (ctxsize) {
- sc->ctxptr = (uint8_t *)sc + offset;
- sc->ctxsize = ctxsize;
- }
-
- /* Start data at 128 byte boundary */
- offset = (offset + ctxsize + 127) & 0xffffff80;
-
- if (datasize) {
- sc->virtdptr = (uint8_t *)sc + offset;
- sc->dmadptr = dma_addr + offset;
- sc->datasize = datasize;
- }
-
- /* Start rdata at 128 byte boundary */
- offset = (offset + datasize + 127) & 0xffffff80;
-
- if (rdatasize) {
- RTE_ASSERT(rdatasize >= 16);
- sc->virtrptr = (uint8_t *)sc + offset;
- sc->dmarptr = dma_addr + offset;
- sc->rdatasize = rdatasize;
- sc->status_word = (uint64_t *)((uint8_t *)(sc->virtrptr) +
- rdatasize - 8);
- }
-
- return sc;
-}
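/*
 * Note (illustration): the offset arithmetic above is the usual power-of-two
 * align-up; 0xffffff80 clears the low 7 bits, so each region starts on a
 * 128-byte boundary.
 */
static inline uint32_t
lio_align128_sketch(uint32_t offset)
{
	return (offset + 127U) & ~127U;
}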
-
-void
-lio_free_soft_command(struct lio_soft_command *sc)
-{
- rte_pktmbuf_free(sc->mbuf);
-}
-
-void
-lio_setup_response_list(struct lio_device *lio_dev)
-{
- STAILQ_INIT(&lio_dev->response_list.head);
- rte_spinlock_init(&lio_dev->response_list.lock);
- rte_atomic64_set(&lio_dev->response_list.pending_req_count, 0);
-}
-
-int
-lio_process_ordered_list(struct lio_device *lio_dev)
-{
- int resp_to_process = LIO_MAX_ORD_REQS_TO_PROCESS;
- struct lio_response_list *ordered_sc_list;
- struct lio_soft_command *sc;
- int request_complete = 0;
- uint64_t status64;
- uint32_t status;
-
- ordered_sc_list = &lio_dev->response_list;
-
- do {
- rte_spinlock_lock(&ordered_sc_list->lock);
-
- if (STAILQ_EMPTY(&ordered_sc_list->head)) {
- /* ordered_sc_list is empty; there is
- * nothing to process
- */
- rte_spinlock_unlock(&ordered_sc_list->lock);
- return -1;
- }
-
- sc = LIO_STQUEUE_FIRST_ENTRY(&ordered_sc_list->head,
- struct lio_soft_command, node);
-
- status = LIO_REQUEST_PENDING;
-
- /* check if octeon has finished DMA'ing a response
- * to where rptr is pointing
- */
- status64 = *sc->status_word;
-
- if (status64 != LIO_COMPLETION_WORD_INIT) {
- /* This logic ensures that all 64b have been written.
- * 1. check byte 0 for non-FF
- * 2. if non-FF, then swap result from BE to host order
- * 3. check byte 7 (swapped to 0) for non-FF
- * 4. if non-FF, use the low 32-bit status code
- * 5. if either byte 0 or byte 7 is FF, don't use status
- */
- if ((status64 & 0xff) != 0xff) {
- lio_swap_8B_data(&status64, 1);
- if (((status64 & 0xff) != 0xff)) {
- /* retrieve 16-bit firmware status */
- status = (uint32_t)(status64 &
- 0xffffULL);
- if (status) {
- status =
- LIO_FIRMWARE_STATUS_CODE(
- status);
- } else {
- /* i.e. no error */
- status = LIO_REQUEST_DONE;
- }
- }
- }
- } else if ((sc->timeout && lio_check_timeout(lio_uptime,
- sc->timeout))) {
- lio_dev_err(lio_dev,
- "cmd failed, timeout (%ld, %ld)\n",
- (long)lio_uptime, (long)sc->timeout);
- status = LIO_REQUEST_TIMEOUT;
- }
-
- if (status != LIO_REQUEST_PENDING) {
- /* we have received a response or we have timed out.
- * remove node from linked list
- */
- STAILQ_REMOVE(&ordered_sc_list->head,
- &sc->node, lio_stailq_node, entries);
- rte_atomic64_dec(
- &lio_dev->response_list.pending_req_count);
- rte_spinlock_unlock(&ordered_sc_list->lock);
-
- if (sc->callback)
- sc->callback(status, sc->callback_arg);
-
- request_complete++;
- } else {
- /* no response yet */
- request_complete = 0;
- rte_spinlock_unlock(&ordered_sc_list->lock);
- }
-
- /* If we hit the Max Ordered requests to process every loop,
- * we quit and let this function be invoked the next time
- * the poll thread runs to process the remaining requests.
- * This function can take up the entire CPU if there is
- * no upper limit to the requests processed.
- */
- if (request_complete >= resp_to_process)
- break;
- } while (request_complete);
-
- return 0;
-}
-
-static inline struct lio_stailq_node *
-list_delete_first_node(struct lio_stailq_head *head)
-{
- struct lio_stailq_node *node;
-
- if (STAILQ_EMPTY(head))
- node = NULL;
- else
- node = STAILQ_FIRST(head);
-
- if (node)
- STAILQ_REMOVE(head, node, lio_stailq_node, entries);
-
- return node;
-}
-
-void
-lio_delete_sglist(struct lio_instr_queue *txq)
-{
- struct lio_device *lio_dev = txq->lio_dev;
- int iq_no = txq->q_index;
- struct lio_gather *g;
-
- if (lio_dev->glist_head == NULL)
- return;
-
- do {
- g = (struct lio_gather *)list_delete_first_node(
- &lio_dev->glist_head[iq_no]);
- if (g) {
- if (g->sg)
- rte_free(
- (void *)((unsigned long)g->sg - g->adjust));
- rte_free(g);
- }
- } while (g);
-}
-
-/**
- * \brief Setup gather lists
- * @param lio per-network private data
- */
-int
-lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
- int fw_mapped_iq, int num_descs, unsigned int socket_id)
-{
- struct lio_gather *g;
- int i;
-
- rte_spinlock_init(&lio_dev->glist_lock[iq_no]);
-
- STAILQ_INIT(&lio_dev->glist_head[iq_no]);
-
- for (i = 0; i < num_descs; i++) {
- g = rte_zmalloc_socket(NULL, sizeof(*g), RTE_CACHE_LINE_SIZE,
- socket_id);
- if (g == NULL) {
- lio_dev_err(lio_dev,
- "lio_gather memory allocation failed for qno %d\n",
- iq_no);
- break;
- }
-
- g->sg_size =
- ((ROUNDUP4(LIO_MAX_SG) >> 2) * LIO_SG_ENTRY_SIZE);
-
- g->sg = rte_zmalloc_socket(NULL, g->sg_size + 8,
- RTE_CACHE_LINE_SIZE, socket_id);
- if (g->sg == NULL) {
- lio_dev_err(lio_dev,
- "sg list memory allocation failed for qno %d\n",
- iq_no);
- rte_free(g);
- break;
- }
-
- /* The gather component should be aligned on 64-bit boundary */
- if (((unsigned long)g->sg) & 7) {
- g->adjust = 8 - (((unsigned long)g->sg) & 7);
- g->sg =
- (struct lio_sg_entry *)((unsigned long)g->sg +
- g->adjust);
- }
-
- STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq_no], &g->list,
- entries);
- }
-
- if (i != num_descs) {
- lio_delete_sglist(lio_dev->instr_queue[fw_mapped_iq]);
- return -ENOMEM;
- }
-
- return 0;
-}
-
-void
-lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no)
-{
- lio_delete_instr_queue(lio_dev, iq_no);
- rte_free(lio_dev->instr_queue[iq_no]);
- lio_dev->instr_queue[iq_no] = NULL;
- lio_dev->num_iqs--;
-}
-
-static inline uint32_t
-lio_iq_get_available(struct lio_device *lio_dev, uint32_t q_no)
-{
- return ((lio_dev->instr_queue[q_no]->nb_desc - 1) -
- (uint32_t)rte_atomic64_read(
- &lio_dev->instr_queue[q_no]->instr_pending));
-}
-
-static inline int
-lio_iq_is_full(struct lio_device *lio_dev, uint32_t q_no)
-{
- return ((uint32_t)rte_atomic64_read(
- &lio_dev->instr_queue[q_no]->instr_pending) >=
- (lio_dev->instr_queue[q_no]->nb_desc - 2));
-}
-
-static int
-lio_dev_cleanup_iq(struct lio_device *lio_dev, int iq_no)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
- uint32_t count = 10000;
-
- while ((lio_iq_get_available(lio_dev, iq_no) < LIO_FLUSH_WM(iq)) &&
- --count)
- lio_flush_iq(lio_dev, iq);
-
- return count ? 0 : 1;
-}
-
-static void
-lio_ctrl_cmd_callback(uint32_t status __rte_unused, void *sc_ptr)
-{
- struct lio_soft_command *sc = sc_ptr;
- struct lio_dev_ctrl_cmd *ctrl_cmd;
- struct lio_ctrl_pkt *ctrl_pkt;
-
- ctrl_pkt = (struct lio_ctrl_pkt *)sc->ctxptr;
- ctrl_cmd = ctrl_pkt->ctrl_cmd;
- ctrl_cmd->cond = 1;
-
- lio_free_soft_command(sc);
-}
-
-static inline struct lio_soft_command *
-lio_alloc_ctrl_pkt_sc(struct lio_device *lio_dev,
- struct lio_ctrl_pkt *ctrl_pkt)
-{
- struct lio_soft_command *sc = NULL;
- uint32_t uddsize, datasize;
- uint32_t rdatasize;
- uint8_t *data;
-
- uddsize = (uint32_t)(ctrl_pkt->ncmd.s.more * 8);
-
- datasize = OCTEON_CMD_SIZE + uddsize;
- rdatasize = (ctrl_pkt->wait_time) ? 16 : 0;
-
- sc = lio_alloc_soft_command(lio_dev, datasize,
- rdatasize, sizeof(struct lio_ctrl_pkt));
- if (sc == NULL)
- return NULL;
-
- rte_memcpy(sc->ctxptr, ctrl_pkt, sizeof(struct lio_ctrl_pkt));
-
- data = (uint8_t *)sc->virtdptr;
-
- rte_memcpy(data, &ctrl_pkt->ncmd, OCTEON_CMD_SIZE);
-
- lio_swap_8B_data((uint64_t *)data, OCTEON_CMD_SIZE >> 3);
-
- if (uddsize) {
- /* Endian-Swap for UDD should have been done by caller. */
- rte_memcpy(data + OCTEON_CMD_SIZE, ctrl_pkt->udd, uddsize);
- }
-
- sc->iq_no = (uint32_t)ctrl_pkt->iq_no;
-
- lio_prepare_soft_command(lio_dev, sc,
- LIO_OPCODE, LIO_OPCODE_CMD,
- 0, 0, 0);
-
- sc->callback = lio_ctrl_cmd_callback;
- sc->callback_arg = sc;
- sc->wait_time = ctrl_pkt->wait_time;
-
- return sc;
-}
-
-int
-lio_send_ctrl_pkt(struct lio_device *lio_dev, struct lio_ctrl_pkt *ctrl_pkt)
-{
- struct lio_soft_command *sc = NULL;
- int retval;
-
- sc = lio_alloc_ctrl_pkt_sc(lio_dev, ctrl_pkt);
- if (sc == NULL) {
- lio_dev_err(lio_dev, "soft command allocation failed\n");
- return -1;
- }
-
- retval = lio_send_soft_command(lio_dev, sc);
- if (retval == LIO_IQ_SEND_FAILED) {
- lio_free_soft_command(sc);
- lio_dev_err(lio_dev, "Port: %d soft command: %d send failed status: %x\n",
- lio_dev->port_id, ctrl_pkt->ncmd.s.cmd, retval);
- return -1;
- }
-
- return retval;
-}
-
-/** Send data packet to the device
- * @param lio_dev - lio device pointer
- * @param ndata - control structure with queueing, and buffer information
- *
- * @returns LIO_IQ_SEND_FAILED if it failed to add to the input queue,
- * LIO_IQ_SEND_STOP if the queue should be stopped, and LIO_IQ_SEND_OK if it
- * sent okay.
- */
-static inline int
-lio_send_data_pkt(struct lio_device *lio_dev, struct lio_data_pkt *ndata)
-{
- return lio_send_command(lio_dev, ndata->q_no, &ndata->cmd,
- ndata->buf, ndata->datasize, ndata->reqtype);
-}
-
-uint16_t
-lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
-{
- struct lio_instr_queue *txq = tx_queue;
- union lio_cmd_setup cmdsetup;
- struct lio_device *lio_dev;
- struct lio_iq_stats *stats;
- struct lio_data_pkt ndata;
- int i, processed = 0;
- struct rte_mbuf *m;
- uint32_t tag = 0;
- int status = 0;
- int iq_no;
-
- lio_dev = txq->lio_dev;
- iq_no = txq->txpciq.s.q_no;
- stats = &lio_dev->instr_queue[iq_no]->stats;
-
- if (!lio_dev->intf_open || !lio_dev->linfo.link.s.link_up) {
- PMD_TX_LOG(lio_dev, ERR, "Transmit failed link_status : %d\n",
- lio_dev->linfo.link.s.link_up);
- goto xmit_failed;
- }
-
- lio_dev_cleanup_iq(lio_dev, iq_no);
-
- for (i = 0; i < nb_pkts; i++) {
- uint32_t pkt_len = 0;
-
- m = pkts[i];
-
- /* Prepare the attributes for the data to be passed to BASE. */
- memset(&ndata, 0, sizeof(struct lio_data_pkt));
-
- ndata.buf = m;
-
- ndata.q_no = iq_no;
- if (lio_iq_is_full(lio_dev, ndata.q_no)) {
- stats->tx_iq_busy++;
- if (lio_dev_cleanup_iq(lio_dev, iq_no)) {
- PMD_TX_LOG(lio_dev, ERR,
- "Transmit failed iq:%d full\n",
- ndata.q_no);
- break;
- }
- }
-
- cmdsetup.cmd_setup64 = 0;
- cmdsetup.s.iq_no = iq_no;
-
- /* check checksum offload flags to form cmd */
- if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
- cmdsetup.s.ip_csum = 1;
-
- if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
- cmdsetup.s.tnl_csum = 1;
- else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
- (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
- cmdsetup.s.transport_csum = 1;
-
- if (m->nb_segs == 1) {
- pkt_len = rte_pktmbuf_data_len(m);
- cmdsetup.s.u.datasize = pkt_len;
- lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
- &cmdsetup, tag);
- ndata.cmd.cmd3.dptr = rte_mbuf_data_iova(m);
- ndata.reqtype = LIO_REQTYPE_NORESP_NET;
- } else {
- struct lio_buf_free_info *finfo;
- struct lio_gather *g;
- rte_iova_t phyaddr;
- int i, frags;
-
- finfo = (struct lio_buf_free_info *)rte_malloc(NULL,
- sizeof(*finfo), 0);
- if (finfo == NULL) {
- PMD_TX_LOG(lio_dev, ERR,
- "free buffer alloc failed\n");
- goto xmit_failed;
- }
-
- rte_spinlock_lock(&lio_dev->glist_lock[iq_no]);
- g = (struct lio_gather *)list_delete_first_node(
- &lio_dev->glist_head[iq_no]);
- rte_spinlock_unlock(&lio_dev->glist_lock[iq_no]);
- if (g == NULL) {
- PMD_TX_LOG(lio_dev, ERR,
- "Transmit scatter gather: glist null!\n");
- goto xmit_failed;
- }
-
- cmdsetup.s.gather = 1;
- cmdsetup.s.u.gatherptrs = m->nb_segs;
- lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
- &cmdsetup, tag);
-
- memset(g->sg, 0, g->sg_size);
- g->sg[0].ptr[0] = rte_mbuf_data_iova(m);
- lio_add_sg_size(&g->sg[0], m->data_len, 0);
- pkt_len = m->data_len;
- finfo->mbuf = m;
-
- /* First seg taken care above */
- frags = m->nb_segs - 1;
- i = 1;
- m = m->next;
- while (frags--) {
- g->sg[(i >> 2)].ptr[(i & 3)] =
- rte_mbuf_data_iova(m);
- lio_add_sg_size(&g->sg[(i >> 2)],
- m->data_len, (i & 3));
- pkt_len += m->data_len;
- i++;
- m = m->next;
- }
-
- phyaddr = rte_mem_virt2iova(g->sg);
- if (phyaddr == RTE_BAD_IOVA) {
- PMD_TX_LOG(lio_dev, ERR, "bad phys addr\n");
- goto xmit_failed;
- }
-
- ndata.cmd.cmd3.dptr = phyaddr;
- ndata.reqtype = LIO_REQTYPE_NORESP_NET_SG;
-
- finfo->g = g;
- finfo->lio_dev = lio_dev;
- finfo->iq_no = (uint64_t)iq_no;
- ndata.buf = finfo;
- }
-
- ndata.datasize = pkt_len;
-
- status = lio_send_data_pkt(lio_dev, &ndata);
-
- if (unlikely(status == LIO_IQ_SEND_FAILED)) {
- PMD_TX_LOG(lio_dev, ERR, "send failed\n");
- break;
- }
-
- if (unlikely(status == LIO_IQ_SEND_STOP)) {
- PMD_TX_LOG(lio_dev, DEBUG, "iq full\n");
- /* create space as iq is full */
- lio_dev_cleanup_iq(lio_dev, iq_no);
- }
-
- stats->tx_done++;
- stats->tx_tot_bytes += pkt_len;
- processed++;
- }
-
-xmit_failed:
- stats->tx_dropped += (nb_pkts - processed);
-
- return processed;
-}
-
-void
-lio_dev_clear_queues(struct rte_eth_dev *eth_dev)
-{
- struct lio_instr_queue *txq;
- struct lio_droq *rxq;
- uint16_t i;
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- if (txq != NULL) {
- lio_dev_tx_queue_release(eth_dev, i);
- eth_dev->data->tx_queues[i] = NULL;
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
- if (rxq != NULL) {
- lio_dev_rx_queue_release(eth_dev, i);
- eth_dev->data->rx_queues[i] = NULL;
- }
- }
-}
diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
deleted file mode 100644
index d2a45104f0..0000000000
--- a/drivers/net/liquidio/lio_rxtx.h
+++ /dev/null
@@ -1,740 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_RXTX_H_
-#define _LIO_RXTX_H_
-
-#include <stdio.h>
-#include <stdint.h>
-
-#include <rte_spinlock.h>
-#include <rte_memory.h>
-
-#include "lio_struct.h"
-
-#ifndef ROUNDUP4
-#define ROUNDUP4(val) (((val) + 3) & 0xfffffffc)
-#endif
-
-#define LIO_STQUEUE_FIRST_ENTRY(ptr, type, elem) \
- (type *)((char *)((ptr)->stqh_first) - offsetof(type, elem))
-
-#define lio_check_timeout(cur_time, chk_time) ((cur_time) > (chk_time))
-
-#define lio_uptime \
- (size_t)(rte_get_timer_cycles() / rte_get_timer_hz())
-
-/** Descriptor format.
- * The descriptor ring is made of descriptors which have 2 64-bit values:
- * -# Physical (bus) address of the data buffer.
- * -# Physical (bus) address of a lio_droq_info structure.
- * The device DMA's incoming packets and its information at the address
- * given by these descriptor fields.
- */
-struct lio_droq_desc {
- /** The buffer pointer */
- uint64_t buffer_ptr;
-
- /** The Info pointer */
- uint64_t info_ptr;
-};
-
-#define LIO_DROQ_DESC_SIZE (sizeof(struct lio_droq_desc))
-
-/** Information about packet DMA'ed by Octeon.
- * The format of the information available at Info Pointer after Octeon
- * has posted a packet. Not all descriptors have valid information. Only
- * the Info field of the first descriptor for a packet has information
- * about the packet.
- */
-struct lio_droq_info {
- /** The Output Receive Header. */
- union octeon_rh rh;
-
- /** The Length of the packet. */
- uint64_t length;
-};
-
-#define LIO_DROQ_INFO_SIZE (sizeof(struct lio_droq_info))
-
-/** Pointer to data buffer.
- * Driver keeps a pointer to the data buffer that it made available to
- * the Octeon device. Since the descriptor ring keeps physical (bus)
- * addresses, this field is required for the driver to keep track of
- * the virtual address pointers.
- */
-struct lio_recv_buffer {
- /** Packet buffer, including meta data. */
- void *buffer;
-
- /** Data in the packet buffer. */
- uint8_t *data;
-
-};
-
-#define LIO_DROQ_RECVBUF_SIZE (sizeof(struct lio_recv_buffer))
-
-#define LIO_DROQ_SIZE (sizeof(struct lio_droq))
-
-#define LIO_IQ_SEND_OK 0
-#define LIO_IQ_SEND_STOP 1
-#define LIO_IQ_SEND_FAILED -1
-
-/* conditions */
-#define LIO_REQTYPE_NONE 0
-#define LIO_REQTYPE_NORESP_NET 1
-#define LIO_REQTYPE_NORESP_NET_SG 2
-#define LIO_REQTYPE_SOFT_COMMAND 3
-
-struct lio_request_list {
- uint32_t reqtype;
- void *buf;
-};
-
-/*---------------------- INSTRUCTION FORMAT ----------------------------*/
-
-struct lio_instr3_64B {
- /** Pointer where the input data is available. */
- uint64_t dptr;
-
- /** Instruction Header. */
- uint64_t ih3;
-
- /** Instruction Header. */
- uint64_t pki_ih3;
-
- /** Input Request Header. */
- uint64_t irh;
-
- /** opcode/subcode specific parameters */
- uint64_t ossp[2];
-
- /** Return Data Parameters */
- uint64_t rdp;
-
- /** Pointer where the response for a RAW mode packet will be written
- * by Octeon.
- */
- uint64_t rptr;
-
-};
-
-union lio_instr_64B {
- struct lio_instr3_64B cmd3;
-};
-
-/** The size of each buffer in soft command buffer pool */
-#define LIO_SOFT_COMMAND_BUFFER_SIZE 1536
-
-/** Maximum number of buffers to allocate into soft command buffer pool */
-#define LIO_MAX_SOFT_COMMAND_BUFFERS 255
-
-struct lio_soft_command {
- /** Soft command buffer info. */
- struct lio_stailq_node node;
- uint64_t dma_addr;
- uint32_t size;
-
- /** Command and return status */
- union lio_instr_64B cmd;
-
-#define LIO_COMPLETION_WORD_INIT 0xffffffffffffffffULL
- uint64_t *status_word;
-
- /** Data buffer info */
- void *virtdptr;
- uint64_t dmadptr;
- uint32_t datasize;
-
- /** Return buffer info */
- void *virtrptr;
- uint64_t dmarptr;
- uint32_t rdatasize;
-
- /** Context buffer info */
- void *ctxptr;
- uint32_t ctxsize;
-
- /** Time out and callback */
- size_t wait_time;
- size_t timeout;
- uint32_t iq_no;
- void (*callback)(uint32_t, void *);
- void *callback_arg;
- struct rte_mbuf *mbuf;
-};
-
-struct lio_iq_post_status {
- int status;
- int index;
-};
-
-/* wqe
- * --------------- 0
- * | wqe word0-3 |
- * --------------- 32
- * | PCI IH |
- * --------------- 40
- * | RPTR |
- * --------------- 48
- * | PCI IRH |
- * --------------- 56
- * | OCTEON_CMD |
- * --------------- 64
- * | Addtl 8-BData |
- * | |
- * ---------------
- */
-
-union octeon_cmd {
- uint64_t cmd64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t cmd : 5;
-
- uint64_t more : 6; /* How many udd words follow the command */
-
- uint64_t reserved : 29;
-
- uint64_t param1 : 16;
-
- uint64_t param2 : 8;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-
- uint64_t param2 : 8;
-
- uint64_t param1 : 16;
-
- uint64_t reserved : 29;
-
- uint64_t more : 6;
-
- uint64_t cmd : 5;
-
-#endif
- } s;
-};
-
-#define OCTEON_CMD_SIZE (sizeof(union octeon_cmd))
-
-/* Maximum number of 8-byte words can be
- * sent in a NIC control message.
- */
-#define LIO_MAX_NCTRL_UDD 32
-
-/* Structure of control information passed by driver to the BASE
- * layer when sending control commands to Octeon device software.
- */
-struct lio_ctrl_pkt {
- /** Command to be passed to the Octeon device software. */
- union octeon_cmd ncmd;
-
- /** Send buffer */
- void *data;
- uint64_t dmadata;
-
- /** Response buffer */
- void *rdata;
- uint64_t dmardata;
-
- /** Additional data that may be needed by some commands. */
- uint64_t udd[LIO_MAX_NCTRL_UDD];
-
- /** Input queue to use to send this command. */
- uint64_t iq_no;
-
- /** Time to wait for Octeon software to respond to this control command.
- * If wait_time is 0, BASE assumes no response is expected.
- */
- size_t wait_time;
-
- struct lio_dev_ctrl_cmd *ctrl_cmd;
-};
-
-/** Structure of data information passed by driver to the BASE
- * layer when forwarding data to Octeon device software.
- */
-struct lio_data_pkt {
- /** Pointer to information maintained by NIC module for this packet. The
- * BASE layer passes this as-is to the driver.
- */
- void *buf;
-
- /** Type of buffer passed in "buf" above. */
- uint32_t reqtype;
-
- /** Total data bytes to be transferred in this command. */
- uint32_t datasize;
-
- /** Command to be passed to the Octeon device software. */
- union lio_instr_64B cmd;
-
- /** Input queue to use to send this command. */
- uint32_t q_no;
-};
-
-/** Structure passed by driver to BASE layer to prepare a command to send
- * network data to Octeon.
- */
-union lio_cmd_setup {
- struct {
- uint32_t iq_no : 8;
- uint32_t gather : 1;
- uint32_t timestamp : 1;
- uint32_t ip_csum : 1;
- uint32_t transport_csum : 1;
- uint32_t tnl_csum : 1;
- uint32_t rsvd : 19;
-
- union {
- uint32_t datasize;
- uint32_t gatherptrs;
- } u;
- } s;
-
- uint64_t cmd_setup64;
-};
-
-/* Instruction Header */
-struct octeon_instr_ih3 {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
- /** Reserved3 */
- uint64_t reserved3 : 1;
-
- /** Gather indicator 1=gather*/
- uint64_t gather : 1;
-
- /** Data length OR no. of entries in gather list */
- uint64_t dlengsz : 14;
-
- /** Front Data size */
- uint64_t fsz : 6;
-
- /** Reserved2 */
- uint64_t reserved2 : 4;
-
- /** PKI port kind - PKIND */
- uint64_t pkind : 6;
-
- /** Reserved1 */
- uint64_t reserved1 : 32;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- /** Reserved1 */
- uint64_t reserved1 : 32;
-
- /** PKI port kind - PKIND */
- uint64_t pkind : 6;
-
- /** Reserved2 */
- uint64_t reserved2 : 4;
-
- /** Front Data size */
- uint64_t fsz : 6;
-
- /** Data length OR no. of entries in gather list */
- uint64_t dlengsz : 14;
-
- /** Gather indicator 1=gather*/
- uint64_t gather : 1;
-
- /** Reserved3 */
- uint64_t reserved3 : 1;
-
-#endif
-};
-
-/* PKI Instruction Header(PKI IH) */
-struct octeon_instr_pki_ih3 {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
- /** Wider bit */
- uint64_t w : 1;
-
- /** Raw mode indicator 1 = RAW */
- uint64_t raw : 1;
-
- /** Use Tag */
- uint64_t utag : 1;
-
- /** Use QPG */
- uint64_t uqpg : 1;
-
- /** Reserved2 */
- uint64_t reserved2 : 1;
-
- /** Parse Mode */
- uint64_t pm : 3;
-
- /** Skip Length */
- uint64_t sl : 8;
-
- /** Use Tag Type */
- uint64_t utt : 1;
-
- /** Tag type */
- uint64_t tagtype : 2;
-
- /** Reserved1 */
- uint64_t reserved1 : 2;
-
- /** QPG Value */
- uint64_t qpg : 11;
-
- /** Tag Value */
- uint64_t tag : 32;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-
- /** Tag Value */
- uint64_t tag : 32;
-
- /** QPG Value */
- uint64_t qpg : 11;
-
- /** Reserved1 */
- uint64_t reserved1 : 2;
-
- /** Tag type */
- uint64_t tagtype : 2;
-
- /** Use Tag Type */
- uint64_t utt : 1;
-
- /** Skip Length */
- uint64_t sl : 8;
-
- /** Parse Mode */
- uint64_t pm : 3;
-
- /** Reserved2 */
- uint64_t reserved2 : 1;
-
- /** Use QPG */
- uint64_t uqpg : 1;
-
- /** Use Tag */
- uint64_t utag : 1;
-
- /** Raw mode indicator 1 = RAW */
- uint64_t raw : 1;
-
- /** Wider bit */
- uint64_t w : 1;
-#endif
-};
-
-/** Input Request Header */
-struct octeon_instr_irh {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t opcode : 4;
- uint64_t rflag : 1;
- uint64_t subcode : 7;
- uint64_t vlan : 12;
- uint64_t priority : 3;
- uint64_t reserved : 5;
- uint64_t ossp : 32; /* opcode/subcode specific parameters */
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- uint64_t ossp : 32; /* opcode/subcode specific parameters */
- uint64_t reserved : 5;
- uint64_t priority : 3;
- uint64_t vlan : 12;
- uint64_t subcode : 7;
- uint64_t rflag : 1;
- uint64_t opcode : 4;
-#endif
-};
-
-/* pkiih3 + irh + ossp[0] + ossp[1] + rdp + rptr = 40 bytes */
-#define OCTEON_SOFT_CMD_RESP_IH3 (40 + 8)
-/* pki_h3 + irh + ossp[0] + ossp[1] = 32 bytes */
-#define OCTEON_PCI_CMD_O3 (24 + 8)
-
-/** Return Data Parameters */
-struct octeon_instr_rdp {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t reserved : 49;
- uint64_t pcie_port : 3;
- uint64_t rlen : 12;
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- uint64_t rlen : 12;
- uint64_t pcie_port : 3;
- uint64_t reserved : 49;
-#endif
-};
-
-union octeon_packet_params {
- uint32_t pkt_params32;
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint32_t reserved : 24;
- uint32_t ip_csum : 1; /* Perform IP header checksum(s) */
- /* Perform Outer transport header checksum */
- uint32_t transport_csum : 1;
- /* Find tunnel, and perform transport csum. */
- uint32_t tnl_csum : 1;
- uint32_t tsflag : 1; /* Timestamp this packet */
- uint32_t ipsec_ops : 4; /* IPsec operation */
-#else
- uint32_t ipsec_ops : 4;
- uint32_t tsflag : 1;
- uint32_t tnl_csum : 1;
- uint32_t transport_csum : 1;
- uint32_t ip_csum : 1;
- uint32_t reserved : 7;
-#endif
- } s;
-};
-
-/** Utility function to prepare a 64B NIC instruction based on a setup command
- * @param cmd - pointer to instruction to be filled in.
- * @param setup - pointer to the setup structure
- * @param q_no - which queue for back pressure
- *
- * Assumes the cmd instruction is pre-allocated, but no fields are filled in.
- */
-static inline void
-lio_prepare_pci_cmd(struct lio_device *lio_dev,
- union lio_instr_64B *cmd,
- union lio_cmd_setup *setup,
- uint32_t tag)
-{
- union octeon_packet_params packet_params;
- struct octeon_instr_pki_ih3 *pki_ih3;
- struct octeon_instr_irh *irh;
- struct octeon_instr_ih3 *ih3;
- int port;
-
- memset(cmd, 0, sizeof(union lio_instr_64B));
-
- ih3 = (struct octeon_instr_ih3 *)&cmd->cmd3.ih3;
- pki_ih3 = (struct octeon_instr_pki_ih3 *)&cmd->cmd3.pki_ih3;
-
- /* assume that rflag is cleared so therefore front data will only have
- * irh and ossp[1] and ossp[2] for a total of 24 bytes
- */
- ih3->pkind = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.pkind;
- /* PKI IH */
- ih3->fsz = OCTEON_PCI_CMD_O3;
-
- if (!setup->s.gather) {
- ih3->dlengsz = setup->s.u.datasize;
- } else {
- ih3->gather = 1;
- ih3->dlengsz = setup->s.u.gatherptrs;
- }
-
- pki_ih3->w = 1;
- pki_ih3->raw = 0;
- pki_ih3->utag = 0;
- pki_ih3->utt = 1;
- pki_ih3->uqpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.use_qpg;
-
- port = (int)lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.port;
-
- if (tag)
- pki_ih3->tag = tag;
- else
- pki_ih3->tag = LIO_DATA(port);
-
- pki_ih3->tagtype = OCTEON_ORDERED_TAG;
- pki_ih3->qpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.qpg;
- pki_ih3->pm = 0x0; /* parse from L2 */
- pki_ih3->sl = 32; /* sl will be sizeof(pki_ih3) + irh + ossp0 + ossp1*/
-
- irh = (struct octeon_instr_irh *)&cmd->cmd3.irh;
-
- irh->opcode = LIO_OPCODE;
- irh->subcode = LIO_OPCODE_NW_DATA;
-
- packet_params.pkt_params32 = 0;
- packet_params.s.ip_csum = setup->s.ip_csum;
- packet_params.s.transport_csum = setup->s.transport_csum;
- packet_params.s.tnl_csum = setup->s.tnl_csum;
- packet_params.s.tsflag = setup->s.timestamp;
-
- irh->ossp = packet_params.pkt_params32;
-}
-
-int lio_setup_sc_buffer_pool(struct lio_device *lio_dev);
-void lio_free_sc_buffer_pool(struct lio_device *lio_dev);
-
-struct lio_soft_command *
-lio_alloc_soft_command(struct lio_device *lio_dev,
- uint32_t datasize, uint32_t rdatasize,
- uint32_t ctxsize);
-void lio_prepare_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc,
- uint8_t opcode, uint8_t subcode,
- uint32_t irh_ossp, uint64_t ossp0,
- uint64_t ossp1);
-int lio_send_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc);
-void lio_free_soft_command(struct lio_soft_command *sc);
-
-/** Send control packet to the device
- * @param lio_dev - lio device pointer
- * @param nctrl - control structure with command, timeout, and callback info
- *
- * @returns LIO_IQ_SEND_FAILED if it failed to add to the input queue,
- * LIO_IQ_SEND_STOP if the queue should be stopped, and LIO_IQ_SEND_OK if it
- * sent okay.
- */
-int lio_send_ctrl_pkt(struct lio_device *lio_dev,
- struct lio_ctrl_pkt *ctrl_pkt);
-
-/** Maximum ordered requests to process in every invocation of
- * lio_process_ordered_list(). The function will continue to process requests
- * as long as it can find one that has finished processing. If it keeps
- * finding requests that have completed, the function can run forever. The
- * value defined here sets an upper limit on the number of requests it can
- * process before it returns control to the poll thread.
- */
-#define LIO_MAX_ORD_REQS_TO_PROCESS 4096
-
-/** Error codes used in Octeon Host-Core communication.
- *
- * 31 16 15 0
- * ----------------------------
- * | | |
- * ----------------------------
- * Error codes are 32-bit wide. The upper 16-bits, called Major Error Number,
- * are reserved to identify the group to which the error code belongs. The
- * lower 16-bits, called Minor Error Number, carry the actual code.
- *
- * So error codes are (MAJOR NUMBER << 16)| MINOR_NUMBER.
- */
-/** Status for a request.
- * If the request is successfully queued, the driver will return
- * a LIO_REQUEST_PENDING status. LIO_REQUEST_TIMEOUT is only returned by
- * the driver if the response for a request fails to arrive before the
- * time-out period or if request processing is interrupted by a signal.
- */
-enum {
- /** A value of 0x00000000 indicates no error i.e. success */
- LIO_REQUEST_DONE = 0x00000000,
- /** (Major number: 0x0000; Minor Number: 0x0001) */
- LIO_REQUEST_PENDING = 0x00000001,
- LIO_REQUEST_TIMEOUT = 0x00000003,
-
-};
-
-/*------ Error codes used by firmware (bits 15..0 set by firmware */
-#define LIO_FIRMWARE_MAJOR_ERROR_CODE 0x0001
-#define LIO_FIRMWARE_STATUS_CODE(status) \
- ((LIO_FIRMWARE_MAJOR_ERROR_CODE << 16) | (status))
-
-/** Initialize the response lists. The number of response lists to create is
- * given by count.
- * @param lio_dev - the lio device structure.
- */
-void lio_setup_response_list(struct lio_device *lio_dev);
-
-/** Check the status of first entry in the ordered list. If the instruction at
- * that entry finished processing or has timed-out, the entry is cleaned.
- * @param lio_dev - the lio device structure.
- * @return 1 if the ordered list is empty, 0 otherwise.
- */
-int lio_process_ordered_list(struct lio_device *lio_dev);
-
-#define LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, field, count) \
- (((lio_dev)->instr_queue[iq_no]->stats.field) += count)
-
-static inline void
-lio_swap_8B_data(uint64_t *data, uint32_t blocks)
-{
- while (blocks) {
- *data = rte_cpu_to_be_64(*data);
- blocks--;
- data++;
- }
-}
-
-static inline uint64_t
-lio_map_ring(void *buf)
-{
- rte_iova_t dma_addr;
-
- dma_addr = rte_mbuf_data_iova_default(((struct rte_mbuf *)buf));
-
- return (uint64_t)dma_addr;
-}
-
-static inline uint64_t
-lio_map_ring_info(struct lio_droq *droq, uint32_t i)
-{
- rte_iova_t dma_addr;
-
- dma_addr = droq->info_list_dma + (i * LIO_DROQ_INFO_SIZE);
-
- return (uint64_t)dma_addr;
-}
-
-static inline int
-lio_opcode_slow_path(union octeon_rh *rh)
-{
- uint16_t subcode1, subcode2;
-
- subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
- subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
-
- return subcode2 != subcode1;
-}
-
-static inline void
-lio_add_sg_size(struct lio_sg_entry *sg_entry,
- uint16_t size, uint32_t pos)
-{
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- sg_entry->u.size[pos] = size;
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- sg_entry->u.size[3 - pos] = size;
-#endif
-}
-
-/* Macro to increment index.
- * Index is incremented by count; if the sum exceeds
- * max, index is wrapped-around to the start.
- */
-static inline uint32_t
-lio_incr_index(uint32_t index, uint32_t count, uint32_t max)
-{
- if ((index + count) >= max)
- index = index + count - max;
- else
- index += count;
-
- return index;
-}
-
-int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
- int desc_size, struct rte_mempool *mpool,
- unsigned int socket_id);
-uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t budget);
-void lio_delete_droq_queue(struct lio_device *lio_dev, int oq_no);
-
-void lio_delete_sglist(struct lio_instr_queue *txq);
-int lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
- int fw_mapped_iq, int num_descs, unsigned int socket_id);
-uint16_t lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts,
- uint16_t nb_pkts);
-int lio_wait_for_instr_fetch(struct lio_device *lio_dev);
-int lio_setup_iq(struct lio_device *lio_dev, int q_index,
- union octeon_txpciq iq_no, uint32_t num_descs, void *app_ctx,
- unsigned int socket_id);
-int lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq);
-void lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no);
-/** Setup instruction queue zero for the device
- * @param lio_dev which lio device to setup
- *
- * @return 0 if success. -1 if fails
- */
-int lio_setup_instr_queue0(struct lio_device *lio_dev);
-void lio_free_instr_queue0(struct lio_device *lio_dev);
-void lio_dev_clear_queues(struct rte_eth_dev *eth_dev);
-#endif /* _LIO_RXTX_H_ */
diff --git a/drivers/net/liquidio/lio_struct.h b/drivers/net/liquidio/lio_struct.h
deleted file mode 100644
index 10270c560e..0000000000
--- a/drivers/net/liquidio/lio_struct.h
+++ /dev/null
@@ -1,661 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_STRUCT_H_
-#define _LIO_STRUCT_H_
-
-#include <stdio.h>
-#include <stdint.h>
-#include <sys/queue.h>
-
-#include <rte_spinlock.h>
-#include <rte_atomic.h>
-
-#include "lio_hw_defs.h"
-
-struct lio_stailq_node {
- STAILQ_ENTRY(lio_stailq_node) entries;
-};
-
-STAILQ_HEAD(lio_stailq_head, lio_stailq_node);
-
-struct lio_version {
- uint16_t major;
- uint16_t minor;
- uint16_t micro;
- uint16_t reserved;
-};
-
-/** Input Queue statistics. Each input queue has four stats fields. */
-struct lio_iq_stats {
- uint64_t instr_posted; /**< Instructions posted to this queue. */
- uint64_t instr_processed; /**< Instructions processed in this queue. */
- uint64_t instr_dropped; /**< Instructions that could not be processed */
- uint64_t bytes_sent; /**< Bytes sent through this queue. */
- uint64_t tx_done; /**< Num of packets sent to network. */
- uint64_t tx_iq_busy; /**< Num of times this iq was found to be full. */
- uint64_t tx_dropped; /**< Num of pkts dropped due to xmitpath errors. */
- uint64_t tx_tot_bytes; /**< Total count of bytes sent to network. */
-};
-
-/** Output Queue statistics. Each output queue has four stats fields. */
-struct lio_droq_stats {
- /** Number of packets received in this queue. */
- uint64_t pkts_received;
-
- /** Bytes received by this queue. */
- uint64_t bytes_received;
-
- /** Packets dropped due to no memory available. */
- uint64_t dropped_nomem;
-
- /** Packets dropped due to large number of pkts to process. */
- uint64_t dropped_toomany;
-
- /** Number of packets sent to stack from this queue. */
- uint64_t rx_pkts_received;
-
- /** Number of Bytes sent to stack from this queue. */
- uint64_t rx_bytes_received;
-
- /** Num of Packets dropped due to receive path failures. */
- uint64_t rx_dropped;
-
- /** Num of vxlan packets received; */
- uint64_t rx_vxlan;
-
- /** Num of failures of rte_pktmbuf_alloc() */
- uint64_t rx_alloc_failure;
-
-};
-
-/** The Descriptor Ring Output Queue structure.
- * This structure has all the information required to implement a
- * DROQ.
- */
-struct lio_droq {
- /** A spinlock to protect access to this ring. */
- rte_spinlock_t lock;
-
- uint32_t q_no;
-
- uint32_t pkt_count;
-
- struct lio_device *lio_dev;
-
- /** The 8B aligned descriptor ring starts at this address. */
- struct lio_droq_desc *desc_ring;
-
- /** Index in the ring where the driver should read the next packet */
- uint32_t read_idx;
-
- /** Index in the ring where Octeon will write the next packet */
- uint32_t write_idx;
-
- /** Index in the ring where the driver will refill the descriptor's
- * buffer
- */
- uint32_t refill_idx;
-
- /** Packets pending to be processed */
- rte_atomic64_t pkts_pending;
-
- /** Number of descriptors in this ring. */
- uint32_t nb_desc;
-
- /** The number of descriptors pending refill. */
- uint32_t refill_count;
-
- uint32_t refill_threshold;
-
- /** The 8B aligned info ptrs begin from this address. */
- struct lio_droq_info *info_list;
-
- /** The receive buffer list. This list has the virtual addresses of the
- * buffers.
- */
- struct lio_recv_buffer *recv_buf_list;
-
- /** The size of each buffer pointed by the buffer pointer. */
- uint32_t buffer_size;
-
- /** Pointer to the mapped packet credit register.
- * Host writes number of info/buffer ptrs available to this register
- */
- void *pkts_credit_reg;
-
- /** Pointer to the mapped packet sent register.
- * Octeon writes the number of packets DMA'ed to host memory
- * in this register.
- */
- void *pkts_sent_reg;
-
- /** Statistics for this DROQ. */
- struct lio_droq_stats stats;
-
- /** DMA mapped address of the DROQ descriptor ring. */
- size_t desc_ring_dma;
-
- /** Info ptr list are allocated at this virtual address. */
- size_t info_base_addr;
-
- /** DMA mapped address of the info list */
- size_t info_list_dma;
-
- /** Allocated size of info list. */
- uint32_t info_alloc_size;
-
- /** Memory zone **/
- const struct rte_memzone *desc_ring_mz;
- const struct rte_memzone *info_mz;
- struct rte_mempool *mpool;
-};
-
-/** Receive Header */
-union octeon_rh {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t rh64;
- struct {
- uint64_t opcode : 4;
- uint64_t subcode : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t reserved : 17;
- uint64_t ossp : 32; /** opcode/subcode specific parameters */
- } r;
- struct {
- uint64_t opcode : 4;
- uint64_t subcode : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t extra : 28;
- uint64_t vlan : 12;
- uint64_t priority : 3;
- uint64_t csum_verified : 3; /** checksum verified. */
- uint64_t has_hwtstamp : 1; /** Has hardware timestamp.1 = yes.*/
- uint64_t encap_on : 1;
- uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
- } r_dh;
- struct {
- uint64_t opcode : 4;
- uint64_t subcode : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t reserved : 8;
- uint64_t extra : 25;
- uint64_t gmxport : 16;
- } r_nic_info;
-#else
- uint64_t rh64;
- struct {
- uint64_t ossp : 32; /** opcode/subcode specific parameters */
- uint64_t reserved : 17;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t subcode : 8;
- uint64_t opcode : 4;
- } r;
- struct {
- uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
- uint64_t encap_on : 1;
- uint64_t has_hwtstamp : 1; /** 1 = has hwtstamp */
- uint64_t csum_verified : 3; /** checksum verified. */
- uint64_t priority : 3;
- uint64_t vlan : 12;
- uint64_t extra : 28;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t subcode : 8;
- uint64_t opcode : 4;
- } r_dh;
- struct {
- uint64_t gmxport : 16;
- uint64_t extra : 25;
- uint64_t reserved : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t subcode : 8;
- uint64_t opcode : 4;
- } r_nic_info;
-#endif
-};
-
-#define OCTEON_RH_SIZE (sizeof(union octeon_rh))
-
-/** The txpciq info passed to host from the firmware */
-union octeon_txpciq {
- uint64_t txpciq64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t q_no : 8;
- uint64_t port : 8;
- uint64_t pkind : 6;
- uint64_t use_qpg : 1;
- uint64_t qpg : 11;
- uint64_t aura_num : 10;
- uint64_t reserved : 20;
-#else
- uint64_t reserved : 20;
- uint64_t aura_num : 10;
- uint64_t qpg : 11;
- uint64_t use_qpg : 1;
- uint64_t pkind : 6;
- uint64_t port : 8;
- uint64_t q_no : 8;
-#endif
- } s;
-};
-
-/** The instruction (input) queue.
- * The input queue is used to post raw (instruction) mode data or packet
- * data to Octeon device from the host. Each input queue for
- * a LIO device has one such structure to represent it.
- */
-struct lio_instr_queue {
- /** A spinlock to protect access to the input ring. */
- rte_spinlock_t lock;
-
- rte_spinlock_t post_lock;
-
- struct lio_device *lio_dev;
-
- uint32_t pkt_in_done;
-
- rte_atomic64_t iq_flush_running;
-
- /** Flag that indicates if the queue uses 64 byte commands. */
- uint32_t iqcmd_64B:1;
-
- /** Queue info. */
- union octeon_txpciq txpciq;
-
- uint32_t rsvd:17;
-
- uint32_t status:8;
-
- /** Number of descriptors in this ring. */
- uint32_t nb_desc;
-
- /** Index in input ring where the driver should write the next packet */
- uint32_t host_write_index;
-
- /** Index in input ring where Octeon is expected to read the next
- * packet.
- */
- uint32_t lio_read_index;
-
- /** This index aids in finding the window in the queue where Octeon
- * has read the commands.
- */
- uint32_t flush_index;
-
- /** This field keeps track of the instructions pending in this queue. */
- rte_atomic64_t instr_pending;
-
- /** Pointer to the Virtual Base addr of the input ring. */
- uint8_t *base_addr;
-
- struct lio_request_list *request_list;
-
- /** Octeon doorbell register for the ring. */
- void *doorbell_reg;
-
- /** Octeon instruction count register for this ring. */
- void *inst_cnt_reg;
-
- /** Number of instructions pending to be posted to Octeon. */
- uint32_t fill_cnt;
-
- /** Statistics for this input queue. */
- struct lio_iq_stats stats;
-
- /** DMA mapped base address of the input descriptor ring. */
- uint64_t base_addr_dma;
-
- /** Application context */
- void *app_ctx;
-
- /* network stack queue index */
- int q_index;
-
- /* Memory zone */
- const struct rte_memzone *iq_mz;
-};
-
-/** This structure is used by driver to store information required
- * to free the mbuff when the packet has been fetched by Octeon.
- * Bytes offset below assume worst-case of a 64-bit system.
- */
-struct lio_buf_free_info {
- /** Bytes 1-8. Pointer to network device private structure. */
- struct lio_device *lio_dev;
-
- /** Bytes 9-16. Pointer to mbuff. */
- struct rte_mbuf *mbuf;
-
- /** Bytes 17-24. Pointer to gather list. */
- struct lio_gather *g;
-
- /** Bytes 25-32. Physical address of mbuf->data or gather list. */
- uint64_t dptr;
-
- /** Bytes 33-47. Piggybacked soft command, if any */
- struct lio_soft_command *sc;
-
- /** Bytes 48-63. iq no */
- uint64_t iq_no;
-};
-
-/* The Scatter-Gather List Entry. The scatter or gather component used with
- * input instruction has this format.
- */
-struct lio_sg_entry {
- /** The first 64 bit gives the size of data in each dptr. */
- union {
- uint16_t size[4];
- uint64_t size64;
- } u;
-
- /** The 4 dptr pointers for this entry. */
- uint64_t ptr[4];
-};
-
-#define LIO_SG_ENTRY_SIZE (sizeof(struct lio_sg_entry))
-
-/** Structure of a node in list of gather components maintained by
- * driver for each network device.
- */
-struct lio_gather {
- /** List manipulation. Next and prev pointers. */
- struct lio_stailq_node list;
-
- /** Size of the gather component at sg in bytes. */
- int sg_size;
-
- /** Number of bytes that sg was adjusted to make it 8B-aligned. */
- int adjust;
-
- /** Gather component that can accommodate max sized fragment list
- * received from the IP layer.
- */
- struct lio_sg_entry *sg;
-};
-
-struct lio_rss_ctx {
- uint16_t hash_key_size;
- uint8_t hash_key[LIO_RSS_MAX_KEY_SZ];
- /* Ideally a factor of number of queues */
- uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
- uint8_t itable_size;
- uint8_t ip;
- uint8_t tcp_hash;
- uint8_t ipv6;
- uint8_t ipv6_tcp_hash;
- uint8_t ipv6_ex;
- uint8_t ipv6_tcp_ex_hash;
- uint8_t hash_disable;
-};
-
-struct lio_io_enable {
- uint64_t iq;
- uint64_t oq;
- uint64_t iq64B;
-};
-
-struct lio_fn_list {
- void (*setup_iq_regs)(struct lio_device *, uint32_t);
- void (*setup_oq_regs)(struct lio_device *, uint32_t);
-
- int (*setup_mbox)(struct lio_device *);
- void (*free_mbox)(struct lio_device *);
-
- int (*setup_device_regs)(struct lio_device *);
- int (*enable_io_queues)(struct lio_device *);
- void (*disable_io_queues)(struct lio_device *);
-};
-
-struct lio_pf_vf_hs_word {
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- /** PKIND value assigned for the DPI interface */
- uint64_t pkind : 8;
-
- /** OCTEON core clock multiplier */
- uint64_t core_tics_per_us : 16;
-
- /** OCTEON coprocessor clock multiplier */
- uint64_t coproc_tics_per_us : 16;
-
- /** app that currently running on OCTEON */
- uint64_t app_mode : 8;
-
- /** RESERVED */
- uint64_t reserved : 16;
-
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
- /** RESERVED */
- uint64_t reserved : 16;
-
- /** app that currently running on OCTEON */
- uint64_t app_mode : 8;
-
- /** OCTEON coprocessor clock multiplier */
- uint64_t coproc_tics_per_us : 16;
-
- /** OCTEON core clock multiplier */
- uint64_t core_tics_per_us : 16;
-
- /** PKIND value assigned for the DPI interface */
- uint64_t pkind : 8;
-#endif
-};
-
-struct lio_sriov_info {
- /** Number of rings assigned to VF */
- uint32_t rings_per_vf;
-
- /** Number of VF devices enabled */
- uint32_t num_vfs;
-};
-
-/* Head of a response list */
-struct lio_response_list {
- /** List structure to add delete pending entries to */
- struct lio_stailq_head head;
-
- /** A lock for this response list */
- rte_spinlock_t lock;
-
- rte_atomic64_t pending_req_count;
-};
-
-/* Structure to define the configuration attributes for each Input queue. */
-struct lio_iq_config {
- /* Max number of IQs available */
- uint8_t max_iqs;
-
- /** Pending list size (usually set to the sum of the size of all Input
- * queues)
- */
- uint32_t pending_list_size;
-
- /** Command size - 32 or 64 bytes */
- uint32_t instr_type;
-};
-
-/* Structure to define the configuration attributes for each Output queue. */
-struct lio_oq_config {
- /* Max number of OQs available */
- uint8_t max_oqs;
-
- /** If set, the Output queue uses info-pointer mode. (Default: 1 ) */
- uint32_t info_ptr;
-
- /** The number of buffers that were consumed during packet processing by
- * the driver on this Output queue before the driver attempts to
- * replenish the descriptor ring with new buffers.
- */
- uint32_t refill_threshold;
-};
-
-/* Structure to define the configuration. */
-struct lio_config {
- uint16_t card_type;
- const char *card_name;
-
- /** Input Queue attributes. */
- struct lio_iq_config iq;
-
- /** Output Queue attributes. */
- struct lio_oq_config oq;
-
- int num_nic_ports;
-
- int num_def_tx_descs;
-
- /* Num of desc for rx rings */
- int num_def_rx_descs;
-
- int def_rx_buf_size;
-};
-
-/** Status of a RGMII Link on Octeon as seen by core driver. */
-union octeon_link_status {
- uint64_t link_status64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t duplex : 8;
- uint64_t mtu : 16;
- uint64_t speed : 16;
- uint64_t link_up : 1;
- uint64_t autoneg : 1;
- uint64_t if_mode : 5;
- uint64_t pause : 1;
- uint64_t flashing : 1;
- uint64_t reserved : 15;
-#else
- uint64_t reserved : 15;
- uint64_t flashing : 1;
- uint64_t pause : 1;
- uint64_t if_mode : 5;
- uint64_t autoneg : 1;
- uint64_t link_up : 1;
- uint64_t speed : 16;
- uint64_t mtu : 16;
- uint64_t duplex : 8;
-#endif
- } s;
-};
-
-/** The rxpciq info passed to host from the firmware */
-union octeon_rxpciq {
- uint64_t rxpciq64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t q_no : 8;
- uint64_t reserved : 56;
-#else
- uint64_t reserved : 56;
- uint64_t q_no : 8;
-#endif
- } s;
-};
-
-/** Information for a OCTEON ethernet interface shared between core & host. */
-struct octeon_link_info {
- union octeon_link_status link;
- uint64_t hw_addr;
-
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t gmxport : 16;
- uint64_t macaddr_is_admin_assigned : 1;
- uint64_t vlan_is_admin_assigned : 1;
- uint64_t rsvd : 30;
- uint64_t num_txpciq : 8;
- uint64_t num_rxpciq : 8;
-#else
- uint64_t num_rxpciq : 8;
- uint64_t num_txpciq : 8;
- uint64_t rsvd : 30;
- uint64_t vlan_is_admin_assigned : 1;
- uint64_t macaddr_is_admin_assigned : 1;
- uint64_t gmxport : 16;
-#endif
-
- union octeon_txpciq txpciq[LIO_MAX_IOQS_PER_IF];
- union octeon_rxpciq rxpciq[LIO_MAX_IOQS_PER_IF];
-};
-
-/* ----------------------- THE LIO DEVICE --------------------------- */
-/** The lio device.
- * Each lio device has this structure to represent all its
- * components.
- */
-struct lio_device {
- /** PCI device pointer */
- struct rte_pci_device *pci_dev;
-
- /** Octeon Chip type */
- uint16_t chip_id;
- uint16_t pf_num;
- uint16_t vf_num;
-
- /** This device's PCIe port used for traffic. */
- uint16_t pcie_port;
-
- /** The state of this device */
- rte_atomic64_t status;
-
- uint8_t intf_open;
-
- struct octeon_link_info linfo;
-
- uint8_t *hw_addr;
-
- struct lio_fn_list fn_list;
-
- uint32_t num_iqs;
-
- /** Guards each glist */
- rte_spinlock_t *glist_lock;
- /** Array of gather component linked lists */
- struct lio_stailq_head *glist_head;
-
- /* The pool containing pre allocated buffers used for soft commands */
- struct rte_mempool *sc_buf_pool;
-
- /** The input instruction queues */
- struct lio_instr_queue *instr_queue[LIO_MAX_POSSIBLE_INSTR_QUEUES];
-
- /** The singly-linked tail queues of instruction response */
- struct lio_response_list response_list;
-
- uint32_t num_oqs;
-
- /** The DROQ output queues */
- struct lio_droq *droq[LIO_MAX_POSSIBLE_OUTPUT_QUEUES];
-
- struct lio_io_enable io_qmask;
-
- struct lio_sriov_info sriov_info;
-
- struct lio_pf_vf_hs_word pfvf_hsword;
-
- /** Mail Box details of each lio queue. */
- struct lio_mbox **mbox;
-
- char dev_string[LIO_DEVICE_NAME_LEN]; /* Device print string */
-
- const struct lio_config *default_config;
-
- struct rte_eth_dev *eth_dev;
-
- uint64_t ifflags;
- uint8_t max_rx_queues;
- uint8_t max_tx_queues;
- uint8_t nb_rx_queues;
- uint8_t nb_tx_queues;
- uint8_t port_configured;
- struct lio_rss_ctx rss_state;
- uint16_t port_id;
- char firmware_version[LIO_FW_VERSION_LENGTH];
-};
-#endif /* _LIO_STRUCT_H_ */
diff --git a/drivers/net/liquidio/meson.build b/drivers/net/liquidio/meson.build
deleted file mode 100644
index ebadbf3dea..0000000000
--- a/drivers/net/liquidio/meson.build
+++ /dev/null
@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-if is_windows
- build = false
- reason = 'not supported on Windows'
- subdir_done()
-endif
-
-sources = files(
- 'base/lio_23xx_vf.c',
- 'base/lio_mbox.c',
- 'lio_ethdev.c',
- 'lio_rxtx.c',
-)
-includes += include_directories('base')
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index b1df17ce8c..f68bbc27a7 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -36,7 +36,6 @@ drivers = [
'ipn3ke',
'ixgbe',
'kni',
- 'liquidio',
'mana',
'memif',
'mlx4',
--
2.40.1
* Re: [dpdk-dev] [PATCH v2] net/liquidio: remove LiquidIO ethdev driver
2023-05-08 13:44 ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
@ 2023-05-17 15:47 ` Jerin Jacob
0 siblings, 0 replies; 4+ messages in thread
From: Jerin Jacob @ 2023-05-17 15:47 UTC (permalink / raw)
To: jerinj
Cc: dev, Thomas Monjalon, Anatoly Burakov, david.marchand, ferruh.yigit
On Mon, May 8, 2023 at 7:15 PM <jerinj@marvell.com> wrote:
>
> From: Jerin Jacob <jerinj@marvell.com>
>
> The LiquidIO product line has been substituted with CN9K/CN10K
> OCTEON product line smart NICs located at drivers/net/octeon_ep/.
>
> DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
> because of the absence of updates in the driver.
>
> Due to the above reasons, the driver removed from DPDK 23.07.
>
> Also removed deprecation notice entry for the removal in
> doc/guides/rel_notes/deprecation.rst and skipped removed
> driver file in ABI check in devtools/libabigail.abignore.
>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> ---
> v2:
> - Skip driver ABI check (Ferruh)
> - Addressed the review comments in
> http://patches.dpdk.org/project/dpdk/patch/20230428103127.1059989-1-jerinj@marvell.com/ (Ferruh)
Applied to dpdk-next-net-mrvl/for-next-net. Thanks
>
> MAINTAINERS | 8 -
> devtools/libabigail.abignore | 1 +
> doc/guides/nics/features/liquidio.ini | 29 -
> doc/guides/nics/index.rst | 1 -
> doc/guides/nics/liquidio.rst | 169 --
> doc/guides/rel_notes/deprecation.rst | 7 -
> doc/guides/rel_notes/release_23_07.rst | 2 +
> drivers/net/liquidio/base/lio_23xx_reg.h | 165 --
> drivers/net/liquidio/base/lio_23xx_vf.c | 513 ------
> drivers/net/liquidio/base/lio_23xx_vf.h | 63 -
> drivers/net/liquidio/base/lio_hw_defs.h | 239 ---
> drivers/net/liquidio/base/lio_mbox.c | 246 ---
> drivers/net/liquidio/base/lio_mbox.h | 102 -
> drivers/net/liquidio/lio_ethdev.c | 2147 ----------------------
> drivers/net/liquidio/lio_ethdev.h | 179 --
> drivers/net/liquidio/lio_logs.h | 58 -
> drivers/net/liquidio/lio_rxtx.c | 1804 ------------------
> drivers/net/liquidio/lio_rxtx.h | 740 --------
> drivers/net/liquidio/lio_struct.h | 661 -------
> drivers/net/liquidio/meson.build | 16 -
> drivers/net/meson.build | 1 -
> 21 files changed, 3 insertions(+), 7148 deletions(-)
> delete mode 100644 doc/guides/nics/features/liquidio.ini
> delete mode 100644 doc/guides/nics/liquidio.rst
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
> delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
> delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
> delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
> delete mode 100644 drivers/net/liquidio/lio_ethdev.c
> delete mode 100644 drivers/net/liquidio/lio_ethdev.h
> delete mode 100644 drivers/net/liquidio/lio_logs.h
> delete mode 100644 drivers/net/liquidio/lio_rxtx.c
> delete mode 100644 drivers/net/liquidio/lio_rxtx.h
> delete mode 100644 drivers/net/liquidio/lio_struct.h
> delete mode 100644 drivers/net/liquidio/meson.build
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8df23e5099..0157c26dd2 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -681,14 +681,6 @@ F: drivers/net/thunderx/
> F: doc/guides/nics/thunderx.rst
> F: doc/guides/nics/features/thunderx.ini
>
> -Cavium LiquidIO - UNMAINTAINED
> -M: Shijith Thotton <sthotton@marvell.com>
> -M: Srisivasubramanian Srinivasan <srinivasan@marvell.com>
> -T: git://dpdk.org/next/dpdk-next-net-mrvl
> -F: drivers/net/liquidio/
> -F: doc/guides/nics/liquidio.rst
> -F: doc/guides/nics/features/liquidio.ini
> -
> Cavium OCTEON TX
> M: Harman Kalra <hkalra@marvell.com>
> T: git://dpdk.org/next/dpdk-next-net-mrvl
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 3ff51509de..c0361bfc7b 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -25,6 +25,7 @@
> ;
> ; SKIP_LIBRARY=librte_common_mlx5_glue
> ; SKIP_LIBRARY=librte_net_mlx4_glue
> +; SKIP_LIBRARY=librte_net_liquidio
>
> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
> ; Experimental APIs exceptions ;
> diff --git a/doc/guides/nics/features/liquidio.ini b/doc/guides/nics/features/liquidio.ini
> deleted file mode 100644
> index a8bde282e0..0000000000
> --- a/doc/guides/nics/features/liquidio.ini
> +++ /dev/null
> @@ -1,29 +0,0 @@
> -;
> -; Supported features of the 'LiquidIO' network poll mode driver.
> -;
> -; Refer to default.ini for the full list of available PMD features.
> -;
> -[Features]
> -Speed capabilities = Y
> -Link status = Y
> -Link status event = Y
> -MTU update = Y
> -Scattered Rx = Y
> -Promiscuous mode = Y
> -Allmulticast mode = Y
> -RSS hash = Y
> -RSS key update = Y
> -RSS reta update = Y
> -VLAN filter = Y
> -CRC offload = Y
> -VLAN offload = P
> -L3 checksum offload = Y
> -L4 checksum offload = Y
> -Inner L3 checksum = Y
> -Inner L4 checksum = Y
> -Basic stats = Y
> -Extended stats = Y
> -Multiprocess aware = Y
> -Linux = Y
> -x86-64 = Y
> -Usage doc = Y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index 5c9d1edf5e..31296822e5 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -44,7 +44,6 @@ Network Interface Controller Drivers
> ipn3ke
> ixgbe
> kni
> - liquidio
> mana
> memif
> mlx4
> diff --git a/doc/guides/nics/liquidio.rst b/doc/guides/nics/liquidio.rst
> deleted file mode 100644
> index f893b3b539..0000000000
> --- a/doc/guides/nics/liquidio.rst
> +++ /dev/null
> @@ -1,169 +0,0 @@
> -.. SPDX-License-Identifier: BSD-3-Clause
> - Copyright(c) 2017 Cavium, Inc
> -
> -LiquidIO VF Poll Mode Driver
> -============================
> -
> -The LiquidIO VF PMD library (**librte_net_liquidio**) provides poll mode driver support for
> -Cavium LiquidIO® II server adapter VFs. PF management and VF creation can be
> -done using kernel driver.
> -
> -More information can be found at `Cavium Official Website
> -<http://cavium.com/LiquidIO_Adapters.html>`_.
> -
> -Supported LiquidIO Adapters
> ------------------------------
> -
> -- LiquidIO II CN2350 210SV/225SV
> -- LiquidIO II CN2350 210SVPT
> -- LiquidIO II CN2360 210SV/225SV
> -- LiquidIO II CN2360 210SVPT
> -
> -
> -SR-IOV: Prerequisites and Sample Application Notes
> ---------------------------------------------------
> -
> -This section provides instructions to configure SR-IOV with Linux OS.
> -
> -#. Verify SR-IOV and ARI capabilities are enabled on the adapter using ``lspci``:
> -
> - .. code-block:: console
> -
> - lspci -s <slot> -vvv
> -
> - Example output:
> -
> - .. code-block:: console
> -
> - [...]
> - Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
> - [...]
> - Capabilities: [178 v1] Single Root I/O Virtualization (SR-IOV)
> - [...]
> - Kernel driver in use: LiquidIO
> -
> -#. Load the kernel module:
> -
> - .. code-block:: console
> -
> - modprobe liquidio
> -
> -#. Bring up the PF ports:
> -
> - .. code-block:: console
> -
> - ifconfig p4p1 up
> - ifconfig p4p2 up
> -
> -#. Change PF MTU if required:
> -
> - .. code-block:: console
> -
> - ifconfig p4p1 mtu 9000
> - ifconfig p4p2 mtu 9000
> -
> -#. Create VF device(s):
> -
> - Echo number of VFs to be created into ``"sriov_numvfs"`` sysfs entry
> - of the parent PF.
> -
> - .. code-block:: console
> -
> - echo 1 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
> - echo 1 > /sys/bus/pci/devices/0000:03:00.1/sriov_numvfs
> -
> -#. Assign VF MAC address:
> -
> - Assign MAC address to the VF using iproute2 utility. The syntax is::
> -
> - ip link set <PF iface> vf <VF id> mac <macaddr>
> -
> - Example output:
> -
> - .. code-block:: console
> -
> - ip link set p4p1 vf 0 mac F2:A8:1B:5E:B4:66
> -
> -#. Assign VF(s) to VM.
> -
> - The VF devices may be passed through to the guest VM using qemu or
> - virt-manager or virsh etc.
> -
> - Example qemu guest launch command:
> -
> - .. code-block:: console
> -
> - ./qemu-system-x86_64 -name lio-vm -machine accel=kvm \
> - -cpu host -m 4096 -smp 4 \
> - -drive file=<disk_file>,if=none,id=disk1,format=<type> \
> - -device virtio-blk-pci,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
> - -device vfio-pci,host=03:00.3 -device vfio-pci,host=03:08.3
> -
> -#. Running testpmd
> -
> - Refer to the document
> - :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to run
> - ``testpmd`` application.
> -
> - .. note::
> -
> - Use ``igb_uio`` instead of ``vfio-pci`` in VM.
> -
> - Example output:
> -
> - .. code-block:: console
> -
> - [...]
> - EAL: PCI device 0000:03:00.3 on NUMA socket 0
> - EAL: probe driver: 177d:9712 net_liovf
> - EAL: using IOMMU type 1 (Type 1)
> - PMD: net_liovf[03:00.3]INFO: DEVICE : CN23XX VF
> - EAL: PCI device 0000:03:08.3 on NUMA socket 0
> - EAL: probe driver: 177d:9712 net_liovf
> - PMD: net_liovf[03:08.3]INFO: DEVICE : CN23XX VF
> - Interactive-mode selected
> - USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
> - Configuring Port 0 (socket 0)
> - PMD: net_liovf[03:00.3]INFO: Starting port 0
> - Port 0: F2:A8:1B:5E:B4:66
> - Configuring Port 1 (socket 0)
> - PMD: net_liovf[03:08.3]INFO: Starting port 1
> - Port 1: 32:76:CC:EE:56:D7
> - Checking link statuses...
> - Port 0 Link Up - speed 10000 Mbps - full-duplex
> - Port 1 Link Up - speed 10000 Mbps - full-duplex
> - Done
> - testpmd>
> -
> -#. Enabling VF promiscuous mode
> -
> - One VF per PF can be marked as trusted for promiscuous mode.
> -
> - .. code-block:: console
> -
> - ip link set dev <PF iface> vf <VF id> trust on
> -
> -
> -Limitations
> ------------
> -
> -VF MTU
> -~~~~~~
> -
> -VF MTU is limited by PF MTU. Raise the PF value before configuring the VF for a larger packet size.
> -
> -VLAN offload
> -~~~~~~~~~~~~
> -
> -Tx VLAN insertion is not supported and consequently the VLAN offload feature is
> -marked partial.
> -
> -Ring size
> -~~~~~~~~~
> -
> -The number of descriptors for an Rx/Tx ring should be in the range 128 to 512.
> -
> -CRC stripping
> -~~~~~~~~~~~~~
> -
> -LiquidIO adapters strip the Ethernet FCS from every packet coming to the host interface.
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index dcc1ca1696..8e1cdd677a 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -121,13 +121,6 @@ Deprecation Notices
> * net/bnx2x: Starting from DPDK 23.07, the Marvell QLogic bnx2x driver will be removed.
> This decision has been made to alleviate the burden of maintaining a discontinued product.
>
> -* net/liquidio: Remove LiquidIO ethdev driver.
> - The LiquidIO product line has been substituted
> - with CN9K/CN10K OCTEON product line smart NICs located in ``drivers/net/octeon_ep/``.
> - DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
> - because of the absence of updates in the driver.
> - Due to the above reasons, the driver will be unavailable from DPDK 23.07.
> -
> * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
> to have another parameter ``qp_id`` to return the queue pair ID
> which got error interrupt to the application,
> diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
> index a9b1293689..f13a7b32b6 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -68,6 +68,8 @@ Removed Items
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``.
> +
>
> API Changes
> -----------
> diff --git a/drivers/net/liquidio/base/lio_23xx_reg.h b/drivers/net/liquidio/base/lio_23xx_reg.h
> deleted file mode 100644
> index 9f28504b53..0000000000
> --- a/drivers/net/liquidio/base/lio_23xx_reg.h
> +++ /dev/null
> @@ -1,165 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_23XX_REG_H_
> -#define _LIO_23XX_REG_H_
> -
> -/* ###################### REQUEST QUEUE ######################### */
> -
> -/* 64 registers for Input Queues Start Addr - SLI_PKT(0..63)_INSTR_BADDR */
> -#define CN23XX_SLI_PKT_INSTR_BADDR_START64 0x10010
> -
> -/* 64 registers for Input Doorbell - SLI_PKT(0..63)_INSTR_BAOFF_DBELL */
> -#define CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START 0x10020
> -
> -/* 64 registers for Input Queue size - SLI_PKT(0..63)_INSTR_FIFO_RSIZE */
> -#define CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START 0x10030
> -
> -/* 64 registers for Input Queue Instr Count - SLI_PKT_IN_DONE(0..63)_CNTS */
> -#define CN23XX_SLI_PKT_IN_DONE_CNTS_START64 0x10040
> -
> -/* 64 registers (64-bit) - ES, RO, NS, Arbitration for Input Queue Data &
> - * gather list fetches. SLI_PKT(0..63)_INPUT_CONTROL.
> - */
> -#define CN23XX_SLI_PKT_INPUT_CONTROL_START64 0x10000
> -
> -/* ------- Request Queue Macros --------- */
> -
> -/* Each Input Queue register is at a 16-byte Offset in BAR0 */
> -#define CN23XX_IQ_OFFSET 0x20000
> -
> -#define CN23XX_SLI_IQ_PKT_CONTROL64(iq) \
> - (CN23XX_SLI_PKT_INPUT_CONTROL_START64 + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_BASE_ADDR64(iq) \
> - (CN23XX_SLI_PKT_INSTR_BADDR_START64 + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_SIZE(iq) \
> - (CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_DOORBELL(iq) \
> - (CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_INSTR_COUNT64(iq) \
> - (CN23XX_SLI_PKT_IN_DONE_CNTS_START64 + ((iq) * CN23XX_IQ_OFFSET))
> -
> -/* Number of instructions to be read in one MAC read request.
> - * Setting to the max value (4).
> - */
> -#define CN23XX_PKT_INPUT_CTL_RDSIZE (3 << 25)
> -#define CN23XX_PKT_INPUT_CTL_IS_64B (1 << 24)
> -#define CN23XX_PKT_INPUT_CTL_RST (1 << 23)
> -#define CN23XX_PKT_INPUT_CTL_QUIET (1 << 28)
> -#define CN23XX_PKT_INPUT_CTL_RING_ENB (1 << 22)
> -#define CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP (1 << 6)
> -#define CN23XX_PKT_INPUT_CTL_USE_CSR (1 << 4)
> -#define CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP (2)
> -
> -/* These bits[47:44] select the Physical function number within the MAC */
> -#define CN23XX_PKT_INPUT_CTL_PF_NUM_POS 45
> -/* These bits[43:32] select the function number within the PF */
> -#define CN23XX_PKT_INPUT_CTL_VF_NUM_POS 32
> -
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -#define CN23XX_PKT_INPUT_CTL_MASK \
> - (CN23XX_PKT_INPUT_CTL_RDSIZE | \
> - CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
> - CN23XX_PKT_INPUT_CTL_USE_CSR)
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -#define CN23XX_PKT_INPUT_CTL_MASK \
> - (CN23XX_PKT_INPUT_CTL_RDSIZE | \
> - CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
> - CN23XX_PKT_INPUT_CTL_USE_CSR | \
> - CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP)
> -#endif
> -
> -/* ############################ OUTPUT QUEUE ######################### */
> -
> -/* 64 registers for Output queue control - SLI_PKT(0..63)_OUTPUT_CONTROL */
> -#define CN23XX_SLI_PKT_OUTPUT_CONTROL_START 0x10050
> -
> -/* 64 registers for Output queue buffer and info size
> - * SLI_PKT(0..63)_OUT_SIZE
> - */
> -#define CN23XX_SLI_PKT_OUT_SIZE 0x10060
> -
> -/* 64 registers for Output Queue Start Addr - SLI_PKT(0..63)_SLIST_BADDR */
> -#define CN23XX_SLI_SLIST_BADDR_START64 0x10070
> -
> -/* 64 registers for Output Queue Packet Credits
> - * SLI_PKT(0..63)_SLIST_BAOFF_DBELL
> - */
> -#define CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START 0x10080
> -
> -/* 64 registers for Output Queue size - SLI_PKT(0..63)_SLIST_FIFO_RSIZE */
> -#define CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START 0x10090
> -
> -/* 64 registers for Output Queue Packet Count - SLI_PKT(0..63)_CNTS */
> -#define CN23XX_SLI_PKT_CNTS_START 0x100B0
> -
> -/* Each Output Queue register is at a 16-byte Offset in BAR0 */
> -#define CN23XX_OQ_OFFSET 0x20000
> -
> -/* ------- Output Queue Macros --------- */
> -
> -#define CN23XX_SLI_OQ_PKT_CONTROL(oq) \
> - (CN23XX_SLI_PKT_OUTPUT_CONTROL_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_BASE_ADDR64(oq) \
> - (CN23XX_SLI_SLIST_BADDR_START64 + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_SIZE(oq) \
> - (CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq) \
> - (CN23XX_SLI_PKT_OUT_SIZE + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_PKTS_SENT(oq) \
> - (CN23XX_SLI_PKT_CNTS_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_PKTS_CREDIT(oq) \
> - (CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -/* ------------------ Masks ---------------- */
> -#define CN23XX_PKT_OUTPUT_CTL_IPTR (1 << 11)
> -#define CN23XX_PKT_OUTPUT_CTL_ES (1 << 9)
> -#define CN23XX_PKT_OUTPUT_CTL_NSR (1 << 8)
> -#define CN23XX_PKT_OUTPUT_CTL_ROR (1 << 7)
> -#define CN23XX_PKT_OUTPUT_CTL_DPTR (1 << 6)
> -#define CN23XX_PKT_OUTPUT_CTL_BMODE (1 << 5)
> -#define CN23XX_PKT_OUTPUT_CTL_ES_P (1 << 3)
> -#define CN23XX_PKT_OUTPUT_CTL_NSR_P (1 << 2)
> -#define CN23XX_PKT_OUTPUT_CTL_ROR_P (1 << 1)
> -#define CN23XX_PKT_OUTPUT_CTL_RING_ENB (1 << 0)
> -
> -/* Rings per Virtual Function [RO] */
> -#define CN23XX_PKT_INPUT_CTL_RPVF_MASK 0x3F
> -#define CN23XX_PKT_INPUT_CTL_RPVF_POS 48
> -
> -/* These bits[47:44][RO] give the Physical function
> - * number info within the MAC
> - */
> -#define CN23XX_PKT_INPUT_CTL_PF_NUM_MASK 0x7
> -
> -/* These bits[43:32][RO] give the virtual function
> - * number info within the PF
> - */
> -#define CN23XX_PKT_INPUT_CTL_VF_NUM_MASK 0x1FFF
> -
> -/* ######################### Mailbox Reg Macros ######################## */
> -#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START 0x10200
> -#define CN23XX_VF_SLI_PKT_MBOX_INT_START 0x10210
> -
> -#define CN23XX_SLI_MBOX_OFFSET 0x20000
> -#define CN23XX_SLI_MBOX_SIG_IDX_OFFSET 0x8
> -
> -#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG(q, idx) \
> - (CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START + \
> - ((q) * CN23XX_SLI_MBOX_OFFSET + \
> - (idx) * CN23XX_SLI_MBOX_SIG_IDX_OFFSET))
> -
> -#define CN23XX_VF_SLI_PKT_MBOX_INT(q) \
> - (CN23XX_VF_SLI_PKT_MBOX_INT_START + ((q) * CN23XX_SLI_MBOX_OFFSET))
> -
> -#endif /* _LIO_23XX_REG_H_ */
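
For reference, since this header is going away: the per-queue CN23XX_SLI_IQ_*/
CN23XX_SLI_OQ_* macros above are pure offset arithmetic, a fixed base plus a
0x20000 stride per queue index into BAR0. A standalone sketch of the same
computation (illustrative only, not part of this patch):

    /* Illustrative only: reproduces the offset arithmetic of the removed
     * doorbell macro.  Queue N's doorbell sits at the doorbell base plus
     * N times the per-queue stride, e.g. queue 2 -> 0x10020 + 2 * 0x20000.
     */
    #include <stdio.h>

    #define CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START 0x10020
    #define CN23XX_IQ_OFFSET                       0x20000

    #define CN23XX_SLI_IQ_DOORBELL(iq) \
        (CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START + ((iq) * CN23XX_IQ_OFFSET))

    int main(void)
    {
        for (unsigned int iq = 0; iq < 4; iq++)
            printf("IQ %u doorbell BAR0 offset: 0x%x\n",
                   iq, CN23XX_SLI_IQ_DOORBELL(iq));
        return 0;
    }

This is the offset cn23xx_vf_setup_iq_regs() adds to hw_addr when it caches
iq->doorbell_reg.
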
> diff --git a/drivers/net/liquidio/base/lio_23xx_vf.c b/drivers/net/liquidio/base/lio_23xx_vf.c
> deleted file mode 100644
> index c6b8310b71..0000000000
> --- a/drivers/net/liquidio/base/lio_23xx_vf.c
> +++ /dev/null
> @@ -1,513 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <string.h>
> -
> -#include <ethdev_driver.h>
> -#include <rte_cycles.h>
> -#include <rte_malloc.h>
> -
> -#include "lio_logs.h"
> -#include "lio_23xx_vf.h"
> -#include "lio_23xx_reg.h"
> -#include "lio_mbox.h"
> -
> -static int
> -cn23xx_vf_reset_io_queues(struct lio_device *lio_dev, uint32_t num_queues)
> -{
> - uint32_t loop = CN23XX_VF_BUSY_READING_REG_LOOP_COUNT;
> - uint64_t d64, q_no;
> - int ret_val = 0;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - for (q_no = 0; q_no < num_queues; q_no++) {
> - /* set RST bit to 1. This bit applies to both IQ and OQ */
> - d64 = lio_read_csr64(lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - d64 = d64 | CN23XX_PKT_INPUT_CTL_RST;
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - d64);
> - }
> -
> - /* wait until the RST bit is clear or the RST and QUIET bits are set */
> - for (q_no = 0; q_no < num_queues; q_no++) {
> - volatile uint64_t reg_val;
> -
> - reg_val = lio_read_csr64(lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - while ((reg_val & CN23XX_PKT_INPUT_CTL_RST) &&
> - !(reg_val & CN23XX_PKT_INPUT_CTL_QUIET) &&
> - loop) {
> - reg_val = lio_read_csr64(
> - lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - loop = loop - 1;
> - }
> -
> - if (loop == 0) {
> - lio_dev_err(lio_dev,
> - "clearing the reset reg failed or setting the quiet reg failed for qno: %lu\n",
> - (unsigned long)q_no);
> - return -1;
> - }
> -
> - reg_val = reg_val & ~CN23XX_PKT_INPUT_CTL_RST;
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - reg_val);
> -
> - reg_val = lio_read_csr64(
> - lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - if (reg_val & CN23XX_PKT_INPUT_CTL_RST) {
> - lio_dev_err(lio_dev,
> - "clearing the reset failed for qno: %lu\n",
> - (unsigned long)q_no);
> - ret_val = -1;
> - }
> - }
> -
> - return ret_val;
> -}
> -
> -static int
> -cn23xx_vf_setup_global_input_regs(struct lio_device *lio_dev)
> -{
> - uint64_t q_no;
> - uint64_t d64;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (cn23xx_vf_reset_io_queues(lio_dev,
> - lio_dev->sriov_info.rings_per_vf))
> - return -1;
> -
> - for (q_no = 0; q_no < (lio_dev->sriov_info.rings_per_vf); q_no++) {
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_DOORBELL(q_no),
> - 0xFFFFFFFF);
> -
> - d64 = lio_read_csr64(lio_dev,
> - CN23XX_SLI_IQ_INSTR_COUNT64(q_no));
> -
> - d64 &= 0xEFFFFFFFFFFFFFFFL;
> -
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_INSTR_COUNT64(q_no),
> - d64);
> -
> - /* Select ES, RO, NS, RDSIZE, DPTR Format#0 for
> - * the Input Queues
> - */
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - CN23XX_PKT_INPUT_CTL_MASK);
> - }
> -
> - return 0;
> -}
> -
> -static void
> -cn23xx_vf_setup_global_output_regs(struct lio_device *lio_dev)
> -{
> - uint32_t reg_val;
> - uint32_t q_no;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - for (q_no = 0; q_no < lio_dev->sriov_info.rings_per_vf; q_no++) {
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_CREDIT(q_no),
> - 0xFFFFFFFF);
> -
> - reg_val =
> - lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no));
> -
> - reg_val &= 0xEFFFFFFFFFFFFFFFL;
> -
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no), reg_val);
> -
> - reg_val =
> - lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
> -
> - /* set IPTR & DPTR */
> - reg_val |=
> - (CN23XX_PKT_OUTPUT_CTL_IPTR | CN23XX_PKT_OUTPUT_CTL_DPTR);
> -
> - /* reset BMODE */
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_BMODE);
> -
> - /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
> - * for Output Queue Scatter List
> - * reset ROR_P, NSR_P
> - */
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR_P);
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR_P);
> -
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ES_P);
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES_P);
> -#endif
> - /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
> - * for Output Queue Data
> - * reset ROR, NSR
> - */
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR);
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR);
> - /* set the ES bit */
> - reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES);
> -
> - /* write all the selected settings */
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no),
> - reg_val);
> - }
> -}
> -
> -static int
> -cn23xx_vf_setup_device_regs(struct lio_device *lio_dev)
> -{
> - PMD_INIT_FUNC_TRACE();
> -
> - if (cn23xx_vf_setup_global_input_regs(lio_dev))
> - return -1;
> -
> - cn23xx_vf_setup_global_output_regs(lio_dev);
> -
> - return 0;
> -}
> -
> -static void
> -cn23xx_vf_setup_iq_regs(struct lio_device *lio_dev, uint32_t iq_no)
> -{
> - struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> - uint64_t pkt_in_done = 0;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* Write the start of the input queue's ring and its size */
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_BASE_ADDR64(iq_no),
> - iq->base_addr_dma);
> - lio_write_csr(lio_dev, CN23XX_SLI_IQ_SIZE(iq_no), iq->nb_desc);
> -
> - /* Remember the doorbell & instruction count register addr
> - * for this queue
> - */
> - iq->doorbell_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_IQ_DOORBELL(iq_no);
> - iq->inst_cnt_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_IQ_INSTR_COUNT64(iq_no);
> - lio_dev_dbg(lio_dev, "InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n",
> - iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
> -
> - /* Store the current instruction counter (used in flush_iq
> - * calculation)
> - */
> - pkt_in_done = rte_read64(iq->inst_cnt_reg);
> -
> - /* Clear the count by writing back what we read, but don't
> - * enable data traffic here
> - */
> - rte_write64(pkt_in_done, iq->inst_cnt_reg);
> -}
> -
> -static void
> -cn23xx_vf_setup_oq_regs(struct lio_device *lio_dev, uint32_t oq_no)
> -{
> - struct lio_droq *droq = lio_dev->droq[oq_no];
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - lio_write_csr64(lio_dev, CN23XX_SLI_OQ_BASE_ADDR64(oq_no),
> - droq->desc_ring_dma);
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_SIZE(oq_no), droq->nb_desc);
> -
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq_no),
> - (droq->buffer_size | (OCTEON_RH_SIZE << 16)));
> -
> - /* Get the mapped address of the pkt_sent and pkts_credit regs */
> - droq->pkts_sent_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_OQ_PKTS_SENT(oq_no);
> - droq->pkts_credit_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_OQ_PKTS_CREDIT(oq_no);
> -}
> -
> -static void
> -cn23xx_vf_free_mbox(struct lio_device *lio_dev)
> -{
> - PMD_INIT_FUNC_TRACE();
> -
> - rte_free(lio_dev->mbox[0]);
> - lio_dev->mbox[0] = NULL;
> -
> - rte_free(lio_dev->mbox);
> - lio_dev->mbox = NULL;
> -}
> -
> -static int
> -cn23xx_vf_setup_mbox(struct lio_device *lio_dev)
> -{
> - struct lio_mbox *mbox;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (lio_dev->mbox == NULL) {
> - lio_dev->mbox = rte_zmalloc(NULL, sizeof(void *), 0);
> - if (lio_dev->mbox == NULL)
> - return -ENOMEM;
> - }
> -
> - mbox = rte_zmalloc(NULL, sizeof(struct lio_mbox), 0);
> - if (mbox == NULL) {
> - rte_free(lio_dev->mbox);
> - lio_dev->mbox = NULL;
> - return -ENOMEM;
> - }
> -
> - rte_spinlock_init(&mbox->lock);
> -
> - mbox->lio_dev = lio_dev;
> -
> - mbox->q_no = 0;
> -
> - mbox->state = LIO_MBOX_STATE_IDLE;
> -
> - /* VF mbox interrupt reg */
> - mbox->mbox_int_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_VF_SLI_PKT_MBOX_INT(0);
> - /* VF reads from SIG0 reg */
> - mbox->mbox_read_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 0);
> - /* VF writes into SIG1 reg */
> - mbox->mbox_write_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 1);
> -
> - lio_dev->mbox[0] = mbox;
> -
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -
> - return 0;
> -}
> -
> -static int
> -cn23xx_vf_enable_io_queues(struct lio_device *lio_dev)
> -{
> - uint32_t q_no;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - for (q_no = 0; q_no < lio_dev->num_iqs; q_no++) {
> - uint64_t reg_val;
> -
> - /* set the corresponding IQ IS_64B bit */
> - if (lio_dev->io_qmask.iq64B & (1ULL << q_no)) {
> - reg_val = lio_read_csr64(
> - lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - reg_val = reg_val | CN23XX_PKT_INPUT_CTL_IS_64B;
> - lio_write_csr64(lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - reg_val);
> - }
> -
> - /* set the corresponding IQ ENB bit */
> - if (lio_dev->io_qmask.iq & (1ULL << q_no)) {
> - reg_val = lio_read_csr64(
> - lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - reg_val = reg_val | CN23XX_PKT_INPUT_CTL_RING_ENB;
> - lio_write_csr64(lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - reg_val);
> - }
> - }
> - for (q_no = 0; q_no < lio_dev->num_oqs; q_no++) {
> - uint32_t reg_val;
> -
> - /* set the corresponding OQ ENB bit */
> - if (lio_dev->io_qmask.oq & (1ULL << q_no)) {
> - reg_val = lio_read_csr(
> - lio_dev,
> - CN23XX_SLI_OQ_PKT_CONTROL(q_no));
> - reg_val = reg_val | CN23XX_PKT_OUTPUT_CTL_RING_ENB;
> - lio_write_csr(lio_dev,
> - CN23XX_SLI_OQ_PKT_CONTROL(q_no),
> - reg_val);
> - }
> - }
> -
> - return 0;
> -}
> -
> -static void
> -cn23xx_vf_disable_io_queues(struct lio_device *lio_dev)
> -{
> - uint32_t num_queues;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* per HRM, rings can only be disabled via reset operation,
> - * NOT via SLI_PKT()_INPUT/OUTPUT_CONTROL[ENB]
> - */
> - num_queues = lio_dev->num_iqs;
> - if (num_queues < lio_dev->num_oqs)
> - num_queues = lio_dev->num_oqs;
> -
> - cn23xx_vf_reset_io_queues(lio_dev, num_queues);
> -}
> -
> -void
> -cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev)
> -{
> - struct lio_mbox_cmd mbox_cmd;
> -
> - memset(&mbox_cmd, 0, sizeof(struct lio_mbox_cmd));
> - mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
> - mbox_cmd.msg.s.resp_needed = 0;
> - mbox_cmd.msg.s.cmd = LIO_VF_FLR_REQUEST;
> - mbox_cmd.msg.s.len = 1;
> - mbox_cmd.q_no = 0;
> - mbox_cmd.recv_len = 0;
> - mbox_cmd.recv_status = 0;
> - mbox_cmd.fn = NULL;
> - mbox_cmd.fn_arg = 0;
> -
> - lio_mbox_write(lio_dev, &mbox_cmd);
> -}
> -
> -static void
> -cn23xx_pfvf_hs_callback(struct lio_device *lio_dev,
> - struct lio_mbox_cmd *cmd, void *arg)
> -{
> - uint32_t major = 0;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - rte_memcpy((uint8_t *)&lio_dev->pfvf_hsword, cmd->msg.s.params, 6);
> - if (cmd->recv_len > 1) {
> - struct lio_version *lio_ver = (struct lio_version *)cmd->data;
> -
> - major = lio_ver->major;
> - major = major << 16;
> - }
> -
> - rte_atomic64_set((rte_atomic64_t *)arg, major | 1);
> -}
> -
> -int
> -cn23xx_pfvf_handshake(struct lio_device *lio_dev)
> -{
> - struct lio_mbox_cmd mbox_cmd;
> - struct lio_version *lio_ver = (struct lio_version *)&mbox_cmd.data[0];
> - uint32_t q_no, count = 0;
> - rte_atomic64_t status;
> - uint32_t pfmajor;
> - uint32_t vfmajor;
> - uint32_t ret;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* Sending VF_ACTIVE indication to the PF driver */
> - lio_dev_dbg(lio_dev, "requesting info from PF\n");
> -
> - mbox_cmd.msg.mbox_msg64 = 0;
> - mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
> - mbox_cmd.msg.s.resp_needed = 1;
> - mbox_cmd.msg.s.cmd = LIO_VF_ACTIVE;
> - mbox_cmd.msg.s.len = 2;
> - mbox_cmd.data[0] = 0;
> - lio_ver->major = LIO_BASE_MAJOR_VERSION;
> - lio_ver->minor = LIO_BASE_MINOR_VERSION;
> - lio_ver->micro = LIO_BASE_MICRO_VERSION;
> - mbox_cmd.q_no = 0;
> - mbox_cmd.recv_len = 0;
> - mbox_cmd.recv_status = 0;
> - mbox_cmd.fn = (lio_mbox_callback)cn23xx_pfvf_hs_callback;
> - mbox_cmd.fn_arg = (void *)&status;
> -
> - if (lio_mbox_write(lio_dev, &mbox_cmd)) {
> - lio_dev_err(lio_dev, "Write to mailbox failed\n");
> - return -1;
> - }
> -
> - rte_atomic64_set(&status, 0);
> -
> - do {
> - rte_delay_ms(1);
> - } while ((rte_atomic64_read(&status) == 0) && (count++ < 10000));
> -
> - ret = rte_atomic64_read(&status);
> - if (ret == 0) {
> - lio_dev_err(lio_dev, "cn23xx_pfvf_handshake timeout\n");
> - return -1;
> - }
> -
> - for (q_no = 0; q_no < lio_dev->num_iqs; q_no++)
> - lio_dev->instr_queue[q_no]->txpciq.s.pkind =
> - lio_dev->pfvf_hsword.pkind;
> -
> - vfmajor = LIO_BASE_MAJOR_VERSION;
> - pfmajor = ret >> 16;
> - if (pfmajor != vfmajor) {
> - lio_dev_err(lio_dev,
> - "VF LiquidIO driver (major version %d) is not compatible with LiquidIO PF driver (major version %d)\n",
> - vfmajor, pfmajor);
> - ret = -EPERM;
> - } else {
> - lio_dev_dbg(lio_dev,
> - "VF LiquidIO driver (major version %d), LiquidIO PF driver (major version %d)\n",
> - vfmajor, pfmajor);
> - ret = 0;
> - }
> -
> - lio_dev_dbg(lio_dev, "got data from PF pkind is %d\n",
> - lio_dev->pfvf_hsword.pkind);
> -
> - return ret;
> -}
> -
> -void
> -cn23xx_vf_handle_mbox(struct lio_device *lio_dev)
> -{
> - uint64_t mbox_int_val;
> -
> - /* read and clear by writing 1 */
> - mbox_int_val = rte_read64(lio_dev->mbox[0]->mbox_int_reg);
> - rte_write64(mbox_int_val, lio_dev->mbox[0]->mbox_int_reg);
> - if (lio_mbox_read(lio_dev->mbox[0]))
> - lio_mbox_process_message(lio_dev->mbox[0]);
> -}
> -
> -int
> -cn23xx_vf_setup_device(struct lio_device *lio_dev)
> -{
> - uint64_t reg_val;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* INPUT_CONTROL[RPVF] gives the VF IOq count */
> - reg_val = lio_read_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(0));
> -
> - lio_dev->pf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_PF_NUM_POS) &
> - CN23XX_PKT_INPUT_CTL_PF_NUM_MASK;
> - lio_dev->vf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_VF_NUM_POS) &
> - CN23XX_PKT_INPUT_CTL_VF_NUM_MASK;
> -
> - reg_val = reg_val >> CN23XX_PKT_INPUT_CTL_RPVF_POS;
> -
> - lio_dev->sriov_info.rings_per_vf =
> - reg_val & CN23XX_PKT_INPUT_CTL_RPVF_MASK;
> -
> - lio_dev->default_config = lio_get_conf(lio_dev);
> - if (lio_dev->default_config == NULL)
> - return -1;
> -
> - lio_dev->fn_list.setup_iq_regs = cn23xx_vf_setup_iq_regs;
> - lio_dev->fn_list.setup_oq_regs = cn23xx_vf_setup_oq_regs;
> - lio_dev->fn_list.setup_mbox = cn23xx_vf_setup_mbox;
> - lio_dev->fn_list.free_mbox = cn23xx_vf_free_mbox;
> -
> - lio_dev->fn_list.setup_device_regs = cn23xx_vf_setup_device_regs;
> -
> - lio_dev->fn_list.enable_io_queues = cn23xx_vf_enable_io_queues;
> - lio_dev->fn_list.disable_io_queues = cn23xx_vf_disable_io_queues;
> -
> - return 0;
> -}
> -
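
One note on the PF/VF handshake being dropped above: cn23xx_pfvf_handshake()
polls an rte_atomic64_t that the mailbox callback fills with
(PF major version << 16) | 1, so bit 0 acts as the completion flag and the
upper bits carry the PF major used for the compatibility check. A minimal
sketch of that encoding (illustrative only, not driver code):

    /* Illustrative sketch of the removed handshake status word:
     * the callback packs the PF major into the upper 16 bits and sets
     * bit 0; the polling side treats 0 as "no response yet".
     */
    #include <stdint.h>
    #include <stdio.h>

    #define LIO_BASE_MAJOR_VERSION 1    /* VF's own major version */

    int main(void)
    {
        uint32_t pf_major = 1;    /* value reported by the PF */
        uint64_t status = ((uint64_t)pf_major << 16) | 1;    /* callback side */

        if (status == 0) {
            printf("handshake timed out\n");    /* poll loop outcome */
            return 1;
        }
        printf("PF major %u vs VF major %u: %s\n",
               (unsigned int)(status >> 16),
               (unsigned int)LIO_BASE_MAJOR_VERSION,
               (status >> 16) == LIO_BASE_MAJOR_VERSION ?
               "compatible" : "incompatible");
        return 0;
    }
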
> diff --git a/drivers/net/liquidio/base/lio_23xx_vf.h b/drivers/net/liquidio/base/lio_23xx_vf.h
> deleted file mode 100644
> index 8e5362db15..0000000000
> --- a/drivers/net/liquidio/base/lio_23xx_vf.h
> +++ /dev/null
> @@ -1,63 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_23XX_VF_H_
> -#define _LIO_23XX_VF_H_
> -
> -#include <stdio.h>
> -
> -#include "lio_struct.h"
> -
> -static const struct lio_config default_cn23xx_conf = {
> - .card_type = LIO_23XX,
> - .card_name = LIO_23XX_NAME,
> - /** IQ attributes */
> - .iq = {
> - .max_iqs = CN23XX_CFG_IO_QUEUES,
> - .pending_list_size =
> - (CN23XX_MAX_IQ_DESCRIPTORS * CN23XX_CFG_IO_QUEUES),
> - .instr_type = OCTEON_64BYTE_INSTR,
> - },
> -
> - /** OQ attributes */
> - .oq = {
> - .max_oqs = CN23XX_CFG_IO_QUEUES,
> - .info_ptr = OCTEON_OQ_INFOPTR_MODE,
> - .refill_threshold = CN23XX_OQ_REFIL_THRESHOLD,
> - },
> -
> - .num_nic_ports = CN23XX_DEFAULT_NUM_PORTS,
> - .num_def_rx_descs = CN23XX_MAX_OQ_DESCRIPTORS,
> - .num_def_tx_descs = CN23XX_MAX_IQ_DESCRIPTORS,
> - .def_rx_buf_size = CN23XX_OQ_BUF_SIZE,
> -};
> -
> -static inline const struct lio_config *
> -lio_get_conf(struct lio_device *lio_dev)
> -{
> - const struct lio_config *default_lio_conf = NULL;
> -
> - /* check the LIO Device model & return the corresponding lio
> - * configuration
> - */
> - default_lio_conf = &default_cn23xx_conf;
> -
> - if (default_lio_conf == NULL) {
> - lio_dev_err(lio_dev, "Configuration verification failed\n");
> - return NULL;
> - }
> -
> - return default_lio_conf;
> -}
> -
> -#define CN23XX_VF_BUSY_READING_REG_LOOP_COUNT 100000
> -
> -void cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev);
> -
> -int cn23xx_pfvf_handshake(struct lio_device *lio_dev);
> -
> -int cn23xx_vf_setup_device(struct lio_device *lio_dev);
> -
> -void cn23xx_vf_handle_mbox(struct lio_device *lio_dev);
> -#endif /* _LIO_23XX_VF_H_ */
> diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
> deleted file mode 100644
> index 5e119c1241..0000000000
> --- a/drivers/net/liquidio/base/lio_hw_defs.h
> +++ /dev/null
> @@ -1,239 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_HW_DEFS_H_
> -#define _LIO_HW_DEFS_H_
> -
> -#include <rte_io.h>
> -
> -#ifndef PCI_VENDOR_ID_CAVIUM
> -#define PCI_VENDOR_ID_CAVIUM 0x177D
> -#endif
> -
> -#define LIO_CN23XX_VF_VID 0x9712
> -
> -/* CN23xx subsystem device ids */
> -#define PCI_SUBSYS_DEV_ID_CN2350_210 0x0004
> -#define PCI_SUBSYS_DEV_ID_CN2360_210 0x0005
> -#define PCI_SUBSYS_DEV_ID_CN2360_225 0x0006
> -#define PCI_SUBSYS_DEV_ID_CN2350_225 0x0007
> -#define PCI_SUBSYS_DEV_ID_CN2350_210SVPN3 0x0008
> -#define PCI_SUBSYS_DEV_ID_CN2360_210SVPN3 0x0009
> -#define PCI_SUBSYS_DEV_ID_CN2350_210SVPT 0x000a
> -#define PCI_SUBSYS_DEV_ID_CN2360_210SVPT 0x000b
> -
> -/* --------------------------CONFIG VALUES------------------------ */
> -
> -/* CN23xx IQ configuration macros */
> -#define CN23XX_MAX_RINGS_PER_PF 64
> -#define CN23XX_MAX_RINGS_PER_VF 8
> -
> -#define CN23XX_MAX_INPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
> -#define CN23XX_MAX_IQ_DESCRIPTORS 512
> -#define CN23XX_MIN_IQ_DESCRIPTORS 128
> -
> -#define CN23XX_MAX_OUTPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
> -#define CN23XX_MAX_OQ_DESCRIPTORS 512
> -#define CN23XX_MIN_OQ_DESCRIPTORS 128
> -#define CN23XX_OQ_BUF_SIZE 1536
> -
> -#define CN23XX_OQ_REFIL_THRESHOLD 16
> -
> -#define CN23XX_DEFAULT_NUM_PORTS 1
> -
> -#define CN23XX_CFG_IO_QUEUES CN23XX_MAX_RINGS_PER_PF
> -
> -/* common OCTEON configuration macros */
> -#define OCTEON_64BYTE_INSTR 64
> -#define OCTEON_OQ_INFOPTR_MODE 1
> -
> -/* Max IOQs per LIO Link */
> -#define LIO_MAX_IOQS_PER_IF 64
> -
> -/* Wait time in milliseconds for FLR */
> -#define LIO_PCI_FLR_WAIT 100
> -
> -enum lio_card_type {
> - LIO_23XX /* 23xx */
> -};
> -
> -#define LIO_23XX_NAME "23xx"
> -
> -#define LIO_DEV_RUNNING 0xc
> -
> -#define LIO_OQ_REFILL_THRESHOLD_CFG(cfg) \
> - ((cfg)->default_config->oq.refill_threshold)
> -#define LIO_NUM_DEF_TX_DESCS_CFG(cfg) \
> - ((cfg)->default_config->num_def_tx_descs)
> -
> -#define LIO_IQ_INSTR_TYPE(cfg) ((cfg)->default_config->iq.instr_type)
> -
> -/* The following config values are fixed and should not be modified. */
> -
> -/* Maximum number of Instruction queues */
> -#define LIO_MAX_INSTR_QUEUES(lio_dev) CN23XX_MAX_RINGS_PER_VF
> -
> -#define LIO_MAX_POSSIBLE_INSTR_QUEUES CN23XX_MAX_INPUT_QUEUES
> -#define LIO_MAX_POSSIBLE_OUTPUT_QUEUES CN23XX_MAX_OUTPUT_QUEUES
> -
> -#define LIO_DEVICE_NAME_LEN 32
> -#define LIO_BASE_MAJOR_VERSION 1
> -#define LIO_BASE_MINOR_VERSION 5
> -#define LIO_BASE_MICRO_VERSION 1
> -
> -#define LIO_FW_VERSION_LENGTH 32
> -
> -#define LIO_Q_RECONF_MIN_VERSION "1.7.0"
> -#define LIO_VF_TRUST_MIN_VERSION "1.7.1"
> -
> -/** Tag types used by Octeon cores in their work. */
> -enum octeon_tag_type {
> - OCTEON_ORDERED_TAG = 0,
> - OCTEON_ATOMIC_TAG = 1,
> -};
> -
> -/* pre-defined host->NIC tag values */
> -#define LIO_CONTROL (0x11111110)
> -#define LIO_DATA(i) (0x11111111 + (i))
> -
> -/* used for NIC operations */
> -#define LIO_OPCODE 1
> -
> -/* Subcodes are used by host driver/apps to identify the sub-operation
> - * for the core. They only need to be unique for a given subsystem.
> - */
> -#define LIO_OPCODE_SUBCODE(op, sub) \
> - ((((op) & 0x0f) << 8) | ((sub) & 0x7f))
> -
> -/** LIO_OPCODE subcodes */
> -/* This subcode is sent by core PCI driver to indicate cores are ready. */
> -#define LIO_OPCODE_NW_DATA 0x02 /* network packet data */
> -#define LIO_OPCODE_CMD 0x03
> -#define LIO_OPCODE_INFO 0x04
> -#define LIO_OPCODE_PORT_STATS 0x05
> -#define LIO_OPCODE_IF_CFG 0x09
> -
> -#define LIO_MIN_RX_BUF_SIZE 64
> -#define LIO_MAX_RX_PKTLEN (64 * 1024)
> -
> -/* NIC Command types */
> -#define LIO_CMD_CHANGE_MTU 0x1
> -#define LIO_CMD_CHANGE_DEVFLAGS 0x3
> -#define LIO_CMD_RX_CTL 0x4
> -#define LIO_CMD_CLEAR_STATS 0x6
> -#define LIO_CMD_SET_RSS 0xD
> -#define LIO_CMD_TNL_RX_CSUM_CTL 0x10
> -#define LIO_CMD_TNL_TX_CSUM_CTL 0x11
> -#define LIO_CMD_ADD_VLAN_FILTER 0x17
> -#define LIO_CMD_DEL_VLAN_FILTER 0x18
> -#define LIO_CMD_VXLAN_PORT_CONFIG 0x19
> -#define LIO_CMD_QUEUE_COUNT_CTL 0x1f
> -
> -#define LIO_CMD_VXLAN_PORT_ADD 0x0
> -#define LIO_CMD_VXLAN_PORT_DEL 0x1
> -#define LIO_CMD_RXCSUM_ENABLE 0x0
> -#define LIO_CMD_TXCSUM_ENABLE 0x0
> -
> -/* RX(packets coming from wire) Checksum verification flags */
> -/* TCP/UDP csum */
> -#define LIO_L4_CSUM_VERIFIED 0x1
> -#define LIO_IP_CSUM_VERIFIED 0x2
> -
> -/* RSS */
> -#define LIO_RSS_PARAM_DISABLE_RSS 0x10
> -#define LIO_RSS_PARAM_HASH_KEY_UNCHANGED 0x08
> -#define LIO_RSS_PARAM_ITABLE_UNCHANGED 0x04
> -#define LIO_RSS_PARAM_HASH_INFO_UNCHANGED 0x02
> -
> -#define LIO_RSS_HASH_IPV4 0x100
> -#define LIO_RSS_HASH_TCP_IPV4 0x200
> -#define LIO_RSS_HASH_IPV6 0x400
> -#define LIO_RSS_HASH_TCP_IPV6 0x1000
> -#define LIO_RSS_HASH_IPV6_EX 0x800
> -#define LIO_RSS_HASH_TCP_IPV6_EX 0x2000
> -
> -#define LIO_RSS_OFFLOAD_ALL ( \
> - LIO_RSS_HASH_IPV4 | \
> - LIO_RSS_HASH_TCP_IPV4 | \
> - LIO_RSS_HASH_IPV6 | \
> - LIO_RSS_HASH_TCP_IPV6 | \
> - LIO_RSS_HASH_IPV6_EX | \
> - LIO_RSS_HASH_TCP_IPV6_EX)
> -
> -#define LIO_RSS_MAX_TABLE_SZ 128
> -#define LIO_RSS_MAX_KEY_SZ 40
> -#define LIO_RSS_PARAM_SIZE 16
> -
> -/* Interface flags communicated between host driver and core app. */
> -enum lio_ifflags {
> - LIO_IFFLAG_PROMISC = 0x01,
> - LIO_IFFLAG_ALLMULTI = 0x02,
> - LIO_IFFLAG_UNICAST = 0x10
> -};
> -
> -/* Routines for reading and writing CSRs */
> -#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
> -#define lio_write_csr(lio_dev, reg_off, value) \
> - do { \
> - typeof(lio_dev) _dev = lio_dev; \
> - typeof(reg_off) _reg_off = reg_off; \
> - typeof(value) _value = value; \
> - PMD_REGS_LOG(_dev, \
> - "Write32: Reg: 0x%08lx Val: 0x%08lx\n", \
> - (unsigned long)_reg_off, \
> - (unsigned long)_value); \
> - rte_write32(_value, _dev->hw_addr + _reg_off); \
> - } while (0)
> -
> -#define lio_write_csr64(lio_dev, reg_off, val64) \
> - do { \
> - typeof(lio_dev) _dev = lio_dev; \
> - typeof(reg_off) _reg_off = reg_off; \
> - typeof(val64) _val64 = val64; \
> - PMD_REGS_LOG( \
> - _dev, \
> - "Write64: Reg: 0x%08lx Val: 0x%016llx\n", \
> - (unsigned long)_reg_off, \
> - (unsigned long long)_val64); \
> - rte_write64(_val64, _dev->hw_addr + _reg_off); \
> - } while (0)
> -
> -#define lio_read_csr(lio_dev, reg_off) \
> - ({ \
> - typeof(lio_dev) _dev = lio_dev; \
> - typeof(reg_off) _reg_off = reg_off; \
> - uint32_t val = rte_read32(_dev->hw_addr + _reg_off); \
> - PMD_REGS_LOG(_dev, \
> - "Read32: Reg: 0x%08lx Val: 0x%08lx\n", \
> - (unsigned long)_reg_off, \
> - (unsigned long)val); \
> - val; \
> - })
> -
> -#define lio_read_csr64(lio_dev, reg_off) \
> - ({ \
> - typeof(lio_dev) _dev = lio_dev; \
> - typeof(reg_off) _reg_off = reg_off; \
> - uint64_t val64 = rte_read64(_dev->hw_addr + _reg_off); \
> - PMD_REGS_LOG( \
> - _dev, \
> - "Read64: Reg: 0x%08lx Val: 0x%016llx\n", \
> - (unsigned long)_reg_off, \
> - (unsigned long long)val64); \
> - val64; \
> - })
> -#else
> -#define lio_write_csr(lio_dev, reg_off, value) \
> - rte_write32(value, (lio_dev)->hw_addr + (reg_off))
> -
> -#define lio_write_csr64(lio_dev, reg_off, val64) \
> - rte_write64(val64, (lio_dev)->hw_addr + (reg_off))
> -
> -#define lio_read_csr(lio_dev, reg_off) \
> - rte_read32((lio_dev)->hw_addr + (reg_off))
> -
> -#define lio_read_csr64(lio_dev, reg_off) \
> - rte_read64((lio_dev)->hw_addr + (reg_off))
> -#endif
> -#endif /* _LIO_HW_DEFS_H_ */
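
A remark on the CSR helpers at the end of this header: with
RTE_LIBRTE_LIO_DEBUG_REGS they wrap each access in a PMD_REGS_LOG trace,
otherwise they collapse to a bare rte_read32/64 or rte_write32/64 at
hw_addr + reg_off. The same dual-definition pattern, reduced to a runnable
sketch with a heap buffer standing in for the BAR0 mapping (illustrative
only; the demo_* names are made up, rte_write64/rte_read64 are what the
driver really used):

    /* Illustrative only: debug vs. plain CSR access, as in lio_hw_defs.h,
     * but against a calloc'd buffer instead of a PCI BAR so it can run.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint8_t *hw_addr;    /* stand-in for lio_dev->hw_addr */

    #ifdef DEMO_DEBUG_REGS
    #define demo_write_csr64(reg_off, val64) \
        do { \
            printf("Write64: Reg: 0x%08lx Val: 0x%016llx\n", \
                   (unsigned long)(reg_off), (unsigned long long)(val64)); \
            *(volatile uint64_t *)(hw_addr + (reg_off)) = (val64); \
        } while (0)
    #else
    #define demo_write_csr64(reg_off, val64) \
        (*(volatile uint64_t *)(hw_addr + (reg_off)) = (val64))
    #endif

    #define demo_read_csr64(reg_off) \
        (*(volatile uint64_t *)(hw_addr + (reg_off)))

    int main(void)
    {
        hw_addr = calloc(1, 0x1000);    /* fake 4 KB register window */
        if (hw_addr == NULL)
            return 1;

        demo_write_csr64(0x20, 0xFFFFFFFFULL);    /* e.g. ring a doorbell */
        printf("read back: 0x%llx\n",
               (unsigned long long)demo_read_csr64(0x20));

        free(hw_addr);
        return 0;
    }
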
> diff --git a/drivers/net/liquidio/base/lio_mbox.c b/drivers/net/liquidio/base/lio_mbox.c
> deleted file mode 100644
> index 2ac2b1b334..0000000000
> --- a/drivers/net/liquidio/base/lio_mbox.c
> +++ /dev/null
> @@ -1,246 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <ethdev_driver.h>
> -#include <rte_cycles.h>
> -
> -#include "lio_logs.h"
> -#include "lio_struct.h"
> -#include "lio_mbox.h"
> -
> -/**
> - * lio_mbox_read:
> - * @mbox: Pointer to the mailbox
> - *
> - * Reads the 8 bytes of data from the mbox register and
> - * writes back the acknowledgment indicating completion of the read.
> - */
> -int
> -lio_mbox_read(struct lio_mbox *mbox)
> -{
> - union lio_mbox_message msg;
> - int ret = 0;
> -
> - msg.mbox_msg64 = rte_read64(mbox->mbox_read_reg);
> -
> - if ((msg.mbox_msg64 == LIO_PFVFACK) || (msg.mbox_msg64 == LIO_PFVFSIG))
> - return 0;
> -
> - if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
> - mbox->mbox_req.data[mbox->mbox_req.recv_len - 1] =
> - msg.mbox_msg64;
> - mbox->mbox_req.recv_len++;
> - } else {
> - if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
> - mbox->mbox_resp.data[mbox->mbox_resp.recv_len - 1] =
> - msg.mbox_msg64;
> - mbox->mbox_resp.recv_len++;
> - } else {
> - if ((mbox->state & LIO_MBOX_STATE_IDLE) &&
> - (msg.s.type == LIO_MBOX_REQUEST)) {
> - mbox->state &= ~LIO_MBOX_STATE_IDLE;
> - mbox->state |= LIO_MBOX_STATE_REQ_RECEIVING;
> - mbox->mbox_req.msg.mbox_msg64 = msg.mbox_msg64;
> - mbox->mbox_req.q_no = mbox->q_no;
> - mbox->mbox_req.recv_len = 1;
> - } else {
> - if ((mbox->state &
> - LIO_MBOX_STATE_RES_PENDING) &&
> - (msg.s.type == LIO_MBOX_RESPONSE)) {
> - mbox->state &=
> - ~LIO_MBOX_STATE_RES_PENDING;
> - mbox->state |=
> - LIO_MBOX_STATE_RES_RECEIVING;
> - mbox->mbox_resp.msg.mbox_msg64 =
> - msg.mbox_msg64;
> - mbox->mbox_resp.q_no = mbox->q_no;
> - mbox->mbox_resp.recv_len = 1;
> - } else {
> - rte_write64(LIO_PFVFERR,
> - mbox->mbox_read_reg);
> - mbox->state |= LIO_MBOX_STATE_ERROR;
> - return -1;
> - }
> - }
> - }
> - }
> -
> - if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
> - if (mbox->mbox_req.recv_len < msg.s.len) {
> - ret = 0;
> - } else {
> - mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVING;
> - mbox->state |= LIO_MBOX_STATE_REQ_RECEIVED;
> - ret = 1;
> - }
> - } else {
> - if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
> - if (mbox->mbox_resp.recv_len < msg.s.len) {
> - ret = 0;
> - } else {
> - mbox->state &= ~LIO_MBOX_STATE_RES_RECEIVING;
> - mbox->state |= LIO_MBOX_STATE_RES_RECEIVED;
> - ret = 1;
> - }
> - } else {
> - RTE_ASSERT(0);
> - }
> - }
> -
> - rte_write64(LIO_PFVFACK, mbox->mbox_read_reg);
> -
> - return ret;
> -}
> -
> -/**
> - * lio_mbox_write:
> - * @lio_dev: Pointer to the lio device
> - * @mbox_cmd: Cmd to send to the mailbox.
> - *
> - * Populates the queue-specific mbox structure
> - * with the cmd information and writes the cmd
> - * to the mbox register.
> - */
> -int
> -lio_mbox_write(struct lio_device *lio_dev,
> - struct lio_mbox_cmd *mbox_cmd)
> -{
> - struct lio_mbox *mbox = lio_dev->mbox[mbox_cmd->q_no];
> - uint32_t count, i, ret = LIO_MBOX_STATUS_SUCCESS;
> -
> - if ((mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) &&
> - !(mbox->state & LIO_MBOX_STATE_REQ_RECEIVED))
> - return LIO_MBOX_STATUS_FAILED;
> -
> - if ((mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) &&
> - !(mbox->state & LIO_MBOX_STATE_IDLE))
> - return LIO_MBOX_STATUS_BUSY;
> -
> - if (mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) {
> - rte_memcpy(&mbox->mbox_resp, mbox_cmd,
> - sizeof(struct lio_mbox_cmd));
> - mbox->state = LIO_MBOX_STATE_RES_PENDING;
> - }
> -
> - count = 0;
> -
> - while (rte_read64(mbox->mbox_write_reg) != LIO_PFVFSIG) {
> - rte_delay_ms(1);
> - if (count++ == 1000) {
> - ret = LIO_MBOX_STATUS_FAILED;
> - break;
> - }
> - }
> -
> - if (ret == LIO_MBOX_STATUS_SUCCESS) {
> - rte_write64(mbox_cmd->msg.mbox_msg64, mbox->mbox_write_reg);
> - for (i = 0; i < (uint32_t)(mbox_cmd->msg.s.len - 1); i++) {
> - count = 0;
> - while (rte_read64(mbox->mbox_write_reg) !=
> - LIO_PFVFACK) {
> - rte_delay_ms(1);
> - if (count++ == 1000) {
> - ret = LIO_MBOX_STATUS_FAILED;
> - break;
> - }
> - }
> - rte_write64(mbox_cmd->data[i], mbox->mbox_write_reg);
> - }
> - }
> -
> - if (mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) {
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> - } else {
> - if ((!mbox_cmd->msg.s.resp_needed) ||
> - (ret == LIO_MBOX_STATUS_FAILED)) {
> - mbox->state &= ~LIO_MBOX_STATE_RES_PENDING;
> - if (!(mbox->state & (LIO_MBOX_STATE_REQ_RECEIVING |
> - LIO_MBOX_STATE_REQ_RECEIVED)))
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - }
> - }
> -
> - return ret;
> -}
> -
> -/**
> - * lio_mbox_process_cmd:
> - * @mbox: Pointer to the mailbox
> - * @mbox_cmd: Pointer to the command received
> - *
> - * Processes the cmd received in the mbox.
> - */
> -static int
> -lio_mbox_process_cmd(struct lio_mbox *mbox,
> - struct lio_mbox_cmd *mbox_cmd)
> -{
> - struct lio_device *lio_dev = mbox->lio_dev;
> -
> - if (mbox_cmd->msg.s.cmd == LIO_CORES_CRASHED)
> - lio_dev_err(lio_dev, "Octeon core(s) crashed or got stuck!\n");
> -
> - return 0;
> -}
> -
> -/**
> - * Process the received mbox message.
> - */
> -int
> -lio_mbox_process_message(struct lio_mbox *mbox)
> -{
> - struct lio_mbox_cmd mbox_cmd;
> -
> - if (mbox->state & LIO_MBOX_STATE_ERROR) {
> - if (mbox->state & (LIO_MBOX_STATE_RES_PENDING |
> - LIO_MBOX_STATE_RES_RECEIVING)) {
> - rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
> - sizeof(struct lio_mbox_cmd));
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> - mbox_cmd.recv_status = 1;
> - if (mbox_cmd.fn)
> - mbox_cmd.fn(mbox->lio_dev, &mbox_cmd,
> - mbox_cmd.fn_arg);
> -
> - return 0;
> - }
> -
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -
> - return 0;
> - }
> -
> - if (mbox->state & LIO_MBOX_STATE_RES_RECEIVED) {
> - rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
> - sizeof(struct lio_mbox_cmd));
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> - mbox_cmd.recv_status = 0;
> - if (mbox_cmd.fn)
> - mbox_cmd.fn(mbox->lio_dev, &mbox_cmd, mbox_cmd.fn_arg);
> -
> - return 0;
> - }
> -
> - if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVED) {
> - rte_memcpy(&mbox_cmd, &mbox->mbox_req,
> - sizeof(struct lio_mbox_cmd));
> - if (!mbox_cmd.msg.s.resp_needed) {
> - mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVED;
> - if (!(mbox->state & LIO_MBOX_STATE_RES_PENDING))
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> - }
> -
> - lio_mbox_process_cmd(mbox, &mbox_cmd);
> -
> - return 0;
> - }
> -
> - RTE_ASSERT(0);
> -
> - return 0;
> -}
> diff --git a/drivers/net/liquidio/base/lio_mbox.h b/drivers/net/liquidio/base/lio_mbox.h
> deleted file mode 100644
> index 457917e91f..0000000000
> --- a/drivers/net/liquidio/base/lio_mbox.h
> +++ /dev/null
> @@ -1,102 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_MBOX_H_
> -#define _LIO_MBOX_H_
> -
> -#include <stdint.h>
> -
> -#include <rte_spinlock.h>
> -
> -/* Macros for Mail Box Communication */
> -
> -#define LIO_MBOX_DATA_MAX 32
> -
> -#define LIO_VF_ACTIVE 0x1
> -#define LIO_VF_FLR_REQUEST 0x2
> -#define LIO_CORES_CRASHED 0x3
> -
> -/* Macro for Read acknowledgment */
> -#define LIO_PFVFACK 0xffffffffffffffff
> -#define LIO_PFVFSIG 0x1122334455667788
> -#define LIO_PFVFERR 0xDEADDEADDEADDEAD
> -
> -enum lio_mbox_cmd_status {
> - LIO_MBOX_STATUS_SUCCESS = 0,
> - LIO_MBOX_STATUS_FAILED = 1,
> - LIO_MBOX_STATUS_BUSY = 2
> -};
> -
> -enum lio_mbox_message_type {
> - LIO_MBOX_REQUEST = 0,
> - LIO_MBOX_RESPONSE = 1
> -};
> -
> -union lio_mbox_message {
> - uint64_t mbox_msg64;
> - struct {
> - uint16_t type : 1;
> - uint16_t resp_needed : 1;
> - uint16_t cmd : 6;
> - uint16_t len : 8;
> - uint8_t params[6];
> - } s;
> -};
> -
> -typedef void (*lio_mbox_callback)(void *, void *, void *);
> -
> -struct lio_mbox_cmd {
> - union lio_mbox_message msg;
> - uint64_t data[LIO_MBOX_DATA_MAX];
> - uint32_t q_no;
> - uint32_t recv_len;
> - uint32_t recv_status;
> - lio_mbox_callback fn;
> - void *fn_arg;
> -};
> -
> -enum lio_mbox_state {
> - LIO_MBOX_STATE_IDLE = 1,
> - LIO_MBOX_STATE_REQ_RECEIVING = 2,
> - LIO_MBOX_STATE_REQ_RECEIVED = 4,
> - LIO_MBOX_STATE_RES_PENDING = 8,
> - LIO_MBOX_STATE_RES_RECEIVING = 16,
> - LIO_MBOX_STATE_RES_RECEIVED = 16,
> - LIO_MBOX_STATE_ERROR = 32
> -};
> -
> -struct lio_mbox {
> - /* A spinlock to protect access to this q_mbox. */
> - rte_spinlock_t lock;
> -
> - struct lio_device *lio_dev;
> -
> - uint32_t q_no;
> -
> - enum lio_mbox_state state;
> -
> - /* SLI_MAC_PF_MBOX_INT for PF, SLI_PKT_MBOX_INT for VF. */
> - void *mbox_int_reg;
> -
> - /* SLI_PKT_PF_VF_MBOX_SIG(0) for PF,
> - * SLI_PKT_PF_VF_MBOX_SIG(1) for VF.
> - */
> - void *mbox_write_reg;
> -
> - /* SLI_PKT_PF_VF_MBOX_SIG(1) for PF,
> - * SLI_PKT_PF_VF_MBOX_SIG(0) for VF.
> - */
> - void *mbox_read_reg;
> -
> - struct lio_mbox_cmd mbox_req;
> -
> - struct lio_mbox_cmd mbox_resp;
> -
> -};
> -
> -int lio_mbox_read(struct lio_mbox *mbox);
> -int lio_mbox_write(struct lio_device *lio_dev,
> - struct lio_mbox_cmd *mbox_cmd);
> -int lio_mbox_process_message(struct lio_mbox *mbox);
> -#endif /* _LIO_MBOX_H_ */
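
For anyone reading the mailbox code above one last time: every exchange is
framed as one union lio_mbox_message word (type/resp_needed/cmd/len bitfields
plus six parameter bytes) followed by msg.s.len - 1 raw 64-bit data words,
which lio_mbox_write()/lio_mbox_read() shuttle one at a time through the SIG
registers. A small sketch of building such a word, mirroring the removed
definitions (illustrative only):

    /* Illustrative sketch: compose a LIO_VF_ACTIVE request word the way
     * the removed cn23xx_pfvf_handshake() does, using the same union
     * layout as lio_mbox.h.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define LIO_MBOX_REQUEST 0
    #define LIO_VF_ACTIVE    0x1

    union lio_mbox_message {
        uint64_t mbox_msg64;
        struct {
            uint16_t type : 1;
            uint16_t resp_needed : 1;
            uint16_t cmd : 6;
            uint16_t len : 8;
            uint8_t params[6];
        } s;
    };

    int main(void)
    {
        union lio_mbox_message msg;

        msg.mbox_msg64 = 0;
        msg.s.type = LIO_MBOX_REQUEST;
        msg.s.resp_needed = 1;
        msg.s.cmd = LIO_VF_ACTIVE;
        msg.s.len = 2;    /* this word plus one 64-bit data word */

        /* This value is what gets written to the mbox SIG register. */
        printf("mbox request word: 0x%016llx\n",
               (unsigned long long)msg.mbox_msg64);
        return 0;
    }
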
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> deleted file mode 100644
> index ebcfbb1a5c..0000000000
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ /dev/null
> @@ -1,2147 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <rte_string_fns.h>
> -#include <ethdev_driver.h>
> -#include <ethdev_pci.h>
> -#include <rte_cycles.h>
> -#include <rte_malloc.h>
> -#include <rte_alarm.h>
> -#include <rte_ether.h>
> -
> -#include "lio_logs.h"
> -#include "lio_23xx_vf.h"
> -#include "lio_ethdev.h"
> -#include "lio_rxtx.h"
> -
> -/* Default RSS key in use */
> -static uint8_t lio_rss_key[40] = {
> - 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
> - 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
> - 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
> - 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
> - 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
> -};
> -
> -static const struct rte_eth_desc_lim lio_rx_desc_lim = {
> - .nb_max = CN23XX_MAX_OQ_DESCRIPTORS,
> - .nb_min = CN23XX_MIN_OQ_DESCRIPTORS,
> - .nb_align = 1,
> -};
> -
> -static const struct rte_eth_desc_lim lio_tx_desc_lim = {
> - .nb_max = CN23XX_MAX_IQ_DESCRIPTORS,
> - .nb_min = CN23XX_MIN_IQ_DESCRIPTORS,
> - .nb_align = 1,
> -};
> -
> -/* Wait for control command to reach nic. */
> -static uint16_t
> -lio_wait_for_ctrl_cmd(struct lio_device *lio_dev,
> - struct lio_dev_ctrl_cmd *ctrl_cmd)
> -{
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> -
> - while ((ctrl_cmd->cond == 0) && --timeout) {
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> - rte_delay_ms(1);
> - }
> -
> - return !timeout;
> -}
> -
> -/**
> - * \brief Send Rx control command
> - * @param eth_dev Pointer to the structure rte_eth_dev
> - * @param start_stop whether to start or stop
> - */
> -static int
> -lio_send_rx_ctrl_cmd(struct rte_eth_dev *eth_dev, int start_stop)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - /* flush added to prevent cmd failure
> - * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_RX_CTL;
> - ctrl_pkt.ncmd.s.param1 = start_stop;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send RX Control message\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "RX Control command timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -/* store statistics names and their offsets in the stats structure */
> -struct rte_lio_xstats_name_off {
> - char name[RTE_ETH_XSTATS_NAME_SIZE];
> - unsigned int offset;
> -};
> -
> -static const struct rte_lio_xstats_name_off rte_lio_stats_strings[] = {
> - {"rx_pkts", offsetof(struct octeon_rx_stats, total_rcvd)},
> - {"rx_bytes", offsetof(struct octeon_rx_stats, bytes_rcvd)},
> - {"rx_broadcast_pkts", offsetof(struct octeon_rx_stats, total_bcst)},
> - {"rx_multicast_pkts", offsetof(struct octeon_rx_stats, total_mcst)},
> - {"rx_flow_ctrl_pkts", offsetof(struct octeon_rx_stats, ctl_rcvd)},
> - {"rx_fifo_err", offsetof(struct octeon_rx_stats, fifo_err)},
> - {"rx_dmac_drop", offsetof(struct octeon_rx_stats, dmac_drop)},
> - {"rx_fcs_err", offsetof(struct octeon_rx_stats, fcs_err)},
> - {"rx_jabber_err", offsetof(struct octeon_rx_stats, jabber_err)},
> - {"rx_l2_err", offsetof(struct octeon_rx_stats, l2_err)},
> - {"rx_vxlan_pkts", offsetof(struct octeon_rx_stats, fw_rx_vxlan)},
> - {"rx_vxlan_err", offsetof(struct octeon_rx_stats, fw_rx_vxlan_err)},
> - {"rx_lro_pkts", offsetof(struct octeon_rx_stats, fw_lro_pkts)},
> - {"tx_pkts", (offsetof(struct octeon_tx_stats, total_pkts_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_bytes", (offsetof(struct octeon_tx_stats, total_bytes_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_broadcast_pkts",
> - (offsetof(struct octeon_tx_stats, bcast_pkts_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_multicast_pkts",
> - (offsetof(struct octeon_tx_stats, mcast_pkts_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_flow_ctrl_pkts", (offsetof(struct octeon_tx_stats, ctl_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_fifo_err", (offsetof(struct octeon_tx_stats, fifo_err)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_total_collisions", (offsetof(struct octeon_tx_stats,
> - total_collisions)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_tso", (offsetof(struct octeon_tx_stats, fw_tso)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_vxlan_pkts", (offsetof(struct octeon_tx_stats, fw_tx_vxlan)) +
> - sizeof(struct octeon_rx_stats)},
> -};
> -
> -#define LIO_NB_XSTATS RTE_DIM(rte_lio_stats_strings)
> -
> -/* Get hw stats of the port */
> -static int
> -lio_dev_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
> - unsigned int n)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> - struct octeon_link_stats *hw_stats;
> - struct lio_link_stats_resp *resp;
> - struct lio_soft_command *sc;
> - uint32_t resp_size;
> - unsigned int i;
> - int retval;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - if (n < LIO_NB_XSTATS)
> - return LIO_NB_XSTATS;
> -
> - resp_size = sizeof(struct lio_link_stats_resp);
> - sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
> - if (sc == NULL)
> - return -ENOMEM;
> -
> - resp = (struct lio_link_stats_resp *)sc->virtrptr;
> - lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
> - LIO_OPCODE_PORT_STATS, 0, 0, 0);
> -
> - /* Setting wait time in seconds */
> - sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
> -
> - retval = lio_send_soft_command(lio_dev, sc);
> - if (retval == LIO_IQ_SEND_FAILED) {
> - lio_dev_err(lio_dev, "failed to get port stats from firmware. status: %x\n",
> - retval);
> - goto get_stats_fail;
> - }
> -
> - while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
> - lio_process_ordered_list(lio_dev);
> - rte_delay_ms(1);
> - }
> -
> - retval = resp->status;
> - if (retval) {
> - lio_dev_err(lio_dev, "failed to get port stats from firmware\n");
> - goto get_stats_fail;
> - }
> -
> - lio_swap_8B_data((uint64_t *)(&resp->link_stats),
> - sizeof(struct octeon_link_stats) >> 3);
> -
> - hw_stats = &resp->link_stats;
> -
> - for (i = 0; i < LIO_NB_XSTATS; i++) {
> - xstats[i].id = i;
> - xstats[i].value =
> - *(uint64_t *)(((char *)hw_stats) +
> - rte_lio_stats_strings[i].offset);
> - }
> -
> - lio_free_soft_command(sc);
> -
> - return LIO_NB_XSTATS;
> -
> -get_stats_fail:
> - lio_free_soft_command(sc);
> -
> - return -1;
> -}
> -
> -static int
> -lio_dev_xstats_get_names(struct rte_eth_dev *eth_dev,
> - struct rte_eth_xstat_name *xstats_names,
> - unsigned limit __rte_unused)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - unsigned int i;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - if (xstats_names == NULL)
> - return LIO_NB_XSTATS;
> -
> - /* Note: limit checked in rte_eth_xstats_names() */
> -
> - for (i = 0; i < LIO_NB_XSTATS; i++) {
> - snprintf(xstats_names[i].name, sizeof(xstats_names[i].name),
> - "%s", rte_lio_stats_strings[i].name);
> - }
> -
> - return LIO_NB_XSTATS;
> -}
> -
> -/* Reset hw stats for the port */
> -static int
> -lio_dev_xstats_reset(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> - int ret;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - /* flush added to prevent cmd failure
> - * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_CLEAR_STATS;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - ret = lio_send_ctrl_pkt(lio_dev, &ctrl_pkt);
> - if (ret != 0) {
> - lio_dev_err(lio_dev, "Failed to send clear stats command\n");
> - return ret;
> - }
> -
> - ret = lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd);
> - if (ret != 0) {
> - lio_dev_err(lio_dev, "Clear stats command timed out\n");
> - return ret;
> - }
> -
> - /* clear stored per queue stats */
> - if (*eth_dev->dev_ops->stats_reset == NULL)
> - return 0;
> - return (*eth_dev->dev_ops->stats_reset)(eth_dev);
> -}
> -
> -/* Retrieve the device statistics (# packets in/out, # bytes in/out, etc.) */
> -static int
> -lio_dev_stats_get(struct rte_eth_dev *eth_dev,
> - struct rte_eth_stats *stats)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_droq_stats *oq_stats;
> - struct lio_iq_stats *iq_stats;
> - struct lio_instr_queue *txq;
> - struct lio_droq *droq;
> - int i, iq_no, oq_no;
> - uint64_t bytes = 0;
> - uint64_t pkts = 0;
> - uint64_t drop = 0;
> -
> - for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> - iq_no = lio_dev->linfo.txpciq[i].s.q_no;
> - txq = lio_dev->instr_queue[iq_no];
> - if (txq != NULL) {
> - iq_stats = &txq->stats;
> - pkts += iq_stats->tx_done;
> - drop += iq_stats->tx_dropped;
> - bytes += iq_stats->tx_tot_bytes;
> - }
> - }
> -
> - stats->opackets = pkts;
> - stats->obytes = bytes;
> - stats->oerrors = drop;
> -
> - pkts = 0;
> - drop = 0;
> - bytes = 0;
> -
> - for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> - oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
> - droq = lio_dev->droq[oq_no];
> - if (droq != NULL) {
> - oq_stats = &droq->stats;
> - pkts += oq_stats->rx_pkts_received;
> - drop += (oq_stats->rx_dropped +
> - oq_stats->dropped_toomany +
> - oq_stats->dropped_nomem);
> - bytes += oq_stats->rx_bytes_received;
> - }
> - }
> - stats->ibytes = bytes;
> - stats->ipackets = pkts;
> - stats->ierrors = drop;
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_stats_reset(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_droq_stats *oq_stats;
> - struct lio_iq_stats *iq_stats;
> - struct lio_instr_queue *txq;
> - struct lio_droq *droq;
> - int i, iq_no, oq_no;
> -
> - for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> - iq_no = lio_dev->linfo.txpciq[i].s.q_no;
> - txq = lio_dev->instr_queue[iq_no];
> - if (txq != NULL) {
> - iq_stats = &txq->stats;
> - memset(iq_stats, 0, sizeof(struct lio_iq_stats));
> - }
> - }
> -
> - for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> - oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
> - droq = lio_dev->droq[oq_no];
> - if (droq != NULL) {
> - oq_stats = &droq->stats;
> - memset(oq_stats, 0, sizeof(struct lio_droq_stats));
> - }
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_info_get(struct rte_eth_dev *eth_dev,
> - struct rte_eth_dev_info *devinfo)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
> -
> - switch (pci_dev->id.subsystem_device_id) {
> - /* CN23xx 10G cards */
> - case PCI_SUBSYS_DEV_ID_CN2350_210:
> - case PCI_SUBSYS_DEV_ID_CN2360_210:
> - case PCI_SUBSYS_DEV_ID_CN2350_210SVPN3:
> - case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
> - case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
> - case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
> - devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
> - break;
> - /* CN23xx 25G cards */
> - case PCI_SUBSYS_DEV_ID_CN2350_225:
> - case PCI_SUBSYS_DEV_ID_CN2360_225:
> - devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
> - break;
> - default:
> - devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
> - lio_dev_err(lio_dev,
> - "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
> - return -EINVAL;
> - }
> -
> - devinfo->max_rx_queues = lio_dev->max_rx_queues;
> - devinfo->max_tx_queues = lio_dev->max_tx_queues;
> -
> - devinfo->min_rx_bufsize = LIO_MIN_RX_BUF_SIZE;
> - devinfo->max_rx_pktlen = LIO_MAX_RX_PKTLEN;
> -
> - devinfo->max_mac_addrs = 1;
> -
> - devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
> - RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
> - RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
> - RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
> - RTE_ETH_RX_OFFLOAD_RSS_HASH);
> - devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
> - RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
> - RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
> - RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
> -
> - devinfo->rx_desc_lim = lio_rx_desc_lim;
> - devinfo->tx_desc_lim = lio_tx_desc_lim;
> -
> - devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
> - devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
> - devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
> - RTE_ETH_RSS_NONFRAG_IPV4_TCP |
> - RTE_ETH_RSS_IPV6 |
> - RTE_ETH_RSS_NONFRAG_IPV6_TCP |
> - RTE_ETH_RSS_IPV6_EX |
> - RTE_ETH_RSS_IPV6_TCP_EX);
> - return 0;
> -}
> -
> -static int
> -lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't set MTU\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - /* flush added to prevent cmd failure
> - * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_MTU;
> - ctrl_pkt.ncmd.s.param1 = mtu;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send command to change MTU\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Command to change MTU timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
> - struct rte_eth_rss_reta_entry64 *reta_conf,
> - uint16_t reta_size)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - struct lio_rss_set *rss_param;
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> - int i, j, index;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't update reta\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
> - lio_dev_err(lio_dev,
> -			    "The size of the configured hash lookup table (%d) doesn't match the size supported by hardware (%d)\n",
> - reta_size, LIO_RSS_MAX_TABLE_SZ);
> - return -EINVAL;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
> - ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - rss_param->param.flags = 0xF;
> - rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
> - rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
> -
> - for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
> - for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
> - if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
> - index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
> - rss_state->itable[index] = reta_conf[i].reta[j];
> - }
> - }
> - }
> -
> - rss_state->itable_size = LIO_RSS_MAX_TABLE_SZ;
> - memcpy(rss_param->itable, rss_state->itable, rss_state->itable_size);
> -
> - lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to set rss hash\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Set rss hash timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
> - struct rte_eth_rss_reta_entry64 *reta_conf,
> - uint16_t reta_size)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - int i, num;
> -
> - if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
> - lio_dev_err(lio_dev,
> -			    "The size of the configured hash lookup table (%d) doesn't match the size supported by hardware (%d)\n",
> - reta_size, LIO_RSS_MAX_TABLE_SZ);
> - return -EINVAL;
> - }
> -
> - num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
> -
> -	for (i = 0; i < num; i++) {
> -		int j;
> -
> -		/* itable entries are single bytes while reta[] entries are
> -		 * 16-bit, so copy element by element rather than memcpy().
> -		 */
> -		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
> -			reta_conf->reta[j] =
> -				rss_state->itable[(i * RTE_ETH_RETA_GROUP_SIZE) + j];
> -		reta_conf++;
> -	}
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
> - struct rte_eth_rss_conf *rss_conf)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - uint8_t *hash_key = NULL;
> - uint64_t rss_hf = 0;
> -
> - if (rss_state->hash_disable) {
> - lio_dev_info(lio_dev, "RSS disabled in nic\n");
> - rss_conf->rss_hf = 0;
> - return 0;
> - }
> -
> - /* Get key value */
> - hash_key = rss_conf->rss_key;
> - if (hash_key != NULL)
> - memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
> -
> - if (rss_state->ip)
> - rss_hf |= RTE_ETH_RSS_IPV4;
> - if (rss_state->tcp_hash)
> - rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
> - if (rss_state->ipv6)
> - rss_hf |= RTE_ETH_RSS_IPV6;
> - if (rss_state->ipv6_tcp_hash)
> - rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
> - if (rss_state->ipv6_ex)
> - rss_hf |= RTE_ETH_RSS_IPV6_EX;
> - if (rss_state->ipv6_tcp_ex_hash)
> - rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
> -
> - rss_conf->rss_hf = rss_hf;
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
> - struct rte_eth_rss_conf *rss_conf)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - struct lio_rss_set *rss_param;
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't update hash\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
> - ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - rss_param->param.flags = 0xF;
> -
> - if (rss_conf->rss_key) {
> - rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_KEY_UNCHANGED;
> - rss_state->hash_key_size = LIO_RSS_MAX_KEY_SZ;
> - rss_param->param.hashkeysize = LIO_RSS_MAX_KEY_SZ;
> - memcpy(rss_state->hash_key, rss_conf->rss_key,
> - rss_state->hash_key_size);
> - memcpy(rss_param->key, rss_state->hash_key,
> - rss_state->hash_key_size);
> - }
> -
> - if ((rss_conf->rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
> - /* Can't disable rss through hash flags,
> - * if it is enabled by default during init
> - */
> - if (!rss_state->hash_disable)
> - return -EINVAL;
> -
> - /* This is for --disable-rss during testpmd launch */
> - rss_param->param.flags |= LIO_RSS_PARAM_DISABLE_RSS;
> - } else {
> - uint32_t hashinfo = 0;
> -
> - /* Can't enable rss if disabled by default during init */
> - if (rss_state->hash_disable)
> - return -EINVAL;
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
> - hashinfo |= LIO_RSS_HASH_IPV4;
> - rss_state->ip = 1;
> - } else {
> - rss_state->ip = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
> - hashinfo |= LIO_RSS_HASH_TCP_IPV4;
> - rss_state->tcp_hash = 1;
> - } else {
> - rss_state->tcp_hash = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
> - hashinfo |= LIO_RSS_HASH_IPV6;
> - rss_state->ipv6 = 1;
> - } else {
> - rss_state->ipv6 = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
> - hashinfo |= LIO_RSS_HASH_TCP_IPV6;
> - rss_state->ipv6_tcp_hash = 1;
> - } else {
> - rss_state->ipv6_tcp_hash = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
> - hashinfo |= LIO_RSS_HASH_IPV6_EX;
> - rss_state->ipv6_ex = 1;
> - } else {
> - rss_state->ipv6_ex = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
> - hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
> - rss_state->ipv6_tcp_ex_hash = 1;
> - } else {
> - rss_state->ipv6_tcp_ex_hash = 0;
> - }
> -
> - rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_INFO_UNCHANGED;
> - rss_param->param.hashinfo = hashinfo;
> - }
> -
> - lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to set rss hash\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Set rss hash timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -/**
> - * Add vxlan dest udp port for an interface.
> - *
> - * @param eth_dev
> - * Pointer to the structure rte_eth_dev
> - * @param udp_tnl
> - * udp tunnel conf
> - *
> - * @return
> - * On success return 0
> - * On failure return -1
> - */
> -static int
> -lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
> - struct rte_eth_udp_tunnel *udp_tnl)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (udp_tnl == NULL)
> - return -EINVAL;
> -
> - if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
> - lio_dev_err(lio_dev, "Unsupported tunnel type\n");
> - return -1;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
> - ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
> - ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_ADD;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_ADD command\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "VXLAN_PORT_ADD command timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -/**
> - * Remove vxlan dest udp port for an interface.
> - *
> - * @param eth_dev
> - * Pointer to the structure rte_eth_dev
> - * @param udp_tnl
> - * udp tunnel conf
> - *
> - * @return
> - * On success return 0
> - * On failure return -1
> - */
> -static int
> -lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
> - struct rte_eth_udp_tunnel *udp_tnl)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (udp_tnl == NULL)
> - return -EINVAL;
> -
> - if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
> - lio_dev_err(lio_dev, "Unsupported tunnel type\n");
> - return -1;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
> - ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
> - ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_DEL;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_DEL command\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "VXLAN_PORT_DEL command timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, int on)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (lio_dev->linfo.vlan_is_admin_assigned)
> - return -EPERM;
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = on ?
> - LIO_CMD_ADD_VLAN_FILTER : LIO_CMD_DEL_VLAN_FILTER;
> - ctrl_pkt.ncmd.s.param1 = vlan_id;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to %s VLAN port\n",
> - on ? "add" : "remove");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Command to %s VLAN port timed out\n",
> - on ? "add" : "remove");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static uint64_t
> -lio_hweight64(uint64_t w)
> -{
> - uint64_t res = w - ((w >> 1) & 0x5555555555555555ul);
> -
> - res =
> - (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
> - res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
> - res = res + (res >> 8);
> - res = res + (res >> 16);
> -
> - return (res + (res >> 32)) & 0x00000000000000FFul;
> -}
> -
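
A note on the helper above: lio_hweight64() is a plain SWAR population count; lio_dev_configure() further down uses it to count the queue bits set in the firmware's iqmask/oqmask. A stand-alone sketch (illustrative only) that checks the same expression against a naive bit-by-bit count:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Same SWAR expression as lio_hweight64() above. */
static uint64_t swar_popcount64(uint64_t w)
{
	uint64_t res = w - ((w >> 1) & 0x5555555555555555ULL);

	res = (res & 0x3333333333333333ULL) + ((res >> 2) & 0x3333333333333333ULL);
	res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
	res = res + (res >> 8);
	res = res + (res >> 16);
	return (res + (res >> 32)) & 0xFFULL;
}

static uint64_t naive_popcount64(uint64_t w)
{
	uint64_t n = 0;

	for (; w != 0; w >>= 1)
		n += w & 1;
	return n;
}

int main(void)
{
	const uint64_t samples[] = { 0, 0xF, 0x8000000000000001ULL, ~0ULL };
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		assert(swar_popcount64(samples[i]) == naive_popcount64(samples[i]));
	printf("SWAR popcount matches the naive count\n");
	return 0;
}
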
> -static int
> -lio_dev_link_update(struct rte_eth_dev *eth_dev,
> - int wait_to_complete __rte_unused)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct rte_eth_link link;
> -
> - /* Initialize */
> - memset(&link, 0, sizeof(link));
> - link.link_status = RTE_ETH_LINK_DOWN;
> - link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> - link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
> - link.link_autoneg = RTE_ETH_LINK_AUTONEG;
> -
> - /* Return what we found */
> - if (lio_dev->linfo.link.s.link_up == 0) {
> - /* Interface is down */
> - return rte_eth_linkstatus_set(eth_dev, &link);
> - }
> -
> - link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
> - link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> - switch (lio_dev->linfo.link.s.speed) {
> - case LIO_LINK_SPEED_10000:
> - link.link_speed = RTE_ETH_SPEED_NUM_10G;
> - break;
> - case LIO_LINK_SPEED_25000:
> - link.link_speed = RTE_ETH_SPEED_NUM_25G;
> - break;
> - default:
> - link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> - link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
> - }
> -
> - return rte_eth_linkstatus_set(eth_dev, &link);
> -}
> -
> -/**
> - * \brief Send updated interface flags (promiscuous/allmulticast) to firmware
> - * @param eth_dev Pointer to the structure rte_eth_dev
> - *
> - * @return
> - * On success return 0
> - * On failure return negative errno
> - */
> -static int
> -lio_change_dev_flag(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - /* Create a ctrl pkt command to be sent to core app. */
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_DEVFLAGS;
> - ctrl_pkt.ncmd.s.param1 = lio_dev->ifflags;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send change flag message\n");
> - return -EAGAIN;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Change dev flag command timed out\n");
> - return -ETIMEDOUT;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
> - lio_dev_err(lio_dev, "Require firmware version >= %s\n",
> - LIO_VF_TRUST_MIN_VERSION);
> - return -EAGAIN;
> - }
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't enable promiscuous\n",
> - lio_dev->port_id);
> - return -EAGAIN;
> - }
> -
> - lio_dev->ifflags |= LIO_IFFLAG_PROMISC;
> - return lio_change_dev_flag(eth_dev);
> -}
> -
> -static int
> -lio_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
> - lio_dev_err(lio_dev, "Require firmware version >= %s\n",
> - LIO_VF_TRUST_MIN_VERSION);
> - return -EAGAIN;
> - }
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't disable promiscuous\n",
> - lio_dev->port_id);
> - return -EAGAIN;
> - }
> -
> - lio_dev->ifflags &= ~LIO_IFFLAG_PROMISC;
> - return lio_change_dev_flag(eth_dev);
> -}
> -
> -static int
> -lio_dev_allmulticast_enable(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't enable multicast\n",
> - lio_dev->port_id);
> - return -EAGAIN;
> - }
> -
> - lio_dev->ifflags |= LIO_IFFLAG_ALLMULTI;
> - return lio_change_dev_flag(eth_dev);
> -}
> -
> -static int
> -lio_dev_allmulticast_disable(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't disable multicast\n",
> - lio_dev->port_id);
> - return -EAGAIN;
> - }
> -
> - lio_dev->ifflags &= ~LIO_IFFLAG_ALLMULTI;
> - return lio_change_dev_flag(eth_dev);
> -}
> -
> -static void
> -lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - struct rte_eth_rss_reta_entry64 reta_conf[8];
> - struct rte_eth_rss_conf rss_conf;
> - uint16_t i;
> -
> - /* Configure the RSS key and the RSS protocols used to compute
> - * the RSS hash of input packets.
> - */
> - rss_conf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
> - if ((rss_conf.rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
> - rss_state->hash_disable = 1;
> - lio_dev_rss_hash_update(eth_dev, &rss_conf);
> - return;
> - }
> -
> - if (rss_conf.rss_key == NULL)
> - rss_conf.rss_key = lio_rss_key; /* Default hash key */
> -
> - lio_dev_rss_hash_update(eth_dev, &rss_conf);
> -
> - memset(reta_conf, 0, sizeof(reta_conf));
> - for (i = 0; i < LIO_RSS_MAX_TABLE_SZ; i++) {
> - uint8_t q_idx, conf_idx, reta_idx;
> -
> - q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
> - i % eth_dev->data->nb_rx_queues : 0);
> - conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
> - reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
> - reta_conf[conf_idx].reta[reta_idx] = q_idx;
> - reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
> - }
> -
> - lio_dev_rss_reta_update(eth_dev, reta_conf, LIO_RSS_MAX_TABLE_SZ);
> -}
> -
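
The loop in lio_dev_rss_configure() above spreads the RSS indirection table round-robin across the configured Rx queues and then expresses it as groups of RTE_ETH_RETA_GROUP_SIZE (64) entries, each with a per-entry mask bit, which is the layout the ethdev RETA API expects. A small stand-alone sketch of the same indexing, with 128 and 64 standing in for LIO_RSS_MAX_TABLE_SZ and RTE_ETH_RETA_GROUP_SIZE:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TABLE_SZ 128	/* stands in for LIO_RSS_MAX_TABLE_SZ */
#define GROUP_SZ 64	/* stands in for RTE_ETH_RETA_GROUP_SIZE */

struct reta_group {
	uint64_t mask;
	uint16_t reta[GROUP_SZ];
};

int main(void)
{
	struct reta_group groups[TABLE_SZ / GROUP_SZ];
	unsigned int nb_rx_queues = 4;
	unsigned int i;

	memset(groups, 0, sizeof(groups));

	for (i = 0; i < TABLE_SZ; i++) {
		unsigned int q_idx = i % nb_rx_queues;	/* round-robin queue */
		unsigned int conf_idx = i / GROUP_SZ;	/* which 64-entry group */
		unsigned int reta_idx = i % GROUP_SZ;	/* slot within the group */

		groups[conf_idx].reta[reta_idx] = (uint16_t)q_idx;
		groups[conf_idx].mask |= (uint64_t)1 << reta_idx;
	}

	printf("entry 0 -> queue %u, entry 65 -> queue %u (group %u, slot %u)\n",
	       (unsigned int)groups[0].reta[0], (unsigned int)groups[1].reta[1],
	       65U / GROUP_SZ, 65U % GROUP_SZ);
	return 0;
}
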
> -static void
> -lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - struct rte_eth_rss_conf rss_conf;
> -
> - switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
> - case RTE_ETH_MQ_RX_RSS:
> - lio_dev_rss_configure(eth_dev);
> - break;
> - case RTE_ETH_MQ_RX_NONE:
> - /* if mq_mode is none, disable rss mode. */
> - default:
> - memset(&rss_conf, 0, sizeof(rss_conf));
> - rss_state->hash_disable = 1;
> - lio_dev_rss_hash_update(eth_dev, &rss_conf);
> - }
> -}
> -
> -/**
> - * Set up our receive queue/ringbuffer. This is the
> - * queue the Octeon uses to send us packets and
> - * responses. We are given a memory pool for our
> - * packet buffers that are used to populate the receive
> - * queue.
> - *
> - * @param eth_dev
> - * Pointer to the structure rte_eth_dev
> - * @param q_no
> - * Queue number
> - * @param num_rx_descs
> - * Number of entries in the queue
> - * @param socket_id
> - * Where to allocate memory
> - * @param rx_conf
> - * Pointer to the structure rte_eth_rxconf
> - * @param mp
> - * Pointer to the packet pool
> - *
> - * @return
> - * - On success, return 0
> - * - On failure, return -1
> - */
> -static int
> -lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
> - uint16_t num_rx_descs, unsigned int socket_id,
> - const struct rte_eth_rxconf *rx_conf __rte_unused,
> - struct rte_mempool *mp)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct rte_pktmbuf_pool_private *mbp_priv;
> - uint32_t fw_mapped_oq;
> - uint16_t buf_size;
> -
> - if (q_no >= lio_dev->nb_rx_queues) {
> - lio_dev_err(lio_dev, "Invalid rx queue number %u\n", q_no);
> - return -EINVAL;
> - }
> -
> - lio_dev_dbg(lio_dev, "setting up rx queue %u\n", q_no);
> -
> - fw_mapped_oq = lio_dev->linfo.rxpciq[q_no].s.q_no;
> -
> - /* Free previous allocation if any */
> - if (eth_dev->data->rx_queues[q_no] != NULL) {
> - lio_dev_rx_queue_release(eth_dev, q_no);
> - eth_dev->data->rx_queues[q_no] = NULL;
> - }
> -
> - mbp_priv = rte_mempool_get_priv(mp);
> - buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
> -
> - if (lio_setup_droq(lio_dev, fw_mapped_oq, num_rx_descs, buf_size, mp,
> - socket_id)) {
> - lio_dev_err(lio_dev, "droq allocation failed\n");
> - return -1;
> - }
> -
> - eth_dev->data->rx_queues[q_no] = lio_dev->droq[fw_mapped_oq];
> -
> - return 0;
> -}
> -
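
One detail worth noting in lio_dev_rx_queue_setup() above: the buffer size handed to lio_setup_droq() is the mempool's data room minus the mbuf headroom. With the common DPDK defaults (2048 bytes of data room plus 128 bytes of headroom) that works out to 2048 bytes per receive buffer; a trivial stand-alone check of the arithmetic, with those defaults written out as plain constants:

#include <stdio.h>

int main(void)
{
	/* Typical DPDK defaults written out as plain numbers:
	 * data room = 2048 + 128 = 2176 bytes, headroom = 128 bytes.
	 */
	unsigned int mbuf_data_room_size = 2048 + 128;
	unsigned int pktmbuf_headroom = 128;
	unsigned int buf_size = mbuf_data_room_size - pktmbuf_headroom;

	printf("receive buffer size per descriptor: %u bytes\n", buf_size);
	return 0;
}
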
> -/**
> - * Release the receive queue/ringbuffer. Called by
> - * the upper layers.
> - *
> - * @param eth_dev
> - * Pointer to Ethernet device structure.
> - * @param q_no
> - * Receive queue index.
> - *
> - * @return
> - * - nothing
> - */
> -void
> -lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
> -{
> - struct lio_droq *droq = dev->data->rx_queues[q_no];
> - int oq_no;
> -
> - if (droq) {
> - oq_no = droq->q_no;
> - lio_delete_droq_queue(droq->lio_dev, oq_no);
> - }
> -}
> -
> -/**
> - * Allocate and initialize SW ring. Initialize associated HW registers.
> - *
> - * @param eth_dev
> - * Pointer to structure rte_eth_dev
> - *
> - * @param q_no
> - * Queue number
> - *
> - * @param num_tx_descs
> - * Number of ringbuffer descriptors
> - *
> - * @param socket_id
> - * NUMA socket id, used for memory allocations
> - *
> - * @param tx_conf
> - * Pointer to the structure rte_eth_txconf
> - *
> - * @return
> - * - On success, return 0
> - * - On failure, return -errno value
> - */
> -static int
> -lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
> - uint16_t num_tx_descs, unsigned int socket_id,
> - const struct rte_eth_txconf *tx_conf __rte_unused)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
> - int retval;
> -
> - if (q_no >= lio_dev->nb_tx_queues) {
> - lio_dev_err(lio_dev, "Invalid tx queue number %u\n", q_no);
> - return -EINVAL;
> - }
> -
> - lio_dev_dbg(lio_dev, "setting up tx queue %u\n", q_no);
> -
> - /* Free previous allocation if any */
> - if (eth_dev->data->tx_queues[q_no] != NULL) {
> - lio_dev_tx_queue_release(eth_dev, q_no);
> - eth_dev->data->tx_queues[q_no] = NULL;
> - }
> -
> - retval = lio_setup_iq(lio_dev, q_no, lio_dev->linfo.txpciq[q_no],
> - num_tx_descs, lio_dev, socket_id);
> -
> - if (retval) {
> - lio_dev_err(lio_dev, "Runtime IQ(TxQ) creation failed.\n");
> - return retval;
> - }
> -
> - retval = lio_setup_sglists(lio_dev, q_no, fw_mapped_iq,
> - lio_dev->instr_queue[fw_mapped_iq]->nb_desc,
> - socket_id);
> -
> - if (retval) {
> - lio_delete_instruction_queue(lio_dev, fw_mapped_iq);
> - return retval;
> - }
> -
> - eth_dev->data->tx_queues[q_no] = lio_dev->instr_queue[fw_mapped_iq];
> -
> - return 0;
> -}
> -
> -/**
> - * Release the transmit queue/ringbuffer. Called by
> - * the upper layers.
> - *
> - * @param eth_dev
> - * Pointer to Ethernet device structure.
> - * @param q_no
> - * Transmit queue index.
> - *
> - * @return
> - * - nothing
> - */
> -void
> -lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
> -{
> - struct lio_instr_queue *tq = dev->data->tx_queues[q_no];
> - uint32_t fw_mapped_iq_no;
> -
> -
> - if (tq) {
> - /* Free sg_list */
> - lio_delete_sglist(tq);
> -
> - fw_mapped_iq_no = tq->txpciq.s.q_no;
> - lio_delete_instruction_queue(tq->lio_dev, fw_mapped_iq_no);
> - }
> -}
> -
> -/**
> - * API to check link state.
> - */
> -static void
> -lio_dev_get_link_status(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> - struct lio_link_status_resp *resp;
> - union octeon_link_status *ls;
> - struct lio_soft_command *sc;
> - uint32_t resp_size;
> -
> - if (!lio_dev->intf_open)
> - return;
> -
> - resp_size = sizeof(struct lio_link_status_resp);
> - sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
> - if (sc == NULL)
> - return;
> -
> - resp = (struct lio_link_status_resp *)sc->virtrptr;
> - lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
> - LIO_OPCODE_INFO, 0, 0, 0);
> -
> - /* Setting wait time in seconds */
> - sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
> -
> - if (lio_send_soft_command(lio_dev, sc) == LIO_IQ_SEND_FAILED)
> - goto get_status_fail;
> -
> - while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
> - rte_delay_ms(1);
> - }
> -
> - if (resp->status)
> - goto get_status_fail;
> -
> - ls = &resp->link_info.link;
> -
> - lio_swap_8B_data((uint64_t *)ls, sizeof(union octeon_link_status) >> 3);
> -
> - if (lio_dev->linfo.link.link_status64 != ls->link_status64) {
> - if (ls->s.mtu < eth_dev->data->mtu) {
> - lio_dev_info(lio_dev, "Lowered VF MTU to %d as PF MTU dropped\n",
> - ls->s.mtu);
> - eth_dev->data->mtu = ls->s.mtu;
> - }
> - lio_dev->linfo.link.link_status64 = ls->link_status64;
> - lio_dev_link_update(eth_dev, 0);
> - }
> -
> - lio_free_soft_command(sc);
> -
> - return;
> -
> -get_status_fail:
> - lio_free_soft_command(sc);
> -}
> -
> -/* This function will be invoked every LIO_LSC_TIMEOUT us (100 ms)
> - * and will update link state if it changes.
> - */
> -static void
> -lio_sync_link_state_check(void *eth_dev)
> -{
> - struct lio_device *lio_dev =
> - (((struct rte_eth_dev *)eth_dev)->data->dev_private);
> -
> - if (lio_dev->port_configured)
> - lio_dev_get_link_status(eth_dev);
> -
> - /* Schedule periodic link status check.
> -	 * Stop the check when the interface is closed and restart it when it is opened.
> - */
> - if (lio_dev->intf_open)
> - rte_eal_alarm_set(LIO_LSC_TIMEOUT, lio_sync_link_state_check,
> - eth_dev);
> -}
> -
> -static int
> -lio_dev_start(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> - int ret = 0;
> -
> - lio_dev_info(lio_dev, "Starting port %d\n", eth_dev->data->port_id);
> -
> - if (lio_dev->fn_list.enable_io_queues(lio_dev))
> - return -1;
> -
> - if (lio_send_rx_ctrl_cmd(eth_dev, 1))
> - return -1;
> -
> - /* Ready for link status updates */
> - lio_dev->intf_open = 1;
> - rte_mb();
> -
> - /* Configure RSS if device configured with multiple RX queues. */
> - lio_dev_mq_rx_configure(eth_dev);
> -
> -	/* Before updating the link info,
> -	 * linfo.link.link_status64 must be set to 0.
> - */
> - lio_dev->linfo.link.link_status64 = 0;
> -
> - /* start polling for lsc */
> - ret = rte_eal_alarm_set(LIO_LSC_TIMEOUT,
> - lio_sync_link_state_check,
> - eth_dev);
> - if (ret) {
> - lio_dev_err(lio_dev,
> - "link state check handler creation failed\n");
> - goto dev_lsc_handle_error;
> - }
> -
> - while ((lio_dev->linfo.link.link_status64 == 0) && (--timeout))
> - rte_delay_ms(1);
> -
> - if (lio_dev->linfo.link.link_status64 == 0) {
> - ret = -1;
> - goto dev_mtu_set_error;
> - }
> -
> - ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
> - if (ret != 0)
> - goto dev_mtu_set_error;
> -
> - return 0;
> -
> -dev_mtu_set_error:
> - rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
> -
> -dev_lsc_handle_error:
> - lio_dev->intf_open = 0;
> - lio_send_rx_ctrl_cmd(eth_dev, 0);
> -
> - return ret;
> -}
> -
> -/* Stop device and disable input/output functions */
> -static int
> -lio_dev_stop(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - lio_dev_info(lio_dev, "Stopping port %d\n", eth_dev->data->port_id);
> - eth_dev->data->dev_started = 0;
> - lio_dev->intf_open = 0;
> - rte_mb();
> -
> - /* Cancel callback if still running. */
> - rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
> -
> - lio_send_rx_ctrl_cmd(eth_dev, 0);
> -
> - lio_wait_for_instr_fetch(lio_dev);
> -
> - /* Clear recorded link status */
> - lio_dev->linfo.link.link_status64 = 0;
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (!lio_dev->intf_open) {
> -		lio_dev_info(lio_dev, "Port is stopped; start the port first\n");
> - return 0;
> - }
> -
> - if (lio_dev->linfo.link.s.link_up) {
> - lio_dev_info(lio_dev, "Link is already UP\n");
> - return 0;
> - }
> -
> - if (lio_send_rx_ctrl_cmd(eth_dev, 1)) {
> - lio_dev_err(lio_dev, "Unable to set Link UP\n");
> - return -1;
> - }
> -
> - lio_dev->linfo.link.s.link_up = 1;
> - eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (!lio_dev->intf_open) {
> -		lio_dev_info(lio_dev, "Port is stopped; start the port first\n");
> - return 0;
> - }
> -
> - if (!lio_dev->linfo.link.s.link_up) {
> - lio_dev_info(lio_dev, "Link is already DOWN\n");
> - return 0;
> - }
> -
> - lio_dev->linfo.link.s.link_up = 0;
> - eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
> -
> - if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
> - lio_dev->linfo.link.s.link_up = 1;
> - eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
> - lio_dev_err(lio_dev, "Unable to set Link Down\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -/**
> - * Reset and stop the device. This occurs on the first
> - * call to this routine. Subsequent calls will simply
> - * return. NB: This will require the NIC to be rebooted.
> - *
> - * @param eth_dev
> - * Pointer to the structure rte_eth_dev
> - *
> - * @return
> - * - nothing
> - */
> -static int
> -lio_dev_close(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - int ret = 0;
> -
> - if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> - return 0;
> -
> - lio_dev_info(lio_dev, "closing port %d\n", eth_dev->data->port_id);
> -
> - if (lio_dev->intf_open)
> - ret = lio_dev_stop(eth_dev);
> -
> - /* Reset ioq regs */
> - lio_dev->fn_list.setup_device_regs(lio_dev);
> -
> - if (lio_dev->pci_dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
> - cn23xx_vf_ask_pf_to_do_flr(lio_dev);
> - rte_delay_ms(LIO_PCI_FLR_WAIT);
> - }
> -
> - /* lio_free_mbox */
> - lio_dev->fn_list.free_mbox(lio_dev);
> -
> - /* Free glist resources */
> - rte_free(lio_dev->glist_head);
> - rte_free(lio_dev->glist_lock);
> - lio_dev->glist_head = NULL;
> - lio_dev->glist_lock = NULL;
> -
> - lio_dev->port_configured = 0;
> -
> - /* Delete all queues */
> - lio_dev_clear_queues(eth_dev);
> -
> - return ret;
> -}
> -
> -/**
> - * Enable tunnel rx checksum verification from firmware.
> - */
> -static void
> -lio_enable_hw_tunnel_rx_checksum(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_RX_CSUM_CTL;
> - ctrl_pkt.ncmd.s.param1 = LIO_CMD_RXCSUM_ENABLE;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send TNL_RX_CSUM command\n");
> - return;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
> - lio_dev_err(lio_dev, "TNL_RX_CSUM command timed out\n");
> -}
> -
> -/**
> - * Enable checksum calculation for inner packet in a tunnel.
> - */
> -static void
> -lio_enable_hw_tunnel_tx_checksum(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_TX_CSUM_CTL;
> - ctrl_pkt.ncmd.s.param1 = LIO_CMD_TXCSUM_ENABLE;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send TNL_TX_CSUM command\n");
> - return;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
> - lio_dev_err(lio_dev, "TNL_TX_CSUM command timed out\n");
> -}
> -
> -static int
> -lio_send_queue_count_update(struct rte_eth_dev *eth_dev, int num_txq,
> - int num_rxq)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (strcmp(lio_dev->firmware_version, LIO_Q_RECONF_MIN_VERSION) < 0) {
> - lio_dev_err(lio_dev, "Require firmware version >= %s\n",
> - LIO_Q_RECONF_MIN_VERSION);
> - return -ENOTSUP;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_QUEUE_COUNT_CTL;
> - ctrl_pkt.ncmd.s.param1 = num_txq;
> - ctrl_pkt.ncmd.s.param2 = num_rxq;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send queue count control command\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Queue count control command timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_reconf_queues(struct rte_eth_dev *eth_dev, int num_txq, int num_rxq)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - int ret;
> -
> - if (lio_dev->nb_rx_queues != num_rxq ||
> - lio_dev->nb_tx_queues != num_txq) {
> - if (lio_send_queue_count_update(eth_dev, num_txq, num_rxq))
> - return -1;
> - lio_dev->nb_rx_queues = num_rxq;
> - lio_dev->nb_tx_queues = num_txq;
> - }
> -
> - if (lio_dev->intf_open) {
> - ret = lio_dev_stop(eth_dev);
> - if (ret != 0)
> - return ret;
> - }
> -
> - /* Reset ioq registers */
> - if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
> - lio_dev_err(lio_dev, "Failed to configure device registers\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_configure(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> - int retval, num_iqueues, num_oqueues;
> - uint8_t mac[RTE_ETHER_ADDR_LEN], i;
> - struct lio_if_cfg_resp *resp;
> - struct lio_soft_command *sc;
> - union lio_if_cfg if_cfg;
> - uint32_t resp_size;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - RTE_ETH_RX_OFFLOAD_RSS_HASH;
> -
> - /* Inform firmware about change in number of queues to use.
> - * Disable IO queues and reset registers for re-configuration.
> - */
> - if (lio_dev->port_configured)
> - return lio_reconf_queues(eth_dev,
> - eth_dev->data->nb_tx_queues,
> - eth_dev->data->nb_rx_queues);
> -
> - lio_dev->nb_rx_queues = eth_dev->data->nb_rx_queues;
> - lio_dev->nb_tx_queues = eth_dev->data->nb_tx_queues;
> -
> - /* Set max number of queues which can be re-configured. */
> - lio_dev->max_rx_queues = eth_dev->data->nb_rx_queues;
> - lio_dev->max_tx_queues = eth_dev->data->nb_tx_queues;
> -
> - resp_size = sizeof(struct lio_if_cfg_resp);
> - sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
> - if (sc == NULL)
> - return -ENOMEM;
> -
> - resp = (struct lio_if_cfg_resp *)sc->virtrptr;
> -
> -	/* Firmware doesn't have the capability to reconfigure the queues,
> -	 * so claim all queues and use as many as required.
> - */
> - if_cfg.if_cfg64 = 0;
> - if_cfg.s.num_iqueues = lio_dev->nb_tx_queues;
> - if_cfg.s.num_oqueues = lio_dev->nb_rx_queues;
> - if_cfg.s.base_queue = 0;
> -
> - if_cfg.s.gmx_port_id = lio_dev->pf_num;
> -
> - lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
> - LIO_OPCODE_IF_CFG, 0,
> - if_cfg.if_cfg64, 0);
> -
> - /* Setting wait time in seconds */
> - sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
> -
> - retval = lio_send_soft_command(lio_dev, sc);
> - if (retval == LIO_IQ_SEND_FAILED) {
> - lio_dev_err(lio_dev, "iq/oq config failed status: %x\n",
> - retval);
> - /* Soft instr is freed by driver in case of failure. */
> - goto nic_config_fail;
> - }
> -
> -	/* Poll until the completion status indicates that the
> -	 * response arrived or the command timed out.
> - */
> - while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
> - lio_process_ordered_list(lio_dev);
> - rte_delay_ms(1);
> - }
> -
> - retval = resp->status;
> - if (retval) {
> - lio_dev_err(lio_dev, "iq/oq config failed\n");
> - goto nic_config_fail;
> - }
> -
> - strlcpy(lio_dev->firmware_version,
> - resp->cfg_info.lio_firmware_version, LIO_FW_VERSION_LENGTH);
> -
> - lio_swap_8B_data((uint64_t *)(&resp->cfg_info),
> - sizeof(struct octeon_if_cfg_info) >> 3);
> -
> - num_iqueues = lio_hweight64(resp->cfg_info.iqmask);
> - num_oqueues = lio_hweight64(resp->cfg_info.oqmask);
> -
> - if (!(num_iqueues) || !(num_oqueues)) {
> - lio_dev_err(lio_dev,
> - "Got bad iqueues (%016lx) or oqueues (%016lx) from firmware.\n",
> - (unsigned long)resp->cfg_info.iqmask,
> - (unsigned long)resp->cfg_info.oqmask);
> - goto nic_config_fail;
> - }
> -
> - lio_dev_dbg(lio_dev,
> - "interface %d, iqmask %016lx, oqmask %016lx, numiqueues %d, numoqueues %d\n",
> - eth_dev->data->port_id,
> - (unsigned long)resp->cfg_info.iqmask,
> - (unsigned long)resp->cfg_info.oqmask,
> - num_iqueues, num_oqueues);
> -
> - lio_dev->linfo.num_rxpciq = num_oqueues;
> - lio_dev->linfo.num_txpciq = num_iqueues;
> -
> - for (i = 0; i < num_oqueues; i++) {
> - lio_dev->linfo.rxpciq[i].rxpciq64 =
> - resp->cfg_info.linfo.rxpciq[i].rxpciq64;
> - lio_dev_dbg(lio_dev, "index %d OQ %d\n",
> - i, lio_dev->linfo.rxpciq[i].s.q_no);
> - }
> -
> - for (i = 0; i < num_iqueues; i++) {
> - lio_dev->linfo.txpciq[i].txpciq64 =
> - resp->cfg_info.linfo.txpciq[i].txpciq64;
> - lio_dev_dbg(lio_dev, "index %d IQ %d\n",
> - i, lio_dev->linfo.txpciq[i].s.q_no);
> - }
> -
> - lio_dev->linfo.hw_addr = resp->cfg_info.linfo.hw_addr;
> - lio_dev->linfo.gmxport = resp->cfg_info.linfo.gmxport;
> - lio_dev->linfo.link.link_status64 =
> - resp->cfg_info.linfo.link.link_status64;
> -
> - /* 64-bit swap required on LE machines */
> - lio_swap_8B_data(&lio_dev->linfo.hw_addr, 1);
> - for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
> - mac[i] = *((uint8_t *)(((uint8_t *)&lio_dev->linfo.hw_addr) +
> - 2 + i));
> -
> - /* Copy the permanent MAC address */
> - rte_ether_addr_copy((struct rte_ether_addr *)mac,
> - ð_dev->data->mac_addrs[0]);
> -
> - /* enable firmware checksum support for tunnel packets */
> - lio_enable_hw_tunnel_rx_checksum(eth_dev);
> - lio_enable_hw_tunnel_tx_checksum(eth_dev);
> -
> - lio_dev->glist_lock =
> - rte_zmalloc(NULL, sizeof(*lio_dev->glist_lock) * num_iqueues, 0);
> - if (lio_dev->glist_lock == NULL)
> - return -ENOMEM;
> -
> - lio_dev->glist_head =
> - rte_zmalloc(NULL, sizeof(*lio_dev->glist_head) * num_iqueues,
> - 0);
> - if (lio_dev->glist_head == NULL) {
> - rte_free(lio_dev->glist_lock);
> - lio_dev->glist_lock = NULL;
> - return -ENOMEM;
> - }
> -
> - lio_dev_link_update(eth_dev, 0);
> -
> - lio_dev->port_configured = 1;
> -
> - lio_free_soft_command(sc);
> -
> - /* Reset ioq regs */
> - lio_dev->fn_list.setup_device_regs(lio_dev);
> -
> - /* Free iq_0 used during init */
> - lio_free_instr_queue0(lio_dev);
> -
> - return 0;
> -
> -nic_config_fail:
> - lio_dev_err(lio_dev, "Failed retval %d\n", retval);
> - lio_free_soft_command(sc);
> - lio_free_instr_queue0(lio_dev);
> -
> - return -ENODEV;
> -}
> -
> -/* Define our ethernet definitions */
> -static const struct eth_dev_ops liovf_eth_dev_ops = {
> - .dev_configure = lio_dev_configure,
> - .dev_start = lio_dev_start,
> - .dev_stop = lio_dev_stop,
> - .dev_set_link_up = lio_dev_set_link_up,
> - .dev_set_link_down = lio_dev_set_link_down,
> - .dev_close = lio_dev_close,
> - .promiscuous_enable = lio_dev_promiscuous_enable,
> - .promiscuous_disable = lio_dev_promiscuous_disable,
> - .allmulticast_enable = lio_dev_allmulticast_enable,
> - .allmulticast_disable = lio_dev_allmulticast_disable,
> - .link_update = lio_dev_link_update,
> - .stats_get = lio_dev_stats_get,
> - .xstats_get = lio_dev_xstats_get,
> - .xstats_get_names = lio_dev_xstats_get_names,
> - .stats_reset = lio_dev_stats_reset,
> - .xstats_reset = lio_dev_xstats_reset,
> - .dev_infos_get = lio_dev_info_get,
> - .vlan_filter_set = lio_dev_vlan_filter_set,
> - .rx_queue_setup = lio_dev_rx_queue_setup,
> - .rx_queue_release = lio_dev_rx_queue_release,
> - .tx_queue_setup = lio_dev_tx_queue_setup,
> - .tx_queue_release = lio_dev_tx_queue_release,
> - .reta_update = lio_dev_rss_reta_update,
> - .reta_query = lio_dev_rss_reta_query,
> - .rss_hash_conf_get = lio_dev_rss_hash_conf_get,
> - .rss_hash_update = lio_dev_rss_hash_update,
> - .udp_tunnel_port_add = lio_dev_udp_tunnel_add,
> - .udp_tunnel_port_del = lio_dev_udp_tunnel_del,
> - .mtu_set = lio_dev_mtu_set,
> -};
> -
> -static void
> -lio_check_pf_hs_response(void *lio_dev)
> -{
> - struct lio_device *dev = lio_dev;
> -
> - /* check till response arrives */
> - if (dev->pfvf_hsword.coproc_tics_per_us)
> - return;
> -
> - cn23xx_vf_handle_mbox(dev);
> -
> - rte_eal_alarm_set(1, lio_check_pf_hs_response, lio_dev);
> -}
> -
> -/**
> - * \brief Identify the LIO device and map the BAR address space
> - * @param lio_dev lio device
> - */
> -static int
> -lio_chip_specific_setup(struct lio_device *lio_dev)
> -{
> - struct rte_pci_device *pdev = lio_dev->pci_dev;
> - uint32_t dev_id = pdev->id.device_id;
> - const char *s;
> - int ret = 1;
> -
> - switch (dev_id) {
> - case LIO_CN23XX_VF_VID:
> - lio_dev->chip_id = LIO_CN23XX_VF_VID;
> - ret = cn23xx_vf_setup_device(lio_dev);
> - s = "CN23XX VF";
> - break;
> - default:
> - s = "?";
> - lio_dev_err(lio_dev, "Unsupported Chip\n");
> - }
> -
> - if (!ret)
> - lio_dev_info(lio_dev, "DEVICE : %s\n", s);
> -
> - return ret;
> -}
> -
> -static int
> -lio_first_time_init(struct lio_device *lio_dev,
> - struct rte_pci_device *pdev)
> -{
> - int dpdk_queues;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* set dpdk specific pci device pointer */
> - lio_dev->pci_dev = pdev;
> -
> - /* Identify the LIO type and set device ops */
> - if (lio_chip_specific_setup(lio_dev)) {
> - lio_dev_err(lio_dev, "Chip specific setup failed\n");
> - return -1;
> - }
> -
> - /* Initialize soft command buffer pool */
> - if (lio_setup_sc_buffer_pool(lio_dev)) {
> - lio_dev_err(lio_dev, "sc buffer pool allocation failed\n");
> - return -1;
> - }
> -
> - /* Initialize lists to manage the requests of different types that
> - * arrive from applications for this lio device.
> - */
> - lio_setup_response_list(lio_dev);
> -
> - if (lio_dev->fn_list.setup_mbox(lio_dev)) {
> - lio_dev_err(lio_dev, "Mailbox setup failed\n");
> - goto error;
> - }
> -
> - /* Check PF response */
> - lio_check_pf_hs_response((void *)lio_dev);
> -
> - /* Do handshake and exit if incompatible PF driver */
> - if (cn23xx_pfvf_handshake(lio_dev))
> - goto error;
> -
> - /* Request and wait for device reset. */
> - if (pdev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
> - cn23xx_vf_ask_pf_to_do_flr(lio_dev);
> - /* FLR wait time doubled as a precaution. */
> - rte_delay_ms(LIO_PCI_FLR_WAIT * 2);
> - }
> -
> - if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
> - lio_dev_err(lio_dev, "Failed to configure device registers\n");
> - goto error;
> - }
> -
> - if (lio_setup_instr_queue0(lio_dev)) {
> - lio_dev_err(lio_dev, "Failed to setup instruction queue 0\n");
> - goto error;
> - }
> -
> - dpdk_queues = (int)lio_dev->sriov_info.rings_per_vf;
> -
> - lio_dev->max_tx_queues = dpdk_queues;
> - lio_dev->max_rx_queues = dpdk_queues;
> -
> - /* Enable input and output queues for this device */
> - if (lio_dev->fn_list.enable_io_queues(lio_dev))
> - goto error;
> -
> - return 0;
> -
> -error:
> - lio_free_sc_buffer_pool(lio_dev);
> - if (lio_dev->mbox[0])
> - lio_dev->fn_list.free_mbox(lio_dev);
> - if (lio_dev->instr_queue[0])
> - lio_free_instr_queue0(lio_dev);
> -
> - return -1;
> -}
> -
> -static int
> -lio_eth_dev_uninit(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> - return 0;
> -
> - /* lio_free_sc_buffer_pool */
> - lio_free_sc_buffer_pool(lio_dev);
> -
> - return 0;
> -}
> -
> -static int
> -lio_eth_dev_init(struct rte_eth_dev *eth_dev)
> -{
> - struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(eth_dev);
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
> - eth_dev->tx_pkt_burst = &lio_dev_xmit_pkts;
> -
> - /* Primary does the initialization. */
> - if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> - return 0;
> -
> - rte_eth_copy_pci_info(eth_dev, pdev);
> -
> - if (pdev->mem_resource[0].addr) {
> - lio_dev->hw_addr = pdev->mem_resource[0].addr;
> - } else {
> - PMD_INIT_LOG(ERR, "ERROR: Failed to map BAR0\n");
> - return -ENODEV;
> - }
> -
> - lio_dev->eth_dev = eth_dev;
> - /* set lio device print string */
> - snprintf(lio_dev->dev_string, sizeof(lio_dev->dev_string),
> - "%s[%02x:%02x.%x]", pdev->driver->driver.name,
> - pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
> -
> - lio_dev->port_id = eth_dev->data->port_id;
> -
> - if (lio_first_time_init(lio_dev, pdev)) {
> - lio_dev_err(lio_dev, "Device init failed\n");
> - return -EINVAL;
> - }
> -
> - eth_dev->dev_ops = &liovf_eth_dev_ops;
> - eth_dev->data->mac_addrs = rte_zmalloc("lio", RTE_ETHER_ADDR_LEN, 0);
> - if (eth_dev->data->mac_addrs == NULL) {
> - lio_dev_err(lio_dev,
> - "MAC addresses memory allocation failed\n");
> - eth_dev->dev_ops = NULL;
> - eth_dev->rx_pkt_burst = NULL;
> - eth_dev->tx_pkt_burst = NULL;
> - return -ENOMEM;
> - }
> -
> - rte_atomic64_set(&lio_dev->status, LIO_DEV_RUNNING);
> - rte_wmb();
> -
> - lio_dev->port_configured = 0;
> - /* Always allow unicast packets */
> - lio_dev->ifflags |= LIO_IFFLAG_UNICAST;
> -
> - return 0;
> -}
> -
> -static int
> -lio_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> - struct rte_pci_device *pci_dev)
> -{
> - return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct lio_device),
> - lio_eth_dev_init);
> -}
> -
> -static int
> -lio_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
> -{
> - return rte_eth_dev_pci_generic_remove(pci_dev,
> - lio_eth_dev_uninit);
> -}
> -
> -/* Set of PCI devices this driver supports */
> -static const struct rte_pci_id pci_id_liovf_map[] = {
> - { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, LIO_CN23XX_VF_VID) },
> - { .vendor_id = 0, /* sentinel */ }
> -};
> -
> -static struct rte_pci_driver rte_liovf_pmd = {
> - .id_table = pci_id_liovf_map,
> - .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
> - .probe = lio_eth_dev_pci_probe,
> - .remove = lio_eth_dev_pci_remove,
> -};
> -
> -RTE_PMD_REGISTER_PCI(net_liovf, rte_liovf_pmd);
> -RTE_PMD_REGISTER_PCI_TABLE(net_liovf, pci_id_liovf_map);
> -RTE_PMD_REGISTER_KMOD_DEP(net_liovf, "* igb_uio | vfio-pci");
> -RTE_LOG_REGISTER_SUFFIX(lio_logtype_init, init, NOTICE);
> -RTE_LOG_REGISTER_SUFFIX(lio_logtype_driver, driver, NOTICE);
> diff --git a/drivers/net/liquidio/lio_ethdev.h b/drivers/net/liquidio/lio_ethdev.h
> deleted file mode 100644
> index ece2b03858..0000000000
> --- a/drivers/net/liquidio/lio_ethdev.h
> +++ /dev/null
> @@ -1,179 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_ETHDEV_H_
> -#define _LIO_ETHDEV_H_
> -
> -#include <stdint.h>
> -
> -#include "lio_struct.h"
> -
> -/* timeout to check link state updates from firmware in us */
> -#define LIO_LSC_TIMEOUT 100000 /* 100000us (100ms) */
> -#define LIO_MAX_CMD_TIMEOUT 10000 /* 10000ms (10s) */
> -
> -/* The max frame size with default MTU */
> -#define LIO_ETH_MAX_LEN (RTE_ETHER_MTU + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
> -
> -#define LIO_DEV(_eth_dev) ((_eth_dev)->data->dev_private)
> -
> -/* LIO Response condition variable */
> -struct lio_dev_ctrl_cmd {
> - struct rte_eth_dev *eth_dev;
> - uint64_t cond;
> -};
> -
> -enum lio_bus_speed {
> - LIO_LINK_SPEED_UNKNOWN = 0,
> - LIO_LINK_SPEED_10000 = 10000,
> - LIO_LINK_SPEED_25000 = 25000
> -};
> -
> -struct octeon_if_cfg_info {
> - uint64_t iqmask; /** mask for IQs enabled for the port */
> - uint64_t oqmask; /** mask for OQs enabled for the port */
> - struct octeon_link_info linfo; /** initial link information */
> - char lio_firmware_version[LIO_FW_VERSION_LENGTH];
> -};
> -
> -/** Stats for each NIC port in RX direction. */
> -struct octeon_rx_stats {
> - /* link-level stats */
> - uint64_t total_rcvd;
> - uint64_t bytes_rcvd;
> - uint64_t total_bcst;
> - uint64_t total_mcst;
> - uint64_t runts;
> - uint64_t ctl_rcvd;
> - uint64_t fifo_err; /* Accounts for over/under-run of buffers */
> - uint64_t dmac_drop;
> - uint64_t fcs_err;
> - uint64_t jabber_err;
> - uint64_t l2_err;
> - uint64_t frame_err;
> -
> - /* firmware stats */
> - uint64_t fw_total_rcvd;
> - uint64_t fw_total_fwd;
> - uint64_t fw_total_fwd_bytes;
> - uint64_t fw_err_pko;
> - uint64_t fw_err_link;
> - uint64_t fw_err_drop;
> - uint64_t fw_rx_vxlan;
> - uint64_t fw_rx_vxlan_err;
> -
> - /* LRO */
> - uint64_t fw_lro_pkts; /* Number of packets that are LROed */
> - uint64_t fw_lro_octs; /* Number of octets that are LROed */
> - uint64_t fw_total_lro; /* Number of LRO packets formed */
> -	uint64_t fw_lro_aborts; /* Number of times LRO of a packet was aborted */
> - uint64_t fw_lro_aborts_port;
> - uint64_t fw_lro_aborts_seq;
> - uint64_t fw_lro_aborts_tsval;
> - uint64_t fw_lro_aborts_timer;
> - /* intrmod: packet forward rate */
> - uint64_t fwd_rate;
> -};
> -
> -/** Stats for each NIC port in TX direction. */
> -struct octeon_tx_stats {
> - /* link-level stats */
> - uint64_t total_pkts_sent;
> - uint64_t total_bytes_sent;
> - uint64_t mcast_pkts_sent;
> - uint64_t bcast_pkts_sent;
> - uint64_t ctl_sent;
> - uint64_t one_collision_sent; /* Packets sent after one collision */
> - /* Packets sent after multiple collision */
> - uint64_t multi_collision_sent;
> - /* Packets not sent due to max collisions */
> - uint64_t max_collision_fail;
> - /* Packets not sent due to max deferrals */
> - uint64_t max_deferral_fail;
> - /* Accounts for over/under-run of buffers */
> - uint64_t fifo_err;
> - uint64_t runts;
> - uint64_t total_collisions; /* Total number of collisions detected */
> -
> - /* firmware stats */
> - uint64_t fw_total_sent;
> - uint64_t fw_total_fwd;
> - uint64_t fw_total_fwd_bytes;
> - uint64_t fw_err_pko;
> - uint64_t fw_err_link;
> - uint64_t fw_err_drop;
> - uint64_t fw_err_tso;
> - uint64_t fw_tso; /* number of tso requests */
> - uint64_t fw_tso_fwd; /* number of packets segmented in tso */
> - uint64_t fw_tx_vxlan;
> -};
> -
> -struct octeon_link_stats {
> - struct octeon_rx_stats fromwire;
> - struct octeon_tx_stats fromhost;
> -};
> -
> -union lio_if_cfg {
> - uint64_t if_cfg64;
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t base_queue : 16;
> - uint64_t num_iqueues : 16;
> - uint64_t num_oqueues : 16;
> - uint64_t gmx_port_id : 8;
> - uint64_t vf_id : 8;
> -#else
> - uint64_t vf_id : 8;
> - uint64_t gmx_port_id : 8;
> - uint64_t num_oqueues : 16;
> - uint64_t num_iqueues : 16;
> - uint64_t base_queue : 16;
> -#endif
> - } s;
> -};
> -
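
The union above packs the interface configuration request into the single 64-bit word (if_cfg64) that lio_dev_configure() hands to the firmware; the field order is reversed between the little- and big-endian branches, presumably so each field lands at the same bit position of the 64-bit value on either host. A small look-alike sketch of the little-endian packing (illustrative only; assumes a little-endian host and GCC-style low-bits-first bit-field allocation):

#include <stdint.h>
#include <stdio.h>

/* Look-alike of the little-endian branch of union lio_if_cfg above. */
union if_cfg_like {
	uint64_t if_cfg64;
	struct {
		uint64_t vf_id       : 8;	/* bits  0..7  */
		uint64_t gmx_port_id : 8;	/* bits  8..15 */
		uint64_t num_oqueues : 16;	/* bits 16..31 */
		uint64_t num_iqueues : 16;	/* bits 32..47 */
		uint64_t base_queue  : 16;	/* bits 48..63 */
	} s;
};

int main(void)
{
	union if_cfg_like cfg;

	cfg.if_cfg64 = 0;
	cfg.s.num_iqueues = 4;	/* Tx queues requested */
	cfg.s.num_oqueues = 4;	/* Rx queues requested */
	cfg.s.base_queue = 0;
	cfg.s.gmx_port_id = 1;

	printf("if_cfg64 = 0x%016llx\n", (unsigned long long)cfg.if_cfg64);
	return 0;
}
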
> -struct lio_if_cfg_resp {
> - uint64_t rh;
> - struct octeon_if_cfg_info cfg_info;
> - uint64_t status;
> -};
> -
> -struct lio_link_stats_resp {
> - uint64_t rh;
> - struct octeon_link_stats link_stats;
> - uint64_t status;
> -};
> -
> -struct lio_link_status_resp {
> - uint64_t rh;
> - struct octeon_link_info link_info;
> - uint64_t status;
> -};
> -
> -struct lio_rss_set {
> - struct param {
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - uint64_t flags : 16;
> - uint64_t hashinfo : 32;
> - uint64_t itablesize : 16;
> - uint64_t hashkeysize : 16;
> - uint64_t reserved : 48;
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t itablesize : 16;
> - uint64_t hashinfo : 32;
> - uint64_t flags : 16;
> - uint64_t reserved : 48;
> - uint64_t hashkeysize : 16;
> -#endif
> - } param;
> -
> - uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
> - uint8_t key[LIO_RSS_MAX_KEY_SZ];
> -};
> -
> -void lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
> -
> -void lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
> -
> -#endif /* _LIO_ETHDEV_H_ */
> diff --git a/drivers/net/liquidio/lio_logs.h b/drivers/net/liquidio/lio_logs.h
> deleted file mode 100644
> index f227827081..0000000000
> --- a/drivers/net/liquidio/lio_logs.h
> +++ /dev/null
> @@ -1,58 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_LOGS_H_
> -#define _LIO_LOGS_H_
> -
> -extern int lio_logtype_driver;
> -#define lio_dev_printf(lio_dev, level, fmt, args...) \
> - rte_log(RTE_LOG_ ## level, lio_logtype_driver, \
> - "%s" fmt, (lio_dev)->dev_string, ##args)
> -
> -#define lio_dev_info(lio_dev, fmt, args...) \
> - lio_dev_printf(lio_dev, INFO, "INFO: " fmt, ##args)
> -
> -#define lio_dev_err(lio_dev, fmt, args...) \
> - lio_dev_printf(lio_dev, ERR, "ERROR: %s() " fmt, __func__, ##args)
> -
> -extern int lio_logtype_init;
> -#define PMD_INIT_LOG(level, fmt, args...) \
> - rte_log(RTE_LOG_ ## level, lio_logtype_init, \
> - fmt, ## args)
> -
> -/* Enable these through config options */
> -#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, "%s() >>\n", __func__)
> -
> -#define lio_dev_dbg(lio_dev, fmt, args...) \
> - lio_dev_printf(lio_dev, DEBUG, "DEBUG: %s() " fmt, __func__, ##args)
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_RX
> -#define PMD_RX_LOG(lio_dev, level, fmt, args...) \
> - lio_dev_printf(lio_dev, level, "RX: %s() " fmt, __func__, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_RX */
> -#define PMD_RX_LOG(lio_dev, level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_RX */
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_TX
> -#define PMD_TX_LOG(lio_dev, level, fmt, args...) \
> - lio_dev_printf(lio_dev, level, "TX: %s() " fmt, __func__, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_TX */
> -#define PMD_TX_LOG(lio_dev, level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_TX */
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_MBOX
> -#define PMD_MBOX_LOG(lio_dev, level, fmt, args...) \
> - lio_dev_printf(lio_dev, level, "MBOX: %s() " fmt, __func__, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_MBOX */
> -#define PMD_MBOX_LOG(lio_dev, level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_MBOX */
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
> -#define PMD_REGS_LOG(lio_dev, fmt, args...) \
> - lio_dev_printf(lio_dev, DEBUG, "REGS: " fmt, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_REGS */
> -#define PMD_REGS_LOG(lio_dev, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_REGS */
> -
> -#endif /* _LIO_LOGS_H_ */
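Aside on the header above: the macros rely on the GNU named-variadic-argument extension (args... together with ", ##args"), which drops the trailing comma when a call site passes no extra arguments. A tiny stand-alone illustration of that pattern, using fprintf() in place of rte_log():

#include <stdio.h>

/* GNU-style named variadic macro; ', ##args' removes the comma when the
 * call site supplies no extra arguments.
 */
#define demo_log(fmt, args...) fprintf(stderr, "demo: " fmt, ##args)

int main(void)
{
	demo_log("no extra arguments\n");
	demo_log("one extra argument: %d\n", 42);
	return 0;
}
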
> diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
> deleted file mode 100644
> index e09798ddd7..0000000000
> --- a/drivers/net/liquidio/lio_rxtx.c
> +++ /dev/null
> @@ -1,1804 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <ethdev_driver.h>
> -#include <rte_cycles.h>
> -#include <rte_malloc.h>
> -
> -#include "lio_logs.h"
> -#include "lio_struct.h"
> -#include "lio_ethdev.h"
> -#include "lio_rxtx.h"
> -
> -#define LIO_MAX_SG 12
> -/* Flush iq if available tx_desc fall below LIO_FLUSH_WM */
> -#define LIO_FLUSH_WM(_iq) ((_iq)->nb_desc / 2)
> -#define LIO_PKT_IN_DONE_CNT_MASK 0x00000000FFFFFFFFULL
> -
> -static void
> -lio_droq_compute_max_packet_bufs(struct lio_droq *droq)
> -{
> - uint32_t count = 0;
> -
> - do {
> - count += droq->buffer_size;
> - } while (count < LIO_MAX_RX_PKTLEN);
> -}
> -
> -static void
> -lio_droq_reset_indices(struct lio_droq *droq)
> -{
> - droq->read_idx = 0;
> - droq->write_idx = 0;
> - droq->refill_idx = 0;
> - droq->refill_count = 0;
> - rte_atomic64_set(&droq->pkts_pending, 0);
> -}
> -
> -static void
> -lio_droq_destroy_ring_buffers(struct lio_droq *droq)
> -{
> - uint32_t i;
> -
> - for (i = 0; i < droq->nb_desc; i++) {
> - if (droq->recv_buf_list[i].buffer) {
> - rte_pktmbuf_free((struct rte_mbuf *)
> - droq->recv_buf_list[i].buffer);
> - droq->recv_buf_list[i].buffer = NULL;
> - }
> - }
> -
> - lio_droq_reset_indices(droq);
> -}
> -
> -static int
> -lio_droq_setup_ring_buffers(struct lio_device *lio_dev,
> - struct lio_droq *droq)
> -{
> - struct lio_droq_desc *desc_ring = droq->desc_ring;
> - uint32_t i;
> - void *buf;
> -
> - for (i = 0; i < droq->nb_desc; i++) {
> - buf = rte_pktmbuf_alloc(droq->mpool);
> - if (buf == NULL) {
> - lio_dev_err(lio_dev, "buffer alloc failed\n");
> - droq->stats.rx_alloc_failure++;
> - lio_droq_destroy_ring_buffers(droq);
> - return -ENOMEM;
> - }
> -
> - droq->recv_buf_list[i].buffer = buf;
> - droq->info_list[i].length = 0;
> -
> - /* map ring buffers into memory */
> - desc_ring[i].info_ptr = lio_map_ring_info(droq, i);
> - desc_ring[i].buffer_ptr =
> - lio_map_ring(droq->recv_buf_list[i].buffer);
> - }
> -
> - lio_droq_reset_indices(droq);
> -
> - lio_droq_compute_max_packet_bufs(droq);
> -
> - return 0;
> -}
> -
> -static void
> -lio_dma_zone_free(struct lio_device *lio_dev, const struct rte_memzone *mz)
> -{
> - const struct rte_memzone *mz_tmp;
> - int ret = 0;
> -
> - if (mz == NULL) {
> - lio_dev_err(lio_dev, "Memzone NULL\n");
> - return;
> - }
> -
> - mz_tmp = rte_memzone_lookup(mz->name);
> - if (mz_tmp == NULL) {
> - lio_dev_err(lio_dev, "Memzone %s Not Found\n", mz->name);
> - return;
> - }
> -
> - ret = rte_memzone_free(mz);
> - if (ret)
> - lio_dev_err(lio_dev, "Memzone free Failed ret %d\n", ret);
> -}
> -
> -/**
> - * Frees the space for descriptor ring for the droq.
> - *
> - * @param lio_dev - pointer to the lio device structure
> - * @param q_no - droq no.
> - */
> -static void
> -lio_delete_droq(struct lio_device *lio_dev, uint32_t q_no)
> -{
> - struct lio_droq *droq = lio_dev->droq[q_no];
> -
> - lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
> -
> - lio_droq_destroy_ring_buffers(droq);
> - rte_free(droq->recv_buf_list);
> - droq->recv_buf_list = NULL;
> - lio_dma_zone_free(lio_dev, droq->info_mz);
> - lio_dma_zone_free(lio_dev, droq->desc_ring_mz);
> -
> - memset(droq, 0, LIO_DROQ_SIZE);
> -}
> -
> -static void *
> -lio_alloc_info_buffer(struct lio_device *lio_dev,
> - struct lio_droq *droq, unsigned int socket_id)
> -{
> - droq->info_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
> - "info_list", droq->q_no,
> - (droq->nb_desc *
> - LIO_DROQ_INFO_SIZE),
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> -
> - if (droq->info_mz == NULL)
> - return NULL;
> -
> - droq->info_list_dma = droq->info_mz->iova;
> - droq->info_alloc_size = droq->info_mz->len;
> - droq->info_base_addr = (size_t)droq->info_mz->addr;
> -
> - return droq->info_mz->addr;
> -}
> -
> -/**
> - * Allocates space for the descriptor ring for the droq and
> - * sets the base addr, num desc etc in Octeon registers.
> - *
> - * @param lio_dev - pointer to the lio device structure
> - * @param q_no - droq no.
> - * @param app_ctx - pointer to application context
> - * @return Success: 0 Failure: -1
> - */
> -static int
> -lio_init_droq(struct lio_device *lio_dev, uint32_t q_no,
> - uint32_t num_descs, uint32_t desc_size,
> - struct rte_mempool *mpool, unsigned int socket_id)
> -{
> - uint32_t c_refill_threshold;
> - uint32_t desc_ring_size;
> - struct lio_droq *droq;
> -
> - lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
> -
> - droq = lio_dev->droq[q_no];
> - droq->lio_dev = lio_dev;
> - droq->q_no = q_no;
> - droq->mpool = mpool;
> -
> - c_refill_threshold = LIO_OQ_REFILL_THRESHOLD_CFG(lio_dev);
> -
> - droq->nb_desc = num_descs;
> - droq->buffer_size = desc_size;
> -
> - desc_ring_size = droq->nb_desc * LIO_DROQ_DESC_SIZE;
> - droq->desc_ring_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
> - "droq", q_no,
> - desc_ring_size,
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> -
> - if (droq->desc_ring_mz == NULL) {
> - lio_dev_err(lio_dev,
> - "Output queue %d ring alloc failed\n", q_no);
> - return -1;
> - }
> -
> - droq->desc_ring_dma = droq->desc_ring_mz->iova;
> - droq->desc_ring = (struct lio_droq_desc *)droq->desc_ring_mz->addr;
> -
> - lio_dev_dbg(lio_dev, "droq[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
> - q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
> - lio_dev_dbg(lio_dev, "droq[%d]: num_desc: %d\n", q_no,
> - droq->nb_desc);
> -
> - droq->info_list = lio_alloc_info_buffer(lio_dev, droq, socket_id);
> - if (droq->info_list == NULL) {
> - lio_dev_err(lio_dev, "Cannot allocate memory for info list.\n");
> - goto init_droq_fail;
> - }
> -
> - droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
> - (droq->nb_desc *
> - LIO_DROQ_RECVBUF_SIZE),
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> - if (droq->recv_buf_list == NULL) {
> - lio_dev_err(lio_dev,
> - "Output queue recv buf list alloc failed\n");
> - goto init_droq_fail;
> - }
> -
> - if (lio_droq_setup_ring_buffers(lio_dev, droq))
> - goto init_droq_fail;
> -
> - droq->refill_threshold = c_refill_threshold;
> -
> - rte_spinlock_init(&droq->lock);
> -
> - lio_dev->fn_list.setup_oq_regs(lio_dev, q_no);
> -
> - lio_dev->io_qmask.oq |= (1ULL << q_no);
> -
> - return 0;
> -
> -init_droq_fail:
> - lio_delete_droq(lio_dev, q_no);
> -
> - return -1;
> -}
> -
> -int
> -lio_setup_droq(struct lio_device *lio_dev, int oq_no, int num_descs,
> - int desc_size, struct rte_mempool *mpool, unsigned int socket_id)
> -{
> - struct lio_droq *droq;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* Allocate the DS for the new droq. */
> - droq = rte_zmalloc_socket("ethdev RX queue", sizeof(*droq),
> - RTE_CACHE_LINE_SIZE, socket_id);
> - if (droq == NULL)
> - return -ENOMEM;
> -
> - lio_dev->droq[oq_no] = droq;
> -
> - /* Initialize the Droq */
> - if (lio_init_droq(lio_dev, oq_no, num_descs, desc_size, mpool,
> - socket_id)) {
> - lio_dev_err(lio_dev, "Droq[%u] Initialization Failed\n", oq_no);
> - rte_free(lio_dev->droq[oq_no]);
> - lio_dev->droq[oq_no] = NULL;
> - return -ENOMEM;
> - }
> -
> - lio_dev->num_oqs++;
> -
> - lio_dev_dbg(lio_dev, "Total number of OQ: %d\n", lio_dev->num_oqs);
> -
> - /* Send credit for octeon output queues. credits are always
> - * sent after the output queue is enabled.
> - */
> - rte_write32(lio_dev->droq[oq_no]->nb_desc,
> - lio_dev->droq[oq_no]->pkts_credit_reg);
> - rte_wmb();
> -
> - return 0;
> -}
> -
> -static inline uint32_t
> -lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
> -{
> - uint32_t buf_cnt = 0;
> -
> - while (total_len > (buf_size * buf_cnt))
> - buf_cnt++;
> -
> - return buf_cnt;
> -}
> -
> -/* If we were not able to refill all buffers, try to move around
> - * the buffers that were not dispatched.
> - */
> -static inline uint32_t
> -lio_droq_refill_pullup_descs(struct lio_droq *droq,
> - struct lio_droq_desc *desc_ring)
> -{
> - uint32_t refill_index = droq->refill_idx;
> - uint32_t desc_refilled = 0;
> -
> - while (refill_index != droq->read_idx) {
> - if (droq->recv_buf_list[refill_index].buffer) {
> - droq->recv_buf_list[droq->refill_idx].buffer =
> - droq->recv_buf_list[refill_index].buffer;
> - desc_ring[droq->refill_idx].buffer_ptr =
> - desc_ring[refill_index].buffer_ptr;
> - droq->recv_buf_list[refill_index].buffer = NULL;
> - desc_ring[refill_index].buffer_ptr = 0;
> - do {
> - droq->refill_idx = lio_incr_index(
> - droq->refill_idx, 1,
> - droq->nb_desc);
> - desc_refilled++;
> - droq->refill_count--;
> - } while (droq->recv_buf_list[droq->refill_idx].buffer);
> - }
> - refill_index = lio_incr_index(refill_index, 1,
> - droq->nb_desc);
> - } /* while */
> -
> - return desc_refilled;
> -}
> -
> -/* lio_droq_refill
> - *
> - * @param droq - droq in which descriptors require new buffers.
> - *
> - * Description:
> - * Called during normal DROQ processing in interrupt mode or by the poll
> - * thread to refill the descriptors from which buffers were dispatched
> - * to upper layers. Attempts to allocate new buffers. If that fails, moves
> - * up buffers (that were not dispatched) to form a contiguous ring.
> - *
> - * Returns:
> - * No of descriptors refilled.
> - *
> - * Locks:
> - * This routine is called with droq->lock held.
> - */
> -static uint32_t
> -lio_droq_refill(struct lio_droq *droq)
> -{
> - struct lio_droq_desc *desc_ring;
> - uint32_t desc_refilled = 0;
> - void *buf = NULL;
> -
> - desc_ring = droq->desc_ring;
> -
> - while (droq->refill_count && (desc_refilled < droq->nb_desc)) {
> - /* If a valid buffer exists (happens if there is no dispatch),
> - * reuse the buffer, else allocate.
> - */
> - if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
> - buf = rte_pktmbuf_alloc(droq->mpool);
> - /* If a buffer could not be allocated, no point in
> - * continuing
> - */
> - if (buf == NULL) {
> - droq->stats.rx_alloc_failure++;
> - break;
> - }
> -
> - droq->recv_buf_list[droq->refill_idx].buffer = buf;
> - }
> -
> - desc_ring[droq->refill_idx].buffer_ptr =
> - lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
> - /* Reset any previous values in the length field. */
> - droq->info_list[droq->refill_idx].length = 0;
> -
> - droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
> - droq->nb_desc);
> - desc_refilled++;
> - droq->refill_count--;
> - }
> -
> - if (droq->refill_count)
> - desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
> -
> - /* if droq->refill_count
> - * The refill count would not change in pass two. We only moved buffers
> - * to close the gap in the ring, but we would still have the same no. of
> - * buffers to refill.
> - */
> - return desc_refilled;
> -}
> -
> -static int
> -lio_droq_fast_process_packet(struct lio_device *lio_dev,
> - struct lio_droq *droq,
> - struct rte_mbuf **rx_pkts)
> -{
> - struct rte_mbuf *nicbuf = NULL;
> - struct lio_droq_info *info;
> - uint32_t total_len = 0;
> - int data_total_len = 0;
> - uint32_t pkt_len = 0;
> - union octeon_rh *rh;
> - int data_pkts = 0;
> -
> - info = &droq->info_list[droq->read_idx];
> - lio_swap_8B_data((uint64_t *)info, 2);
> -
> - if (!info->length)
> - return -1;
> -
> - /* Len of resp hdr in included in the received data len. */
> - info->length -= OCTEON_RH_SIZE;
> - rh = &info->rh;
> -
> - total_len += (uint32_t)info->length;
> -
> - if (lio_opcode_slow_path(rh)) {
> - uint32_t buf_cnt;
> -
> - buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
> - (uint32_t)info->length);
> - droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
> - droq->nb_desc);
> - droq->refill_count += buf_cnt;
> - } else {
> - if (info->length <= droq->buffer_size) {
> - if (rh->r_dh.has_hash)
> - pkt_len = (uint32_t)(info->length - 8);
> - else
> - pkt_len = (uint32_t)info->length;
> -
> - nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
> - droq->recv_buf_list[droq->read_idx].buffer = NULL;
> - droq->read_idx = lio_incr_index(
> - droq->read_idx, 1,
> - droq->nb_desc);
> - droq->refill_count++;
> -
> - if (likely(nicbuf != NULL)) {
> - /* We don't have a way to pass flags yet */
> - nicbuf->ol_flags = 0;
> - if (rh->r_dh.has_hash) {
> - uint64_t *hash_ptr;
> -
> - nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
> - hash_ptr = rte_pktmbuf_mtod(nicbuf,
> - uint64_t *);
> - lio_swap_8B_data(hash_ptr, 1);
> - nicbuf->hash.rss = (uint32_t)*hash_ptr;
> - nicbuf->data_off += 8;
> - }
> -
> - nicbuf->pkt_len = pkt_len;
> - nicbuf->data_len = pkt_len;
> - nicbuf->port = lio_dev->port_id;
> - /* Store the mbuf */
> - rx_pkts[data_pkts++] = nicbuf;
> - data_total_len += pkt_len;
> - }
> -
> - /* Prefetch buffer pointers when on a cache line
> - * boundary
> - */
> - if ((droq->read_idx & 3) == 0) {
> - rte_prefetch0(
> - &droq->recv_buf_list[droq->read_idx]);
> - rte_prefetch0(
> - &droq->info_list[droq->read_idx]);
> - }
> - } else {
> - struct rte_mbuf *first_buf = NULL;
> - struct rte_mbuf *last_buf = NULL;
> -
> - while (pkt_len < info->length) {
> - int cpy_len = 0;
> -
> - cpy_len = ((pkt_len + droq->buffer_size) >
> - info->length)
> - ? ((uint32_t)info->length -
> - pkt_len)
> - : droq->buffer_size;
> -
> - nicbuf =
> - droq->recv_buf_list[droq->read_idx].buffer;
> - droq->recv_buf_list[droq->read_idx].buffer =
> - NULL;
> -
> - if (likely(nicbuf != NULL)) {
> - /* Note the first seg */
> - if (!pkt_len)
> - first_buf = nicbuf;
> -
> - nicbuf->port = lio_dev->port_id;
> - /* We don't have a way to pass
> - * flags yet
> - */
> - nicbuf->ol_flags = 0;
> - if ((!pkt_len) && (rh->r_dh.has_hash)) {
> - uint64_t *hash_ptr;
> -
> - nicbuf->ol_flags |=
> - RTE_MBUF_F_RX_RSS_HASH;
> - hash_ptr = rte_pktmbuf_mtod(
> - nicbuf, uint64_t *);
> - lio_swap_8B_data(hash_ptr, 1);
> - nicbuf->hash.rss =
> - (uint32_t)*hash_ptr;
> - nicbuf->data_off += 8;
> - nicbuf->pkt_len = cpy_len - 8;
> - nicbuf->data_len = cpy_len - 8;
> - } else {
> - nicbuf->pkt_len = cpy_len;
> - nicbuf->data_len = cpy_len;
> - }
> -
> - if (pkt_len)
> - first_buf->nb_segs++;
> -
> - if (last_buf)
> - last_buf->next = nicbuf;
> -
> - last_buf = nicbuf;
> - } else {
> - PMD_RX_LOG(lio_dev, ERR, "no buf\n");
> - }
> -
> - pkt_len += cpy_len;
> - droq->read_idx = lio_incr_index(
> - droq->read_idx,
> - 1, droq->nb_desc);
> - droq->refill_count++;
> -
> - /* Prefetch buffer pointers when on a
> - * cache line boundary
> - */
> - if ((droq->read_idx & 3) == 0) {
> - rte_prefetch0(&droq->recv_buf_list
> - [droq->read_idx]);
> -
> - rte_prefetch0(
> - &droq->info_list[droq->read_idx]);
> - }
> - }
> - rx_pkts[data_pkts++] = first_buf;
> - if (rh->r_dh.has_hash)
> - data_total_len += (pkt_len - 8);
> - else
> - data_total_len += pkt_len;
> - }
> -
> - /* Inform upper layer about packet checksum verification */
> - struct rte_mbuf *m = rx_pkts[data_pkts - 1];
> -
> - if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
> - m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
> -
> - if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
> - m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
> - }
> -
> - if (droq->refill_count >= droq->refill_threshold) {
> - int desc_refilled = lio_droq_refill(droq);
> -
> - /* Flush the droq descriptor data to memory to be sure
> - * that when we update the credits the data in memory is
> - * accurate.
> - */
> - rte_wmb();
> - rte_write32(desc_refilled, droq->pkts_credit_reg);
> - /* make sure mmio write completes */
> - rte_wmb();
> - }
> -
> - info->length = 0;
> - info->rh.rh64 = 0;
> -
> - droq->stats.pkts_received++;
> - droq->stats.rx_pkts_received += data_pkts;
> - droq->stats.rx_bytes_received += data_total_len;
> - droq->stats.bytes_received += total_len;
> -
> - return data_pkts;
> -}
> -
> -static uint32_t
> -lio_droq_fast_process_packets(struct lio_device *lio_dev,
> - struct lio_droq *droq,
> - struct rte_mbuf **rx_pkts,
> - uint32_t pkts_to_process)
> -{
> - int ret, data_pkts = 0;
> - uint32_t pkt;
> -
> - for (pkt = 0; pkt < pkts_to_process; pkt++) {
> - ret = lio_droq_fast_process_packet(lio_dev, droq,
> - &rx_pkts[data_pkts]);
> - if (ret < 0) {
> - lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
> - lio_dev->port_id, droq->q_no,
> - droq->read_idx, pkts_to_process);
> - break;
> - }
> - data_pkts += ret;
> - }
> -
> - rte_atomic64_sub(&droq->pkts_pending, pkt);
> -
> - return data_pkts;
> -}
> -
> -static inline uint32_t
> -lio_droq_check_hw_for_pkts(struct lio_droq *droq)
> -{
> - uint32_t last_count;
> - uint32_t pkt_count;
> -
> - pkt_count = rte_read32(droq->pkts_sent_reg);
> -
> - last_count = pkt_count - droq->pkt_count;
> - droq->pkt_count = pkt_count;
> -
> - if (last_count)
> - rte_atomic64_add(&droq->pkts_pending, last_count);
> -
> - return last_count;
> -}
> -
> -uint16_t
> -lio_dev_recv_pkts(void *rx_queue,
> - struct rte_mbuf **rx_pkts,
> - uint16_t budget)
> -{
> - struct lio_droq *droq = rx_queue;
> - struct lio_device *lio_dev = droq->lio_dev;
> - uint32_t pkts_processed = 0;
> - uint32_t pkt_count = 0;
> -
> - lio_droq_check_hw_for_pkts(droq);
> -
> - pkt_count = rte_atomic64_read(&droq->pkts_pending);
> - if (!pkt_count)
> - return 0;
> -
> - if (pkt_count > budget)
> - pkt_count = budget;
> -
> - /* Grab the lock */
> - rte_spinlock_lock(&droq->lock);
> - pkts_processed = lio_droq_fast_process_packets(lio_dev,
> - droq, rx_pkts,
> - pkt_count);
> -
> - if (droq->pkt_count) {
> - rte_write32(droq->pkt_count, droq->pkts_sent_reg);
> - droq->pkt_count = 0;
> - }
> -
> - /* Release the spin lock */
> - rte_spinlock_unlock(&droq->lock);
> -
> - return pkts_processed;
> -}
> -
> -void
> -lio_delete_droq_queue(struct lio_device *lio_dev,
> - int oq_no)
> -{
> - lio_delete_droq(lio_dev, oq_no);
> - lio_dev->num_oqs--;
> - rte_free(lio_dev->droq[oq_no]);
> - lio_dev->droq[oq_no] = NULL;
> -}
> -
> -/**
> - * lio_init_instr_queue()
> - * @param lio_dev - pointer to the lio device structure.
> - * @param txpciq - queue to be initialized.
> - *
> - * Called at driver init time for each input queue. iq_conf has the
> - * configuration parameters for the queue.
> - *
> - * @return Success: 0 Failure: -1
> - */
> -static int
> -lio_init_instr_queue(struct lio_device *lio_dev,
> - union octeon_txpciq txpciq,
> - uint32_t num_descs, unsigned int socket_id)
> -{
> - uint32_t iq_no = (uint32_t)txpciq.s.q_no;
> - struct lio_instr_queue *iq;
> - uint32_t instr_type;
> - uint32_t q_size;
> -
> - instr_type = LIO_IQ_INSTR_TYPE(lio_dev);
> -
> - q_size = instr_type * num_descs;
> - iq = lio_dev->instr_queue[iq_no];
> - iq->iq_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
> - "instr_queue", iq_no, q_size,
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> - if (iq->iq_mz == NULL) {
> - lio_dev_err(lio_dev, "Cannot allocate memory for instr queue %d\n",
> - iq_no);
> - return -1;
> - }
> -
> - iq->base_addr_dma = iq->iq_mz->iova;
> - iq->base_addr = (uint8_t *)iq->iq_mz->addr;
> -
> - iq->nb_desc = num_descs;
> -
> - /* Initialize a list to holds requests that have been posted to Octeon
> - * but has yet to be fetched by octeon
> - */
> - iq->request_list = rte_zmalloc_socket("request_list",
> - sizeof(*iq->request_list) *
> - num_descs,
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> - if (iq->request_list == NULL) {
> - lio_dev_err(lio_dev, "Alloc failed for IQ[%d] nr free list\n",
> - iq_no);
> - lio_dma_zone_free(lio_dev, iq->iq_mz);
> - return -1;
> - }
> -
> - lio_dev_dbg(lio_dev, "IQ[%d]: base: %p basedma: %lx count: %d\n",
> - iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
> - iq->nb_desc);
> -
> - iq->lio_dev = lio_dev;
> - iq->txpciq.txpciq64 = txpciq.txpciq64;
> - iq->fill_cnt = 0;
> - iq->host_write_index = 0;
> - iq->lio_read_index = 0;
> - iq->flush_index = 0;
> -
> - rte_atomic64_set(&iq->instr_pending, 0);
> -
> - /* Initialize the spinlock for this instruction queue */
> - rte_spinlock_init(&iq->lock);
> - rte_spinlock_init(&iq->post_lock);
> -
> - rte_atomic64_clear(&iq->iq_flush_running);
> -
> - lio_dev->io_qmask.iq |= (1ULL << iq_no);
> -
> - /* Set the 32B/64B mode for each input queue */
> - lio_dev->io_qmask.iq64B |= ((instr_type == 64) << iq_no);
> - iq->iqcmd_64B = (instr_type == 64);
> -
> - lio_dev->fn_list.setup_iq_regs(lio_dev, iq_no);
> -
> - return 0;
> -}
> -
> -int
> -lio_setup_instr_queue0(struct lio_device *lio_dev)
> -{
> - union octeon_txpciq txpciq;
> - uint32_t num_descs = 0;
> - uint32_t iq_no = 0;
> -
> - num_descs = LIO_NUM_DEF_TX_DESCS_CFG(lio_dev);
> -
> - lio_dev->num_iqs = 0;
> -
> - lio_dev->instr_queue[0] = rte_zmalloc(NULL,
> - sizeof(struct lio_instr_queue), 0);
> - if (lio_dev->instr_queue[0] == NULL)
> - return -ENOMEM;
> -
> - lio_dev->instr_queue[0]->q_index = 0;
> - lio_dev->instr_queue[0]->app_ctx = (void *)(size_t)0;
> - txpciq.txpciq64 = 0;
> - txpciq.s.q_no = iq_no;
> - txpciq.s.pkind = lio_dev->pfvf_hsword.pkind;
> - txpciq.s.use_qpg = 0;
> - txpciq.s.qpg = 0;
> - if (lio_init_instr_queue(lio_dev, txpciq, num_descs, SOCKET_ID_ANY)) {
> - rte_free(lio_dev->instr_queue[0]);
> - lio_dev->instr_queue[0] = NULL;
> - return -1;
> - }
> -
> - lio_dev->num_iqs++;
> -
> - return 0;
> -}
> -
> -/**
> - * lio_delete_instr_queue()
> - * @param lio_dev - pointer to the lio device structure.
> - * @param iq_no - queue to be deleted.
> - *
> - * Called at driver unload time for each input queue. Deletes all
> - * allocated resources for the input queue.
> - */
> -static void
> -lio_delete_instr_queue(struct lio_device *lio_dev, uint32_t iq_no)
> -{
> - struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> -
> - rte_free(iq->request_list);
> - iq->request_list = NULL;
> - lio_dma_zone_free(lio_dev, iq->iq_mz);
> -}
> -
> -void
> -lio_free_instr_queue0(struct lio_device *lio_dev)
> -{
> - lio_delete_instr_queue(lio_dev, 0);
> - rte_free(lio_dev->instr_queue[0]);
> - lio_dev->instr_queue[0] = NULL;
> - lio_dev->num_iqs--;
> -}
> -
> -/* Return 0 on success, -1 on failure */
> -int
> -lio_setup_iq(struct lio_device *lio_dev, int q_index,
> - union octeon_txpciq txpciq, uint32_t num_descs, void *app_ctx,
> - unsigned int socket_id)
> -{
> - uint32_t iq_no = (uint32_t)txpciq.s.q_no;
> -
> - lio_dev->instr_queue[iq_no] = rte_zmalloc_socket("ethdev TX queue",
> - sizeof(struct lio_instr_queue),
> - RTE_CACHE_LINE_SIZE, socket_id);
> - if (lio_dev->instr_queue[iq_no] == NULL)
> - return -1;
> -
> - lio_dev->instr_queue[iq_no]->q_index = q_index;
> - lio_dev->instr_queue[iq_no]->app_ctx = app_ctx;
> -
> - if (lio_init_instr_queue(lio_dev, txpciq, num_descs, socket_id)) {
> - rte_free(lio_dev->instr_queue[iq_no]);
> - lio_dev->instr_queue[iq_no] = NULL;
> - return -1;
> - }
> -
> - lio_dev->num_iqs++;
> -
> - return 0;
> -}
> -
> -int
> -lio_wait_for_instr_fetch(struct lio_device *lio_dev)
> -{
> - int pending, instr_cnt;
> - int i, retry = 1000;
> -
> - do {
> - instr_cnt = 0;
> -
> - for (i = 0; i < LIO_MAX_INSTR_QUEUES(lio_dev); i++) {
> - if (!(lio_dev->io_qmask.iq & (1ULL << i)))
> - continue;
> -
> - if (lio_dev->instr_queue[i] == NULL)
> - break;
> -
> - pending = rte_atomic64_read(
> - &lio_dev->instr_queue[i]->instr_pending);
> - if (pending)
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[i]);
> -
> - instr_cnt += pending;
> - }
> -
> - if (instr_cnt == 0)
> - break;
> -
> - rte_delay_ms(1);
> -
> - } while (retry-- && instr_cnt);
> -
> - return instr_cnt;
> -}
> -
> -static inline void
> -lio_ring_doorbell(struct lio_device *lio_dev,
> - struct lio_instr_queue *iq)
> -{
> - if (rte_atomic64_read(&lio_dev->status) == LIO_DEV_RUNNING) {
> - rte_write32(iq->fill_cnt, iq->doorbell_reg);
> - /* make sure doorbell write goes through */
> - rte_wmb();
> - iq->fill_cnt = 0;
> - }
> -}
> -
> -static inline void
> -copy_cmd_into_iq(struct lio_instr_queue *iq, uint8_t *cmd)
> -{
> - uint8_t *iqptr, cmdsize;
> -
> - cmdsize = ((iq->iqcmd_64B) ? 64 : 32);
> - iqptr = iq->base_addr + (cmdsize * iq->host_write_index);
> -
> - rte_memcpy(iqptr, cmd, cmdsize);
> -}
> -
> -static inline struct lio_iq_post_status
> -post_command2(struct lio_instr_queue *iq, uint8_t *cmd)
> -{
> - struct lio_iq_post_status st;
> -
> - st.status = LIO_IQ_SEND_OK;
> -
> - /* This ensures that the read index does not wrap around to the same
> - * position if queue gets full before Octeon could fetch any instr.
> - */
> - if (rte_atomic64_read(&iq->instr_pending) >=
> - (int32_t)(iq->nb_desc - 1)) {
> - st.status = LIO_IQ_SEND_FAILED;
> - st.index = -1;
> - return st;
> - }
> -
> - if (rte_atomic64_read(&iq->instr_pending) >=
> - (int32_t)(iq->nb_desc - 2))
> - st.status = LIO_IQ_SEND_STOP;
> -
> - copy_cmd_into_iq(iq, cmd);
> -
> - /* "index" is returned, host_write_index is modified. */
> - st.index = iq->host_write_index;
> - iq->host_write_index = lio_incr_index(iq->host_write_index, 1,
> - iq->nb_desc);
> - iq->fill_cnt++;
> -
> - /* Flush the command into memory. We need to be sure the data is in
> - * memory before indicating that the instruction is pending.
> - */
> - rte_wmb();
> -
> - rte_atomic64_inc(&iq->instr_pending);
> -
> - return st;
> -}
> -
> -static inline void
> -lio_add_to_request_list(struct lio_instr_queue *iq,
> - int idx, void *buf, int reqtype)
> -{
> - iq->request_list[idx].buf = buf;
> - iq->request_list[idx].reqtype = reqtype;
> -}
> -
> -static inline void
> -lio_free_netsgbuf(void *buf)
> -{
> - struct lio_buf_free_info *finfo = buf;
> - struct lio_device *lio_dev = finfo->lio_dev;
> - struct rte_mbuf *m = finfo->mbuf;
> - struct lio_gather *g = finfo->g;
> - uint8_t iq = finfo->iq_no;
> -
> - /* This will take care of multiple segments also */
> - rte_pktmbuf_free(m);
> -
> - rte_spinlock_lock(&lio_dev->glist_lock[iq]);
> - STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq], &g->list, entries);
> - rte_spinlock_unlock(&lio_dev->glist_lock[iq]);
> - rte_free(finfo);
> -}
> -
> -/* Can only run in process context */
> -static int
> -lio_process_iq_request_list(struct lio_device *lio_dev,
> - struct lio_instr_queue *iq)
> -{
> - struct octeon_instr_irh *irh = NULL;
> - uint32_t old = iq->flush_index;
> - struct lio_soft_command *sc;
> - uint32_t inst_count = 0;
> - int reqtype;
> - void *buf;
> -
> - while (old != iq->lio_read_index) {
> - reqtype = iq->request_list[old].reqtype;
> - buf = iq->request_list[old].buf;
> -
> - if (reqtype == LIO_REQTYPE_NONE)
> - goto skip_this;
> -
> - switch (reqtype) {
> - case LIO_REQTYPE_NORESP_NET:
> - rte_pktmbuf_free((struct rte_mbuf *)buf);
> - break;
> - case LIO_REQTYPE_NORESP_NET_SG:
> - lio_free_netsgbuf(buf);
> - break;
> - case LIO_REQTYPE_SOFT_COMMAND:
> - sc = buf;
> - irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
> - if (irh->rflag) {
> - /* We're expecting a response from Octeon.
> - * It's up to lio_process_ordered_list() to
> - * process sc. Add sc to the ordered soft
> - * command response list because we expect
> - * a response from Octeon.
> - */
> - rte_spinlock_lock(&lio_dev->response_list.lock);
> - rte_atomic64_inc(
> - &lio_dev->response_list.pending_req_count);
> - STAILQ_INSERT_TAIL(
> - &lio_dev->response_list.head,
> - &sc->node, entries);
> - rte_spinlock_unlock(
> - &lio_dev->response_list.lock);
> - } else {
> - if (sc->callback) {
> - /* This callback must not sleep */
> - sc->callback(LIO_REQUEST_DONE,
> - sc->callback_arg);
> - }
> - }
> - break;
> - default:
> - lio_dev_err(lio_dev,
> - "Unknown reqtype: %d buf: %p at idx %d\n",
> - reqtype, buf, old);
> - }
> -
> - iq->request_list[old].buf = NULL;
> - iq->request_list[old].reqtype = 0;
> -
> -skip_this:
> - inst_count++;
> - old = lio_incr_index(old, 1, iq->nb_desc);
> - }
> -
> - iq->flush_index = old;
> -
> - return inst_count;
> -}
> -
> -static void
> -lio_update_read_index(struct lio_instr_queue *iq)
> -{
> - uint32_t pkt_in_done = rte_read32(iq->inst_cnt_reg);
> - uint32_t last_done;
> -
> - last_done = pkt_in_done - iq->pkt_in_done;
> - iq->pkt_in_done = pkt_in_done;
> -
> - /* Add last_done and modulo with the IQ size to get new index */
> - iq->lio_read_index = (iq->lio_read_index +
> - (uint32_t)(last_done & LIO_PKT_IN_DONE_CNT_MASK)) %
> - iq->nb_desc;
> -}
> -
> -int
> -lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq)
> -{
> - uint32_t inst_processed = 0;
> - int tx_done = 1;
> -
> - if (rte_atomic64_test_and_set(&iq->iq_flush_running) == 0)
> - return tx_done;
> -
> - rte_spinlock_lock(&iq->lock);
> -
> - lio_update_read_index(iq);
> -
> - do {
> - /* Process any outstanding IQ packets. */
> - if (iq->flush_index == iq->lio_read_index)
> - break;
> -
> - inst_processed = lio_process_iq_request_list(lio_dev, iq);
> -
> - if (inst_processed) {
> - rte_atomic64_sub(&iq->instr_pending, inst_processed);
> - iq->stats.instr_processed += inst_processed;
> - }
> -
> - inst_processed = 0;
> -
> - } while (1);
> -
> - rte_spinlock_unlock(&iq->lock);
> -
> - rte_atomic64_clear(&iq->iq_flush_running);
> -
> - return tx_done;
> -}
> -
> -static int
> -lio_send_command(struct lio_device *lio_dev, uint32_t iq_no, void *cmd,
> - void *buf, uint32_t datasize, uint32_t reqtype)
> -{
> - struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> - struct lio_iq_post_status st;
> -
> - rte_spinlock_lock(&iq->post_lock);
> -
> - st = post_command2(iq, cmd);
> -
> - if (st.status != LIO_IQ_SEND_FAILED) {
> - lio_add_to_request_list(iq, st.index, buf, reqtype);
> - LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, bytes_sent,
> - datasize);
> - LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_posted, 1);
> -
> - lio_ring_doorbell(lio_dev, iq);
> - } else {
> - LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_dropped, 1);
> - }
> -
> - rte_spinlock_unlock(&iq->post_lock);
> -
> - return st.status;
> -}
> -
> -void
> -lio_prepare_soft_command(struct lio_device *lio_dev,
> - struct lio_soft_command *sc, uint8_t opcode,
> - uint8_t subcode, uint32_t irh_ossp, uint64_t ossp0,
> - uint64_t ossp1)
> -{
> - struct octeon_instr_pki_ih3 *pki_ih3;
> - struct octeon_instr_ih3 *ih3;
> - struct octeon_instr_irh *irh;
> - struct octeon_instr_rdp *rdp;
> -
> - RTE_ASSERT(opcode <= 15);
> - RTE_ASSERT(subcode <= 127);
> -
> - ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
> -
> - ih3->pkind = lio_dev->instr_queue[sc->iq_no]->txpciq.s.pkind;
> -
> - pki_ih3 = (struct octeon_instr_pki_ih3 *)&sc->cmd.cmd3.pki_ih3;
> -
> - pki_ih3->w = 1;
> - pki_ih3->raw = 1;
> - pki_ih3->utag = 1;
> - pki_ih3->uqpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.use_qpg;
> - pki_ih3->utt = 1;
> -
> - pki_ih3->tag = LIO_CONTROL;
> - pki_ih3->tagtype = OCTEON_ATOMIC_TAG;
> - pki_ih3->qpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.qpg;
> - pki_ih3->pm = 0x7;
> - pki_ih3->sl = 8;
> -
> - if (sc->datasize)
> - ih3->dlengsz = sc->datasize;
> -
> - irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
> - irh->opcode = opcode;
> - irh->subcode = subcode;
> -
> - /* opcode/subcode specific parameters (ossp) */
> - irh->ossp = irh_ossp;
> - sc->cmd.cmd3.ossp[0] = ossp0;
> - sc->cmd.cmd3.ossp[1] = ossp1;
> -
> - if (sc->rdatasize) {
> - rdp = (struct octeon_instr_rdp *)&sc->cmd.cmd3.rdp;
> - rdp->pcie_port = lio_dev->pcie_port;
> - rdp->rlen = sc->rdatasize;
> - irh->rflag = 1;
> - /* PKI IH3 */
> - ih3->fsz = OCTEON_SOFT_CMD_RESP_IH3;
> - } else {
> - irh->rflag = 0;
> - /* PKI IH3 */
> - ih3->fsz = OCTEON_PCI_CMD_O3;
> - }
> -}
> -
> -int
> -lio_send_soft_command(struct lio_device *lio_dev,
> - struct lio_soft_command *sc)
> -{
> - struct octeon_instr_ih3 *ih3;
> - struct octeon_instr_irh *irh;
> - uint32_t len = 0;
> -
> - ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
> - if (ih3->dlengsz) {
> - RTE_ASSERT(sc->dmadptr);
> - sc->cmd.cmd3.dptr = sc->dmadptr;
> - }
> -
> - irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
> - if (irh->rflag) {
> - RTE_ASSERT(sc->dmarptr);
> - RTE_ASSERT(sc->status_word != NULL);
> - *sc->status_word = LIO_COMPLETION_WORD_INIT;
> - sc->cmd.cmd3.rptr = sc->dmarptr;
> - }
> -
> - len = (uint32_t)ih3->dlengsz;
> -
> - if (sc->wait_time)
> - sc->timeout = lio_uptime + sc->wait_time;
> -
> - return lio_send_command(lio_dev, sc->iq_no, &sc->cmd, sc, len,
> - LIO_REQTYPE_SOFT_COMMAND);
> -}
> -
> -int
> -lio_setup_sc_buffer_pool(struct lio_device *lio_dev)
> -{
> - char sc_pool_name[RTE_MEMPOOL_NAMESIZE];
> - uint16_t buf_size;
> -
> - buf_size = LIO_SOFT_COMMAND_BUFFER_SIZE + RTE_PKTMBUF_HEADROOM;
> - snprintf(sc_pool_name, sizeof(sc_pool_name),
> - "lio_sc_pool_%u", lio_dev->port_id);
> - lio_dev->sc_buf_pool = rte_pktmbuf_pool_create(sc_pool_name,
> - LIO_MAX_SOFT_COMMAND_BUFFERS,
> - 0, 0, buf_size, SOCKET_ID_ANY);
> - return 0;
> -}
> -
> -void
> -lio_free_sc_buffer_pool(struct lio_device *lio_dev)
> -{
> - rte_mempool_free(lio_dev->sc_buf_pool);
> -}
> -
> -struct lio_soft_command *
> -lio_alloc_soft_command(struct lio_device *lio_dev, uint32_t datasize,
> - uint32_t rdatasize, uint32_t ctxsize)
> -{
> - uint32_t offset = sizeof(struct lio_soft_command);
> - struct lio_soft_command *sc;
> - struct rte_mbuf *m;
> - uint64_t dma_addr;
> -
> - RTE_ASSERT((offset + datasize + rdatasize + ctxsize) <=
> - LIO_SOFT_COMMAND_BUFFER_SIZE);
> -
> - m = rte_pktmbuf_alloc(lio_dev->sc_buf_pool);
> - if (m == NULL) {
> - lio_dev_err(lio_dev, "Cannot allocate mbuf for sc\n");
> - return NULL;
> - }
> -
> - /* set rte_mbuf data size and there is only 1 segment */
> - m->pkt_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
> - m->data_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
> -
> - /* use rte_mbuf buffer for soft command */
> - sc = rte_pktmbuf_mtod(m, struct lio_soft_command *);
> - memset(sc, 0, LIO_SOFT_COMMAND_BUFFER_SIZE);
> - sc->size = LIO_SOFT_COMMAND_BUFFER_SIZE;
> - sc->dma_addr = rte_mbuf_data_iova(m);
> - sc->mbuf = m;
> -
> - dma_addr = sc->dma_addr;
> -
> - if (ctxsize) {
> - sc->ctxptr = (uint8_t *)sc + offset;
> - sc->ctxsize = ctxsize;
> - }
> -
> - /* Start data at 128 byte boundary */
> - offset = (offset + ctxsize + 127) & 0xffffff80;
> -
> - if (datasize) {
> - sc->virtdptr = (uint8_t *)sc + offset;
> - sc->dmadptr = dma_addr + offset;
> - sc->datasize = datasize;
> - }
> -
> - /* Start rdata at 128 byte boundary */
> - offset = (offset + datasize + 127) & 0xffffff80;
> -
> - if (rdatasize) {
> - RTE_ASSERT(rdatasize >= 16);
> - sc->virtrptr = (uint8_t *)sc + offset;
> - sc->dmarptr = dma_addr + offset;
> - sc->rdatasize = rdatasize;
> - sc->status_word = (uint64_t *)((uint8_t *)(sc->virtrptr) +
> - rdatasize - 8);
> - }
> -
> - return sc;
> -}
> -
> -void
> -lio_free_soft_command(struct lio_soft_command *sc)
> -{
> - rte_pktmbuf_free(sc->mbuf);
> -}
> -
> -void
> -lio_setup_response_list(struct lio_device *lio_dev)
> -{
> - STAILQ_INIT(&lio_dev->response_list.head);
> - rte_spinlock_init(&lio_dev->response_list.lock);
> - rte_atomic64_set(&lio_dev->response_list.pending_req_count, 0);
> -}
> -
> -int
> -lio_process_ordered_list(struct lio_device *lio_dev)
> -{
> - int resp_to_process = LIO_MAX_ORD_REQS_TO_PROCESS;
> - struct lio_response_list *ordered_sc_list;
> - struct lio_soft_command *sc;
> - int request_complete = 0;
> - uint64_t status64;
> - uint32_t status;
> -
> - ordered_sc_list = &lio_dev->response_list;
> -
> - do {
> - rte_spinlock_lock(&ordered_sc_list->lock);
> -
> - if (STAILQ_EMPTY(&ordered_sc_list->head)) {
> - /* ordered_sc_list is empty; there is
> - * nothing to process
> - */
> - rte_spinlock_unlock(&ordered_sc_list->lock);
> - return -1;
> - }
> -
> - sc = LIO_STQUEUE_FIRST_ENTRY(&ordered_sc_list->head,
> - struct lio_soft_command, node);
> -
> - status = LIO_REQUEST_PENDING;
> -
> - /* check if octeon has finished DMA'ing a response
> - * to where rptr is pointing to
> - */
> - status64 = *sc->status_word;
> -
> - if (status64 != LIO_COMPLETION_WORD_INIT) {
> - /* This logic ensures that all 64b have been written.
> - * 1. check byte 0 for non-FF
> - * 2. if non-FF, then swap result from BE to host order
> - * 3. check byte 7 (swapped to 0) for non-FF
> - * 4. if non-FF, use the low 32-bit status code
> - * 5. if either byte 0 or byte 7 is FF, don't use status
> - */
> - if ((status64 & 0xff) != 0xff) {
> - lio_swap_8B_data(&status64, 1);
> - if (((status64 & 0xff) != 0xff)) {
> - /* retrieve 16-bit firmware status */
> - status = (uint32_t)(status64 &
> - 0xffffULL);
> - if (status) {
> - status =
> - LIO_FIRMWARE_STATUS_CODE(
> - status);
> - } else {
> - /* i.e. no error */
> - status = LIO_REQUEST_DONE;
> - }
> - }
> - }
> - } else if ((sc->timeout && lio_check_timeout(lio_uptime,
> - sc->timeout))) {
> - lio_dev_err(lio_dev,
> - "cmd failed, timeout (%ld, %ld)\n",
> - (long)lio_uptime, (long)sc->timeout);
> - status = LIO_REQUEST_TIMEOUT;
> - }
> -
> - if (status != LIO_REQUEST_PENDING) {
> - /* we have received a response or we have timed out.
> - * remove node from linked list
> - */
> - STAILQ_REMOVE(&ordered_sc_list->head,
> - &sc->node, lio_stailq_node, entries);
> - rte_atomic64_dec(
> - &lio_dev->response_list.pending_req_count);
> - rte_spinlock_unlock(&ordered_sc_list->lock);
> -
> - if (sc->callback)
> - sc->callback(status, sc->callback_arg);
> -
> - request_complete++;
> - } else {
> - /* no response yet */
> - request_complete = 0;
> - rte_spinlock_unlock(&ordered_sc_list->lock);
> - }
> -
> - /* If we hit the Max Ordered requests to process every loop,
> - * we quit and let this function be invoked the next time
> - * the poll thread runs to process the remaining requests.
> - * This function can take up the entire CPU if there is
> - * no upper limit to the requests processed.
> - */
> - if (request_complete >= resp_to_process)
> - break;
> - } while (request_complete);
> -
> - return 0;
> -}
> -
> -static inline struct lio_stailq_node *
> -list_delete_first_node(struct lio_stailq_head *head)
> -{
> - struct lio_stailq_node *node;
> -
> - if (STAILQ_EMPTY(head))
> - node = NULL;
> - else
> - node = STAILQ_FIRST(head);
> -
> - if (node)
> - STAILQ_REMOVE(head, node, lio_stailq_node, entries);
> -
> - return node;
> -}
> -
> -void
> -lio_delete_sglist(struct lio_instr_queue *txq)
> -{
> - struct lio_device *lio_dev = txq->lio_dev;
> - int iq_no = txq->q_index;
> - struct lio_gather *g;
> -
> - if (lio_dev->glist_head == NULL)
> - return;
> -
> - do {
> - g = (struct lio_gather *)list_delete_first_node(
> - &lio_dev->glist_head[iq_no]);
> - if (g) {
> - if (g->sg)
> - rte_free(
> - (void *)((unsigned long)g->sg - g->adjust));
> - rte_free(g);
> - }
> - } while (g);
> -}
> -
> -/**
> - * \brief Setup gather lists
> - * @param lio per-network private data
> - */
> -int
> -lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
> - int fw_mapped_iq, int num_descs, unsigned int socket_id)
> -{
> - struct lio_gather *g;
> - int i;
> -
> - rte_spinlock_init(&lio_dev->glist_lock[iq_no]);
> -
> - STAILQ_INIT(&lio_dev->glist_head[iq_no]);
> -
> - for (i = 0; i < num_descs; i++) {
> - g = rte_zmalloc_socket(NULL, sizeof(*g), RTE_CACHE_LINE_SIZE,
> - socket_id);
> - if (g == NULL) {
> - lio_dev_err(lio_dev,
> - "lio_gather memory allocation failed for qno %d\n",
> - iq_no);
> - break;
> - }
> -
> - g->sg_size =
> - ((ROUNDUP4(LIO_MAX_SG) >> 2) * LIO_SG_ENTRY_SIZE);
> -
> - g->sg = rte_zmalloc_socket(NULL, g->sg_size + 8,
> - RTE_CACHE_LINE_SIZE, socket_id);
> - if (g->sg == NULL) {
> - lio_dev_err(lio_dev,
> - "sg list memory allocation failed for qno %d\n",
> - iq_no);
> - rte_free(g);
> - break;
> - }
> -
> - /* The gather component should be aligned on 64-bit boundary */
> - if (((unsigned long)g->sg) & 7) {
> - g->adjust = 8 - (((unsigned long)g->sg) & 7);
> - g->sg =
> - (struct lio_sg_entry *)((unsigned long)g->sg +
> - g->adjust);
> - }
> -
> - STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq_no], &g->list,
> - entries);
> - }
> -
> - if (i != num_descs) {
> - lio_delete_sglist(lio_dev->instr_queue[fw_mapped_iq]);
> - return -ENOMEM;
> - }
> -
> - return 0;
> -}
> -
> -void
> -lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no)
> -{
> - lio_delete_instr_queue(lio_dev, iq_no);
> - rte_free(lio_dev->instr_queue[iq_no]);
> - lio_dev->instr_queue[iq_no] = NULL;
> - lio_dev->num_iqs--;
> -}
> -
> -static inline uint32_t
> -lio_iq_get_available(struct lio_device *lio_dev, uint32_t q_no)
> -{
> - return ((lio_dev->instr_queue[q_no]->nb_desc - 1) -
> - (uint32_t)rte_atomic64_read(
> - &lio_dev->instr_queue[q_no]->instr_pending));
> -}
> -
> -static inline int
> -lio_iq_is_full(struct lio_device *lio_dev, uint32_t q_no)
> -{
> - return ((uint32_t)rte_atomic64_read(
> - &lio_dev->instr_queue[q_no]->instr_pending) >=
> - (lio_dev->instr_queue[q_no]->nb_desc - 2));
> -}
> -
> -static int
> -lio_dev_cleanup_iq(struct lio_device *lio_dev, int iq_no)
> -{
> - struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> - uint32_t count = 10000;
> -
> - while ((lio_iq_get_available(lio_dev, iq_no) < LIO_FLUSH_WM(iq)) &&
> - --count)
> - lio_flush_iq(lio_dev, iq);
> -
> - return count ? 0 : 1;
> -}
> -
> -static void
> -lio_ctrl_cmd_callback(uint32_t status __rte_unused, void *sc_ptr)
> -{
> - struct lio_soft_command *sc = sc_ptr;
> - struct lio_dev_ctrl_cmd *ctrl_cmd;
> - struct lio_ctrl_pkt *ctrl_pkt;
> -
> - ctrl_pkt = (struct lio_ctrl_pkt *)sc->ctxptr;
> - ctrl_cmd = ctrl_pkt->ctrl_cmd;
> - ctrl_cmd->cond = 1;
> -
> - lio_free_soft_command(sc);
> -}
> -
> -static inline struct lio_soft_command *
> -lio_alloc_ctrl_pkt_sc(struct lio_device *lio_dev,
> - struct lio_ctrl_pkt *ctrl_pkt)
> -{
> - struct lio_soft_command *sc = NULL;
> - uint32_t uddsize, datasize;
> - uint32_t rdatasize;
> - uint8_t *data;
> -
> - uddsize = (uint32_t)(ctrl_pkt->ncmd.s.more * 8);
> -
> - datasize = OCTEON_CMD_SIZE + uddsize;
> - rdatasize = (ctrl_pkt->wait_time) ? 16 : 0;
> -
> - sc = lio_alloc_soft_command(lio_dev, datasize,
> - rdatasize, sizeof(struct lio_ctrl_pkt));
> - if (sc == NULL)
> - return NULL;
> -
> - rte_memcpy(sc->ctxptr, ctrl_pkt, sizeof(struct lio_ctrl_pkt));
> -
> - data = (uint8_t *)sc->virtdptr;
> -
> - rte_memcpy(data, &ctrl_pkt->ncmd, OCTEON_CMD_SIZE);
> -
> - lio_swap_8B_data((uint64_t *)data, OCTEON_CMD_SIZE >> 3);
> -
> - if (uddsize) {
> - /* Endian-Swap for UDD should have been done by caller. */
> - rte_memcpy(data + OCTEON_CMD_SIZE, ctrl_pkt->udd, uddsize);
> - }
> -
> - sc->iq_no = (uint32_t)ctrl_pkt->iq_no;
> -
> - lio_prepare_soft_command(lio_dev, sc,
> - LIO_OPCODE, LIO_OPCODE_CMD,
> - 0, 0, 0);
> -
> - sc->callback = lio_ctrl_cmd_callback;
> - sc->callback_arg = sc;
> - sc->wait_time = ctrl_pkt->wait_time;
> -
> - return sc;
> -}
> -
> -int
> -lio_send_ctrl_pkt(struct lio_device *lio_dev, struct lio_ctrl_pkt *ctrl_pkt)
> -{
> - struct lio_soft_command *sc = NULL;
> - int retval;
> -
> - sc = lio_alloc_ctrl_pkt_sc(lio_dev, ctrl_pkt);
> - if (sc == NULL) {
> - lio_dev_err(lio_dev, "soft command allocation failed\n");
> - return -1;
> - }
> -
> - retval = lio_send_soft_command(lio_dev, sc);
> - if (retval == LIO_IQ_SEND_FAILED) {
> - lio_free_soft_command(sc);
> - lio_dev_err(lio_dev, "Port: %d soft command: %d send failed status: %x\n",
> - lio_dev->port_id, ctrl_pkt->ncmd.s.cmd, retval);
> - return -1;
> - }
> -
> - return retval;
> -}
> -
> -/** Send data packet to the device
> - * @param lio_dev - lio device pointer
> - * @param ndata - control structure with queueing, and buffer information
> - *
> - * @returns IQ_FAILED if it failed to add to the input queue. IQ_STOP if it the
> - * queue should be stopped, and LIO_IQ_SEND_OK if it sent okay.
> - */
> -static inline int
> -lio_send_data_pkt(struct lio_device *lio_dev, struct lio_data_pkt *ndata)
> -{
> - return lio_send_command(lio_dev, ndata->q_no, &ndata->cmd,
> - ndata->buf, ndata->datasize, ndata->reqtype);
> -}
> -
> -uint16_t
> -lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
> -{
> - struct lio_instr_queue *txq = tx_queue;
> - union lio_cmd_setup cmdsetup;
> - struct lio_device *lio_dev;
> - struct lio_iq_stats *stats;
> - struct lio_data_pkt ndata;
> - int i, processed = 0;
> - struct rte_mbuf *m;
> - uint32_t tag = 0;
> - int status = 0;
> - int iq_no;
> -
> - lio_dev = txq->lio_dev;
> - iq_no = txq->txpciq.s.q_no;
> - stats = &lio_dev->instr_queue[iq_no]->stats;
> -
> - if (!lio_dev->intf_open || !lio_dev->linfo.link.s.link_up) {
> - PMD_TX_LOG(lio_dev, ERR, "Transmit failed link_status : %d\n",
> - lio_dev->linfo.link.s.link_up);
> - goto xmit_failed;
> - }
> -
> - lio_dev_cleanup_iq(lio_dev, iq_no);
> -
> - for (i = 0; i < nb_pkts; i++) {
> - uint32_t pkt_len = 0;
> -
> - m = pkts[i];
> -
> - /* Prepare the attributes for the data to be passed to BASE. */
> - memset(&ndata, 0, sizeof(struct lio_data_pkt));
> -
> - ndata.buf = m;
> -
> - ndata.q_no = iq_no;
> - if (lio_iq_is_full(lio_dev, ndata.q_no)) {
> - stats->tx_iq_busy++;
> - if (lio_dev_cleanup_iq(lio_dev, iq_no)) {
> - PMD_TX_LOG(lio_dev, ERR,
> - "Transmit failed iq:%d full\n",
> - ndata.q_no);
> - break;
> - }
> - }
> -
> - cmdsetup.cmd_setup64 = 0;
> - cmdsetup.s.iq_no = iq_no;
> -
> - /* check checksum offload flags to form cmd */
> - if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
> - cmdsetup.s.ip_csum = 1;
> -
> - if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
> - cmdsetup.s.tnl_csum = 1;
> - else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
> - (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
> - cmdsetup.s.transport_csum = 1;
> -
> - if (m->nb_segs == 1) {
> - pkt_len = rte_pktmbuf_data_len(m);
> - cmdsetup.s.u.datasize = pkt_len;
> - lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
> - &cmdsetup, tag);
> - ndata.cmd.cmd3.dptr = rte_mbuf_data_iova(m);
> - ndata.reqtype = LIO_REQTYPE_NORESP_NET;
> - } else {
> - struct lio_buf_free_info *finfo;
> - struct lio_gather *g;
> - rte_iova_t phyaddr;
> - int i, frags;
> -
> - finfo = (struct lio_buf_free_info *)rte_malloc(NULL,
> - sizeof(*finfo), 0);
> - if (finfo == NULL) {
> - PMD_TX_LOG(lio_dev, ERR,
> - "free buffer alloc failed\n");
> - goto xmit_failed;
> - }
> -
> - rte_spinlock_lock(&lio_dev->glist_lock[iq_no]);
> - g = (struct lio_gather *)list_delete_first_node(
> - &lio_dev->glist_head[iq_no]);
> - rte_spinlock_unlock(&lio_dev->glist_lock[iq_no]);
> - if (g == NULL) {
> - PMD_TX_LOG(lio_dev, ERR,
> - "Transmit scatter gather: glist null!\n");
> - goto xmit_failed;
> - }
> -
> - cmdsetup.s.gather = 1;
> - cmdsetup.s.u.gatherptrs = m->nb_segs;
> - lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
> - &cmdsetup, tag);
> -
> - memset(g->sg, 0, g->sg_size);
> - g->sg[0].ptr[0] = rte_mbuf_data_iova(m);
> - lio_add_sg_size(&g->sg[0], m->data_len, 0);
> - pkt_len = m->data_len;
> - finfo->mbuf = m;
> -
> - /* First seg taken care above */
> - frags = m->nb_segs - 1;
> - i = 1;
> - m = m->next;
> - while (frags--) {
> - g->sg[(i >> 2)].ptr[(i & 3)] =
> - rte_mbuf_data_iova(m);
> - lio_add_sg_size(&g->sg[(i >> 2)],
> - m->data_len, (i & 3));
> - pkt_len += m->data_len;
> - i++;
> - m = m->next;
> - }
> -
> - phyaddr = rte_mem_virt2iova(g->sg);
> - if (phyaddr == RTE_BAD_IOVA) {
> - PMD_TX_LOG(lio_dev, ERR, "bad phys addr\n");
> - goto xmit_failed;
> - }
> -
> - ndata.cmd.cmd3.dptr = phyaddr;
> - ndata.reqtype = LIO_REQTYPE_NORESP_NET_SG;
> -
> - finfo->g = g;
> - finfo->lio_dev = lio_dev;
> - finfo->iq_no = (uint64_t)iq_no;
> - ndata.buf = finfo;
> - }
> -
> - ndata.datasize = pkt_len;
> -
> - status = lio_send_data_pkt(lio_dev, &ndata);
> -
> - if (unlikely(status == LIO_IQ_SEND_FAILED)) {
> - PMD_TX_LOG(lio_dev, ERR, "send failed\n");
> - break;
> - }
> -
> - if (unlikely(status == LIO_IQ_SEND_STOP)) {
> - PMD_TX_LOG(lio_dev, DEBUG, "iq full\n");
> - /* create space as iq is full */
> - lio_dev_cleanup_iq(lio_dev, iq_no);
> - }
> -
> - stats->tx_done++;
> - stats->tx_tot_bytes += pkt_len;
> - processed++;
> - }
> -
> -xmit_failed:
> - stats->tx_dropped += (nb_pkts - processed);
> -
> - return processed;
> -}
> -
> -void
> -lio_dev_clear_queues(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_instr_queue *txq;
> - struct lio_droq *rxq;
> - uint16_t i;
> -
> - for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> - txq = eth_dev->data->tx_queues[i];
> - if (txq != NULL) {
> - lio_dev_tx_queue_release(eth_dev, i);
> - eth_dev->data->tx_queues[i] = NULL;
> - }
> - }
> -
> - for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> - rxq = eth_dev->data->rx_queues[i];
> - if (rxq != NULL) {
> - lio_dev_rx_queue_release(eth_dev, i);
> - eth_dev->data->rx_queues[i] = NULL;
> - }
> - }
> -}
> diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
> deleted file mode 100644
> index d2a45104f0..0000000000
> --- a/drivers/net/liquidio/lio_rxtx.h
> +++ /dev/null
> @@ -1,740 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_RXTX_H_
> -#define _LIO_RXTX_H_
> -
> -#include <stdio.h>
> -#include <stdint.h>
> -
> -#include <rte_spinlock.h>
> -#include <rte_memory.h>
> -
> -#include "lio_struct.h"
> -
> -#ifndef ROUNDUP4
> -#define ROUNDUP4(val) (((val) + 3) & 0xfffffffc)
> -#endif
> -
> -#define LIO_STQUEUE_FIRST_ENTRY(ptr, type, elem) \
> - (type *)((char *)((ptr)->stqh_first) - offsetof(type, elem))
> -
> -#define lio_check_timeout(cur_time, chk_time) ((cur_time) > (chk_time))
> -
> -#define lio_uptime \
> - (size_t)(rte_get_timer_cycles() / rte_get_timer_hz())
> -
> -/** Descriptor format.
> - * The descriptor ring is made of descriptors which have 2 64-bit values:
> - * -# Physical (bus) address of the data buffer.
> - * -# Physical (bus) address of a lio_droq_info structure.
> - * The device DMA's incoming packets and its information at the address
> - * given by these descriptor fields.
> - */
> -struct lio_droq_desc {
> - /** The buffer pointer */
> - uint64_t buffer_ptr;
> -
> - /** The Info pointer */
> - uint64_t info_ptr;
> -};
> -
> -#define LIO_DROQ_DESC_SIZE (sizeof(struct lio_droq_desc))
> -
> -/** Information about packet DMA'ed by Octeon.
> - * The format of the information available at Info Pointer after Octeon
> - * has posted a packet. Not all descriptors have valid information. Only
> - * the Info field of the first descriptor for a packet has information
> - * about the packet.
> - */
> -struct lio_droq_info {
> - /** The Output Receive Header. */
> - union octeon_rh rh;
> -
> - /** The Length of the packet. */
> - uint64_t length;
> -};
> -
> -#define LIO_DROQ_INFO_SIZE (sizeof(struct lio_droq_info))
> -
> -/** Pointer to data buffer.
> - * Driver keeps a pointer to the data buffer that it made available to
> - * the Octeon device. Since the descriptor ring keeps physical (bus)
> - * addresses, this field is required for the driver to keep track of
> - * the virtual address pointers.
> - */
> -struct lio_recv_buffer {
> - /** Packet buffer, including meta data. */
> - void *buffer;
> -
> - /** Data in the packet buffer. */
> - uint8_t *data;
> -
> -};
> -
> -#define LIO_DROQ_RECVBUF_SIZE (sizeof(struct lio_recv_buffer))
> -
> -#define LIO_DROQ_SIZE (sizeof(struct lio_droq))
> -
> -#define LIO_IQ_SEND_OK 0
> -#define LIO_IQ_SEND_STOP 1
> -#define LIO_IQ_SEND_FAILED -1
> -
> -/* conditions */
> -#define LIO_REQTYPE_NONE 0
> -#define LIO_REQTYPE_NORESP_NET 1
> -#define LIO_REQTYPE_NORESP_NET_SG 2
> -#define LIO_REQTYPE_SOFT_COMMAND 3
> -
> -struct lio_request_list {
> - uint32_t reqtype;
> - void *buf;
> -};
> -
> -/*---------------------- INSTRUCTION FORMAT ----------------------------*/
> -
> -struct lio_instr3_64B {
> - /** Pointer where the input data is available. */
> - uint64_t dptr;
> -
> - /** Instruction Header. */
> - uint64_t ih3;
> -
> - /** Instruction Header. */
> - uint64_t pki_ih3;
> -
> - /** Input Request Header. */
> - uint64_t irh;
> -
> - /** opcode/subcode specific parameters */
> - uint64_t ossp[2];
> -
> - /** Return Data Parameters */
> - uint64_t rdp;
> -
> - /** Pointer where the response for a RAW mode packet will be written
> - * by Octeon.
> - */
> - uint64_t rptr;
> -
> -};
> -
> -union lio_instr_64B {
> - struct lio_instr3_64B cmd3;
> -};
> -
> -/** The size of each buffer in soft command buffer pool */
> -#define LIO_SOFT_COMMAND_BUFFER_SIZE 1536
> -
> -/** Maximum number of buffers to allocate into soft command buffer pool */
> -#define LIO_MAX_SOFT_COMMAND_BUFFERS 255
> -
> -struct lio_soft_command {
> - /** Soft command buffer info. */
> - struct lio_stailq_node node;
> - uint64_t dma_addr;
> - uint32_t size;
> -
> - /** Command and return status */
> - union lio_instr_64B cmd;
> -
> -#define LIO_COMPLETION_WORD_INIT 0xffffffffffffffffULL
> - uint64_t *status_word;
> -
> - /** Data buffer info */
> - void *virtdptr;
> - uint64_t dmadptr;
> - uint32_t datasize;
> -
> - /** Return buffer info */
> - void *virtrptr;
> - uint64_t dmarptr;
> - uint32_t rdatasize;
> -
> - /** Context buffer info */
> - void *ctxptr;
> - uint32_t ctxsize;
> -
> - /** Time out and callback */
> - size_t wait_time;
> - size_t timeout;
> - uint32_t iq_no;
> - void (*callback)(uint32_t, void *);
> - void *callback_arg;
> - struct rte_mbuf *mbuf;
> -};
> -
> -struct lio_iq_post_status {
> - int status;
> - int index;
> -};
> -
> -/* wqe
> - * --------------- 0
> - * | wqe word0-3 |
> - * --------------- 32
> - * | PCI IH |
> - * --------------- 40
> - * | RPTR |
> - * --------------- 48
> - * | PCI IRH |
> - * --------------- 56
> - * | OCTEON_CMD |
> - * --------------- 64
> - * | Addtl 8-BData |
> - * | |
> - * ---------------
> - */
> -
> -union octeon_cmd {
> - uint64_t cmd64;
> -
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t cmd : 5;
> -
> - uint64_t more : 6; /* How many udd words follow the command */
> -
> - uint64_t reserved : 29;
> -
> - uint64_t param1 : 16;
> -
> - uint64_t param2 : 8;
> -
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -
> - uint64_t param2 : 8;
> -
> - uint64_t param1 : 16;
> -
> - uint64_t reserved : 29;
> -
> - uint64_t more : 6;
> -
> - uint64_t cmd : 5;
> -
> -#endif
> - } s;
> -};
> -
> -#define OCTEON_CMD_SIZE (sizeof(union octeon_cmd))
> -
> -/* Maximum number of 8-byte words can be
> - * sent in a NIC control message.
> - */
> -#define LIO_MAX_NCTRL_UDD 32
> -
> -/* Structure of control information passed by driver to the BASE
> - * layer when sending control commands to Octeon device software.
> - */
> -struct lio_ctrl_pkt {
> - /** Command to be passed to the Octeon device software. */
> - union octeon_cmd ncmd;
> -
> - /** Send buffer */
> - void *data;
> - uint64_t dmadata;
> -
> - /** Response buffer */
> - void *rdata;
> - uint64_t dmardata;
> -
> - /** Additional data that may be needed by some commands. */
> - uint64_t udd[LIO_MAX_NCTRL_UDD];
> -
> - /** Input queue to use to send this command. */
> - uint64_t iq_no;
> -
> - /** Time to wait for Octeon software to respond to this control command.
> - * If wait_time is 0, BASE assumes no response is expected.
> - */
> - size_t wait_time;
> -
> - struct lio_dev_ctrl_cmd *ctrl_cmd;
> -};
> -
> -/** Structure of data information passed by driver to the BASE
> - * layer when forwarding data to Octeon device software.
> - */
> -struct lio_data_pkt {
> - /** Pointer to information maintained by NIC module for this packet. The
> - * BASE layer passes this as-is to the driver.
> - */
> - void *buf;
> -
> - /** Type of buffer passed in "buf" above. */
> - uint32_t reqtype;
> -
> - /** Total data bytes to be transferred in this command. */
> - uint32_t datasize;
> -
> - /** Command to be passed to the Octeon device software. */
> - union lio_instr_64B cmd;
> -
> - /** Input queue to use to send this command. */
> - uint32_t q_no;
> -};
> -
> -/** Structure passed by driver to BASE layer to prepare a command to send
> - * network data to Octeon.
> - */
> -union lio_cmd_setup {
> - struct {
> - uint32_t iq_no : 8;
> - uint32_t gather : 1;
> - uint32_t timestamp : 1;
> - uint32_t ip_csum : 1;
> - uint32_t transport_csum : 1;
> - uint32_t tnl_csum : 1;
> - uint32_t rsvd : 19;
> -
> - union {
> - uint32_t datasize;
> - uint32_t gatherptrs;
> - } u;
> - } s;
> -
> - uint64_t cmd_setup64;
> -};
> -
> -/* Instruction Header */
> -struct octeon_instr_ih3 {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -
> - /** Reserved3 */
> - uint64_t reserved3 : 1;
> -
> - /** Gather indicator 1=gather*/
> - uint64_t gather : 1;
> -
> - /** Data length OR no. of entries in gather list */
> - uint64_t dlengsz : 14;
> -
> - /** Front Data size */
> - uint64_t fsz : 6;
> -
> - /** Reserved2 */
> - uint64_t reserved2 : 4;
> -
> - /** PKI port kind - PKIND */
> - uint64_t pkind : 6;
> -
> - /** Reserved1 */
> - uint64_t reserved1 : 32;
> -
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - /** Reserved1 */
> - uint64_t reserved1 : 32;
> -
> - /** PKI port kind - PKIND */
> - uint64_t pkind : 6;
> -
> - /** Reserved2 */
> - uint64_t reserved2 : 4;
> -
> - /** Front Data size */
> - uint64_t fsz : 6;
> -
> - /** Data length OR no. of entries in gather list */
> - uint64_t dlengsz : 14;
> -
> - /** Gather indicator 1=gather*/
> - uint64_t gather : 1;
> -
> - /** Reserved3 */
> - uint64_t reserved3 : 1;
> -
> -#endif
> -};
> -
> -/* PKI Instruction Header(PKI IH) */
> -struct octeon_instr_pki_ih3 {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -
> - /** Wider bit */
> - uint64_t w : 1;
> -
> - /** Raw mode indicator 1 = RAW */
> - uint64_t raw : 1;
> -
> - /** Use Tag */
> - uint64_t utag : 1;
> -
> - /** Use QPG */
> - uint64_t uqpg : 1;
> -
> - /** Reserved2 */
> - uint64_t reserved2 : 1;
> -
> - /** Parse Mode */
> - uint64_t pm : 3;
> -
> - /** Skip Length */
> - uint64_t sl : 8;
> -
> - /** Use Tag Type */
> - uint64_t utt : 1;
> -
> - /** Tag type */
> - uint64_t tagtype : 2;
> -
> - /** Reserved1 */
> - uint64_t reserved1 : 2;
> -
> - /** QPG Value */
> - uint64_t qpg : 11;
> -
> - /** Tag Value */
> - uint64_t tag : 32;
> -
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -
> - /** Tag Value */
> - uint64_t tag : 32;
> -
> - /** QPG Value */
> - uint64_t qpg : 11;
> -
> - /** Reserved1 */
> - uint64_t reserved1 : 2;
> -
> - /** Tag type */
> - uint64_t tagtype : 2;
> -
> - /** Use Tag Type */
> - uint64_t utt : 1;
> -
> - /** Skip Length */
> - uint64_t sl : 8;
> -
> - /** Parse Mode */
> - uint64_t pm : 3;
> -
> - /** Reserved2 */
> - uint64_t reserved2 : 1;
> -
> - /** Use QPG */
> - uint64_t uqpg : 1;
> -
> - /** Use Tag */
> - uint64_t utag : 1;
> -
> - /** Raw mode indicator 1 = RAW */
> - uint64_t raw : 1;
> -
> - /** Wider bit */
> - uint64_t w : 1;
> -#endif
> -};
> -
> -/** Input Request Header */
> -struct octeon_instr_irh {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t opcode : 4;
> - uint64_t rflag : 1;
> - uint64_t subcode : 7;
> - uint64_t vlan : 12;
> - uint64_t priority : 3;
> - uint64_t reserved : 5;
> - uint64_t ossp : 32; /* opcode/subcode specific parameters */
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - uint64_t ossp : 32; /* opcode/subcode specific parameters */
> - uint64_t reserved : 5;
> - uint64_t priority : 3;
> - uint64_t vlan : 12;
> - uint64_t subcode : 7;
> - uint64_t rflag : 1;
> - uint64_t opcode : 4;
> -#endif
> -};
> -
> -/* pkiih3 + irh + ossp[0] + ossp[1] + rdp + rptr = 40 bytes */
> -#define OCTEON_SOFT_CMD_RESP_IH3 (40 + 8)
> -/* pki_h3 + irh + ossp[0] + ossp[1] = 32 bytes */
> -#define OCTEON_PCI_CMD_O3 (24 + 8)
> -
> -/** Return Data Parameters */
> -struct octeon_instr_rdp {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t reserved : 49;
> - uint64_t pcie_port : 3;
> - uint64_t rlen : 12;
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - uint64_t rlen : 12;
> - uint64_t pcie_port : 3;
> - uint64_t reserved : 49;
> -#endif
> -};
> -
> -union octeon_packet_params {
> - uint32_t pkt_params32;
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint32_t reserved : 24;
> - uint32_t ip_csum : 1; /* Perform IP header checksum(s) */
> - /* Perform Outer transport header checksum */
> - uint32_t transport_csum : 1;
> - /* Find tunnel, and perform transport csum. */
> - uint32_t tnl_csum : 1;
> - uint32_t tsflag : 1; /* Timestamp this packet */
> - uint32_t ipsec_ops : 4; /* IPsec operation */
> -#else
> - uint32_t ipsec_ops : 4;
> - uint32_t tsflag : 1;
> - uint32_t tnl_csum : 1;
> - uint32_t transport_csum : 1;
> - uint32_t ip_csum : 1;
> - uint32_t reserved : 7;
> -#endif
> - } s;
> -};
> -
> -/** Utility function to prepare a 64B NIC instruction based on a setup command
> - * @param cmd - pointer to instruction to be filled in.
> - * @param setup - pointer to the setup structure
> - * @param q_no - which queue for back pressure
> - *
> - * Assumes the cmd instruction is pre-allocated, but no fields are filled in.
> - */
> -static inline void
> -lio_prepare_pci_cmd(struct lio_device *lio_dev,
> - union lio_instr_64B *cmd,
> - union lio_cmd_setup *setup,
> - uint32_t tag)
> -{
> - union octeon_packet_params packet_params;
> - struct octeon_instr_pki_ih3 *pki_ih3;
> - struct octeon_instr_irh *irh;
> - struct octeon_instr_ih3 *ih3;
> - int port;
> -
> - memset(cmd, 0, sizeof(union lio_instr_64B));
> -
> - ih3 = (struct octeon_instr_ih3 *)&cmd->cmd3.ih3;
> - pki_ih3 = (struct octeon_instr_pki_ih3 *)&cmd->cmd3.pki_ih3;
> -
> - /* assume that rflag is cleared so therefore front data will only have
> - * irh and ossp[1] and ossp[2] for a total of 24 bytes
> - */
> - ih3->pkind = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.pkind;
> - /* PKI IH */
> - ih3->fsz = OCTEON_PCI_CMD_O3;
> -
> - if (!setup->s.gather) {
> - ih3->dlengsz = setup->s.u.datasize;
> - } else {
> - ih3->gather = 1;
> - ih3->dlengsz = setup->s.u.gatherptrs;
> - }
> -
> - pki_ih3->w = 1;
> - pki_ih3->raw = 0;
> - pki_ih3->utag = 0;
> - pki_ih3->utt = 1;
> - pki_ih3->uqpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.use_qpg;
> -
> - port = (int)lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.port;
> -
> - if (tag)
> - pki_ih3->tag = tag;
> - else
> - pki_ih3->tag = LIO_DATA(port);
> -
> - pki_ih3->tagtype = OCTEON_ORDERED_TAG;
> - pki_ih3->qpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.qpg;
> - pki_ih3->pm = 0x0; /* parse from L2 */
> - pki_ih3->sl = 32; /* sl will be sizeof(pki_ih3) + irh + ossp0 + ossp1*/
> -
> - irh = (struct octeon_instr_irh *)&cmd->cmd3.irh;
> -
> - irh->opcode = LIO_OPCODE;
> - irh->subcode = LIO_OPCODE_NW_DATA;
> -
> - packet_params.pkt_params32 = 0;
> - packet_params.s.ip_csum = setup->s.ip_csum;
> - packet_params.s.transport_csum = setup->s.transport_csum;
> - packet_params.s.tnl_csum = setup->s.tnl_csum;
> - packet_params.s.tsflag = setup->s.timestamp;
> -
> - irh->ossp = packet_params.pkt_params32;
> -}
> -
> -int lio_setup_sc_buffer_pool(struct lio_device *lio_dev);
> -void lio_free_sc_buffer_pool(struct lio_device *lio_dev);
> -
> -struct lio_soft_command *
> -lio_alloc_soft_command(struct lio_device *lio_dev,
> - uint32_t datasize, uint32_t rdatasize,
> - uint32_t ctxsize);
> -void lio_prepare_soft_command(struct lio_device *lio_dev,
> - struct lio_soft_command *sc,
> - uint8_t opcode, uint8_t subcode,
> - uint32_t irh_ossp, uint64_t ossp0,
> - uint64_t ossp1);
> -int lio_send_soft_command(struct lio_device *lio_dev,
> - struct lio_soft_command *sc);
> -void lio_free_soft_command(struct lio_soft_command *sc);
> -
> -/** Send control packet to the device
> - * @param lio_dev - lio device pointer
> - * @param nctrl - control structure with command, timeout, and callback info
> - *
> - * @returns IQ_FAILED if it failed to add to the input queue. IQ_STOP if it the
> - * queue should be stopped, and LIO_IQ_SEND_OK if it sent okay.
> - */
> -int lio_send_ctrl_pkt(struct lio_device *lio_dev,
> - struct lio_ctrl_pkt *ctrl_pkt);
> -
> -/** Maximum ordered requests to process in every invocation of
> - * lio_process_ordered_list(). The function will continue to process requests
> - * as long as it can find one that has finished processing. If it keeps
> - * finding requests that have completed, the function can run for ever. The
> - * value defined here sets an upper limit on the number of requests it can
> - * process before it returns control to the poll thread.
> - */
> -#define LIO_MAX_ORD_REQS_TO_PROCESS 4096
> -
> -/** Error codes used in Octeon Host-Core communication.
> - *
> - * 31 16 15 0
> - * ----------------------------
> - * | | |
> - * ----------------------------
> - * Error codes are 32-bit wide. The upper 16-bits, called Major Error Number,
> - * are reserved to identify the group to which the error code belongs. The
> - * lower 16-bits, called Minor Error Number, carry the actual code.
> - *
> - * So error codes are (MAJOR NUMBER << 16)| MINOR_NUMBER.
> - */
> -/** Status for a request.
> - * If the request is successfully queued, the driver will return
> - * a LIO_REQUEST_PENDING status. LIO_REQUEST_TIMEOUT is only returned by
> - * the driver if the response for request failed to arrive before a
> - * time-out period or if the request processing * got interrupted due to
> - * a signal respectively.
> - */
> -enum {
> - /** A value of 0x00000000 indicates no error i.e. success */
> - LIO_REQUEST_DONE = 0x00000000,
> - /** (Major number: 0x0000; Minor Number: 0x0001) */
> - LIO_REQUEST_PENDING = 0x00000001,
> - LIO_REQUEST_TIMEOUT = 0x00000003,
> -
> -};
> -
> -/*------ Error codes used by firmware (bits 15..0 set by firmware */
> -#define LIO_FIRMWARE_MAJOR_ERROR_CODE 0x0001
> -#define LIO_FIRMWARE_STATUS_CODE(status) \
> - ((LIO_FIRMWARE_MAJOR_ERROR_CODE << 16) | (status))
> -
> -/** Initialize the response lists. The number of response lists to create is
> - * given by count.
> - * @param lio_dev - the lio device structure.
> - */
> -void lio_setup_response_list(struct lio_device *lio_dev);
> -
> -/** Check the status of first entry in the ordered list. If the instruction at
> - * that entry finished processing or has timed-out, the entry is cleaned.
> - * @param lio_dev - the lio device structure.
> - * @return 1 if the ordered list is empty, 0 otherwise.
> - */
> -int lio_process_ordered_list(struct lio_device *lio_dev);
> -
> -#define LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, field, count) \
> - (((lio_dev)->instr_queue[iq_no]->stats.field) += count)
> -
> -static inline void
> -lio_swap_8B_data(uint64_t *data, uint32_t blocks)
> -{
> - while (blocks) {
> - *data = rte_cpu_to_be_64(*data);
> - blocks--;
> - data++;
> - }
> -}
> -
> -static inline uint64_t
> -lio_map_ring(void *buf)
> -{
> - rte_iova_t dma_addr;
> -
> - dma_addr = rte_mbuf_data_iova_default(((struct rte_mbuf *)buf));
> -
> - return (uint64_t)dma_addr;
> -}
> -
> -static inline uint64_t
> -lio_map_ring_info(struct lio_droq *droq, uint32_t i)
> -{
> - rte_iova_t dma_addr;
> -
> - dma_addr = droq->info_list_dma + (i * LIO_DROQ_INFO_SIZE);
> -
> - return (uint64_t)dma_addr;
> -}
> -
> -static inline int
> -lio_opcode_slow_path(union octeon_rh *rh)
> -{
> - uint16_t subcode1, subcode2;
> -
> - subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
> - subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
> -
> - return subcode2 != subcode1;
> -}
> -
> -static inline void
> -lio_add_sg_size(struct lio_sg_entry *sg_entry,
> - uint16_t size, uint32_t pos)
> -{
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - sg_entry->u.size[pos] = size;
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - sg_entry->u.size[3 - pos] = size;
> -#endif
> -}
> -
> -/* Macro to increment index.
> - * Index is incremented by count; if the sum exceeds
> - * max, index is wrapped-around to the start.
> - */
> -static inline uint32_t
> -lio_incr_index(uint32_t index, uint32_t count, uint32_t max)
> -{
> - if ((index + count) >= max)
> - index = index + count - max;
> - else
> - index += count;
> -
> - return index;
> -}
> -
> -int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
> - int desc_size, struct rte_mempool *mpool,
> - unsigned int socket_id);
> -uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> - uint16_t budget);
> -void lio_delete_droq_queue(struct lio_device *lio_dev, int oq_no);
> -
> -void lio_delete_sglist(struct lio_instr_queue *txq);
> -int lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
> - int fw_mapped_iq, int num_descs, unsigned int socket_id);
> -uint16_t lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts,
> - uint16_t nb_pkts);
> -int lio_wait_for_instr_fetch(struct lio_device *lio_dev);
> -int lio_setup_iq(struct lio_device *lio_dev, int q_index,
> - union octeon_txpciq iq_no, uint32_t num_descs, void *app_ctx,
> - unsigned int socket_id);
> -int lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq);
> -void lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no);
> -/** Setup instruction queue zero for the device
> - * @param lio_dev which lio device to setup
> - *
> - * @return 0 if success. -1 if fails
> - */
> -int lio_setup_instr_queue0(struct lio_device *lio_dev);
> -void lio_free_instr_queue0(struct lio_device *lio_dev);
> -void lio_dev_clear_queues(struct rte_eth_dev *eth_dev);
> -#endif /* _LIO_RXTX_H_ */
> diff --git a/drivers/net/liquidio/lio_struct.h b/drivers/net/liquidio/lio_struct.h
> deleted file mode 100644
> index 10270c560e..0000000000
> --- a/drivers/net/liquidio/lio_struct.h
> +++ /dev/null
> @@ -1,661 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_STRUCT_H_
> -#define _LIO_STRUCT_H_
> -
> -#include <stdio.h>
> -#include <stdint.h>
> -#include <sys/queue.h>
> -
> -#include <rte_spinlock.h>
> -#include <rte_atomic.h>
> -
> -#include "lio_hw_defs.h"
> -
> -struct lio_stailq_node {
> - STAILQ_ENTRY(lio_stailq_node) entries;
> -};
> -
> -STAILQ_HEAD(lio_stailq_head, lio_stailq_node);
> -
> -struct lio_version {
> - uint16_t major;
> - uint16_t minor;
> - uint16_t micro;
> - uint16_t reserved;
> -};
> -
> -/** Input Queue statistics. Each input queue has four stats fields. */
> -struct lio_iq_stats {
> - uint64_t instr_posted; /**< Instructions posted to this queue. */
> - uint64_t instr_processed; /**< Instructions processed in this queue. */
> - uint64_t instr_dropped; /**< Instructions that could not be processed */
> - uint64_t bytes_sent; /**< Bytes sent through this queue. */
> - uint64_t tx_done; /**< Num of packets sent to network. */
> - uint64_t tx_iq_busy; /**< Num of times this iq was found to be full. */
> - uint64_t tx_dropped; /**< Num of pkts dropped due to xmitpath errors. */
> - uint64_t tx_tot_bytes; /**< Total count of bytes sent to network. */
> -};
> -
> -/** Output Queue statistics. Each output queue has four stats fields. */
> -struct lio_droq_stats {
> - /** Number of packets received in this queue. */
> - uint64_t pkts_received;
> -
> - /** Bytes received by this queue. */
> - uint64_t bytes_received;
> -
> - /** Packets dropped due to no memory available. */
> - uint64_t dropped_nomem;
> -
> - /** Packets dropped due to large number of pkts to process. */
> - uint64_t dropped_toomany;
> -
> - /** Number of packets sent to stack from this queue. */
> - uint64_t rx_pkts_received;
> -
> - /** Number of Bytes sent to stack from this queue. */
> - uint64_t rx_bytes_received;
> -
> - /** Num of Packets dropped due to receive path failures. */
> - uint64_t rx_dropped;
> -
> - /** Num of vxlan packets received; */
> - uint64_t rx_vxlan;
> -
> - /** Num of failures of rte_pktmbuf_alloc() */
> - uint64_t rx_alloc_failure;
> -
> -};
> -
> -/** The Descriptor Ring Output Queue structure.
> - * This structure has all the information required to implement a
> - * DROQ.
> - */
> -struct lio_droq {
> - /** A spinlock to protect access to this ring. */
> - rte_spinlock_t lock;
> -
> - uint32_t q_no;
> -
> - uint32_t pkt_count;
> -
> - struct lio_device *lio_dev;
> -
> - /** The 8B aligned descriptor ring starts at this address. */
> - struct lio_droq_desc *desc_ring;
> -
> - /** Index in the ring where the driver should read the next packet */
> - uint32_t read_idx;
> -
> - /** Index in the ring where Octeon will write the next packet */
> - uint32_t write_idx;
> -
> - /** Index in the ring where the driver will refill the descriptor's
> - * buffer
> - */
> - uint32_t refill_idx;
> -
> - /** Packets pending to be processed */
> - rte_atomic64_t pkts_pending;
> -
> - /** Number of descriptors in this ring. */
> - uint32_t nb_desc;
> -
> - /** The number of descriptors pending refill. */
> - uint32_t refill_count;
> -
> - uint32_t refill_threshold;
> -
> - /** The 8B aligned info ptrs begin from this address. */
> - struct lio_droq_info *info_list;
> -
> - /** The receive buffer list. This list has the virtual addresses of the
> - * buffers.
> - */
> - struct lio_recv_buffer *recv_buf_list;
> -
> - /** The size of each buffer pointed by the buffer pointer. */
> - uint32_t buffer_size;
> -
> - /** Pointer to the mapped packet credit register.
> - * Host writes number of info/buffer ptrs available to this register
> - */
> - void *pkts_credit_reg;
> -
> - /** Pointer to the mapped packet sent register.
> - * Octeon writes the number of packets DMA'ed to host memory
> - * in this register.
> - */
> - void *pkts_sent_reg;
> -
> - /** Statistics for this DROQ. */
> - struct lio_droq_stats stats;
> -
> - /** DMA mapped address of the DROQ descriptor ring. */
> - size_t desc_ring_dma;
> -
> - /** Info ptr list are allocated at this virtual address. */
> - size_t info_base_addr;
> -
> - /** DMA mapped address of the info list */
> - size_t info_list_dma;
> -
> - /** Allocated size of info list. */
> - uint32_t info_alloc_size;
> -
> - /** Memory zone **/
> - const struct rte_memzone *desc_ring_mz;
> - const struct rte_memzone *info_mz;
> - struct rte_mempool *mpool;
> -};
> -
> -/** Receive Header */
> -union octeon_rh {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t rh64;
> - struct {
> - uint64_t opcode : 4;
> - uint64_t subcode : 8;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t reserved : 17;
> - uint64_t ossp : 32; /** opcode/subcode specific parameters */
> - } r;
> - struct {
> - uint64_t opcode : 4;
> - uint64_t subcode : 8;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t extra : 28;
> - uint64_t vlan : 12;
> - uint64_t priority : 3;
> - uint64_t csum_verified : 3; /** checksum verified. */
> - uint64_t has_hwtstamp : 1; /** Has hardware timestamp.1 = yes.*/
> - uint64_t encap_on : 1;
> - uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
> - } r_dh;
> - struct {
> - uint64_t opcode : 4;
> - uint64_t subcode : 8;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t reserved : 8;
> - uint64_t extra : 25;
> - uint64_t gmxport : 16;
> - } r_nic_info;
> -#else
> - uint64_t rh64;
> - struct {
> - uint64_t ossp : 32; /** opcode/subcode specific parameters */
> - uint64_t reserved : 17;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t subcode : 8;
> - uint64_t opcode : 4;
> - } r;
> - struct {
> - uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
> - uint64_t encap_on : 1;
> - uint64_t has_hwtstamp : 1; /** 1 = has hwtstamp */
> - uint64_t csum_verified : 3; /** checksum verified. */
> - uint64_t priority : 3;
> - uint64_t vlan : 12;
> - uint64_t extra : 28;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t subcode : 8;
> - uint64_t opcode : 4;
> - } r_dh;
> - struct {
> - uint64_t gmxport : 16;
> - uint64_t extra : 25;
> - uint64_t reserved : 8;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t subcode : 8;
> - uint64_t opcode : 4;
> - } r_nic_info;
> -#endif
> -};
> -
> -#define OCTEON_RH_SIZE (sizeof(union octeon_rh))
> -
> -/** The txpciq info passed to host from the firmware */
> -union octeon_txpciq {
> - uint64_t txpciq64;
> -
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t q_no : 8;
> - uint64_t port : 8;
> - uint64_t pkind : 6;
> - uint64_t use_qpg : 1;
> - uint64_t qpg : 11;
> - uint64_t aura_num : 10;
> - uint64_t reserved : 20;
> -#else
> - uint64_t reserved : 20;
> - uint64_t aura_num : 10;
> - uint64_t qpg : 11;
> - uint64_t use_qpg : 1;
> - uint64_t pkind : 6;
> - uint64_t port : 8;
> - uint64_t q_no : 8;
> -#endif
> - } s;
> -};
> -
> -/** The instruction (input) queue.
> - * The input queue is used to post raw (instruction) mode data or packet
> - * data to Octeon device from the host. Each input queue for
> - * a LIO device has one such structure to represent it.
> - */
> -struct lio_instr_queue {
> - /** A spinlock to protect access to the input ring. */
> - rte_spinlock_t lock;
> -
> - rte_spinlock_t post_lock;
> -
> - struct lio_device *lio_dev;
> -
> - uint32_t pkt_in_done;
> -
> - rte_atomic64_t iq_flush_running;
> -
> - /** Flag that indicates if the queue uses 64 byte commands. */
> - uint32_t iqcmd_64B:1;
> -
> - /** Queue info. */
> - union octeon_txpciq txpciq;
> -
> - uint32_t rsvd:17;
> -
> - uint32_t status:8;
> -
> - /** Number of descriptors in this ring. */
> - uint32_t nb_desc;
> -
> - /** Index in input ring where the driver should write the next packet */
> - uint32_t host_write_index;
> -
> - /** Index in input ring where Octeon is expected to read the next
> - * packet.
> - */
> - uint32_t lio_read_index;
> -
> - /** This index aids in finding the window in the queue where Octeon
> - * has read the commands.
> - */
> - uint32_t flush_index;
> -
> - /** This field keeps track of the instructions pending in this queue. */
> - rte_atomic64_t instr_pending;
> -
> - /** Pointer to the Virtual Base addr of the input ring. */
> - uint8_t *base_addr;
> -
> - struct lio_request_list *request_list;
> -
> - /** Octeon doorbell register for the ring. */
> - void *doorbell_reg;
> -
> - /** Octeon instruction count register for this ring. */
> - void *inst_cnt_reg;
> -
> - /** Number of instructions pending to be posted to Octeon. */
> - uint32_t fill_cnt;
> -
> - /** Statistics for this input queue. */
> - struct lio_iq_stats stats;
> -
> - /** DMA mapped base address of the input descriptor ring. */
> - uint64_t base_addr_dma;
> -
> - /** Application context */
> - void *app_ctx;
> -
> - /* network stack queue index */
> - int q_index;
> -
> - /* Memory zone */
> - const struct rte_memzone *iq_mz;
> -};
> -
> -/** This structure is used by driver to store information required
> - * to free the mbuff when the packet has been fetched by Octeon.
> - * Bytes offset below assume worst-case of a 64-bit system.
> - */
> -struct lio_buf_free_info {
> - /** Bytes 1-8. Pointer to network device private structure. */
> - struct lio_device *lio_dev;
> -
> - /** Bytes 9-16. Pointer to mbuff. */
> - struct rte_mbuf *mbuf;
> -
> - /** Bytes 17-24. Pointer to gather list. */
> - struct lio_gather *g;
> -
> - /** Bytes 25-32. Physical address of mbuf->data or gather list. */
> - uint64_t dptr;
> -
> - /** Bytes 33-47. Piggybacked soft command, if any */
> - struct lio_soft_command *sc;
> -
> - /** Bytes 48-63. iq no */
> - uint64_t iq_no;
> -};
> -
> -/* The Scatter-Gather List Entry. The scatter or gather component used with
> - * input instruction has this format.
> - */
> -struct lio_sg_entry {
> - /** The first 64 bit gives the size of data in each dptr. */
> - union {
> - uint16_t size[4];
> - uint64_t size64;
> - } u;
> -
> - /** The 4 dptr pointers for this entry. */
> - uint64_t ptr[4];
> -};
> -
> -#define LIO_SG_ENTRY_SIZE (sizeof(struct lio_sg_entry))
> -
> -/** Structure of a node in list of gather components maintained by
> - * driver for each network device.
> - */
> -struct lio_gather {
> - /** List manipulation. Next and prev pointers. */
> - struct lio_stailq_node list;
> -
> - /** Size of the gather component at sg in bytes. */
> - int sg_size;
> -
> - /** Number of bytes that sg was adjusted to make it 8B-aligned. */
> - int adjust;
> -
> - /** Gather component that can accommodate max sized fragment list
> - * received from the IP layer.
> - */
> - struct lio_sg_entry *sg;
> -};
> -
> -struct lio_rss_ctx {
> - uint16_t hash_key_size;
> - uint8_t hash_key[LIO_RSS_MAX_KEY_SZ];
> - /* Ideally a factor of number of queues */
> - uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
> - uint8_t itable_size;
> - uint8_t ip;
> - uint8_t tcp_hash;
> - uint8_t ipv6;
> - uint8_t ipv6_tcp_hash;
> - uint8_t ipv6_ex;
> - uint8_t ipv6_tcp_ex_hash;
> - uint8_t hash_disable;
> -};
> -
> -struct lio_io_enable {
> - uint64_t iq;
> - uint64_t oq;
> - uint64_t iq64B;
> -};
> -
> -struct lio_fn_list {
> - void (*setup_iq_regs)(struct lio_device *, uint32_t);
> - void (*setup_oq_regs)(struct lio_device *, uint32_t);
> -
> - int (*setup_mbox)(struct lio_device *);
> - void (*free_mbox)(struct lio_device *);
> -
> - int (*setup_device_regs)(struct lio_device *);
> - int (*enable_io_queues)(struct lio_device *);
> - void (*disable_io_queues)(struct lio_device *);
> -};
> -
> -struct lio_pf_vf_hs_word {
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - /** PKIND value assigned for the DPI interface */
> - uint64_t pkind : 8;
> -
> - /** OCTEON core clock multiplier */
> - uint64_t core_tics_per_us : 16;
> -
> - /** OCTEON coprocessor clock multiplier */
> - uint64_t coproc_tics_per_us : 16;
> -
> - /** app that currently running on OCTEON */
> - uint64_t app_mode : 8;
> -
> - /** RESERVED */
> - uint64_t reserved : 16;
> -
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -
> - /** RESERVED */
> - uint64_t reserved : 16;
> -
> - /** app that currently running on OCTEON */
> - uint64_t app_mode : 8;
> -
> - /** OCTEON coprocessor clock multiplier */
> - uint64_t coproc_tics_per_us : 16;
> -
> - /** OCTEON core clock multiplier */
> - uint64_t core_tics_per_us : 16;
> -
> - /** PKIND value assigned for the DPI interface */
> - uint64_t pkind : 8;
> -#endif
> -};
> -
> -struct lio_sriov_info {
> - /** Number of rings assigned to VF */
> - uint32_t rings_per_vf;
> -
> - /** Number of VF devices enabled */
> - uint32_t num_vfs;
> -};
> -
> -/* Head of a response list */
> -struct lio_response_list {
> - /** List structure to add delete pending entries to */
> - struct lio_stailq_head head;
> -
> - /** A lock for this response list */
> - rte_spinlock_t lock;
> -
> - rte_atomic64_t pending_req_count;
> -};
> -
> -/* Structure to define the configuration attributes for each Input queue. */
> -struct lio_iq_config {
> - /* Max number of IQs available */
> - uint8_t max_iqs;
> -
> - /** Pending list size (usually set to the sum of the size of all Input
> - * queues)
> - */
> - uint32_t pending_list_size;
> -
> - /** Command size - 32 or 64 bytes */
> - uint32_t instr_type;
> -};
> -
> -/* Structure to define the configuration attributes for each Output queue. */
> -struct lio_oq_config {
> - /* Max number of OQs available */
> - uint8_t max_oqs;
> -
> - /** If set, the Output queue uses info-pointer mode. (Default: 1 ) */
> - uint32_t info_ptr;
> -
> - /** The number of buffers that were consumed during packet processing by
> - * the driver on this Output queue before the driver attempts to
> - * replenish the descriptor ring with new buffers.
> - */
> - uint32_t refill_threshold;
> -};
> -
> -/* Structure to define the configuration. */
> -struct lio_config {
> - uint16_t card_type;
> - const char *card_name;
> -
> - /** Input Queue attributes. */
> - struct lio_iq_config iq;
> -
> - /** Output Queue attributes. */
> - struct lio_oq_config oq;
> -
> - int num_nic_ports;
> -
> - int num_def_tx_descs;
> -
> - /* Num of desc for rx rings */
> - int num_def_rx_descs;
> -
> - int def_rx_buf_size;
> -};
> -
> -/** Status of a RGMII Link on Octeon as seen by core driver. */
> -union octeon_link_status {
> - uint64_t link_status64;
> -
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t duplex : 8;
> - uint64_t mtu : 16;
> - uint64_t speed : 16;
> - uint64_t link_up : 1;
> - uint64_t autoneg : 1;
> - uint64_t if_mode : 5;
> - uint64_t pause : 1;
> - uint64_t flashing : 1;
> - uint64_t reserved : 15;
> -#else
> - uint64_t reserved : 15;
> - uint64_t flashing : 1;
> - uint64_t pause : 1;
> - uint64_t if_mode : 5;
> - uint64_t autoneg : 1;
> - uint64_t link_up : 1;
> - uint64_t speed : 16;
> - uint64_t mtu : 16;
> - uint64_t duplex : 8;
> -#endif
> - } s;
> -};
> -
> -/** The rxpciq info passed to host from the firmware */
> -union octeon_rxpciq {
> - uint64_t rxpciq64;
> -
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t q_no : 8;
> - uint64_t reserved : 56;
> -#else
> - uint64_t reserved : 56;
> - uint64_t q_no : 8;
> -#endif
> - } s;
> -};
> -
> -/** Information for a OCTEON ethernet interface shared between core & host. */
> -struct octeon_link_info {
> - union octeon_link_status link;
> - uint64_t hw_addr;
> -
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t gmxport : 16;
> - uint64_t macaddr_is_admin_assigned : 1;
> - uint64_t vlan_is_admin_assigned : 1;
> - uint64_t rsvd : 30;
> - uint64_t num_txpciq : 8;
> - uint64_t num_rxpciq : 8;
> -#else
> - uint64_t num_rxpciq : 8;
> - uint64_t num_txpciq : 8;
> - uint64_t rsvd : 30;
> - uint64_t vlan_is_admin_assigned : 1;
> - uint64_t macaddr_is_admin_assigned : 1;
> - uint64_t gmxport : 16;
> -#endif
> -
> - union octeon_txpciq txpciq[LIO_MAX_IOQS_PER_IF];
> - union octeon_rxpciq rxpciq[LIO_MAX_IOQS_PER_IF];
> -};
> -
> -/* ----------------------- THE LIO DEVICE --------------------------- */
> -/** The lio device.
> - * Each lio device has this structure to represent all its
> - * components.
> - */
> -struct lio_device {
> - /** PCI device pointer */
> - struct rte_pci_device *pci_dev;
> -
> - /** Octeon Chip type */
> - uint16_t chip_id;
> - uint16_t pf_num;
> - uint16_t vf_num;
> -
> - /** This device's PCIe port used for traffic. */
> - uint16_t pcie_port;
> -
> - /** The state of this device */
> - rte_atomic64_t status;
> -
> - uint8_t intf_open;
> -
> - struct octeon_link_info linfo;
> -
> - uint8_t *hw_addr;
> -
> - struct lio_fn_list fn_list;
> -
> - uint32_t num_iqs;
> -
> - /** Guards each glist */
> - rte_spinlock_t *glist_lock;
> - /** Array of gather component linked lists */
> - struct lio_stailq_head *glist_head;
> -
> - /* The pool containing pre allocated buffers used for soft commands */
> - struct rte_mempool *sc_buf_pool;
> -
> - /** The input instruction queues */
> - struct lio_instr_queue *instr_queue[LIO_MAX_POSSIBLE_INSTR_QUEUES];
> -
> - /** The singly-linked tail queues of instruction response */
> - struct lio_response_list response_list;
> -
> - uint32_t num_oqs;
> -
> - /** The DROQ output queues */
> - struct lio_droq *droq[LIO_MAX_POSSIBLE_OUTPUT_QUEUES];
> -
> - struct lio_io_enable io_qmask;
> -
> - struct lio_sriov_info sriov_info;
> -
> - struct lio_pf_vf_hs_word pfvf_hsword;
> -
> - /** Mail Box details of each lio queue. */
> - struct lio_mbox **mbox;
> -
> - char dev_string[LIO_DEVICE_NAME_LEN]; /* Device print string */
> -
> - const struct lio_config *default_config;
> -
> - struct rte_eth_dev *eth_dev;
> -
> - uint64_t ifflags;
> - uint8_t max_rx_queues;
> - uint8_t max_tx_queues;
> - uint8_t nb_rx_queues;
> - uint8_t nb_tx_queues;
> - uint8_t port_configured;
> - struct lio_rss_ctx rss_state;
> - uint16_t port_id;
> - char firmware_version[LIO_FW_VERSION_LENGTH];
> -};
> -#endif /* _LIO_STRUCT_H_ */
> diff --git a/drivers/net/liquidio/meson.build b/drivers/net/liquidio/meson.build
> deleted file mode 100644
> index ebadbf3dea..0000000000
> --- a/drivers/net/liquidio/meson.build
> +++ /dev/null
> @@ -1,16 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2018 Intel Corporation
> -
> -if is_windows
> - build = false
> - reason = 'not supported on Windows'
> - subdir_done()
> -endif
> -
> -sources = files(
> - 'base/lio_23xx_vf.c',
> - 'base/lio_mbox.c',
> - 'lio_ethdev.c',
> - 'lio_rxtx.c',
> -)
> -includes += include_directories('base')
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index b1df17ce8c..f68bbc27a7 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -36,7 +36,6 @@ drivers = [
> 'ipn3ke',
> 'ixgbe',
> 'kni',
> - 'liquidio',
> 'mana',
> 'memif',
> 'mlx4',
> --
> 2.40.1
>
Thread overview: 4+ messages
2023-04-28 10:31 [dpdk-dev] [PATCH] net/liquidio: removed LiquidIO ethdev driver jerinj
2023-05-02 14:18 ` Ferruh Yigit
2023-05-08 13:44 ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
2023-05-17 15:47 ` Jerin Jacob