* [PATCH v4] net/zxdh: Provided zxdh basic init
@ 2024-09-10 12:00 Junlong Wang
From: Junlong Wang @ 2024-09-10 12:00 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang
Provide zxdh initialization for the zxdh PMD driver,
including the message channel, NP init, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
V4: Resolve compilation issues
V3: Resolve compilation issues
V2: Resolve compilation issues and modify docs (zxdh.ini, zxdh.rst)
V1: Provide zxdh basic init and open source NPSDK lib
---
doc/guides/nics/features/zxdh.ini | 10 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 34 +
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 23 +
drivers/net/zxdh/zxdh_common.c | 59 ++
drivers/net/zxdh/zxdh_common.h | 32 +
drivers/net/zxdh/zxdh_ethdev.c | 1328 +++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 202 +++++
drivers/net/zxdh/zxdh_logs.h | 38 +
drivers/net/zxdh/zxdh_msg.c | 1177 +++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 408 +++++++++
drivers/net/zxdh/zxdh_npsdk.c | 158 ++++
drivers/net/zxdh/zxdh_npsdk.h | 216 +++++
drivers/net/zxdh/zxdh_pci.c | 462 ++++++++++
drivers/net/zxdh/zxdh_pci.h | 259 ++++++
drivers/net/zxdh/zxdh_queue.c | 138 +++
drivers/net/zxdh/zxdh_queue.h | 85 ++
drivers/net/zxdh/zxdh_ring.h | 87 ++
drivers/net/zxdh/zxdh_rxtx.h | 48 ++
20 files changed, 4766 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
create mode 100644 drivers/net/zxdh/zxdh_logs.h
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
create mode 100644 drivers/net/zxdh/zxdh_npsdk.c
create mode 100644 drivers/net/zxdh/zxdh_npsdk.h
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.c
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_ring.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini
new file mode 100644
index 0000000000..083c75511b
--- /dev/null
+++ b/doc/guides/nics/features/zxdh.ini
@@ -0,0 +1,10 @@
+;
+; Supported features of the 'zxdh' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
+ARMv8 = Y
+
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc7988a..8e371ac4a5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -69,3 +69,4 @@ Network Interface Controller Drivers
vhost
virtio
vmxnet3
+ zxdh
diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
new file mode 100644
index 0000000000..e878058b7b
--- /dev/null
+++ b/doc/guides/nics/zxdh.rst
@@ -0,0 +1,34 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2023 ZTE Corporation.
+
+ZXDH Poll Mode Driver
+======================
+
+The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
+for the 25/100 Gbps ZXDH NX Series Ethernet Controllers based on
+the ZTE Ethernet Controller E310/E312.
+
+
+Features
+--------
+
+Features of the zxdh PMD are:
+
+- Multi arch support: x86_64, ARMv8.
+
+Prerequisites
+-------------
+
+- Learn about ZXDH NX Series Ethernet Controller NICs at
+  `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
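+
+For example, a minimal build and smoke test with ``dpdk-testpmd`` could look
+like this (the PCI address is illustrative only):
+
+.. code-block:: console
+
+   meson setup build
+   ninja -C build
+   ./build/app/dpdk-testpmd -l 0-1 -a 0000:01:00.0,q_depth=1024 -- -i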
+
+Limitations or Known Issues
+---------------------------
+x86-32, Power8, ARMv7 and BSD are not supported yet.
+
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index fb6d34b782..1a3db8a04d 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -62,6 +62,7 @@ drivers = [
'vhost',
'virtio',
'vmxnet3',
+ 'zxdh',
]
std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
new file mode 100644
index 0000000000..593e3c5933
--- /dev/null
+++ b/drivers/net/zxdh/meson.build
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 ZTE Corporation
+
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+if (arch_subdir != 'x86' and arch_subdir != 'arm') or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on x86_64 and aarch64'
+ subdir_done()
+endif
+
+sources = files(
+ 'zxdh_ethdev.c',
+ 'zxdh_common.c',
+ 'zxdh_pci.c',
+ 'zxdh_msg.c',
+ 'zxdh_queue.c',
+ 'zxdh_npsdk.c',
+ )
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
new file mode 100644
index 0000000000..55497f8a24
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <ethdev_driver.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_common.h"
+
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ uint32_t val = *((volatile uint32_t *)(baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ *((volatile uint32_t *)(baseaddr + reg)) = val;
+}
+
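+/*
+ * VF lock helpers: reading ZXDH_VF_LOCK_REG appears to act as a hardware
+ * test-and-set, where a set ZXDH_VF_LOCK_ENABLE_MASK bit means the lock was
+ * granted and releasing clears the bit (an assumption based on the usage below).
+ */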
+int32_t zxdh_acquire_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ /* check whether lock is used */
+ if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
+ return -1;
+
+ return 0;
+}
+
+int32_t zxdh_release_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
+ var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
+ zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
+ return 0;
+ }
+
+ return -1;
+}
+
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg)
+{
+ uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val)
+{
+ *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val;
+}
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
new file mode 100644
index 0000000000..912eb9ad42
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#ifndef _ZXDH_COMMON_H_
+#define _ZXDH_COMMON_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+#define ZXDH_VF_LOCK_ENABLE_MASK 0x1
+#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10
+#define ZXDH_VF_LOCK_REG 0x90
+
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg);
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val);
+int32_t zxdh_release_lock(struct zxdh_hw *hw);
+int32_t zxdh_acquire_lock(struct zxdh_hw *hw);
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg);
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_COMMON_H_ */
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
new file mode 100644
index 0000000000..813ced24cd
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -0,0 +1,1328 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_interrupts.h>
+#include <eal_interrupts.h>
+#include <ethdev_pci.h>
+#include <rte_kvargs.h>
+#include <rte_hexdump.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_pci.h"
+#include "zxdh_logs.h"
+#include "zxdh_queue.h"
+#include "zxdh_rxtx.h"
+#include "zxdh_msg.h"
+#include "zxdh_npsdk.h"
+
+struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+struct zxdh_shared_data *zxdh_shared_data;
+const char *MZ_ZXDH_PMD_SHARED_DATA = "zxdh_pmd_shared_data";
+rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
+struct zxdh_dtb_shared_data g_dtb_data = {0};
+
+#define ZXDH_PMD_DEFAULT_HOST_FEATURES \
+ (1ULL << ZXDH_NET_F_MRG_RXBUF | \
+ 1ULL << ZXDH_NET_F_STATUS | \
+ 1ULL << ZXDH_NET_F_MQ | \
+ 1ULL << ZXDH_F_ANY_LAYOUT | \
+ 1ULL << ZXDH_F_VERSION_1 | \
+ 1ULL << ZXDH_F_RING_PACKED | \
+ 1ULL << ZXDH_F_IN_ORDER | \
+ 1ULL << ZXDH_F_ORDER_PLATFORM | \
+ 1ULL << ZXDH_F_NOTIFICATION_DATA |\
+ 1ULL << ZXDH_NET_F_MAC | \
+ 1ULL << ZXDH_NET_F_CSUM |\
+ 1ULL << ZXDH_NET_F_GUEST_CSUM |\
+ 1ULL << ZXDH_NET_F_GUEST_TSO4 |\
+ 1ULL << ZXDH_NET_F_GUEST_TSO6 |\
+ 1ULL << ZXDH_NET_F_HOST_TSO4 |\
+ 1ULL << ZXDH_NET_F_HOST_TSO6 |\
+ 1ULL << ZXDH_NET_F_GUEST_UFO |\
+ 1ULL << ZXDH_NET_F_HOST_UFO)
+
+#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \
+ (1ULL << ZXDH_NET_F_MRG_RXBUF | \
+ 1ULL << ZXDH_NET_F_STATUS | \
+ 1ULL << ZXDH_NET_F_MQ | \
+ 1ULL << ZXDH_F_ANY_LAYOUT | \
+ 1ULL << ZXDH_F_VERSION_1 | \
+ 1ULL << ZXDH_F_RING_PACKED | \
+ 1ULL << ZXDH_F_IN_ORDER | \
+ 1ULL << ZXDH_F_NOTIFICATION_DATA | \
+ 1ULL << ZXDH_NET_F_MAC)
+
+#define ZXDH_RX_QUEUES_MAX 128U
+#define ZXDH_TX_QUEUES_MAX 128U
+
+static unsigned int
+log2above(unsigned int v)
+{
+ unsigned int l;
+ unsigned int r;
+
+ for (l = 0, r = 0; (v >> 1); ++l, v >>= 1)
+ r |= (v & 1);
+ return l + r;
+}
+
+static uint16_t zxdh_queue_desc_pre_setup(uint16_t desc)
+{
+ uint32_t nb_desc = desc;
+
+ if (desc < ZXDH_MIN_QUEUE_DEPTH) {
+		PMD_RX_LOG(WARNING,
+			"nb_desc(%u) is below the min queue depth, increasing to %u",
+			desc, ZXDH_MIN_QUEUE_DEPTH);
+ return ZXDH_MIN_QUEUE_DEPTH;
+ }
+
+ if (desc > ZXDH_MAX_QUEUE_DEPTH) {
+		PMD_RX_LOG(WARNING,
+			"nb_desc(%u) exceeds the max queue depth, clamping to %d",
+			desc, ZXDH_MAX_QUEUE_DEPTH);
+ return ZXDH_MAX_QUEUE_DEPTH;
+ }
+
+ if (!rte_is_power_of_2(desc)) {
+ nb_desc = 1 << log2above(desc);
+ if (nb_desc > ZXDH_MAX_QUEUE_DEPTH)
+ nb_desc = ZXDH_MAX_QUEUE_DEPTH;
+
+		PMD_RX_LOG(WARNING,
+			"nb_desc(%u) is not a power of two, rounding up to %u",
+			desc, nb_desc);
+ }
+
+ return nb_desc;
+}
+
+static int32_t hw_q_depth_handler(const char *key __rte_unused,
+ const char *value, void *ret_val)
+{
+ uint16_t val = 0;
+ struct zxdh_hw *hw = ret_val;
+
+ val = strtoul(value, NULL, 0);
+ uint16_t q_depth = zxdh_queue_desc_pre_setup(val);
+
+ hw->q_depth = q_depth;
+ return 0;
+}
+
+static int32_t zxdh_dev_devargs_parse(struct rte_devargs *devargs, struct zxdh_hw *hw)
+{
+ struct rte_kvargs *kvlist = NULL;
+ int32_t ret = 0;
+
+ if (devargs == NULL)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL) {
+ PMD_INIT_LOG(ERR, "error when parsing param");
+ return 0;
+ }
+
+ ret = rte_kvargs_process(kvlist, "q_depth", hw_q_depth_handler, hw);
+ if (ret < 0) {
+ PMD_INIT_LOG(ERR, "Failed to parse q_depth");
+ goto exit;
+ }
+ if (!hw->q_depth)
+ hw->q_depth = ZXDH_MIN_QUEUE_DEPTH;
+
+exit:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static int zxdh_init_shared_data(void)
+{
+ const struct rte_memzone *mz;
+ int ret = 0;
+
+ rte_spinlock_lock(&zxdh_shared_data_lock);
+ if (zxdh_shared_data == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ /* Allocate shared memory. */
+ mz = rte_memzone_reserve(MZ_ZXDH_PMD_SHARED_DATA,
+ sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0);
+ if (mz == NULL) {
+ PMD_INIT_LOG(ERR, "Cannot allocate zxdh shared data");
+ ret = -rte_errno;
+ goto error;
+ }
+ zxdh_shared_data = mz->addr;
+ memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data));
+ rte_spinlock_init(&zxdh_shared_data->lock);
+ } else { /* Lookup allocated shared memory. */
+ mz = rte_memzone_lookup(MZ_ZXDH_PMD_SHARED_DATA);
+ if (mz == NULL) {
+ PMD_INIT_LOG(ERR, "Cannot attach zxdh shared data");
+ ret = -rte_errno;
+ goto error;
+ }
+ zxdh_shared_data = mz->addr;
+ }
+ }
+
+error:
+ rte_spinlock_unlock(&zxdh_shared_data_lock);
+ return ret;
+}
+
+static int zxdh_init_once(struct rte_eth_dev *eth_dev)
+{
+ PMD_INIT_LOG(DEBUG, "port 0x%x init...", eth_dev->data->port_id);
+ if (zxdh_init_shared_data())
+ return -rte_errno;
+
+ struct zxdh_shared_data *sd = zxdh_shared_data;
+ int ret = 0;
+
+ rte_spinlock_lock(&sd->lock);
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+ if (!sd->init_done) {
+ ++sd->secondary_cnt;
+ sd->init_done = true;
+ }
+ goto out;
+ }
+
+ sd->dev_refcnt++;
+out:
+ rte_spinlock_unlock(&sd->lock);
+ return ret;
+}
+
+static int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
+{
+ hw->host_features = zxdh_vtpci_get_features(hw);
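+	/* note: the features just read are overridden by the PMD default mask below */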
+ hw->host_features = ZXDH_PMD_DEFAULT_HOST_FEATURES;
+
+ uint64_t guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES;
+ uint64_t nego_features = guest_features & hw->host_features;
+
+ hw->guest_features = nego_features;
+
+ if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) {
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac),
+ &hw->mac_addr, RTE_ETHER_ADDR_LEN);
+ PMD_INIT_LOG(DEBUG, "get dev mac: %02X:%02X:%02X:%02X:%02X:%02X",
+ hw->mac_addr[0], hw->mac_addr[1],
+ hw->mac_addr[2], hw->mac_addr[3],
+ hw->mac_addr[4], hw->mac_addr[5]);
+ } else {
+ rte_eth_random_addr(&hw->mac_addr[0]);
+ PMD_INIT_LOG(DEBUG, "random dev mac: %02X:%02X:%02X:%02X:%02X:%02X",
+ hw->mac_addr[0], hw->mac_addr[1],
+ hw->mac_addr[2], hw->mac_addr[3],
+ hw->mac_addr[4], hw->mac_addr[5]);
+ }
+ uint32_t max_queue_pairs;
+
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs),
+ &max_queue_pairs, sizeof(max_queue_pairs));
+ PMD_INIT_LOG(DEBUG, "get max queue pairs %u", max_queue_pairs);
+ if (max_queue_pairs == 0)
+ hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX;
+ else
+ hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs);
+
+ PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs);
+
+ hw->weak_barriers = !vtpci_with_feature(hw, ZXDH_F_ORDER_PLATFORM);
+ return 0;
+}
+
+static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t i, mbuf_num = 0;
+
+ const char *type __rte_unused;
+ struct virtqueue *vq = NULL;
+ struct rte_mbuf *buf = NULL;
+ int32_t queue_type = 0;
+
+ if (hw->vqs == NULL)
+ return;
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (!vq)
+ continue;
+
+ queue_type = get_queue_type(i);
+ if (queue_type == VTNET_RQ)
+ type = "rxq";
+ else if (queue_type == VTNET_TQ)
+ type = "txq";
+ else
+ continue;
+
+ PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i);
+
+ while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) {
+ rte_pktmbuf_free(buf);
+ mbuf_num++;
+ }
+
+ PMD_INIT_LOG(DEBUG, "After freeing %s[%d] used and unused buf", type, i);
+ }
+
+ PMD_INIT_LOG(DEBUG, "%d mbufs freed", mbuf_num);
+}
+
+static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
+{
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int ret = zxdh_read_pci_caps(pci_dev, hw);
+
+ if (ret) {
+		PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed.", hw->vport.vport);
+ goto err;
+ }
+ zxdh_hw_internal[hw->port_id].vtpci_ops = &zxdh_modern_ops;
+ zxdh_vtpci_reset(hw);
+ zxdh_get_pci_dev_config(hw);
+ if (hw->vqs) { /* not reachable? */
+ zxdh_dev_free_mbufs(eth_dev);
+ ret = zxdh_free_queues(eth_dev);
+ if (ret < 0) {
+ PMD_INIT_LOG(ERR, "port 0x%x free queue failed.", hw->vport.vport);
+ goto err;
+ }
+ }
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
+ hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
+
+	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]);
+ PMD_INIT_LOG(DEBUG, "PORT MAC: %02X:%02X:%02X:%02X:%02X:%02X",
+ eth_dev->data->mac_addrs->addr_bytes[0],
+ eth_dev->data->mac_addrs->addr_bytes[1],
+ eth_dev->data->mac_addrs->addr_bytes[2],
+ eth_dev->data->mac_addrs->addr_bytes[3],
+ eth_dev->data->mac_addrs->addr_bytes[4],
+ eth_dev->data->mac_addrs->addr_bytes[5]);
+ /* If host does not support both status and MSI-X then disable LSC */
+ if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && (hw->use_msix != ZXDH_MSIX_NONE)) {
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+ PMD_INIT_LOG(DEBUG, "LSC enable");
+ } else {
+ eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
+ }
+ return 0;
+
+err:
+ PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id);
+ return ret;
+}
+
+
+static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
+{
+ PMD_INIT_LOG(INFO, "queue/interrupt unbinding");
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR);
+ }
+}
+
+static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (rte_intr_ack(dev->intr_handle) < 0)
+ return -1;
+
+ hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev));
+
+ return 0;
+}
+
+
+static void zxdh_devconf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t status = 0;
+ /* Read interrupt status which clears interrupt */
+ uint8_t isr = zxdh_vtpci_isr(hw);
+
+ if (zxdh_intr_unmask(dev) < 0)
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+ if (isr & ZXDH_PCI_ISR_CONFIG) {
+ /** todo provided later
+ * if (zxdh_dev_link_update(dev, 0) == 0)
+ * rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+ */
+
+ if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS)) {
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, status),
+ &status, sizeof(status));
+ if (status & ZXDH_NET_S_ANNOUNCE)
+ zxdh_notify_peers(dev);
+ }
+ }
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_frompfvf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = 0;
+
+ virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET);
+ if (hw->is_pf) {
+		PMD_INIT_LOG(INFO, "zxdh_frompfvf_intr_handler PF");
+ zxdh_bar_irq_recv(MSG_CHAN_END_VF, MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+		PMD_INIT_LOG(INFO, "zxdh_frompfvf_intr_handler VF");
+ zxdh_bar_irq_recv(MSG_CHAN_END_PF, MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_fromriscv_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = 0;
+
+ virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+ if (hw->is_pf) {
+		PMD_INIT_LOG(INFO, "zxdh_fromriscv_intr_handler PF");
+ zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+		PMD_INIT_LOG(INFO, "zxdh_fromriscv_intr_handler VF");
+ zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev)
+{
+	PMD_INIT_LOG(DEBUG, "unregistering interrupt callbacks");
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+	/* unregister callback that updates dev config intr */
+	rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+	/* Unregister risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev);
+ tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static int32_t zxdh_intr_disable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled)
+ return 0;
+
+ zxdh_intr_cb_unreg(dev);
+ if (rte_intr_disable(dev->intr_handle) < 0)
+ return -1;
+
+ hw->intr_enabled = 0;
+ return 0;
+}
+
+static int32_t zxdh_intr_release(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR);
+
+ zxdh_queues_unbind_intr(dev);
+ zxdh_intr_disable(dev);
+
+ rte_intr_efd_disable(dev->intr_handle);
+ rte_intr_vec_list_free(dev->intr_handle);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return 0;
+}
+
+static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint8_t i;
+
+ if (!hw->risc_intr) {
+		PMD_INIT_LOG(DEBUG, "allocating risc_intr");
+ hw->risc_intr = rte_zmalloc("risc_intr",
+ ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0);
+ if (hw->risc_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate risc_intr");
+ return -ENOMEM;
+ }
+ }
+
+ for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) {
+ if (dev->intr_handle->efds[i] < 0) {
+ PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ return -1;
+ }
+
+ struct rte_intr_handle *intr_handle = hw->risc_intr + i;
+
+ intr_handle->fd = dev->intr_handle->efds[i];
+ intr_handle->type = dev->intr_handle->type;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->dtb_intr) {
+ hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0);
+ if (hw->dtb_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr");
+ return -ENOMEM;
+ }
+ }
+
+ if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) {
+ PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1);
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return -1;
+ }
+ hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1];
+ hw->dtb_intr->type = dev->intr_handle->type;
+ return 0;
+}
+
+static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+ uint16_t vec;
+
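+	/* vqs are laid out in pairs: hw->vqs[2 * i] is rxq i, hw->vqs[2 * i + 1] is txq i */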
+ if (!dev->data->dev_conf.intr_conf.rxq) {
+ PMD_INIT_LOG(INFO, "queue/interrupt mask, nb_rx_queues %u",
+ dev->data->nb_rx_queues);
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(INFO, "vq%d irq set 0x%x, get 0x%x",
+ i * 2, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ } else {
+ PMD_INIT_LOG(DEBUG, "queue/interrupt binding, nb_rx_queues %u",
+ dev->data->nb_rx_queues);
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], i + ZXDH_QUE_INTR_VEC_BASE);
+ PMD_INIT_LOG(INFO, "vq%d irq set %d, get %d",
+ i * 2, i + ZXDH_QUE_INTR_VEC_BASE, vec);
+ }
+ }
+ /* mask all txq intr */
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(INFO, "vq%d irq set 0x%x, get 0x%x",
+ (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ return 0;
+}
+
+/*
+ * Should be called only after device is paused.
+ */
+int32_t zxdh_inject_pkts(struct rte_eth_dev *dev, struct rte_mbuf **tx_pkts, int32_t nb_pkts)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct virtnet_tx *txvq = dev->data->tx_queues[0];
+ int32_t ret = 0;
+
+ hw->inject_pkts = tx_pkts;
+ ret = dev->tx_pkt_burst(txvq, tx_pkts, nb_pkts);
+ hw->inject_pkts = NULL;
+
+ return ret;
+}
+
+int32_t zxdh_dev_pause(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (hw->started == 0) {
+ /* Device is just stopped. */
+ return -1;
+ }
+ hw->started = 0;
+ hw->admin_status = 0;
+ /*
+ * Prevent the worker threads from touching queues to avoid contention,
+ * 1 ms should be enough for the ongoing Tx function to finish.
+ */
+ rte_delay_ms(1);
+ return 0;
+}
+
+void zxdh_notify_peers(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct virtnet_rx *rxvq = NULL;
+ struct rte_mbuf *rarp_mbuf = NULL;
+
+ if (!dev->data->rx_queues)
+ return;
+
+ rxvq = dev->data->rx_queues[0];
+ if (!rxvq)
+ return;
+
+ rarp_mbuf = rte_net_make_rarp_packet(rxvq->mpool, (struct rte_ether_addr *)hw->mac_addr);
+ if (rarp_mbuf == NULL) {
+ PMD_DRV_LOG(ERR, "failed to make RARP packet.");
+ return;
+ }
+
+ /* If virtio port just stopped, no need to send RARP */
+ rte_spinlock_lock(&hw->state_lock);
+ if (zxdh_dev_pause(dev) < 0) {
+ rte_pktmbuf_free(rarp_mbuf);
+ rte_spinlock_unlock(&hw->state_lock);
+ return;
+ }
+ zxdh_inject_pkts(dev, &rarp_mbuf, 1);
+ hw->started = 1;
+ hw->admin_status = 1;
+ rte_spinlock_unlock(&hw->state_lock);
+}
+
+static void zxdh_intr_cb_reg(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ /* register callback to update dev config intr */
+ rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+	/* Register risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev);
+
+ tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static int32_t zxdh_intr_enable(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled) {
+ zxdh_intr_cb_reg(dev);
+ ret = rte_intr_enable(dev->intr_handle);
+ if (unlikely(ret))
+ PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name);
+
+ hw->intr_enabled = 1;
+ }
+ return ret;
+}
+
+static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t ret = 0;
+
+ if (!rte_intr_cap_multiple(dev->intr_handle)) {
+ PMD_INIT_LOG(ERR, "Multiple intr vector not supported");
+ return -ENOTSUP;
+ }
+ zxdh_intr_release(dev);
+ uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM;
+
+ if (dev->data->dev_conf.intr_conf.rxq)
+ nb_efd += dev->data->nb_rx_queues;
+
+ if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) {
+ PMD_INIT_LOG(ERR, "Fail to create eventfd");
+ return -1;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM);
+ return -ENOMEM;
+ }
+ PMD_INIT_LOG(INFO, "allocate %u rxq vectors", dev->intr_handle->vec_list_size);
+ if (zxdh_setup_risc_interrupts(dev) != 0) {
+		PMD_INIT_LOG(ERR, "Error setting up risc_v interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ if (zxdh_setup_dtb_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_queues_bind_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ /** DO NOT try to remove this! This function will enable msix,
+ * or QEMU will encounter SIGSEGV when DRIVER_OK is sent.
+ * And for legacy devices, this should be done before queue/vec
+ * binding to change the config size from 20 to 24, or
+ * ZXDH_MSI_QUEUE_VECTOR (22) will be ignored.
+ **/
+ if (zxdh_intr_enable(dev) < 0) {
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ return 0;
+
+free_intr_vec:
+ zxdh_intr_release(dev);
+ return ret;
+}
+
+/* dev_ops for zxdh, bare necessities for basic operation */
+static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_configure = NULL,
+ .dev_start = NULL,
+ .dev_stop = NULL,
+ .dev_close = NULL,
+
+ .rx_queue_setup = NULL,
+ .rx_queue_intr_enable = NULL,
+ .rx_queue_intr_disable = NULL,
+
+ .tx_queue_setup = NULL,
+};
+
+
+static int32_t set_rxtx_funcs(struct rte_eth_dev *eth_dev)
+{
+ /** todo later
+ * eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare;
+ */
+
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+
+ if (!vtpci_packed_queue(hw)) {
+		PMD_INIT_LOG(ERR, "port %u does not support packed queue", eth_dev->data->port_id);
+ return -1;
+ }
+ if (!vtpci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) {
+		PMD_INIT_LOG(ERR, "port %u does not support rx mergeable", eth_dev->data->port_id);
+ return -1;
+ }
+ /** todo later provided rx/tx
+ * eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed;
+ * eth_dev->rx_pkt_burst = &zxdh_recv_mergeable_pkts_packed;
+ */
+
+ return 0;
+}
+
+static void zxdh_msg_cb_reg(struct zxdh_hw *hw)
+{
+ if (hw->is_pf)
+ zxdh_bar_chan_msg_recv_register(MODULE_BAR_MSG_TO_PF, pf_recv_bar_msg);
+ else
+ zxdh_bar_chan_msg_recv_register(MODULE_BAR_MSG_TO_VF, vf_recv_bar_msg);
+}
+
+static void zxdh_priv_res_init(struct zxdh_hw *hw)
+{
+	hw->vlan_fiter = rte_zmalloc("vlan_filter", 64 * sizeof(uint64_t), 1);
+	if (hw->vlan_fiter == NULL)
+		PMD_DRV_LOG(ERR, "Failed to allocate vlan filter bitmap");
+ if (hw->is_pf)
+ hw->vfinfo = rte_zmalloc("vfinfo", ZXDH_MAX_VF * sizeof(struct vfinfo), 4);
+ else
+ hw->vfinfo = NULL;
+}
+
+static void set_vfs_pcieid(struct zxdh_hw *hw)
+{
+ if (hw->pfinfo.vf_nums > ZXDH_MAX_VF) {
+ PMD_DRV_LOG(ERR, "vf nums %u out of range", hw->pfinfo.vf_nums);
+ return;
+ }
+ if (hw->vfinfo == NULL) {
+		PMD_DRV_LOG(ERR, "vfinfo is uninitialized");
+ return;
+ }
+
+ PMD_DRV_LOG(INFO, "vf nums %d", hw->pfinfo.vf_nums);
+ int vf_idx;
+
+ for (vf_idx = 0; vf_idx < hw->pfinfo.vf_nums; vf_idx++)
+ hw->vfinfo[vf_idx].pcieid = VF_PCIE_ID(hw->pcie_id, vf_idx);
+}
+
+
+static void zxdh_sriovinfo_init(struct zxdh_hw *hw)
+{
+ hw->pfinfo.pcieid = PF_PCIE_ID(hw->pcie_id);
+
+ if (hw->is_pf)
+ set_vfs_pcieid(hw);
+}
+
+static int zxdh_tbl_entry_offline_destroy(struct zxdh_hw *hw)
+{
+ int ret = 0;
+ uint32_t sdt_no;
+
+ if (!g_dtb_data.init_done)
+ return ret;
+
+ if (hw->is_pf) {
+ sdt_no = MK_SDT_NO(L2_ENTRY, hw->hash_search_index);
+ ret = dpp_dtb_hash_offline_delete(0, g_dtb_data.queueid, sdt_no, 0);
+ PMD_DRV_LOG(DEBUG, "%d dpp_dtb_hash_offline_delete sdt_no %d",
+ hw->port_id, sdt_no);
+ if (ret)
+ PMD_DRV_LOG(ERR, "%d dpp_dtb_hash_offline_delete sdt_no %d failed",
+ hw->port_id, sdt_no);
+
+ sdt_no = MK_SDT_NO(MC, hw->hash_search_index);
+ ret = dpp_dtb_hash_offline_delete(0, g_dtb_data.queueid, sdt_no, 0);
+ PMD_DRV_LOG(DEBUG, "%d dpp_dtb_hash_offline_delete sdt_no %d",
+ hw->port_id, sdt_no);
+ if (ret)
+ PMD_DRV_LOG(ERR, "%d dpp_dtb_hash_offline_delete sdt_no %d failed",
+ hw->port_id, sdt_no);
+ }
+ return ret;
+}
+
+static inline int zxdh_dtb_dump_res_init(struct zxdh_hw *hw __rte_unused,
+ DPP_DEV_INIT_CTRL_T *dpp_ctrl)
+{
+ int ret = 0;
+ int i;
+
+ struct zxdh_dtb_bulk_dump_info dtb_dump_baseres[] = {
+ /* eram */
+ {"zxdh_sdt_vport_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_VPORT_ATT_TABLE, NULL},
+ {"zxdh_sdt_panel_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_PANEL_ATT_TABLE, NULL},
+ {"zxdh_sdt_rss_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_RSS_ATT_TABLE, NULL},
+ {"zxdh_sdt_vlan_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_VLAN_ATT_TABLE, NULL},
+ /* hash */
+ {"zxdh_sdt_l2_entry_table0", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE0, NULL},
+ {"zxdh_sdt_l2_entry_table1", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE1, NULL},
+ {"zxdh_sdt_l2_entry_table2", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE2, NULL},
+ {"zxdh_sdt_l2_entry_table3", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE3, NULL},
+ {"zxdh_sdt_mc_table0", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE0, NULL},
+ {"zxdh_sdt_mc_table1", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE1, NULL},
+ {"zxdh_sdt_mc_table2", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE2, NULL},
+ {"zxdh_sdt_mc_table3", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE3, NULL},
+ };
+ for (i = 0; i < (int)RTE_DIM(dtb_dump_baseres); i++) {
+ struct zxdh_dtb_bulk_dump_info *p = dtb_dump_baseres + i;
+ const struct rte_memzone *generic_dump_mz = rte_memzone_reserve_aligned(p->mz_name,
+ p->mz_size, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+
+ if (generic_dump_mz == NULL) {
+ PMD_DRV_LOG(ERR,
+ "Cannot alloc mem for dtb tbl bulk dump, mz_name is %s, mz_size is %u",
+ p->mz_name, p->mz_size);
+ ret = -ENOMEM;
+ return ret;
+ }
+ p->mz = generic_dump_mz;
+ dpp_ctrl->dump_addr_info[i].vir_addr = generic_dump_mz->addr_64;
+ dpp_ctrl->dump_addr_info[i].phy_addr = generic_dump_mz->iova;
+ dpp_ctrl->dump_addr_info[i].sdt_no = p->sdt_no;
+ dpp_ctrl->dump_addr_info[i].size = p->mz_size;
+
+ g_dtb_data.dtb_table_bulk_dump_mz[dpp_ctrl->dump_sdt_num] = generic_dump_mz;
+ dpp_ctrl->dump_sdt_num++;
+ }
+ return ret;
+}
+
+static void dtb_data_res_free(struct zxdh_hw *hw)
+{
+ struct rte_eth_dev *dev = hw->eth_dev;
+
+ if ((g_dtb_data.init_done) && (g_dtb_data.bind_device == dev)) {
+ PMD_DRV_LOG(INFO, "%s g_dtb_data free queue %d",
+ dev->data->name, g_dtb_data.queueid);
+
+ int ret = 0;
+
+ ret = dpp_np_online_uninstall(0, dev->data->name, g_dtb_data.queueid);
+ if (ret)
+ PMD_DRV_LOG(ERR, "%s dpp_np_online_uninstall failed", dev->data->name);
+
+ if (g_dtb_data.dtb_table_conf_mz) {
+ rte_memzone_free(g_dtb_data.dtb_table_conf_mz);
+ PMD_DRV_LOG(INFO, "%s free dtb_table_conf_mz ", dev->data->name);
+ g_dtb_data.dtb_table_conf_mz = NULL;
+ }
+ if (g_dtb_data.dtb_table_dump_mz) {
+ PMD_DRV_LOG(INFO, "%s free dtb_table_dump_mz ", dev->data->name);
+ rte_memzone_free(g_dtb_data.dtb_table_dump_mz);
+ g_dtb_data.dtb_table_dump_mz = NULL;
+ }
+ int i;
+
+ for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) {
+ if (g_dtb_data.dtb_table_bulk_dump_mz[i]) {
+ rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]);
+ PMD_DRV_LOG(INFO, "%s free dtb_table_bulk_dump_mz[%d]",
+ dev->data->name, i);
+ g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL;
+ }
+ }
+ g_dtb_data.init_done = 0;
+ g_dtb_data.bind_device = NULL;
+ }
+ if (zxdh_shared_data != NULL)
+ zxdh_shared_data->npsdk_init_done = 0;
+}
+
+static inline int npsdk_dtb_res_init(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (g_dtb_data.init_done) {
+		PMD_INIT_LOG(DEBUG, "DTB res already initialized, dev %s needs no init",
+ dev->device->name);
+ return 0;
+ }
+ g_dtb_data.queueid = INVALID_DTBQUE;
+ g_dtb_data.bind_device = dev;
+ g_dtb_data.dev_refcnt++;
+ g_dtb_data.init_done = 1;
+ DPP_DEV_INIT_CTRL_T *dpp_ctrl = rte_malloc(NULL, sizeof(*dpp_ctrl) +
+ sizeof(DPP_DTB_ADDR_INFO_T) * 256, 0);
+
+ if (dpp_ctrl == NULL) {
+		PMD_INIT_LOG(ERR, "dev %s cannot allocate memory for dpp_ctrl", dev->device->name);
+ ret = -ENOMEM;
+ goto free_res;
+ }
+ memset(dpp_ctrl, 0, sizeof(*dpp_ctrl) + sizeof(DPP_DTB_ADDR_INFO_T) * 256);
+
+ dpp_ctrl->queue_id = 0xff;
+ dpp_ctrl->vport = hw->vport.vport;
+ dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC;
+ strcpy((char *)dpp_ctrl->port_name, dev->device->name);
+ dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0];
+
+ struct bar_offset_params param = {0};
+ struct bar_offset_res res = {0};
+
+ param.pcie_id = hw->pcie_id;
+ param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET;
+ param.type = URI_NP;
+
+ ret = zxdh_get_bar_offset(¶m, &res);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "dev %s get npbar offset failed", dev->device->name);
+ goto free_res;
+ }
+ dpp_ctrl->np_bar_len = res.bar_length;
+ dpp_ctrl->np_bar_offset = res.bar_offset;
+ if (!g_dtb_data.dtb_table_conf_mz) {
+ const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz",
+ ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+
+ if (conf_mz == NULL) {
+ PMD_INIT_LOG(ERR,
+				"dev %s cannot allocate memory for dtb table conf",
+ dev->device->name);
+ ret = -ENOMEM;
+ goto free_res;
+ }
+ dpp_ctrl->down_vir_addr = conf_mz->addr_64;
+ dpp_ctrl->down_phy_addr = conf_mz->iova;
+ g_dtb_data.dtb_table_conf_mz = conf_mz;
+ }
+ if (!g_dtb_data.dtb_table_dump_mz) {
+ const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz",
+ ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+
+ if (dump_mz == NULL) {
+ PMD_INIT_LOG(ERR,
+ "dev %s Cannot allocate memory for dtb table dump",
+ dev->device->name);
+ ret = -ENOMEM;
+ goto free_res;
+ }
+ dpp_ctrl->dump_vir_addr = dump_mz->addr_64;
+ dpp_ctrl->dump_phy_addr = dump_mz->iova;
+ g_dtb_data.dtb_table_dump_mz = dump_mz;
+ }
+ /* init bulk dump */
+ zxdh_dtb_dump_res_init(hw, dpp_ctrl);
+
+ ret = dpp_host_np_init(0, dpp_ctrl);
+ if (ret) {
+		PMD_INIT_LOG(ERR, "dev %s dpp host np init failed, ret %d", dev->device->name, ret);
+ goto free_res;
+ }
+
+	PMD_INIT_LOG(INFO, "dev %s dpp host np init ok, dtb queue %d",
+ dev->device->name, dpp_ctrl->queue_id);
+ g_dtb_data.queueid = dpp_ctrl->queue_id;
+	rte_free(dpp_ctrl);
+ return 0;
+
+free_res:
+ dtb_data_res_free(hw);
+ rte_free(dpp_ctrl);
+ return -ret;
+}
+
+static int32_t dpp_res_uni_init(uint32_t type)
+{
+ uint32_t ret = 0;
+ uint32_t dev_id = 0;
+ DPP_APT_HASH_RES_INIT_T HashResInit = {0};
+ DPP_APT_ERAM_RES_INIT_T EramResInit = {0};
+ DPP_APT_STAT_RES_INIT_T StatResInit = {0};
+
+ ret = dpp_apt_hash_res_get(type, &HashResInit);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s hash_res_get failed!", __func__);
+ return -1;
+ }
+ ret = dpp_apt_eram_res_get(type, &EramResInit);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s eram_res_get failed!", __func__);
+ return -1;
+ }
+ ret = dpp_apt_stat_res_get(type, &StatResInit);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s stat_res_get failed!", __func__);
+ return -1;
+ }
+ ret = dpp_apt_hash_global_res_init(dev_id);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s hash_global_res_init failed!", __func__);
+ return -1;
+ }
+
+ ret = dpp_apt_hash_func_res_init(dev_id, HashResInit.func_num, HashResInit.func_res);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s hash_func_res_init failed!", __func__);
+ return -1;
+ }
+
+ ret = dpp_apt_hash_bulk_res_init(dev_id, HashResInit.bulk_num, HashResInit.bulk_res);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s hash_bulk_res_init failed!", __func__);
+ return -1;
+ }
+ ret = dpp_apt_hash_tbl_res_init(dev_id, HashResInit.tbl_num, HashResInit.tbl_res);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s hash_tbl_res_init failed!", __func__);
+ return -1;
+ }
+ ret = dpp_apt_eram_res_init(dev_id, EramResInit.tbl_num, EramResInit.eram_res);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s eram_res_init failed!", __func__);
+ return -1;
+ }
+ ret = dpp_stat_ppu_eram_baddr_set(dev_id, StatResInit.eram_baddr);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s stat_ppu_eram_baddr_set failed!", __func__);
+ return -1;
+ }
+ ret = dpp_stat_ppu_eram_depth_set(dev_id, StatResInit.eram_depth); /* unit: 128bits */
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s stat_ppu_eram_depth_set failed!", __func__);
+ return -1;
+ }
+ ret = dpp_se_cmmu_smmu1_cfg_set(dev_id, StatResInit.ddr_baddr);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s dpp_se_cmmu_smmu1_cfg_set failed!", __func__);
+ return -1;
+ }
+ ret = dpp_stat_ppu_ddr_baddr_set(dev_id, StatResInit.ppu_ddr_offset); /* unit: 128bits */
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s stat_ppu_ddr_baddr_set failed!", __func__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static inline int npsdk_apt_res_init(struct rte_eth_dev *dev __rte_unused)
+{
+ int32_t ret = 0;
+
+ ret = dpp_res_uni_init(SE_NIC_RES_TYPE);
+ if (ret) {
+		PMD_INIT_LOG(ERR, "init standard dpp res failed");
+ return -1;
+ }
+
+ return ret;
+}
+
+static int zxdh_np_init(struct rte_eth_dev *eth_dev)
+{
+ uint32_t ret = 0;
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+
+ if ((zxdh_shared_data != NULL) && zxdh_shared_data->npsdk_init_done) {
+ g_dtb_data.dev_refcnt++;
+ zxdh_tbl_entry_offline_destroy(hw);
+		PMD_DRV_LOG(DEBUG, "no need to init dtb, dtb channel %d devref %d",
+ g_dtb_data.queueid, g_dtb_data.dev_refcnt);
+ return 0;
+ }
+
+ if (hw->is_pf) {
+ ret = npsdk_dtb_res_init(eth_dev);
+ if (ret) {
+			PMD_DRV_LOG(ERR, "dpp dtb res init failed, ret:%d", ret);
+ return -ret;
+ }
+
+ ret = npsdk_apt_res_init(eth_dev);
+ if (ret) {
+			PMD_DRV_LOG(ERR, "dpp apt res init failed, ret:%d", ret);
+ return -ret;
+ }
+ }
+ if (zxdh_shared_data != NULL)
+ zxdh_shared_data->npsdk_init_done = 1;
+
+ return 0;
+}
+
+static void zxdh_priv_res_free(struct zxdh_hw *priv)
+{
+ rte_free(priv->vlan_fiter);
+ priv->vlan_fiter = NULL;
+ rte_free(priv->vfinfo);
+ priv->vfinfo = NULL;
+}
+
+static int zxdh_tbl_entry_destroy(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint32_t sdt_no;
+ int ret = 0;
+
+ if (!g_dtb_data.init_done)
+ return ret;
+
+ if (hw->is_pf) {
+ sdt_no = MK_SDT_NO(L2_ENTRY, hw->hash_search_index);
+ ret = dpp_dtb_hash_online_delete(0, g_dtb_data.queueid, sdt_no);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s dpp_dtb_hash_online_delete sdt_no %d failed ",
+ dev->data->name, sdt_no);
+ return -1;
+ }
+
+ sdt_no = MK_SDT_NO(MC, hw->hash_search_index);
+ ret = dpp_dtb_hash_online_delete(0, g_dtb_data.queueid, sdt_no);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s dpp_dtb_hash_online_delete sdt_no %d failed ",
+ dev->data->name, sdt_no);
+ return -1;
+ }
+ }
+ return ret;
+}
+
+static void zxdh_np_destroy(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int ret;
+
+ ret = zxdh_tbl_entry_destroy(dev);
+ if (ret)
+ return;
+
+ if ((!g_dtb_data.init_done) && (!g_dtb_data.dev_refcnt))
+ return;
+
+ if (--g_dtb_data.dev_refcnt == 0)
+ dtb_data_res_free(hw);
+
+ PMD_DRV_LOG(DEBUG, "g_dtb_data dev_refcnt %d", g_dtb_data.dev_refcnt);
+}
+
+static int32_t zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int32_t ret;
+
+ eth_dev->dev_ops = &zxdh_eth_dev_ops;
+
+ /**
+	 * The primary process does the whole initialization; for secondary
+	 * processes, we just select the same Rx and Tx functions as the primary.
+ */
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+ VTPCI_OPS(hw) = &zxdh_modern_ops;
+ set_rxtx_funcs(eth_dev);
+ return 0;
+ }
+ /* Allocate memory for storing MAC addresses */
+ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
+ if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes to store MAC addresses",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN);
+ return -ENOMEM;
+ }
+ memset(hw, 0, sizeof(*hw));
+ ret = zxdh_dev_devargs_parse(eth_dev->device->devargs, hw);
+ if (ret < 0) {
+ PMD_INIT_LOG(ERR, "dev args parse failed");
+ return -EINVAL;
+ }
+
+ hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
+ if (hw->bar_addr[0] == 0) {
+ PMD_INIT_LOG(ERR, "Bad mem resource.");
+ return -EIO;
+ }
+ hw->device_id = pci_dev->id.device_id;
+ hw->port_id = eth_dev->data->port_id;
+ hw->eth_dev = eth_dev;
+ hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ hw->is_pf = 0;
+
+ rte_spinlock_init(&hw->state_lock);
+ if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID ||
+ pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) {
+ hw->is_pf = 1;
+ hw->pfinfo.vf_nums = pci_dev->max_vfs;
+ }
+
+	/* reset device and get dev config */
+ ret = zxdh_init_once(eth_dev);
+ if (ret != 0)
+ goto err_zxdh_init;
+
+ ret = zxdh_init_device(eth_dev);
+ if (ret < 0)
+ goto err_zxdh_init;
+
+ ret = zxdh_np_init(eth_dev);
+ if (ret)
+ goto err_zxdh_init;
+
+ zxdh_priv_res_init(hw);
+ zxdh_sriovinfo_init(hw);
+ zxdh_msg_cb_reg(hw);
+	ret = zxdh_configure_intr(eth_dev);
+	if (ret != 0)
+		goto err_zxdh_init;
+	return 0;
+
+err_zxdh_init:
+ zxdh_intr_release(eth_dev);
+ zxdh_np_destroy(eth_dev);
+ zxdh_bar_msg_chan_exit();
+ zxdh_priv_res_free(hw);
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
+ return ret;
+}
+
+int32_t zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct zxdh_hw),
+ zxdh_eth_dev_init);
+}
+
+
+static int32_t zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev __rte_unused)
+{
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ return 0;
+ /** todo later
+ * zxdh_dev_close(eth_dev);
+ */
+ return 0;
+}
+
+int32_t zxdh_eth_pci_remove(struct rte_pci_device *pci_dev)
+{
+ int32_t ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit);
+
+ if (ret == -ENODEV) { /* Port has already been released by close. */
+ ret = 0;
+ }
+ return ret;
+}
+
+static const struct rte_pci_id pci_id_zxdh_map[] = {
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)},
+ {.vendor_id = 0, /* sentinel */ },
+};
+static struct rte_pci_driver zxdh_pmd = {
+ .driver = {.name = "net_zxdh", },
+ .id_table = pci_id_zxdh_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = zxdh_eth_pci_probe,
+ .remove = zxdh_eth_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, DEBUG);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, DEBUG);
+
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, DEBUG);
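+
+/*
+ * Example usage (the PCI address is illustrative only):
+ *   dpdk-testpmd -a 0000:01:00.0,q_depth=1024
+ * q_depth is rounded up to a power of two and clamped to
+ * [ZXDH_MIN_QUEUE_DEPTH, ZXDH_MAX_QUEUE_DEPTH] by zxdh_queue_desc_pre_setup().
+ */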
+RTE_PMD_REGISTER_PARAM_STRING(net_zxdh,
+ "q_depth=<int>");
+
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
new file mode 100644
index 0000000000..6683ec5edc
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -0,0 +1,202 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_ETHDEV_H_
+#define _ZXDH_ETHDEV_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include "ethdev_pci.h"
+
+extern struct zxdh_dtb_shared_data g_dtb_data;
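+/* Compose PF/VF pcieid: keep bits 15:8 of pcie_id, set bit 11, and (for VFs) place vf_idx in the low byte */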
+#define PF_PCIE_ID(pcie_id) (((pcie_id) & 0xff00) | (1 << 11))
+#define VF_PCIE_ID(pcie_id, vf_idx) (((pcie_id) & 0xff00) | (1 << 11) | ((vf_idx) & 0xff))
+
+#define ZXDH_QUEUES_NUM_MAX 256
+
+/* ZXDH PCI vendor/device ID. */
+#define PCI_VENDOR_ID_ZTE 0x1cf2
+
+#define ZXDH_E310_PF_DEVICEID 0x8061
+#define ZXDH_E310_VF_DEVICEID 0x8062
+#define ZXDH_E312_PF_DEVICEID 0x8049
+#define ZXDH_E312_VF_DEVICEID 0x8060
+
+#define ZXDH_MAX_UC_MAC_ADDRS 32
+#define ZXDH_MAX_MC_MAC_ADDRS 32
+#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+
+/* BAR definitions */
+#define ZXDH_NUM_BARS 2
+#define ZXDH_BAR0_INDEX 0
+
+#define ZXDH_MIN_QUEUE_DEPTH 1024
+#define ZXDH_MAX_QUEUE_DEPTH 32768
+
+#define ZXDH_MAX_VF 256
+
+#define ZXDH_TBL_ERAM_DUMP_SIZE (4 * 1024 * 1024)
+#define ZXDH_TBL_ZCAM_DUMP_SIZE (5 * 1024 * 1024)
+
+#define INVALID_DTBQUE 0xFFFF
+#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30
+#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024))
+#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024))
+
+/*
+ * Process dev config changed interrupt. Call the callback
+ * if link state changed, generate gratuitous RARP packet if
+ * the status indicates an ANNOUNCE.
+ */
+#define ZXDH_NET_S_LINK_UP 1 /* Link is up */
+#define ZXDH_NET_S_ANNOUNCE 2 /* Announcement is needed */
+
+struct pfinfo {
+ uint16_t pcieid;
+ uint16_t vf_nums;
+};
+struct vfinfo {
+ uint16_t vf_idx;
+ uint16_t pcieid;
+ uint16_t vport;
+ uint8_t flag;
+ uint8_t state;
+ uint8_t rsv;
+ struct rte_ether_addr mac_addr;
+ struct rte_ether_addr vf_mac[ZXDH_MAX_MAC_ADDRS];
+};
+
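+/* 16-bit vport identifier; the bitfields below are assumed to be allocated low-to-high */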
+union VPORT {
+ uint16_t vport;
+
+ __extension__
+ struct {
+ uint16_t vfid:8;
+ uint16_t pfid:3;
+ uint16_t vf_flag:1;
+ uint16_t epid:3;
+ uint16_t direct_flag:1;
+ };
+};
+
+struct chnl_context {
+ uint16_t valid;
+ uint16_t ph_chno;
+}; /* 4B */
+
+struct zxdh_hw {
+ uint64_t host_features;
+ uint64_t guest_features;
+ uint32_t max_queue_pairs;
+ uint16_t max_mtu;
+ uint8_t vtnet_hdr_size;
+ uint8_t vlan_strip;
+ uint8_t use_msix;
+ uint8_t intr_enabled;
+ uint8_t started;
+ uint8_t weak_barriers;
+
+ bool has_tx_offload;
+ bool has_rx_offload;
+
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+ uint16_t port_id;
+
+ uint32_t notify_off_multiplier;
+	uint32_t speed; /* link speed in Mbps */
+	uint32_t speed_mode; /* link speed mode: 1x/2x/3x */
+ uint8_t duplex;
+ uint8_t *isr;
+ uint16_t *notify_base;
+
+ struct zxdh_pci_common_cfg *common_cfg;
+ struct zxdh_net_config *dev_cfg;
+
+ uint16_t queue_num;
+ uint16_t device_id;
+
+ uint16_t pcie_id;
+ uint8_t phyport;
+ bool msg_chan_init;
+
+ uint8_t panel_id;
+ uint8_t rsv[1];
+
+ /**
+ * App management thread and virtio interrupt handler
+ * thread both can change device state,
+ * this lock is meant to avoid such a contention.
+ */
+ rte_spinlock_t state_lock;
+ struct rte_mbuf **inject_pkts;
+ struct virtqueue **vqs;
+
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+	struct rte_intr_handle *risc_intr; /* Interrupt handle of risc_v to host */
+	struct rte_intr_handle *dtb_intr; /* Interrupt handle of dtb to host */
+
+ struct chnl_context channel_context[ZXDH_QUEUES_NUM_MAX];
+ union VPORT vport;
+
+ uint8_t is_pf : 1,
+ switchoffload : 1;
+ uint8_t hash_search_index;
+ uint8_t admin_status;
+
+ uint16_t vfid;
+ uint16_t q_depth;
+ uint64_t *vlan_fiter;
+ struct pfinfo pfinfo;
+ struct vfinfo *vfinfo;
+ struct rte_eth_dev *eth_dev;
+};
+
+/* Shared data between primary and secondary processes. */
+struct zxdh_shared_data {
+ rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */
+ int init_done; /* Whether primary has done initialization. */
+ unsigned int secondary_cnt; /* Number of secondary processes init'd. */
+
+ int npsdk_init_done;
+ uint32_t dev_refcnt;
+ struct zxdh_dtb_shared_data *dtb_data;
+};
+
+struct zxdh_dtb_shared_data {
+ int init_done;
+ char name[32];
+ uint16_t queueid;
+ uint16_t vport;
+ uint32_t vector;
+ const struct rte_memzone *dtb_table_conf_mz;
+ const struct rte_memzone *dtb_table_dump_mz;
+ const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT];
+ struct rte_eth_dev *bind_device;
+ uint32_t dev_refcnt;
+};
+
+struct zxdh_dtb_bulk_dump_info {
+ const char *mz_name;
+ uint32_t mz_size;
+	uint32_t sdt_no; /**< sdt no: 0~255 */
+ const struct rte_memzone *mz;
+};
+
+void zxdh_interrupt_handler(void *param);
+int32_t zxdh_dev_pause(struct rte_eth_dev *dev);
+int32_t zxdh_inject_pkts(struct rte_eth_dev *dev, struct rte_mbuf **tx_pkts, int32_t nb_pkts);
+void zxdh_notify_peers(struct rte_eth_dev *dev);
+
+int32_t zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev);
+int32_t zxdh_eth_pci_remove(struct rte_pci_device *pci_dev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_ETHDEV_H_ */
diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h
new file mode 100644
index 0000000000..fb9b2d452f
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_logs.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_LOGS_H_
+#define _ZXDH_LOGS_H_
+
+#include <rte_log.h>
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+extern int32_t zxdh_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_init, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+extern int32_t zxdh_logtype_driver;
+#define PMD_DRV_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_driver, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+extern int zxdh_logtype_rx;
+#define PMD_RX_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_rx, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+extern int zxdh_logtype_tx;
+#define PMD_TX_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_tx, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+extern int32_t zxdh_logtype_msg;
+#define PMD_MSG_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_msg, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+#endif /* _ZXDH_LOGS_H_ */
+
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
new file mode 100644
index 0000000000..e625cbea82
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -0,0 +1,1177 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <pthread.h>
+#include <rte_cycles.h>
+#include <inttypes.h>
+#include <rte_malloc.h>
+
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+
+#define REPS_INFO_FLAG_USABLE 0x00
+#define REPS_INFO_FLAG_USED 0xa0
+
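+/* Encode bus/device/function into a 16-bit ECAM BDF: bus in bits 15:8, device in bits 7:3, function in bits 2:0 */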
+#define BDF_ECAM(bus, devid, func) (((bus & 0xff) << 8) | (func & 0x07) | ((devid & 0x1f) << 3))
+
+/**
+ * common.ko will work in 5 scenarios
+ * 1: SCENE_HOST_IN_DPU : host in DPU card
+ * 2: SCENE_ZF_IN_DPU : zf in DPU card
+ * 3: SCENE_NIC_WITH_DDR : inic with DDR
+ * 4: SCENE_NIC_NO_DDR : inic without DDR
+ * 5: SCENE_STD_NIC : std card
+ */
+#ifdef SCENE_HOST_IN_DPU
+#define BAR_PF_NUM 31
+#define BAR_VF_NUM 1024
+#define BAR_INDEX_PF_TO_VF 1
+#define BAR_INDEX_MPF_TO_MPF 1
+#define BAR_INDEX_MPF_TO_PFVF 0xff
+#define BAR_INDEX_PFVF_TO_MPF 0xff
+#endif
+
+#ifdef SCENE_ZF_IN_DPU
+#define BAR_PF_NUM 7
+#define BAR_VF_NUM 128
+#define BAR_INDEX_PF_TO_VF 0xff
+#define BAR_INDEX_MPF_TO_MPF 1
+#define BAR_INDEX_MPF_TO_PFVF 0xff
+#define BAR_INDEX_PFVF_TO_MPF 0xff
+#endif
+
+#ifdef SCENE_NIC_WITH_DDR
+#define BAR_PF_NUM 31
+#define BAR_VF_NUM 1024
+#define BAR_INDEX_PF_TO_VF 1
+#define BAR_INDEX_MPF_TO_MPF 0xff
+#define BAR_INDEX_MPF_TO_PFVF 0xff
+#define BAR_INDEX_PFVF_TO_MPF 0xff
+#endif
+
+#ifdef SCENE_NIC_NO_DDR
+#define BAR_PF_NUM 31
+#define BAR_VF_NUM 1024
+#define BAR_INDEX_PF_TO_VF 1
+#define BAR_INDEX_MPF_TO_MPF 0xff
+#define BAR_INDEX_MPF_TO_PFVF 1
+#define BAR_INDEX_PFVF_TO_MPF 2
+#endif
+
+#ifdef SCENE_STD_NIC
+#define BAR_PF_NUM 7
+#define BAR_VF_NUM 256
+#define BAR_INDEX_PF_TO_VF 1
+#define BAR_INDEX_MPF_TO_MPF 0xff
+#define BAR_INDEX_MPF_TO_PFVF 1
+#define BAR_INDEX_PFVF_TO_MPF 2
+#endif
+
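+/* SCENE_TEST is defined unconditionally here, overriding the scenario selection above (presumably a bring-up default) */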
+#define SCENE_TEST
+#ifdef SCENE_TEST
+#define BAR_PF_NUM 7
+#define BAR_VF_NUM 256
+#define BAR_INDEX_PF_TO_VF 0
+#define BAR_INDEX_MPF_TO_MPF 0xff
+#define BAR_INDEX_MPF_TO_PFVF 0
+#define BAR_INDEX_PFVF_TO_MPF 0
+#endif
+
+/**
+ * 0: left 2K, 1: right 2K
+ * src/dst: TO_RISC, TO_PFVF, TO_MPF
+ * MPF: 0 0 0
+ * PF: 0 0 1
+ * VF: 0 1 1
+ **/
+#define BAR_MSG_SRC_NUM 3
+#define BAR_MSG_SRC_MPF 0
+#define BAR_MSG_SRC_PF 1
+#define BAR_MSG_SRC_VF 2
+#define BAR_MSG_SRC_ERR 0xff
+
+#define BAR_MSG_DST_NUM 3
+#define BAR_MSG_DST_RISC 0
+#define BAR_MSG_DST_MPF 2
+#define BAR_MSG_DST_PFVF 1
+#define BAR_MSG_DST_ERR 0xff
+
+#define BAR_SUBCHAN_INDEX_SEND 0
+#define BAR_SUBCHAN_INDEX_RECV 1
+#define BAR_SEQID_NUM_MAX 256
+
+#define BAR_ALIGN_WORD_MASK 0xfffffffc
+#define BAR_MSG_VALID_MASK 1
+#define BAR_MSG_VALID_OFFSET 0
+
+#define BAR_MSG_CHAN_USABLE 0
+#define BAR_MSG_CHAN_USED 1
+
+#define LOCK_TYPE_HARD (1)
+#define LOCK_TYPE_SOFT (0)
+#define BAR_INDEX_TO_RISC 0
+
+#define BAR_MSG_POL_MASK (0x10)
+#define BAR_MSG_POL_OFFSET (4)
+
+#define REPS_HEADER_LEN_OFFSET 1
+#define REPS_HEADER_PAYLOAD_OFFSET 4
+#define REPS_HEADER_REPLYED 0xff
+
+#define READ_CHECK 1
+
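+/* Lookup tables indexed as [src][dst]: rows are BAR_MSG_SRC_*, columns are BAR_MSG_DST_* */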
+uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
+ {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND},
+ {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV},
+ {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV, BAR_SUBCHAN_INDEX_RECV}
+};
+
+uint8_t chan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
+ {BAR_INDEX_TO_RISC, BAR_INDEX_MPF_TO_PFVF, BAR_INDEX_MPF_TO_MPF},
+ {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF},
+ {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF}
+};
+
+uint8_t lock_type_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
+ {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD},
+ {LOCK_TYPE_SOFT, LOCK_TYPE_SOFT, LOCK_TYPE_HARD},
+ {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD}
+};
+
+#define PCIEID_IS_PF_MASK (0x0800)
+#define PCIEID_PF_IDX_MASK (0x0700)
+#define PCIEID_VF_IDX_MASK (0x00ff)
+#define PCIEID_EP_IDX_MASK (0x7000)
+/* PCIEID bit field offset */
+#define PCIEID_PF_IDX_OFFSET (8)
+#define PCIEID_EP_IDX_OFFSET (12)
+
+#define MAX_EP_NUM (4)
+#define PF_NUM_PER_EP (8)
+#define VF_NUM_PER_PF (32)
+
+#define MULTIPLY_BY_8(x) ((x) << 3)
+#define MULTIPLY_BY_32(x) ((x) << 5)
+#define MULTIPLY_BY_256(x) ((x) << 8)
+
+#define MAX_HARD_SPINLOCK_NUM (511)
+#define MAX_HARD_SPINLOCK_ASK_TIMES (1000)
+#define SPINLOCK_POLLING_SPAN_US (100)
+
+#define LOCK_MASTER_ID_MASK (0x8000)
+/* bar offset */
+#define BAR0_CHAN_RISC_OFFSET (0x2000)
+#define BAR0_CHAN_PFVF_OFFSET (0x3000)
+#define BAR0_SPINLOCK_OFFSET (0x4000)
+#define FW_SHRD_OFFSET (0x5000)
+#define FW_SHRD_INNER_HW_LABEL_PAT (0x800)
+#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT)
+
+#define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET)
+#define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET)
+#define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET)
+#define CHAN_PFVF_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_PFVF_OFFSET)
+
+#define RSC_TBL_CONTENT_LEN_MAX (257 * 2)
+#define TBL_MSG_PRO_SUCCESS 0xaa
+
+zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[BAR_MSG_MODULE_NUM];
+
+struct dev_stat {
+ bool is_mpf_scanned;
+ bool is_res_init;
+ int16_t dev_cnt; /* probe cnt */
+};
+
+struct dev_stat g_dev_stat = {0};
+
+static uint8_t __bar_msg_src_index_trans(uint8_t src)
+{
+ uint8_t src_index = 0;
+
+ switch (src) {
+ case MSG_CHAN_END_MPF:
+ src_index = BAR_MSG_SRC_MPF;
+ break;
+ case MSG_CHAN_END_PF:
+ src_index = BAR_MSG_SRC_PF;
+ break;
+ case MSG_CHAN_END_VF:
+ src_index = BAR_MSG_SRC_VF;
+ break;
+ default:
+ src_index = BAR_MSG_SRC_ERR;
+ break;
+ }
+ return src_index;
+}
+
+static uint8_t __bar_msg_dst_index_trans(uint8_t dst)
+{
+ uint8_t dst_index = 0;
+
+ switch (dst) {
+ case MSG_CHAN_END_MPF:
+ dst_index = BAR_MSG_DST_MPF;
+ break;
+ case MSG_CHAN_END_PF:
+ dst_index = BAR_MSG_DST_PFVF;
+ break;
+ case MSG_CHAN_END_VF:
+ dst_index = BAR_MSG_DST_PFVF;
+ break;
+ case MSG_CHAN_END_RISC:
+ dst_index = BAR_MSG_DST_RISC;
+ break;
+ default:
+ dst_index = BAR_MSG_DST_ERR;
+ break;
+ }
+ return dst_index;
+}
+
+struct seqid_item {
+ void *reps_addr;
+ uint16_t id;
+ uint16_t buffer_len;
+ uint16_t flag;
+};
+
+struct seqid_ring {
+ uint16_t cur_id;
+ pthread_spinlock_t lock;
+ struct seqid_item reps_info_tbl[BAR_SEQID_NUM_MAX];
+};
+
+struct seqid_ring g_seqid_ring = {0};
+
+static int __bar_chan_msgid_allocate(uint16_t *msgid)
+{
+ struct seqid_item *seqid_reps_info = NULL;
+
+ pthread_spin_lock(&g_seqid_ring.lock);
+ uint16_t g_id = g_seqid_ring.cur_id;
+ uint16_t count = 0;
+
+ do {
+ count++;
+ ++g_id;
+ g_id %= BAR_SEQID_NUM_MAX;
+ seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id];
+ } while ((seqid_reps_info->flag != REPS_INFO_FLAG_USABLE) && (count < BAR_SEQID_NUM_MAX));
+ int rc;
+
+ if (count >= BAR_SEQID_NUM_MAX) {
+ rc = -1;
+ goto out;
+ }
+ seqid_reps_info->flag = REPS_INFO_FLAG_USED;
+ g_seqid_ring.cur_id = g_id;
+ *msgid = g_id;
+ rc = BAR_MSG_OK;
+
+out:
+ pthread_spin_unlock(&g_seqid_ring.lock);
+ return rc;
+}
+
+static uint16_t __bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id)
+{
+ int ret = __bar_chan_msgid_allocate(msg_id);
+
+ if (ret != BAR_MSG_OK)
+ return BAR_MSG_ERR_MSGID;
+
+ PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id);
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id];
+
+ reps_info->reps_addr = result->recv_buffer;
+ reps_info->buffer_len = result->buffer_len;
+ return BAR_MSG_OK;
+}
+
+static void __bar_chan_msgid_free(uint16_t msg_id)
+{
+ struct seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ pthread_spin_lock(&g_seqid_ring.lock);
+ seqid_reps_info->flag = REPS_INFO_FLAG_USABLE;
+ PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id);
+ pthread_spin_unlock(&g_seqid_ring.lock);
+}
+
+static uint64_t subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id)
+{
+ return virt_addr + (2 * chan_id + subchan_id) * BAR_MSG_ADDR_CHAN_INTERVAL;
+}
+
+static uint16_t __bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr)
+{
+ uint8_t src_index = __bar_msg_src_index_trans(in->src);
+ uint8_t dst_index = __bar_msg_dst_index_trans(in->dst);
+ uint16_t chan_id = chan_id_tbl[src_index][dst_index];
+ uint16_t subchan_id = subchan_id_tbl[src_index][dst_index];
+
+ *subchan_addr = subchan_addr_cal(in->virt_addr, chan_id, subchan_id);
+ return BAR_MSG_OK;
+}
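+
+/*
+ * Worked example under SCENE_TEST: a PF sending to RISC maps to
+ * chan_id_tbl[PF][RISC] = BAR_INDEX_TO_RISC (0) and the SEND subchannel (0),
+ * so the subchannel sits at virt_addr + (2 * 0 + 0) * 2K = virt_addr.
+ * A VF sending to a PF uses BAR_INDEX_PF_TO_VF (0 here) and the RECV
+ * subchannel (1), i.e. virt_addr + 0x800.
+ */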
+
+static int __bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data)
+{
+ uint32_t align_offset = (offset & BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "subchan addr: %" PRIu64 " offset: %" PRIu32,
+ subchan_addr, align_offset);
+ return -1;
+ }
+ *(uint32_t *)(subchan_addr + align_offset) = data;
+ return 0;
+}
+
+static int __bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata)
+{
+ uint32_t align_offset = (offset & BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "subchan addr: %" PRIu64 " offset: %" PRIu32,
+ subchan_addr, align_offset);
+ return -1;
+ }
+ *pdata = *(uint32_t *)(subchan_addr + align_offset);
+ return 0;
+}
+
+static uint16_t __bar_chan_msg_header_set(uint64_t subchan_addr, struct bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (BAR_MSG_PAYLOAD_OFFSET >> 2); idx++)
+ __bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx));
+
+ return BAR_MSG_OK;
+}
+
+static uint16_t __bar_chan_msg_header_get(uint64_t subchan_addr, struct bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (BAR_MSG_PAYLOAD_OFFSET >> 2); idx++)
+ __bar_chan_reg_read(subchan_addr, idx * 4, data + idx);
+
+ return BAR_MSG_OK;
+}
+
+static uint16_t __bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2); /* 4B unit */
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ __bar_chan_reg_write(subchan_addr, 4 * ix + BAR_MSG_PAYLOAD_OFFSET, *(data + ix));
+
+ /* not 4B align part */
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ for (ix = 0; ix < remain; ix++)
+ remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix);
+
+ __bar_chan_reg_write(subchan_addr, 4 * count +
+ BAR_MSG_PAYLOAD_OFFSET, remain_data);
+ }
+ return BAR_MSG_OK;
+}
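+
+/*
+ * e.g. a 6-byte payload writes one full 32-bit word for bytes 0..3, then a
+ * second word whose low 16 bits carry bytes 4..5 in little-endian order.
+ */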
+
+static uint16_t __bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ __bar_chan_reg_read(subchan_addr, 4 * ix + BAR_MSG_PAYLOAD_OFFSET, (data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ __bar_chan_reg_read(subchan_addr, 4 * count +
+ BAR_MSG_PAYLOAD_OFFSET, &remain_data);
+ for (ix = 0; ix < remain; ix++)
+ *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix);
+ }
+ return BAR_MSG_OK;
+}
+
+static uint16_t __bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label)
+{
+ uint32_t data;
+
+ __bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data);
+ data &= (~BAR_MSG_VALID_MASK);
+ data |= (uint32_t)valid_label;
+ __bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data);
+ return BAR_MSG_OK;
+}
+
+static uint16_t __bar_msg_valid_stat_get(uint64_t subchan_addr)
+{
+ uint32_t data;
+
+ __bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data);
+ if ((data & BAR_MSG_VALID_MASK) == BAR_MSG_CHAN_USABLE)
+ return BAR_MSG_CHAN_USABLE;
+
+ return BAR_MSG_CHAN_USED;
+}
+
+#if READ_CHECK
+static uint8_t temp_msg[BAR_MSG_ADDR_CHAN_INTERVAL];
+#endif
+static uint16_t __bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr,
+ uint16_t payload_len, struct bar_msg_header *msg_header)
+{
+ __bar_chan_msg_header_set(subchan_addr, msg_header);
+#if READ_CHECK
+ __bar_chan_msg_header_get(subchan_addr, (struct bar_msg_header *)temp_msg);
+#endif
+ __bar_chan_msg_payload_set(subchan_addr, (uint8_t *)(payload_addr), payload_len);
+#if READ_CHECK
+ __bar_chan_msg_payload_get(subchan_addr, temp_msg, payload_len);
+#endif
+ __bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USED);
+ return BAR_MSG_OK;
+}
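+
+/*
+ * The READ_CHECK read-backs above appear to force the posted BAR writes to
+ * complete before the valid flag is set; the data read into temp_msg is
+ * otherwise unused.
+ */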
+
+static uint16_t __bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label)
+{
+ uint32_t data;
+
+ __bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data);
+ data &= (~(uint32_t)BAR_MSG_POL_MASK);
+ data |= ((uint32_t)label << BAR_MSG_POL_OFFSET);
+ __bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data);
+ return BAR_MSG_OK;
+}
+
+static uint16_t __bar_chan_sync_msg_reps_get(uint64_t subchan_addr,
+ uint64_t recv_buffer, uint16_t buffer_len)
+{
+ struct bar_msg_header msg_header = {0};
+ uint16_t msg_id = 0;
+ uint16_t msg_len = 0;
+
+ __bar_chan_msg_header_get(subchan_addr, &msg_header);
+ msg_id = msg_header.msg_id;
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ if (reps_info->flag != REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id);
+ return BAR_MSG_ERR_REPLY;
+ }
+ msg_len = msg_header.len;
+
+ if (msg_len > buffer_len - REPS_HEADER_PAYLOAD_OFFSET) {
+ PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u",
+ buffer_len, msg_len + REPS_HEADER_PAYLOAD_OFFSET);
+ return BAR_MSG_ERR_REPSBUFF_LEN;
+ }
+ uint8_t *recv_msg = (uint8_t *)recv_buffer;
+
+ __bar_chan_msg_payload_get(subchan_addr, recv_msg + REPS_HEADER_PAYLOAD_OFFSET, msg_len);
+ *(uint16_t *)(recv_msg + REPS_HEADER_LEN_OFFSET) = msg_len;
+ *recv_msg = REPS_HEADER_REPLIED; /* mark the reply as valid */
+ return BAR_MSG_OK;
+}
+
+static int __bar_chan_send_para_check(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result)
+{
+ if (in == NULL || result == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null para.");
+ return BAR_MSG_ERR_NULL_PARA;
+ }
+ uint8_t src_index = __bar_msg_src_index_trans(in->src);
+ uint8_t dst_index = __bar_msg_dst_index_trans(in->dst);
+
+ if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist.");
+ return BAR_MSG_ERR_TYPE;
+ }
+ if (in->module_id >= BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id);
+ return BAR_MSG_ERR_MODULE;
+ }
+ if (in->payload_addr == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null message.");
+ return BAR_MSG_ERR_BODY_NULL;
+ }
+ if (in->payload_len > BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len);
+ return BAR_MSG_ERR_LEN;
+ }
+ if (in->virt_addr == 0 || result->recv_buffer == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL.");
+ return BAR_MSG_ERR_VIRTADDR_NULL;
+ }
+ if (result->buffer_len < REPS_HEADER_PAYLOAD_OFFSET) {
+ PMD_MSG_LOG(ERR,
+ "recv buffer len: %" PRIu64 " is shorter than the minimal 4 bytes",
+ result->buffer_len);
+ return BAR_MSG_ERR_REPSBUFF_LEN;
+ }
+
+ return BAR_MSG_OK;
+}
+
+static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
+{
+ uint16_t lock_id = 0;
+ uint16_t pf_idx = (src_pcieid & PCIEID_PF_IDX_MASK) >> PCIEID_PF_IDX_OFFSET;
+ uint16_t ep_idx = (src_pcieid & PCIEID_EP_IDX_MASK) >> PCIEID_EP_IDX_OFFSET;
+
+ switch (dst) {
+ /* msg to risc */
+ case MSG_CHAN_END_RISC:
+ lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx;
+ break;
+ /* msg to pf/vf */
+ case MSG_CHAN_END_VF:
+ case MSG_CHAN_END_PF:
+ lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx + MULTIPLY_BY_8(1 + MAX_EP_NUM);
+ break;
+ default:
+ lock_id = 0;
+ break;
+ }
+ if (lock_id >= MAX_HARD_SPINLOCK_NUM)
+ lock_id = 0;
+
+ return lock_id;
+}
+
+static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id)
+{
+ return *(volatile uint8_t *)(virt_lock_addr + (uint64_t)lock_id);
+}
+
+static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data)
+{
+ *(volatile uint8_t *)(virt_lock_addr + (uint64_t)lock_id) = data;
+}
+
+static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value)
+{
+ *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value;
+}
+
+static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr,
+ uint64_t label_addr, uint16_t master_id)
+{
+ uint32_t lock_rd_cnt = 0;
+
+ do {
+ /* read to lock */
+ uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id);
+
+ if (spl_val == 0) {
+ label_write(label_addr, virt_lock_id, master_id);
+ break;
+ }
+ rte_delay_us_block(SPINLOCK_POLLING_SPAN_US);
+ lock_rd_cnt++;
+ } while (lock_rd_cnt < MAX_HARD_SPINLOCK_ASK_TIMES);
+ if (lock_rd_cnt >= MAX_HARD_SPINLOCK_ASK_TIMES)
+ return -1;
+
+ return 0;
+}
+
+static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
+{
+ label_write(label_addr, virt_lock_id, 0);
+ spinlock_write(virt_addr, virt_lock_id, 0);
+ return 0;
+}
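+
+/*
+ * Minimal usage sketch (illustrative only; bar_va and pcie_id stand for the
+ * caller's BAR address and PCIe id, mirroring bar_hard_lock() below):
+ *
+ *     uint16_t lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_RISC);
+ *     if (zxdh_spinlock_lock(lock_id, bar_va + CHAN_RISC_SPINLOCK_OFFSET,
+ *                     bar_va + CHAN_RISC_LABEL_OFFSET,
+ *                     pcie_id | LOCK_MASTER_ID_MASK) == 0) {
+ *             ... access the channel ...
+ *             zxdh_spinlock_unlock(lock_id, bar_va + CHAN_RISC_SPINLOCK_OFFSET,
+ *                             bar_va + CHAN_RISC_LABEL_OFFSET);
+ *     }
+ *
+ * The hardware is assumed to set the lock byte as a side effect of the read,
+ * so a read returning 0 means this reader has just taken the lock.
+ */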
+
+int pf_recv_bar_msg(void *pay_load __rte_unused,
+ uint16_t len __rte_unused,
+ void *reps_buffer __rte_unused,
+ uint16_t *reps_len __rte_unused,
+ void *eth_dev __rte_unused)
+{
+ /* TODO: to be provided later */
+ return 0;
+}
+
+int vf_recv_bar_msg(void *pay_load __rte_unused,
+ uint16_t len __rte_unused,
+ void *reps_buffer __rte_unused,
+ uint16_t *reps_len __rte_unused,
+ void *eth_dev __rte_unused)
+{
+ /* TODO: to be provided later */
+ return 0;
+}
+
+static int bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u\n", src_pcieid, lockid);
+ if (dst == MSG_CHAN_END_RISC)
+ ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + CHAN_RISC_LABEL_OFFSET,
+ src_pcieid | LOCK_MASTER_ID_MASK);
+ else
+ ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + CHAN_PFVF_LABEL_OFFSET,
+ src_pcieid | LOCK_MASTER_ID_MASK);
+
+ return ret;
+}
+
+static void bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u\n", src_pcieid, lockid);
+ if (dst == MSG_CHAN_END_RISC)
+ zxdh_spinlock_unlock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + CHAN_RISC_LABEL_OFFSET);
+ else
+ zxdh_spinlock_unlock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + CHAN_PFVF_LABEL_OFFSET);
+}
+/**
+ * Fun: PF initializes the hardware spinlock address
+ * @pcie_id: the PF's pcie_id
+ * @bar_base_addr: BAR0 base virtual address
+ */
+int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
+{
+ int lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_RISC);
+
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + HW_LABEL_OFFSET);
+ lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_VF);
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + HW_LABEL_OFFSET);
+ return 0;
+}
+
+/**
+ * Fun: lock the channel
+ */
+pthread_spinlock_t chan_lock;
+static int bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint8_t src_index = __bar_msg_src_index_trans(src);
+ uint8_t dst_index = __bar_msg_dst_index_trans(dst);
+
+ if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist.\n");
+ return BAR_MSG_ERR_TYPE;
+ }
+ uint16_t idx = lock_type_tbl[src_index][dst_index];
+
+ if (idx == LOCK_TYPE_SOFT)
+ pthread_spin_lock(&chan_lock);
+ else
+ ret = bar_hard_lock(src_pcieid, dst, virt_addr);
+
+ if (ret != 0)
+ PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.\n", src_pcieid);
+
+ return ret;
+}
+/**
+ * Fun: unlock the channel
+ */
+static int bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ uint8_t src_index = __bar_msg_src_index_trans(src);
+ uint8_t dst_index = __bar_msg_dst_index_trans(dst);
+
+ if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist.\n");
+ return BAR_MSG_ERR_TYPE;
+ }
+ uint16_t idx = lock_type_tbl[src_index][dst_index];
+
+ if (idx == LOCK_TYPE_SOFT)
+ pthread_spin_unlock(&chan_lock);
+ else
+ bar_hard_unlock(src_pcieid, dst, virt_addr);
+
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result)
+{
+ uint16_t ret = __bar_chan_send_para_check(in, result);
+
+ if (ret != BAR_MSG_OK)
+ goto exit;
+
+ uint16_t seq_id;
+
+ ret = __bar_chan_save_recv_info(result, &seq_id);
+ if (ret != BAR_MSG_OK)
+ goto exit;
+
+ uint64_t subchan_addr;
+
+ __bar_chan_subchan_addr_get(in, &subchan_addr);
+ struct bar_msg_header msg_header = {0};
+
+ msg_header.sync = BAR_CHAN_MSG_SYNC;
+ msg_header.emec = in->emec;
+ msg_header.usr = 0;
+ msg_header.rsv = 0;
+ msg_header.module_id = in->module_id;
+ msg_header.len = in->payload_len;
+ msg_header.msg_id = seq_id;
+ msg_header.src_pcieid = in->src_pcieid;
+ msg_header.dst_pcieid = in->dst_pcieid;
+
+ ret = bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+ if (ret != BAR_MSG_OK) {
+ __bar_chan_msgid_free(seq_id);
+ goto exit;
+ }
+ __bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header);
+ /* wait unset valid */
+ uint32_t time_out_cnt = 0;
+ uint16_t valid;
+
+ do {
+ rte_delay_us_block(BAR_MSG_POLLING_SPAN);
+ valid = __bar_msg_valid_stat_get(subchan_addr);
+ ++time_out_cnt;
+ } while ((time_out_cnt < BAR_MSG_TIMEOUT_TH) && (valid == BAR_MSG_CHAN_USED));
+
+ if ((time_out_cnt == BAR_MSG_TIMEOUT_TH) && (valid != BAR_MSG_CHAN_USABLE)) {
+ __bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USABLE);
+ __bar_chan_msg_poltag_set(subchan_addr, 0);
+ PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out.");
+ ret = BAR_MSG_ERR_TIME_OUT;
+ } else {
+ ret = __bar_chan_sync_msg_reps_get(subchan_addr,
+ (uint64_t)result->recv_buffer, result->buffer_len);
+ }
+ __bar_chan_msgid_free(seq_id);
+ bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+
+exit:
+ return ret;
+}
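+
+/*
+ * Usage sketch (illustrative; req is any request struct and bar_va/pcie_id
+ * come from the caller -- zxdh_get_res_info() below is a real example):
+ *
+ *     struct zxdh_pci_bar_msg in = {
+ *             .virt_addr = bar_va,
+ *             .payload_addr = &req,
+ *             .payload_len = sizeof(req),
+ *             .src = MSG_CHAN_END_PF,
+ *             .dst = MSG_CHAN_END_RISC,
+ *             .module_id = BAR_MODULE_TBL,
+ *             .src_pcieid = pcie_id,
+ *     };
+ *     uint8_t buf[RSC_TBL_CONTENT_LEN_MAX + 8] = {0};
+ *     struct zxdh_msg_recviver_mem result = {
+ *             .recv_buffer = buf,
+ *             .buffer_len = sizeof(buf),
+ *     };
+ *     int ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+ */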
+
+static uint64_t recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr)
+{
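+ /*
+ * The lookup runs from the receiver's point of view: the table row is
+ * the local endpoint (dst_type), the column the remote sender
+ * (src_type), and the peer writes into the opposite (1 - send)
+ * subchannel.
+ */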
+ uint8_t src = __bar_msg_dst_index_trans(src_type);
+ uint8_t dst = __bar_msg_src_index_trans(dst_type);
+
+ if (src == BAR_MSG_DST_ERR || dst == BAR_MSG_SRC_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+
+ return subchan_addr_cal(virt_addr, chan_id, subchan_id);
+}
+
+static uint64_t reply_addr_get(uint8_t sync, uint8_t src_type, uint8_t dst_type, uint64_t virt_addr)
+{
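+ /* Same receiver-perspective lookup as in recv_addr_get() above. */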
+ uint8_t src = __bar_msg_dst_index_trans(src_type);
+ uint8_t dst = __bar_msg_src_index_trans(dst_type);
+
+ if (src == BAR_MSG_DST_ERR || dst == BAR_MSG_SRC_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+ uint64_t recv_rep_addr;
+
+ if (sync == BAR_CHAN_MSG_SYNC)
+ recv_rep_addr = subchan_addr_cal(virt_addr, chan_id, subchan_id);
+ else
+ recv_rep_addr = subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id);
+
+ return recv_rep_addr;
+}
+
+static uint16_t __bar_chan_msg_header_check(struct bar_msg_header *msg_header)
+{
+ uint8_t module_id = 0;
+ uint16_t len = 0;
+
+ if (msg_header->valid != BAR_MSG_CHAN_USED) {
+ PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used.");
+ return BAR_MSG_ERR_MODULE;
+ }
+
+ module_id = msg_header->module_id;
+ if (module_id >= (uint8_t)BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id);
+ return BAR_MSG_ERR_MODULE;
+ }
+
+ len = msg_header->len;
+ if (len > BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len);
+ return BAR_MSG_ERR_LEN;
+ }
+ if (msg_recv_func_tbl[msg_header->module_id] == NULL) {
+ PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register",
+ module_id_name(module_id), module_id);
+ return BAR_MSG_ERR_MODULE_NOEXIST;
+ }
+ return BAR_MSG_OK;
+}
+
+static void __bar_msg_sync_msg_proc(uint64_t reply_addr, struct bar_msg_header *msg_header,
+ uint8_t *receiver_buff, void *dev)
+{
+ uint8_t *reps_buffer = rte_malloc(NULL, BAR_MSG_PAYLOAD_MAX_LEN, 0);
+
+ if (reps_buffer == NULL)
+ return;
+
+ zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id];
+ uint16_t reps_len = 0;
+
+ recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev);
+ msg_header->ack = BAR_CHAN_MSG_ACK;
+ msg_header->len = reps_len;
+ __bar_chan_msg_header_set(reply_addr, msg_header);
+ __bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len);
+ __bar_chan_msg_valid_set(reply_addr, BAR_MSG_CHAN_USABLE);
+ rte_free(reps_buffer);
+}
+
+static void __bar_msg_ack_async_msg_proc(struct bar_msg_header *msg_header, uint8_t *receiver_buff)
+{
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id];
+
+ if (reps_info->flag != REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id);
+ return;
+ }
+ if (msg_header->len > reps_info->buffer_len - REPS_HEADER_PAYLOAD_OFFSET) {
+ PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u",
+ reps_info->buffer_len, msg_header->len + REPS_HEADER_PAYLOAD_OFFSET);
+ goto free_id;
+ }
+ uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr;
+
+ rte_memcpy(reps_buffer + REPS_HEADER_PAYLOAD_OFFSET, receiver_buff, msg_header->len);
+ *(uint16_t *)(reps_buffer + REPS_HEADER_LEN_OFFSET) = msg_header->len;
+ *(uint8_t *)(reps_info->reps_addr) = REPS_HEADER_REPLIED;
+
+free_id:
+ __bar_chan_msgid_free(msg_header->msg_id);
+}
+
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev)
+{
+ uint64_t recv_addr = recv_addr_get(src, dst, virt_addr);
+
+ if (recv_addr == 0) {
+ PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst);
+ return -1;
+ }
+
+ struct bar_msg_header msg_header = {0};
+
+ __bar_chan_msg_header_get(recv_addr, &msg_header);
+ uint16_t ret = __bar_chan_msg_header_check(&msg_header);
+
+ if (ret != BAR_MSG_OK) {
+ PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret);
+ return -1;
+ }
+ uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0);
+
+ if (recved_msg == NULL) {
+ PMD_MSG_LOG(ERR, "malloc temp buff failed.");
+ return -1;
+ }
+ __bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len);
+
+ uint64_t reps_addr = reply_addr_get(msg_header.sync, src, dst, virt_addr);
+
+ if (msg_header.sync == BAR_CHAN_MSG_SYNC) {
+ __bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev);
+ goto exit;
+ }
+ __bar_chan_msg_valid_set(recv_addr, BAR_MSG_CHAN_USABLE);
+ if (msg_header.ack == BAR_CHAN_MSG_ACK) {
+ __bar_msg_ack_async_msg_proc(&msg_header, recved_msg);
+ goto exit;
+ }
+
+exit:
+ rte_free(recved_msg);
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_msg_recv_register(uint8_t module_id, zxdh_bar_chan_msg_recv_callback callback)
+{
+ if (module_id >= (uint16_t)BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "register ERR: invalid module_id: %u.", module_id);
+ return BAR_MSG_ERR_MODULE;
+ }
+ if (callback == NULL) {
+ PMD_MSG_LOG(ERR, "register %s(%u) error: null callback.",
+ module_id_name(module_id), module_id);
+ return BAR_MEG_ERR_NULL_FUNC;
+ }
+ if (msg_recv_func_tbl[module_id] != NULL) {
+ PMD_MSG_LOG(ERR, "register warning, event:%s(%u) already be registered.",
+ module_id_name(module_id), module_id);
+ return BAR_MSG_ERR_REPEAT_REGISTER;
+ }
+ msg_recv_func_tbl[module_id] = callback;
+ PMD_MSG_LOG(DEBUG, "register module: %s(%u) success.",
+ module_id_name(module_id), module_id);
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_msg_recv_unregister(uint8_t module_id)
+{
+ if (module_id >= (uint16_t)BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "unregister ERR: invalid module_id :%u.", module_id);
+ return BAR_MSG_ERR_MODULE;
+ }
+ if (msg_recv_func_tbl[module_id] == NULL) {
+ PMD_MSG_LOG(ERR, "unregister wanning, event: %s(%d) has already be unregistered.",
+ module_id_name(module_id), module_id);
+ return BAR_MSG_ERR_UNGISTER;
+ }
+ msg_recv_func_tbl[module_id] = NULL;
+ PMD_MSG_LOG(DEBUG, "unregister module %s(%d) success.",
+ module_id_name(module_id), module_id);
+ return BAR_MSG_OK;
+}
+
+static uint16_t bar_get_sum(uint8_t *ptr, uint8_t len)
+{
+ uint64_t sum = 0;
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ sum += *(ptr + idx);
+
+ return (uint16_t)sum;
+}
+
+static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len)
+{
+ if (!res || !dev)
+ return BAR_MSG_ERR_NULL;
+
+ struct tbl_msg_header tbl_msg = {
+ .type = TBL_TYPE_READ,
+ .field = field,
+ .pcieid = dev->pcie_id,
+ .slen = 0,
+ .rsv = 0,
+ };
+
+ struct zxdh_pci_bar_msg in = {0};
+
+ in.virt_addr = dev->virt_addr;
+ in.payload_addr = &tbl_msg;
+ in.payload_len = sizeof(tbl_msg);
+ in.src = dev->src_type;
+ in.dst = MSG_CHAN_END_RISC;
+ in.module_id = BAR_MODULE_TBL;
+ in.src_pcieid = dev->pcie_id;
+
+ uint8_t recv_buf[RSC_TBL_CONTENT_LEN_MAX + 8] = {0};
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = recv_buf,
+ .buffer_len = sizeof(recv_buf),
+ };
+ int ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+
+ if (ret != BAR_MSG_OK) {
+ PMD_MSG_LOG(ERR,
+ "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+ return ret;
+ }
+ struct tbl_msg_reps_header *tbl_reps =
+ (struct tbl_msg_reps_header *)(recv_buf + REPS_HEADER_PAYLOAD_OFFSET);
+
+ if (tbl_reps->check != TBL_MSG_PRO_SUCCESS) {
+ PMD_MSG_LOG(ERR,
+ "get resource_field failed. pcieid: 0x%x, check: 0x%x.",
+ dev->pcie_id, tbl_reps->check);
+ return BAR_MSG_ERR_REPLY;
+ }
+ *len = tbl_reps->len;
+ rte_memcpy(res,
+ (recv_buf + REPS_HEADER_PAYLOAD_OFFSET + sizeof(struct tbl_msg_reps_header)), *len);
+ return ret;
+}
+
+int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id)
+{
+ uint8_t reps = 0;
+ uint16_t reps_len = 0;
+
+ if (zxdh_get_res_info(in, TBL_FIELD_PNLID, &reps, &reps_len) != BAR_MSG_OK)
+ return -1;
+
+ *panel_id = reps;
+ return BAR_MSG_OK;
+}
+
+int zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id)
+{
+ uint8_t reps = 0;
+ uint16_t reps_len = 0;
+
+ if (zxdh_get_res_info(in, TBL_FIELD_HASHID, &reps, &reps_len) != BAR_MSG_OK)
+ return -1;
+
+ *hash_id = reps;
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_enable(struct msix_para *_msix_para, uint16_t *vport)
+{
+ int ret = 0;
+ uint16_t check_token = 0;
+ uint16_t sum_res = 0;
+
+ if (!_msix_para)
+ return BAR_MSG_ERR_NULL;
+
+ struct msix_msg msix_msg = {
+ .pcie_id = _msix_para->pcie_id,
+ .vector_risc = _msix_para->vector_risc,
+ .vector_pfvf = _msix_para->vector_pfvf,
+ .vector_mpf = _msix_para->vector_mpf,
+ };
+ struct zxdh_pci_bar_msg in = {
+ .virt_addr = _msix_para->virt_addr,
+ .payload_addr = &msix_msg,
+ .payload_len = sizeof(msix_msg),
+ .emec = 0,
+ .src = _msix_para->driver_type,
+ .dst = MSG_CHAN_END_RISC,
+ .module_id = BAR_MODULE_MISX,
+ .src_pcieid = _msix_para->pcie_id,
+ .dst_pcieid = 0,
+ .usr = 0,
+ };
+
+ struct bar_recv_msg recv_msg = {0};
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = &recv_msg,
+ .buffer_len = sizeof(recv_msg),
+ };
+
+ ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+
+ if (ret != BAR_MSG_OK)
+ return -ret;
+
+ check_token = recv_msg.msix_reps.check;
+ sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg));
+
+ if (check_token != sum_res) {
+ PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.\n", sum_res, check_token);
+ return BAR_MSG_ERR_REPLY;
+ }
+ *vport = recv_msg.msix_reps.vport;
+
+ return BAR_MSG_OK;
+}
+
+int zxdh_get_bar_offset(struct bar_offset_params *paras, struct bar_offset_res *res)
+{
+ uint16_t check_token = 0;
+ uint16_t sum_res = 0;
+ int ret = 0;
+
+ if (!paras)
+ return BAR_MSG_ERR_NULL;
+
+ struct offset_get_msg send_msg = {
+ .pcie_id = paras->pcie_id,
+ .type = paras->type,
+ };
+ struct zxdh_pci_bar_msg in = {0};
+
+ in.payload_addr = &send_msg;
+ in.payload_len = sizeof(send_msg);
+ in.virt_addr = paras->virt_addr;
+ in.src = MSG_CHAN_END_PF;
+ in.dst = MSG_CHAN_END_RISC;
+ in.module_id = BAR_MODULE_OFFSET_GET;
+ in.src_pcieid = paras->pcie_id;
+
+ struct bar_recv_msg recv_msg = {0};
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = &recv_msg,
+ .buffer_len = sizeof(recv_msg),
+ };
+ ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+ if (ret != BAR_MSG_OK)
+ return -ret;
+
+ check_token = recv_msg.offset_reps.check;
+ sum_res = bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg));
+
+ if (check_token != sum_res) {
+ PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.\n", sum_res, check_token);
+ return BAR_MSG_ERR_REPLY;
+ }
+ res->bar_offset = recv_msg.offset_reps.offset;
+ res->bar_length = recv_msg.offset_reps.length;
+ return BAR_MSG_OK;
+}
+
+int zxdh_msg_chan_init(void)
+{
+ g_dev_stat.dev_cnt++;
+ if (g_dev_stat.is_res_init)
+ return BAR_MSG_OK;
+
+ pthread_spin_init(&chan_lock, 0);
+ g_seqid_ring.cur_id = 0;
+ pthread_spin_init(&g_seqid_ring.lock, 0);
+ uint16_t seq_id;
+
+ for (seq_id = 0; seq_id < BAR_SEQID_NUM_MAX; seq_id++) {
+ struct seqid_item *reps_info = &(g_seqid_ring.reps_info_tbl[seq_id]);
+
+ reps_info->id = seq_id;
+ reps_info->flag = REPS_INFO_FLAG_USABLE;
+ }
+ g_dev_stat.is_res_init = true;
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_msg_chan_exit(void)
+{
+ if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0))
+ return BAR_MSG_OK;
+
+ g_dev_stat.is_res_init = false;
+ PMD_MSG_LOG(DEBUG, "%s exit success!", __func__);
+ return BAR_MSG_OK;
+}
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
new file mode 100644
index 0000000000..07c4a1b1da
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -0,0 +1,408 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#ifndef _ZXDH_MSG_CHAN_H_
+#define _ZXDH_MSG_CHAN_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+
+#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000)
+#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3
+
+#define BAR_MSG_POLLING_SPAN 100 /* sleep us */
+#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN)
+#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN)
+#define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) /* 10s */
+
+#define BAR_CHAN_MSG_SYNC 0
+#define BAR_CHAN_MSG_ASYNC 1
+#define BAR_CHAN_MSG_NO_EMEC 0
+#define BAR_CHAN_MSG_EMEC 1
+#define BAR_CHAN_MSG_NO_ACK 0
+#define BAR_CHAN_MSG_ACK 1
+
+#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM)
+#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1
+#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1)
+#define ZXDH_QUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) /* 5 */
+#define ZXDH_QUE_INTR_VEC_NUM 256
+
+#define BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */
+#define BAR_MSG_PAYLOAD_OFFSET        (sizeof(struct bar_msg_header))
+#define BAR_MSG_PAYLOAD_MAX_LEN (BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header))
+
+#define MSG_CHAN_RET_ERR_RECV_FAIL (-11)
+#define ZXDH_INDIR_RQT_SIZE 256
+#define MODULE_EEPROM_DATA_LEN 128
+
+enum BAR_MSG_RTN {
+ BAR_MSG_OK = 0,
+ BAR_MSG_ERR_MSGID,
+ BAR_MSG_ERR_NULL,
+ BAR_MSG_ERR_TYPE, /* Message type exception */
+ BAR_MSG_ERR_MODULE, /* Module ID exception */
+ BAR_MSG_ERR_BODY_NULL, /* Message body exception */
+ BAR_MSG_ERR_LEN, /* Message length exception */
+ BAR_MSG_ERR_TIME_OUT, /* Message sending timed out */
+ BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions */
+ BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer */
+ BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration */
+ BAR_MSG_ERR_UNGISTER, /* Repeated deregistration */
+ /**
+ * The sending interface parameter boundary structure pointer is empty
+ */
+ BAR_MSG_ERR_NULL_PARA,
+ BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short */
+ /**
+ * Unable to find the corresponding message processing function for this module
+ */
+ BAR_MSG_ERR_MODULE_NOEXIST,
+ /**
+ * The virtual address in the parameters passed in by the sending interface is empty
+ */
+ BAR_MSG_ERR_VIRTADDR_NULL,
+ BAR_MSG_ERR_REPLY, /* sync msg resp_error */
+ BAR_MSG_ERR_MPF_NOT_SCANNED,
+ BAR_MSG_ERR_KERNEL_READY,
+ BAR_MSG_ERR_USR_RET_ERR,
+ BAR_MSG_ERR_ERR_PCIEID,
+ BAR_MSG_ERR_SOCKET, /* netlink socket err */
+};
+
+enum bar_module_id {
+ BAR_MODULE_DBG = 0, /* 0: debug */
+ BAR_MODULE_TBL, /* 1: resource table */
+ BAR_MODULE_MISX, /* 2: config msix */
+ BAR_MODULE_SDA, /* 3: */
+ BAR_MODULE_RDMA, /* 4: */
+ BAR_MODULE_DEMO, /* 5: channel test */
+ BAR_MODULE_SMMU, /* 6: */
+ BAR_MODULE_MAC, /* 7: mac rx/tx stats */
+ BAR_MODULE_VDPA, /* 8: vdpa live migration */
+ BAR_MODULE_VQM, /* 9: vqm live migration */
+ BAR_MODULE_NP, /* 10: vf msg callback np */
+ BAR_MODULE_VPORT, /* 11: get vport */
+ BAR_MODULE_BDF, /* 12: get bdf */
+ BAR_MODULE_RISC_READY, /* 13: */
+ BAR_MODULE_REVERSE, /* 14: byte stream reverse */
+ BAR_MDOULE_NVME, /* 15: */
+ BAR_MDOULE_NPSDK, /* 16: */
+ BAR_MODULE_NP_TODO, /* 17: */
+ MODULE_BAR_MSG_TO_PF, /* 18: */
+ MODULE_BAR_MSG_TO_VF, /* 19: */
+
+ MODULE_FLASH = 32,
+ BAR_MODULE_OFFSET_GET = 33,
+ BAR_EVENT_OVS_WITH_VCB = 36, /* ovs<-->vcb */
+
+ BAR_MSG_MODULE_NUM = 100,
+};
+
+static inline const char *module_id_name(int val)
+{
+ switch (val) {
+ case BAR_MODULE_DBG: return "BAR_MODULE_DBG";
+ case BAR_MODULE_TBL: return "BAR_MODULE_TBL";
+ case BAR_MODULE_MISX: return "BAR_MODULE_MISX";
+ case BAR_MODULE_SDA: return "BAR_MODULE_SDA";
+ case BAR_MODULE_RDMA: return "BAR_MODULE_RDMA";
+ case BAR_MODULE_DEMO: return "BAR_MODULE_DEMO";
+ case BAR_MODULE_SMMU: return "BAR_MODULE_SMMU";
+ case BAR_MODULE_MAC: return "BAR_MODULE_MAC";
+ case BAR_MODULE_VDPA: return "BAR_MODULE_VDPA";
+ case BAR_MODULE_VQM: return "BAR_MODULE_VQM";
+ case BAR_MODULE_NP: return "BAR_MODULE_NP";
+ case BAR_MODULE_VPORT: return "BAR_MODULE_VPORT";
+ case BAR_MODULE_BDF: return "BAR_MODULE_BDF";
+ case BAR_MODULE_RISC_READY: return "BAR_MODULE_RISC_READY";
+ case BAR_MODULE_REVERSE: return "BAR_MODULE_REVERSE";
+ case BAR_MDOULE_NVME: return "BAR_MDOULE_NVME";
+ case BAR_MDOULE_NPSDK: return "BAR_MDOULE_NPSDK";
+ case BAR_MODULE_NP_TODO: return "BAR_MODULE_NP_TODO";
+ case MODULE_BAR_MSG_TO_PF: return "MODULE_BAR_MSG_TO_PF";
+ case MODULE_BAR_MSG_TO_VF: return "MODULE_BAR_MSG_TO_VF";
+ case MODULE_FLASH: return "MODULE_FLASH";
+ case BAR_MODULE_OFFSET_GET: return "BAR_MODULE_OFFSET_GET";
+ case BAR_EVENT_OVS_WITH_VCB: return "BAR_EVENT_OVS_WITH_VCB";
+ default: return "NA";
+ }
+}
+
+struct bar_msg_header {
+ uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */
+ uint8_t sync : 1;
+ uint8_t emec : 1; /* emergency? */
+ uint8_t ack : 1; /* ack msg? */
+ uint8_t poll : 1;
+ uint8_t usr : 1;
+ uint8_t rsv;
+ uint16_t module_id;
+ uint16_t len;
+ uint16_t msg_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid; /* used in PF-->VF */
+}; /* 12B */
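+
+/*
+ * Assuming the usual packing, the 12 bytes lay out as: the flag bits in
+ * byte 0, rsv in byte 1, then module_id, len, msg_id, src_pcieid and
+ * dst_pcieid as host-order 16-bit words at offsets 2/4/6/8/10.
+ */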
+
+struct zxdh_pci_bar_msg {
+ uint64_t virt_addr; /* bar addr */
+ void *payload_addr;
+ uint16_t payload_len;
+ uint16_t emec;
+ uint16_t src; /* refer to BAR_DRIVER_TYPE */
+ uint16_t dst; /* refer to BAR_DRIVER_TYPE */
+ uint16_t module_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid;
+ uint16_t usr;
+}; /* 32B */
+
+struct zxdh_msg_recviver_mem {
+ void *recv_buffer; /* first 4B is head, followed by payload */
+ uint64_t buffer_len;
+}; /* 16B */
+
+struct msix_msg {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+};
+/* private reps struct */
+struct bar_msix_reps {
+ uint16_t pcie_id;
+ uint16_t check;
+ uint16_t vport;
+ uint16_t rsv;
+} __rte_packed; /* 8B */
+
+struct bar_offset_reps {
+ uint16_t check;
+ uint16_t rsv;
+ uint32_t offset;
+ uint32_t length;
+} __rte_packed; /* 12B */
+
+struct bar_recv_msg {
+ /* fix 4B */
+ uint8_t reps_ok;
+ uint16_t reps_len;
+ uint8_t rsv;
+ union {
+ struct bar_msix_reps msix_reps; /* 8B */
+ struct bar_offset_reps offset_reps; /* 12B */
+ } __rte_packed;
+} __rte_packed;
+
+enum pciebar_layout_type {
+ URI_VQM = 0,
+ URI_SPINLOCK = 1,
+ URI_FWCAP = 2,
+ URI_FWSHR = 3,
+ URI_DRS_SEC = 4,
+ URI_RSV = 5,
+ URI_CTRLCH = 6,
+ URI_1588 = 7,
+ URI_QBV = 8,
+ URI_MACPCS = 9,
+ URI_RDMA = 10,
+/* DEBUG PF */
+ URI_MNP = 11,
+ URI_MSPM = 12,
+ URI_MVQM = 13,
+ URI_MDPI = 14,
+ URI_NP = 15,
+/* END DEBUG PF */
+ URI_MAX,
+};
+
+enum RES_TBL_FILED {
+ TBL_FIELD_PCIEID = 0,
+ TBL_FIELD_BDF = 1,
+ TBL_FIELD_MSGCH = 2,
+ TBL_FIELD_DATACH = 3,
+ TBL_FIELD_VPORT = 4,
+ TBL_FIELD_PNLID = 5,
+ TBL_FIELD_PHYPORT = 6,
+ TBL_FIELD_SERDES_NUM = 7,
+ TBL_FIELD_NP_PORT = 8,
+ TBL_FIELD_SPEED = 9,
+ TBL_FIELD_HASHID = 10,
+ TBL_FIELD_NON,
+};
+
+struct tbl_msg_header {
+ uint8_t type; /* r/w */
+ uint8_t field; /* which table? */
+ uint16_t pcieid;
+ uint16_t slen;
+ uint16_t rsv;
+}; /* 8B */
+
+struct tbl_msg_reps_header {
+ uint8_t check;
+ uint8_t rsv;
+ uint16_t len;
+}; /* 4B */
+
+enum TBL_MSG_TYPE {
+ TBL_TYPE_READ,
+ TBL_TYPE_WRITE,
+ TBL_TYPE_NON,
+};
+
+struct bar_offset_params {
+ uint64_t virt_addr; /* Bar space control space virtual address */
+ uint16_t pcie_id;
+ uint16_t type; /* Module types corresponding to PCIBAR planning */
+};
+
+struct bar_offset_res {
+ uint32_t bar_offset;
+ uint32_t bar_length;
+};
+
+/* vec0 : dev interrupt
+ * vec1~3: risc interrupt
+ * vec4 : dtb interrupt
+ */
+enum {
+ MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, /* 1 */
+ MSIX_FROM_MPF, /* 2 */
+ MSIX_FROM_RISCV, /* 3 */
+ MSG_VEC_NUM /* 4 */
+};
+
+enum DRIVER_TYPE {
+ MSG_CHAN_END_MPF = 0,
+ MSG_CHAN_END_PF,
+ MSG_CHAN_END_VF,
+ MSG_CHAN_END_RISC,
+};
+
+enum MSG_TYPE {
+ /* loopback test type */
+ TYPE_DEBUG = 0,
+ DST_RISCV,
+ DST_MPF,
+ DST_PF_OR_VF,
+ DST_ZF,
+ MSG_TYPE_NUM,
+};
+
+struct msg_header {
+ bool is_async;
+ enum MSG_TYPE msg_type;
+ enum bar_module_id msg_module_id;
+ uint8_t msg_priority;
+ uint16_t vport_dst;
+ uint16_t qid_dst;
+};
+
+struct zxdh_res_para {
+ uint64_t virt_addr;
+ uint16_t pcie_id;
+ uint16_t src_type; /* refer to BAR_DRIVER_TYPE */
+};
+
+struct msix_para {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+ uint64_t virt_addr;
+ uint16_t driver_type; /* refer to DRIVER_TYPE */
+};
+
+struct offset_get_msg {
+ uint16_t pcie_id;
+ uint16_t type;
+}; /* 4B */
+
+typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer,
+ uint16_t *reps_len, void *dev);
+
+/**
+ * Init msg_chan_pkt in probe()
+ * @return zero for success, negative for failure
+ */
+int16_t zxdh_msg_chan_pkt_init(void);
+void zxdh_msg_chan_pkt_remove(void); /* Remove msg_chan_pkt in probe() */
+
+/**
+ * Get the offset value of the specified module
+ * @bar_offset_params: input parameter
+ * @bar_offset_res: Module offset and length
+ */
+int zxdh_get_bar_offset(struct bar_offset_params *paras, struct bar_offset_res *res);
+
+/**
+ * Send synchronization messages through PCIE BAR space
+ * @in: Message sending information
+ * @result: Message result feedback
+ * @return: 0 successful, other failures
+ */
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result);
+
+/**
+ * PCIE BAR spatial message method, registering message reception callback
+ * @module_id: Registration module ID
+ * @callback: Pointer to the receive processing function implemented by the module
+ * @return: 0 successful, other failures
+ * Usually called during driver initialization
+ */
+int zxdh_bar_chan_msg_recv_register(uint8_t module_id, zxdh_bar_chan_msg_recv_callback callback);
+
+/**
+ * PCIE BAR spatial message method, unregistered message receiving callback
+ * @module_id: Registration module ID
+ * @return: 0 successful, other failures
+ * Called during driver uninstallation
+ */
+int zxdh_bar_chan_msg_recv_unregister(uint8_t module_id);
+
+/**
+ * Provide a message receiving interface for device driver interrupt handling functions
+ * @src: Driver type for sending interrupts
+ * @dst: Device driver's own driver type
+ * @virt_addr: The communication bar address of the device
+ * @return: 0 successful, other failures
+ */
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev);
+
+/**
+ * Initialize spinlock and clear the hardware lock address it belongs to
+ * @pcie_id: PCIE_id of PF device
+ * @bar_base_addr: Bar0 initial base address
+ */
+int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr);
+
+int zxdh_bar_chan_enable(struct msix_para *_msix_para, uint16_t *vport);
+int zxdh_msg_chan_init(void);
+int zxdh_bar_msg_chan_exit(void);
+
+int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id);
+int zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id);
+
+int pf_recv_bar_msg(void *pay_load __rte_unused,
+ uint16_t len __rte_unused,
+ void *reps_buffer __rte_unused,
+ uint16_t *reps_len __rte_unused,
+ void *eth_dev __rte_unused);
+int vf_recv_bar_msg(void *pay_load __rte_unused,
+ uint16_t len __rte_unused,
+ void *reps_buffer __rte_unused,
+ uint16_t *reps_len __rte_unused,
+ void *eth_dev __rte_unused);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_MSG_CHAN_H_ */
diff --git a/drivers/net/zxdh/zxdh_npsdk.c b/drivers/net/zxdh/zxdh_npsdk.c
new file mode 100644
index 0000000000..eec644b01e
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_npsdk.c
@@ -0,0 +1,158 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#include <rte_common.h>
+#include "zxdh_npsdk.h"
+
+int dpp_dtb_hash_offline_delete(uint32_t dev_id __rte_unused,
+ uint32_t queue_id __rte_unused,
+ uint32_t sdt_no __rte_unused,
+ uint32_t flush_mode __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_dtb_hash_online_delete(uint32_t dev_id __rte_unused,
+ uint32_t queue_id __rte_unused,
+ uint32_t sdt_no __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_apt_hash_res_get(uint32_t type __rte_unused,
+ DPP_APT_HASH_RES_INIT_T *HashResInit __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_apt_eram_res_get(uint32_t type __rte_unused,
+ DPP_APT_ERAM_RES_INIT_T *EramResInit __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_apt_stat_res_get(uint32_t type __rte_unused,
+ DPP_APT_STAT_RES_INIT_T *StatResInit __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_apt_hash_global_res_init(uint32_t dev_id __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_apt_hash_func_res_init(uint32_t dev_id __rte_unused,
+ uint32_t func_num __rte_unused,
+ DPP_APT_HASH_FUNC_RES_T *HashFuncRes __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_apt_hash_bulk_res_init(uint32_t dev_id __rte_unused,
+ uint32_t bulk_num __rte_unused,
+ DPP_APT_HASH_BULK_RES_T *BulkRes __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_apt_hash_tbl_res_init(uint32_t dev_id __rte_unused,
+ uint32_t tbl_num __rte_unused,
+ DPP_APT_HASH_TABLE_T *HashTbl __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_apt_eram_res_init(uint32_t dev_id __rte_unused,
+ uint32_t tbl_num __rte_unused,
+ DPP_APT_ERAM_TABLE_T *EramTbl __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_stat_ppu_eram_baddr_set(uint32_t dev_id __rte_unused,
+ uint32_t ppu_eram_baddr __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+int dpp_stat_ppu_eram_depth_set(uint32_t dev_id __rte_unused,
+ uint32_t ppu_eram_depth __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+int dpp_se_cmmu_smmu1_cfg_set(uint32_t dev_id __rte_unused,
+ uint32_t base_addr __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+int dpp_stat_ppu_ddr_baddr_set(uint32_t dev_id __rte_unused,
+ uint32_t ppu_ddr_baddr __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_host_np_init(uint32_t dev_id __rte_unused,
+ DPP_DEV_INIT_CTRL_T *p_dev_init_ctrl __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+int dpp_np_online_uninstall(uint32_t dev_id __rte_unused,
+ char *port_name __rte_unused,
+ uint32_t queue_id __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_dtb_stat_ppu_cnt_get(uint32_t dev_id __rte_unused,
+ uint32_t queue_id __rte_unused,
+ STAT_CNT_MODE_E rd_mode __rte_unused,
+ uint32_t index __rte_unused,
+ uint32_t *p_data __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+
+int dpp_dtb_entry_get(uint32_t dev_id __rte_unused,
+ uint32_t queue_id __rte_unused,
+ DPP_DTB_USER_ENTRY_T *GetEntry __rte_unused,
+ uint32_t srh_mode __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+int dpp_dtb_table_entry_write(uint32_t dev_id __rte_unused,
+ uint32_t queue_id __rte_unused,
+ uint32_t entryNum __rte_unused,
+ DPP_DTB_USER_ENTRY_T *DownEntrys __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
+int dpp_dtb_table_entry_delete(uint32_t dev_id __rte_unused,
+ uint32_t queue_id __rte_unused,
+ uint32_t entryNum __rte_unused,
+ DPP_DTB_USER_ENTRY_T *DeleteEntrys __rte_unused)
+{
+ /* todo provided later */
+ return 0;
+}
diff --git a/drivers/net/zxdh/zxdh_npsdk.h b/drivers/net/zxdh/zxdh_npsdk.h
new file mode 100644
index 0000000000..265f79d132
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_npsdk.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#ifndef _ZXDH_NPSDK_H_
+#define _ZXDH_NPSDK_H_
+
+#include <stdint.h>
+
+#define DPP_PORT_NAME_MAX (32)
+#define DPP_SMMU1_READ_REG_MAX_NUM (16)
+#define DPP_DIR_TBL_BUF_MAX_NUM (DPP_SMMU1_READ_REG_MAX_NUM)
+#define DPP_ETCAM_BLOCK_NUM (8)
+#define DPP_SMMU0_LPM_AS_TBL_ID_NUM (8)
+#define SE_NIC_RES_TYPE 0
+
+#define ZXDH_SDT_VPORT_ATT_TABLE ((uint32_t)(1))
+#define ZXDH_SDT_PANEL_ATT_TABLE ((uint32_t)(2))
+#define ZXDH_SDT_RSS_ATT_TABLE ((uint32_t)(3))
+#define ZXDH_SDT_VLAN_ATT_TABLE ((uint32_t)(4))
+#define ZXDH_SDT_BROCAST_ATT_TABLE ((uint32_t)(6))
+#define ZXDH_SDT_UNICAST_ATT_TABLE ((uint32_t)(10))
+#define ZXDH_SDT_MULTICAST_ATT_TABLE ((uint32_t)(11))
+
+#define ZXDH_SDT_L2_ENTRY_TABLE0 ((uint32_t)(64))
+#define ZXDH_SDT_L2_ENTRY_TABLE1 ((uint32_t)(65))
+#define ZXDH_SDT_L2_ENTRY_TABLE2 ((uint32_t)(66))
+#define ZXDH_SDT_L2_ENTRY_TABLE3 ((uint32_t)(67))
+#define ZXDH_SDT_L2_ENTRY_TABLE4 ((uint32_t)(68))
+#define ZXDH_SDT_L2_ENTRY_TABLE5 ((uint32_t)(69))
+
+#define ZXDH_SDT_MC_TABLE0 ((uint32_t)(76))
+#define ZXDH_SDT_MC_TABLE1 ((uint32_t)(77))
+#define ZXDH_SDT_MC_TABLE2 ((uint32_t)(78))
+#define ZXDH_SDT_MC_TABLE3 ((uint32_t)(79))
+#define ZXDH_SDT_MC_TABLE4 ((uint32_t)(80))
+#define ZXDH_SDT_MC_TABLE5 ((uint32_t)(81))
+
+#define MK_SDT_NO(table, hash_idx) \
+ (ZXDH_SDT_##table##_TABLE0 + hash_idx)
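+/* e.g. MK_SDT_NO(L2_ENTRY, 2) expands to ZXDH_SDT_L2_ENTRY_TABLE0 + 2 == 66 */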
+
+typedef struct dpp_dtb_addr_info_t {
+ uint32_t sdt_no;
+ uint32_t size;
+ uint32_t phy_addr;
+ uint32_t vir_addr;
+} DPP_DTB_ADDR_INFO_T;
+
+typedef struct dpp_dev_init_ctrl_t {
+ uint32_t vport;
+ char port_name[DPP_PORT_NAME_MAX];
+ uint32_t vector;
+ uint32_t queue_id;
+ uint32_t np_bar_offset;
+ uint32_t np_bar_len;
+ uint32_t pcie_vir_addr;
+ uint32_t down_phy_addr;
+ uint32_t down_vir_addr;
+ uint32_t dump_phy_addr;
+ uint32_t dump_vir_addr;
+ uint32_t dump_sdt_num;
+ DPP_DTB_ADDR_INFO_T dump_addr_info[];
+} DPP_DEV_INIT_CTRL_T;
+
+typedef struct dpp_apt_hash_func_res_t {
+ uint32_t func_id;
+ uint32_t zblk_num;
+ uint32_t zblk_bitmap;
+ uint32_t ddr_dis;
+} DPP_APT_HASH_FUNC_RES_T;
+
+typedef enum dpp_hash_ddr_width_mode {
+ DDR_WIDTH_INVALID = 0,
+ DDR_WIDTH_256b,
+ DDR_WIDTH_512b,
+} DPP_HASH_DDR_WIDTH_MODE;
+
+typedef struct dpp_apt_hash_bulk_res_t {
+ uint32_t func_id;
+ uint32_t bulk_id;
+ uint32_t zcell_num;
+ uint32_t zreg_num;
+ uint32_t ddr_baddr;
+ uint32_t ddr_item_num;
+ DPP_HASH_DDR_WIDTH_MODE ddr_width_mode;
+ uint32_t ddr_crc_sel;
+ uint32_t ddr_ecc_en;
+} DPP_APT_HASH_BULK_RES_T;
+
+
+typedef struct dpp_sdt_tbl_hash_t {
+ uint32_t table_type;
+ uint32_t hash_id;
+ uint32_t hash_table_width;
+ uint32_t key_size;
+ uint32_t hash_table_id;
+ uint32_t learn_en;
+ uint32_t keep_alive;
+ uint32_t keep_alive_baddr;
+ uint32_t rsp_mode;
+ uint32_t hash_clutch_en;
+} DPP_SDTTBL_HASH_T;
+
+typedef struct dpp_hash_entry {
+ uint8_t *p_key;
+ uint8_t *p_rst;
+} DPP_HASH_ENTRY;
+
+
+typedef uint32_t (*DPP_APT_HASH_ENTRY_SET_FUNC)(void *Data, DPP_HASH_ENTRY *Entry);
+typedef uint32_t (*DPP_APT_HASH_ENTRY_GET_FUNC)(void *Data, DPP_HASH_ENTRY *Entry);
+
+typedef struct dpp_apt_hash_table_t {
+ uint32_t sdtNo;
+ uint32_t sdt_partner;
+ DPP_SDTTBL_HASH_T hashSdt;
+ uint32_t tbl_flag;
+ DPP_APT_HASH_ENTRY_SET_FUNC hash_set_func;
+ DPP_APT_HASH_ENTRY_GET_FUNC hash_get_func;
+} DPP_APT_HASH_TABLE_T;
+
+typedef struct dpp_apt_hash_res_init_t {
+ uint32_t func_num;
+ uint32_t bulk_num;
+ uint32_t tbl_num;
+ DPP_APT_HASH_FUNC_RES_T *func_res;
+ DPP_APT_HASH_BULK_RES_T *bulk_res;
+ DPP_APT_HASH_TABLE_T *tbl_res;
+} DPP_APT_HASH_RES_INIT_T;
+
+typedef struct dpp_sdt_tbl_eram_t {
+ uint32_t table_type;
+ uint32_t eram_mode;
+ uint32_t eram_base_addr;
+ uint32_t eram_table_depth;
+ uint32_t eram_clutch_en;
+} DPP_SDTTBL_ERAM_T;
+
+typedef uint32_t (*DPP_APT_ERAM_SET_FUNC)(void *Data, uint32_t buf[4]);
+typedef uint32_t (*DPP_APT_ERAM_GET_FUNC)(void *Data, uint32_t buf[4]);
+
+typedef struct dpp_apt_eram_table_t {
+ uint32_t sdtNo;
+ DPP_SDTTBL_ERAM_T ERamSdt;
+ uint32_t opr_mode;
+ uint32_t rd_mode;
+ DPP_APT_ERAM_SET_FUNC eram_set_func;
+ DPP_APT_ERAM_GET_FUNC eram_get_func;
+} DPP_APT_ERAM_TABLE_T;
+
+
+typedef struct dpp_apt_eram_res_init_t {
+ uint32_t tbl_num;
+ DPP_APT_ERAM_TABLE_T *eram_res;
+} DPP_APT_ERAM_RES_INIT_T;
+
+typedef struct dpp_apt_stat_res_init_t {
+ uint32_t eram_baddr;
+ uint32_t eram_depth;
+ uint32_t ddr_baddr;
+ uint32_t ppu_ddr_offset;
+} DPP_APT_STAT_RES_INIT_T;
+
+typedef enum stat_cnt_mode_e {
+ STAT_64_MODE = 0,
+ STAT_128_MODE = 1,
+ STAT_MAX_MODE,
+} STAT_CNT_MODE_E;
+
+typedef struct dpp_dtb_user_entry_t {
+ uint32_t sdt_no;
+ void *p_entry_data;
+} DPP_DTB_USER_ENTRY_T;
+
+
+int dpp_dtb_hash_offline_delete(uint32_t dev_id, uint32_t queue_id,
+ uint32_t sdt_no, uint32_t flush_mode);
+int dpp_dtb_hash_online_delete(uint32_t dev_id, uint32_t queue_id, uint32_t sdt_no);
+int dpp_apt_hash_res_get(uint32_t type, DPP_APT_HASH_RES_INIT_T *HashResInit);
+int dpp_apt_eram_res_get(uint32_t type, DPP_APT_ERAM_RES_INIT_T *EramResInit);
+
+int dpp_apt_stat_res_get(uint32_t type, DPP_APT_STAT_RES_INIT_T *StatResInit);
+int dpp_apt_hash_global_res_init(uint32_t dev_id);
+int dpp_apt_hash_func_res_init(uint32_t dev_id, uint32_t func_num,
+ DPP_APT_HASH_FUNC_RES_T *HashFuncRes);
+int dpp_apt_hash_bulk_res_init(uint32_t dev_id, uint32_t bulk_num,
+ DPP_APT_HASH_BULK_RES_T *BulkRes);
+int dpp_apt_hash_tbl_res_init(uint32_t dev_id, uint32_t tbl_num,
+ DPP_APT_HASH_TABLE_T *HashTbl);
+int dpp_apt_eram_res_init(uint32_t dev_id, uint32_t tbl_num,
+ DPP_APT_ERAM_TABLE_T *EramTbl);
+int dpp_stat_ppu_eram_baddr_set(uint32_t dev_id, uint32_t ppu_eram_baddr);
+int dpp_stat_ppu_eram_depth_set(uint32_t dev_id, uint32_t ppu_eram_depth);
+int dpp_se_cmmu_smmu1_cfg_set(uint32_t dev_id, uint32_t base_addr);
+int dpp_stat_ppu_ddr_baddr_set(uint32_t dev_id, uint32_t ppu_ddr_baddr);
+
+int dpp_host_np_init(uint32_t dev_id, DPP_DEV_INIT_CTRL_T *p_dev_init_ctrl);
+int dpp_np_online_uninstall(uint32_t dev_id,
+ char *port_name,
+ uint32_t queue_id);
+
+int dpp_dtb_stat_ppu_cnt_get(uint32_t dev_id,
+ uint32_t queue_id,
+ STAT_CNT_MODE_E rd_mode,
+ uint32_t index,
+ uint32_t *p_data);
+
+int dpp_dtb_entry_get(uint32_t dev_id,
+ uint32_t queue_id,
+ DPP_DTB_USER_ENTRY_T *GetEntry,
+ uint32_t srh_mode);
+int dpp_dtb_table_entry_write(uint32_t dev_id,
+ uint32_t queue_id,
+ uint32_t entryNum,
+ DPP_DTB_USER_ENTRY_T *DownEntrys);
+int dpp_dtb_table_entry_delete(uint32_t dev_id,
+ uint32_t queue_id,
+ uint32_t entryNum,
+ DPP_DTB_USER_ENTRY_T *DeleteEntrys);
+
+#endif /* _ZXDH_NPSDK_H_ */
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
new file mode 100644
index 0000000000..b32c2e7955
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -0,0 +1,462 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <unistd.h>
+
+#ifdef RTE_EXEC_ENV_LINUX
+ #include <dirent.h>
+ #include <fcntl.h>
+#endif
+
+#include <rte_io.h>
+#include <rte_bus.h>
+#include <rte_common.h>
+
+#include "zxdh_pci.h"
+#include "zxdh_logs.h"
+#include "zxdh_queue.h"
+
+/*
+ * Following macros are derived from linux/pci_regs.h, however,
+ * we can't simply include that header here, as there is no such
+ * file on non-Linux platforms.
+ */
+#define PCI_CAPABILITY_LIST 0x34
+#define PCI_CAP_ID_VNDR 0x09
+#define PCI_CAP_ID_MSIX 0x11
+
+/*
+ * The remaining space is defined by each driver as the per-driver
+ * configuration space.
+ */
+#define ZXDH_PCI_CONFIG(hw) (((hw)->use_msix == ZXDH_MSIX_ENABLED) ? 24 : 20)
+#define PCI_MSIX_ENABLE 0x8000
+
+static inline int32_t check_vq_phys_addr_ok(struct virtqueue *vq)
+{
+ /**
+ * Virtio PCI device ZXDH_PCI_QUEUE_PF register is 32bit,
+ * and only accepts 32 bit page frame number.
+ * Check if the allocated physical memory exceeds 16TB.
+ */
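+ /* The shift is presumably 12 (4 KB pages), so the limit is 2^(12 + 32) = 16 TB. */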
+ if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) {
+ PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!");
+ return 0;
+ }
+ return 1;
+}
+
+static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
+{
+ rte_write32(val & ((1ULL << 32) - 1), lo);
+ rte_write32(val >> 32, hi);
+}
+
+static void modern_read_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ void *dst,
+ int32_t length)
+{
+ int32_t i = 0;
+ uint8_t *p = NULL;
+ uint8_t old_gen = 0;
+ uint8_t new_gen = 0;
+
+ do {
+ old_gen = rte_read8(&hw->common_cfg->config_generation);
+
+ p = dst;
+ for (i = 0; i < length; i++)
+ *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i);
+
+ new_gen = rte_read8(&hw->common_cfg->config_generation);
+ } while (old_gen != new_gen);
+}
+
+static void modern_write_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ const void *src,
+ int32_t length)
+{
+ int32_t i = 0;
+ const uint8_t *p = src;
+
+ for (i = 0; i < length; i++)
+ rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
+}
+
+static uint64_t modern_get_features(struct zxdh_hw *hw)
+{
+ uint32_t features_lo = 0;
+ uint32_t features_hi = 0;
+
+ rte_write32(0, &hw->common_cfg->device_feature_select);
+ features_lo = rte_read32(&hw->common_cfg->device_feature);
+
+ rte_write32(1, &hw->common_cfg->device_feature_select);
+ features_hi = rte_read32(&hw->common_cfg->device_feature);
+
+ return ((uint64_t)features_hi << 32) | features_lo;
+}
+
+static void modern_set_features(struct zxdh_hw *hw, uint64_t features)
+{
+ rte_write32(0, &hw->common_cfg->guest_feature_select);
+ rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature);
+ rte_write32(1, &hw->common_cfg->guest_feature_select);
+ rte_write32(features >> 32, &hw->common_cfg->guest_feature);
+}
+
+static uint8_t modern_get_status(struct zxdh_hw *hw)
+{
+ return rte_read8(&hw->common_cfg->device_status);
+}
+
+static void modern_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ rte_write8(status, &hw->common_cfg->device_status);
+}
+
+static uint8_t modern_get_isr(struct zxdh_hw *hw)
+{
+ return rte_read8(hw->isr);
+}
+
+static uint16_t modern_set_config_irq(struct zxdh_hw *hw, uint16_t vec)
+{
+ rte_write16(vec, &hw->common_cfg->msix_config);
+ return rte_read16(&hw->common_cfg->msix_config);
+}
+
+static uint16_t modern_set_queue_irq(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+ rte_write16(vec, &hw->common_cfg->queue_msix_vector);
+ return rte_read16(&hw->common_cfg->queue_msix_vector);
+}
+
+static uint16_t modern_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ return rte_read16(&hw->common_cfg->queue_size);
+}
+
+static void modern_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ rte_write16(vq_size, &hw->common_cfg->queue_size);
+}
+
+static int32_t modern_setup_queue(struct zxdh_hw *hw, struct virtqueue *vq)
+{
+ uint64_t desc_addr = 0;
+ uint64_t avail_addr = 0;
+ uint64_t used_addr = 0;
+ uint16_t notify_off = 0;
+
+ if (!check_vq_phys_addr_ok(vq))
+ return -1;
+
+ desc_addr = vq->vq_ring_mem;
+ avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc);
+ if (vtpci_packed_queue(vq->hw)) {
+ used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct vring_packed_desc_event)),
+ ZXDH_PCI_VRING_ALIGN);
+ } else {
+ used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+ ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN);
+ }
+
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */
+ notify_off = 0; /* device-reported offset is ignored; every queue uses notify region 0 */
+ vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
+ notify_off * hw->notify_off_multiplier);
+
+ rte_write16(1, &hw->common_cfg->queue_enable);
+
+ return 0;
+}
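(A worked illustration of the split-ring layout computed above, with hypothetical numbers:
for a 4KB-aligned base and vq_nentries = 256, the descriptor table occupies 256 * 16 = 4096
bytes, so avail_addr = base + 4096; the avail header plus 256 uint16_t ring entries ends at
base + 4096 + 4 + 512 = base + 4612, which RTE_ALIGN_CEIL rounds up to base + 8192 for the
used ring.)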
+
+static void modern_del_queue(struct zxdh_hw *hw, struct virtqueue *vq)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(0, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ rte_write16(0, &hw->common_cfg->queue_enable);
+}
+
+static void modern_notify_queue(struct zxdh_hw *hw, struct virtqueue *vq)
+{
+ uint32_t notify_data = 0;
+
+ if (!vtpci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) {
+ rte_write16(vq->vq_queue_index, vq->notify_addr);
+ return;
+ }
+
+ if (vtpci_with_feature(hw, ZXDH_F_RING_PACKED)) {
+ /*
+ * Bit[0:15]: vq queue index
+ * Bit[16:30]: avail index
+ * Bit[31]: avail wrap counter
+ */
+ notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags &
+ VRING_PACKED_DESC_F_AVAIL)) << 31) |
+ ((uint32_t)vq->vq_avail_idx << 16) |
+ vq->vq_queue_index;
+ } else {
+ /*
+ * Bit[0:15]: vq queue index
+ * Bit[16:31]: avail index
+ */
+ notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index;
+ }
+ PMD_DRV_LOG(DEBUG, "queue:%d notify_data 0x%x notify_addr 0x%p",
+ vq->vq_queue_index, notify_data, vq->notify_addr);
+ rte_write32(notify_data, vq->notify_addr);
+}
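(A worked example with hypothetical values: queue index 3, avail index 10 and the avail
wrap counter set would give notify_data = (1u << 31) | (10u << 16) | 3 = 0x800a0003 on the
packed-ring path.)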
+
+const struct zxdh_pci_ops zxdh_modern_ops = {
+ .read_dev_cfg = modern_read_dev_config,
+ .write_dev_cfg = modern_write_dev_config,
+ .get_status = modern_get_status,
+ .set_status = modern_set_status,
+ .get_features = modern_get_features,
+ .set_features = modern_set_features,
+ .get_isr = modern_get_isr,
+ .set_config_irq = modern_set_config_irq,
+ .set_queue_irq = modern_set_queue_irq,
+ .get_queue_num = modern_get_queue_num,
+ .set_queue_num = modern_set_queue_num,
+ .setup_queue = modern_setup_queue,
+ .del_queue = modern_del_queue,
+ .notify_queue = modern_notify_queue,
+};
+
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length)
+{
+ VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length);
+}
+void zxdh_vtpci_write_dev_config(struct zxdh_hw *hw, size_t offset, const void *src, int32_t length)
+{
+ VTPCI_OPS(hw)->write_dev_cfg(hw, offset, src, length);
+}
+
+uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw)
+{
+ return VTPCI_OPS(hw)->get_features(hw);
+}
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw)
+{
+ PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id);
+ uint32_t retry = 0;
+
+ VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET);
+ /* Flush the status write and wait for the device to reset, at most 3 seconds. */
+ while (VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET && retry < 3000) {
+ ++retry;
+ usleep(1000L);
+ }
+ PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
+}
+
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw)
+{
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK);
+}
+
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ if (status != ZXDH_CONFIG_STATUS_RESET)
+ status |= VTPCI_OPS(hw)->get_status(hw);
+
+ VTPCI_OPS(hw)->set_status(hw, status);
+}
+
+uint8_t zxdh_vtpci_get_status(struct zxdh_hw *hw)
+{
+ return VTPCI_OPS(hw)->get_status(hw);
+}
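For context, a minimal sketch of how a caller could drive the status handshake with these
helpers; the exact sequence this PMD uses is not part of this patch, so the order below is
an assumption borrowed from the usual virtio convention:

        /* Hypothetical example only, not part of the driver. */
        static void example_status_walk(struct zxdh_hw *hw)
        {
                zxdh_vtpci_reset(hw);                                 /* back to 0x00 */
                zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK);    /* device noticed */
                zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER);
                /* ... feature negotiation via get/set_features ... */
                zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_FEATURES_OK);
                zxdh_vtpci_reinit_complete(hw);                       /* DRIVER_OK */
        }

Note that zxdh_vtpci_set_status ORs each new bit into the current status, so the bits
accumulate across the walk.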
+
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw)
+{
+ return VTPCI_OPS(hw)->get_isr(hw);
+}
+
+static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
+{
+ uint8_t bar = cap->bar;
+ uint32_t length = cap->length;
+ uint32_t offset = cap->offset;
+
+ if (bar >= PCI_MAX_RESOURCE) {
+ PMD_INIT_LOG(ERR, "invalid bar: %u", bar);
+ return NULL;
+ }
+ if (offset + length < offset) {
+ PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length);
+ return NULL;
+ }
+ if (offset + length > dev->mem_resource[bar].len) {
+ PMD_INIT_LOG(ERR, "invalid cap: overflows bar space: %u > %" PRIu64,
+ offset + length, dev->mem_resource[bar].len);
+ return NULL;
+ }
+ uint8_t *base = dev->mem_resource[bar].addr;
+
+ if (base == NULL) {
+ PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar);
+ return NULL;
+ }
+ return base + offset;
+}
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw)
+{
+ if (dev->mem_resource[0].addr == NULL) {
+ PMD_INIT_LOG(ERR, "bar0 base addr is NULL");
+ return -1;
+ }
+ uint8_t pos = 0;
+ int32_t ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret);
+ return -1;
+ }
+ while (pos) {
+ struct zxdh_pci_cap cap;
+
+ ret = rte_pci_read_config(dev, &cap, 2, pos);
+ if (ret != 2) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap.cap_vndr == PCI_CAP_ID_MSIX) {
+ /**
+ * Transitional devices would also have this capability,
+ * that's why we also check if msix is enabled.
+ * 1st byte is cap ID; 2nd byte is the position of next cap;
+ * next two bytes are the flags.
+ */
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2);
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d",
+ pos + 2, ret);
+ break;
+ }
+ hw->use_msix = (flags & PCI_MSIX_ENABLE) ?
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED;
+ }
+ if (cap.cap_vndr != PCI_CAP_ID_VNDR) {
+ PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x",
+ pos, cap.cap_vndr);
+ goto next;
+ }
+ ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u",
+ pos, cap.cfg_type, cap.bar, cap.offset, cap.length);
+ switch (cap.cfg_type) {
+ case ZXDH_PCI_CAP_COMMON_CFG:
+ hw->common_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_NOTIFY_CFG: {
+ ret = rte_pci_read_config(dev, &hw->notify_off_multiplier,
+ 4, pos + sizeof(cap));
+ if (ret != 4)
+ PMD_INIT_LOG(ERR,
+ "failed to read notify_off_multiplier, ret %d", ret);
+ else
+ hw->notify_base = get_cfg_addr(dev, &cap);
+ break;
+ }
+ case ZXDH_PCI_CAP_DEVICE_CFG:
+ hw->dev_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_ISR_CFG:
+ hw->isr = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_PCI_CFG: {
+ hw->pcie_id = *(uint16_t *)&cap.padding[1];
+ PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id);
+ uint16_t pcie_id = hw->pcie_id;
+
+ if ((pcie_id >> 11) & 0x1) /* PF */ {
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7);
+ } else { /* VF */
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff);
+ }
+ break;
+ }
+ }
+next:
+ pos = cap.cap_next;
+ }
+ if (hw->common_cfg == NULL || hw->notify_base == NULL ||
+ hw->dev_cfg == NULL || hw->isr == NULL) {
+ PMD_INIT_LOG(ERR, "no modern pci device found.");
+ return -1;
+ }
+ return 0;
+}
+
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev)
+{
+ uint8_t pos = 0;
+ int32_t ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret);
+ return ZXDH_MSIX_NONE;
+ }
+ while (pos) {
+ uint8_t cap[2] = {0};
+
+ ret = rte_pci_read_config(dev, cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap[0] == PCI_CAP_ID_MSIX) {
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap));
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR,
+ "failed to read pci cap at pos: %x ret %d", pos + 2, ret);
+ break;
+ }
+ if (flags & PCI_MSIX_ENABLE)
+ return ZXDH_MSIX_ENABLED;
+ else
+ return ZXDH_MSIX_DISABLED;
+ }
+ pos = cap[1];
+ }
+ return ZXDH_MSIX_NONE;
+}
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
new file mode 100644
index 0000000000..d6f3c552ad
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -0,0 +1,259 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_PCI_H_
+#define _ZXDH_PCI_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <bus_pci_driver.h>
+#include <ethdev_driver.h>
+
+#include "zxdh_ethdev.h"
+
+/* The bit of the ISR which indicates a device has an interrupt. */
+#define ZXDH_PCI_ISR_INTR 0x1
+/* The bit of the ISR which indicates a device configuration change. */
+#define ZXDH_PCI_ISR_CONFIG 0x2
+/* Vector value used to disable MSI for queue. */
+#define ZXDH_MSI_NO_VECTOR 0x7F
+
+/* Status byte for guest to report progress. */
+#define ZXDH_CONFIG_STATUS_RESET 0x00
+#define ZXDH_CONFIG_STATUS_ACK 0x01
+#define ZXDH_CONFIG_STATUS_DRIVER 0x02
+#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04
+#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08
+#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
+#define ZXDH_CONFIG_STATUS_FAILED 0x80
+
+/* The feature bitmap for net */
+#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */
+#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */
+#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */
+#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */
+#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */
+#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */
+#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */
+#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */
+#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */
+#define ZXDH_NET_F_HOST_ECN 13 /* Host can handle TSO[6] w/ ECN in. */
+#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */
+#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
+#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
+#define ZXDH_NET_F_CTRL_VQ 17 /* Control channel available */
+#define ZXDH_NET_F_CTRL_RX 18 /* Control channel RX mode support */
+#define ZXDH_NET_F_CTRL_VLAN 19 /* Control channel VLAN filtering */
+#define ZXDH_NET_F_CTRL_RX_EXTRA 20 /* Extra RX mode control support */
+#define ZXDH_NET_F_GUEST_ANNOUNCE 21 /* Guest can announce device on the network */
+#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
+#define ZXDH_NET_F_CTRL_MAC_ADDR 23 /* Set MAC address */
+/* Do we get callbacks when the ring is completely used, even if we've suppressed them? */
+#define ZXDH_F_NOTIFY_ON_EMPTY 24
+#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout? */
+#define VIRTIO_RING_F_INDIRECT_DESC 28 /* We support indirect buffer descriptors */
+#define ZXDH_F_VERSION_1 32
+#define ZXDH_F_IOMMU_PLATFORM 33
+#define ZXDH_F_RING_PACKED 34
+/* Inorder feature indicates that all buffers are used by the device
+ * in the same order in which they have been made available.
+ */
+#define ZXDH_F_IN_ORDER 35
+/** This feature indicates that memory accesses by the driver
+ * and the device are ordered in a way described by the platform.
+ */
+#define ZXDH_F_ORDER_PLATFORM 36
+/**
+ * This feature indicates that the driver passes extra data
+ * (besides identifying the virtqueue) in its device notifications.
+ */
+#define ZXDH_F_NOTIFICATION_DATA 38
+#define ZXDH_NET_F_SPEED_DUPLEX 63 /* Device set linkspeed and duplex */
+
+/* The Guest publishes the used index for which it expects an interrupt
+ * at the end of the avail ring. Host should ignore the avail->flags field.
+ */
+/* The Host publishes the avail index for which it expects a kick
+ * at the end of the used ring. Guest should ignore the used->flags field.
+ */
+#define ZXDH_RING_F_EVENT_IDX 29
+
+/* Maximum number of virtqueues per device. */
+#define ZXDH_MAX_VIRTQUEUE_PAIRS 8
+#define ZXDH_MAX_VIRTQUEUES (ZXDH_MAX_VIRTQUEUE_PAIRS * 2 + 1)
+
+
+#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */
+#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */
+#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */
+#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */
+#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */
+
+#define VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].vtpci_ops)
+#define VTPCI_IO(hw) (&zxdh_hw_internal[(hw)->port_id].io)
+
+/*
+ * How many bits to shift physical queue address written to QUEUE_PFN.
+ * 12 is historical, and due to x86 page size.
+ */
+#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12
+
+/* The alignment to use between consumer and producer parts of vring. */
+#define ZXDH_PCI_VRING_ALIGN 4096
+
+/******BAR0 SPACE********************************************************************/
+#define ZXDH_VQMREG_OFFSET 0x0000
+#define ZXDH_FWCAP_OFFSET 0x1000
+#define ZXDH_CTRLCH_OFFSET 0x2000
+#define ZXDH_MAC_OFFSET 0x24000
+#define ZXDH_SPINLOCK_OFFSET 0x4000
+#define ZXDH_FWSHRD_OFFSET 0x5000
+#define ZXDH_QUERES_SHARE_BASE (ZXDH_FWSHRD_OFFSET)
+#define ZXDH_QUERES_SHARE_SIZE 512
+
+enum zxdh_msix_status {
+ ZXDH_MSIX_NONE = 0,
+ ZXDH_MSIX_DISABLED = 1,
+ ZXDH_MSIX_ENABLED = 2
+};
+
+static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
+{
+ return (hw->guest_features & (1ULL << bit)) != 0;
+}
+
+static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_F_RING_PACKED);
+}
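These two inline helpers gate every feature-dependent path in the driver; a small
illustrative sketch (the helper name is hypothetical, not from the patch):

        /* Hypothetical example: packed rings with notification data negotiated? */
        static inline int example_packed_with_notify_data(struct zxdh_hw *hw)
        {
                return vtpci_packed_queue(hw) &&
                       vtpci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA);
        }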
+
+/*
+ * While zxdh_hw is stored in shared memory, this structure holds
+ * the state that may differ per process in the multi-process model,
+ * for example the vtpci_ops pointer.
+ */
+struct zxdh_hw_internal {
+ const struct zxdh_pci_ops *vtpci_ops;
+ struct rte_pci_ioport io;
+};
+
+/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */
+struct zxdh_pci_common_cfg {
+ /* About the whole device. */
+ uint32_t device_feature_select; /* read-write */
+ uint32_t device_feature; /* read-only */
+ uint32_t guest_feature_select; /* read-write */
+ uint32_t guest_feature; /* read-write */
+ uint16_t msix_config; /* read-write */
+ uint16_t num_queues; /* read-only */
+ uint8_t device_status; /* read-write */
+ uint8_t config_generation; /* read-only */
+
+ /* About a specific virtqueue. */
+ uint16_t queue_select; /* read-write */
+ uint16_t queue_size; /* read-write, power of 2. */
+ uint16_t queue_msix_vector; /* read-write */
+ uint16_t queue_enable; /* read-write */
+ uint16_t queue_notify_off; /* read-only */
+ uint32_t queue_desc_lo; /* read-write */
+ uint32_t queue_desc_hi; /* read-write */
+ uint32_t queue_avail_lo; /* read-write */
+ uint32_t queue_avail_hi; /* read-write */
+ uint32_t queue_used_lo; /* read-write */
+ uint32_t queue_used_hi; /* read-write */
+};
+
+/*
+ * This structure is just a reference for reading the
+ * net device specific config space; it is only a shadow
+ * of the device's own layout.
+ */
+struct zxdh_net_config {
+ /* The config defining mac address (if ZXDH_NET_F_MAC) */
+ uint8_t mac[RTE_ETHER_ADDR_LEN];
+ /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */
+ uint16_t status;
+ uint16_t max_virtqueue_pairs;
+ uint16_t mtu;
+ /*
+ * speed, in units of 1Mb. All values 0 to INT_MAX are legal.
+ * Any other value stands for unknown.
+ */
+ uint32_t speed;
+ /* 0x00 - half duplex
+ * 0x01 - full duplex
+ * Any other value stands for unknown.
+ */
+ uint8_t duplex;
+} __rte_packed;
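A hedged usage sketch for this layout (the helper below is hypothetical and assumes
ZXDH_NET_F_MAC/ZXDH_NET_F_MTU were negotiated; offsetof needs <stddef.h>): fields are
fetched through the generation-checked accessor rather than read directly:

        /* Hypothetical example: read MAC and MTU from device config space. */
        static void example_read_net_config(struct zxdh_hw *hw,
                        uint8_t mac[RTE_ETHER_ADDR_LEN], uint16_t *mtu)
        {
                zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac),
                                mac, RTE_ETHER_ADDR_LEN);
                zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mtu),
                                mtu, sizeof(*mtu));
        }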
+
+/* This is the PCI capability header: */
+struct zxdh_pci_cap {
+ uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */
+ uint8_t cap_next; /* Generic PCI field: next ptr. */
+ uint8_t cap_len; /* Generic PCI field: capability length */
+ uint8_t cfg_type; /* Identifies the structure. */
+ uint8_t bar; /* Where to find it. */
+ uint8_t padding[3]; /* Pad to full dword. */
+ uint32_t offset; /* Offset within bar. */
+ uint32_t length; /* Length of the structure, in bytes. */
+};
+struct zxdh_pci_notify_cap {
+ struct zxdh_pci_cap cap;
+ uint32_t notify_off_multiplier; /* Multiplier for queue_notify_off. */
+};
+
+struct zxdh_pci_ops {
+ void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
+ void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
+
+ uint8_t (*get_status)(struct zxdh_hw *hw);
+ void (*set_status)(struct zxdh_hw *hw, uint8_t status);
+
+ uint64_t (*get_features)(struct zxdh_hw *hw);
+ void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+
+ uint8_t (*get_isr)(struct zxdh_hw *hw);
+
+ uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
+
+ uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec);
+
+ uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id);
+ void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size);
+
+ int32_t (*setup_queue)(struct zxdh_hw *hw, struct virtqueue *vq);
+ void (*del_queue)(struct zxdh_hw *hw, struct virtqueue *vq);
+ void (*notify_queue)(struct zxdh_hw *hw, struct virtqueue *vq);
+};
+
+extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+extern const struct zxdh_pci_ops zxdh_modern_ops;
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw);
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw);
+uint8_t zxdh_vtpci_get_status(struct zxdh_hw *hw);
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status);
+uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
+void zxdh_vtpci_write_dev_config(struct zxdh_hw *hw, size_t offset,
+ const void *src, int32_t length);
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset,
+ void *dst, int32_t length);
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw);
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev);
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_PCI_H_ */
diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c
new file mode 100644
index 0000000000..b6dd487a9d
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.c
@@ -0,0 +1,138 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#include <stdint.h>
+
+#include <rte_mbuf.h>
+
+#include "zxdh_queue.h"
+#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+#include "zxdh_common.h"
+
+/**
+ * Two types of mbuf to be cleaned:
+ * 1) mbuf that has been consumed by the backend but not used by virtio.
+ * 2) mbuf that hasn't been consumed by the backend.
+ */
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq)
+{
+ struct rte_mbuf *cookie = NULL;
+ int32_t idx = 0;
+
+ if (vq == NULL)
+ return NULL;
+
+ for (idx = 0; idx < vq->vq_nentries; idx++) {
+ cookie = vq->vq_descx[idx].cookie;
+ if (cookie != NULL) {
+ vq->vq_descx[idx].cookie = NULL;
+ return cookie;
+ }
+ }
+
+ return NULL;
+}
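The intended caller pattern (an assumption mirroring other virtio-style PMDs; not shown in
this patch) is to loop until NULL and free each detached mbuf:

        /* Hypothetical example: drain leftover mbufs before freeing the queue. */
        struct rte_mbuf *m;

        while ((m = zxdh_virtqueue_detach_unused(vq)) != NULL)
                rte_pktmbuf_free(m);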
+
+static int32_t zxdh_release_channel(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t var = 0;
+ uint32_t addr = 0;
+ uint32_t widx = 0;
+ uint32_t bidx = 0;
+ uint16_t pch = 0;
+ uint16_t lch = 0;
+ uint16_t timeout = 0;
+
+ while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ if (zxdh_acquire_lock(hw) != 0) {
+ PMD_INIT_LOG(ERR,
+ "Could not acquire lock to release channel, timeout %d", timeout);
+ continue;
+ }
+ break;
+ }
+
+ if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "Acquire lock timeout");
+ return -1;
+ }
+
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (hw->channel_context[lch].valid == 0) {
+ PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to release", lch);
+ continue;
+ }
+
+ /* get coi table offset and index */
+ pch = hw->channel_context[lch].ph_chno;
+ widx = pch / 32;
+ bidx = pch % 32;
+
+ addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t));
+ var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+ var &= ~(1 << bidx);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+
+ hw->channel_context[lch].valid = 0;
+ hw->channel_context[lch].ph_chno = 0;
+ }
+
+ zxdh_release_lock(hw);
+
+ return 0;
+}
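(For example, physical channel 37 falls in word 37 / 32 = 1 and bit 37 % 32 = 5, so the
loop above clears bit 5 of the second 32-bit word in the BAR0 shared region.)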
+
+int32_t zxdh_free_queues(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ struct virtqueue *vq = NULL;
+ int32_t queue_type = 0;
+ uint16_t i = 0;
+
+ if (hw->vqs == NULL)
+ return 0;
+
+ /* Clear COI table */
+ if (zxdh_release_channel(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to clear coi table");
+ return -1;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (vq == NULL)
+ continue;
+
+ VTPCI_OPS(hw)->del_queue(hw, vq);
+ queue_type = get_queue_type(i);
+ if (queue_type == VTNET_RQ) {
+ rte_free(vq->sw_ring);
+ rte_memzone_free(vq->rxq.mz);
+ } else if (queue_type == VTNET_TQ) {
+ rte_memzone_free(vq->txq.mz);
+ rte_memzone_free(vq->txq.virtio_net_hdr_mz);
+ }
+
+ rte_free(vq);
+ hw->vqs[i] = NULL;
+ PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i);
+ }
+
+ rte_free(hw->vqs);
+ hw->vqs = NULL;
+
+ return 0;
+}
+
+int32_t get_queue_type(uint16_t vtpci_queue_idx)
+{
+ if (vtpci_queue_idx % 2 == 0)
+ return VTNET_RQ;
+ else
+ return VTNET_TQ;
+}
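(So logical queue pair n uses vq index 2n for RX and 2n + 1 for TX, which is why
zxdh_free_queues above walks hw->queue_num individual virtqueues.)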
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
new file mode 100644
index 0000000000..c2d7bbe889
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#ifndef _ZXDH_QUEUE_H_
+#define _ZXDH_QUEUE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_atomic.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_net.h>
+#include <ethdev_driver.h>
+
+#include "zxdh_pci.h"
+#include "zxdh_ring.h"
+#include "zxdh_rxtx.h"
+
+
+enum {
+ VTNET_RQ = 0,
+ VTNET_TQ = 1
+};
+
+struct vq_desc_extra {
+ void *cookie;
+ uint16_t ndescs;
+ uint16_t next;
+};
+
+struct virtqueue {
+ struct zxdh_hw *hw; /**< zxdh_hw structure pointer. */
+ struct {
+ /**< vring keeping descs and events */
+ struct vring_packed ring;
+ bool used_wrap_counter;
+ uint8_t rsv;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ uint16_t rsv1;
+ } __rte_packed vq_packed;
+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
+ uint16_t vq_nentries; /**< vring desc numbers */
+ uint16_t vq_free_cnt; /**< num of desc available */
+ uint16_t vq_avail_idx; /**< sync until needed */
+ uint16_t vq_free_thresh; /**< free threshold */
+ uint16_t rsv2;
+
+ void *vq_ring_virt_mem; /**< linear address of vring*/
+ uint32_t vq_ring_size;
+
+ union {
+ struct virtnet_rx rxq;
+ struct virtnet_tx txq;
+ };
+
+ /**< physical address of vring,
+ * or virtual address for virtio_user.
+ */
+ rte_iova_t vq_ring_mem;
+
+ /**
+ * Head of the free chain in the descriptor table. If
+ * there are no free descriptors, this will be set to
+ * VQ_RING_DESC_CHAIN_END.
+ **/
+ uint16_t vq_desc_head_idx;
+ uint16_t vq_desc_tail_idx;
+ uint16_t vq_queue_index; /**< PCI queue index */
+ uint16_t offset; /**< relative offset to obtain addr in mbuf */
+ uint16_t *notify_addr;
+ struct rte_mbuf **sw_ring; /**< RX software ring. */
+ struct vq_desc_extra vq_descx[0];
+};
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq);
+int32_t zxdh_free_queues(struct rte_eth_dev *dev);
+int32_t get_queue_type(uint16_t vtpci_queue_idx);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_QUEUE_H_ */
diff --git a/drivers/net/zxdh/zxdh_ring.h b/drivers/net/zxdh/zxdh_ring.h
new file mode 100644
index 0000000000..bd7c997993
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ring.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#ifndef _ZXDH_RING_H_
+#define _ZXDH_RING_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <rte_common.h>
+
+/* This marks a buffer as continuing via the next field. */
+#define VRING_DESC_F_NEXT 1
+
+/* This marks a buffer as write-only (otherwise read-only). */
+#define VRING_DESC_F_WRITE 2
+
+/* This means the buffer contains a list of buffer descriptors. */
+#define VRING_DESC_F_INDIRECT 4
+
+/* This flag means the descriptor was made available by the driver */
+#define VRING_PACKED_DESC_F_AVAIL (1 << (7))
+/* This flag means the descriptor was used by the device */
+#define VRING_PACKED_DESC_F_USED (1 << (15))
+
+/* Frequently used combinations */
+#define VRING_PACKED_DESC_F_AVAIL_USED \
+ (VRING_PACKED_DESC_F_AVAIL | VRING_PACKED_DESC_F_USED)
+
+/* The Host uses this in used->flags to advise the Guest: don't kick me
+ * when you add a buffer. It's unreliable, so it's simply an
+ * optimization. Guest will still kick if it's out of buffers.
+ **/
+#define VRING_USED_F_NO_NOTIFY 1
+
+/** The Guest uses this in avail->flags to advise the Host: don't
+ * interrupt me when you consume a buffer. It's unreliable, so it's
+ * simply an optimization.
+ **/
+#define VRING_AVAIL_F_NO_INTERRUPT 1
+
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+
+/** VirtIO ring descriptors: 16 bytes.
+ * These can chain together via "next".
+ **/
+struct vring_desc {
+ uint64_t addr; /* Address (guest-physical). */
+ uint32_t len; /* Length. */
+ uint16_t flags; /* The flags as indicated above. */
+ uint16_t next; /* We chain unused descriptors via this. */
+};
+
+struct vring_avail {
+ uint16_t flags;
+ uint16_t idx;
+ uint16_t ring[0];
+};
+
+/** For support of packed virtqueues in Virtio 1.1 the format of descriptors
+ * looks like this.
+ **/
+struct vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+};
+
+struct vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+};
+
+struct vring_packed {
+ uint32_t num;
+ struct vring_packed_desc *desc;
+ struct vring_packed_desc_event *driver;
+ struct vring_packed_desc_event *device;
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_RING_H_ */
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
new file mode 100644
index 0000000000..7aedf568fe
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#ifndef _ZXDH_RXTX_H_
+#define _ZXDH_RXTX_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+struct virtnet_stats {
+ uint64_t packets;
+ uint64_t bytes;
+ uint64_t errors;
+ uint64_t multicast;
+ uint64_t broadcast;
+ uint64_t truncated_err;
+ uint64_t size_bins[8]; /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
+};
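The size_bins comment references the RFC 2819 buckets; a hedged sketch of the binning (the
datapath code that fills these counters is not part of this patch, and the exact bucket
edges are an assumption):

        /* Hypothetical example: map a frame length to its size_bins index. */
        static inline int example_size_bin(uint32_t len)
        {
                if (len < 64)
                        return 0;       /* undersized */
                if (len == 64)
                        return 1;
                if (len <= 127)
                        return 2;
                if (len <= 255)
                        return 3;
                if (len <= 511)
                        return 4;
                if (len <= 1023)
                        return 5;
                if (len <= 1518)
                        return 6;
                return 7;               /* above 1518: oversized/jumbo */
        }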
+
+struct virtnet_rx {
+ struct virtqueue *vq;
+
+ /* dummy mbuf, for wraparound when processing RX ring. */
+ struct rte_mbuf fake_mbuf;
+
+ uint64_t mbuf_initializer; /* value to init mbufs. */
+ struct rte_mempool *mpool; /* mempool for mbuf allocation */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate RX ring. */
+};
+
+struct virtnet_tx {
+ struct virtqueue *vq;
+ const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */
+ rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate TX ring. */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_RXTX_H_ */
--
2.43.0
[-- Attachment #1.1.2: Type: text/html , Size: 344566 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [v4] net/zxdh: Provided zxdh basic init
2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang
@ 2024-09-24 1:35 ` Junlong Wang
2024-09-25 22:39 ` [PATCH v4] " Ferruh Yigit
` (3 subsequent siblings)
4 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-09-24 1:35 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, wang.yong19
[-- Attachment #1.1.1: Type: text/plain, Size: 143 bytes --]
Hi, Ferruh
Could you please provide feedback on the patch I submitted?
Any suggestions for improvement would be appreciated.
Thanks
[-- Attachment #1.1.2: Type: text/html , Size: 289 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [PATCH v4] net/zxdh: Provided zxdh basic init
2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang
2024-09-24 1:35 ` [v4] " Junlong Wang
@ 2024-09-25 22:39 ` Ferruh Yigit
2024-09-26 6:49 ` [v4] " Junlong Wang
` (2 subsequent siblings)
4 siblings, 0 replies; 76+ messages in thread
From: Ferruh Yigit @ 2024-09-25 22:39 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, wang.yong19
On 9/10/2024 1:00 PM, Junlong Wang wrote:
> provided zxdh initialization of zxdh PMD driver.
> include msg channel, np init and etc.
>
Hi Junlong,
It is very hard to review the driver as a single patch; it helps to
split the driver into multiple patches. Please check the suggestion on it:
https://patches.dpdk.org/project/dpdk/patch/20240916162856.11566-1-stephen@networkplumber.org/
Also, there are errors reported by the scripts; please fix them:
./devtools/checkpatches.sh -n1
./devtools/check-meson.py
> Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
> ---
> V4: Resolve compilation issues
> V3: Resolve compilation issues
> V2: Resolve compilation issues and modify doc(zxdh.ini zdh.rst)
> V1: Provide zxdh basic init and open source NPSDK lib
> ---
> doc/guides/nics/features/zxdh.ini | 10 +
> doc/guides/nics/index.rst | 1 +
> doc/guides/nics/zxdh.rst | 34 +
> drivers/net/meson.build | 1 +
> drivers/net/zxdh/meson.build | 23 +
> drivers/net/zxdh/zxdh_common.c | 59 ++
> drivers/net/zxdh/zxdh_common.h | 32 +
> drivers/net/zxdh/zxdh_ethdev.c | 1328 +++++++++++++++++++++++++++++
> drivers/net/zxdh/zxdh_ethdev.h | 202 +++++
> drivers/net/zxdh/zxdh_logs.h | 38 +
> drivers/net/zxdh/zxdh_msg.c | 1177 +++++++++++++++++++++++++
> drivers/net/zxdh/zxdh_msg.h | 408 +++++++++
> drivers/net/zxdh/zxdh_npsdk.c | 158 ++++
> drivers/net/zxdh/zxdh_npsdk.h | 216 +++++
> drivers/net/zxdh/zxdh_pci.c | 462 ++++++++++
> drivers/net/zxdh/zxdh_pci.h | 259 ++++++
> drivers/net/zxdh/zxdh_queue.c | 138 +++
> drivers/net/zxdh/zxdh_queue.h | 85 ++
> drivers/net/zxdh/zxdh_ring.h | 87 ++
> drivers/net/zxdh/zxdh_rxtx.h | 48 ++
> 20 files changed, 4766 insertions(+)
> create mode 100644 doc/guides/nics/features/zxdh.ini
> create mode 100644 doc/guides/nics/zxdh.rst
> create mode 100644 drivers/net/zxdh/meson.build
> create mode 100644 drivers/net/zxdh/zxdh_common.c
> create mode 100644 drivers/net/zxdh/zxdh_common.h
> create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
> create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
> create mode 100644 drivers/net/zxdh/zxdh_logs.h
> create mode 100644 drivers/net/zxdh/zxdh_msg.c
> create mode 100644 drivers/net/zxdh/zxdh_msg.h
> create mode 100644 drivers/net/zxdh/zxdh_npsdk.c
> create mode 100644 drivers/net/zxdh/zxdh_npsdk.h
> create mode 100644 drivers/net/zxdh/zxdh_pci.c
> create mode 100644 drivers/net/zxdh/zxdh_pci.h
> create mode 100644 drivers/net/zxdh/zxdh_queue.c
> create mode 100644 drivers/net/zxdh/zxdh_queue.h
> create mode 100644 drivers/net/zxdh/zxdh_ring.h
> create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
>
> diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini
> new file mode 100644
> index 0000000000..083c75511b
> --- /dev/null
> +++ b/doc/guides/nics/features/zxdh.ini
> @@ -0,0 +1,10 @@
> +;
> +; Supported features of the 'zxdh' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Linux = Y
> +x86-64 = Y
> +ARMv8 = Y
> +
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index c14bc7988a..8e371ac4a5 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -69,3 +69,4 @@ Network Interface Controller Drivers
> vhost
> virtio
> vmxnet3
> + zxdh
> diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
> new file mode 100644
> index 0000000000..e878058b7b
> --- /dev/null
> +++ b/doc/guides/nics/zxdh.rst
> @@ -0,0 +1,34 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2023 ZTE Corporation.
> +
> +ZXDH Poll Mode Driver
> +======================
> +
> +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
> +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on
> +the ZTE Ethernet Controller E310/E312.
> +
>
Can you please provide a link for the product?
There is a link in the prerequisites section below; if that link is for
the product, please move it here.
> +
> +Features
> +--------
> +
> +Features of the zxdh PMD are:
> +
> +- Multi arch support: x86_64, ARMv8.
> +
> +Prerequisites
> +-------------
> +
> +- Learning about ZXDH NX Series Ethernet Controller NICs using
> + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
> +
> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +for details.
> +
> +Limitations or Known issues
> +---------------------------
> +X86-32, Power8, ARMv7 and BSD are not supported yet.
> +
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index fb6d34b782..1a3db8a04d 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -62,6 +62,7 @@ drivers = [
> 'vhost',
> 'virtio',
> 'vmxnet3',
> + 'zxdh',
> ]
> std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
> std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std
> diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
> new file mode 100644
> index 0000000000..593e3c5933
> --- /dev/null
> +++ b/drivers/net/zxdh/meson.build
> @@ -0,0 +1,23 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2024 ZTE Corporation
> +
> +if not is_linux
> + build = false
> + reason = 'only supported on Linux'
> + subdir_done()
> +endif
> +
> +if arch_subdir != 'x86' and arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_64')
>
Why not check 'RTE_ARCH_X86_64' and 'RTE_ARCH_ARM64'?
<...>
> +/* dev_ops for zxdh, bare necessities for basic operation */
> +static const struct eth_dev_ops zxdh_eth_dev_ops = {
> + .dev_configure = NULL,
> + .dev_start = NULL,
> + .dev_stop = NULL,
> + .dev_close = NULL,
> +
> + .rx_queue_setup = NULL,
> + .rx_queue_intr_enable = NULL,
> + .rx_queue_intr_disable = NULL,
> +
> + .tx_queue_setup = NULL,
> +};
>
No ops are implemented, so what happens when you run a DPDK application
while your device exists, does the application crash?
<...>
> +static int32_t zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev __rte_unused)
> +{
> + if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> + return 0;
> + /** todo later
> + * zxdh_dev_close(eth_dev);
>
Please either remove or implement todo comments in the upstream driver.
> + */
> + return 0;
> +}
> +
> +int32_t zxdh_eth_pci_remove(struct rte_pci_device *pci_dev)
> +{
> + int32_t ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit);
> +
> + if (ret == -ENODEV) { /* Port has already been released by close. */
> + ret = 0;
> + }
> + return ret;
> +}
> +
> +static const struct rte_pci_id pci_id_zxdh_map[] = {
> + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)},
> + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)},
> + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)},
> + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)},
> + {.vendor_id = 0, /* sentinel */ },
> +};
> +static struct rte_pci_driver zxdh_pmd = {
> + .driver = {.name = "net_zxdh", },
>
'driver.name' is already set by the 'RTE_PMD_REGISTER_PCI' macro, so there
is no need to set it above.
> + .id_table = pci_id_zxdh_map,
> + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
> + .probe = zxdh_eth_pci_probe,
> + .remove = zxdh_eth_pci_remove,
> +};
> +
> +RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
> +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE);
> +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE);
> +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, DEBUG);
> +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, DEBUG);
> +
> +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, DEBUG);
>
Default 'DEBUG' log level is too verbose; the usual default is NOTICE.
> +RTE_PMD_REGISTER_PARAM_STRING(net_zxdh,
> + "q_depth=<int>");
>
Please document device arguments in the driver documentation.
<...>
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [v4] net/zxdh: Provided zxdh basic init
2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang
2024-09-24 1:35 ` [v4] " Junlong Wang
2024-09-25 22:39 ` [PATCH v4] " Ferruh Yigit
@ 2024-09-26 6:49 ` Junlong Wang
2024-10-07 21:43 ` [PATCH v4] " Stephen Hemminger
2024-10-15 5:43 ` [PATCH v5 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
4 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-09-26 6:49 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, wang.yong19
[-- Attachment #1.1.1: Type: text/plain, Size: 450 bytes --]
>> +if not is_linux
>> + build = false
>> + reason = 'only supported on Linux'
>> + subdir_done()
>> +endif
>> +
>> +if arch_subdir != 'x86' and arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_64')
>>
>Why not check 'RTE_ARCH_X86_64' and 'RTE_ARCH_ARM64'?
We will fix it and use 'RTE_ARCH_X86_64' and 'RTE_ARCH_ARM64' for the check;
the other comments will be addressed, and the driver will be split into multiple patches.
Thanks!
[-- Attachment #1.1.2: Type: text/html , Size: 987 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [PATCH v4] net/zxdh: Provided zxdh basic init
2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang
` (2 preceding siblings ...)
2024-09-26 6:49 ` [v4] " Junlong Wang
@ 2024-10-07 21:43 ` Stephen Hemminger
2024-10-15 5:43 ` [PATCH v5 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
4 siblings, 0 replies; 76+ messages in thread
From: Stephen Hemminger @ 2024-10-07 21:43 UTC (permalink / raw)
To: Junlong Wang; +Cc: ferruh.yigit, dev, wang.yong19
On Tue, 10 Sep 2024 20:00:20 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:
> diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
> new file mode 100644
> index 0000000000..e878058b7b
> --- /dev/null
> +++ b/doc/guides/nics/zxdh.rst
> @@ -0,0 +1,34 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2023 ZTE Corporation.
> +
> +ZXDH Poll Mode Driver
> +======================
> +
> +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
> +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on
> +the ZTE Ethernet Controller E310/E312.
> +
> +
> +Features
> +--------
> +
> +Features of the zxdh PMD are:
> +
> +- Multi arch support: x86_64, ARMv8.
> +
> +Prerequisites
> +-------------
> +
> +- Learning about ZXDH NX Series Ethernet Controller NICs using
> + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
> +
> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +for details.
> +
> +Limitations or Known issues
> +---------------------------
> +X86-32, Power8, ARMv7 and BSD are not supported yet.
> +
>
Note: git gives a warning when merging this.
/home/shemminger/DPDK/main/.git/worktrees/zxdh/rebase-apply/patch:78: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v5 0/9] net/zxdh: introduce net zxdh driver
2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang
` (3 preceding siblings ...)
2024-10-07 21:43 ` [PATCH v4] " Stephen Hemminger
@ 2024-10-15 5:43 ` Junlong Wang
2024-10-15 5:43 ` [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
4 siblings, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:43 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 2472 bytes --]
V5:
- split the driver into multiple patches; this part provides the basic zxdh driver,
later patches will provide dev start/stop, queue_setup, mac, vlan, rss, etc.
- fix errors reported by scripts.
- move the product link in zxdh.rst.
- fix meson check use RTE_ARCH_X86_64/RTE_ARCH_ARM64.
- modify other comments according to Ferruh's comments.
V4:
- Resolve compilation issues
Junlong Wang (9):
net/zxdh: add zxdh ethdev pmd driver
net/zxdh: add logging implementation
net/zxdh: add zxdh device pci init implementation
net/zxdh: add msg chan and msg hwlock init
net/zxdh: add msg chan enable implementation
net/zxdh: add zxdh get device backend infos
net/zxdh: add configure zxdh intr implementation
net/zxdh: add zxdh dev infos get ops
net/zxdh: add zxdh dev configure ops
doc/guides/nics/features/zxdh.ini | 9 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 30 +
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 22 +
drivers/net/zxdh/zxdh_common.c | 368 +++++++++++
drivers/net/zxdh/zxdh_common.h | 42 ++
drivers/net/zxdh/zxdh_ethdev.c | 1021 +++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 105 +++
drivers/net/zxdh/zxdh_logs.h | 35 +
drivers/net/zxdh/zxdh_msg.c | 992 ++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 228 +++++++
drivers/net/zxdh/zxdh_pci.c | 449 +++++++++++++
drivers/net/zxdh/zxdh_pci.h | 191 ++++++
drivers/net/zxdh/zxdh_queue.c | 131 ++++
drivers/net/zxdh/zxdh_queue.h | 280 ++++++++
drivers/net/zxdh/zxdh_rxtx.h | 55 ++
17 files changed, 3960 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
create mode 100644 drivers/net/zxdh/zxdh_logs.h
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.c
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 4566 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver
2024-10-15 5:43 ` [PATCH v5 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
@ 2024-10-15 5:43 ` Junlong Wang
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-16 8:16 ` [PATCH v6 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
0 siblings, 2 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:43 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 7509 bytes --]
Add basic zxdh ethdev init and register PCI probe functions
Update doc files
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
doc/guides/nics/features/zxdh.ini | 9 +++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 30 ++++++++++
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 18 ++++++
drivers/net/zxdh/zxdh_ethdev.c | 92 +++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 44 +++++++++++++++
7 files changed, 195 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini
new file mode 100644
index 0000000000..05c8091ed7
--- /dev/null
+++ b/doc/guides/nics/features/zxdh.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'zxdh' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
+ARMv8 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc7988a..8e371ac4a5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -69,3 +69,4 @@ Network Interface Controller Drivers
vhost
virtio
vmxnet3
+ zxdh
diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
new file mode 100644
index 0000000000..4cf531e967
--- /dev/null
+++ b/doc/guides/nics/zxdh.rst
@@ -0,0 +1,30 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2023 ZTE Corporation.
+
+ZXDH Poll Mode Driver
+======================
+
+The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
+for 25/100 Gbps ZXDH NX Series Ethernet Controller based on
+the ZTE Ethernet Controller E310/E312.
+
+- Learning about ZXDH NX Series Ethernet Controller NICs using
+ `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
+
+Features
+--------
+
+Features of the zxdh PMD are:
+
+- Multi arch support: x86_64, ARMv8.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+X86-32, Power8, ARMv7 and BSD are not supported yet.
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index fb6d34b782..0a12914534 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -62,6 +62,7 @@ drivers = [
'vhost',
'virtio',
'vmxnet3',
+ 'zxdh',
]
std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
new file mode 100644
index 0000000000..58e39c1f96
--- /dev/null
+++ b/drivers/net/zxdh/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 ZTE Corporation
+
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+if not dpdk_conf.has('RTE_ARCH_X86_64') and not dpdk_conf.has('RTE_ARCH_ARM64')
+ build = false
+ reason = 'only supported on x86_64 and aarch64'
+ subdir_done()
+endif
+
+sources = files(
+ 'zxdh_ethdev.c',
+ )
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
new file mode 100644
index 0000000000..75d8b28cc3
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <ethdev_pci.h>
+#include <bus_pci_driver.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ int ret = 0;
+
+ eth_dev->dev_ops = NULL;
+
+ /* Allocate memory for storing MAC addresses */
+ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
+ if (eth_dev->data->mac_addrs == NULL)
+ return -ENOMEM;
+
+ memset(hw, 0, sizeof(*hw));
+ hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
+ if (hw->bar_addr[0] == 0)
+ return -EIO;
+
+ hw->device_id = pci_dev->id.device_id;
+ hw->port_id = eth_dev->data->port_id;
+ hw->eth_dev = eth_dev;
+ hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ hw->is_pf = 0;
+
+ if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID ||
+ pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) {
+ hw->is_pf = 1;
+ }
+
+ return ret;
+}
+
+static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct zxdh_hw),
+ zxdh_eth_dev_init);
+}
+
+static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+ int ret = 0;
+
+ return ret;
+}
+
+static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ int ret = 0;
+
+ ret = zxdh_dev_close(eth_dev);
+
+ return ret;
+}
+
+static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev)
+{
+ int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit);
+
+ return ret;
+}
+
+static const struct rte_pci_id pci_id_zxdh_map[] = {
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)},
+ {.vendor_id = 0, /* sentinel */ },
+};
+static struct rte_pci_driver zxdh_pmd = {
+ .id_table = pci_id_zxdh_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = zxdh_eth_pci_probe,
+ .remove = zxdh_eth_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
new file mode 100644
index 0000000000..04023bfe84
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_ETHDEV_H_
+#define _ZXDH_ETHDEV_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "ethdev_driver.h"
+
+/* ZXDH PCI vendor/device ID. */
+#define PCI_VENDOR_ID_ZTE 0x1cf2
+
+#define ZXDH_E310_PF_DEVICEID 0x8061
+#define ZXDH_E310_VF_DEVICEID 0x8062
+#define ZXDH_E312_PF_DEVICEID 0x8049
+#define ZXDH_E312_VF_DEVICEID 0x8060
+
+#define ZXDH_MAX_UC_MAC_ADDRS 32
+#define ZXDH_MAX_MC_MAC_ADDRS 32
+#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+
+#define ZXDH_NUM_BARS 2
+
+struct zxdh_hw {
+ struct rte_eth_dev *eth_dev;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+
+ uint32_t speed;
+ uint16_t device_id;
+ uint16_t port_id;
+
+ uint8_t duplex;
+ uint8_t is_pf;
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_ETHDEV_H_ */
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 14136 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v5 2/9] net/zxdh: add logging implementation
2024-10-15 5:43 ` [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
@ 2024-10-15 5:44 ` Junlong Wang
2024-10-15 5:44 ` [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
` (6 more replies)
2024-10-16 8:16 ` [PATCH v6 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
1 sibling, 7 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 3287 bytes --]
Add zxdh logging implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++++--
drivers/net/zxdh/zxdh_logs.h | 35 ++++++++++++++++++++++++++++++++++
2 files changed, 48 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_logs.h
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 75d8b28cc3..7220770c01 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -7,6 +7,7 @@
#include <rte_ethdev.h>
#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL)
+ if (eth_dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN);
return -ENOMEM;
+ }
memset(hw, 0, sizeof(*hw));
hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
- if (hw->bar_addr[0] == 0)
+ if (hw->bar_addr[0] == 0) {
+ PMD_INIT_LOG(ERR, "Bad mem resource.");
return -EIO;
+ }
hw->device_id = pci_dev->id.device_id;
hw->port_id = eth_dev->data->port_id;
@@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = {
RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE);
diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h
new file mode 100644
index 0000000000..e118d26379
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_logs.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_LOGS_H_
+#define _ZXDH_LOGS_H_
+
+#include <rte_log.h>
+
+extern int zxdh_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_init, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+extern int zxdh_logtype_driver;
+#define PMD_DRV_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_driver, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+extern int zxdh_logtype_rx;
+#define PMD_RX_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_rx, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+extern int zxdh_logtype_tx;
+#define PMD_TX_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_tx, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+extern int zxdh_logtype_msg;
+#define PMD_MSG_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, zxdh_logtype_msg, \
+ "offload_zxdh %s(): " fmt "\n", __func__, ## args)
+
+#endif /* _ZXDH_LOGS_H_ */
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 6031 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
@ 2024-10-15 5:44 ` Junlong Wang
2024-10-15 5:44 ` [PATCH v5 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
` (5 subsequent siblings)
6 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 23053 bytes --]
Add device PCI init implementation
to obtain PCI capabilities and read configuration, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 5 +-
drivers/net/zxdh/zxdh_ethdev.c | 43 +++++
drivers/net/zxdh/zxdh_ethdev.h | 20 ++-
drivers/net/zxdh/zxdh_pci.c | 290 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++
drivers/net/zxdh/zxdh_queue.h | 109 +++++++++++++
drivers/net/zxdh/zxdh_rxtx.h | 55 +++++++
7 files changed, 669 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 58e39c1f96..080c6c7725 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -14,5 +14,6 @@ if not dpdk_conf.has('RTE_ARCH_X86_64') and not dpdk_conf.has('RTE_ARCH_ARM64')
endif
sources = files(
- 'zxdh_ethdev.c',
- )
+ 'zxdh_ethdev.c',
+ 'zxdh_pci.c',
+ )
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 7220770c01..bb219c189f 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -8,6 +8,40 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+
+struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+
+static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
+{
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int ret = 0;
+
+ ret = zxdh_read_pci_caps(pci_dev, hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed .", hw->port_id);
+ goto err;
+ }
+
+ zxdh_hw_internal[hw->port_id].vtpci_ops = &zxdh_dev_pci_ops;
+ zxdh_vtpci_reset(hw);
+ zxdh_get_pci_dev_config(hw);
+
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]);
+
+ /* If host does not support both status and MSI-X then disable LSC */
+ if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && (hw->use_msix != ZXDH_MSIX_NONE))
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+ else
+ eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
+
+ return 0;
+
+err:
+ PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id);
+ return ret;
+}
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
hw->is_pf = 1;
}
+ ret = zxdh_init_device(eth_dev);
+ if (ret < 0)
+ goto err_zxdh_init;
+
+ return ret;
+
+err_zxdh_init:
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
return ret;
}
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 04023bfe84..18d9916713 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -9,6 +9,7 @@
extern "C" {
#endif
+#include <rte_ether.h>
#include "ethdev_driver.h"
/* ZXDH PCI vendor/device ID. */
@@ -23,16 +24,31 @@ extern "C" {
#define ZXDH_MAX_MC_MAC_ADDRS 32
#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+#define ZXDH_RX_QUEUES_MAX 128U
+#define ZXDH_TX_QUEUES_MAX 128U
+
#define ZXDH_NUM_BARS 2
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
- uint64_t bar_addr[ZXDH_NUM_BARS];
+ struct zxdh_pci_common_cfg *common_cfg;
+ struct zxdh_net_config *dev_cfg;
- uint32_t speed;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+ uint64_t host_features;
+ uint64_t guest_features;
+ uint32_t max_queue_pairs;
+ uint32_t speed;
+ uint32_t notify_off_multiplier;
+ uint16_t *notify_base;
+ uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint8_t *isr;
+ uint8_t weak_barriers;
+ uint8_t use_msix;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
uint8_t is_pf;
};
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
new file mode 100644
index 0000000000..73ec640b84
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -0,0 +1,290 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <unistd.h>
+
+#ifdef RTE_EXEC_ENV_LINUX
+ #include <dirent.h>
+ #include <fcntl.h>
+#endif
+
+#include <rte_io.h>
+#include <rte_bus.h>
+#include <rte_pci.h>
+#include <rte_common.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_pci.h"
+#include "zxdh_logs.h"
+#include "zxdh_queue.h"
+
+#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \
+ (1ULL << ZXDH_NET_F_MRG_RXBUF | \
+ 1ULL << ZXDH_NET_F_STATUS | \
+ 1ULL << ZXDH_NET_F_MQ | \
+ 1ULL << ZXDH_F_ANY_LAYOUT | \
+ 1ULL << ZXDH_F_VERSION_1 | \
+ 1ULL << ZXDH_F_RING_PACKED | \
+ 1ULL << ZXDH_F_IN_ORDER | \
+ 1ULL << ZXDH_F_NOTIFICATION_DATA | \
+ 1ULL << ZXDH_NET_F_MAC)
+
+static void zxdh_read_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ void *dst,
+ int32_t length)
+{
+ int32_t i = 0;
+ uint8_t *p = NULL;
+ uint8_t old_gen = 0;
+ uint8_t new_gen = 0;
+
+ do {
+ old_gen = rte_read8(&hw->common_cfg->config_generation);
+
+ p = dst;
+ for (i = 0; i < length; i++)
+ *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i);
+
+ new_gen = rte_read8(&hw->common_cfg->config_generation);
+ } while (old_gen != new_gen);
+}
+
+static void zxdh_write_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ const void *src,
+ int32_t length)
+{
+ int32_t i = 0;
+ const uint8_t *p = src;
+
+ for (i = 0; i < length; i++)
+ rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
+}
+
+static uint8_t zxdh_get_status(struct zxdh_hw *hw)
+{
+ return rte_read8(&hw->common_cfg->device_status);
+}
+
+static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ rte_write8(status, &hw->common_cfg->device_status);
+}
+
+static uint64_t zxdh_get_features(struct zxdh_hw *hw)
+{
+ uint32_t features_lo = 0;
+ uint32_t features_hi = 0;
+
+ rte_write32(0, &hw->common_cfg->device_feature_select);
+ features_lo = rte_read32(&hw->common_cfg->device_feature);
+
+ rte_write32(1, &hw->common_cfg->device_feature_select);
+ features_hi = rte_read32(&hw->common_cfg->device_feature);
+
+ return ((uint64_t)features_hi << 32) | features_lo;
+}
+
+static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
+{
+ rte_write32(0, &hw->common_cfg->guest_feature_select);
+ rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature);
+ rte_write32(1, &hw->common_cfg->guest_feature_select);
+ rte_write32(features >> 32, &hw->common_cfg->guest_feature);
+}
+
+const struct zxdh_pci_ops zxdh_dev_pci_ops = {
+ .read_dev_cfg = zxdh_read_dev_config,
+ .write_dev_cfg = zxdh_write_dev_config,
+ .get_status = zxdh_get_status,
+ .set_status = zxdh_set_status,
+ .get_features = zxdh_get_features,
+ .set_features = zxdh_set_features,
+};
+
+uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw)
+{
+ return VTPCI_OPS(hw)->get_features(hw);
+}
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw)
+{
+ PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id);
+ uint32_t retry = 0;
+
+ VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET);
+ /* Flush the status write and poll until the device leaves the reset state. */
+ while (VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) {
+ ++retry;
+ usleep(1000L);
+ }
+ PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
+}
+
+static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
+{
+ uint8_t bar = cap->bar;
+ uint32_t length = cap->length;
+ uint32_t offset = cap->offset;
+
+ if (bar >= PCI_MAX_RESOURCE) {
+ PMD_INIT_LOG(ERR, "invalid bar: %u", bar);
+ return NULL;
+ }
+ if (offset + length < offset) {
+ PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length);
+ return NULL;
+ }
+ if (offset + length > dev->mem_resource[bar].len) {
+ PMD_INIT_LOG(ERR, "invalid cap: overflows bar space: %u > %" PRIu64,
+ offset + length, dev->mem_resource[bar].len);
+ return NULL;
+ }
+ uint8_t *base = dev->mem_resource[bar].addr;
+
+ if (base == NULL) {
+ PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar);
+ return NULL;
+ }
+ return base + offset;
+}
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw)
+{
+ uint8_t pos = 0;
+ int32_t ret = 0;
+
+ if (dev->mem_resource[0].addr == NULL) {
+ PMD_INIT_LOG(ERR, "bar0 base addr is NULL");
+ return -1;
+ }
+
+ ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret);
+ return -1;
+ }
+ while (pos) {
+ struct zxdh_pci_cap cap;
+
+ ret = rte_pci_read_config(dev, &cap, 2, pos);
+ if (ret != 2) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap.cap_vndr == PCI_CAP_ID_MSIX) {
+ /**
+ * Transitional devices would also have this capability,
+ * that's why we also check if msix is enabled.
+ * 1st byte is cap ID; 2nd byte is the position of next cap;
+ * next two bytes are the flags.
+ */
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2);
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d",
+ pos + 2, ret);
+ break;
+ }
+ hw->use_msix = (flags & PCI_MSIX_ENABLE) ?
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED;
+ }
+ if (cap.cap_vndr != PCI_CAP_ID_VNDR) {
+ PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x",
+ pos, cap.cap_vndr);
+ goto next;
+ }
+ ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u",
+ pos, cap.cfg_type, cap.bar, cap.offset, cap.length);
+ switch (cap.cfg_type) {
+ case ZXDH_PCI_CAP_COMMON_CFG:
+ hw->common_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_NOTIFY_CFG: {
+ ret = rte_pci_read_config(dev, &hw->notify_off_multiplier,
+ 4, pos + sizeof(cap));
+ if (ret != 4)
+ PMD_INIT_LOG(ERR,
+ "failed to read notify_off_multiplier, ret %d", ret);
+ else
+ hw->notify_base = get_cfg_addr(dev, &cap);
+ break;
+ }
+ case ZXDH_PCI_CAP_DEVICE_CFG:
+ hw->dev_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_ISR_CFG:
+ hw->isr = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_PCI_CFG: {
+ hw->pcie_id = *(uint16_t *)&cap.padding[1];
+ PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id);
+ uint16_t pcie_id = hw->pcie_id;
+
+ if ((pcie_id >> 11) & 0x1) /* PF */ {
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7);
+ } else { /* VF */
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff);
+ }
+ break;
+ }
+ }
+next:
+ pos = cap.cap_next;
+ }
+ if (hw->common_cfg == NULL || hw->notify_base == NULL ||
+ hw->dev_cfg == NULL || hw->isr == NULL) {
+ PMD_INIT_LOG(ERR, "no zxdh pci device found.");
+ return -1;
+ }
+ return 0;
+}
+
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length)
+{
+ VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length);
+}
+
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
+{
+ uint64_t guest_features = 0;
+ uint64_t nego_features = 0;
+ uint32_t max_queue_pairs = 0;
+
+ hw->host_features = zxdh_vtpci_get_features(hw);
+
+ guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES;
+ nego_features = guest_features & hw->host_features;
+
+ hw->guest_features = nego_features;
+
+ if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) {
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac),
+ &hw->mac_addr, RTE_ETHER_ADDR_LEN);
+ } else {
+ rte_eth_random_addr(&hw->mac_addr[0]);
+ }
+
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs),
+ &max_queue_pairs, sizeof(max_queue_pairs));
+
+ if (max_queue_pairs == 0)
+ hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX;
+ else
+ hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs);
+ PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs);
+
+ return 0;
+}
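For reference, the pcie_id bit layout implied by the shifts in the ZXDH_PCI_CAP_PCI_CFG case above works out as follows (the layout is inferred from the code, and the example value is illustrative):

    /*
     * pcie_id layout (inferred): [15:12] EP, [11] PF flag,
     * [10:8] PF index, [7:0] VF index.
     *
     * Example: pcie_id = 0x1234
     *   EP index = 0x1234 >> 12         = 1
     *   PF flag  = (0x1234 >> 11) & 0x1 = 0   -> this ID names a VF
     *   PF index = (0x1234 >> 8) & 0x7  = 2
     *   VF index = 0x1234 & 0xff        = 0x34 (52)
     */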
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
new file mode 100644
index 0000000000..deda73a65a
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_PCI_H_
+#define _ZXDH_PCI_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#include <bus_pci_driver.h>
+
+#include "zxdh_ethdev.h"
+
+enum zxdh_msix_status {
+ ZXDH_MSIX_NONE = 0,
+ ZXDH_MSIX_DISABLED = 1,
+ ZXDH_MSIX_ENABLED = 2
+};
+
+#define PCI_CAPABILITY_LIST 0x34
+#define PCI_CAP_ID_VNDR 0x09
+#define PCI_CAP_ID_MSIX 0x11
+
+#define PCI_MSIX_ENABLE 0x8000
+
+#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
+#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
+#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
+#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */
+#define ZXDH_F_VERSION_1 32
+#define ZXDH_F_RING_PACKED 34
+#define ZXDH_F_IN_ORDER 35
+#define ZXDH_F_NOTIFICATION_DATA 38
+
+#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */
+#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */
+#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */
+#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */
+#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */
+
+/* Status byte for guest to report progress. */
+#define ZXDH_CONFIG_STATUS_RESET 0x00
+#define ZXDH_CONFIG_STATUS_ACK 0x01
+#define ZXDH_CONFIG_STATUS_DRIVER 0x02
+#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04
+#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08
+#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
+#define ZXDH_CONFIG_STATUS_FAILED 0x80
+
+struct zxdh_net_config {
+ /* The config defining mac address (if ZXDH_NET_F_MAC) */
+ uint8_t mac[RTE_ETHER_ADDR_LEN];
+ /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */
+ uint16_t status;
+ uint16_t max_virtqueue_pairs;
+ uint16_t mtu;
+ /*
+ * speed, in units of 1Mb. All values 0 to INT_MAX are legal.
+ * Any other value stands for unknown.
+ */
+ uint32_t speed;
+ /* 0x00 - half duplex
+ * 0x01 - full duplex
+ * Any other value stands for unknown.
+ */
+ uint8_t duplex;
+} __rte_packed;
+
+/* This is the PCI capability header: */
+struct zxdh_pci_cap {
+ uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */
+ uint8_t cap_next; /* Generic PCI field: next ptr. */
+ uint8_t cap_len; /* Generic PCI field: capability length */
+ uint8_t cfg_type; /* Identifies the structure. */
+ uint8_t bar; /* Where to find it. */
+ uint8_t padding[3]; /* Pad to full dword. */
+ uint32_t offset; /* Offset within bar. */
+ uint32_t length; /* Length of the structure, in bytes. */
+};
+
+/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */
+struct zxdh_pci_common_cfg {
+ /* About the whole device. */
+ uint32_t device_feature_select; /* read-write */
+ uint32_t device_feature; /* read-only */
+ uint32_t guest_feature_select; /* read-write */
+ uint32_t guest_feature; /* read-write */
+ uint16_t msix_config; /* read-write */
+ uint16_t num_queues; /* read-only */
+ uint8_t device_status; /* read-write */
+ uint8_t config_generation; /* read-only */
+
+ /* About a specific virtqueue. */
+ uint16_t queue_select; /* read-write */
+ uint16_t queue_size; /* read-write, power of 2. */
+ uint16_t queue_msix_vector; /* read-write */
+ uint16_t queue_enable; /* read-write */
+ uint16_t queue_notify_off; /* read-only */
+ uint32_t queue_desc_lo; /* read-write */
+ uint32_t queue_desc_hi; /* read-write */
+ uint32_t queue_avail_lo; /* read-write */
+ uint32_t queue_avail_hi; /* read-write */
+ uint32_t queue_used_lo; /* read-write */
+ uint32_t queue_used_hi; /* read-write */
+};
+
+static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
+{
+ return (hw->guest_features & (1ULL << bit)) != 0;
+}
+
+struct zxdh_pci_ops {
+ void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
+ void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
+
+ uint8_t (*get_status)(struct zxdh_hw *hw);
+ void (*set_status)(struct zxdh_hw *hw, uint8_t status);
+
+ uint64_t (*get_features)(struct zxdh_hw *hw);
+ void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+};
+
+struct zxdh_hw_internal {
+ const struct zxdh_pci_ops *vtpci_ops;
+};
+
+#define VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].vtpci_ops)
+
+extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+extern const struct zxdh_pci_ops zxdh_dev_pci_ops;
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw);
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset,
+ void *dst, int32_t length);
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
+
+uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_PCI_H_ */
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
new file mode 100644
index 0000000000..e511843205
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_QUEUE_H_
+#define _ZXDH_QUEUE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_common.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_rxtx.h"
+
+/** ring descriptors: 16 bytes.
+ * These can chain together via "next".
+ **/
+struct vring_desc {
+ uint64_t addr; /* Address (guest-physical). */
+ uint32_t len; /* Length. */
+ uint16_t flags; /* The flags as indicated above. */
+ uint16_t next; /* We chain unused descriptors via this. */
+};
+
+struct vring_avail {
+ uint16_t flags;
+ uint16_t idx;
+ uint16_t ring[0];
+};
+
+struct vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+};
+
+struct vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+};
+
+struct vring_packed {
+ uint32_t num;
+ struct vring_packed_desc *desc;
+ struct vring_packed_desc_event *driver;
+ struct vring_packed_desc_event *device;
+};
+
+struct vq_desc_extra {
+ void *cookie;
+ uint16_t ndescs;
+ uint16_t next;
+};
+
+struct virtqueue {
+ struct zxdh_hw *hw; /**< zxdh_hw structure pointer. */
+ struct {
+ /**< vring keeping descs and events */
+ struct vring_packed ring;
+ uint8_t used_wrap_counter;
+ uint8_t rsv;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ uint16_t rsv1;
+ } __rte_packed vq_packed;
+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
+ uint16_t vq_nentries; /**< vring desc numbers */
+ uint16_t vq_free_cnt; /**< num of desc available */
+ uint16_t vq_avail_idx; /**< sync until needed */
+ uint16_t vq_free_thresh; /**< free threshold */
+ uint16_t rsv2;
+
+ void *vq_ring_virt_mem; /**< linear address of vring*/
+ uint32_t vq_ring_size;
+
+ union {
+ struct virtnet_rx rxq;
+ struct virtnet_tx txq;
+ };
+
+ /** < physical address of vring,
+ * or virtual address for virtio_user.
+ **/
+ rte_iova_t vq_ring_mem;
+
+ /**
+ * Head of the free chain in the descriptor table. If
+ * there are no free descriptors, this will be set to
+ * VQ_RING_DESC_CHAIN_END.
+ **/
+ uint16_t vq_desc_head_idx;
+ uint16_t vq_desc_tail_idx;
+ uint16_t vq_queue_index; /**< PCI queue index */
+ uint16_t offset; /**< relative offset to obtain addr in mbuf */
+ uint16_t *notify_addr;
+ struct rte_mbuf **sw_ring; /**< RX software ring. */
+ struct vq_desc_extra vq_descx[0];
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_QUEUE_H_ */
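The used_wrap_counter and cached_flags fields in struct virtqueue above follow the virtio 1.1 packed-ring convention. A minimal sketch of the standard availability test built on struct vring_packed_desc (flag bit positions per the virtio spec; the helper itself is not part of this patch):

    #define VRING_PACKED_DESC_F_AVAIL (1 << 7)
    #define VRING_PACKED_DESC_F_USED  (1 << 15)

    /* A packed-ring descriptor has been consumed by the device when its
     * AVAIL and USED flag bits are equal and both match the ring's
     * current wrap counter. */
    static inline int desc_is_used(const struct vring_packed_desc *desc,
        uint8_t wrap_counter)
    {
        uint16_t flags = desc->flags; /* real code pairs this read with a barrier */
        uint8_t avail = !!(flags & VRING_PACKED_DESC_F_AVAIL);
        uint8_t used = !!(flags & VRING_PACKED_DESC_F_USED);

        return avail == used && used == wrap_counter;
    }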
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
new file mode 100644
index 0000000000..6476bc15e2
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#ifndef _ZXDH_RXTX_H_
+#define _ZXDH_RXTX_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_mbuf_core.h>
+
+struct virtnet_stats {
+ uint64_t packets;
+ uint64_t bytes;
+ uint64_t errors;
+ uint64_t multicast;
+ uint64_t broadcast;
+ uint64_t truncated_err;
+ uint64_t size_bins[8]; /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
+};
+
+struct virtnet_rx {
+ struct virtqueue *vq;
+
+ /* dummy mbuf, for wraparound when processing RX ring. */
+ struct rte_mbuf fake_mbuf;
+
+ uint64_t mbuf_initializer; /* value to init mbufs. */
+ struct rte_mempool *mpool; /* mempool for mbuf allocation */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate RX ring. */
+};
+
+struct virtnet_tx {
+ struct virtqueue *vq;
+ const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */
+ rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate TX ring. */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_RXTX_H_ */
--
2.27.0
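A note on the feature handling in zxdh_pci.c above: the device exposes a 64-bit feature mask through a 32-bit window register selected by device_feature_select, and zxdh_get_pci_dev_config() keeps only the intersection with the PMD's default guest mask. A minimal sketch of how a negotiated bit is then tested (hw and the macros are the ones from the patch; the branch body is illustrative):

    uint64_t nego = hw->host_features & ZXDH_PMD_DEFAULT_GUEST_FEATURES;

    /* ZXDH_F_RING_PACKED is bit 34, i.e. it lives in the high 32-bit
     * feature word (device_feature_select = 1). */
    if (nego & (1ULL << ZXDH_F_RING_PACKED)) {
        /* packed virtqueue layout negotiated */
    }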
* [PATCH v5 4/9] net/zxdh: add msg chan and msg hwlock init
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-15 5:44 ` [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
@ 2024-10-15 5:44 ` Junlong Wang
2024-10-15 5:44 ` [PATCH v5 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
` (4 subsequent siblings)
6 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
Add msg channel and hwlock init implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_ethdev.c | 15 +++
drivers/net/zxdh/zxdh_ethdev.h | 1 +
drivers/net/zxdh/zxdh_msg.c | 161 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 65 +++++++++++++
5 files changed, 243 insertions(+)
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 080c6c7725..9d0b5b9fd3 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -16,4 +16,5 @@ endif
sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
+ 'zxdh_msg.c',
)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index bb219c189f..66b57c4e59 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -9,6 +9,7 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
#include "zxdh_pci.h"
+#include "zxdh_msg.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
@@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret < 0)
goto err_zxdh_init;
+ ret = zxdh_msg_chan_init();
+ if (ret < 0) {
+ PMD_INIT_LOG(ERR, "Failed to init bar msg chan");
+ goto err_zxdh_init;
+ }
+ hw->msg_chan_init = 1;
+
+ ret = zxdh_msg_chan_hwlock_init(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
+ zxdh_bar_msg_chan_exit();
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
return ret;
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 18d9916713..24eb3a5ca0 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -51,6 +51,7 @@ struct zxdh_hw {
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
uint8_t is_pf;
+ uint8_t msg_chan_init;
};
#ifdef __cplusplus
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
new file mode 100644
index 0000000000..4928711ad8
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <pthread.h>
+#include <rte_cycles.h>
+#include <inttypes.h>
+#include <rte_malloc.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+
+#define REPS_INFO_FLAG_USABLE 0x00
+#define BAR_SEQID_NUM_MAX 256
+
+#define ZXDH_BAR0_INDEX 0
+
+#define PCIEID_IS_PF_MASK (0x0800)
+#define PCIEID_PF_IDX_MASK (0x0700)
+#define PCIEID_VF_IDX_MASK (0x00ff)
+#define PCIEID_EP_IDX_MASK (0x7000)
+/* PCIEID bit field offset */
+#define PCIEID_PF_IDX_OFFSET (8)
+#define PCIEID_EP_IDX_OFFSET (12)
+
+#define MULTIPLY_BY_8(x) ((x) << 3)
+#define MULTIPLY_BY_32(x) ((x) << 5)
+#define MULTIPLY_BY_256(x) ((x) << 8)
+
+#define MAX_EP_NUM (4)
+#define MAX_HARD_SPINLOCK_NUM (511)
+
+#define BAR0_SPINLOCK_OFFSET (0x4000)
+#define FW_SHRD_OFFSET (0x5000)
+#define FW_SHRD_INNER_HW_LABEL_PAT (0x800)
+#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT)
+
+struct dev_stat {
+ bool is_mpf_scanned;
+ bool is_res_init;
+ int16_t dev_cnt; /* probe cnt */
+};
+struct dev_stat g_dev_stat = {0};
+
+struct seqid_item {
+ void *reps_addr;
+ uint16_t id;
+ uint16_t buffer_len;
+ uint16_t flag;
+};
+
+struct seqid_ring {
+ uint16_t cur_id;
+ pthread_spinlock_t lock;
+ struct seqid_item reps_info_tbl[BAR_SEQID_NUM_MAX];
+};
+struct seqid_ring g_seqid_ring = {0};
+
+static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
+{
+ uint16_t lock_id = 0;
+ uint16_t pf_idx = (src_pcieid & PCIEID_PF_IDX_MASK) >> PCIEID_PF_IDX_OFFSET;
+ uint16_t ep_idx = (src_pcieid & PCIEID_EP_IDX_MASK) >> PCIEID_EP_IDX_OFFSET;
+
+ switch (dst) {
+ /* msg to risc */
+ case MSG_CHAN_END_RISC:
+ lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx;
+ break;
+ /* msg to pf/vf */
+ case MSG_CHAN_END_VF:
+ case MSG_CHAN_END_PF:
+ lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx + MULTIPLY_BY_8(1 + MAX_EP_NUM);
+ break;
+ default:
+ lock_id = 0;
+ break;
+ }
+ if (lock_id >= MAX_HARD_SPINLOCK_NUM)
+ lock_id = 0;
+
+ return lock_id;
+}
+
+static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value)
+{
+ *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value;
+}
+
+static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data)
+{
+ *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
+}
+
+static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
+{
+ label_write((uint64_t)label_addr, virt_lock_id, 0);
+ spinlock_write(virt_addr, virt_lock_id, 0);
+ return 0;
+}
+
+/**
+ * Fun: PF init hard_spinlock addr
+ */
+static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
+{
+ int lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_RISC);
+
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + HW_LABEL_OFFSET);
+ lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_VF);
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + HW_LABEL_OFFSET);
+ return 0;
+}
+
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->is_pf)
+ return 0;
+ return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX]));
+}
+
+pthread_spinlock_t chan_lock;
+int zxdh_msg_chan_init(void)
+{
+ uint16_t seq_id = 0;
+
+ g_dev_stat.dev_cnt++;
+ if (g_dev_stat.is_res_init)
+ return BAR_MSG_OK;
+
+ pthread_spin_init(&chan_lock, 0);
+ g_seqid_ring.cur_id = 0;
+ pthread_spin_init(&g_seqid_ring.lock, 0);
+
+ for (seq_id = 0; seq_id < BAR_SEQID_NUM_MAX; seq_id++) {
+ struct seqid_item *reps_info = &(g_seqid_ring.reps_info_tbl[seq_id]);
+
+ reps_info->id = seq_id;
+ reps_info->flag = REPS_INFO_FLAG_USABLE;
+ }
+ g_dev_stat.is_res_init = true;
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_msg_chan_exit(void)
+{
+ if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0))
+ return BAR_MSG_OK;
+
+ g_dev_stat.is_res_init = false;
+ return BAR_MSG_OK;
+}
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
new file mode 100644
index 0000000000..a619e6ae21
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_MSG_H_
+#define _ZXDH_MSG_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <ethdev_driver.h>
+
+enum DRIVER_TYPE {
+ MSG_CHAN_END_MPF = 0,
+ MSG_CHAN_END_PF,
+ MSG_CHAN_END_VF,
+ MSG_CHAN_END_RISC,
+};
+
+enum BAR_MSG_RTN {
+ BAR_MSG_OK = 0,
+ BAR_MSG_ERR_MSGID,
+ BAR_MSG_ERR_NULL,
+ BAR_MSG_ERR_TYPE, /* Message type exception */
+ BAR_MSG_ERR_MODULE, /* Module ID exception */
+ BAR_MSG_ERR_BODY_NULL, /* Message body exception */
+ BAR_MSG_ERR_LEN, /* Message length exception */
+ BAR_MSG_ERR_TIME_OUT, /* Message sending length too long */
+ BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions*/
+ BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer*/
+ BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration*/
+ BAR_MSG_ERR_UNGISTER, /* Repeated deregistration*/
+ /**
+ * The sending interface parameter boundary structure pointer is empty
+ */
+ BAR_MSG_ERR_NULL_PARA,
+ BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short*/
+ /**
+ * Unable to find the corresponding message processing function for this module
+ */
+ BAR_MSG_ERR_MODULE_NOEXIST,
+ /**
+ * The virtual address in the parameters passed in by the sending interface is empty
+ */
+ BAR_MSG_ERR_VIRTADDR_NULL,
+ BAR_MSG_ERR_REPLY, /* sync msg resp_error */
+ BAR_MSG_ERR_MPF_NOT_SCANNED,
+ BAR_MSG_ERR_KERNEL_READY,
+ BAR_MSG_ERR_USR_RET_ERR,
+ BAR_MSG_ERR_ERR_PCIEID,
+ BAR_MSG_ERR_SOCKET, /* netlink sockte err */
+};
+
+int zxdh_msg_chan_init(void);
+int zxdh_bar_msg_chan_exit(void);
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_MSG_H_ */
--
2.27.0
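For reference, a worked example of pcie_id_to_hard_lock() above, following directly from the PCIEID masks and MULTIPLY_BY_8 (the example pcie_id is illustrative):

    /*
     * pcie_id = 0x1234 -> ep_idx = (0x1234 & PCIEID_EP_IDX_MASK) >> 12 = 1
     *                     pf_idx = (0x1234 & PCIEID_PF_IDX_MASK) >> 8  = 2
     *
     * dst = MSG_CHAN_END_RISC:    lock_id = 8*1 + 2             = 10
     * dst = MSG_CHAN_END_PF/_VF:  lock_id = 8*1 + 2 + 8*(1 + 4) = 50
     *                                            (MAX_EP_NUM == 4)
     * Both are below MAX_HARD_SPINLOCK_NUM (511), so they are used as-is.
     */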
* [PATCH v5 5/9] net/zxdh: add msg chan enable implementation
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-15 5:44 ` [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
2024-10-15 5:44 ` [PATCH v5 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
@ 2024-10-15 5:44 ` Junlong Wang
2024-10-15 5:44 ` [PATCH v5 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
` (3 subsequent siblings)
6 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
Add msg chan enable implementation to support
sending messages to get device info.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 6 +
drivers/net/zxdh/zxdh_ethdev.h | 12 +
drivers/net/zxdh/zxdh_msg.c | 655 ++++++++++++++++++++++++++++++++-
drivers/net/zxdh/zxdh_msg.h | 127 +++++++
4 files changed, 796 insertions(+), 4 deletions(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 66b57c4e59..d95ab4471a 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_msg_chan_enable(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 24eb3a5ca0..a51181f1ce 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -29,10 +29,22 @@ extern "C" {
#define ZXDH_NUM_BARS 2
+union VPORT {
+ uint16_t vport;
+ struct {
+ uint16_t vfid:8;
+ uint16_t pfid:3;
+ uint16_t vf_flag:1;
+ uint16_t epid:3;
+ uint16_t direct_flag:1;
+ };
+};
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ union VPORT vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
uint64_t host_features;
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 4928711ad8..4e4930e5a1 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -35,10 +35,82 @@
#define MAX_EP_NUM (4)
#define MAX_HARD_SPINLOCK_NUM (511)
-#define BAR0_SPINLOCK_OFFSET (0x4000)
-#define FW_SHRD_OFFSET (0x5000)
-#define FW_SHRD_INNER_HW_LABEL_PAT (0x800)
-#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT)
+#define LOCK_PRIMARY_ID_MASK (0x8000)
+/* bar offset */
+#define BAR0_CHAN_RISC_OFFSET (0x2000)
+#define BAR0_CHAN_PFVF_OFFSET (0x3000)
+#define BAR0_SPINLOCK_OFFSET (0x4000)
+#define FW_SHRD_OFFSET (0x5000)
+#define FW_SHRD_INNER_HW_LABEL_PAT (0x800)
+#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT)
+#define ZXDH_CTRLCH_OFFSET (0x2000)
+#define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET)
+#define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET)
+#define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET)
+#define CHAN_PFVF_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_PFVF_OFFSET)
+
+#define REPS_HEADER_LEN_OFFSET 1
+#define REPS_HEADER_PAYLOAD_OFFSET 4
+#define REPS_HEADER_REPLYED 0xff
+
+#define BAR_MSG_CHAN_USABLE 0
+#define BAR_MSG_CHAN_USED 1
+
+#define BAR_MSG_POL_MASK (0x10)
+#define BAR_MSG_POL_OFFSET (4)
+
+#define BAR_ALIGN_WORD_MASK 0xfffffffc
+#define BAR_MSG_VALID_MASK 1
+#define BAR_MSG_VALID_OFFSET 0
+
+#define REPS_INFO_FLAG_USABLE 0x00
+#define REPS_INFO_FLAG_USED 0xa0
+
+#define BAR_PF_NUM 7
+#define BAR_VF_NUM 256
+#define BAR_INDEX_PF_TO_VF 0
+#define BAR_INDEX_MPF_TO_MPF 0xff
+#define BAR_INDEX_MPF_TO_PFVF 0
+#define BAR_INDEX_PFVF_TO_MPF 0
+
+#define MAX_HARD_SPINLOCK_ASK_TIMES (1000)
+#define SPINLOCK_POLLING_SPAN_US (100)
+
+#define BAR_MSG_SRC_NUM 3
+#define BAR_MSG_SRC_MPF 0
+#define BAR_MSG_SRC_PF 1
+#define BAR_MSG_SRC_VF 2
+#define BAR_MSG_SRC_ERR 0xff
+#define BAR_MSG_DST_NUM 3
+#define BAR_MSG_DST_RISC 0
+#define BAR_MSG_DST_MPF 2
+#define BAR_MSG_DST_PFVF 1
+#define BAR_MSG_DST_ERR 0xff
+
+#define LOCK_TYPE_HARD (1)
+#define LOCK_TYPE_SOFT (0)
+#define BAR_INDEX_TO_RISC 0
+
+#define BAR_SUBCHAN_INDEX_SEND 0
+#define BAR_SUBCHAN_INDEX_RECV 1
+
+uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
+ {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND},
+ {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV},
+ {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV, BAR_SUBCHAN_INDEX_RECV}
+};
+
+uint8_t chan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
+ {BAR_INDEX_TO_RISC, BAR_INDEX_MPF_TO_PFVF, BAR_INDEX_MPF_TO_MPF},
+ {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF},
+ {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF}
+};
+
+uint8_t lock_type_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
+ {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD},
+ {LOCK_TYPE_SOFT, LOCK_TYPE_SOFT, LOCK_TYPE_HARD},
+ {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD}
+};
struct dev_stat {
bool is_mpf_scanned;
@@ -97,6 +169,11 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da
*(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
}
+static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id)
+{
+ return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id);
+}
+
static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
{
label_write((uint64_t)label_addr, virt_lock_id, 0);
@@ -104,6 +181,28 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u
return 0;
}
+static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr,
+ uint64_t label_addr, uint16_t primary_id)
+{
+ uint32_t lock_rd_cnt = 0;
+
+ do {
+ /* read to lock */
+ uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id);
+
+ if (spl_val == 0) {
+ label_write((uint64_t)label_addr, virt_lock_id, primary_id);
+ break;
+ }
+ rte_delay_us_block(SPINLOCK_POLLING_SPAN_US);
+ lock_rd_cnt++;
+ } while (lock_rd_cnt < MAX_HARD_SPINLOCK_ASK_TIMES);
+ if (lock_rd_cnt >= MAX_HARD_SPINLOCK_ASK_TIMES)
+ return -1;
+
+ return 0;
+}
+
/**
* Fun: PF init hard_spinlock addr
*/
@@ -119,6 +218,554 @@ static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
return 0;
}
+static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid)
+{
+ struct seqid_item *seqid_reps_info = NULL;
+
+ pthread_spin_lock(&g_seqid_ring.lock);
+ uint16_t g_id = g_seqid_ring.cur_id;
+ uint16_t count = 0;
+
+ do {
+ count++;
+ ++g_id;
+ g_id %= BAR_SEQID_NUM_MAX;
+ seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id];
+ } while ((seqid_reps_info->flag != REPS_INFO_FLAG_USABLE) && (count < BAR_SEQID_NUM_MAX));
+ int rc;
+
+ if (count >= BAR_SEQID_NUM_MAX) {
+ rc = -1;
+ goto out;
+ }
+ seqid_reps_info->flag = REPS_INFO_FLAG_USED;
+ g_seqid_ring.cur_id = g_id;
+ *msgid = g_id;
+ rc = BAR_MSG_OK;
+
+out:
+ pthread_spin_unlock(&g_seqid_ring.lock);
+ return rc;
+}
+
+static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id)
+{
+ int ret = zxdh_bar_chan_msgid_allocate(msg_id);
+
+ if (ret != BAR_MSG_OK)
+ return BAR_MSG_ERR_MSGID;
+
+ PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id);
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id];
+
+ reps_info->reps_addr = result->recv_buffer;
+ reps_info->buffer_len = result->buffer_len;
+ return BAR_MSG_OK;
+}
+
+static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src)
+{
+ uint8_t src_index = 0;
+
+ switch (src) {
+ case MSG_CHAN_END_MPF:
+ src_index = BAR_MSG_SRC_MPF;
+ break;
+ case MSG_CHAN_END_PF:
+ src_index = BAR_MSG_SRC_PF;
+ break;
+ case MSG_CHAN_END_VF:
+ src_index = BAR_MSG_SRC_VF;
+ break;
+ default:
+ src_index = BAR_MSG_SRC_ERR;
+ break;
+ }
+ return src_index;
+}
+
+static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst)
+{
+ uint8_t dst_index = 0;
+
+ switch (dst) {
+ case MSG_CHAN_END_MPF:
+ dst_index = BAR_MSG_DST_MPF;
+ break;
+ case MSG_CHAN_END_PF:
+ dst_index = BAR_MSG_DST_PFVF;
+ break;
+ case MSG_CHAN_END_VF:
+ dst_index = BAR_MSG_DST_PFVF;
+ break;
+ case MSG_CHAN_END_RISC:
+ dst_index = BAR_MSG_DST_RISC;
+ break;
+ default:
+ dst_index = BAR_MSG_SRC_ERR;
+ break;
+ }
+ return dst_index;
+}
+
+static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result)
+{
+ uint8_t src_index = 0;
+ uint8_t dst_index = 0;
+
+ if (in == NULL || result == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null para.");
+ return BAR_MSG_ERR_NULL_PARA;
+ }
+ src_index = zxdh_bar_msg_src_index_trans(in->src);
+ dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+
+ if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist.");
+ return BAR_MSG_ERR_TYPE;
+ }
+ if (in->module_id >= BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id);
+ return BAR_MSG_ERR_MODULE;
+ }
+ if (in->payload_addr == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null message.");
+ return BAR_MSG_ERR_BODY_NULL;
+ }
+ if (in->payload_len > BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len);
+ return BAR_MSG_ERR_LEN;
+ }
+ if (in->virt_addr == 0 || result->recv_buffer == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL.");
+ return BAR_MSG_ERR_VIRTADDR_NULL;
+ }
+ if (result->buffer_len < REPS_HEADER_PAYLOAD_OFFSET)
+ PMD_MSG_LOG(ERR,
+ "recv buffer len: %" PRIu64 " is short than minimal 4 bytes\n",
+ result->buffer_len);
+
+ return BAR_MSG_OK;
+}
+
+static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id)
+{
+ return virt_addr + (2 * chan_id + subchan_id) * BAR_MSG_ADDR_CHAN_INTERVAL;
+}
+
+static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+ uint16_t chan_id = chan_id_tbl[src_index][dst_index];
+ uint16_t subchan_id = subchan_id_tbl[src_index][dst_index];
+
+ *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id);
+ return BAR_MSG_OK;
+}
+
+static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u\n", src_pcieid, lockid);
+ if (dst == MSG_CHAN_END_RISC)
+ ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + CHAN_RISC_LABEL_OFFSET,
+ src_pcieid | LOCK_PRIMARY_ID_MASK);
+ else
+ ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + CHAN_PFVF_LABEL_OFFSET,
+ src_pcieid | LOCK_PRIMARY_ID_MASK);
+
+ return ret;
+}
+
+static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u\n", src_pcieid, lockid);
+ if (dst == MSG_CHAN_END_RISC)
+ zxdh_spinlock_unlock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + CHAN_RISC_LABEL_OFFSET);
+ else
+ zxdh_spinlock_unlock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + CHAN_PFVF_LABEL_OFFSET);
+}
+
+pthread_spinlock_t chan_lock;
+static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist.\n");
+ return BAR_MSG_ERR_TYPE;
+ }
+ uint16_t idx = lock_type_tbl[src_index][dst_index];
+
+ if (idx == LOCK_TYPE_SOFT)
+ pthread_spin_lock(&chan_lock);
+ else
+ ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr);
+
+ if (ret != 0)
+ PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.\n", src_pcieid);
+
+ return ret;
+}
+
+static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist.\n");
+ return BAR_MSG_ERR_TYPE;
+ }
+ uint16_t idx = lock_type_tbl[src_index][dst_index];
+
+ if (idx == LOCK_TYPE_SOFT)
+ pthread_spin_unlock(&chan_lock);
+ else
+ zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr);
+
+ return BAR_MSG_OK;
+}
+
+static void zxdh_bar_chan_msgid_free(uint16_t msg_id)
+{
+ struct seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ pthread_spin_lock(&g_seqid_ring.lock);
+ seqid_reps_info->flag = REPS_INFO_FLAG_USABLE;
+ PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id);
+ pthread_spin_unlock(&g_seqid_ring.lock);
+}
+
+static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset,
+ uint32_t data)
+{
+ uint32_t align_offset = (offset & BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "subchan addr: %" PRIu64 " offset: %" PRIu32,
+ subchan_addr, align_offset);
+ return -1;
+ }
+ *(uint32_t *)(subchan_addr + align_offset) = data;
+ return 0;
+}
+
+static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset,
+ uint32_t *pdata)
+{
+ uint32_t align_offset = (offset & BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "subchan addr: %" PRIu64 " offset: %" PRIu32,
+ subchan_addr, align_offset);
+ return -1;
+ }
+ *pdata = *(uint32_t *)(subchan_addr + align_offset);
+ return 0;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr,
+ struct bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (BAR_MSG_PAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx));
+
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_get(uint64_t subchan_addr,
+ struct bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (BAR_MSG_PAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx);
+
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg,
+ uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_write(subchan_addr, 4 * ix + BAR_MSG_PAYLOAD_OFFSET,
+ *(data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ for (ix = 0; ix < remain; ix++)
+ remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix);
+
+ zxdh_bar_chan_reg_write(subchan_addr, 4 * count +
+ BAR_MSG_PAYLOAD_OFFSET, remain_data);
+ }
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * ix + BAR_MSG_PAYLOAD_OFFSET, (data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * count +
+ BAR_MSG_PLAYLOAD_OFFSET, &remain_data);
+ for (ix = 0; ix < remain; ix++)
+ *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix);
+ }
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data);
+ data &= (~BAR_MSG_VALID_MASK);
+ data |= (uint32_t)valid_label;
+ zxdh_bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data);
+ return BAR_MSG_OK;
+}
+
+static uint8_t temp_msg[BAR_MSG_ADDR_CHAN_INTERVAL];
+static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr,
+ uint16_t payload_len, struct bar_msg_header *msg_header)
+{
+ /* Only the final valid-flag write decides the result; the get calls
+ * read the freshly written header and payload back into temp_msg.
+ */
+ zxdh_bar_chan_msg_header_set(subchan_addr, msg_header);
+ zxdh_bar_chan_msg_header_get(subchan_addr,
+ (struct bar_msg_header *)temp_msg);
+
+ zxdh_bar_chan_msg_payload_set(subchan_addr,
+ (uint8_t *)(payload_addr), payload_len);
+ zxdh_bar_chan_msg_payload_get(subchan_addr,
+ temp_msg, payload_len);
+
+ return zxdh_bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USED);
+}
+
+static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data);
+ if (BAR_MSG_CHAN_USABLE == (data & BAR_MSG_VALID_MASK))
+ return BAR_MSG_CHAN_USABLE;
+
+ return BAR_MSG_CHAN_USED;
+}
+
+static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data);
+ data &= (~(uint32_t)BAR_MSG_POL_MASK);
+ data |= ((uint32_t)label << BAR_MSG_POL_OFFSET);
+ zxdh_bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data);
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr,
+ uint64_t recv_buffer, uint16_t buffer_len)
+{
+ struct bar_msg_header msg_header = {0};
+ uint16_t msg_id = 0;
+ uint16_t msg_len = 0;
+
+ zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header);
+ msg_id = msg_header.msg_id;
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ if (reps_info->flag != REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id);
+ return BAR_MSG_ERR_REPLY;
+ }
+ msg_len = msg_header.len;
+
+ if (msg_len > buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u",
+ buffer_len, msg_len + 4);
+ return BAR_MSG_ERR_REPSBUFF_LEN;
+ }
+ uint8_t *recv_msg = (uint8_t *)recv_buffer;
+
+ zxdh_bar_chan_msg_payload_get(subchan_addr,
+ recv_msg + REPS_HEADER_PAYLOAD_OFFSET, msg_len);
+ *(uint16_t *)(recv_msg + REPS_HEADER_LEN_OFFSET) = msg_len;
+ *recv_msg = REPS_HEADER_REPLYED; /* set reps's valid */
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result)
+{
+ struct bar_msg_header msg_header = {0};
+ uint16_t seq_id = 0;
+ uint64_t subchan_addr = 0;
+ uint32_t time_out_cnt = 0;
+ uint16_t valid = 0;
+ int ret = 0;
+
+ ret = zxdh_bar_chan_send_para_check(in, result);
+ if (ret != BAR_MSG_OK)
+ goto exit;
+
+ ret = zxdh_bar_chan_save_recv_info(result, &seq_id);
+ if (ret != BAR_MSG_OK)
+ goto exit;
+
+ zxdh_bar_chan_subchan_addr_get(in, &subchan_addr);
+
+ msg_header.sync = BAR_CHAN_MSG_SYNC;
+ msg_header.emec = in->emec;
+ msg_header.usr = 0;
+ msg_header.rsv = 0;
+ msg_header.module_id = in->module_id;
+ msg_header.len = in->payload_len;
+ msg_header.msg_id = seq_id;
+ msg_header.src_pcieid = in->src_pcieid;
+ msg_header.dst_pcieid = in->dst_pcieid;
+
+ ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+ if (ret != BAR_MSG_OK) {
+ zxdh_bar_chan_msgid_free(seq_id);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header);
+
+ do {
+ rte_delay_us_block(BAR_MSG_POLLING_SPAN);
+ valid = zxdh_bar_msg_valid_stat_get(subchan_addr);
+ ++time_out_cnt;
+ } while ((time_out_cnt < BAR_MSG_TIMEOUT_TH) && (valid == BAR_MSG_CHAN_USED));
+
+ if ((time_out_cnt == BAR_MSG_TIMEOUT_TH) && (valid != BAR_MSG_CHAN_USABLE)) {
+ zxdh_bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USABLE);
+ zxdh_bar_chan_msg_poltag_set(subchan_addr, 0);
+ PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out.");
+ ret = BAR_MSG_ERR_TIME_OUT;
+ } else {
+ ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr,
+ (uint64_t)result->recv_buffer, result->buffer_len);
+ }
+ zxdh_bar_chan_msgid_free(seq_id);
+ zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+
+exit:
+ return ret;
+}
+
+static int bar_get_sum(uint8_t *ptr, uint8_t len)
+{
+ uint64_t sum = 0;
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ sum += *(ptr + idx);
+
+ return (uint16_t)sum;
+}
+
+static int zxdh_bar_chan_enable(struct msix_para *_msix_para, uint16_t *vport)
+{
+ struct bar_recv_msg recv_msg = {0};
+ int ret = 0;
+ int check_token = 0;
+ int sum_res = 0;
+
+ if (!_msix_para)
+ return BAR_MSG_ERR_NULL;
+
+ struct msix_msg msix_msg = {
+ .pcie_id = _msix_para->pcie_id,
+ .vector_risc = _msix_para->vector_risc,
+ .vector_pfvf = _msix_para->vector_pfvf,
+ .vector_mpf = _msix_para->vector_mpf,
+ };
+ struct zxdh_pci_bar_msg in = {
+ .virt_addr = _msix_para->virt_addr,
+ .payload_addr = &msix_msg,
+ .payload_len = sizeof(msix_msg),
+ .emec = 0,
+ .src = _msix_para->driver_type,
+ .dst = MSG_CHAN_END_RISC,
+ .module_id = BAR_MODULE_MISX,
+ .src_pcieid = _msix_para->pcie_id,
+ .dst_pcieid = 0,
+ .usr = 0,
+ };
+
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = &recv_msg,
+ .buffer_len = sizeof(recv_msg),
+ };
+
+ ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+ if (ret != BAR_MSG_OK)
+ return -ret;
+
+ check_token = recv_msg.msix_reps.check;
+ sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg));
+
+ if (check_token != sum_res) {
+ PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.\n", sum_res, check_token);
+ return BAR_MSG_ERR_REPLY;
+ }
+ *vport = recv_msg.msix_reps.vport;
+ PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.\n", _msix_para->pcie_id);
+ return BAR_MSG_OK;
+}
+
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct msix_para msix_info = {
+ .vector_risc = MSIX_FROM_RISCV,
+ .vector_pfvf = MSIX_FROM_PFVF,
+ .vector_mpf = MSIX_FROM_MPF,
+ .pcie_id = hw->pcie_id,
+ .driver_type = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF,
+ .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET),
+ };
+
+ return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport);
+}
+
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
{
struct zxdh_hw *hw = dev->data->dev_private;
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index a619e6ae21..88d27756e2 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -13,6 +13,19 @@ extern "C" {
#include <ethdev_driver.h>
+#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+
+#define BAR_MSG_POLLING_SPAN 100 /* sleep us */
+#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN)
+#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN)
+#define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) /* 10s */
+
+#define BAR_CHAN_MSG_SYNC 0
+
+#define BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */
+#define BAR_MSG_PAYLOAD_OFFSET (sizeof(struct bar_msg_header))
+#define BAR_MSG_PAYLOAD_MAX_LEN (BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header))
+
enum DRIVER_TYPE {
MSG_CHAN_END_MPF = 0,
MSG_CHAN_END_PF,
@@ -20,6 +33,13 @@ enum DRIVER_TYPE {
MSG_CHAN_END_RISC,
};
+enum MSG_VEC {
+ MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
+ MSIX_FROM_MPF,
+ MSIX_FROM_RISCV,
+ MSG_VEC_NUM,
+};
+
enum BAR_MSG_RTN {
BAR_MSG_OK = 0,
BAR_MSG_ERR_MSGID,
@@ -54,10 +74,117 @@ enum BAR_MSG_RTN {
BAR_MSG_ERR_SOCKET, /* netlink sockte err */
};
+enum bar_module_id {
+ BAR_MODULE_DBG = 0, /* 0: debug */
+ BAR_MODULE_TBL, /* 1: resource table */
+ BAR_MODULE_MISX, /* 2: config msix */
+ BAR_MODULE_SDA, /* 3: */
+ BAR_MODULE_RDMA, /* 4: */
+ BAR_MODULE_DEMO, /* 5: channel test */
+ BAR_MODULE_SMMU, /* 6: */
+ BAR_MODULE_MAC, /* 7: mac rx/tx stats */
+ BAR_MODULE_VDPA, /* 8: vdpa live migration */
+ BAR_MODULE_VQM, /* 9: vqm live migration */
+ BAR_MODULE_NP, /* 10: vf msg callback np */
+ BAR_MODULE_VPORT, /* 11: get vport */
+ BAR_MODULE_BDF, /* 12: get bdf */
+ BAR_MODULE_RISC_READY, /* 13: */
+ BAR_MODULE_REVERSE, /* 14: byte stream reverse */
+ BAR_MDOULE_NVME, /* 15: */
+ BAR_MDOULE_NPSDK, /* 16: */
+ BAR_MODULE_NP_TODO, /* 17: */
+ MODULE_BAR_MSG_TO_PF, /* 18: */
+ MODULE_BAR_MSG_TO_VF, /* 19: */
+
+ MODULE_FLASH = 32,
+ BAR_MODULE_OFFSET_GET = 33,
+ BAR_EVENT_OVS_WITH_VCB = 36,
+
+ BAR_MSG_MODULE_NUM = 100,
+};
+
+struct msix_para {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+ uint64_t virt_addr;
+ uint16_t driver_type; /* refer to DRIVER_TYPE */
+};
+
+struct msix_msg {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+};
+
+struct zxdh_pci_bar_msg {
+ uint64_t virt_addr; /* bar addr */
+ void *payload_addr;
+ uint16_t payload_len;
+ uint16_t emec;
+ uint16_t src; /* refer to BAR_DRIVER_TYPE */
+ uint16_t dst; /* refer to BAR_DRIVER_TYPE */
+ uint16_t module_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid;
+ uint16_t usr;
+};
+
+struct bar_msix_reps {
+ uint16_t pcie_id;
+ uint16_t check;
+ uint16_t vport;
+ uint16_t rsv;
+} __rte_packed;
+
+struct bar_offset_reps {
+ uint16_t check;
+ uint16_t rsv;
+ uint32_t offset;
+ uint32_t length;
+} __rte_packed;
+
+struct bar_recv_msg {
+ uint8_t reps_ok;
+ uint16_t reps_len;
+ uint8_t rsv;
+ union {
+ struct bar_msix_reps msix_reps;
+ struct bar_offset_reps offset_reps;
+ } __rte_packed;
+} __rte_packed;
+
+struct zxdh_msg_recviver_mem {
+ void *recv_buffer; /* first 4B is head, followed by payload */
+ uint64_t buffer_len;
+};
+
+struct bar_msg_header {
+ uint8_t valid : 1; /* used by zxdh_bar_chan_msg_valid_set/get */
+ uint8_t sync : 1;
+ uint8_t emec : 1; /* emergency? */
+ uint8_t ack : 1; /* ack msg? */
+ uint8_t poll : 1;
+ uint8_t usr : 1;
+ uint8_t rsv;
+ uint16_t module_id;
+ uint16_t len;
+ uint16_t msg_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid; /* used in PF-->VF */
+};
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result);
+
#ifdef __cplusplus
}
#endif
--
2.27.0
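Two derived numbers from this patch may help when reading the send path: the sub-channel addressing and the synchronous-send timeout (both follow directly from the macros above):

    /*
     * Sub-channel addressing: each channel is a pair of 2 KB
     * sub-channels (BAR_MSG_ADDR_CHAN_INTERVAL == 2048) laid out
     * back to back:
     *
     *     subchan_addr = virt_addr + (2 * chan_id + subchan_id) * 2048
     *
     * e.g. chan_id = 1, subchan_id = BAR_SUBCHAN_INDEX_RECV (1)
     *      -> offset (2*1 + 1) * 2048 = 0x1800.
     *
     * Send timeout: zxdh_bar_chan_sync_msg_send() polls every
     * BAR_MSG_POLLING_SPAN = 100 us, up to BAR_MSG_TIMEOUT_TH =
     * 10*1000*1000 / 100 = 100000 polls, i.e. a 10 second budget.
     */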
* [PATCH v5 6/9] net/zxdh: add zxdh get device backend infos
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
` (2 preceding siblings ...)
2024-10-15 5:44 ` [PATCH v5 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
@ 2024-10-15 5:44 ` Junlong Wang
2024-10-15 5:44 ` [PATCH v5 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
` (2 subsequent siblings)
6 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
Add zxdh support for getting device backend info,
using the msg chan to send the query messages.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_common.c | 249 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_common.h | 30 ++++
drivers/net/zxdh/zxdh_ethdev.c | 35 +++++
drivers/net/zxdh/zxdh_ethdev.h | 5 +
drivers/net/zxdh/zxdh_msg.c | 3 -
drivers/net/zxdh/zxdh_msg.h | 27 +++-
7 files changed, 346 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 9d0b5b9fd3..9aec47e68f 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -17,4 +17,5 @@ sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
'zxdh_msg.c',
+ 'zxdh_common.c',
)
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
new file mode 100644
index 0000000000..140d0f2322
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <string.h>
+
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+#include "zxdh_common.h"
+
+#define ZXDH_MSG_RSP_SIZE_MAX 512
+
+#define ZXDH_COMMON_TABLE_READ 0
+#define ZXDH_COMMON_TABLE_WRITE 1
+
+#define ZXDH_COMMON_FIELD_PHYPORT 6
+
+#define RSC_TBL_CONTENT_LEN_MAX (257 * 2)
+
+#define REPS_HEADER_PAYLOAD_OFFSET 4
+#define TBL_MSG_PRO_SUCCESS 0xaa
+
+struct zxdh_common_msg {
+ uint8_t type; /* 0:read table 1:write table */
+ uint8_t field;
+ uint16_t pcie_id;
+ uint16_t slen; /* Data length for write table */
+ uint16_t reserved;
+} __rte_packed;
+
+struct zxdh_common_rsp_hdr {
+ uint8_t rsp_status;
+ uint16_t rsp_len;
+ uint8_t reserved;
+ uint8_t payload_status;
+ uint8_t rsv;
+ uint16_t payload_len;
+} __rte_packed;
+
+struct tbl_msg_header {
+ uint8_t type; /* r/w */
+ uint8_t field;
+ uint16_t pcieid;
+ uint16_t slen;
+ uint16_t rsv;
+};
+
+struct tbl_msg_reps_header {
+ uint8_t check;
+ uint8_t rsv;
+ uint16_t len;
+};
+
+static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ uint8_t type,
+ uint8_t field,
+ void *buff,
+ uint16_t buff_size)
+{
+ uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size;
+
+ desc->payload_addr = rte_zmalloc(NULL, msg_len, 0);
+ if (unlikely(desc->payload_addr == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate msg_data");
+ return -ENOMEM;
+ }
+ desc->payload_len = msg_len;
+ struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr;
+
+ msg_data->type = type;
+ msg_data->field = field;
+ msg_data->pcie_id = hw->pcie_id;
+ msg_data->slen = buff_size;
+ if (buff_size != 0)
+ rte_memcpy(msg_data + 1, buff, buff_size);
+
+ return 0;
+}
+
+static int32_t zxdh_send_command(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ enum bar_module_id module_id,
+ struct zxdh_msg_recviver_mem *msg_rsp)
+{
+ desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+ desc->src = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF;
+ desc->dst = MSG_CHAN_END_RISC;
+ desc->module_id = module_id;
+ desc->src_pcieid = hw->pcie_id;
+
+ msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX;
+ msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0);
+ if (unlikely(msg_rsp->recv_buffer == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate messages response");
+ return -ENOMEM;
+ }
+
+ if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response");
+ rte_free(msg_rsp->recv_buffer);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp,
+ void *buff, uint16_t len)
+{
+ struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer;
+
+ if ((rsp_hdr->payload_status != 0xaa) || (rsp_hdr->payload_len != len)) {
+ PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d",
+ rsp_hdr->payload_status, rsp_hdr->payload_len);
+ return -1;
+ }
+ if (len != 0)
+ rte_memcpy(buff, rsp_hdr + 1, len);
+
+ return 0;
+}
+
+static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_msg_recviver_mem msg_rsp;
+ struct zxdh_pci_bar_msg desc;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size);
+ if (ret != 0)
+ goto free_rsp_data;
+
+free_rsp_data:
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT,
+ (void *)phyport, sizeof(*phyport));
+ return ret;
+}
+
+static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ param->pcie_id = hw->pcie_id;
+ param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET;
+ param->src_type = BAR_MODULE_TBL;
+}
+
+static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len)
+{
+ if (!res || !dev)
+ return BAR_MSG_ERR_NULL;
+
+ struct tbl_msg_header tbl_msg = {
+ .type = TBL_TYPE_READ,
+ .field = field,
+ .pcieid = dev->pcie_id,
+ .slen = 0,
+ .rsv = 0,
+ };
+
+ struct zxdh_pci_bar_msg in = {0};
+
+ in.virt_addr = dev->virt_addr;
+ in.payload_addr = &tbl_msg;
+ in.payload_len = sizeof(tbl_msg);
+ in.src = dev->src_type;
+ in.dst = MSG_CHAN_END_RISC;
+ in.module_id = BAR_MODULE_TBL;
+ in.src_pcieid = dev->pcie_id;
+
+ uint8_t recv_buf[RSC_TBL_CONTENT_LEN_MAX + 8] = {0};
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = recv_buf,
+ .buffer_len = sizeof(recv_buf),
+ };
+ int ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+
+ if (ret != BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR,
+ "send sync_msg failed. pcieid: 0x%x, ret: %d.\n", dev->pcie_id, ret);
+ return ret;
+ }
+ struct tbl_msg_reps_header *tbl_reps =
+ (struct tbl_msg_reps_header *)(recv_buf + REPS_HEADER_PAYLOAD_OFFSET);
+
+ if (tbl_reps->check != TBL_MSG_PRO_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "get resource_field failed. pcieid: 0x%x, check: 0x%x.", dev->pcie_id, tbl_reps->check);
+ return -1;
+ }
+ *len = tbl_reps->len;
+ memcpy(res,
+ (recv_buf + REPS_HEADER_PAYLOAD_OFFSET + sizeof(struct tbl_msg_reps_header)), *len);
+ return ret;
+}
+
+static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id)
+{
+ uint8_t reps = 0;
+ uint16_t reps_len = 0;
+
+ if (zxdh_get_res_info(in, TBL_FIELD_PNLID, &reps, &reps_len) != BAR_MSG_OK)
+ return -1;
+
+ *panel_id = reps;
+ return BAR_MSG_OK;
+}
+
+int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid)
+{
+ struct zxdh_res_para param;
+
+ zxdh_fill_res_para(dev, &param);
+ int32_t ret = zxdh_get_res_panel_id(&param, pannelid);
+ return ret;
+}
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
new file mode 100644
index 0000000000..ec7011e820
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_COMMON_H_
+#define _ZXDH_COMMON_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+struct zxdh_res_para {
+ uint64_t virt_addr;
+ uint16_t pcie_id;
+ uint16_t src_type; /* refer to BAR_DRIVER_TYPE */
+};
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
+int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_COMMON_H_ */
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index d95ab4471a..ee2e1c0d5d 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -10,9 +10,21 @@
#include "zxdh_logs.h"
#include "zxdh_pci.h"
#include "zxdh_msg.h"
+#include "zxdh_common.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+uint16_t vport_to_vfid(union VPORT v)
+{
+ /* epid > 4 means a local soft queue; return the fixed vfid 1192 */
+ if (v.epid > 4)
+ return 1192;
+ if (v.vf_flag)
+ return v.epid * 256 + v.vfid;
+ else
+ return (v.epid * 8 + v.pfid) + 1152;
+}
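+/*
+ * Worked example of the mapping above (illustrative values only):
+ * a VF with epid 2 and vfid 10 maps to 2 * 256 + 10 = 522, while a
+ * PF with epid 2 and pfid 3 maps to (2 * 8 + 3) + 1152 = 1171.
+ */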
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
return ret;
}
+static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw)
+{
+ if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get phyport");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport);
+
+ hw->vfid = vport_to_vfid(hw->vport);
+
+ if (zxdh_pannelid_get(eth_dev, &hw->panel_id) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get panel_id");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get pannel id success: 0x%x", hw->panel_id);
+
+ return 0;
+}
+
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_agent_comm(eth_dev, hw);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index a51181f1ce..2351393009 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -56,6 +56,7 @@ struct zxdh_hw {
uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint16_t vfid;
uint8_t *isr;
uint8_t weak_barriers;
@@ -63,9 +64,13 @@ struct zxdh_hw {
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
uint8_t is_pf;
+ uint8_t phyport;
+ uint8_t panel_id;
uint8_t msg_chan_init;
};
+uint16_t vport_to_vfid(union VPORT v);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 4e4930e5a1..5a652a5f39 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -18,8 +18,6 @@
#define REPS_INFO_FLAG_USABLE 0x00
#define BAR_SEQID_NUM_MAX 256
-#define ZXDH_BAR0_INDEX 0
-
#define PCIEID_IS_PF_MASK (0x0800)
#define PCIEID_PF_IDX_MASK (0x0700)
#define PCIEID_VF_IDX_MASK (0x00ff)
@@ -43,7 +41,6 @@
#define FW_SHRD_OFFSET (0x5000)
#define FW_SHRD_INNER_HW_LABEL_PAT (0x800)
#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT)
-#define ZXDH_CTRLCH_OFFSET (0x2000)
#define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET)
#define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET)
#define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET)
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index 88d27756e2..5b599f8f6a 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -13,9 +13,13 @@ extern "C" {
#include <ethdev_driver.h>
+#define ZXDH_BAR0_INDEX 0
+
+#define ZXDH_CTRLCH_OFFSET (0x2000)
+
#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
-#define BAR_MSG_POLLING_SPAN 100 /* sleep us */
+#define BAR_MSG_POLLING_SPAN 100
#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN)
#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN)
#define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) /* 10s */
@@ -103,6 +107,27 @@ enum bar_module_id {
BAR_MSG_MODULE_NUM = 100,
};
+enum RES_TBL_FIELD {
+ TBL_FIELD_PCIEID = 0,
+ TBL_FIELD_BDF = 1,
+ TBL_FIELD_MSGCH = 2,
+ TBL_FIELD_DATACH = 3,
+ TBL_FIELD_VPORT = 4,
+ TBL_FIELD_PNLID = 5,
+ TBL_FIELD_PHYPORT = 6,
+ TBL_FIELD_SERDES_NUM = 7,
+ TBL_FIELD_NP_PORT = 8,
+ TBL_FIELD_SPEED = 9,
+ TBL_FIELD_HASHID = 10,
+ TBL_FIELD_NON,
+};
+
+enum TBL_MSG_TYPE {
+ TBL_TYPE_READ,
+ TBL_TYPE_WRITE,
+ TBL_TYPE_NON,
+};
+
struct msix_para {
uint16_t pcie_id;
uint16_t vector_risc;
--
2.27.0
* [PATCH v5 7/9] net/zxdh: add configure zxdh intr implementation
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
` (3 preceding siblings ...)
2024-10-15 5:44 ` [PATCH v5 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
@ 2024-10-15 5:44 ` Junlong Wang
2024-10-15 5:44 ` [PATCH v5 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
6 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 24346 bytes --]
Configure zxdh interrupts, including the RISC and DTB vectors, and release them on teardown.
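The resulting MSI-X vector layout, as implied by the defines added to
zxdh_msg.h below (treating vector 0 as the device config/LSC vector is an
assumption; the rest follow the macros directly):

    vec 0      device config / link status change
    vec 1-3    bar message channel (PF/VF and RISC) interrupts
    vec 4      DTB interrupt
    vec 5+     Rx queue interrupts (i + ZXDH_QUEUE_INTR_VEC_BASE)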
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 302 ++++++++++++++++++++++++++++++++-
drivers/net/zxdh/zxdh_ethdev.h | 8 +
drivers/net/zxdh/zxdh_msg.c | 187 ++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 11 ++
drivers/net/zxdh/zxdh_pci.c | 62 +++++++
drivers/net/zxdh/zxdh_pci.h | 12 ++
6 files changed, 581 insertions(+), 1 deletion(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index ee2e1c0d5d..4f6711c9af 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -25,6 +25,302 @@ uint16_t vport_to_vfid(union VPORT v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR);
+ }
+}
+
+static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (rte_intr_ack(dev->intr_handle) < 0)
+ return -1;
+
+ hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev));
+
+ return 0;
+}
+
+static void zxdh_devconf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+
+ if (zxdh_intr_unmask(dev) < 0)
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_fromriscv_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = 0;
+
+ virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler");
+ zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_risc2vf_intr_handler");
+ zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_frompfvf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = 0;
+
+ virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET);
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler\n");
+ zxdh_bar_irq_recv(MSG_CHAN_END_VF, MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler");
+ zxdh_bar_irq_recv(MSG_CHAN_END_PF, MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+static void zxdh_intr_cb_reg(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ /* register callback to update dev config intr */
+ rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Register RISC interrupt callbacks */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev);
+
+ tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ /* Unregister dev config intr callback */
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Unregister RISC interrupt callbacks */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev);
+ tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static int32_t zxdh_intr_disable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled)
+ return 0;
+
+ zxdh_intr_cb_unreg(dev);
+ if (rte_intr_disable(dev->intr_handle) < 0)
+ return -1;
+
+ hw->intr_enabled = 0;
+ return 0;
+}
+
+static int32_t zxdh_intr_enable(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled) {
+ zxdh_intr_cb_reg(dev);
+ ret = rte_intr_enable(dev->intr_handle);
+ if (unlikely(ret))
+ PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name);
+
+ hw->intr_enabled = 1;
+ }
+ return ret;
+}
+
+static int32_t zxdh_intr_release(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR);
+
+ zxdh_queues_unbind_intr(dev);
+ zxdh_intr_disable(dev);
+
+ rte_intr_efd_disable(dev->intr_handle);
+ rte_intr_vec_list_free(dev->intr_handle);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return 0;
+}
+
+static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint8_t i;
+
+ if (!hw->risc_intr) {
+ PMD_INIT_LOG(ERR, " to allocate risc_intr");
+ hw->risc_intr = rte_zmalloc("risc_intr",
+ ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0);
+ if (hw->risc_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate risc_intr");
+ return -ENOMEM;
+ }
+ }
+
+ for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) {
+ if (dev->intr_handle->efds[i] < 0) {
+ PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ return -1;
+ }
+
+ struct rte_intr_handle *intr_handle = hw->risc_intr + i;
+
+ intr_handle->fd = dev->intr_handle->efds[i];
+ intr_handle->type = dev->intr_handle->type;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->dtb_intr) {
+ hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0);
+ if (hw->dtb_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr");
+ return -ENOMEM;
+ }
+ }
+
+ if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) {
+ PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1);
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return -1;
+ }
+ hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1];
+ hw->dtb_intr->type = dev->intr_handle->type;
+ return 0;
+}
+
+static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+ uint16_t vec;
+
+ if (!dev->data->dev_conf.intr_conf.rxq) {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ i * 2, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ } else {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d",
+ i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec);
+ }
+ }
+ /* mask all txq intr */
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ return 0;
+}
+
+static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t ret = 0;
+
+ if (!rte_intr_cap_multiple(dev->intr_handle)) {
+ PMD_INIT_LOG(ERR, "Multiple intr vector not supported");
+ return -ENOTSUP;
+ }
+ zxdh_intr_release(dev);
+ uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM;
+
+ if (dev->data->dev_conf.intr_conf.rxq)
+ nb_efd += dev->data->nb_rx_queues;
+
+ if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) {
+ PMD_INIT_LOG(ERR, "Fail to create eventfd");
+ return -1;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM);
+ return -ENOMEM;
+ }
+ PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size);
+ if (zxdh_setup_risc_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ if (zxdh_setup_dtb_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_queues_bind_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_intr_enable(dev) < 0) {
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ return 0;
+
+free_intr_vec:
+ zxdh_intr_release(dev);
+ return ret;
+}
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -138,10 +434,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret != 0)
goto err_zxdh_init;
+ ret = zxdh_configure_intr(eth_dev);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
- zxdh_bar_msg_chan_exit();
+ zxdh_intr_release(eth_dev);
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
return ret;
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 2351393009..7c5f5940cb 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -11,6 +11,10 @@ extern "C" {
#include <rte_ether.h>
#include "ethdev_driver.h"
+#include <rte_interrupts.h>
+#include <eal_interrupts.h>
+
+#include "zxdh_queue.h"
/* ZXDH PCI vendor/device ID. */
#define PCI_VENDOR_ID_ZTE 0x1cf2
@@ -44,6 +48,9 @@ struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ struct rte_intr_handle *risc_intr;
+ struct rte_intr_handle *dtb_intr;
+ struct virtqueue **vqs;
union VPORT vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -60,6 +67,7 @@ struct zxdh_hw {
uint8_t *isr;
uint8_t weak_barriers;
+ uint8_t intr_enabled;
uint8_t use_msix;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 5a652a5f39..2a1228288f 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -91,6 +91,12 @@
#define BAR_SUBCHAN_INDEX_SEND 0
#define BAR_SUBCHAN_INDEX_RECV 1
+#define BAR_CHAN_MSG_SYNC 0
+#define BAR_CHAN_MSG_NO_EMEC 0
+#define BAR_CHAN_MSG_EMEC 1
+#define BAR_CHAN_MSG_NO_ACK 0
+#define BAR_CHAN_MSG_ACK 1
+
uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
{BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND},
{BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV},
@@ -130,6 +136,36 @@ struct seqid_ring {
};
struct seqid_ring g_seqid_ring = {0};
+static inline const char *module_id_name(int val)
+{
+ switch (val) {
+ case BAR_MODULE_DBG: return "BAR_MODULE_DBG";
+ case BAR_MODULE_TBL: return "BAR_MODULE_TBL";
+ case BAR_MODULE_MISX: return "BAR_MODULE_MISX";
+ case BAR_MODULE_SDA: return "BAR_MODULE_SDA";
+ case BAR_MODULE_RDMA: return "BAR_MODULE_RDMA";
+ case BAR_MODULE_DEMO: return "BAR_MODULE_DEMO";
+ case BAR_MODULE_SMMU: return "BAR_MODULE_SMMU";
+ case BAR_MODULE_MAC: return "BAR_MODULE_MAC";
+ case BAR_MODULE_VDPA: return "BAR_MODULE_VDPA";
+ case BAR_MODULE_VQM: return "BAR_MODULE_VQM";
+ case BAR_MODULE_NP: return "BAR_MODULE_NP";
+ case BAR_MODULE_VPORT: return "BAR_MODULE_VPORT";
+ case BAR_MODULE_BDF: return "BAR_MODULE_BDF";
+ case BAR_MODULE_RISC_READY: return "BAR_MODULE_RISC_READY";
+ case BAR_MODULE_REVERSE: return "BAR_MODULE_REVERSE";
+ case BAR_MDOULE_NVME: return "BAR_MDOULE_NVME";
+ case BAR_MDOULE_NPSDK: return "BAR_MDOULE_NPSDK";
+ case BAR_MODULE_NP_TODO: return "BAR_MODULE_NP_TODO";
+ case MODULE_BAR_MSG_TO_PF: return "MODULE_BAR_MSG_TO_PF";
+ case MODULE_BAR_MSG_TO_VF: return "MODULE_BAR_MSG_TO_VF";
+ case MODULE_FLASH: return "MODULE_FLASH";
+ case BAR_MODULE_OFFSET_GET: return "BAR_MODULE_OFFSET_GET";
+ case BAR_EVENT_OVS_WITH_VCB: return "BAR_EVENT_OVS_WITH_VCB";
+ default: return "NA";
+ }
+}
+
static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
{
uint16_t lock_id = 0;
@@ -803,3 +839,154 @@ int zxdh_bar_msg_chan_exit(void)
g_dev_stat.is_res_init = false;
return BAR_MSG_OK;
}
+
+static uint64_t zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+
+ return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+}
+
+static void zxdh_bar_msg_ack_async_msg_proc(struct bar_msg_header *msg_header,
+ uint8_t *receiver_buff)
+{
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id];
+
+ if (reps_info->flag != REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id);
+ return;
+ }
+ if (msg_header->len > reps_info->buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u",
+ reps_info->buffer_len, msg_header->len + 4);
+ goto free_id;
+ }
+ uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr;
+
+ rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len);
+ *(uint16_t *)(reps_buffer + 1) = msg_header->len;
+ *(uint8_t *)(reps_info->reps_addr) = REPS_HEADER_REPLYED;
+
+free_id:
+ zxdh_bar_chan_msgid_free(msg_header->msg_id);
+}
+
+zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[BAR_MSG_MODULE_NUM];
+static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr, struct bar_msg_header *msg_header,
+ uint8_t *receiver_buff, void *dev)
+{
+ uint8_t *reps_buffer = rte_malloc(NULL, BAR_MSG_PAYLOAD_MAX_LEN, 0);
+
+ if (reps_buffer == NULL)
+ return;
+
+ zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id];
+ uint16_t reps_len = 0;
+
+ recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev);
+ msg_header->ack = BAR_CHAN_MSG_ACK;
+ msg_header->len = reps_len;
+ zxdh_bar_chan_msg_header_set(reply_addr, msg_header);
+ zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len);
+ zxdh_bar_chan_msg_valid_set(reply_addr, BAR_MSG_CHAN_USABLE);
+ rte_free(reps_buffer);
+}
+
+static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type,
+ uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+ uint64_t recv_rep_addr;
+
+ if (sync == BAR_CHAN_MSG_SYNC)
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+ else
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id);
+
+ return recv_rep_addr;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_check(struct bar_msg_header *msg_header)
+{
+ if (msg_header->valid != BAR_MSG_CHAN_USED) {
+ PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used.");
+ return BAR_MSG_ERR_MODULE;
+ }
+ uint8_t module_id = msg_header->module_id;
+
+ if (module_id >= (uint8_t)BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id);
+ return BAR_MSG_ERR_MODULE;
+ }
+ uint16_t len = msg_header->len;
+
+ if (len > BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len);
+ return BAR_MSG_ERR_LEN;
+ }
+ if (msg_recv_func_tbl[msg_header->module_id] == NULL) {
+ PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register",
+ module_id_name(module_id), module_id);
+ return BAR_MSG_ERR_MODULE_NOEXIST;
+ }
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev)
+{
+ struct bar_msg_header msg_header;
+ uint64_t recv_addr = 0;
+ uint16_t ret = 0;
+
+ recv_addr = zxdh_recv_addr_get(src, dst, virt_addr);
+ if (recv_addr == 0) {
+ PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst);
+ return -1;
+ }
+
+ zxdh_bar_chan_msg_header_get(recv_addr, &msg_header);
+ ret = zxdh_bar_chan_msg_header_check(&msg_header);
+
+ if (ret != BAR_MSG_OK) {
+ PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret);
+ return -1;
+ }
+
+ uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0);
+ if (recved_msg == NULL) {
+ PMD_MSG_LOG(ERR, "malloc temp buff failed.");
+ return -1;
+ }
+ zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len);
+
+ uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr);
+
+ if (msg_header.sync == BAR_CHAN_MSG_SYNC) {
+ zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_valid_set(recv_addr, BAR_MSG_CHAN_USABLE);
+ if (msg_header.ack == BAR_CHAN_MSG_ACK) {
+ zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg);
+ goto exit;
+ }
+ goto exit;
+
+exit:
+ rte_free(recved_msg);
+ return BAR_MSG_OK;
+}
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index 5b599f8f6a..2e4e6d3ba6 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -16,8 +16,15 @@ extern "C" {
#define ZXDH_BAR0_INDEX 0
#define ZXDH_CTRLCH_OFFSET (0x2000)
+#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000)
#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3
+#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM)
+#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1
+#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1)
+#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) /* 5 */
+#define ZXDH_QUEUE_INTR_VEC_NUM 256
#define BAR_MSG_POLLING_SPAN 100
#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN)
@@ -202,6 +209,9 @@ struct bar_msg_header {
uint16_t dst_pcieid; /* used in PF-->VF */
};
+typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len,
+ void *reps_buffer, uint16_t *reps_len, void *dev);
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
@@ -209,6 +219,7 @@ int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
struct zxdh_msg_recviver_mem *result);
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev);
#ifdef __cplusplus
}
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index 73ec640b84..1b953c7d0a 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -96,6 +96,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
rte_write32(features >> 32, &hw->common_cfg->guest_feature);
}
+static uint16_t zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec)
+{
+ rte_write16(vec, &hw->common_cfg->msix_config);
+ return rte_read16(&hw->common_cfg->msix_config);
+}
+
+static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+ rte_write16(vec, &hw->common_cfg->queue_msix_vector);
+ return rte_read16(&hw->common_cfg->queue_msix_vector);
+}
+
+static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
+{
+ return rte_read8(hw->isr);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -103,8 +121,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_status = zxdh_set_status,
.get_features = zxdh_get_features,
.set_features = zxdh_set_features,
+ .set_queue_irq = zxdh_set_queue_irq,
+ .set_config_irq = zxdh_set_config_irq,
+ .get_isr = zxdh_get_isr,
};
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw)
+{
+ return VTPCI_OPS(hw)->get_isr(hw);
+}
+
uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw)
{
return VTPCI_OPS(hw)->get_features(hw);
@@ -288,3 +314,39 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
return 0;
}
+
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev)
+{
+ uint8_t pos = 0;
+ int32_t ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret);
+ return ZXDH_MSIX_NONE;
+ }
+ while (pos) {
+ uint8_t cap[2] = {0};
+
+ ret = rte_pci_read_config(dev, cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap[0] == PCI_CAP_ID_MSIX) {
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap));
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR,
+ "failed to read pci cap at pos: %x ret %d", pos + 2, ret);
+ break;
+ }
+ if (flags & PCI_MSIX_ENABLE)
+ return ZXDH_MSIX_ENABLED;
+ else
+ return ZXDH_MSIX_DISABLED;
+ }
+ pos = cap[1];
+ }
+ return ZXDH_MSIX_NONE;
+}
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index deda73a65a..677dadd5c8 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -22,6 +22,13 @@ enum zxdh_msix_status {
ZXDH_MSIX_ENABLED = 2
};
+/* The bit of the ISR which indicates a device has an interrupt. */
+#define ZXDH_PCI_ISR_INTR 0x1
+/* The bit of the ISR which indicates a device configuration change. */
+#define ZXDH_PCI_ISR_CONFIG 0x2
+/* Vector value used to disable MSI for queue. */
+#define ZXDH_MSI_NO_VECTOR 0x7F
+
#define PCI_CAPABILITY_LIST 0x34
#define PCI_CAP_ID_VNDR 0x09
#define PCI_CAP_ID_MSIX 0x11
@@ -124,6 +131,9 @@ struct zxdh_pci_ops {
uint64_t (*get_features)(struct zxdh_hw *hw);
void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+ uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec);
+ uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
+ uint8_t (*get_isr)(struct zxdh_hw *hw);
};
struct zxdh_hw_internal {
@@ -143,6 +153,8 @@ int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw);
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev);
#ifdef __cplusplus
}
--
2.27.0
* [PATCH v5 8/9] net/zxdh: add zxdh dev infos get ops
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
` (4 preceding siblings ...)
2024-10-15 5:44 ` [PATCH v5 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
@ 2024-10-15 5:44 ` Junlong Wang
2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
6 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 3892 bytes --]
Add support for getting zxdh device information.
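As a usage sketch (standard ethdev API, not part of this patch), an
application reads the advertised limits through rte_eth_dev_info_get():

    struct rte_eth_dev_info info;

    if (rte_eth_dev_info_get(port_id, &info) == 0)
            printf("max rxq %u, max pktlen %u\n",
                   info.max_rx_queues, info.max_rx_pktlen);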
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 62 +++++++++++++++++++++++++++++++++-
1 file changed, 61 insertions(+), 1 deletion(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 4f6711c9af..e0f2c1985b 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -12,6 +12,9 @@
#include "zxdh_msg.h"
#include "zxdh_common.h"
+#define ZXDH_MIN_RX_BUFSIZE 64
+#define ZXDH_MAX_RX_PKTLEN 14000U
+
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
uint16_t vport_to_vfid(union VPORT v)
@@ -25,6 +28,58 @@ uint16_t vport_to_vfid(union VPORT v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static uint32_t zxdh_dev_speed_capa_get(uint32_t speed)
+{
+ switch (speed) {
+ case RTE_ETH_SPEED_NUM_10G: return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G: return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G: return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G: return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G: return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G: return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G: return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G: return RTE_ETH_LINK_SPEED_200G;
+ default: return 0;
+ }
+}
+
+static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ dev_info->speed_capa = zxdh_dev_speed_capa_get(hw->speed);
+ dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX);
+ dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX);
+ dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE;
+ dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN;
+ dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS;
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER);
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM);
+
+ return 0;
+}
+
static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
{
struct zxdh_hw *hw = dev->data->dev_private;
@@ -321,6 +376,11 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+/* dev_ops for zxdh, bare necessities for basic operation */
+static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_infos_get = zxdh_dev_infos_get,
+};
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -377,7 +437,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
struct zxdh_hw *hw = eth_dev->data->dev_private;
int ret = 0;
- eth_dev->dev_ops = NULL;
+ eth_dev->dev_ops = &zxdh_eth_dev_ops;
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
--
2.27.0
* [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
` (5 preceding siblings ...)
2024-10-15 5:44 ` [PATCH v5 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
@ 2024-10-15 5:44 ` Junlong Wang
2024-10-15 15:37 ` Stephen Hemminger
2024-10-15 15:57 ` Stephen Hemminger
6 siblings, 2 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 39748 bytes --]
Provide zxdh dev configure ops covering queue checks, device
reset, resource allocation, etc.
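As a usage sketch (application-side configuration; the values are
illustrative assumptions consistent with the checks in
zxdh_dev_configure() below, which require equal Rx/Tx queue counts and
RSS or no Rx multi-queue mode):

    struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
            .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
    };

    if (rte_eth_dev_configure(port_id, 4, 4, &conf) != 0)
            rte_exit(EXIT_FAILURE, "port configure failed\n");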
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 11 +-
drivers/net/zxdh/zxdh_common.c | 119 +++++++++
drivers/net/zxdh/zxdh_common.h | 12 +
drivers/net/zxdh/zxdh_ethdev.c | 459 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 23 +-
drivers/net/zxdh/zxdh_pci.c | 97 +++++++
drivers/net/zxdh/zxdh_pci.h | 28 ++
drivers/net/zxdh/zxdh_queue.c | 131 ++++++++++
drivers/net/zxdh/zxdh_queue.h | 171 ++++++++++++
drivers/net/zxdh/zxdh_rxtx.h | 2 +-
10 files changed, 1045 insertions(+), 8 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_queue.c
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 9aec47e68f..cde96d8111 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -14,8 +14,9 @@ if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64')
endif
sources = files(
- 'zxdh_ethdev.c',
- 'zxdh_pci.c',
- 'zxdh_msg.c',
- 'zxdh_common.c',
- )
+ 'zxdh_ethdev.c',
+ 'zxdh_pci.c',
+ 'zxdh_msg.c',
+ 'zxdh_common.c',
+ 'zxdh_queue.c',
+ )
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
index 140d0f2322..9f23fcb658 100644
--- a/drivers/net/zxdh/zxdh_common.c
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -20,6 +20,7 @@
#define ZXDH_COMMON_TABLE_WRITE 1
#define ZXDH_COMMON_FIELD_PHYPORT 6
+#define ZXDH_COMMON_FIELD_DATACH 3
#define RSC_TBL_CONTENT_LEN_MAX (257 * 2)
@@ -247,3 +248,121 @@ int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid)
int32_t ret = zxdh_get_res_panel_id(¶m, pannelid);
return ret;
}
+
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ uint32_t val = *((volatile uint32_t *)(baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ *((volatile uint32_t *)(baseaddr + reg)) = val;
+}
+
+int32_t zxdh_acquire_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ /* the lock is acquired only when the enable bit reads back set */
+ if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
+ return -1;
+
+ return 0;
+}
+
+int32_t zxdh_release_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
+ var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
+ zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
+ return 0;
+ }
+
+ return -1;
+}
+
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg)
+{
+ uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val)
+{
+ *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val;
+}
+
+static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_pci_bar_msg desc;
+ struct zxdh_msg_recviver_mem msg_rsp;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+ if ((buff_size != 0) && (buff == NULL)) {
+ PMD_DRV_LOG(ERR, "Buff is invalid");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE,
+ field, buff, buff_size);
+
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0);
+ if (ret != 0)
+ goto free_rsp_data;
+
+free_rsp_data:
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_datach_set(struct rte_eth_dev *dev)
+{
+ /* payload: queue_num (2 bytes) + pch1 (2 bytes) + ... + pchn (2 bytes) */
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t buff_size = (hw->queue_num + 1) * 2;
+ void *buff = rte_zmalloc(NULL, buff_size, 0);
+
+ if (unlikely(buff == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate buff");
+ return -ENOMEM;
+ }
+ uint16_t *pdata = (uint16_t *)buff;
+ *pdata++ = hw->queue_num;
+ uint16_t i;
+
+ for (i = 0; i < hw->queue_num; i++)
+ *(pdata + i) = hw->channel_context[i].ph_chno;
+
+ int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH,
+ (void *)buff, buff_size);
+
+ if (ret != 0)
+ PMD_DRV_LOG(ERR, "Failed to setup data channel of common table");
+
+ rte_free(buff);
+ return ret;
+}
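+/*
+ * Example of the payload laid out above (illustrative values only):
+ * with queue_num = 2 and physical channels 6 and 7, the buffer sent
+ * to the backend holds three host-order uint16_t values: { 2, 6, 7 }.
+ */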
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
index ec7011e820..5205d9db54 100644
--- a/drivers/net/zxdh/zxdh_common.h
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -14,6 +14,10 @@ extern "C" {
#include "zxdh_ethdev.h"
+#define ZXDH_VF_LOCK_REG 0x90
+#define ZXDH_VF_LOCK_ENABLE_MASK 0x1
+#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10
+
struct zxdh_res_para {
uint64_t virt_addr;
uint16_t pcie_id;
@@ -23,6 +27,14 @@ struct zxdh_res_para {
int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg);
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val);
+int32_t zxdh_release_lock(struct zxdh_hw *hw);
+int32_t zxdh_acquire_lock(struct zxdh_hw *hw);
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg);
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val);
+int32_t zxdh_datach_set(struct rte_eth_dev *dev);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index e0f2c1985b..7143dea7ae 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -11,6 +11,7 @@
#include "zxdh_pci.h"
#include "zxdh_msg.h"
#include "zxdh_common.h"
+#include "zxdh_queue.h"
#define ZXDH_MIN_RX_BUFSIZE 64
#define ZXDH_MAX_RX_PKTLEN 14000U
@@ -376,8 +377,466 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+static int32_t zxdh_features_update(struct zxdh_hw *hw,
+ const struct rte_eth_rxmode *rxmode,
+ const struct rte_eth_txmode *txmode)
+{
+ uint64_t rx_offloads = rxmode->offloads;
+ uint64_t tx_offloads = txmode->offloads;
+ uint64_t req_features = hw->guest_features;
+
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM);
+
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) |
+ (1ULL << ZXDH_NET_F_GUEST_TSO6);
+
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_CSUM);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) |
+ (1ULL << ZXDH_NET_F_HOST_TSO6);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_UFO);
+
+ req_features = req_features & hw->host_features;
+ hw->guest_features = req_features;
+
+ VTPCI_OPS(hw)->set_features(hw, req_features);
+
+ if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) {
+ PMD_DRV_LOG(ERR, "rx checksum not available on this host");
+ return -ENOTSUP;
+ }
+
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
+ (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) {
+ PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host");
+ return -ENOTSUP;
+ }
+ return 0;
+}
+
+static bool rx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6);
+}
+
+static bool tx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO);
+}
+
+static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t i = 0;
+
+ const char *type = NULL;
+ struct virtqueue *vq = NULL;
+ struct rte_mbuf *buf = NULL;
+ int32_t queue_type = 0;
+
+ if (hw->vqs == NULL)
+ return;
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (!vq)
+ continue;
+
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == VTNET_RQ)
+ type = "rxq";
+ else if (queue_type == VTNET_TQ)
+ type = "txq";
+ else
+ continue;
+ PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i);
+
+ while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL)
+ rte_pktmbuf_free(buf);
+ }
+}
+
+static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t base = (queue_type == VTNET_RQ) ? 0 : 1;
+ uint16_t i = 0;
+ uint16_t j = 0;
+ uint16_t done = 0;
+ uint16_t timeout = 0;
+
+ while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ rte_delay_us_block(1000);
+ /* acquire hw lock */
+ if (zxdh_acquire_lock(hw) < 0) {
+ PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout);
+ continue;
+ }
+ /* Iterate COI table and find free channel */
+ for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) {
+ uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t));
+ uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+
+ for (j = base; j < 32; j += 2) {
+ /* Got the available channel & update COI table */
+ if ((var & (1 << j)) == 0) {
+ var |= (1 << j);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+ done = 1;
+ break;
+ }
+ }
+ if (done)
+ break;
+ }
+ break;
+ }
+ if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ zxdh_release_lock(hw);
+ /* check for the no-free-channel condition */
+ if (done != 1) {
+ PMD_INIT_LOG(ERR, "No available queues");
+ return -1;
+ }
+ /* return the available channel ID */
+ return (i * 32) + j;
+}
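+/*
+ * Worked example of the bitmap arithmetic above (illustrative values):
+ * word i = 3 with free bit j = 6 yields physical channel 3 * 32 + 6 = 102.
+ * Even bits hold Rx channels and odd bits Tx channels, which is why the
+ * scan starts at 'base' and steps by 2.
+ */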
+
+static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (hw->channel_context[lch].valid == 1) {
+ PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired Physics channel:%u",
+ lch, hw->channel_context[lch].ph_chno);
+ return hw->channel_context[lch].ph_chno;
+ }
+ int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch));
+
+ if (pch < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ hw->channel_context[lch].ph_chno = (uint16_t)pch;
+ hw->channel_context[lch].valid = 1;
+ PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch);
+ return 0;
+}
+
+static void zxdh_init_vring(struct virtqueue *vq)
+{
+ int32_t size = vq->vq_nentries;
+ uint8_t *ring_mem = vq->vq_ring_virt_mem;
+
+ memset(ring_mem, 0, vq->vq_ring_size);
+
+ vq->vq_used_cons_idx = 0;
+ vq->vq_desc_head_idx = 0;
+ vq->vq_avail_idx = 0;
+ vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ virtqueue_disable_intr(vq);
+}
+
+static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx)
+{
+ char vq_name[VIRTQUEUE_MAX_NAME_SZ] = {0};
+ char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ] = {0};
+ const struct rte_memzone *mz = NULL;
+ const struct rte_memzone *hdr_mz = NULL;
+ uint32_t size = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct virtnet_rx *rxvq = NULL;
+ struct virtnet_tx *txvq = NULL;
+ struct virtqueue *vq = NULL;
+ size_t sz_hdr_mz = 0;
+ void *sw_ring = NULL;
+ int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx);
+ int32_t numa_node = dev->device->numa_node;
+ uint16_t vtpci_phy_qidx = 0;
+ uint32_t vq_size = 0;
+ int32_t ret = 0;
+
+ if (hw->channel_context[vtpci_logic_qidx].valid == 0) {
+ PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx);
+ return -EINVAL;
+ }
+ vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno;
+
+ PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx :%d setting up physical queue: %u on NUMA node %d",
+ vtpci_logic_qidx, vtpci_phy_qidx, numa_node);
+
+ vq_size = ZXDH_QUEUE_DEPTH;
+
+ if (VTPCI_OPS(hw)->set_queue_num != NULL)
+ VTPCI_OPS(hw)->set_queue_num(hw, vtpci_phy_qidx, vq_size);
+
+ snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx);
+
+ size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct vq_desc_extra),
+ RTE_CACHE_LINE_SIZE);
+ if (queue_type == VTNET_TQ) {
+ /*
+ * For each xmit packet, allocate a zxdh_net_hdr
+ * and indirect ring elements
+ */
+ sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region);
+ }
+
+ vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node);
+ if (vq == NULL) {
+ PMD_INIT_LOG(ERR, "can not allocate vq");
+ return -ENOMEM;
+ }
+ hw->vqs[vtpci_logic_qidx] = vq;
+
+ vq->hw = hw;
+ vq->vq_queue_index = vtpci_phy_qidx;
+ vq->vq_nentries = vq_size;
+
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
+ vq->vq_packed.event_flags_shadow = 0;
+ if (queue_type == VTNET_RQ)
+ vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
+
+ /*
+ * Reserve a memzone for vring elements
+ */
+ size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN);
+ vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN);
+ PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
+
+ mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ ZXDH_PCI_VRING_ALIGN);
+ if (mz == NULL) {
+ if (rte_errno == EEXIST)
+ mz = rte_memzone_lookup(vq_name);
+ if (mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+
+ memset(mz->addr, 0, mz->len);
+
+ vq->vq_ring_mem = mz->iova;
+ vq->vq_ring_virt_mem = mz->addr;
+
+ zxdh_init_vring(vq);
+
+ if (sz_hdr_mz) {
+ snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
+ dev->data->port_id, vtpci_phy_qidx);
+ hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ RTE_CACHE_LINE_SIZE);
+ if (hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ hdr_mz = rte_memzone_lookup(vq_hdr_name);
+ if (hdr_mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+ }
+
+ if (queue_type == VTNET_RQ) {
+ size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]);
+
+ sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node);
+ if (!sw_ring) {
+ PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+
+ vq->sw_ring = sw_ring;
+ rxvq = &vq->rxq;
+ rxvq->vq = vq;
+ rxvq->port_id = dev->data->port_id;
+ rxvq->mz = mz;
+ } else { /* queue_type == VTNET_TQ */
+ txvq = &vq->txq;
+ txvq->vq = vq;
+ txvq->port_id = dev->data->port_id;
+ txvq->mz = mz;
+ txvq->virtio_net_hdr_mz = hdr_mz;
+ txvq->virtio_net_hdr_mem = hdr_mz->iova;
+ }
+
+ vq->offset = offsetof(struct rte_mbuf, buf_iova);
+ if (queue_type == VTNET_TQ) {
+ struct zxdh_tx_region *txr = hdr_mz->addr;
+ uint32_t i;
+
+ memset(txr, 0, vq_size * sizeof(*txr));
+ for (i = 0; i < vq_size; i++) {
+ /* first indirect descriptor is always the tx header */
+ struct vring_packed_desc *start_dp = txr[i].tx_packed_indir;
+
+ vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir));
+ start_dp->addr = txvq->virtio_net_hdr_mem + i * sizeof(*txr) +
+ offsetof(struct zxdh_tx_region, tx_hdr);
+ /* length will be updated to actual pi hdr size when xmit pkt */
+ start_dp->len = 0;
+ }
+ }
+ if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
+ PMD_INIT_LOG(ERR, "setup_queue failed");
+ return -EINVAL;
+ }
+ return 0;
+fail_q_alloc:
+ rte_free(sw_ring);
+ rte_memzone_free(hdr_mz);
+ rte_memzone_free(mz);
+ rte_free(vq);
+ return ret;
+}
+
+static int32_t zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq)
+{
+ uint16_t lch;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
+ if (!hw->vqs) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vqs");
+ return -ENOMEM;
+ }
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (zxdh_acquire_channel(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire the channels");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ if (zxdh_init_queue(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to alloc virtio queue");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int32_t zxdh_dev_configure(struct rte_eth_dev *dev)
+{
+ const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint32_t nr_vq = 0;
+ int32_t ret = 0;
+
+ if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+ return -EINVAL;
+ }
+ if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues,
+ ZXDH_QUEUES_NUM_MAX);
+ return -EINVAL;
+ }
+ if ((rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) && (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE)) {
+ PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode);
+ return -EINVAL;
+ }
+
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode);
+ return -EINVAL;
+ }
+
+ ret = zxdh_features_update(hw, rxmode, txmode);
+ if (ret < 0)
+ return ret;
+
+ /* check if lsc interrupt feature is enabled */
+ if (dev->data->dev_conf.intr_conf.lsc) {
+ if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
+ PMD_DRV_LOG(ERR, "link status not supported by host");
+ return -ENOTSUP;
+ }
+ }
+
+ hw->has_tx_offload = tx_offload_enabled(hw);
+ hw->has_rx_offload = rx_offload_enabled(hw);
+
+ nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues;
+ if (nr_vq == hw->queue_num) {
+ /* no change in queue count */
+ return 0;
+ }
+
+ PMD_DRV_LOG(DEBUG, "que changed need reset ");
+ /* Reset the device although not necessary at startup */
+ zxdh_vtpci_reset(hw);
+
+ /* Tell the host we've noticed this device. */
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK);
+
+ /* Tell the host we've known how to drive the device. */
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER);
+ /* The queues need to be released when reconfiguring */
+ if (hw->vqs != NULL) {
+ zxdh_dev_free_mbufs(dev);
+ zxdh_free_queues(dev);
+ }
+
+ hw->queue_num = nr_vq;
+ ret = zxdh_alloc_queues(dev, nr_vq);
+ if (ret < 0)
+ return ret;
+
+ zxdh_datach_set(dev);
+
+ if (zxdh_configure_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to configure interrupt");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+
+ zxdh_vtpci_reinit_complete(hw);
+
+ return ret;
+}
+
/* dev_ops for zxdh, bare necessities for basic operation */
static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_configure = zxdh_dev_configure,
.dev_infos_get = zxdh_dev_infos_get,
};
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 7c5f5940cb..f3fc5b3ce0 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -14,8 +14,6 @@ extern "C" {
#include <rte_interrupts.h>
#include <eal_interrupts.h>
-#include "zxdh_queue.h"
-
/* ZXDH PCI vendor/device ID. */
#define PCI_VENDOR_ID_ZTE 0x1cf2
@@ -28,11 +26,23 @@ extern "C" {
#define ZXDH_MAX_MC_MAC_ADDRS 32
#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+/**
+ * zxdh has a total of 4096 queues,
+ * pf/vf devices support up to 256 queues
+ * (including private queues)
+ */
#define ZXDH_RX_QUEUES_MAX 128U
#define ZXDH_TX_QUEUES_MAX 128U
+#define ZXDH_QUEUE_DEPTH 1024
+#define ZXDH_QUEUES_BASE 0
+#define ZXDH_TOTAL_QUEUES_NUM 4096
+#define ZXDH_QUEUES_NUM_MAX 256
+#define ZXDH_QUERES_SHARE_BASE (0x5000)
#define ZXDH_NUM_BARS 2
+#define ZXDH_MBUF_BURST_SZ 64
+
union VPORT {
uint16_t vport;
struct {
@@ -44,6 +54,11 @@ union VPORT {
};
};
+struct chnl_context {
+ uint16_t valid;
+ uint16_t ph_chno;
+};
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
@@ -51,6 +66,7 @@ struct zxdh_hw {
struct rte_intr_handle *risc_intr;
struct rte_intr_handle *dtb_intr;
struct virtqueue **vqs;
+ struct chnl_context channel_context[ZXDH_QUEUES_NUM_MAX];
union VPORT vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -64,6 +80,7 @@ struct zxdh_hw {
uint16_t device_id;
uint16_t port_id;
uint16_t vfid;
+ uint16_t queue_num;
uint8_t *isr;
uint8_t weak_barriers;
@@ -75,6 +92,8 @@ struct zxdh_hw {
uint8_t phyport;
uint8_t panel_id;
uint8_t msg_chan_init;
+ uint8_t has_tx_offload;
+ uint8_t has_rx_offload;
};
uint16_t vport_to_vfid(union VPORT v);
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index 1b953c7d0a..a109faa599 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -114,6 +114,86 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
return rte_read8(hw->isr);
}
+static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ return rte_read16(&hw->common_cfg->queue_size);
+}
+
+static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ rte_write16(vq_size, &hw->common_cfg->queue_size);
+}
+
+static int32_t check_vq_phys_addr_ok(struct virtqueue *vq)
+{
+ if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) {
+ PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!");
+ return 0;
+ }
+ return 1;
+}
+
+static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
+{
+ rte_write32(val & ((1ULL << 32) - 1), lo);
+ rte_write32(val >> 32, hi);
+}
+
+static int32_t zxdh_setup_queue(struct zxdh_hw *hw, struct virtqueue *vq)
+{
+ uint64_t desc_addr = 0;
+ uint64_t avail_addr = 0;
+ uint64_t used_addr = 0;
+ uint16_t notify_off = 0;
+
+ if (!check_vq_phys_addr_ok(vq))
+ return -1;
+
+ desc_addr = vq->vq_ring_mem;
+ avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc);
+ if (vtpci_packed_queue(vq->hw)) {
+ used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct vring_packed_desc_event)),
+ ZXDH_PCI_VRING_ALIGN);
+ } else {
+ used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+ ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN);
+ }
+
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ notify_off = rte_read16(&hw->common_cfg->queue_notify_off);
+ notify_off = 0; /* the read value is discarded; a fixed notify offset of 0 is used */
+ vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
+ notify_off * hw->notify_off_multiplier);
+
+ rte_write16(1, &hw->common_cfg->queue_enable);
+
+ return 0;
+}
+
+static void zxdh_del_queue(struct zxdh_hw *hw, struct virtqueue *vq)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(0, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ rte_write16(0, &hw->common_cfg->queue_enable);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -124,6 +204,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_queue_irq = zxdh_set_queue_irq,
.set_config_irq = zxdh_set_config_irq,
.get_isr = zxdh_get_isr,
+ .get_queue_num = zxdh_get_queue_num,
+ .set_queue_num = zxdh_set_queue_num,
+ .setup_queue = zxdh_setup_queue,
+ .del_queue = zxdh_del_queue,
};
uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw)
@@ -150,6 +234,19 @@ void zxdh_vtpci_reset(struct zxdh_hw *hw)
PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
}
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw)
+{
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK);
+}
+
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ if (status != ZXDH_CONFIG_STATUS_RESET)
+ status |= VTPCI_OPS(hw)->get_status(hw);
+
+ VTPCI_OPS(hw)->set_status(hw, status);
+}
+
static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
{
uint8_t bar = cap->bar;
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index 677dadd5c8..f8949ec041 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -12,7 +12,9 @@ extern "C" {
#include <stdint.h>
#include <stdbool.h>
+#include <rte_pci.h>
#include <bus_pci_driver.h>
+#include <ethdev_driver.h>
#include "zxdh_ethdev.h"
@@ -34,8 +36,20 @@ enum zxdh_msix_status {
#define PCI_CAP_ID_MSIX 0x11
#define PCI_MSIX_ENABLE 0x8000
+#define ZXDH_PCI_VRING_ALIGN 4096
+#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */
+#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */
+#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */
#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */
+#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */
+#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */
+#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */
+
+#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */
+#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */
+#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */
#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
@@ -60,6 +74,8 @@ enum zxdh_msix_status {
#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
#define ZXDH_CONFIG_STATUS_FAILED 0x80
+#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12
+
struct zxdh_net_config {
/* The config defining mac address (if ZXDH_NET_F_MAC) */
uint8_t mac[RTE_ETHER_ADDR_LEN];
@@ -122,6 +138,11 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_F_RING_PACKED);
+}
+
struct zxdh_pci_ops {
void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
@@ -134,6 +155,11 @@ struct zxdh_pci_ops {
uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec);
uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
uint8_t (*get_isr)(struct zxdh_hw *hw);
+ uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id);
+ void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size);
+
+ int32_t (*setup_queue)(struct zxdh_hw *hw, struct virtqueue *vq);
+ void (*del_queue)(struct zxdh_hw *hw, struct virtqueue *vq);
};
struct zxdh_hw_internal {
@@ -155,6 +181,8 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw);
enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev);
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw);
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status);
#ifdef __cplusplus
}
diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c
new file mode 100644
index 0000000000..005b2a5578
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.c
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+
+#include "zxdh_queue.h"
+#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+#include "zxdh_common.h"
+#include "zxdh_msg.h"
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq)
+{
+ struct rte_mbuf *cookie = NULL;
+ int32_t idx = 0;
+
+ if (vq == NULL)
+ return NULL;
+
+ for (idx = 0; idx < vq->vq_nentries; idx++) {
+ cookie = vq->vq_descx[idx].cookie;
+ if (cookie != NULL) {
+ vq->vq_descx[idx].cookie = NULL;
+ return cookie;
+ }
+ }
+ return NULL;
+}
+
+static int32_t zxdh_release_channel(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t var = 0;
+ uint32_t addr = 0;
+ uint32_t widx = 0;
+ uint32_t bidx = 0;
+ uint16_t pch = 0;
+ uint16_t lch = 0;
+ uint16_t timeout = 0;
+
+ while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ if (zxdh_acquire_lock(hw) != 0) {
+ PMD_INIT_LOG(ERR,
+ "Could not acquire lock to release channel, timeout %d", timeout);
+ continue;
+ }
+ break;
+ }
+
+ if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "Acquire lock timeout");
+ return -1;
+ }
+
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (hw->channel_context[lch].valid == 0) {
+ PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to release", lch);
+ continue;
+ }
+
+ pch = hw->channel_context[lch].ph_chno;
+ widx = pch / 32;
+ bidx = pch % 32;
+
+ addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t));
+ var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+ var &= ~(1 << bidx);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+
+ hw->channel_context[lch].valid = 0;
+ hw->channel_context[lch].ph_chno = 0;
+ }
+
+ zxdh_release_lock(hw);
+
+ return 0;
+}
+
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx)
+{
+ if (vtpci_queue_idx % 2 == 0)
+ return VTNET_RQ;
+ else
+ return VTNET_TQ;
+}
+
+int32_t zxdh_free_queues(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ struct virtqueue *vq = NULL;
+ int32_t queue_type = 0;
+ uint16_t i = 0;
+
+ if (hw->vqs == NULL)
+ return 0;
+
+ if (zxdh_release_channel(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to clear coi table");
+ return -1;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (vq == NULL)
+ continue;
+
+ VTPCI_OPS(hw)->del_queue(hw, vq);
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == VTNET_RQ) {
+ rte_free(vq->sw_ring);
+ rte_memzone_free(vq->rxq.mz);
+ } else if (queue_type == VTNET_TQ) {
+ rte_memzone_free(vq->txq.mz);
+ rte_memzone_free(vq->txq.virtio_net_hdr_mz);
+ }
+
+ rte_free(vq);
+ hw->vqs[i] = NULL;
+ PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i);
+ }
+
+ rte_free(hw->vqs);
+ hw->vqs = NULL;
+
+ return 0;
+}
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
index e511843205..8bf2993a48 100644
--- a/drivers/net/zxdh/zxdh_queue.h
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -15,7 +15,25 @@ extern "C" {
#include "zxdh_ethdev.h"
#include "zxdh_rxtx.h"
+#include "zxdh_pci.h"
+enum { VTNET_RQ = 0, VTNET_TQ = 1 };
+
+#define VIRTQUEUE_MAX_NAME_SZ 32
+#define RQ_QUEUE_IDX 0
+#define TQ_QUEUE_IDX 1
+#define ZXDH_MAX_TX_INDIRECT 8
+
+/* This marks a buffer as write-only (otherwise read-only). */
+#define VRING_DESC_F_WRITE 2
+/* This flag means the descriptor was made available by the driver */
+#define VRING_PACKED_DESC_F_AVAIL (1 << (7))
+
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+
+#define VQ_RING_DESC_CHAIN_END 32768
/** ring descriptors: 16 bytes.
* These can chain together via "next".
**/
@@ -26,6 +44,19 @@ struct vring_desc {
uint16_t next; /* We chain unused descriptors via this. */
};
+struct vring_used_elem {
+ /* Index of start of used descriptor chain. */
+ uint32_t id;
+ /* Total length of the descriptor chain which was written to. */
+ uint32_t len;
+};
+
+struct vring_used {
+ uint16_t flags;
+ uint16_t idx;
+ struct vring_used_elem ring[0];
+};
+
struct vring_avail {
uint16_t flags;
uint16_t idx;
@@ -102,6 +133,146 @@ struct virtqueue {
struct vq_desc_extra vq_descx[0];
};
+struct zxdh_type_hdr {
+ uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */
+ uint8_t pd_len;
+ uint8_t num_buffers;
+ uint8_t reserved;
+} __rte_packed; /* 4B */
+
+struct zxdh_pi_hdr {
+ uint8_t pi_len;
+ uint8_t pkt_type;
+ uint16_t vlan_id;
+ uint32_t ipv6_extend;
+ uint16_t l3_offset;
+ uint16_t l4_offset;
+ uint8_t phy_port;
+ uint8_t pkt_flag_hi8;
+ uint16_t pkt_flag_lw16;
+ union {
+ struct {
+ uint64_t sa_idx;
+ uint8_t reserved_8[8];
+ } dl;
+ struct {
+ uint32_t lro_flag;
+ uint32_t lro_mss;
+ uint16_t err_code;
+ uint16_t pm_id;
+ uint16_t pkt_len;
+ uint8_t reserved[2];
+ } ul;
+ };
+} __rte_packed; /* 32B */
+
+struct zxdh_pd_hdr_dl {
+ uint32_t ol_flag;
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t dst_vfid;
+ uint32_t svlan_insert;
+ uint32_t cvlan_insert;
+} __rte_packed; /* 16B */
+
+struct zxdh_net_hdr_dl {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_dl pd_hdr; /* 16B */
+} __rte_packed;
+
+struct zxdh_pd_hdr_ul {
+ uint32_t pkt_flag;
+ uint32_t rss_hash;
+ uint32_t fd;
+ uint32_t striped_vlan_tci;
+ /* ovs */
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t src_vfid;
+ /* */
+ uint16_t pkt_type_out;
+ uint16_t pkt_type_in;
+} __rte_packed; /* 24B */
+
+struct zxdh_net_hdr_ul {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_ul pd_hdr; /* 24B */
+} __rte_packed; /* 60B */
+
+struct zxdh_tx_region {
+ struct zxdh_net_hdr_dl tx_hdr;
+ union {
+ struct vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT];
+ struct vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT];
+ } __rte_aligned(16);
+};
+
+static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align)
+{
+ size_t size;
+
+ if (vtpci_packed_queue(hw)) {
+ size = num * sizeof(struct vring_packed_desc);
+ size += sizeof(struct vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_packed_desc_event);
+ return size;
+ }
+
+ size = num * sizeof(struct vring_desc);
+ size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_used) + (num * sizeof(struct vring_used_elem));
+ return size;
+}
+
+static inline void vring_init_packed(struct vring_packed *vr, uint8_t *p,
+ unsigned long align, uint32_t num)
+{
+ vr->num = num;
+ vr->desc = (struct vring_packed_desc *)p;
+ vr->driver = (struct vring_packed_desc_event *)(p +
+ vr->num * sizeof(struct vring_packed_desc));
+ vr->device = (struct vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver +
+ sizeof(struct vring_packed_desc_event)), align);
+}
+
+static inline void vring_desc_init_packed(struct virtqueue *vq, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n - 1; i++) {
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = i + 1;
+ }
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
+static inline void vring_desc_init_indirect_packed(struct vring_packed_desc *dp, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n; i++) {
+ dp[i].id = (uint16_t)i;
+ dp[i].flags = VRING_DESC_F_WRITE;
+ }
+}
+
+static inline void virtqueue_disable_intr(struct virtqueue *vq)
+{
+ if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow;
+ }
+}
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq);
+int32_t zxdh_free_queues(struct rte_eth_dev *dev);
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
index 6476bc15e2..7e9f3a3ae6 100644
--- a/drivers/net/zxdh/zxdh_rxtx.h
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2023 ZTE Corporation
+ * Copyright(c) 2024 ZTE Corporation
*/
#ifndef _ZXDH_RXTX_H_
--
2.27.0
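A quick worked check of the packed-ring arithmetic in vring_size() from the patch above, for the ZXDH_QUEUE_DEPTH of 1024 entries. The struct sizes (16-byte packed descriptor, 4-byte event area) follow the definitions in the patch; the snippet itself is illustrative only, not driver code:

#include <assert.h>
#include <stdint.h>

/* Mirror of the packed-ring branch of vring_size() above, checked for a
 * 1024-entry queue with ZXDH_PCI_VRING_ALIGN == 4096. */
static uint32_t align_up(uint32_t x, uint32_t a)
{
	return (x + a - 1) & ~(a - 1);
}

int main(void)
{
	uint32_t size = 1024 * 16;	/* descriptor area: vring_packed_desc is 16 B */

	size += 4;			/* driver event area (two uint16_t) */
	size = align_up(size, 4096);	/* -> 20480 */
	size += 4;			/* device event area -> 20484 */
	assert(size == 20484);
	/* the memzone reservation then rounds again: vq_ring_size == 24576 */
	assert(align_up(size, 4096) == 24576);
	return 0;
}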
* Re: [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops
2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
@ 2024-10-15 15:37 ` Stephen Hemminger
2024-10-15 15:57 ` Stephen Hemminger
1 sibling, 0 replies; 76+ messages in thread
From: Stephen Hemminger @ 2024-10-15 15:37 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, ferruh.yigit, wang.yong19
On Tue, 15 Oct 2024 13:44:35 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:
> diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
> index 9aec47e68f..cde96d8111 100644
> --- a/drivers/net/zxdh/meson.build
> +++ b/drivers/net/zxdh/meson.build
> @@ -14,8 +14,9 @@ if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64')
> endif
>
> sources = files(
> - 'zxdh_ethdev.c',
> - 'zxdh_pci.c',
> - 'zxdh_msg.c',
> - 'zxdh_common.c',
> - )
> + 'zxdh_ethdev.c',
> + 'zxdh_pci.c',
> + 'zxdh_msg.c',
> + 'zxdh_common.c',
> + 'zxdh_queue.c',
> + )
If you make a new version, then fix indentation in earlier patch to avoid
having to fix it in last patch.
* Re: [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops
2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
2024-10-15 15:37 ` Stephen Hemminger
@ 2024-10-15 15:57 ` Stephen Hemminger
1 sibling, 0 replies; 76+ messages in thread
From: Stephen Hemminger @ 2024-10-15 15:57 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, ferruh.yigit, wang.yong19
On Tue, 15 Oct 2024 13:44:35 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:
> provided zxdh dev configure ops for queue
> check, reset, alloc resources, etc.
>
> Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
These build failures need to be addressed.
Probably as simple as adding
struct bar_msg_header msg_header = { 0 };
####################################################################################
#### [Begin job log] "ubuntu-22.04-gcc-stdatomic" at step Build and test
####################################################################################
951 | struct bar_msg_header msg_header;
| ^~~~~~~~~~
../drivers/net/zxdh/zxdh_msg.c:976:60: error: ‘msg_header.sync’ may be used uninitialized [-Werror=maybe-uninitialized]
976 | uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr);
| ~~~~~~~~~~^~~~~
../drivers/net/zxdh/zxdh_msg.c:951:31: note: ‘msg_header’ declared here
951 | struct bar_msg_header msg_header;
| ^~~~~~~~~~
In function ‘zxdh_bar_msg_ack_async_msg_proc’,
inlined from ‘zxdh_bar_irq_recv’ at ../drivers/net/zxdh/zxdh_msg.c:984:3:
../drivers/net/zxdh/zxdh_msg.c:860:78: error: ‘msg_header.msg_id’ may be used uninitialized [-Werror=maybe-uninitialized]
860 | struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id];
| ~~~~~~~~~~^~~~~~~~
../drivers/net/zxdh/zxdh_msg.c: In function ‘zxdh_bar_irq_recv’:
../drivers/net/zxdh/zxdh_msg.c:951:31: note: ‘msg_header’ declared here
951 | struct bar_msg_header msg_header;
| ^~~~~~~~~~
cc1: all warnings being treated as errors
[2198/3046] Compiling C object 'drivers/a715181@@tmp_rte_net_zxdh@sta/net_zxdh_zxdh_queue.c.o'.
[2199/3046] Compiling C object 'drivers/a715181@@tmp_rte_raw_cnxk_bphy@sta/raw_cnxk_bphy_cnxk_bphy_cgx.c.o'.
[2200/3046] Generating rte_net_vmxnet3.pmd.c with a custom command.
[2201/3046] Compiling C object 'drivers/a715181@@tmp_rte_raw_cnxk_bphy@sta/raw_cnxk_bphy_cnxk_bphy.c.o'.
[2202/3046] Compiling C object 'drivers/a715181@@tmp_rte_net_zxdh@sta/net_zxdh_zxdh_common.c.o'.
ninja: build stopped: subcommand failed.
##[error]Process completed with exit code 1.
####################################################################################
#### [End job log] "ubuntu-22.04-gcc-stdatomic" at step Build and test
####################################################################################
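For reference, a minimal sketch of the fix suggested above. The names (zxdh_bar_irq_recv, bar_msg_header, the .sync and .msg_id fields) are taken from the build log; the surrounding function body is abbreviated and hypothetical, not the actual driver code:

	/* In zxdh_bar_irq_recv() (per the gcc log above), zero-initialize the
	 * header at declaration so that msg_header.sync and msg_header.msg_id
	 * are defined on every path, including the early-exit ones that
	 * trigger -Wmaybe-uninitialized. */
	struct bar_msg_header msg_header = {0};

Zeroing at the declaration is also easier to review than proving that every exit path writes the header before it is read.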
* [PATCH v6 0/9] net/zxdh: introduce net zxdh driver
2024-10-15 5:43 ` [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
@ 2024-10-16 8:16 ` Junlong Wang
2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
1 sibling, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:16 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
v6:
- Resolve ci/intel compilation issues.
- Fix meson.build indentation in earlier patch.
V5:
- Split the driver into multiple patches covering part of the zxdh driver;
dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc. will follow later.
- Fix errors reported by scripts.
- Move the product link in zxdh.rst.
- Fix the meson check to use RTE_ARCH_X86_64/RTE_ARCH_ARM64.
- Modify other comments according to Ferruh's comments.
V4:
- Resolve compilation issues
Junlong Wang (9):
net/zxdh: add zxdh ethdev pmd driver
net/zxdh: add logging implementation
net/zxdh: add zxdh device pci init implementation
net/zxdh: add msg chan and msg hwlock init
net/zxdh: add msg chan enable implementation
net/zxdh: add zxdh get device backend infos
net/zxdh: add configure zxdh intr implementation
net/zxdh: add zxdh dev infos get ops
net/zxdh: add zxdh dev configure ops
doc/guides/nics/features/zxdh.ini | 9 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 30 +
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 22 +
drivers/net/zxdh/zxdh_common.c | 368 +++++++++++
drivers/net/zxdh/zxdh_common.h | 42 ++
drivers/net/zxdh/zxdh_ethdev.c | 1020 +++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 100 +++
drivers/net/zxdh/zxdh_logs.h | 40 ++
drivers/net/zxdh/zxdh_msg.c | 986 ++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 228 +++++++
drivers/net/zxdh/zxdh_pci.c | 449 +++++++++++++
drivers/net/zxdh/zxdh_pci.h | 194 ++++++
drivers/net/zxdh/zxdh_queue.c | 131 ++++
drivers/net/zxdh/zxdh_queue.h | 281 ++++++++
drivers/net/zxdh/zxdh_rxtx.h | 55 ++
17 files changed, 3957 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
create mode 100644 drivers/net/zxdh/zxdh_logs.h
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.c
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
--
2.27.0
* [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver
2024-10-16 8:16 ` [PATCH v6 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
@ 2024-10-16 8:16 ` Junlong Wang
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
` (2 more replies)
0 siblings, 3 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:16 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
Add basic zxdh ethdev init and register PCI probe functions
Update doc files
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
doc/guides/nics/features/zxdh.ini | 9 +++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 30 ++++++++++
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 18 ++++++
drivers/net/zxdh/zxdh_ethdev.c | 92 +++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 44 +++++++++++++++
7 files changed, 195 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini
new file mode 100644
index 0000000000..05c8091ed7
--- /dev/null
+++ b/doc/guides/nics/features/zxdh.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'zxdh' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
+ARMv8 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc7988a..8e371ac4a5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -69,3 +69,4 @@ Network Interface Controller Drivers
vhost
virtio
vmxnet3
+ zxdh
diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
new file mode 100644
index 0000000000..9832115e11
--- /dev/null
+++ b/doc/guides/nics/zxdh.rst
@@ -0,0 +1,30 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2024 ZTE Corporation.
+
+ZXDH Poll Mode Driver
+======================
+
+The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
+for 25/100 Gbps ZXDH NX Series Ethernet Controller based on
+the ZTE Ethernet Controller E310/E312.
+
+- Learn more about ZXDH NX Series Ethernet Controller NICs at
+ `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
+
+Features
+--------
+
+Features of the zxdh PMD are:
+
+- Multi arch support: x86_64, ARMv8.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+X86-32, Power8, ARMv7 and BSD are not supported yet.
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index fb6d34b782..0a12914534 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -62,6 +62,7 @@ drivers = [
'vhost',
'virtio',
'vmxnet3',
+ 'zxdh',
]
std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
new file mode 100644
index 0000000000..932fb1c835
--- /dev/null
+++ b/drivers/net/zxdh/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 ZTE Corporation
+
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+if not dpdk_conf.has('RTE_ARCH_X86_64') and not dpdk_conf.has('RTE_ARCH_ARM64')
+ build = false
+ reason = 'only supported on x86_64 and aarch64'
+ subdir_done()
+endif
+
+sources = files(
+ 'zxdh_ethdev.c',
+)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
new file mode 100644
index 0000000000..75d8b28cc3
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <ethdev_pci.h>
+#include <bus_pci_driver.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ int ret = 0;
+
+ eth_dev->dev_ops = NULL;
+
+ /* Allocate memory for storing MAC addresses */
+ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
+ if (eth_dev->data->mac_addrs == NULL)
+ return -ENOMEM;
+
+ memset(hw, 0, sizeof(*hw));
+ hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
+ if (hw->bar_addr[0] == 0)
+ return -EIO;
+
+ hw->device_id = pci_dev->id.device_id;
+ hw->port_id = eth_dev->data->port_id;
+ hw->eth_dev = eth_dev;
+ hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ hw->is_pf = 0;
+
+ if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID ||
+ pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) {
+ hw->is_pf = 1;
+ }
+
+ return ret;
+}
+
+static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct zxdh_hw),
+ zxdh_eth_dev_init);
+}
+
+static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+ int ret = 0;
+
+ return ret;
+}
+
+static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ int ret = 0;
+
+ ret = zxdh_dev_close(eth_dev);
+
+ return ret;
+}
+
+static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev)
+{
+ int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit);
+
+ return ret;
+}
+
+static const struct rte_pci_id pci_id_zxdh_map[] = {
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)},
+ {.vendor_id = 0, /* sentinel */ },
+};
+static struct rte_pci_driver zxdh_pmd = {
+ .id_table = pci_id_zxdh_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = zxdh_eth_pci_probe,
+ .remove = zxdh_eth_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
new file mode 100644
index 0000000000..04023bfe84
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_ETHDEV_H_
+#define _ZXDH_ETHDEV_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "ethdev_driver.h"
+
+/* ZXDH PCI vendor/device ID. */
+#define PCI_VENDOR_ID_ZTE 0x1cf2
+
+#define ZXDH_E310_PF_DEVICEID 0x8061
+#define ZXDH_E310_VF_DEVICEID 0x8062
+#define ZXDH_E312_PF_DEVICEID 0x8049
+#define ZXDH_E312_VF_DEVICEID 0x8060
+
+#define ZXDH_MAX_UC_MAC_ADDRS 32
+#define ZXDH_MAX_MC_MAC_ADDRS 32
+#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+
+#define ZXDH_NUM_BARS 2
+
+struct zxdh_hw {
+ struct rte_eth_dev *eth_dev;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+
+ uint32_t speed;
+ uint16_t device_id;
+ uint16_t port_id;
+
+ uint8_t duplex;
+ uint8_t is_pf;
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_ETHDEV_H_ */
--
2.27.0
* [PATCH v6 2/9] net/zxdh: add logging implementation
2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
@ 2024-10-16 8:18 ` Junlong Wang
2024-10-16 8:18 ` [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
` (6 more replies)
2024-10-21 9:03 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Thomas Monjalon
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2 siblings, 7 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
Add zxdh logging implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++--
drivers/net/zxdh/zxdh_logs.h | 40 ++++++++++++++++++++++++++++++++++
2 files changed, 53 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_logs.h
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 75d8b28cc3..7220770c01 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -7,6 +7,7 @@
#include <rte_ethdev.h>
#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL)
+ if (eth_dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN);
return -ENOMEM;
+ }
memset(hw, 0, sizeof(*hw));
hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
- if (hw->bar_addr[0] == 0)
+ if (hw->bar_addr[0] == 0) {
+ PMD_INIT_LOG(ERR, "Bad mem resource.");
return -EIO;
+ }
hw->device_id = pci_dev->id.device_id;
hw->port_id = eth_dev->data->port_id;
@@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = {
RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE);
diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h
new file mode 100644
index 0000000000..52dbd8228b
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_logs.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_LOGS_H_
+#define _ZXDH_LOGS_H_
+
+#include <rte_log.h>
+
+extern int zxdh_logtype_init;
+#define RTE_LOGTYPE_ZXDH_INIT zxdh_logtype_init
+#define PMD_INIT_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_INIT, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_driver;
+#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver
+#define PMD_DRV_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_rx;
+#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx
+#define PMD_RX_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_tx;
+#define RTE_LOGTYPE_ZXDH_TX zxdh_logtype_tx
+#define PMD_TX_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_TX, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_msg;
+#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg
+#define PMD_MSG_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+#endif /* _ZXDH_LOGS_H_ */
--
2.27.0
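As a usage sketch for the macros above (hypothetical call site, not part of the patch):

	/* PMD_DRV_LOG is the macro defined in zxdh_logs.h above. */
	PMD_DRV_LOG(INFO, "probing device 0x%x", hw->device_id);
	/*
	 * Emits "offload_zxdh <calling function>(): probing device 0x8061"
	 * under the "driver" suffix registered via RTE_LOG_REGISTER_SUFFIX.
	 * Assuming the usual per-driver logtype prefix for drivers/net/zxdh,
	 * verbosity can be raised at run time with e.g.
	 * --log-level=pmd.net.zxdh.driver:debug on the EAL command line.
	 */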
* [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
@ 2024-10-16 8:18 ` Junlong Wang
2024-10-16 8:18 ` [PATCH v6 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
` (5 subsequent siblings)
6 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
Add device PCI init implementation
to obtain PCI capabilities and read device configuration, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_ethdev.c | 43 +++++
drivers/net/zxdh/zxdh_ethdev.h | 20 ++-
drivers/net/zxdh/zxdh_pci.c | 290 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++
drivers/net/zxdh/zxdh_queue.h | 105 ++++++++++++
drivers/net/zxdh/zxdh_rxtx.h | 51 ++++++
7 files changed, 659 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 932fb1c835..7db4e7bc71 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -15,4 +15,5 @@ endif
sources = files(
'zxdh_ethdev.c',
+ 'zxdh_pci.c',
)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 7220770c01..f34b2af7b3 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -8,6 +8,40 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+
+struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+
+static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
+{
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int ret = 0;
+
+ ret = zxdh_read_pci_caps(pci_dev, hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed .", hw->port_id);
+ goto err;
+ }
+
+ zxdh_hw_internal[hw->port_id].vtpci_ops = &zxdh_dev_pci_ops;
+ zxdh_vtpci_reset(hw);
+ zxdh_get_pci_dev_config(hw);
+
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]);
+
+ /* If host does not support both status and MSI-X then disable LSC */
+ if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE)
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+ else
+ eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
+
+ return 0;
+
+err:
+ PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id);
+ return ret;
+}
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
hw->is_pf = 1;
}
+ ret = zxdh_init_device(eth_dev);
+ if (ret < 0)
+ goto err_zxdh_init;
+
+ return ret;
+
+err_zxdh_init:
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
return ret;
}
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 04023bfe84..18d9916713 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -9,6 +9,7 @@
extern "C" {
#endif
+#include <rte_ether.h>
#include "ethdev_driver.h"
/* ZXDH PCI vendor/device ID. */
@@ -23,16 +24,31 @@ extern "C" {
#define ZXDH_MAX_MC_MAC_ADDRS 32
#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+#define ZXDH_RX_QUEUES_MAX 128U
+#define ZXDH_TX_QUEUES_MAX 128U
+
#define ZXDH_NUM_BARS 2
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
- uint64_t bar_addr[ZXDH_NUM_BARS];
+ struct zxdh_pci_common_cfg *common_cfg;
+ struct zxdh_net_config *dev_cfg;
- uint32_t speed;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+ uint64_t host_features;
+ uint64_t guest_features;
+ uint32_t max_queue_pairs;
+ uint32_t speed;
+ uint32_t notify_off_multiplier;
+ uint16_t *notify_base;
+ uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint8_t *isr;
+ uint8_t weak_barriers;
+ uint8_t use_msix;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
uint8_t is_pf;
};
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
new file mode 100644
index 0000000000..e23dbcbef5
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -0,0 +1,290 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#ifdef RTE_EXEC_ENV_LINUX
+ #include <dirent.h>
+ #include <fcntl.h>
+#endif
+
+#include <rte_io.h>
+#include <rte_bus.h>
+#include <rte_pci.h>
+#include <rte_common.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_pci.h"
+#include "zxdh_logs.h"
+#include "zxdh_queue.h"
+
+#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \
+ (1ULL << ZXDH_NET_F_MRG_RXBUF | \
+ 1ULL << ZXDH_NET_F_STATUS | \
+ 1ULL << ZXDH_NET_F_MQ | \
+ 1ULL << ZXDH_F_ANY_LAYOUT | \
+ 1ULL << ZXDH_F_VERSION_1 | \
+ 1ULL << ZXDH_F_RING_PACKED | \
+ 1ULL << ZXDH_F_IN_ORDER | \
+ 1ULL << ZXDH_F_NOTIFICATION_DATA | \
+ 1ULL << ZXDH_NET_F_MAC)
+
+static void zxdh_read_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ void *dst,
+ int32_t length)
+{
+ int32_t i = 0;
+ uint8_t *p = NULL;
+ uint8_t old_gen = 0;
+ uint8_t new_gen = 0;
+
+ do {
+ old_gen = rte_read8(&hw->common_cfg->config_generation);
+
+ p = dst;
+ for (i = 0; i < length; i++)
+ *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i);
+
+ new_gen = rte_read8(&hw->common_cfg->config_generation);
+ } while (old_gen != new_gen);
+}
+
+static void zxdh_write_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ const void *src,
+ int32_t length)
+{
+ int32_t i = 0;
+ const uint8_t *p = src;
+
+ for (i = 0; i < length; i++)
+ rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
+}
+
+static uint8_t zxdh_get_status(struct zxdh_hw *hw)
+{
+ return rte_read8(&hw->common_cfg->device_status);
+}
+
+static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ rte_write8(status, &hw->common_cfg->device_status);
+}
+
+static uint64_t zxdh_get_features(struct zxdh_hw *hw)
+{
+ uint32_t features_lo = 0;
+ uint32_t features_hi = 0;
+
+ rte_write32(0, &hw->common_cfg->device_feature_select);
+ features_lo = rte_read32(&hw->common_cfg->device_feature);
+
+ rte_write32(1, &hw->common_cfg->device_feature_select);
+ features_hi = rte_read32(&hw->common_cfg->device_feature);
+
+ return ((uint64_t)features_hi << 32) | features_lo;
+}
+
+static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
+{
+ rte_write32(0, &hw->common_cfg->guest_feature_select);
+ rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature);
+ rte_write32(1, &hw->common_cfg->guest_feature_select);
+ rte_write32(features >> 32, &hw->common_cfg->guest_feature);
+}
+
+const struct zxdh_pci_ops zxdh_dev_pci_ops = {
+ .read_dev_cfg = zxdh_read_dev_config,
+ .write_dev_cfg = zxdh_write_dev_config,
+ .get_status = zxdh_get_status,
+ .set_status = zxdh_set_status,
+ .get_features = zxdh_get_features,
+ .set_features = zxdh_set_features,
+};
+
+uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw)
+{
+ return VTPCI_OPS(hw)->get_features(hw);
+}
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw)
+{
+ PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id);
+ uint32_t retry = 0;
+
+ VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET);
+ /* Flush status write and wait device ready max 3 seconds. */
+ while (VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) {
+ ++retry;
+ usleep(1000L);
+ }
+ PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
+}
+
+static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
+{
+ uint8_t bar = cap->bar;
+ uint32_t length = cap->length;
+ uint32_t offset = cap->offset;
+
+ if (bar >= PCI_MAX_RESOURCE) {
+ PMD_INIT_LOG(ERR, "invalid bar: %u", bar);
+ return NULL;
+ }
+ if (offset + length < offset) {
+ PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length);
+ return NULL;
+ }
+ if (offset + length > dev->mem_resource[bar].len) {
+ PMD_INIT_LOG(ERR, "invalid cap: overflows bar space");
+ return NULL;
+ }
+ uint8_t *base = dev->mem_resource[bar].addr;
+
+ if (base == NULL) {
+ PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar);
+ return NULL;
+ }
+ return base + offset;
+}
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw)
+{
+ uint8_t pos = 0;
+ int32_t ret = 0;
+
+ if (dev->mem_resource[0].addr == NULL) {
+ PMD_INIT_LOG(ERR, "bar0 base addr is NULL");
+ return -1;
+ }
+
+ ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret);
+ return -1;
+ }
+ while (pos) {
+ struct zxdh_pci_cap cap;
+
+ ret = rte_pci_read_config(dev, &cap, 2, pos);
+ if (ret != 2) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap.cap_vndr == PCI_CAP_ID_MSIX) {
+ /**
+ * Transitional devices would also have this capability,
+ * that's why we also check if msix is enabled.
+ * 1st byte is cap ID; 2nd byte is the position of next cap;
+ * next two bytes are the flags.
+ */
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2);
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d",
+ pos + 2, ret);
+ break;
+ }
+ hw->use_msix = (flags & PCI_MSIX_ENABLE) ?
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED;
+ }
+ if (cap.cap_vndr != PCI_CAP_ID_VNDR) {
+ PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x",
+ pos, cap.cap_vndr);
+ goto next;
+ }
+ ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u",
+ pos, cap.cfg_type, cap.bar, cap.offset, cap.length);
+ switch (cap.cfg_type) {
+ case ZXDH_PCI_CAP_COMMON_CFG:
+ hw->common_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_NOTIFY_CFG: {
+ ret = rte_pci_read_config(dev, &hw->notify_off_multiplier,
+ 4, pos + sizeof(cap));
+ if (ret != 4)
+ PMD_INIT_LOG(ERR,
+ "failed to read notify_off_multiplier, ret %d", ret);
+ else
+ hw->notify_base = get_cfg_addr(dev, &cap);
+ break;
+ }
+ case ZXDH_PCI_CAP_DEVICE_CFG:
+ hw->dev_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_ISR_CFG:
+ hw->isr = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_PCI_CFG: {
+ hw->pcie_id = *(uint16_t *)&cap.padding[1];
+ PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id);
+ uint16_t pcie_id = hw->pcie_id;
+
+ if ((pcie_id >> 11) & 0x1) /* PF */ {
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7);
+ } else { /* VF */
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff);
+ }
+ break;
+ }
+ }
+next:
+ pos = cap.cap_next;
+ }
+ if (hw->common_cfg == NULL || hw->notify_base == NULL ||
+ hw->dev_cfg == NULL || hw->isr == NULL) {
+ PMD_INIT_LOG(ERR, "no zxdh pci device found.");
+ return -1;
+ }
+ return 0;
+}
+
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length)
+{
+ VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length);
+}
+
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
+{
+ uint64_t guest_features = 0;
+ uint64_t nego_features = 0;
+ uint32_t max_queue_pairs = 0;
+
+ hw->host_features = zxdh_vtpci_get_features(hw);
+
+ guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES;
+ nego_features = guest_features & hw->host_features;
+
+ hw->guest_features = nego_features;
+
+ if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) {
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac),
+ &hw->mac_addr, RTE_ETHER_ADDR_LEN);
+ } else {
+ rte_eth_random_addr(&hw->mac_addr[0]);
+ }
+
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs),
+ &max_queue_pairs, sizeof(max_queue_pairs));
+
+ if (max_queue_pairs == 0)
+ hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX;
+ else
+ hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs);
+ PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs);
+
+ return 0;
+}
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
new file mode 100644
index 0000000000..deda73a65a
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_PCI_H_
+#define _ZXDH_PCI_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#include <bus_pci_driver.h>
+
+#include "zxdh_ethdev.h"
+
+enum zxdh_msix_status {
+ ZXDH_MSIX_NONE = 0,
+ ZXDH_MSIX_DISABLED = 1,
+ ZXDH_MSIX_ENABLED = 2
+};
+
+#define PCI_CAPABILITY_LIST 0x34
+#define PCI_CAP_ID_VNDR 0x09
+#define PCI_CAP_ID_MSIX 0x11
+
+#define PCI_MSIX_ENABLE 0x8000
+
+#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
+#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
+#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
+#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */
+#define ZXDH_F_VERSION_1 32
+#define ZXDH_F_RING_PACKED 34
+#define ZXDH_F_IN_ORDER 35
+#define ZXDH_F_NOTIFICATION_DATA 38
+
+#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */
+#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */
+#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */
+#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */
+#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */
+
+/* Status byte for guest to report progress. */
+#define ZXDH_CONFIG_STATUS_RESET 0x00
+#define ZXDH_CONFIG_STATUS_ACK 0x01
+#define ZXDH_CONFIG_STATUS_DRIVER 0x02
+#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04
+#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08
+#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
+#define ZXDH_CONFIG_STATUS_FAILED 0x80
+
+struct zxdh_net_config {
+ /* The config defining mac address (if ZXDH_NET_F_MAC) */
+ uint8_t mac[RTE_ETHER_ADDR_LEN];
+ /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */
+ uint16_t status;
+ uint16_t max_virtqueue_pairs;
+ uint16_t mtu;
+ /*
+ * speed, in units of 1Mb. All values 0 to INT_MAX are legal.
+ * Any other value stands for unknown.
+ */
+ uint32_t speed;
+ /* 0x00 - half duplex
+ * 0x01 - full duplex
+ * Any other value stands for unknown.
+ */
+ uint8_t duplex;
+} __rte_packed;
+
+/* This is the PCI capability header: */
+struct zxdh_pci_cap {
+ uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */
+ uint8_t cap_next; /* Generic PCI field: next ptr. */
+ uint8_t cap_len; /* Generic PCI field: capability length */
+ uint8_t cfg_type; /* Identifies the structure. */
+ uint8_t bar; /* Where to find it. */
+ uint8_t padding[3]; /* Pad to full dword. */
+ uint32_t offset; /* Offset within bar. */
+ uint32_t length; /* Length of the structure, in bytes. */
+};
+
+/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */
+struct zxdh_pci_common_cfg {
+ /* About the whole device. */
+ uint32_t device_feature_select; /* read-write */
+ uint32_t device_feature; /* read-only */
+ uint32_t guest_feature_select; /* read-write */
+ uint32_t guest_feature; /* read-write */
+ uint16_t msix_config; /* read-write */
+ uint16_t num_queues; /* read-only */
+ uint8_t device_status; /* read-write */
+ uint8_t config_generation; /* read-only */
+
+ /* About a specific virtqueue. */
+ uint16_t queue_select; /* read-write */
+ uint16_t queue_size; /* read-write, power of 2. */
+ uint16_t queue_msix_vector; /* read-write */
+ uint16_t queue_enable; /* read-write */
+ uint16_t queue_notify_off; /* read-only */
+ uint32_t queue_desc_lo; /* read-write */
+ uint32_t queue_desc_hi; /* read-write */
+ uint32_t queue_avail_lo; /* read-write */
+ uint32_t queue_avail_hi; /* read-write */
+ uint32_t queue_used_lo; /* read-write */
+ uint32_t queue_used_hi; /* read-write */
+};
+
+static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
+{
+ return (hw->guest_features & (1ULL << bit)) != 0;
+}
+
+struct zxdh_pci_ops {
+ void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
+ void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
+
+ uint8_t (*get_status)(struct zxdh_hw *hw);
+ void (*set_status)(struct zxdh_hw *hw, uint8_t status);
+
+ uint64_t (*get_features)(struct zxdh_hw *hw);
+ void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+};
+
+struct zxdh_hw_internal {
+ const struct zxdh_pci_ops *vtpci_ops;
+};
+
+#define VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].vtpci_ops)
+
+extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+extern const struct zxdh_pci_ops zxdh_dev_pci_ops;
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw);
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset,
+ void *dst, int32_t length);
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
+
+uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_PCI_H_ */
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
new file mode 100644
index 0000000000..336c0701f4
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_QUEUE_H_
+#define _ZXDH_QUEUE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_common.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_rxtx.h"
+
+/** ring descriptors: 16 bytes.
+ * These can chain together via "next".
+ **/
+struct vring_desc {
+ uint64_t addr; /* Address (guest-physical). */
+ uint32_t len; /* Length. */
+ uint16_t flags; /* The flags as indicated above. */
+ uint16_t next; /* We chain unused descriptors via this. */
+};
+
+struct vring_avail {
+ uint16_t flags;
+ uint16_t idx;
+ uint16_t ring[0];
+};
+
+struct vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+};
+
+struct vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+};
+
+struct vring_packed {
+ uint32_t num;
+ struct vring_packed_desc *desc;
+ struct vring_packed_desc_event *driver;
+ struct vring_packed_desc_event *device;
+};
+
+struct vq_desc_extra {
+ void *cookie;
+ uint16_t ndescs;
+ uint16_t next;
+};
+
+struct virtqueue {
+ struct zxdh_hw *hw; /**< zxdh_hw structure pointer. */
+ struct {
+ /**< vring keeping descs and events */
+ struct vring_packed ring;
+ uint8_t used_wrap_counter;
+ uint8_t rsv;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ uint16_t rsv1;
+ } __rte_packed vq_packed;
+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
+ uint16_t vq_nentries; /**< vring desc numbers */
+ uint16_t vq_free_cnt; /**< num of desc available */
+ uint16_t vq_avail_idx; /**< sync until needed */
+ uint16_t vq_free_thresh; /**< free threshold */
+ uint16_t rsv2;
+
+ void *vq_ring_virt_mem; /**< linear address of vring */
+ uint32_t vq_ring_size;
+
+ union {
+ struct virtnet_rx rxq;
+ struct virtnet_tx txq;
+ };
+
+ /**< physical address of vring,
+ * or virtual address for virtio_user.
+ */
+ rte_iova_t vq_ring_mem;
+
+ /**
+ * Head of the free chain in the descriptor table. If
+ * there are no free descriptors, this will be set to
+ * VQ_RING_DESC_CHAIN_END.
+ **/
+ uint16_t vq_desc_head_idx;
+ uint16_t vq_desc_tail_idx;
+ uint16_t vq_queue_index; /**< PCI queue index */
+ uint16_t offset; /**< relative offset to obtain addr in mbuf */
+ uint16_t *notify_addr;
+ struct rte_mbuf **sw_ring; /**< RX software ring. */
+ struct vq_desc_extra vq_descx[0];
+};
+
+#endif /* _ZXDH_QUEUE_H_ */
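As a companion to the structures above, a minimal sketch (illustrative only, assuming the usual virtio-style packed-ring layout; the driver's real size and alignment math is not part of this header) of how the vring_packed pointers can be derived from one contiguous, zeroed ring allocation:

    static void vring_packed_init(struct vring_packed *vr, uint32_t num,
    		void *ring_mem)
    {
    	vr->num = num;
    	/* the descriptor table sits at the start of the ring memory */
    	vr->desc = (struct vring_packed_desc *)ring_mem;
    	/* driver (avail) and device (used) event structs follow it */
    	vr->driver = (struct vring_packed_desc_event *)
    			((uint8_t *)ring_mem +
    			 num * sizeof(struct vring_packed_desc));
    	vr->device = vr->driver + 1;
    }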
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
new file mode 100644
index 0000000000..7314f76d2c
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#ifndef _ZXDH_RXTX_H_
+#define _ZXDH_RXTX_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_mbuf_core.h>
+
+struct virtnet_stats {
+ uint64_t packets;
+ uint64_t bytes;
+ uint64_t errors;
+ uint64_t multicast;
+ uint64_t broadcast;
+ uint64_t truncated_err;
+ uint64_t size_bins[8]; /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
+};
+
+struct virtnet_rx {
+ struct virtqueue *vq;
+
+ /* dummy mbuf, for wraparound when processing RX ring. */
+ struct rte_mbuf fake_mbuf;
+
+ uint64_t mbuf_initializer; /* value to init mbufs. */
+ struct rte_mempool *mpool; /* mempool for mbuf allocation */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate RX ring. */
+};
+
+struct virtnet_tx {
+ struct virtqueue *vq;
+ const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */
+ rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate TX ring. */
+};
+
+#endif /* _ZXDH_RXTX_H_ */
--
2.27.0
* [PATCH v6 4/9] net/zxdh: add msg chan and msg hwlock init
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-16 8:18 ` [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
@ 2024-10-16 8:18 ` Junlong Wang
2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
` (4 subsequent siblings)
6 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 8734 bytes --]
Add msg channel and hwlock init implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_ethdev.c | 15 +++
drivers/net/zxdh/zxdh_ethdev.h | 1 +
drivers/net/zxdh/zxdh_msg.c | 161 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 65 +++++++++++++
5 files changed, 243 insertions(+)
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 7db4e7bc71..2e0c8fddae 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -16,4 +16,5 @@ endif
sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
+ 'zxdh_msg.c',
)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index f34b2af7b3..e69d11e9fd 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -9,6 +9,7 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
#include "zxdh_pci.h"
+#include "zxdh_msg.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
@@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret < 0)
goto err_zxdh_init;
+ ret = zxdh_msg_chan_init();
+ if (ret < 0) {
+ PMD_INIT_LOG(ERR, "Failed to init bar msg chan");
+ goto err_zxdh_init;
+ }
+ hw->msg_chan_init = 1;
+
+ ret = zxdh_msg_chan_hwlock_init(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
+ zxdh_bar_msg_chan_exit();
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
return ret;
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 18d9916713..24eb3a5ca0 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -51,6 +51,7 @@ struct zxdh_hw {
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
uint8_t is_pf;
+ uint8_t msg_chan_init;
};
#ifdef __cplusplus
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
new file mode 100644
index 0000000000..f2387803fc
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <pthread.h>
+#include <rte_cycles.h>
+#include <inttypes.h>
+#include <rte_malloc.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+
+#define REPS_INFO_FLAG_USABLE 0x00
+#define BAR_SEQID_NUM_MAX 256
+
+#define ZXDH_BAR0_INDEX 0
+
+#define PCIEID_IS_PF_MASK (0x0800)
+#define PCIEID_PF_IDX_MASK (0x0700)
+#define PCIEID_VF_IDX_MASK (0x00ff)
+#define PCIEID_EP_IDX_MASK (0x7000)
+/* PCIEID bit field offset */
+#define PCIEID_PF_IDX_OFFSET (8)
+#define PCIEID_EP_IDX_OFFSET (12)
+
+#define MULTIPLY_BY_8(x) ((x) << 3)
+#define MULTIPLY_BY_32(x) ((x) << 5)
+#define MULTIPLY_BY_256(x) ((x) << 8)
+
+#define MAX_EP_NUM (4)
+#define MAX_HARD_SPINLOCK_NUM (511)
+
+#define BAR0_SPINLOCK_OFFSET (0x4000)
+#define FW_SHRD_OFFSET (0x5000)
+#define FW_SHRD_INNER_HW_LABEL_PAT (0x800)
+#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT)
+
+struct dev_stat {
+ bool is_mpf_scanned;
+ bool is_res_init;
+ int16_t dev_cnt; /* probe cnt */
+};
+struct dev_stat g_dev_stat = {0};
+
+struct seqid_item {
+ void *reps_addr;
+ uint16_t id;
+ uint16_t buffer_len;
+ uint16_t flag;
+};
+
+struct seqid_ring {
+ uint16_t cur_id;
+ pthread_spinlock_t lock;
+ struct seqid_item reps_info_tbl[BAR_SEQID_NUM_MAX];
+};
+struct seqid_ring g_seqid_ring = {0};
+
+static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
+{
+ uint16_t lock_id = 0;
+ uint16_t pf_idx = (src_pcieid & PCIEID_PF_IDX_MASK) >> PCIEID_PF_IDX_OFFSET;
+ uint16_t ep_idx = (src_pcieid & PCIEID_EP_IDX_MASK) >> PCIEID_EP_IDX_OFFSET;
+
+ switch (dst) {
+ /* msg to risc */
+ case MSG_CHAN_END_RISC:
+ lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx;
+ break;
+ /* msg to pf/vf */
+ case MSG_CHAN_END_VF:
+ case MSG_CHAN_END_PF:
+ lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx + MULTIPLY_BY_8(1 + MAX_EP_NUM);
+ break;
+ default:
+ lock_id = 0;
+ break;
+ }
+ if (lock_id >= MAX_HARD_SPINLOCK_NUM)
+ lock_id = 0;
+
+ return lock_id;
+}
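As a worked example of this encoding (values illustrative): a function with src_pcieid 0x2300 has ep_idx = (0x2300 & 0x7000) >> 12 = 2 and pf_idx = (0x2300 & 0x0700) >> 8 = 3, so it gets hard lock 8 * 2 + 3 = 19 for the RISC channel and 19 + 8 * (1 + MAX_EP_NUM) = 59 for the PF/VF channel (MAX_EP_NUM being 4 above).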
+
+static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value)
+{
+ *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value;
+}
+
+static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data)
+{
+ *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
+}
+
+static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
+{
+ label_write((uint64_t)label_addr, virt_lock_id, 0);
+ spinlock_write(virt_addr, virt_lock_id, 0);
+ return 0;
+}
+
+/**
+ * Fun: PF init hard_spinlock addr
+ */
+static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
+{
+ int lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_RISC);
+
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + HW_LABEL_OFFSET);
+ lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_VF);
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + HW_LABEL_OFFSET);
+ return 0;
+}
+
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->is_pf)
+ return 0;
+ return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX]));
+}
+
+pthread_spinlock_t chan_lock;
+int zxdh_msg_chan_init(void)
+{
+ uint16_t seq_id = 0;
+
+ g_dev_stat.dev_cnt++;
+ if (g_dev_stat.is_res_init)
+ return BAR_MSG_OK;
+
+ pthread_spin_init(&chan_lock, 0);
+ g_seqid_ring.cur_id = 0;
+ pthread_spin_init(&g_seqid_ring.lock, 0);
+
+ for (seq_id = 0; seq_id < BAR_SEQID_NUM_MAX; seq_id++) {
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[seq_id];
+
+ reps_info->id = seq_id;
+ reps_info->flag = REPS_INFO_FLAG_USABLE;
+ }
+ g_dev_stat.is_res_init = true;
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_msg_chan_exit(void)
+{
+ if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0))
+ return BAR_MSG_OK;
+
+ g_dev_stat.is_res_init = false;
+ return BAR_MSG_OK;
+}
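The init/exit pair above is reference counted through g_dev_stat.dev_cnt, so a typical (hypothetical) probe/remove flow pairs them like this:

    zxdh_msg_chan_init();      /* dev_cnt++; one-time setup on first call */
    /* ... per-device msg channel usage ... */
    zxdh_bar_msg_chan_exit();  /* resources marked free only when the last
                                * device drops dev_cnt back to zero */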
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
new file mode 100644
index 0000000000..a619e6ae21
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_MSG_H_
+#define _ZXDH_MSG_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <ethdev_driver.h>
+
+enum DRIVER_TYPE {
+ MSG_CHAN_END_MPF = 0,
+ MSG_CHAN_END_PF,
+ MSG_CHAN_END_VF,
+ MSG_CHAN_END_RISC,
+};
+
+enum BAR_MSG_RTN {
+ BAR_MSG_OK = 0,
+ BAR_MSG_ERR_MSGID,
+ BAR_MSG_ERR_NULL,
+ BAR_MSG_ERR_TYPE, /* Message type exception */
+ BAR_MSG_ERR_MODULE, /* Module ID exception */
+ BAR_MSG_ERR_BODY_NULL, /* Message body exception */
+ BAR_MSG_ERR_LEN, /* Message length exception */
+ BAR_MSG_ERR_TIME_OUT, /* Message sending timed out */
+ BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions */
+ BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer */
+ BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration */
+ BAR_MSG_ERR_UNGISTER, /* Repeated deregistration */
+ /**
+ * The sending interface parameter boundary structure pointer is empty
+ */
+ BAR_MSG_ERR_NULL_PARA,
+ BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short */
+ /**
+ * Unable to find the corresponding message processing function for this module
+ */
+ BAR_MSG_ERR_MODULE_NOEXIST,
+ /**
+ * The virtual address in the parameters passed in by the sending interface is empty
+ */
+ BAR_MSG_ERR_VIRTADDR_NULL,
+ BAR_MSG_ERR_REPLY, /* sync msg resp_error */
+ BAR_MSG_ERR_MPF_NOT_SCANNED,
+ BAR_MSG_ERR_KERNEL_READY,
+ BAR_MSG_ERR_USR_RET_ERR,
+ BAR_MSG_ERR_ERR_PCIEID,
+ BAR_MSG_ERR_SOCKET, /* netlink socket err */
+};
+
+int zxdh_msg_chan_init(void);
+int zxdh_bar_msg_chan_exit(void);
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_MSG_H_ */
--
2.27.0
* [PATCH v6 5/9] net/zxdh: add msg chan enable implementation
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-16 8:18 ` [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
2024-10-16 8:18 ` [PATCH v6 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
@ 2024-10-16 8:18 ` Junlong Wang
2024-10-21 8:50 ` Thomas Monjalon
2024-10-21 10:56 ` Junlong Wang
2024-10-16 8:18 ` [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
` (3 subsequent siblings)
6 siblings, 2 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 26958 bytes --]
Add msg chan enable implementation to support
sending messages to get device infos.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 6 +
drivers/net/zxdh/zxdh_ethdev.h | 12 +
drivers/net/zxdh/zxdh_msg.c | 643 ++++++++++++++++++++++++++++++++-
drivers/net/zxdh/zxdh_msg.h | 127 +++++++
4 files changed, 787 insertions(+), 1 deletion(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index e69d11e9fd..5002e76e23 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_msg_chan_enable(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_chan_enable failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 24eb3a5ca0..a51181f1ce 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -29,10 +29,22 @@ extern "C" {
#define ZXDH_NUM_BARS 2
+union VPORT {
+ uint16_t vport;
+ struct {
+ uint16_t vfid:8;
+ uint16_t pfid:3;
+ uint16_t vf_flag:1;
+ uint16_t epid:3;
+ uint16_t direct_flag:1;
+ };
+};
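Assuming the usual little-endian bit-field allocation, the low byte of vport is the vfid and the high byte packs pfid, vf_flag, epid and direct_flag; an illustrative decode:

    union VPORT v = { .vport = 0x1A05 };
    /* vfid = 0x05, pfid = 2, vf_flag = 1, epid = 1, direct_flag = 0 */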
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ union VPORT vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
uint64_t host_features;
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index f2387803fc..3ac4c8d796 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -32,13 +32,85 @@
#define MULTIPLY_BY_32(x) ((x) << 5)
#define MULTIPLY_BY_256(x) ((x) << 8)
-#define MAX_EP_NUM (4)
+#define MAX_EP_NUM (4)
#define MAX_HARD_SPINLOCK_NUM (511)
+#define LOCK_PRIMARY_ID_MASK (0x8000)
+/* bar offset */
+#define BAR0_CHAN_RISC_OFFSET (0x2000)
+#define BAR0_CHAN_PFVF_OFFSET (0x3000)
#define BAR0_SPINLOCK_OFFSET (0x4000)
#define FW_SHRD_OFFSET (0x5000)
#define FW_SHRD_INNER_HW_LABEL_PAT (0x800)
#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT)
+#define ZXDH_CTRLCH_OFFSET (0x2000)
+#define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET)
+#define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET)
+#define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET)
+#define CHAN_PFVF_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_PFVF_OFFSET)
+
+#define REPS_HEADER_LEN_OFFSET 1
+#define REPS_HEADER_PAYLOAD_OFFSET 4
+#define REPS_HEADER_REPLYED 0xff
+
+#define BAR_MSG_CHAN_USABLE 0
+#define BAR_MSG_CHAN_USED 1
+
+#define BAR_MSG_POL_MASK (0x10)
+#define BAR_MSG_POL_OFFSET (4)
+
+#define BAR_ALIGN_WORD_MASK 0xfffffffc
+#define BAR_MSG_VALID_MASK 1
+#define BAR_MSG_VALID_OFFSET 0
+
+#define REPS_INFO_FLAG_USABLE 0x00
+#define REPS_INFO_FLAG_USED 0xa0
+
+#define BAR_PF_NUM 7
+#define BAR_VF_NUM 256
+#define BAR_INDEX_PF_TO_VF 0
+#define BAR_INDEX_MPF_TO_MPF 0xff
+#define BAR_INDEX_MPF_TO_PFVF 0
+#define BAR_INDEX_PFVF_TO_MPF 0
+
+#define MAX_HARD_SPINLOCK_ASK_TIMES (1000)
+#define SPINLOCK_POLLING_SPAN_US (100)
+
+#define BAR_MSG_SRC_NUM 3
+#define BAR_MSG_SRC_MPF 0
+#define BAR_MSG_SRC_PF 1
+#define BAR_MSG_SRC_VF 2
+#define BAR_MSG_SRC_ERR 0xff
+#define BAR_MSG_DST_NUM 3
+#define BAR_MSG_DST_RISC 0
+#define BAR_MSG_DST_MPF 2
+#define BAR_MSG_DST_PFVF 1
+#define BAR_MSG_DST_ERR 0xff
+
+#define LOCK_TYPE_HARD (1)
+#define LOCK_TYPE_SOFT (0)
+#define BAR_INDEX_TO_RISC 0
+
+#define BAR_SUBCHAN_INDEX_SEND 0
+#define BAR_SUBCHAN_INDEX_RECV 1
+
+uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
+ {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND},
+ {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV},
+ {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV, BAR_SUBCHAN_INDEX_RECV}
+};
+
+uint8_t chan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
+ {BAR_INDEX_TO_RISC, BAR_INDEX_MPF_TO_PFVF, BAR_INDEX_MPF_TO_MPF},
+ {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF},
+ {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF}
+};
+
+uint8_t lock_type_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
+ {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD},
+ {LOCK_TYPE_SOFT, LOCK_TYPE_SOFT, LOCK_TYPE_HARD},
+ {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD}
+};
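All three tables are indexed as [src_index][dst_index]. For example, a PF (BAR_MSG_SRC_PF) sending to the RISC (BAR_MSG_DST_RISC) uses subchannel BAR_SUBCHAN_INDEX_SEND of channel BAR_INDEX_TO_RISC under the process-local soft lock, while a VF sending to the RISC takes the hardware spinlock (LOCK_TYPE_HARD) instead.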
struct dev_stat {
bool is_mpf_scanned;
@@ -97,6 +169,11 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da
*(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
}
+static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id)
+{
+ return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id);
+}
+
static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
{
label_write((uint64_t)label_addr, virt_lock_id, 0);
@@ -104,6 +181,28 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u
return 0;
}
+static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr,
+ uint64_t label_addr, uint16_t primary_id)
+{
+ uint32_t lock_rd_cnt = 0;
+
+ do {
+ /* read to lock */
+ uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id);
+
+ if (spl_val == 0) {
+ label_write((uint64_t)label_addr, virt_lock_id, primary_id);
+ break;
+ }
+ rte_delay_us_block(SPINLOCK_POLLING_SPAN_US);
+ lock_rd_cnt++;
+ } while (lock_rd_cnt < MAX_HARD_SPINLOCK_ASK_TIMES);
+ if (lock_rd_cnt >= MAX_HARD_SPINLOCK_ASK_TIMES)
+ return -1;
+
+ return 0;
+}
+
/**
* Fun: PF init hard_spinlock addr
*/
@@ -119,6 +218,548 @@ static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
return 0;
}
+static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid)
+{
+ struct seqid_item *seqid_reps_info = NULL;
+
+ pthread_spin_lock(&g_seqid_ring.lock);
+ uint16_t g_id = g_seqid_ring.cur_id;
+ uint16_t count = 0;
+
+ do {
+ count++;
+ ++g_id;
+ g_id %= BAR_SEQID_NUM_MAX;
+ seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id];
+ } while ((seqid_reps_info->flag != REPS_INFO_FLAG_USABLE) && (count < BAR_SEQID_NUM_MAX));
+ int rc;
+
+ if (count >= BAR_SEQID_NUM_MAX) {
+ rc = -1;
+ goto out;
+ }
+ seqid_reps_info->flag = REPS_INFO_FLAG_USED;
+ g_seqid_ring.cur_id = g_id;
+ *msgid = g_id;
+ rc = BAR_MSG_OK;
+
+out:
+ pthread_spin_unlock(&g_seqid_ring.lock);
+ return rc;
+}
+
+static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id)
+{
+ int ret = zxdh_bar_chan_msgid_allocate(msg_id);
+
+ if (ret != BAR_MSG_OK)
+ return BAR_MSG_ERR_MSGID;
+
+ PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id);
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id];
+
+ reps_info->reps_addr = result->recv_buffer;
+ reps_info->buffer_len = result->buffer_len;
+ return BAR_MSG_OK;
+}
+
+static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src)
+{
+ uint8_t src_index = 0;
+
+ switch (src) {
+ case MSG_CHAN_END_MPF:
+ src_index = BAR_MSG_SRC_MPF;
+ break;
+ case MSG_CHAN_END_PF:
+ src_index = BAR_MSG_SRC_PF;
+ break;
+ case MSG_CHAN_END_VF:
+ src_index = BAR_MSG_SRC_VF;
+ break;
+ default:
+ src_index = BAR_MSG_SRC_ERR;
+ break;
+ }
+ return src_index;
+}
+
+static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst)
+{
+ uint8_t dst_index = 0;
+
+ switch (dst) {
+ case MSG_CHAN_END_MPF:
+ dst_index = BAR_MSG_DST_MPF;
+ break;
+ case MSG_CHAN_END_PF:
+ dst_index = BAR_MSG_DST_PFVF;
+ break;
+ case MSG_CHAN_END_VF:
+ dst_index = BAR_MSG_DST_PFVF;
+ break;
+ case MSG_CHAN_END_RISC:
+ dst_index = BAR_MSG_DST_RISC;
+ break;
+ default:
+ dst_index = BAR_MSG_DST_ERR;
+ break;
+ }
+ return dst_index;
+}
+
+static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result)
+{
+ uint8_t src_index = 0;
+ uint8_t dst_index = 0;
+
+ if (in == NULL || result == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null para.");
+ return BAR_MSG_ERR_NULL_PARA;
+ }
+ src_index = zxdh_bar_msg_src_index_trans(in->src);
+ dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+
+ if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist.");
+ return BAR_MSG_ERR_TYPE;
+ }
+ if (in->module_id >= BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id);
+ return BAR_MSG_ERR_MODULE;
+ }
+ if (in->payload_addr == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null message.");
+ return BAR_MSG_ERR_BODY_NULL;
+ }
+ if (in->payload_len > BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len);
+ return BAR_MSG_ERR_LEN;
+ }
+ if (in->virt_addr == 0 || result->recv_buffer == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL.");
+ return BAR_MSG_ERR_VIRTADDR_NULL;
+ }
+ if (result->buffer_len < REPS_HEADER_PAYLOAD_OFFSET)
+ PMD_MSG_LOG(ERR,
+ "recv buffer len is short than minimal 4 bytes.");
+
+ return BAR_MSG_OK;
+}
+
+static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id)
+{
+ return virt_addr + (2 * chan_id + subchan_id) * BAR_MSG_ADDR_CHAN_INTERVAL;
+}
+
+static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+ uint16_t chan_id = chan_id_tbl[src_index][dst_index];
+ uint16_t subchan_id = subchan_id_tbl[src_index][dst_index];
+
+ *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id);
+ return BAR_MSG_OK;
+}
+
+static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid);
+ if (dst == MSG_CHAN_END_RISC)
+ ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + CHAN_RISC_LABEL_OFFSET,
+ src_pcieid | LOCK_PRIMARY_ID_MASK);
+ else
+ ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + CHAN_PFVF_LABEL_OFFSET,
+ src_pcieid | LOCK_PRIMARY_ID_MASK);
+
+ return ret;
+}
+
+static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid);
+ if (dst == MSG_CHAN_END_RISC)
+ zxdh_spinlock_unlock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + CHAN_RISC_LABEL_OFFSET);
+ else
+ zxdh_spinlock_unlock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + CHAN_PFVF_LABEL_OFFSET);
+}
+
+pthread_spinlock_t chan_lock;
+static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist.");
+ return BAR_MSG_ERR_TYPE;
+ }
+ uint16_t idx = lock_type_tbl[src_index][dst_index];
+
+ if (idx == LOCK_TYPE_SOFT)
+ pthread_spin_lock(&chan_lock);
+ else
+ ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr);
+
+ if (ret != 0)
+ PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid);
+
+ return ret;
+}
+
+static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist.");
+ return BAR_MSG_ERR_TYPE;
+ }
+ uint16_t idx = lock_type_tbl[src_index][dst_index];
+
+ if (idx == LOCK_TYPE_SOFT)
+ pthread_spin_unlock(&chan_lock);
+ else
+ zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr);
+
+ return BAR_MSG_OK;
+}
+
+static void zxdh_bar_chan_msgid_free(uint16_t msg_id)
+{
+ struct seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ pthread_spin_lock(&g_seqid_ring.lock);
+ seqid_reps_info->flag = REPS_INFO_FLAG_USABLE;
+ PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id);
+ pthread_spin_unlock(&g_seqid_ring.lock);
+}
+
+static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data)
+{
+ uint32_t align_offset = (offset & BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "align_offset exceeds channel size!");
+ return -1;
+ }
+ *(uint32_t *)(subchan_addr + align_offset) = data;
+ return 0;
+}
+
+static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata)
+{
+ uint32_t align_offset = (offset & BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "align_offset exceeds channel size!");
+ return -1;
+ }
+ *pdata = *(uint32_t *)(subchan_addr + align_offset);
+ return 0;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr,
+ struct bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx));
+
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_get(uint64_t subchan_addr,
+ struct bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx);
+
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_write(subchan_addr,
+ 4 * ix + BAR_MSG_PLAYLOAD_OFFSET, *(data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ for (ix = 0; ix < remain; ix++)
+ remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix);
+
+ zxdh_bar_chan_reg_write(subchan_addr, 4 * count +
+ BAR_MSG_PLAYLOAD_OFFSET, remain_data);
+ }
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * ix + BAR_MSG_PLAYLOAD_OFFSET, (data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * count +
+ BAR_MSG_PLAYLOAD_OFFSET, &remain_data);
+ for (ix = 0; ix < remain; ix++)
+ *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix);
+ }
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data);
+ data &= (~BAR_MSG_VALID_MASK);
+ data |= (uint32_t)valid_label;
+ zxdh_bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data);
+ return BAR_MSG_OK;
+}
+
+static uint8_t temp_msg[BAR_MSG_ADDR_CHAN_INTERVAL];
+static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr,
+ uint16_t payload_len, struct bar_msg_header *msg_header)
+{
+ uint16_t ret = 0;
+
+ ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header);
+
+ ret = zxdh_bar_chan_msg_header_get(subchan_addr,
+ (struct bar_msg_header *)temp_msg);
+
+ ret = zxdh_bar_chan_msg_payload_set(subchan_addr,
+ (uint8_t *)(payload_addr), payload_len);
+
+ ret = zxdh_bar_chan_msg_payload_get(subchan_addr,
+ temp_msg, payload_len);
+
+ ret = zxdh_bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USED);
+ return ret;
+}
+
+static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data);
+ if (BAR_MSG_CHAN_USABLE == (data & BAR_MSG_VALID_MASK))
+ return BAR_MSG_CHAN_USABLE;
+
+ return BAR_MSG_CHAN_USED;
+}
+
+static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data);
+ data &= (~(uint32_t)BAR_MSG_POL_MASK);
+ data |= ((uint32_t)label << BAR_MSG_POL_OFFSET);
+ zxdh_bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data);
+ return BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr,
+ uint64_t recv_buffer, uint16_t buffer_len)
+{
+ struct bar_msg_header msg_header = {0};
+ uint16_t msg_id = 0;
+ uint16_t msg_len = 0;
+
+ zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header);
+ msg_id = msg_header.msg_id;
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ if (reps_info->flag != REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id);
+ return BAR_MSG_ERR_REPLY;
+ }
+ msg_len = msg_header.len;
+
+ if (msg_len > buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u",
+ buffer_len, msg_len + 4);
+ return BAR_MSG_ERR_REPSBUFF_LEN;
+ }
+ uint8_t *recv_msg = (uint8_t *)recv_buffer;
+
+ zxdh_bar_chan_msg_payload_get(subchan_addr,
+ recv_msg + REPS_HEADER_PAYLOAD_OFFSET, msg_len);
+ *(uint16_t *)(recv_msg + REPS_HEADER_LEN_OFFSET) = msg_len;
+ *recv_msg = REPS_HEADER_REPLYED; /* set reps's valid */
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result)
+{
+ struct bar_msg_header msg_header = {0};
+ uint16_t seq_id = 0;
+ uint64_t subchan_addr = 0;
+ uint32_t time_out_cnt = 0;
+ uint16_t valid = 0;
+ int ret = 0;
+
+ ret = zxdh_bar_chan_send_para_check(in, result);
+ if (ret != BAR_MSG_OK)
+ goto exit;
+
+ ret = zxdh_bar_chan_save_recv_info(result, &seq_id);
+ if (ret != BAR_MSG_OK)
+ goto exit;
+
+ zxdh_bar_chan_subchan_addr_get(in, &subchan_addr);
+
+ msg_header.sync = BAR_CHAN_MSG_SYNC;
+ msg_header.emec = in->emec;
+ msg_header.usr = 0;
+ msg_header.rsv = 0;
+ msg_header.module_id = in->module_id;
+ msg_header.len = in->payload_len;
+ msg_header.msg_id = seq_id;
+ msg_header.src_pcieid = in->src_pcieid;
+ msg_header.dst_pcieid = in->dst_pcieid;
+
+ ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+ if (ret != BAR_MSG_OK) {
+ zxdh_bar_chan_msgid_free(seq_id);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header);
+
+ do {
+ rte_delay_us_block(BAR_MSG_POLLING_SPAN);
+ valid = zxdh_bar_msg_valid_stat_get(subchan_addr);
+ ++time_out_cnt;
+ } while (time_out_cnt < BAR_MSG_TIMEOUT_TH && valid == BAR_MSG_CHAN_USED);
+
+ if (time_out_cnt == BAR_MSG_TIMEOUT_TH && valid != BAR_MSG_CHAN_USABLE) {
+ zxdh_bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USABLE);
+ zxdh_bar_chan_msg_poltag_set(subchan_addr, 0);
+ PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out.");
+ ret = BAR_MSG_ERR_TIME_OUT;
+ } else {
+ ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr,
+ (uint64_t)result->recv_buffer, result->buffer_len);
+ }
+ zxdh_bar_chan_msgid_free(seq_id);
+ zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+
+exit:
+ return ret;
+}
+
+static uint16_t bar_get_sum(uint8_t *ptr, uint8_t len)
+{
+ uint64_t sum = 0;
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ sum += *(ptr + idx);
+
+ return (uint16_t)sum;
+}
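A worked example of the token check performed below (illustrative values, little-endian byte order assumed): for msix_msg = {pcie_id = 0x0023, vector_risc = 3, vector_pfvf = 1, vector_mpf = 2} the eight payload bytes are 23 00 03 00 01 00 02 00, so the expected check token is 0x23 + 0x03 + 0x01 + 0x02 = 0x29.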
+
+static int zxdh_bar_chan_enable(struct msix_para *_msix_para, uint16_t *vport)
+{
+ struct bar_recv_msg recv_msg = {0};
+ int ret = 0;
+ int check_token = 0;
+ int sum_res = 0;
+
+ if (!_msix_para)
+ return BAR_MSG_ERR_NULL;
+
+ struct msix_msg msix_msg = {
+ .pcie_id = _msix_para->pcie_id,
+ .vector_risc = _msix_para->vector_risc,
+ .vector_pfvf = _msix_para->vector_pfvf,
+ .vector_mpf = _msix_para->vector_mpf,
+ };
+ struct zxdh_pci_bar_msg in = {
+ .virt_addr = _msix_para->virt_addr,
+ .payload_addr = &msix_msg,
+ .payload_len = sizeof(msix_msg),
+ .emec = 0,
+ .src = _msix_para->driver_type,
+ .dst = MSG_CHAN_END_RISC,
+ .module_id = BAR_MODULE_MISX,
+ .src_pcieid = _msix_para->pcie_id,
+ .dst_pcieid = 0,
+ .usr = 0,
+ };
+
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = &recv_msg,
+ .buffer_len = sizeof(recv_msg),
+ };
+
+ ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+ if (ret != BAR_MSG_OK)
+ return -ret;
+
+ check_token = recv_msg.msix_reps.check;
+ sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg));
+
+ if (check_token != sum_res) {
+ PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token);
+ return BAR_MSG_ERR_REPLY;
+ }
+ *vport = recv_msg.msix_reps.vport;
+ PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", _msix_para->pcie_id);
+ return BAR_MSG_OK;
+}
+
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct msix_para msix_info = {
+ .vector_risc = MSIX_FROM_RISCV,
+ .vector_pfvf = MSIX_FROM_PFVF,
+ .vector_mpf = MSIX_FROM_MPF,
+ .pcie_id = hw->pcie_id,
+ .driver_type = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF,
+ .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET),
+ };
+
+ return zxdh_bar_chan_enable(&msix_info, &hw->vport.vport);
+}
+
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
{
struct zxdh_hw *hw = dev->data->dev_private;
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index a619e6ae21..06be0f18c8 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -13,6 +13,19 @@ extern "C" {
#include <ethdev_driver.h>
+#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+
+#define BAR_MSG_POLLING_SPAN 100
+#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN)
+#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN)
+#define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN)
+
+#define BAR_CHAN_MSG_SYNC 0
+
+#define BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */
+#define BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct bar_msg_header))
+#define BAR_MSG_PAYLOAD_MAX_LEN (BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header))
+
enum DRIVER_TYPE {
MSG_CHAN_END_MPF = 0,
MSG_CHAN_END_PF,
@@ -20,6 +33,13 @@ enum DRIVER_TYPE {
MSG_CHAN_END_RISC,
};
+enum MSG_VEC {
+ MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
+ MSIX_FROM_MPF,
+ MSIX_FROM_RISCV,
+ MSG_VEC_NUM,
+};
+
enum BAR_MSG_RTN {
BAR_MSG_OK = 0,
BAR_MSG_ERR_MSGID,
@@ -54,10 +74,117 @@ enum BAR_MSG_RTN {
BAR_MSG_ERR_SOCKET, /* netlink sockte err */
};
+enum bar_module_id {
+ BAR_MODULE_DBG = 0, /* 0: debug */
+ BAR_MODULE_TBL, /* 1: resource table */
+ BAR_MODULE_MISX, /* 2: config msix */
+ BAR_MODULE_SDA, /* 3: */
+ BAR_MODULE_RDMA, /* 4: */
+ BAR_MODULE_DEMO, /* 5: channel test */
+ BAR_MODULE_SMMU, /* 6: */
+ BAR_MODULE_MAC, /* 7: mac rx/tx stats */
+ BAR_MODULE_VDPA, /* 8: vdpa live migration */
+ BAR_MODULE_VQM, /* 9: vqm live migration */
+ BAR_MODULE_NP, /* 10: vf msg callback np */
+ BAR_MODULE_VPORT, /* 11: get vport */
+ BAR_MODULE_BDF, /* 12: get bdf */
+ BAR_MODULE_RISC_READY, /* 13: */
+ BAR_MODULE_REVERSE, /* 14: byte stream reverse */
+ BAR_MDOULE_NVME, /* 15: */
+ BAR_MDOULE_NPSDK, /* 16: */
+ BAR_MODULE_NP_TODO, /* 17: */
+ MODULE_BAR_MSG_TO_PF, /* 18: */
+ MODULE_BAR_MSG_TO_VF, /* 19: */
+
+ MODULE_FLASH = 32,
+ BAR_MODULE_OFFSET_GET = 33,
+ BAR_EVENT_OVS_WITH_VCB = 36,
+
+ BAR_MSG_MODULE_NUM = 100,
+};
+
+struct msix_para {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+ uint64_t virt_addr;
+ uint16_t driver_type; /* refer to DRIVER_TYPE */
+};
+
+struct msix_msg {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+};
+
+struct zxdh_pci_bar_msg {
+ uint64_t virt_addr; /* bar addr */
+ void *payload_addr;
+ uint16_t payload_len;
+ uint16_t emec;
+ uint16_t src; /* refer to BAR_DRIVER_TYPE */
+ uint16_t dst; /* refer to BAR_DRIVER_TYPE */
+ uint16_t module_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid;
+ uint16_t usr;
+};
+
+struct bar_msix_reps {
+ uint16_t pcie_id;
+ uint16_t check;
+ uint16_t vport;
+ uint16_t rsv;
+} __rte_packed;
+
+struct bar_offset_reps {
+ uint16_t check;
+ uint16_t rsv;
+ uint32_t offset;
+ uint32_t length;
+} __rte_packed;
+
+struct bar_recv_msg {
+ uint8_t reps_ok;
+ uint16_t reps_len;
+ uint8_t rsv;
+ union {
+ struct bar_msix_reps msix_reps;
+ struct bar_offset_reps offset_reps;
+ } __rte_packed;
+} __rte_packed;
+
+struct zxdh_msg_recviver_mem {
+ void *recv_buffer; /* first 4B is head, followed by payload */
+ uint64_t buffer_len;
+};
+
+struct bar_msg_header {
+ uint8_t valid : 1; /* used by zxdh_bar_chan_msg_valid_set/get */
+ uint8_t sync : 1;
+ uint8_t emec : 1; /* emergency? */
+ uint8_t ack : 1; /* ack msg? */
+ uint8_t poll : 1;
+ uint8_t usr : 1;
+ uint8_t rsv;
+ uint16_t module_id;
+ uint16_t len;
+ uint16_t msg_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid; /* used in PF-->VF */
+};
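Assuming the compiler packs the six 1-bit fields into a single byte and inserts no padding, sizeof(struct bar_msg_header) is 12 bytes, which makes BAR_MSG_PLAYLOAD_OFFSET 12 and BAR_MSG_PAYLOAD_MAX_LEN 2 * 1024 - 12 = 2036 bytes per channel.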
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result);
+
#ifdef __cplusplus
}
#endif
--
2.27.0
* [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
` (2 preceding siblings ...)
2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
@ 2024-10-16 8:18 ` Junlong Wang
2024-10-21 8:52 ` Thomas Monjalon
2024-10-16 8:18 ` [PATCH v6 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
` (2 subsequent siblings)
6 siblings, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 13063 bytes --]
Add zxdh get device backend infos,
using the msg chan to send the query messages.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_common.c | 249 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_common.h | 30 ++++
drivers/net/zxdh/zxdh_ethdev.c | 35 +++++
drivers/net/zxdh/zxdh_ethdev.h | 5 +
drivers/net/zxdh/zxdh_msg.c | 3 -
drivers/net/zxdh/zxdh_msg.h | 24 ++++
7 files changed, 344 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 2e0c8fddae..a16db47f89 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -17,4 +17,5 @@ sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
'zxdh_msg.c',
+ 'zxdh_common.c',
)
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
new file mode 100644
index 0000000000..61993980c3
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <string.h>
+
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+#include "zxdh_common.h"
+
+#define ZXDH_MSG_RSP_SIZE_MAX 512
+
+#define ZXDH_COMMON_TABLE_READ 0
+#define ZXDH_COMMON_TABLE_WRITE 1
+
+#define ZXDH_COMMON_FIELD_PHYPORT 6
+
+#define RSC_TBL_CONTENT_LEN_MAX (257 * 2)
+
+#define REPS_HEADER_PAYLOAD_OFFSET 4
+#define TBL_MSG_PRO_SUCCESS 0xaa
+
+struct zxdh_common_msg {
+ uint8_t type; /* 0:read table 1:write table */
+ uint8_t field;
+ uint16_t pcie_id;
+ uint16_t slen; /* Data length for write table */
+ uint16_t reserved;
+} __rte_packed;
+
+struct zxdh_common_rsp_hdr {
+ uint8_t rsp_status;
+ uint16_t rsp_len;
+ uint8_t reserved;
+ uint8_t payload_status;
+ uint8_t rsv;
+ uint16_t payload_len;
+} __rte_packed;
+
+struct tbl_msg_header {
+ uint8_t type; /* r/w */
+ uint8_t field;
+ uint16_t pcieid;
+ uint16_t slen;
+ uint16_t rsv;
+};
+struct tbl_msg_reps_header {
+ uint8_t check;
+ uint8_t rsv;
+ uint16_t len;
+};
+
+static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ uint8_t type,
+ uint8_t field,
+ void *buff,
+ uint16_t buff_size)
+{
+ uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size;
+
+ desc->payload_addr = rte_zmalloc(NULL, msg_len, 0);
+ if (unlikely(desc->payload_addr == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate msg_data");
+ return -ENOMEM;
+ }
+ desc->payload_len = msg_len;
+ struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr;
+
+ msg_data->type = type;
+ msg_data->field = field;
+ msg_data->pcie_id = hw->pcie_id;
+ msg_data->slen = buff_size;
+ if (buff_size != 0)
+ rte_memcpy(msg_data + 1, buff, buff_size);
+
+ return 0;
+}
+
+static int32_t zxdh_send_command(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ enum bar_module_id module_id,
+ struct zxdh_msg_recviver_mem *msg_rsp)
+{
+ desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+ desc->src = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF;
+ desc->dst = MSG_CHAN_END_RISC;
+ desc->module_id = module_id;
+ desc->src_pcieid = hw->pcie_id;
+
+ msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX;
+ msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0);
+ if (unlikely(msg_rsp->recv_buffer == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate messages response");
+ return -ENOMEM;
+ }
+
+ if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response");
+ rte_free(msg_rsp->recv_buffer);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp,
+ void *buff, uint16_t len)
+{
+ struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer;
+
+ if (rsp_hdr->payload_status != 0xaa || rsp_hdr->payload_len != len) {
+ PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d",
+ rsp_hdr->payload_status, rsp_hdr->payload_len);
+ return -1;
+ }
+ if (len != 0)
+ rte_memcpy(buff, rsp_hdr + 1, len);
+
+ return 0;
+}
+
+static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_msg_recviver_mem msg_rsp;
+ struct zxdh_pci_bar_msg desc;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size);
+
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT,
+ (void *)phyport, sizeof(*phyport));
+ return ret;
+}
+
+static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ param->pcie_id = hw->pcie_id;
+ param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET;
+ param->src_type = BAR_MODULE_TBL;
+}
+
+static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len)
+{
+ if (!res || !dev)
+ return BAR_MSG_ERR_NULL;
+
+ struct tbl_msg_header tbl_msg = {
+ .type = TBL_TYPE_READ,
+ .field = field,
+ .pcieid = dev->pcie_id,
+ .slen = 0,
+ .rsv = 0,
+ };
+
+ struct zxdh_pci_bar_msg in = {0};
+
+ in.virt_addr = dev->virt_addr;
+ in.payload_addr = &tbl_msg;
+ in.payload_len = sizeof(tbl_msg);
+ in.src = dev->src_type;
+ in.dst = MSG_CHAN_END_RISC;
+ in.module_id = BAR_MODULE_TBL;
+ in.src_pcieid = dev->pcie_id;
+
+ uint8_t recv_buf[RSC_TBL_CONTENT_LEN_MAX + 8] = {0};
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = recv_buf,
+ .buffer_len = sizeof(recv_buf),
+ };
+ int ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+
+ if (ret != BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR,
+ "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+ return ret;
+ }
+ struct tbl_msg_reps_header *tbl_reps =
+ (struct tbl_msg_reps_header *)(recv_buf + REPS_HEADER_PAYLOAD_OFFSET);
+
+ if (tbl_reps->check != TBL_MSG_PRO_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+ return ret;
+ }
+ *len = tbl_reps->len;
+ memcpy(res,
+ (recv_buf + REPS_HEADER_PAYLOAD_OFFSET + sizeof(struct tbl_msg_reps_header)), *len);
+ return ret;
+}
+
+static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id)
+{
+ uint8_t reps = 0;
+ uint16_t reps_len = 0;
+
+ if (zxdh_get_res_info(in, TBL_FIELD_PNLID, &reps, &reps_len) != BAR_MSG_OK)
+ return -1;
+
+ *panel_id = reps;
+ return BAR_MSG_OK;
+}
+
+int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid)
+{
+ struct zxdh_res_para param;
+
+ zxdh_fill_res_para(dev, &param);
+ int32_t ret = zxdh_get_res_panel_id(&param, pannelid);
+ return ret;
+}
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
new file mode 100644
index 0000000000..ec7011e820
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef _ZXDH_COMMON_H_
+#define _ZXDH_COMMON_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+struct zxdh_res_para {
+ uint64_t virt_addr;
+ uint16_t pcie_id;
+ uint16_t src_type; /* refer to BAR_DRIVER_TYPE */
+};
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
+int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _ZXDH_COMMON_H_ */
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 5002e76e23..e282012afb 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -10,9 +10,21 @@
#include "zxdh_logs.h"
#include "zxdh_pci.h"
#include "zxdh_msg.h"
+#include "zxdh_common.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+uint16_t vport_to_vfid(union VPORT v)
+{
+ /* epid > 4 is local soft queue. return 1192 */
+ if (v.epid > 4)
+ return 1192;
+ if (v.vf_flag)
+ return v.epid * 256 + v.vfid;
+ else
+ return (v.epid * 8 + v.pfid) + 1152;
+}
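Worked examples of this mapping (illustrative values): a VF with epid = 1 and vfid = 5 maps to 1 * 256 + 5 = 261; a PF with epid = 1 and pfid = 2 maps to (1 * 8 + 2) + 1152 = 1162; anything with epid > 4 collapses to the local soft queue id 1192.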
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
return ret;
}
+static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw)
+{
+ if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get phyport");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport);
+
+ hw->vfid = vport_to_vfid(hw->vport);
+
+ if (zxdh_pannelid_get(eth_dev, &hw->panel_id) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get panel_id");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id);
+
+ return 0;
+}
+
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_agent_comm(eth_dev, hw);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index a51181f1ce..2351393009 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -56,6 +56,7 @@ struct zxdh_hw {
uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint16_t vfid;
uint8_t *isr;
uint8_t weak_barriers;
@@ -63,9 +64,13 @@ struct zxdh_hw {
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
uint8_t is_pf;
+ uint8_t phyport;
+ uint8_t panel_id;
uint8_t msg_chan_init;
};
+uint16_t vport_to_vfid(union VPORT v);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 3ac4c8d796..e243f97703 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -18,8 +18,6 @@
#define REPS_INFO_FLAG_USABLE 0x00
#define BAR_SEQID_NUM_MAX 256
-#define ZXDH_BAR0_INDEX 0
-
#define PCIEID_IS_PF_MASK (0x0800)
#define PCIEID_PF_IDX_MASK (0x0700)
#define PCIEID_VF_IDX_MASK (0x00ff)
@@ -43,7 +41,6 @@
#define FW_SHRD_OFFSET (0x5000)
#define FW_SHRD_INNER_HW_LABEL_PAT (0x800)
#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT)
-#define ZXDH_CTRLCH_OFFSET (0x2000)
#define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET)
#define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET)
#define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET)
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index 06be0f18c8..7379f57d17 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -13,6 +13,9 @@ extern "C" {
#include <ethdev_driver.h>
+#define ZXDH_BAR0_INDEX 0
+
+#define ZXDH_CTRLCH_OFFSET (0x2000)
#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
#define BAR_MSG_POLLING_SPAN 100
@@ -103,6 +106,27 @@ enum bar_module_id {
BAR_MSG_MODULE_NUM = 100,
};
+enum RES_TBL_FILED {
+ TBL_FIELD_PCIEID = 0,
+ TBL_FIELD_BDF = 1,
+ TBL_FIELD_MSGCH = 2,
+ TBL_FIELD_DATACH = 3,
+ TBL_FIELD_VPORT = 4,
+ TBL_FIELD_PNLID = 5,
+ TBL_FIELD_PHYPORT = 6,
+ TBL_FIELD_SERDES_NUM = 7,
+ TBL_FIELD_NP_PORT = 8,
+ TBL_FIELD_SPEED = 9,
+ TBL_FIELD_HASHID = 10,
+ TBL_FIELD_NON,
+};
+
+enum TBL_MSG_TYPE {
+ TBL_TYPE_READ,
+ TBL_TYPE_WRITE,
+ TBL_TYPE_NON,
+};
+
struct msix_para {
uint16_t pcie_id;
uint16_t vector_risc;
--
2.27.0
* [PATCH v6 7/9] net/zxdh: add configure zxdh intr implementation
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
` (3 preceding siblings ...)
2024-10-16 8:18 ` [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
@ 2024-10-16 8:18 ` Junlong Wang
2024-10-16 8:18 ` [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
6 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 24394 bytes --]
Configure zxdh interrupts, including risc and dtb interrupts, and add interrupt release.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 301 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 8 +
drivers/net/zxdh/zxdh_msg.c | 187 ++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 12 ++
drivers/net/zxdh/zxdh_pci.c | 62 +++++++
drivers/net/zxdh/zxdh_pci.h | 12 ++
6 files changed, 582 insertions(+)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index e282012afb..fc141712aa 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -25,6 +25,302 @@ uint16_t vport_to_vfid(union VPORT v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR);
+ }
+}
+
+static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (rte_intr_ack(dev->intr_handle) < 0)
+ return -1;
+
+ hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev));
+
+ return 0;
+}
+
+static void zxdh_devconf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+
+ if (zxdh_intr_unmask(dev) < 0)
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_fromriscv_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = 0;
+
+ virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler");
+ zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_riscvf_intr_handler");
+ zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_frompfvf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = 0;
+
+ virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET);
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler");
+ zxdh_bar_irq_recv(MSG_CHAN_END_VF, MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler");
+ zxdh_bar_irq_recv(MSG_CHAN_END_PF, MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+static void zxdh_intr_cb_reg(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ /* register callback to update dev config intr */
+ rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Register risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev);
+
+ tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ /* unregister callback to update dev config intr */
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Unregister risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev);
+ tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static int32_t zxdh_intr_disable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled)
+ return 0;
+
+ zxdh_intr_cb_unreg(dev);
+ if (rte_intr_disable(dev->intr_handle) < 0)
+ return -1;
+
+ hw->intr_enabled = 0;
+ return 0;
+}
+
+static int32_t zxdh_intr_enable(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled) {
+ zxdh_intr_cb_reg(dev);
+ ret = rte_intr_enable(dev->intr_handle);
+ if (unlikely(ret))
+ PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name);
+
+ hw->intr_enabled = 1;
+ }
+ return ret;
+}
+
+static int32_t zxdh_intr_release(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR);
+
+ zxdh_queues_unbind_intr(dev);
+ zxdh_intr_disable(dev);
+
+ rte_intr_efd_disable(dev->intr_handle);
+ rte_intr_vec_list_free(dev->intr_handle);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return 0;
+}
+
+static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint8_t i;
+
+ if (!hw->risc_intr) {
+ PMD_INIT_LOG(ERR, " to allocate risc_intr");
+ hw->risc_intr = rte_zmalloc("risc_intr",
+ ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0);
+ if (hw->risc_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate risc_intr");
+ return -ENOMEM;
+ }
+ }
+
+ for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) {
+ if (dev->intr_handle->efds[i] < 0) {
+ PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ return -1;
+ }
+
+ struct rte_intr_handle *intr_handle = hw->risc_intr + i;
+
+ intr_handle->fd = dev->intr_handle->efds[i];
+ intr_handle->type = dev->intr_handle->type;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->dtb_intr) {
+ hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0);
+ if (hw->dtb_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr");
+ return -ENOMEM;
+ }
+ }
+
+ if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) {
+ PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1);
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return -1;
+ }
+ hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1];
+ hw->dtb_intr->type = dev->intr_handle->type;
+ return 0;
+}
+
+static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+ uint16_t vec;
+
+ if (!dev->data->dev_conf.intr_conf.rxq) {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ i * 2, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ } else {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d",
+ i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec);
+ }
+ }
+ /* mask all txq intr */
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ return 0;
+}
+
+static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t ret = 0;
+
+ if (!rte_intr_cap_multiple(dev->intr_handle)) {
+ PMD_INIT_LOG(ERR, "Multiple intr vector not supported");
+ return -ENOTSUP;
+ }
+ zxdh_intr_release(dev);
+ uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM;
+
+ if (dev->data->dev_conf.intr_conf.rxq)
+ nb_efd += dev->data->nb_rx_queues;
+
+ if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) {
+ PMD_INIT_LOG(ERR, "Fail to create eventfd");
+ return -1;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM);
+ return -ENOMEM;
+ }
+ PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size);
+ if (zxdh_setup_risc_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ if (zxdh_setup_dtb_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_queues_bind_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_intr_enable(dev) < 0) {
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ return 0;
+
+free_intr_vec:
+ zxdh_intr_release(dev);
+ return ret;
+}
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -138,9 +434,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret != 0)
goto err_zxdh_init;
+ ret = zxdh_configure_intr(eth_dev);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
+ zxdh_intr_release(eth_dev);
zxdh_bar_msg_chan_exit();
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
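
A note on the binding above: each logical queue pair occupies two virtqueues (even index for Rx, odd for Tx), only Rx queues get dedicated MSI-X vectors, and the non-queue vectors come first. A minimal sketch of the mapping, derived from the macro values added to zxdh_msg.h below:

	/* Sketch only: vector 0 is device config, vectors 1..3 carry
	 * channel messages, vector 4 is the DTB vector, and queue
	 * vectors start at ZXDH_QUEUE_INTR_VEC_BASE (5).
	 */
	static inline uint16_t zxdh_rxq_vec(uint16_t rxq_id)
	{
		return ZXDH_QUEUE_INTR_VEC_BASE + rxq_id; /* rxq i -> vq 2*i */
	}

Tx queues stay on ZXDH_MSI_NO_VECTOR, so transmit completions are handled purely by polling.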
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 2351393009..7c5f5940cb 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -11,6 +11,10 @@ extern "C" {
#include <rte_ether.h>
#include "ethdev_driver.h"
+#include <rte_interrupts.h>
+#include <eal_interrupts.h>
+
+#include "zxdh_queue.h"
/* ZXDH PCI vendor/device ID. */
#define PCI_VENDOR_ID_ZTE 0x1cf2
@@ -44,6 +48,9 @@ struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ struct rte_intr_handle *risc_intr;
+ struct rte_intr_handle *dtb_intr;
+ struct virtqueue **vqs;
union VPORT vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -60,6 +67,7 @@ struct zxdh_hw {
uint8_t *isr;
uint8_t weak_barriers;
+ uint8_t intr_enabled;
uint8_t use_msix;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index e243f97703..098e0a74ed 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -91,6 +91,12 @@
#define BAR_SUBCHAN_INDEX_SEND 0
#define BAR_SUBCHAN_INDEX_RECV 1
+#define BAR_CHAN_MSG_SYNC 0
+#define BAR_CHAN_MSG_NO_EMEC 0
+#define BAR_CHAN_MSG_EMEC 1
+#define BAR_CHAN_MSG_NO_ACK 0
+#define BAR_CHAN_MSG_ACK 1
+
uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = {
{BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND},
{BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV},
@@ -130,6 +136,36 @@ struct seqid_ring {
};
struct seqid_ring g_seqid_ring = {0};
+static inline const char *module_id_name(int val)
+{
+ switch (val) {
+ case BAR_MODULE_DBG: return "BAR_MODULE_DBG";
+ case BAR_MODULE_TBL: return "BAR_MODULE_TBL";
+ case BAR_MODULE_MISX: return "BAR_MODULE_MISX";
+ case BAR_MODULE_SDA: return "BAR_MODULE_SDA";
+ case BAR_MODULE_RDMA: return "BAR_MODULE_RDMA";
+ case BAR_MODULE_DEMO: return "BAR_MODULE_DEMO";
+ case BAR_MODULE_SMMU: return "BAR_MODULE_SMMU";
+ case BAR_MODULE_MAC: return "BAR_MODULE_MAC";
+ case BAR_MODULE_VDPA: return "BAR_MODULE_VDPA";
+ case BAR_MODULE_VQM: return "BAR_MODULE_VQM";
+ case BAR_MODULE_NP: return "BAR_MODULE_NP";
+ case BAR_MODULE_VPORT: return "BAR_MODULE_VPORT";
+ case BAR_MODULE_BDF: return "BAR_MODULE_BDF";
+ case BAR_MODULE_RISC_READY: return "BAR_MODULE_RISC_READY";
+ case BAR_MODULE_REVERSE: return "BAR_MODULE_REVERSE";
+ case BAR_MDOULE_NVME: return "BAR_MDOULE_NVME";
+ case BAR_MDOULE_NPSDK: return "BAR_MDOULE_NPSDK";
+ case BAR_MODULE_NP_TODO: return "BAR_MODULE_NP_TODO";
+ case MODULE_BAR_MSG_TO_PF: return "MODULE_BAR_MSG_TO_PF";
+ case MODULE_BAR_MSG_TO_VF: return "MODULE_BAR_MSG_TO_VF";
+ case MODULE_FLASH: return "MODULE_FLASH";
+ case BAR_MODULE_OFFSET_GET: return "BAR_MODULE_OFFSET_GET";
+ case BAR_EVENT_OVS_WITH_VCB: return "BAR_EVENT_OVS_WITH_VCB";
+ default: return "NA";
+ }
+}
+
static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
{
uint16_t lock_id = 0;
@@ -797,3 +833,154 @@ int zxdh_bar_msg_chan_exit(void)
g_dev_stat.is_res_init = false;
return BAR_MSG_OK;
}
+
+static uint64_t zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+
+ return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+}
+
+static void zxdh_bar_msg_ack_async_msg_proc(struct bar_msg_header *msg_header,
+ uint8_t *receiver_buff)
+{
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id];
+
+ if (reps_info->flag != REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id);
+ return;
+ }
+ if (msg_header->len > reps_info->buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u",
+ reps_info->buffer_len, msg_header->len + 4);
+ goto free_id;
+ }
+ uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr;
+
+ rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len);
+ *(uint16_t *)(reps_buffer + 1) = msg_header->len;
+ *(uint8_t *)(reps_info->reps_addr) = REPS_HEADER_REPLYED;
+
+free_id:
+ zxdh_bar_chan_msgid_free(msg_header->msg_id);
+}
+
+zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[BAR_MSG_MODULE_NUM];
+static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr, struct bar_msg_header *msg_header,
+ uint8_t *receiver_buff, void *dev)
+{
+ uint8_t *reps_buffer = rte_malloc(NULL, BAR_MSG_PAYLOAD_MAX_LEN, 0);
+
+ if (reps_buffer == NULL)
+ return;
+
+ zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id];
+ uint16_t reps_len = 0;
+
+ recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev);
+ msg_header->ack = BAR_CHAN_MSG_ACK;
+ msg_header->len = reps_len;
+ zxdh_bar_chan_msg_header_set(reply_addr, msg_header);
+ zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len);
+ zxdh_bar_chan_msg_valid_set(reply_addr, BAR_MSG_CHAN_USABLE);
+ rte_free(reps_buffer);
+}
+
+static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type,
+ uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+ uint64_t recv_rep_addr;
+
+ if (sync == BAR_CHAN_MSG_SYNC)
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+ else
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id);
+
+ return recv_rep_addr;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_check(struct bar_msg_header *msg_header)
+{
+ if (msg_header->valid != BAR_MSG_CHAN_USED) {
+ PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used.");
+ return BAR_MSG_ERR_MODULE;
+ }
+ uint8_t module_id = msg_header->module_id;
+
+ if (module_id >= (uint8_t)BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id);
+ return BAR_MSG_ERR_MODULE;
+ }
+ uint16_t len = msg_header->len;
+
+ if (len > BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len);
+ return BAR_MSG_ERR_LEN;
+ }
+ if (msg_recv_func_tbl[msg_header->module_id] == NULL) {
+ PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register",
+ module_id_name(module_id), module_id);
+ return BAR_MSG_ERR_MODULE_NOEXIST;
+ }
+ return BAR_MSG_OK;
+}
+
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev)
+{
+ struct bar_msg_header msg_header = {0};
+ uint64_t recv_addr = 0;
+ uint16_t ret = 0;
+
+ recv_addr = zxdh_recv_addr_get(src, dst, virt_addr);
+ if (recv_addr == 0) {
+ PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst);
+ return -1;
+ }
+
+ zxdh_bar_chan_msg_header_get(recv_addr, &msg_header);
+ ret = zxdh_bar_chan_msg_header_check(&msg_header);
+
+ if (ret != BAR_MSG_OK) {
+ PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret);
+ return -1;
+ }
+
+ uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0);
+ if (recved_msg == NULL) {
+ PMD_MSG_LOG(ERR, "malloc temp buff failed.");
+ return -1;
+ }
+ zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len);
+
+ uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr);
+
+ if (msg_header.sync == BAR_CHAN_MSG_SYNC) {
+ zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_valid_set(recv_addr, BAR_MSG_CHAN_USABLE);
+ if (msg_header.ack == BAR_CHAN_MSG_ACK)
+ zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg);
+ /* fall through to free the temporary buffer */
+
+exit:
+ rte_free(recved_msg);
+ return BAR_MSG_OK;
+}
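
For context on the dispatch above: msg_recv_func_tbl is indexed by module ID and holds one zxdh_bar_chan_msg_recv_callback per module (the typedef is added to zxdh_msg.h below). A hypothetical handler shape, assuming a module fills its own table slot (the handler name is illustrative only):

	static int dbg_msg_recv(void *pay_load, uint16_t len,
			void *reps_buffer, uint16_t *reps_len, void *dev)
	{
		/* echo the request back as the reply */
		memcpy(reps_buffer, pay_load, len);
		*reps_len = len;
		return 0;
	}
	/* registration: msg_recv_func_tbl[BAR_MODULE_DBG] = dbg_msg_recv; */

For sync messages, the reply written here is copied into the reply sub-channel by zxdh_bar_msg_sync_msg_proc(); reps_len must not exceed BAR_MSG_PAYLOAD_MAX_LEN.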
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index 7379f57d17..6c7bed86f1 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -16,8 +16,16 @@ extern "C" {
#define ZXDH_BAR0_INDEX 0
#define ZXDH_CTRLCH_OFFSET (0x2000)
+#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000)
#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3
+#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM)
+#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1
+#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1)
+#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) /* 5 */
+#define ZXDH_QUEUE_INTR_VEC_NUM 256
+
#define BAR_MSG_POLLING_SPAN 100
#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN)
#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN)
@@ -201,6 +209,9 @@ struct bar_msg_header {
uint16_t dst_pcieid; /* used in PF-->VF */
};
+typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len,
+ void *reps_buffer, uint16_t *reps_len, void *dev);
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
@@ -208,6 +219,7 @@ int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
struct zxdh_msg_recviver_mem *result);
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev);
#ifdef __cplusplus
}
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index e23dbcbef5..c63b7eee44 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -97,6 +97,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
rte_write32(features >> 32, &hw->common_cfg->guest_feature);
}
+static uint16_t zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec)
+{
+ rte_write16(vec, &hw->common_cfg->msix_config);
+ return rte_read16(&hw->common_cfg->msix_config);
+}
+
+static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+ rte_write16(vec, &hw->common_cfg->queue_msix_vector);
+ return rte_read16(&hw->common_cfg->queue_msix_vector);
+}
+
+static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
+{
+ return rte_read8(hw->isr);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -104,8 +122,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_status = zxdh_set_status,
.get_features = zxdh_get_features,
.set_features = zxdh_set_features,
+ .set_queue_irq = zxdh_set_queue_irq,
+ .set_config_irq = zxdh_set_config_irq,
+ .get_isr = zxdh_get_isr,
};
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw)
+{
+ return VTPCI_OPS(hw)->get_isr(hw);
+}
+
uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw)
{
return VTPCI_OPS(hw)->get_features(hw);
@@ -288,3 +314,39 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
return 0;
}
+
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev)
+{
+ uint8_t pos = 0;
+ int32_t ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret);
+ return ZXDH_MSIX_NONE;
+ }
+ while (pos) {
+ uint8_t cap[2] = {0};
+
+ ret = rte_pci_read_config(dev, cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap[0] == PCI_CAP_ID_MSIX) {
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap));
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR,
+ "failed to read pci cap at pos: %x ret %d", pos + 2, ret);
+ break;
+ }
+ if (flags & PCI_MSIX_ENABLE)
+ return ZXDH_MSIX_ENABLED;
+ else
+ return ZXDH_MSIX_DISABLED;
+ }
+ pos = cap[1];
+ }
+ return ZXDH_MSIX_NONE;
+}
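
The walk above follows the standard PCI capability list layout: byte 0 of each capability holds the ID, byte 1 the config-space offset of the next capability, and for MSI-X the 16-bit Message Control word follows, with bit 15 reporting the enable state. A sketch of the header being parsed:

	/* Standard MSI-X capability header read by the loop above. */
	struct pci_msix_cap_hdr {
		uint8_t  cap_id;   /* PCI_CAP_ID_MSIX == 0x11          */
		uint8_t  next;     /* offset of the next capability    */
		uint16_t msg_ctrl; /* bit 15: PCI_MSIX_ENABLE (0x8000) */
	};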
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index deda73a65a..677dadd5c8 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -22,6 +22,13 @@ enum zxdh_msix_status {
ZXDH_MSIX_ENABLED = 2
};
+/* The bit of the ISR which indicates a device has an interrupt. */
+#define ZXDH_PCI_ISR_INTR 0x1
+/* The bit of the ISR which indicates a device configuration change. */
+#define ZXDH_PCI_ISR_CONFIG 0x2
+/* Vector value used to disable MSI for queue. */
+#define ZXDH_MSI_NO_VECTOR 0x7F
+
#define PCI_CAPABILITY_LIST 0x34
#define PCI_CAP_ID_VNDR 0x09
#define PCI_CAP_ID_MSIX 0x11
@@ -124,6 +131,9 @@ struct zxdh_pci_ops {
uint64_t (*get_features)(struct zxdh_hw *hw);
void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+ uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec);
+ uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
+ uint8_t (*get_isr)(struct zxdh_hw *hw);
};
struct zxdh_hw_internal {
@@ -143,6 +153,8 @@ int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw);
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev);
#ifdef __cplusplus
}
--
2.27.0
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
` (4 preceding siblings ...)
2024-10-16 8:18 ` [PATCH v6 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
@ 2024-10-16 8:18 ` Junlong Wang
2024-10-21 8:54 ` Thomas Monjalon
2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
6 siblings, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 3892 bytes --]
Add support for getting zxdh device information.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 62 +++++++++++++++++++++++++++++++++-
1 file changed, 61 insertions(+), 1 deletion(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index fc141712aa..65b649a156 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -12,6 +12,9 @@
#include "zxdh_msg.h"
#include "zxdh_common.h"
+#define ZXDH_MIN_RX_BUFSIZE 64
+#define ZXDH_MAX_RX_PKTLEN 14000U
+
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
uint16_t vport_to_vfid(union VPORT v)
@@ -25,6 +28,58 @@ uint16_t vport_to_vfid(union VPORT v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static uint32_t zxdh_dev_speed_capa_get(uint32_t speed)
+{
+ switch (speed) {
+ case RTE_ETH_SPEED_NUM_10G: return RTE_ETH_LINK_SPEED_10G;
+ case RTE_ETH_SPEED_NUM_20G: return RTE_ETH_LINK_SPEED_20G;
+ case RTE_ETH_SPEED_NUM_25G: return RTE_ETH_LINK_SPEED_25G;
+ case RTE_ETH_SPEED_NUM_40G: return RTE_ETH_LINK_SPEED_40G;
+ case RTE_ETH_SPEED_NUM_50G: return RTE_ETH_LINK_SPEED_50G;
+ case RTE_ETH_SPEED_NUM_56G: return RTE_ETH_LINK_SPEED_56G;
+ case RTE_ETH_SPEED_NUM_100G: return RTE_ETH_LINK_SPEED_100G;
+ case RTE_ETH_SPEED_NUM_200G: return RTE_ETH_LINK_SPEED_200G;
+ default: return 0;
+ }
+}
+
+static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ dev_info->speed_capa = zxdh_dev_speed_capa_get(hw->speed);
+ dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX);
+ dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX);
+ dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE;
+ dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN;
+ dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS;
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER);
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM);
+
+ return 0;
+}
+
static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
{
struct zxdh_hw *hw = dev->data->dev_private;
@@ -321,6 +376,11 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+/* dev_ops for zxdh, bare necessities for basic operation */
+static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_infos_get = zxdh_dev_infos_get,
+};
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -377,7 +437,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
struct zxdh_hw *hw = eth_dev->data->dev_private;
int ret = 0;
- eth_dev->dev_ops = NULL;
+ eth_dev->dev_ops = &zxdh_eth_dev_ops;
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
--
2.27.0
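
Once the ops table points at zxdh_dev_infos_get(), applications see these limits through the standard ethdev call; a minimal usage sketch (port_id is assumed to be an already-probed zxdh port):

	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) == 0)
		printf("rxq max %u, txq max %u, max pktlen %u\n",
		       info.max_rx_queues, info.max_tx_queues,
		       info.max_rx_pktlen);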
[-- Attachment #1.1.2: Type: text/html , Size: 8397 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
` (5 preceding siblings ...)
2024-10-16 8:18 ` [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
@ 2024-10-16 8:18 ` Junlong Wang
2024-10-18 5:18 ` [v6,9/9] " Junlong Wang
2024-10-19 11:17 ` Junlong Wang
6 siblings, 2 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 39355 bytes --]
Provide zxdh dev configure ops for queue
check, reset, resource allocation, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_common.c | 119 +++++++++
drivers/net/zxdh/zxdh_common.h | 12 +
drivers/net/zxdh/zxdh_ethdev.c | 457 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 20 +-
drivers/net/zxdh/zxdh_pci.c | 97 +++++++
drivers/net/zxdh/zxdh_pci.h | 31 +++
drivers/net/zxdh/zxdh_queue.c | 131 ++++++++++
drivers/net/zxdh/zxdh_queue.h | 176 +++++++++++++
drivers/net/zxdh/zxdh_rxtx.h | 4 +
10 files changed, 1045 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_queue.c
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index a16db47f89..b96aa5a27e 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -18,4 +18,5 @@ sources = files(
'zxdh_pci.c',
'zxdh_msg.c',
'zxdh_common.c',
+ 'zxdh_queue.c',
)
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
index 61993980c3..ed4396a6af 100644
--- a/drivers/net/zxdh/zxdh_common.c
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -20,6 +20,7 @@
#define ZXDH_COMMON_TABLE_WRITE 1
#define ZXDH_COMMON_FIELD_PHYPORT 6
+#define ZXDH_COMMON_FIELD_DATACH 3
#define RSC_TBL_CONTENT_LEN_MAX (257 * 2)
@@ -247,3 +248,121 @@ int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid)
int32_t ret = zxdh_get_res_panel_id(¶m, pannelid);
return ret;
}
+
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ uint32_t val = *((volatile uint32_t *)(baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ *((volatile uint32_t *)(baseaddr + reg)) = val;
+}
+
+int32_t zxdh_acquire_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ /* the lock is unavailable if the enable bit reads back clear */
+ if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
+ return -1;
+
+ return 0;
+}
+
+int32_t zxdh_release_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
+ var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
+ zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
+ return 0;
+ }
+
+ return -1;
+}
+
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg)
+{
+ uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val)
+{
+ *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val;
+}
+
+static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_pci_bar_msg desc;
+ struct zxdh_msg_recviver_mem msg_rsp;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+ if (buff_size != 0 && buff == NULL) {
+ PMD_DRV_LOG(ERR, "Buff is invalid");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE,
+ field, buff, buff_size);
+
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0);
+ if (ret != 0)
+ goto free_rsp_data;
+
+free_rsp_data:
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_datach_set(struct rte_eth_dev *dev)
+{
+ /* payload: queue_num(2byte) + pch1(2byte) + ** + pchn */
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t buff_size = (hw->queue_num + 1) * 2;
+ void *buff = rte_zmalloc(NULL, buff_size, 0);
+
+ if (unlikely(buff == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate buff");
+ return -ENOMEM;
+ }
+ memset(buff, 0, buff_size);
+ uint16_t *pdata = (uint16_t *)buff;
+ *pdata++ = hw->queue_num;
+ uint16_t i;
+
+ for (i = 0; i < hw->queue_num; i++)
+ *(pdata + i) = hw->channel_context[i].ph_chno;
+
+ int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH,
+ (void *)buff, buff_size);
+
+ if (ret != 0)
+ PMD_DRV_LOG(ERR, "Failed to setup data channel of common table");
+
+ rte_free(buff);
+ return ret;
+}
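
The payload assembled by zxdh_datach_set() is a packed array of 16-bit words: the channel count followed by one physical channel number per logical queue. An illustrative view of the layout (the struct name is not part of the patch):

	struct datach_payload {
		uint16_t queue_num; /* hw->queue_num                       */
		uint16_t pch[];     /* channel_context[i].ph_chno, i < num */
	};
	/* buff_size == (queue_num + 1) * sizeof(uint16_t), as computed above */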
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
index ec7011e820..24bbc7fee0 100644
--- a/drivers/net/zxdh/zxdh_common.h
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -14,6 +14,10 @@ extern "C" {
#include "zxdh_ethdev.h"
+#define ZXDH_VF_LOCK_REG 0x90
+#define ZXDH_VF_LOCK_ENABLE_MASK 0x1
+#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10
+
struct zxdh_res_para {
uint64_t virt_addr;
uint16_t pcie_id;
@@ -23,6 +27,14 @@ struct zxdh_res_para {
int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg);
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val);
+int32_t zxdh_release_lock(struct zxdh_hw *hw);
+int32_t zxdh_acquire_lock(struct zxdh_hw *hw);
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg);
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val);
+int32_t zxdh_datach_set(struct rte_eth_dev *dev);
+
#ifdef __cplusplus
}
#endif
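
The lock helpers declared above are non-blocking, so callers in this series wrap them in a bounded retry loop; a sketch of that pattern as used by zxdh_get_available_channel() and zxdh_release_channel():

	uint16_t timeout = 0;

	while (timeout++ < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
		rte_delay_us_block(1000); /* back off 1 ms between attempts */
		if (zxdh_acquire_lock(hw) == 0)
			break; /* lock held; call zxdh_release_lock() later */
	}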
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 65b649a156..a1997facdb 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -11,6 +11,7 @@
#include "zxdh_pci.h"
#include "zxdh_msg.h"
#include "zxdh_common.h"
+#include "zxdh_queue.h"
#define ZXDH_MIN_RX_BUFSIZE 64
#define ZXDH_MAX_RX_PKTLEN 14000U
@@ -376,8 +377,464 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+static int32_t zxdh_features_update(struct zxdh_hw *hw,
+ const struct rte_eth_rxmode *rxmode,
+ const struct rte_eth_txmode *txmode)
+{
+ uint64_t rx_offloads = rxmode->offloads;
+ uint64_t tx_offloads = txmode->offloads;
+ uint64_t req_features = hw->guest_features;
+
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM);
+
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) |
+ (1ULL << ZXDH_NET_F_GUEST_TSO6);
+
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_CSUM);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) |
+ (1ULL << ZXDH_NET_F_HOST_TSO6);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_UFO);
+
+ req_features = req_features & hw->host_features;
+ hw->guest_features = req_features;
+
+ VTPCI_OPS(hw)->set_features(hw, req_features);
+
+ if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) {
+ PMD_DRV_LOG(ERR, "rx checksum not available on this host");
+ return -ENOTSUP;
+ }
+
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
+ (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) {
+ PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host");
+ return -ENOTSUP;
+ }
+ return 0;
+}
+
+static bool rx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6);
+}
+
+static bool tx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO);
+}
+
+static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t i = 0;
+
+ const char *type = NULL;
+ struct virtqueue *vq = NULL;
+ struct rte_mbuf *buf = NULL;
+ int32_t queue_type = 0;
+
+ if (hw->vqs == NULL)
+ return;
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (!vq)
+ continue;
+
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == VTNET_RQ)
+ type = "rxq";
+ else if (queue_type == VTNET_TQ)
+ type = "txq";
+ else
+ continue;
+ PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i);
+
+ while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL)
+ rte_pktmbuf_free(buf);
+ }
+}
+
+static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t base = (queue_type == VTNET_RQ) ? 0 : 1;
+ uint16_t i = 0;
+ uint16_t j = 0;
+ uint16_t done = 0;
+ uint16_t timeout = 0;
+
+ while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ rte_delay_us_block(1000);
+ /* acquire hw lock */
+ if (zxdh_acquire_lock(hw) < 0) {
+ PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout);
+ continue;
+ }
+ /* Iterate COI table and find free channel */
+ for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) {
+ uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t));
+ uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+
+ for (j = base; j < 32; j += 2) {
+ /* Got the available channel & update COI table */
+ if ((var & (1 << j)) == 0) {
+ var |= (1 << j);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+ done = 1;
+ break;
+ }
+ }
+ if (done)
+ break;
+ }
+ break;
+ }
+ if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ zxdh_release_lock(hw);
+ /* check for no channel condition */
+ if (done != 1) {
+ PMD_INIT_LOG(ERR, "NO availd queues");
+ return -1;
+ }
+ /* return available channel ID */
+ return (i * 32) + j;
+}
+
+static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (hw->channel_context[lch].valid == 1) {
+ PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired Physics channel:%u",
+ lch, hw->channel_context[lch].ph_chno);
+ return hw->channel_context[lch].ph_chno;
+ }
+ int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch));
+
+ if (pch < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ hw->channel_context[lch].ph_chno = (uint16_t)pch;
+ hw->channel_context[lch].valid = 1;
+ PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch);
+ return 0;
+}
+
+static void zxdh_init_vring(struct virtqueue *vq)
+{
+ int32_t size = vq->vq_nentries;
+ uint8_t *ring_mem = vq->vq_ring_virt_mem;
+
+ memset(ring_mem, 0, vq->vq_ring_size);
+
+ vq->vq_used_cons_idx = 0;
+ vq->vq_desc_head_idx = 0;
+ vq->vq_avail_idx = 0;
+ vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ virtqueue_disable_intr(vq);
+}
+
+static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx)
+{
+ char vq_name[VIRTQUEUE_MAX_NAME_SZ] = {0};
+ char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ] = {0};
+ const struct rte_memzone *mz = NULL;
+ const struct rte_memzone *hdr_mz = NULL;
+ uint32_t size = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct virtnet_rx *rxvq = NULL;
+ struct virtnet_tx *txvq = NULL;
+ struct virtqueue *vq = NULL;
+ size_t sz_hdr_mz = 0;
+ void *sw_ring = NULL;
+ int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx);
+ int32_t numa_node = dev->device->numa_node;
+ uint16_t vtpci_phy_qidx = 0;
+ uint32_t vq_size = 0;
+ int32_t ret = 0;
+
+ if (hw->channel_context[vtpci_logic_qidx].valid == 0) {
+ PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx);
+ return -EINVAL;
+ }
+ vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno;
+
+ PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx :%d setting up physical queue: %u on NUMA node %d",
+ vtpci_logic_qidx, vtpci_phy_qidx, numa_node);
+
+ vq_size = ZXDH_QUEUE_DEPTH;
+
+ if (VTPCI_OPS(hw)->set_queue_num != NULL)
+ VTPCI_OPS(hw)->set_queue_num(hw, vtpci_phy_qidx, vq_size);
+
+ snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx);
+
+ size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct vq_desc_extra),
+ RTE_CACHE_LINE_SIZE);
+ if (queue_type == VTNET_TQ) {
+ /*
+ * For each xmit packet, allocate a zxdh_net_hdr
+ * and indirect ring elements
+ */
+ sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region);
+ }
+
+ vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node);
+ if (vq == NULL) {
+ PMD_INIT_LOG(ERR, "can not allocate vq");
+ return -ENOMEM;
+ }
+ hw->vqs[vtpci_logic_qidx] = vq;
+
+ vq->hw = hw;
+ vq->vq_queue_index = vtpci_phy_qidx;
+ vq->vq_nentries = vq_size;
+
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
+ vq->vq_packed.event_flags_shadow = 0;
+ if (queue_type == VTNET_RQ)
+ vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
+
+ /*
+ * Reserve a memzone for vring elements
+ */
+ size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN);
+ vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN);
+ PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
+
+ mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ ZXDH_PCI_VRING_ALIGN);
+ if (mz == NULL) {
+ if (rte_errno == EEXIST)
+ mz = rte_memzone_lookup(vq_name);
+ if (mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+
+ memset(mz->addr, 0, mz->len);
+
+ vq->vq_ring_mem = mz->iova;
+ vq->vq_ring_virt_mem = mz->addr;
+
+ zxdh_init_vring(vq);
+
+ if (sz_hdr_mz) {
+ snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
+ dev->data->port_id, vtpci_phy_qidx);
+ hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ RTE_CACHE_LINE_SIZE);
+ if (hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ hdr_mz = rte_memzone_lookup(vq_hdr_name);
+ if (hdr_mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+ }
+
+ if (queue_type == VTNET_RQ) {
+ size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]);
+
+ sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node);
+ if (!sw_ring) {
+ PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+
+ vq->sw_ring = sw_ring;
+ rxvq = &vq->rxq;
+ rxvq->vq = vq;
+ rxvq->port_id = dev->data->port_id;
+ rxvq->mz = mz;
+ } else {
+ txvq = &vq->txq;
+ txvq->vq = vq;
+ txvq->port_id = dev->data->port_id;
+ txvq->mz = mz;
+ txvq->virtio_net_hdr_mz = hdr_mz;
+ txvq->virtio_net_hdr_mem = hdr_mz->iova;
+ }
+
+ vq->offset = offsetof(struct rte_mbuf, buf_iova);
+ if (queue_type == VTNET_TQ) {
+ struct zxdh_tx_region *txr = hdr_mz->addr;
+ uint32_t i;
+
+ memset(txr, 0, vq_size * sizeof(*txr));
+ for (i = 0; i < vq_size; i++) {
+ /* first indirect descriptor is always the tx header */
+ struct vring_packed_desc *start_dp = txr[i].tx_packed_indir;
+
+ vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir));
+ start_dp->addr = txvq->virtio_net_hdr_mem + i * sizeof(*txr) +
+ offsetof(struct zxdh_tx_region, tx_hdr);
+ /* length will be updated to actual pi hdr size when xmit pkt */
+ start_dp->len = 0;
+ }
+ }
+ if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
+ PMD_INIT_LOG(ERR, "setup_queue failed");
+ return -EINVAL;
+ }
+ return 0;
+fail_q_alloc:
+ hw->vqs[vtpci_logic_qidx] = NULL; /* avoid dangling pointer and later double free */
+ rte_free(sw_ring);
+ rte_memzone_free(hdr_mz);
+ rte_memzone_free(mz);
+ rte_free(vq);
+ return ret;
+}
+
+static int32_t zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq)
+{
+ uint16_t lch;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
+ if (!hw->vqs) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vqs");
+ return -ENOMEM;
+ }
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (zxdh_acquire_channel(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire the channels");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ if (zxdh_init_queue(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to alloc virtio queue");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int32_t zxdh_dev_configure(struct rte_eth_dev *dev)
+{
+ const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint32_t nr_vq = 0;
+ int32_t ret = 0;
+
+ if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+ return -EINVAL;
+ }
+ if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues,
+ ZXDH_QUEUES_NUM_MAX);
+ return -EINVAL;
+ }
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode);
+ return -EINVAL;
+ }
+
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode);
+ return -EINVAL;
+ }
+
+ ret = zxdh_features_update(hw, rxmode, txmode);
+ if (ret < 0)
+ return ret;
+
+ /* check if lsc interrupt feature is enabled */
+ if (dev->data->dev_conf.intr_conf.lsc) {
+ if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
+ PMD_DRV_LOG(ERR, "link status not supported by host");
+ return -ENOTSUP;
+ }
+ }
+
+ hw->has_tx_offload = tx_offload_enabled(hw);
+ hw->has_rx_offload = rx_offload_enabled(hw);
+
+ nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues;
+ if (nr_vq == hw->queue_num)
+ return 0;
+
+ PMD_DRV_LOG(DEBUG, "queue changed need reset ");
+ /* Reset the device although not necessary at startup */
+ zxdh_vtpci_reset(hw);
+
+ /* Tell the host we've noticed this device. */
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK);
+
+ /* Tell the host we know how to drive the device. */
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER);
+ /* The queues need to be released when reconfiguring */
+ if (hw->vqs != NULL) {
+ zxdh_dev_free_mbufs(dev);
+ zxdh_free_queues(dev);
+ }
+
+ hw->queue_num = nr_vq;
+ ret = zxdh_alloc_queues(dev, nr_vq);
+ if (ret < 0)
+ return ret;
+
+ ret = zxdh_datach_set(dev);
+ if (ret != 0)
+ return ret;
+
+ if (zxdh_configure_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to configure interrupt");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+
+ zxdh_vtpci_reinit_complete(hw);
+
+ return ret;
+}
+
/* dev_ops for zxdh, bare necessities for basic operation */
static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_configure = zxdh_dev_configure,
.dev_infos_get = zxdh_dev_infos_get,
};
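
Taken together, zxdh_dev_configure() performs a full reinit whenever the queue count changes; the ordering implemented above can be summarized as:

	/* Summary of the reconfigure sequence implemented above. */
	zxdh_vtpci_reset(hw);
	zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK);    /* device noticed  */
	zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER); /* driver attached */
	/* free stale queues, then: */
	zxdh_alloc_queues(dev, nr_vq);  /* acquire channels, build rings  */
	zxdh_datach_set(dev);           /* publish the data-channel table */
	zxdh_configure_intr(dev);       /* bind MSI-X vectors             */
	zxdh_vtpci_reinit_complete(hw); /* status -> DRIVER_OK            */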
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 7c5f5940cb..d547785e2a 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -14,10 +14,8 @@ extern "C" {
#include <rte_interrupts.h>
#include <eal_interrupts.h>
-#include "zxdh_queue.h"
-
/* ZXDH PCI vendor/device ID. */
-#define PCI_VENDOR_ID_ZTE 0x1cf2
+#define PCI_VENDOR_ID_ZTE 0x1cf2
#define ZXDH_E310_PF_DEVICEID 0x8061
#define ZXDH_E310_VF_DEVICEID 0x8062
@@ -30,9 +28,16 @@ extern "C" {
#define ZXDH_RX_QUEUES_MAX 128U
#define ZXDH_TX_QUEUES_MAX 128U
+#define ZXDH_QUEUE_DEPTH 1024
+#define ZXDH_QUEUES_BASE 0
+#define ZXDH_TOTAL_QUEUES_NUM 4096
+#define ZXDH_QUEUES_NUM_MAX 256
+#define ZXDH_QUERES_SHARE_BASE (0x5000)
#define ZXDH_NUM_BARS 2
+#define ZXDH_MBUF_BURST_SZ 64
+
union VPORT {
uint16_t vport;
struct {
@@ -44,6 +49,11 @@ union VPORT {
};
};
+struct chnl_context {
+ uint16_t valid;
+ uint16_t ph_chno;
+};
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
@@ -51,6 +61,7 @@ struct zxdh_hw {
struct rte_intr_handle *risc_intr;
struct rte_intr_handle *dtb_intr;
struct virtqueue **vqs;
+ struct chnl_context channel_context[ZXDH_QUEUES_NUM_MAX];
union VPORT vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -64,6 +75,7 @@ struct zxdh_hw {
uint16_t device_id;
uint16_t port_id;
uint16_t vfid;
+ uint16_t queue_num;
uint8_t *isr;
uint8_t weak_barriers;
@@ -75,6 +87,8 @@ struct zxdh_hw {
uint8_t phyport;
uint8_t panel_id;
uint8_t msg_chan_init;
+ uint8_t has_tx_offload;
+ uint8_t has_rx_offload;
};
uint16_t vport_to_vfid(union VPORT v);
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index c63b7eee44..b174e35325 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -115,6 +115,86 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
return rte_read8(hw->isr);
}
+static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ return rte_read16(&hw->common_cfg->queue_size);
+}
+
+static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ rte_write16(vq_size, &hw->common_cfg->queue_size);
+}
+
+static int32_t check_vq_phys_addr_ok(struct virtqueue *vq)
+{
+ if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) {
+ PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!");
+ return 0;
+ }
+ return 1;
+}
+
+static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
+{
+ rte_write32(val & ((1ULL << 32) - 1), lo);
+ rte_write32(val >> 32, hi);
+}
+
+static int32_t zxdh_setup_queue(struct zxdh_hw *hw, struct virtqueue *vq)
+{
+ uint64_t desc_addr = 0;
+ uint64_t avail_addr = 0;
+ uint64_t used_addr = 0;
+ uint16_t notify_off = 0;
+
+ if (!check_vq_phys_addr_ok(vq))
+ return -1;
+
+ desc_addr = vq->vq_ring_mem;
+ avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc);
+ if (vtpci_packed_queue(vq->hw)) {
+ used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct vring_packed_desc_event)),
+ ZXDH_PCI_VRING_ALIGN);
+ } else {
+ used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+ ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN);
+ }
+
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */
+ notify_off = 0;
+ vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
+ notify_off * hw->notify_off_multiplier);
+
+ rte_write16(1, &hw->common_cfg->queue_enable);
+
+ return 0;
+}
+
+static void zxdh_del_queue(struct zxdh_hw *hw, struct virtqueue *vq)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(0, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ rte_write16(0, &hw->common_cfg->queue_enable);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -125,6 +205,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_queue_irq = zxdh_set_queue_irq,
.set_config_irq = zxdh_set_config_irq,
.get_isr = zxdh_get_isr,
+ .get_queue_num = zxdh_get_queue_num,
+ .set_queue_num = zxdh_set_queue_num,
+ .setup_queue = zxdh_setup_queue,
+ .del_queue = zxdh_del_queue,
};
uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw)
@@ -151,6 +235,19 @@ void zxdh_vtpci_reset(struct zxdh_hw *hw)
PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
}
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw)
+{
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK);
+}
+
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ if (status != ZXDH_CONFIG_STATUS_RESET)
+ status |= VTPCI_OPS(hw)->get_status(hw);
+
+ VTPCI_OPS(hw)->set_status(hw, status);
+}
+
static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
{
uint8_t bar = cap->bar;
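
A worked example of the address programming in zxdh_setup_queue() above, for a packed ring of ZXDH_QUEUE_DEPTH (1024) entries, with a 16-byte vring_packed_desc and a 4-byte vring_packed_desc_event:

	/* desc_addr  = vq->vq_ring_mem (4 KiB aligned memzone)         */
	/* avail_addr = desc_addr + 1024 * 16      = desc_addr + 16384  */
	/* used_addr  = desc_addr + RTE_ALIGN_CEIL(16384 + 4, 4096)     */
	/*            = desc_addr + 20480                               */

so the driver-event and device-event areas both land on 4 KiB boundaries, matching the ZXDH_PCI_VRING_ALIGN used when reserving the memzone.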
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index 677dadd5c8..bb6714ecc0 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -12,7 +12,9 @@ extern "C" {
#include <stdint.h>
#include <stdbool.h>
+#include <rte_pci.h>
#include <bus_pci_driver.h>
+#include <ethdev_driver.h>
#include "zxdh_ethdev.h"
@@ -34,8 +36,21 @@ enum zxdh_msix_status {
#define PCI_CAP_ID_MSIX 0x11
#define PCI_MSIX_ENABLE 0x8000
+#define ZXDH_PCI_VRING_ALIGN 4096
+
+#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */
+#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */
+#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */
#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */
+#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */
+#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */
+#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */
+
+#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */
+#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */
+#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */
#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
@@ -60,6 +75,8 @@ enum zxdh_msix_status {
#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
#define ZXDH_CONFIG_STATUS_FAILED 0x80
+#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12
+
struct zxdh_net_config {
/* The config defining mac address (if ZXDH_NET_F_MAC) */
uint8_t mac[RTE_ETHER_ADDR_LEN];
@@ -122,6 +139,12 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_F_RING_PACKED);
+}
+
struct zxdh_pci_ops {
void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
@@ -134,6 +157,11 @@ struct zxdh_pci_ops {
uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec);
uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
uint8_t (*get_isr)(struct zxdh_hw *hw);
+ uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id);
+ void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size);
+
+ int32_t (*setup_queue)(struct zxdh_hw *hw, struct virtqueue *vq);
+ void (*del_queue)(struct zxdh_hw *hw, struct virtqueue *vq);
};
struct zxdh_hw_internal {
@@ -156,6 +184,9 @@ uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw);
enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev);
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw);
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c
new file mode 100644
index 0000000000..005b2a5578
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.c
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+
+#include "zxdh_queue.h"
+#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+#include "zxdh_common.h"
+#include "zxdh_msg.h"
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq)
+{
+ struct rte_mbuf *cookie = NULL;
+ int32_t idx = 0;
+
+ if (vq == NULL)
+ return NULL;
+
+ for (idx = 0; idx < vq->vq_nentries; idx++) {
+ cookie = vq->vq_descx[idx].cookie;
+ if (cookie != NULL) {
+ vq->vq_descx[idx].cookie = NULL;
+ return cookie;
+ }
+ }
+ return NULL;
+}
+
+static int32_t zxdh_release_channel(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t var = 0;
+ uint32_t addr = 0;
+ uint32_t widx = 0;
+ uint32_t bidx = 0;
+ uint16_t pch = 0;
+ uint16_t lch = 0;
+ uint16_t timeout = 0;
+
+ while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ if (zxdh_acquire_lock(hw) != 0) {
+ PMD_INIT_LOG(ERR,
+ "Could not acquire lock to release channel, timeout %d", timeout);
+ continue;
+ }
+ break;
+ }
+
+ if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "Acquire lock timeout");
+ return -1;
+ }
+
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (hw->channel_context[lch].valid == 0) {
+ PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to release", lch);
+ continue;
+ }
+
+ pch = hw->channel_context[lch].ph_chno;
+ widx = pch / 32;
+ bidx = pch % 32;
+
+ addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t));
+ var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+ var &= ~(1 << bidx);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+
+ hw->channel_context[lch].valid = 0;
+ hw->channel_context[lch].ph_chno = 0;
+ }
+
+ zxdh_release_lock(hw);
+
+ return 0;
+}
+
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx)
+{
+ if (vtpci_queue_idx % 2 == 0)
+ return VTNET_RQ;
+ else
+ return VTNET_TQ;
+}
+
+int32_t zxdh_free_queues(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ struct virtqueue *vq = NULL;
+ int32_t queue_type = 0;
+ uint16_t i = 0;
+
+ if (hw->vqs == NULL)
+ return 0;
+
+ if (zxdh_release_channel(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to clear coi table");
+ return -1;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (vq == NULL)
+ continue;
+
+ VTPCI_OPS(hw)->del_queue(hw, vq);
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == VTNET_RQ) {
+ rte_free(vq->sw_ring);
+ rte_memzone_free(vq->rxq.mz);
+ } else if (queue_type == VTNET_TQ) {
+ rte_memzone_free(vq->txq.mz);
+ rte_memzone_free(vq->txq.virtio_net_hdr_mz);
+ }
+
+ rte_free(vq);
+ hw->vqs[i] = NULL;
+ PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i);
+ }
+
+ rte_free(hw->vqs);
+ hw->vqs = NULL;
+
+ return 0;
+}
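
For reference, the COI bitmap arithmetic used by zxdh_release_channel() above: each 32-bit word at ZXDH_QUERES_SHARE_BASE tracks 32 physical channels. Releasing, say, physical channel 37 works out to:

	uint16_t pch = 37;
	uint32_t widx = pch / 32; /* word 1                              */
	uint32_t bidx = pch % 32; /* bit 5                               */
	uint32_t addr = ZXDH_QUERES_SHARE_BASE + widx * sizeof(uint32_t);
	/* read-modify-write: var &= ~(1 << bidx) clears the channel bit */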
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
index 336c0701f4..e0fa5502e8 100644
--- a/drivers/net/zxdh/zxdh_queue.h
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -15,6 +15,25 @@ extern "C" {
#include "zxdh_ethdev.h"
#include "zxdh_rxtx.h"
+#include "zxdh_pci.h"
+
+enum { VTNET_RQ = 0, VTNET_TQ = 1 };
+
+#define VIRTQUEUE_MAX_NAME_SZ 32
+#define RQ_QUEUE_IDX 0
+#define TQ_QUEUE_IDX 1
+#define ZXDH_MAX_TX_INDIRECT 8
+
+/* This marks a buffer as write-only (otherwise read-only). */
+#define VRING_DESC_F_WRITE 2
+/* This flag means the descriptor was made available by the driver */
+#define VRING_PACKED_DESC_F_AVAIL (1 << (7))
+
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+
+#define VQ_RING_DESC_CHAIN_END 32768
/** ring descriptors: 16 bytes.
* These can chain together via "next".
@@ -26,6 +45,19 @@ struct vring_desc {
uint16_t next; /* We chain unused descriptors via this. */
};
+struct vring_used_elem {
+ /* Index of start of used descriptor chain. */
+ uint32_t id;
+ /* Total length of the descriptor chain which was written to. */
+ uint32_t len;
+};
+
+struct vring_used {
+ uint16_t flags;
+ uint16_t idx;
+ struct vring_used_elem ring[0];
+};
+
struct vring_avail {
uint16_t flags;
uint16_t idx;
@@ -102,4 +134,148 @@ struct virtqueue {
struct vq_desc_extra vq_descx[0];
};
+struct zxdh_type_hdr {
+ uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */
+ uint8_t pd_len;
+ uint8_t num_buffers;
+ uint8_t reserved;
+} __rte_packed; /* 4B */
+
+struct zxdh_pi_hdr {
+ uint8_t pi_len;
+ uint8_t pkt_type;
+ uint16_t vlan_id;
+ uint32_t ipv6_extend;
+ uint16_t l3_offset;
+ uint16_t l4_offset;
+ uint8_t phy_port;
+ uint8_t pkt_flag_hi8;
+ uint16_t pkt_flag_lw16;
+ union {
+ struct {
+ uint64_t sa_idx;
+ uint8_t reserved_8[8];
+ } dl;
+ struct {
+ uint32_t lro_flag;
+ uint32_t lro_mss;
+ uint16_t err_code;
+ uint16_t pm_id;
+ uint16_t pkt_len;
+ uint8_t reserved[2];
+ } ul;
+ };
+} __rte_packed; /* 32B */
+
+struct zxdh_pd_hdr_dl {
+ uint32_t ol_flag;
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t dst_vfid;
+ uint32_t svlan_insert;
+ uint32_t cvlan_insert;
+} __rte_packed; /* 16B */
+
+struct zxdh_net_hdr_dl {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_dl pd_hdr; /* 16B */
+} __rte_packed;
+
+struct zxdh_pd_hdr_ul {
+ uint32_t pkt_flag;
+ uint32_t rss_hash;
+ uint32_t fd;
+ uint32_t striped_vlan_tci;
+ /* ovs */
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t src_vfid;
+ /* */
+ uint16_t pkt_type_out;
+ uint16_t pkt_type_in;
+} __rte_packed; /* 24B */
+
+struct zxdh_net_hdr_ul {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_ul pd_hdr; /* 24B */
+} __rte_packed; /* 60B */
+
+struct zxdh_tx_region {
+ struct zxdh_net_hdr_dl tx_hdr;
+ union {
+ struct vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT];
+ struct vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT];
+ } __rte_packed;
+};
+
+static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align)
+{
+ size_t size;
+
+ if (vtpci_packed_queue(hw)) {
+ size = num * sizeof(struct vring_packed_desc);
+ size += sizeof(struct vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_packed_desc_event);
+ return size;
+ }
+
+ size = num * sizeof(struct vring_desc);
+ size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_used) + (num * sizeof(struct vring_used_elem));
+ return size;
+}
+
+static inline void vring_init_packed(struct vring_packed *vr, uint8_t *p,
+ unsigned long align, uint32_t num)
+{
+ vr->num = num;
+ vr->desc = (struct vring_packed_desc *)p;
+ vr->driver = (struct vring_packed_desc_event *)(p +
+ vr->num * sizeof(struct vring_packed_desc));
+ vr->device = (struct vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver +
+ sizeof(struct vring_packed_desc_event)), align);
+}
+
+static inline void vring_desc_init_packed(struct virtqueue *vq, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n - 1; i++) {
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = i + 1;
+ }
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
+static inline void vring_desc_init_indirect_packed(struct vring_packed_desc *dp, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n; i++) {
+ dp[i].id = (uint16_t)i;
+ dp[i].flags = VRING_DESC_F_WRITE;
+ }
+}
+
+static inline void virtqueue_disable_intr(struct virtqueue *vq)
+{
+ if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow;
+ }
+}
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq);
+int32_t zxdh_free_queues(struct rte_eth_dev *dev);
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx);
+
+#ifdef __cplusplus
+}
+#endif
+
#endif /* _ZXDH_QUEUE_H_ */
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
index 7314f76d2c..6476bc15e2 100644
--- a/drivers/net/zxdh/zxdh_rxtx.h
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -48,4 +48,8 @@ struct virtnet_tx {
const struct rte_memzone *mz; /* mem zone to populate TX ring. */
};
+#ifdef __cplusplus
+}
+#endif
+
#endif /* _ZXDH_RXTX_H_ */
--
2.27.0
* Re: [v6,9/9] net/zxdh: add zxdh dev configure ops
2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
@ 2024-10-18 5:18 ` Junlong Wang
2024-10-18 6:48 ` David Marchand
2024-10-19 11:17 ` Junlong Wang
1 sibling, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-18 5:18 UTC (permalink / raw)
To: dev; +Cc: stephen, ferruh.yigit, wang.yong19
Hi, Maintainer
iol-unit-amd64-testing/ Debian 12 | FAIL
DPDK:fast-tests / bitops_autotest FAIL 1.72 s (exit status 255 or signal 127 SIGinvalid)
Do we need to solve this error? We haven't found the detailed reason for the error in the output log. I noticed that the patches submitted by others also have this error.
Thanks.
* Re: [v6,9/9] net/zxdh: add zxdh dev configure ops
2024-10-18 5:18 ` [v6,9/9] " Junlong Wang
@ 2024-10-18 6:48 ` David Marchand
0 siblings, 0 replies; 76+ messages in thread
From: David Marchand @ 2024-10-18 6:48 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19
Hello Junlong,
On Fri, Oct 18, 2024 at 7:21 AM Junlong Wang <wang.junlong1@zte.com.cn> wrote:
>
> Hi, Maintainer
>
> iol-unit-amd64-testing/ Debian 12 | FAIL
> DPDK:fast-tests / bitops_autotest FAIL 1.72 s (exit status 255 or signal 127 SIGinvalid)
>
> Do we need to solve this error? We haven't found the detailed reason for the error in the output log. I noticed that the patches submitted by others also have this error.
It was a false positive when testing your series (on top of next-net
repo at the time) and you can ignore this failure.
This is fixed in the main repo with: 899cffe059b8 ("test/bitops: fix
32-bit atomic test spurious failure").
next-net was rebased recently and contains the fix.
--
David Marchand
* Re: [v6,9/9] net/zxdh: add zxdh dev configure ops
2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
2024-10-18 5:18 ` [v6,9/9] " Junlong Wang
@ 2024-10-19 11:17 ` Junlong Wang
1 sibling, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-19 11:17 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stephen, wang.yong19
Hi, Maintainer
Hope you can take some time to check if there are any modifications needed for the net/zxdh driver.
Thank you very much!
* Re: [PATCH v6 5/9] net/zxdh: add msg chan enable implementation
2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
@ 2024-10-21 8:50 ` Thomas Monjalon
2024-10-21 10:56 ` Junlong Wang
1 sibling, 0 replies; 76+ messages in thread
From: Thomas Monjalon @ 2024-10-21 8:50 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19
16/10/2024 10:18, Junlong Wang:
> Add msg chan enable implementation to support
> send msg to get infos.
Would be interesting to explain which module is receiving the message.
> +union VPORT {
Why uppercase? To differentiate it from the vport field below?
In general we tend to use only lowercase.
> + uint16_t vport;
> + struct {
> + uint16_t vfid:8;
> + uint16_t pfid:3;
> + uint16_t vf_flag:1;
> + uint16_t epid:3;
> + uint16_t direct_flag:1;
> + };
> +};
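A minimal sketch of the lowercase form (the rename is illustrative only; the final name is up to the author):

union zxdh_virport_num {
	uint16_t vport;
	struct {
		uint16_t vfid:8;
		uint16_t pfid:3;
		uint16_t vf_flag:1;
		uint16_t epid:3;
		uint16_t direct_flag:1;
	};
};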
I am curious about the spinlock function below.
What is it doing exactly?
> +static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr,
> + uint64_t label_addr, uint16_t primary_id)
> +{
> + uint32_t lock_rd_cnt = 0;
> +
> + do {
> + /* read to lock */
> + uint8_t spl_val = spinklock_read(virt_addr, virt_lock_id);
typo, it should be spinlock_read
> +
> + if (spl_val == 0) {
> + label_write((uint64_t)label_addr, virt_lock_id, primary_id);
> + break;
> + }
> + rte_delay_us_block(SPINLOCK_POLLING_SPAN_US);
> + lock_rd_cnt++;
> + } while (lock_rd_cnt < MAX_HARD_SPINLOCK_ASK_TIMES);
> + if (lock_rd_cnt >= MAX_HARD_SPINLOCK_ASK_TIMES)
> + return -1;
> +
> + return 0;
> +}
[...]
> +pthread_spinlock_t chan_lock;
> +static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
> +{
> + int ret = 0;
> + uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
> + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
> +
> + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
> + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist.");
> + return BAR_MSG_ERR_TYPE;
> + }
> + uint16_t idx = lock_type_tbl[src_index][dst_index];
> +
> + if (idx == LOCK_TYPE_SOFT)
> + pthread_spin_lock(&chan_lock);
In general we avoid the pthread.h functions.
Do you know we have rte_spinlock_lock()?
> + else
> + ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr);
> +
> + if (ret != 0)
> + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid);
> +
> + return ret;
> +}
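For reference, a minimal sketch of the soft-lock path using rte_spinlock instead, assuming the rest of the function stays as quoted above:

#include <rte_spinlock.h>

static rte_spinlock_t chan_lock = RTE_SPINLOCK_INITIALIZER;

	/* inside zxdh_bar_chan_lock(), replacing pthread_spin_lock(&chan_lock) */
	if (idx == LOCK_TYPE_SOFT)
		rte_spinlock_lock(&chan_lock);
	else
		ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr);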
[...]
> --- a/drivers/net/zxdh/zxdh_msg.h
> +++ b/drivers/net/zxdh/zxdh_msg.h
> @@ -13,6 +13,19 @@ extern "C" {
>
> #include <ethdev_driver.h>
>
> +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
> +
You should continue using the ZXDH_ prefix.
The definitions below have a high risk of namespace conflicts in the future.
Using a namespace prefix is a good practice.
> +#define BAR_MSG_POLLING_SPAN 100
> +#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN)
> +#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN)
> +#define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN)
> +
> +#define BAR_CHAN_MSG_SYNC 0
> +
> +#define BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */
> +#define BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct bar_msg_header))
> +#define BAR_MSG_PAYLOAD_MAX_LEN (BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header))
>
> enum DRIVER_TYPE {
>
> MSG_CHAN_END_MPF = 0,
> MSG_CHAN_END_PF,
>
> @@ -20,6 +33,13 @@ enum DRIVER_TYPE {
>
> MSG_CHAN_END_RISC,
>
> };
>
> +enum MSG_VEC {
> + MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
> + MSIX_FROM_MPF,
> + MSIX_FROM_RISCV,
> + MSG_VEC_NUM,
> +};
> +
>
> enum BAR_MSG_RTN {
>
> BAR_MSG_OK = 0,
> BAR_MSG_ERR_MSGID,
>
> @@ -54,10 +74,117 @@ enum BAR_MSG_RTN {
>
> BAR_MSG_ERR_SOCKET, /* netlink socket err */
>
> };
>
> +enum bar_module_id {
> + BAR_MODULE_DBG = 0, /* 0: debug */
> + BAR_MODULE_TBL, /* 1: resource table */
> + BAR_MODULE_MISX, /* 2: config msix */
> + BAR_MODULE_SDA, /* 3: */
> + BAR_MODULE_RDMA, /* 4: */
> + BAR_MODULE_DEMO, /* 5: channel test */
> + BAR_MODULE_SMMU, /* 6: */
> + BAR_MODULE_MAC, /* 7: mac rx/tx stats */
> + BAR_MODULE_VDPA, /* 8: vdpa live migration */
> + BAR_MODULE_VQM, /* 9: vqm live migration */
> + BAR_MODULE_NP, /* 10: vf msg callback np */
> + BAR_MODULE_VPORT, /* 11: get vport */
> + BAR_MODULE_BDF, /* 12: get bdf */
> + BAR_MODULE_RISC_READY, /* 13: */
> + BAR_MODULE_REVERSE, /* 14: byte stream reverse */
> + BAR_MDOULE_NVME, /* 15: */
> + BAR_MDOULE_NPSDK, /* 16: */
> + BAR_MODULE_NP_TODO, /* 17: */
> + MODULE_BAR_MSG_TO_PF, /* 18: */
> + MODULE_BAR_MSG_TO_VF, /* 19: */
> +
> + MODULE_FLASH = 32,
> + BAR_MODULE_OFFSET_GET = 33,
> + BAR_EVENT_OVS_WITH_VCB = 36,
> +
> + BAR_MSG_MODULE_NUM = 100,
> +};
> +
> +struct msix_para {
> + uint16_t pcie_id;
> + uint16_t vector_risc;
> + uint16_t vector_pfvf;
> + uint16_t vector_mpf;
> + uint64_t virt_addr;
> + uint16_t driver_type; /* refer to DRIVER_TYPE */
> +};
> +
> +struct msix_msg {
> + uint16_t pcie_id;
> + uint16_t vector_risc;
> + uint16_t vector_pfvf;
> + uint16_t vector_mpf;
> +};
> +
> +struct zxdh_pci_bar_msg {
> + uint64_t virt_addr; /* bar addr */
> + void *payload_addr;
> + uint16_t payload_len;
> + uint16_t emec;
> + uint16_t src; /* refer to BAR_DRIVER_TYPE */
> + uint16_t dst; /* refer to BAR_DRIVER_TYPE */
> + uint16_t module_id;
> + uint16_t src_pcieid;
> + uint16_t dst_pcieid;
> + uint16_t usr;
> +};
> +
> +struct bar_msix_reps {
> + uint16_t pcie_id;
> + uint16_t check;
> + uint16_t vport;
> + uint16_t rsv;
> +} __rte_packed;
> +
> +struct bar_offset_reps {
> + uint16_t check;
> + uint16_t rsv;
> + uint32_t offset;
> + uint32_t length;
> +} __rte_packed;
> +
> +struct bar_recv_msg {
> + uint8_t reps_ok;
> + uint16_t reps_len;
> + uint8_t rsv;
> + /* */
> + union {
> + struct bar_msix_reps msix_reps;
> + struct bar_offset_reps offset_reps;
> + } __rte_packed;
> +} __rte_packed;
> +
> +struct zxdh_msg_recviver_mem {
> + void *recv_buffer; /* first 4B is head, followed by payload */
> + uint64_t buffer_len;
> +};
> +
> +struct bar_msg_header {
> + uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */
> + uint8_t sync : 1;
> + uint8_t emec : 1; /* emergency? */
> + uint8_t ack : 1; /* ack msg? */
> + uint8_t poll : 1;
> + uint8_t usr : 1;
> + uint8_t rsv;
> + uint16_t module_id;
> + uint16_t len;
> + uint16_t msg_id;
> + uint16_t src_pcieid;
> + uint16_t dst_pcieid; /* used in PF-->VF */
* Re: [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos
2024-10-16 8:18 ` [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
@ 2024-10-21 8:52 ` Thomas Monjalon
0 siblings, 0 replies; 76+ messages in thread
From: Thomas Monjalon @ 2024-10-21 8:52 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19
16/10/2024 10:18, Junlong Wang:
> --- a/drivers/net/zxdh/zxdh_msg.c
> +++ b/drivers/net/zxdh/zxdh_msg.c
> @@ -18,8 +18,6 @@
> #define REPS_INFO_FLAG_USABLE 0x00
> #define BAR_SEQID_NUM_MAX 256
>
> -#define ZXDH_BAR0_INDEX 0
> -
> #define PCIEID_IS_PF_MASK (0x0800)
> #define PCIEID_PF_IDX_MASK (0x0700)
> #define PCIEID_VF_IDX_MASK (0x00ff)
> @@ -43,7 +41,6 @@
> #define FW_SHRD_OFFSET (0x5000)
> #define FW_SHRD_INNER_HW_LABEL_PAT (0x800)
> #define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT)
> -#define ZXDH_CTRLCH_OFFSET (0x2000)
> #define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET)
> #define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET)
> #define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET)
Removing code is strange here.
Maybe it should have been placed somewhere else initially?
* Re: [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops
2024-10-16 8:18 ` [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
@ 2024-10-21 8:54 ` Thomas Monjalon
0 siblings, 0 replies; 76+ messages in thread
From: Thomas Monjalon @ 2024-10-21 8:54 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19
16/10/2024 10:18, Junlong Wang:
> +static uint32_t zxdh_dev_speed_capa_get(uint32_t speed)
> +{
> + switch (speed) {
> + case RTE_ETH_SPEED_NUM_10G: return RTE_ETH_LINK_SPEED_10G;
> + case RTE_ETH_SPEED_NUM_20G: return RTE_ETH_LINK_SPEED_20G;
> + case RTE_ETH_SPEED_NUM_25G: return RTE_ETH_LINK_SPEED_25G;
> + case RTE_ETH_SPEED_NUM_40G: return RTE_ETH_LINK_SPEED_40G;
> + case RTE_ETH_SPEED_NUM_50G: return RTE_ETH_LINK_SPEED_50G;
> + case RTE_ETH_SPEED_NUM_56G: return RTE_ETH_LINK_SPEED_56G;
> + case RTE_ETH_SPEED_NUM_100G: return RTE_ETH_LINK_SPEED_100G;
> + case RTE_ETH_SPEED_NUM_200G: return RTE_ETH_LINK_SPEED_200G;
> + default: return 0;
> + }
> +}
You could use rte_eth_speed_bitflag() instead.
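For reference, a sketch of that simplification (assuming full-duplex capability is what should be reported):

static uint32_t zxdh_dev_speed_capa_get(uint32_t speed)
{
	/* rte_eth_speed_bitflag() maps RTE_ETH_SPEED_NUM_* to RTE_ETH_LINK_SPEED_* */
	return rte_eth_speed_bitflag(speed, RTE_ETH_LINK_FULL_DUPLEX);
}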
* Re: [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver
2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
@ 2024-10-21 9:03 ` Thomas Monjalon
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2 siblings, 0 replies; 76+ messages in thread
From: Thomas Monjalon @ 2024-10-21 9:03 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19
16/10/2024 10:16, Junlong Wang:
> Add basic zxdh ethdev init and register PCI probe functions
> Update doc files
>
> Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
> ---
> doc/guides/nics/features/zxdh.ini | 9 +++
> doc/guides/nics/index.rst | 1 +
> doc/guides/nics/zxdh.rst | 30 ++++++++++
> drivers/net/meson.build | 1 +
> drivers/net/zxdh/meson.build | 18 ++++++
> drivers/net/zxdh/zxdh_ethdev.c | 92 +++++++++++++++++++++++++++++++
> drivers/net/zxdh/zxdh_ethdev.h | 44 +++++++++++++++
> 7 files changed, 195 insertions(+)
Release notes are missing.
[...]
> +++ b/doc/guides/nics/zxdh.rst
> @@ -0,0 +1,30 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
A single space is enough here.
> + Copyright(c) 2024 ZTE Corporation.
> +
> +ZXDH Poll Mode Driver
> +======================
> +
> +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
> +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on
> +the ZTE Ethernet Controller E310/E312.
> +
> +- Learning about ZXDH NX Series Ethernet Controller NICs using
Active form is better: "Learn about"
> + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
> +
> +Features
> +--------
> +
> +Features of the zxdh PMD are:
Do you name your driver uppercase or lowercase?
Try to be consistent.
> +
> +- Multi arch support: x86_64, ARMv8.
> +
> +
> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +for details.
> +
> +Limitations or Known issues
> +---------------------------
> +X86-32, Power8, ARMv7 and BSD are not supported yet.
Keep an empty line after a title.
What about RISC-V and Windows?
[...]
> +++ b/drivers/net/zxdh/zxdh_ethdev.h
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 ZTE Corporation
> + */
> +
> +#ifndef _ZXDH_ETHDEV_H_
> +#define _ZXDH_ETHDEV_H_
No need for the underscores at the beggining and end:
ZXDH_ETHDEV_H
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "ethdev_driver.h"
The includes should be before extern "C"
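A header skeleton combining both comments might look like:

#ifndef ZXDH_ETHDEV_H
#define ZXDH_ETHDEV_H

#include "ethdev_driver.h"

#ifdef __cplusplus
extern "C" {
#endif

/* ... declarations ... */

#ifdef __cplusplus
}
#endif

#endif /* ZXDH_ETHDEV_H */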
* Re: [PATCH v6 5/9] net/zxdh: add msg chan enable implementation
2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
2024-10-21 8:50 ` Thomas Monjalon
@ 2024-10-21 10:56 ` Junlong Wang
1 sibling, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-21 10:56 UTC (permalink / raw)
To: thomas; +Cc: dev, ferruh.yigit, stephen, wang.yong19
>> Add msg chan enable implementation to support
>> send msg to get infos.
> Would be interesting to explain which module is receiving the message.
Send messages to the backend (device side) to obtain information.
> I am curious about the spinlock function below.
> What is it doing exactly?
>> +static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr,
>> + uint64_t label_addr, uint16_t primary_id)
>> +{
>> + uint32_t lock_rd_cnt = 0;
>> +
>> + do {
>> + /* read to lock */
>> + uint8_t spl_val = spinklock_read(virt_addr, virt_lock_id);
We use the locks to ensure message consistency when multiple PFs/VFs access the backend information concurrently.
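In other words, each exchange brackets the channel access with the lock, roughly like this (a sketch; a matching zxdh_bar_chan_unlock() taking the same arguments is assumed):

	ret = zxdh_bar_chan_lock(src, dst, src_pcieid, virt_addr);
	if (ret != 0)
		return ret;	/* another PF/VF currently holds the channel */

	/* write the request into the BAR channel and poll for the reply */

	zxdh_bar_chan_unlock(src, dst, src_pcieid, virt_addr);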
* [PATCH v7 0/9] net/zxdh: introduce net zxdh driver
2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-21 9:03 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Thomas Monjalon
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
` (8 more replies)
2 siblings, 9 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
v7:
- add release notes and fix zxdh.rst issues.
- avoid using pthread; use rte_spinlock_lock instead.
- use the ZXDH_ prefix for some definitions.
- resolve issues according to Thomas's comments.
v6:
- Resolve ci/intel compilation issues.
- fix meson.build indentation in earlier patch.
V5:
- split driver into multiple patches; this part covers the zxdh driver basics,
later patches will provide dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc.
- fix errors reported by scripts.
- move the product link in zxdh.rst.
- fix meson check use RTE_ARCH_X86_64/RTE_ARCH_ARM64.
- modify other comments according to Ferruh's comments.
Junlong Wang (9):
net/zxdh: add zxdh ethdev pmd driver
net/zxdh: add logging implementation
net/zxdh: add zxdh device pci init implementation
net/zxdh: add msg chan and msg hwlock init
net/zxdh: add msg chan enable implementation
net/zxdh: add zxdh get device backend infos
net/zxdh: add configure zxdh intr implementation
net/zxdh: add zxdh dev infos get ops
net/zxdh: add zxdh dev configure ops
doc/guides/nics/features/zxdh.ini | 9 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 31 +
doc/guides/rel_notes/release_24_11.rst | 4 +
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 22 +
drivers/net/zxdh/zxdh_common.c | 367 +++++++++
drivers/net/zxdh/zxdh_common.h | 42 +
drivers/net/zxdh/zxdh_ethdev.c | 1006 ++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 100 +++
drivers/net/zxdh/zxdh_logs.h | 40 +
drivers/net/zxdh/zxdh_msg.c | 982 +++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 228 ++++++
drivers/net/zxdh/zxdh_pci.c | 449 +++++++++++
drivers/net/zxdh/zxdh_pci.h | 192 +++++
drivers/net/zxdh/zxdh_queue.c | 131 +++
drivers/net/zxdh/zxdh_queue.h | 282 +++++++
drivers/net/zxdh/zxdh_rxtx.h | 55 ++
18 files changed, 3942 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
create mode 100644 drivers/net/zxdh/zxdh_logs.h
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.c
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
--
2.27.0
* [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-22 12:20 ` [PATCH v7 2/9] net/zxdh: add logging implementation Junlong Wang
` (7 subsequent siblings)
8 siblings, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
Add basic zxdh ethdev init and register PCI probe functions
Update doc files.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
doc/guides/nics/features/zxdh.ini | 9 +++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 31 +++++++++
doc/guides/rel_notes/release_24_11.rst | 4 ++
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 18 +++++
drivers/net/zxdh/zxdh_ethdev.c | 92 ++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 44 ++++++++++++
8 files changed, 200 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini
new file mode 100644
index 0000000000..05c8091ed7
--- /dev/null
+++ b/doc/guides/nics/features/zxdh.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'zxdh' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
+ARMv8 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc7988a..8e371ac4a5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -69,3 +69,4 @@ Network Interface Controller Drivers
vhost
virtio
vmxnet3
+ zxdh
diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
new file mode 100644
index 0000000000..920ff5175e
--- /dev/null
+++ b/doc/guides/nics/zxdh.rst
@@ -0,0 +1,31 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2024 ZTE Corporation.
+
+ZXDH Poll Mode Driver
+======================
+
+The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
+for 25/100 Gbps ZXDH NX Series Ethernet Controller based on
+the ZTE Ethernet Controller E310/E312.
+
+- Learn about ZXDH NX Series Ethernet Controller NICs using
+ `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
+
+Features
+--------
+
+Features of the ZXDH PMD are:
+
+- Multi arch support: x86_64, ARMv8.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+
+X86-32, Power8, ARMv7, RISC-V, Windows and BSD are not supported yet.
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index fa4822d928..d1edcfebee 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -161,6 +161,10 @@ New Features
* Added initialization of FPGA modules related to flow HW offload.
* Added basic handling of the virtual queues.
+* **Updated ZTE zxdh net driver.**
+
+ * Added ethdev driver support for zxdh NX Series Ethernet Controller.
+
* **Added cryptodev queue pair reset support.**
A new API ``rte_cryptodev_queue_pair_reset`` is added
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index fb6d34b782..0a12914534 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -62,6 +62,7 @@ drivers = [
'vhost',
'virtio',
'vmxnet3',
+ 'zxdh',
]
std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
new file mode 100644
index 0000000000..932fb1c835
--- /dev/null
+++ b/drivers/net/zxdh/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 ZTE Corporation
+
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on x86_64 and aarch64'
+ subdir_done()
+endif
+
+sources = files(
+ 'zxdh_ethdev.c',
+)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
new file mode 100644
index 0000000000..75d8b28cc3
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <ethdev_pci.h>
+#include <bus_pci_driver.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ int ret = 0;
+
+ eth_dev->dev_ops = NULL;
+
+ /* Allocate memory for storing MAC addresses */
+ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
+ if (eth_dev->data->mac_addrs == NULL)
+ return -ENOMEM;
+
+ memset(hw, 0, sizeof(*hw));
+ hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
+ if (hw->bar_addr[0] == 0)
+ return -EIO;
+
+ hw->device_id = pci_dev->id.device_id;
+ hw->port_id = eth_dev->data->port_id;
+ hw->eth_dev = eth_dev;
+ hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ hw->is_pf = 0;
+
+ if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID ||
+ pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) {
+ hw->is_pf = 1;
+ }
+
+ return ret;
+}
+
+static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct zxdh_hw),
+ zxdh_eth_dev_init);
+}
+
+static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+ int ret = 0;
+
+ return ret;
+}
+
+static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ int ret = 0;
+
+ ret = zxdh_dev_close(eth_dev);
+
+ return ret;
+}
+
+static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev)
+{
+ int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit);
+
+ return ret;
+}
+
+static const struct rte_pci_id pci_id_zxdh_map[] = {
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)},
+ {.vendor_id = 0, /* sentinel */ },
+};
+static struct rte_pci_driver zxdh_pmd = {
+ .id_table = pci_id_zxdh_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = zxdh_eth_pci_probe,
+ .remove = zxdh_eth_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
new file mode 100644
index 0000000000..086f3a0cdc
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_ETHDEV_H
+#define ZXDH_ETHDEV_H
+
+#include "ethdev_driver.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* ZXDH PCI vendor/device ID. */
+#define PCI_VENDOR_ID_ZTE 0x1cf2
+
+#define ZXDH_E310_PF_DEVICEID 0x8061
+#define ZXDH_E310_VF_DEVICEID 0x8062
+#define ZXDH_E312_PF_DEVICEID 0x8049
+#define ZXDH_E312_VF_DEVICEID 0x8060
+
+#define ZXDH_MAX_UC_MAC_ADDRS 32
+#define ZXDH_MAX_MC_MAC_ADDRS 32
+#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+
+#define ZXDH_NUM_BARS 2
+
+struct zxdh_hw {
+ struct rte_eth_dev *eth_dev;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+
+ uint32_t speed;
+ uint16_t device_id;
+ uint16_t port_id;
+
+ uint8_t duplex;
+ uint8_t is_pf;
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_ETHDEV_H */
--
2.27.0
* [PATCH v7 2/9] net/zxdh: add logging implementation
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
` (6 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
Add zxdh logging implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++--
drivers/net/zxdh/zxdh_logs.h | 40 ++++++++++++++++++++++++++++++++++
2 files changed, 53 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_logs.h
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 75d8b28cc3..7220770c01 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -7,6 +7,7 @@
#include <rte_ethdev.h>
#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL)
+ if (eth_dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN);
return -ENOMEM;
+ }
memset(hw, 0, sizeof(*hw));
hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
- if (hw->bar_addr[0] == 0)
+ if (hw->bar_addr[0] == 0) {
+ PMD_INIT_LOG(ERR, "Bad mem resource.");
return -EIO;
+ }
hw->device_id = pci_dev->id.device_id;
hw->port_id = eth_dev->data->port_id;
@@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = {
RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE);
diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h
new file mode 100644
index 0000000000..a8a6a3135b
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_logs.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_LOGS_H
+#define ZXDH_LOGS_H
+
+#include <rte_log.h>
+
+extern int zxdh_logtype_init;
+#define RTE_LOGTYPE_ZXDH_INIT zxdh_logtype_init
+#define PMD_INIT_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_INIT, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_driver;
+#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver
+#define PMD_DRV_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_rx;
+#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx
+#define PMD_RX_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_tx;
+#define RTE_LOGTYPE_ZXDH_TX zxdh_logtype_tx
+#define PMD_TX_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_TX, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_msg;
+#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg
+#define PMD_MSG_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+#endif /* ZXDH_LOGS_H */
--
2.27.0
* [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-10-22 12:20 ` [PATCH v7 2/9] net/zxdh: add logging implementation Junlong Wang
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-27 16:47 ` Stephen Hemminger
2024-10-27 16:47 ` Stephen Hemminger
2024-10-22 12:20 ` [PATCH v7 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
` (5 subsequent siblings)
8 siblings, 2 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
Add device PCI init implementation
to obtain PCI capabilities and read configuration, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_ethdev.c | 43 +++++
drivers/net/zxdh/zxdh_ethdev.h | 22 ++-
drivers/net/zxdh/zxdh_pci.c | 290 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++
drivers/net/zxdh/zxdh_queue.h | 105 ++++++++++++
drivers/net/zxdh/zxdh_rxtx.h | 51 ++++++
7 files changed, 660 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 932fb1c835..7db4e7bc71 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -15,4 +15,5 @@ endif
sources = files(
'zxdh_ethdev.c',
+ 'zxdh_pci.c',
)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 7220770c01..f34b2af7b3 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -8,6 +8,40 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+
+struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+
+static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
+{
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int ret = 0;
+
+ ret = zxdh_read_pci_caps(pci_dev, hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed .", hw->port_id);
+ goto err;
+ }
+
+ zxdh_hw_internal[hw->port_id].vtpci_ops = &zxdh_dev_pci_ops;
+ zxdh_vtpci_reset(hw);
+ zxdh_get_pci_dev_config(hw);
+
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]);
+
+ /* If host does not support both status and MSI-X then disable LSC */
+ if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE)
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+ else
+ eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
+
+ return 0;
+
+err:
+ PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id);
+ return ret;
+}
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
hw->is_pf = 1;
}
+ ret = zxdh_init_device(eth_dev);
+ if (ret < 0)
+ goto err_zxdh_init;
+
+ return ret;
+
+err_zxdh_init:
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
return ret;
}
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 086f3a0cdc..e2db2508e2 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -5,6 +5,8 @@
#ifndef ZXDH_ETHDEV_H
#define ZXDH_ETHDEV_H
+#include <rte_ether.h>
+
#include "ethdev_driver.h"
#ifdef __cplusplus
@@ -23,16 +25,30 @@ extern "C" {
#define ZXDH_MAX_MC_MAC_ADDRS 32
#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
-#define ZXDH_NUM_BARS 2
+#define ZXDH_NUM_BARS 2
+#define ZXDH_RX_QUEUES_MAX 128U
+#define ZXDH_TX_QUEUES_MAX 128U
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
- uint64_t bar_addr[ZXDH_NUM_BARS];
+ struct zxdh_pci_common_cfg *common_cfg;
+ struct zxdh_net_config *dev_cfg;
- uint32_t speed;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+ uint64_t host_features;
+ uint64_t guest_features;
+ uint32_t max_queue_pairs;
+ uint32_t speed;
+ uint32_t notify_off_multiplier;
+ uint16_t *notify_base;
+ uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint8_t *isr;
+ uint8_t weak_barriers;
+ uint8_t use_msix;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
uint8_t is_pf;
};
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
new file mode 100644
index 0000000000..b0c3d0e0be
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -0,0 +1,290 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <unistd.h>
+
+#ifdef RTE_EXEC_ENV_LINUX
+ #include <dirent.h>
+ #include <fcntl.h>
+#endif
+
+#include <rte_io.h>
+#include <rte_bus.h>
+#include <rte_pci.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_pci.h"
+#include "zxdh_logs.h"
+#include "zxdh_queue.h"
+
+#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \
+ (1ULL << ZXDH_NET_F_MRG_RXBUF | \
+ 1ULL << ZXDH_NET_F_STATUS | \
+ 1ULL << ZXDH_NET_F_MQ | \
+ 1ULL << ZXDH_F_ANY_LAYOUT | \
+ 1ULL << ZXDH_F_VERSION_1 | \
+ 1ULL << ZXDH_F_RING_PACKED | \
+ 1ULL << ZXDH_F_IN_ORDER | \
+ 1ULL << ZXDH_F_NOTIFICATION_DATA | \
+ 1ULL << ZXDH_NET_F_MAC)
+
+static void zxdh_read_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ void *dst,
+ int32_t length)
+{
+ int32_t i = 0;
+ uint8_t *p = NULL;
+ uint8_t old_gen = 0;
+ uint8_t new_gen = 0;
+
+ do {
+ old_gen = rte_read8(&hw->common_cfg->config_generation);
+
+ p = dst;
+ for (i = 0; i < length; i++)
+ *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i);
+
+ new_gen = rte_read8(&hw->common_cfg->config_generation);
+ } while (old_gen != new_gen);
+}
+
+static void zxdh_write_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ const void *src,
+ int32_t length)
+{
+ int32_t i = 0;
+ const uint8_t *p = src;
+
+ for (i = 0; i < length; i++)
+ rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
+}
+
+static uint8_t zxdh_get_status(struct zxdh_hw *hw)
+{
+ return rte_read8(&hw->common_cfg->device_status);
+}
+
+static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ rte_write8(status, &hw->common_cfg->device_status);
+}
+
+static uint64_t zxdh_get_features(struct zxdh_hw *hw)
+{
+ uint32_t features_lo = 0;
+ uint32_t features_hi = 0;
+
+ rte_write32(0, &hw->common_cfg->device_feature_select);
+ features_lo = rte_read32(&hw->common_cfg->device_feature);
+
+ rte_write32(1, &hw->common_cfg->device_feature_select);
+ features_hi = rte_read32(&hw->common_cfg->device_feature);
+
+ return ((uint64_t)features_hi << 32) | features_lo;
+}
+
+static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
+{
+ rte_write32(0, &hw->common_cfg->guest_feature_select);
+ rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature);
+ rte_write32(1, &hw->common_cfg->guest_feature_select);
+ rte_write32(features >> 32, &hw->common_cfg->guest_feature);
+}
+
+const struct zxdh_pci_ops zxdh_dev_pci_ops = {
+ .read_dev_cfg = zxdh_read_dev_config,
+ .write_dev_cfg = zxdh_write_dev_config,
+ .get_status = zxdh_get_status,
+ .set_status = zxdh_set_status,
+ .get_features = zxdh_get_features,
+ .set_features = zxdh_set_features,
+};
+
+uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw)
+{
+ return VTPCI_OPS(hw)->get_features(hw);
+}
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw)
+{
+ PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id);
+ uint32_t retry = 0;
+
+ VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET);
+ /* Flush status write and wait device ready max 3 seconds. */
+ while (VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) {
+ ++retry;
+ rte_delay_ms(1);
+ }
+ PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
+}
+
+static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
+{
+ uint8_t bar = cap->bar;
+ uint32_t length = cap->length;
+ uint32_t offset = cap->offset;
+
+ if (bar >= PCI_MAX_RESOURCE) {
+ PMD_INIT_LOG(ERR, "invalid bar: %u", bar);
+ return NULL;
+ }
+ if (offset + length < offset) {
+ PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length);
+ return NULL;
+ }
+ if (offset + length > dev->mem_resource[bar].len) {
+ PMD_INIT_LOG(ERR, "invalid cap: overflows bar space");
+ return NULL;
+ }
+ uint8_t *base = dev->mem_resource[bar].addr;
+
+ if (base == NULL) {
+ PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar);
+ return NULL;
+ }
+ return base + offset;
+}
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw)
+{
+ uint8_t pos = 0;
+ int32_t ret = 0;
+
+ if (dev->mem_resource[0].addr == NULL) {
+ PMD_INIT_LOG(ERR, "bar0 base addr is NULL");
+ return -1;
+ }
+
+ ret = rte_pci_read_config(dev, &pos, 1, ZXDH_PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret);
+ return -1;
+ }
+ while (pos) {
+ struct zxdh_pci_cap cap;
+
+ ret = rte_pci_read_config(dev, &cap, 2, pos);
+ if (ret != 2) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap.cap_vndr == ZXDH_PCI_CAP_ID_MSIX) {
+ /**
+ * Transitional devices would also have this capability,
+ * that's why we also check if msix is enabled.
+ * 1st byte is cap ID; 2nd byte is the position of next cap;
+ * next two bytes are the flags.
+ */
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2);
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d",
+ pos + 2, ret);
+ break;
+ }
+ hw->use_msix = (flags & ZXDH_PCI_MSIX_ENABLE) ?
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED;
+ }
+ if (cap.cap_vndr != ZXDH_PCI_CAP_ID_VNDR) {
+ PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x",
+ pos, cap.cap_vndr);
+ goto next;
+ }
+ ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u",
+ pos, cap.cfg_type, cap.bar, cap.offset, cap.length);
+ switch (cap.cfg_type) {
+ case ZXDH_PCI_CAP_COMMON_CFG:
+ hw->common_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_NOTIFY_CFG: {
+ ret = rte_pci_read_config(dev, &hw->notify_off_multiplier,
+ 4, pos + sizeof(cap));
+ if (ret != 4)
+ PMD_INIT_LOG(ERR,
+ "failed to read notify_off_multiplier, ret %d", ret);
+ else
+ hw->notify_base = get_cfg_addr(dev, &cap);
+ break;
+ }
+ case ZXDH_PCI_CAP_DEVICE_CFG:
+ hw->dev_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_ISR_CFG:
+ hw->isr = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_PCI_CFG: {
+ hw->pcie_id = *(uint16_t *)&cap.padding[1];
+ PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id);
+ uint16_t pcie_id = hw->pcie_id;
+
+ if ((pcie_id >> 11) & 0x1) /* PF */ {
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7);
+ } else { /* VF */
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff);
+ }
+ break;
+ }
+ }
+next:
+ pos = cap.cap_next;
+ }
+ if (hw->common_cfg == NULL || hw->notify_base == NULL ||
+ hw->dev_cfg == NULL || hw->isr == NULL) {
+ PMD_INIT_LOG(ERR, "no zxdh pci device found.");
+ return -1;
+ }
+ return 0;
+}
+
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length)
+{
+ VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length);
+}
+
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
+{
+ uint64_t guest_features = 0;
+ uint64_t nego_features = 0;
+ uint32_t max_queue_pairs = 0;
+
+ hw->host_features = zxdh_vtpci_get_features(hw);
+
+ guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES;
+ nego_features = guest_features & hw->host_features;
+
+ hw->guest_features = nego_features;
+
+ if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) {
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac),
+ &hw->mac_addr, RTE_ETHER_ADDR_LEN);
+ } else {
+ rte_eth_random_addr(&hw->mac_addr[0]);
+ }
+
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs),
+ &max_queue_pairs, sizeof(max_queue_pairs));
+
+ if (max_queue_pairs == 0)
+ hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX;
+ else
+ hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs);
+ PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs);
+
+ return 0;
+}
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
new file mode 100644
index 0000000000..f67de40962
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_PCI_H
+#define ZXDH_PCI_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#include <bus_pci_driver.h>
+
+#include "zxdh_ethdev.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+enum zxdh_msix_status {
+ ZXDH_MSIX_NONE = 0,
+ ZXDH_MSIX_DISABLED = 1,
+ ZXDH_MSIX_ENABLED = 2
+};
+
+#define ZXDH_PCI_CAPABILITY_LIST 0x34
+#define ZXDH_PCI_CAP_ID_VNDR 0x09
+#define ZXDH_PCI_CAP_ID_MSIX 0x11
+
+#define ZXDH_PCI_MSIX_ENABLE 0x8000
+
+#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
+#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
+#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
+#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */
+#define ZXDH_F_VERSION_1 32
+#define ZXDH_F_RING_PACKED 34
+#define ZXDH_F_IN_ORDER 35
+#define ZXDH_F_NOTIFICATION_DATA 38
+
+#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */
+#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */
+#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */
+#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */
+#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */
+
+/* Status byte for guest to report progress. */
+#define ZXDH_CONFIG_STATUS_RESET 0x00
+#define ZXDH_CONFIG_STATUS_ACK 0x01
+#define ZXDH_CONFIG_STATUS_DRIVER 0x02
+#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04
+#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08
+#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
+#define ZXDH_CONFIG_STATUS_FAILED 0x80
+
+struct zxdh_net_config {
+ /* The config defining mac address (if ZXDH_NET_F_MAC) */
+ uint8_t mac[RTE_ETHER_ADDR_LEN];
+ /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */
+ uint16_t status;
+ uint16_t max_virtqueue_pairs;
+ uint16_t mtu;
+ /*
+ * speed, in units of 1Mb. All values 0 to INT_MAX are legal.
+ * Any other value stands for unknown.
+ */
+ uint32_t speed;
+ /* 0x00 - half duplex
+ * 0x01 - full duplex
+ * Any other value stands for unknown.
+ */
+ uint8_t duplex;
+} __rte_packed;
+
+/* This is the PCI capability header: */
+struct zxdh_pci_cap {
+ uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */
+ uint8_t cap_next; /* Generic PCI field: next ptr. */
+ uint8_t cap_len; /* Generic PCI field: capability length */
+ uint8_t cfg_type; /* Identifies the structure. */
+ uint8_t bar; /* Where to find it. */
+ uint8_t padding[3]; /* Pad to full dword. */
+ uint32_t offset; /* Offset within bar. */
+ uint32_t length; /* Length of the structure, in bytes. */
+};
+
+/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */
+struct zxdh_pci_common_cfg {
+ /* About the whole device. */
+ uint32_t device_feature_select; /* read-write */
+ uint32_t device_feature; /* read-only */
+ uint32_t guest_feature_select; /* read-write */
+ uint32_t guest_feature; /* read-write */
+ uint16_t msix_config; /* read-write */
+ uint16_t num_queues; /* read-only */
+ uint8_t device_status; /* read-write */
+ uint8_t config_generation; /* read-only */
+
+ /* About a specific virtqueue. */
+ uint16_t queue_select; /* read-write */
+ uint16_t queue_size; /* read-write, power of 2. */
+ uint16_t queue_msix_vector; /* read-write */
+ uint16_t queue_enable; /* read-write */
+ uint16_t queue_notify_off; /* read-only */
+ uint32_t queue_desc_lo; /* read-write */
+ uint32_t queue_desc_hi; /* read-write */
+ uint32_t queue_avail_lo; /* read-write */
+ uint32_t queue_avail_hi; /* read-write */
+ uint32_t queue_used_lo; /* read-write */
+ uint32_t queue_used_hi; /* read-write */
+};
+
+static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
+{
+ return (hw->guest_features & (1ULL << bit)) != 0;
+}
+
+struct zxdh_pci_ops {
+ void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
+ void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
+
+ uint8_t (*get_status)(struct zxdh_hw *hw);
+ void (*set_status)(struct zxdh_hw *hw, uint8_t status);
+
+ uint64_t (*get_features)(struct zxdh_hw *hw);
+ void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+};
+
+struct zxdh_hw_internal {
+ const struct zxdh_pci_ops *vtpci_ops;
+};
+
+#define VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].vtpci_ops)
+
+extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+extern const struct zxdh_pci_ops zxdh_dev_pci_ops;
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw);
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset,
+ void *dst, int32_t length);
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
+
+uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_PCI_H */
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
new file mode 100644
index 0000000000..d93705ed7b
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_QUEUE_H
+#define ZXDH_QUEUE_H
+
+#include <stdint.h>
+
+#include <rte_common.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_rxtx.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** ring descriptors: 16 bytes.
+ * These can chain together via "next".
+ **/
+struct vring_desc {
+ uint64_t addr; /* Address (guest-physical). */
+ uint32_t len; /* Length. */
+ uint16_t flags; /* The flags as indicated above. */
+ uint16_t next; /* We chain unused descriptors via this. */
+};
+
+struct vring_avail {
+ uint16_t flags;
+ uint16_t idx;
+ uint16_t ring[0];
+};
+
+struct vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+};
+
+struct vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+};
+
+struct vring_packed {
+ uint32_t num;
+ struct vring_packed_desc *desc;
+ struct vring_packed_desc_event *driver;
+ struct vring_packed_desc_event *device;
+};
+
+struct vq_desc_extra {
+ void *cookie;
+ uint16_t ndescs;
+ uint16_t next;
+};
+
+struct virtqueue {
+ struct zxdh_hw *hw; /**< zxdh_hw structure pointer. */
+ struct {
+ /**< vring keeping descs and events */
+ struct vring_packed ring;
+ uint8_t used_wrap_counter;
+ uint8_t rsv;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ uint16_t rsv1;
+ } __rte_packed vq_packed;
+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
+ uint16_t vq_nentries; /**< vring desc numbers */
+ uint16_t vq_free_cnt; /**< num of desc available */
+ uint16_t vq_avail_idx; /**< sync until needed */
+ uint16_t vq_free_thresh; /**< free threshold */
+ uint16_t rsv2;
+
+ void *vq_ring_virt_mem; /**< linear address of vring*/
+ uint32_t vq_ring_size;
+
+ union {
+ struct virtnet_rx rxq;
+ struct virtnet_tx txq;
+ };
+
+ /** < physical address of vring,
+ * or virtual address for virtio_user.
+ **/
+ rte_iova_t vq_ring_mem;
+
+ /**
+ * Head of the free chain in the descriptor table. If
+ * there are no free descriptors, this will be set to
+ * VQ_RING_DESC_CHAIN_END.
+ **/
+ uint16_t vq_desc_head_idx;
+ uint16_t vq_desc_tail_idx;
+ uint16_t vq_queue_index; /**< PCI queue index */
+ uint16_t offset; /**< relative offset to obtain addr in mbuf */
+ uint16_t *notify_addr;
+ struct rte_mbuf **sw_ring; /**< RX software ring. */
+ struct vq_desc_extra vq_descx[0];
+};
+
+#endif /* ZXDH_QUEUE_H */
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
new file mode 100644
index 0000000000..f12f58d84a
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_RXTX_H
+#define ZXDH_RXTX_H
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_mbuf_core.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct virtnet_stats {
+ uint64_t packets;
+ uint64_t bytes;
+ uint64_t errors;
+ uint64_t multicast;
+ uint64_t broadcast;
+ uint64_t truncated_err;
+ uint64_t size_bins[8]; /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
+};
+
+struct virtnet_rx {
+ struct virtqueue *vq;
+
+ /* dummy mbuf, for wraparound when processing RX ring. */
+ struct rte_mbuf fake_mbuf;
+
+ uint64_t mbuf_initializer; /* value to init mbufs. */
+ struct rte_mempool *mpool; /* mempool for mbuf allocation */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate RX ring. */
+};
+
+struct virtnet_tx {
+ struct virtqueue *vq;
+ const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */
+ rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate TX ring. */
+};
+
+#endif /* ZXDH_RXTX_H */
--
2.27.0
* [PATCH v7 4/9] net/zxdh: add msg chan and msg hwlock init
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (2 preceding siblings ...)
2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-22 12:20 ` [PATCH v7 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
` (4 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 9059 bytes --]
Add msg channel and hwlock init implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_ethdev.c | 15 ++++
drivers/net/zxdh/zxdh_ethdev.h | 1 +
drivers/net/zxdh/zxdh_msg.c | 160 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 67 ++++++++++++++
5 files changed, 244 insertions(+)
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 7db4e7bc71..2e0c8fddae 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -16,4 +16,5 @@ endif
sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
+ 'zxdh_msg.c',
)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index f34b2af7b3..6278fd61b6 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -9,6 +9,7 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
#include "zxdh_pci.h"
+#include "zxdh_msg.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
@@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret < 0)
goto err_zxdh_init;
+ ret = zxdh_msg_chan_init();
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to init bar msg chan");
+ goto err_zxdh_init;
+ }
+ hw->msg_chan_init = 1;
+
+ ret = zxdh_msg_chan_hwlock_init(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
+ zxdh_bar_msg_chan_exit();
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
return ret;
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index e2db2508e2..e937713d71 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -51,6 +51,7 @@ struct zxdh_hw {
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
uint8_t is_pf;
+ uint8_t msg_chan_init;
};
#ifdef __cplusplus
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
new file mode 100644
index 0000000000..51095da5a3
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_spinlock.h>
+#include <rte_cycles.h>
+#include <rte_malloc.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+
+#define ZXDH_REPS_INFO_FLAG_USABLE 0x00
+#define ZXDH_BAR_SEQID_NUM_MAX 256
+
+#define ZXDH_PCIEID_IS_PF_MASK (0x0800)
+#define ZXDH_PCIEID_PF_IDX_MASK (0x0700)
+#define ZXDH_PCIEID_VF_IDX_MASK (0x00ff)
+#define ZXDH_PCIEID_EP_IDX_MASK (0x7000)
+/* PCIEID bit field offset */
+#define ZXDH_PCIEID_PF_IDX_OFFSET (8)
+#define ZXDH_PCIEID_EP_IDX_OFFSET (12)
+
+#define ZXDH_MULTIPLY_BY_8(x) ((x) << 3)
+#define ZXDH_MULTIPLY_BY_32(x) ((x) << 5)
+#define ZXDH_MULTIPLY_BY_256(x) ((x) << 8)
+
+#define ZXDH_MAX_EP_NUM (4)
+#define ZXDH_MAX_HARD_SPINLOCK_NUM (511)
+
+#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000)
+#define ZXDH_FW_SHRD_OFFSET (0x5000)
+#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800)
+#define ZXDH_HW_LABEL_OFFSET \
+ (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT)
+
+struct dev_stat {
+ uint8_t is_mpf_scanned;
+ uint8_t is_res_init;
+ int16_t dev_cnt; /* probe cnt */
+};
+struct dev_stat g_dev_stat = {0};
+
+struct seqid_item {
+ void *reps_addr;
+ uint16_t id;
+ uint16_t buffer_len;
+ uint16_t flag;
+};
+
+struct seqid_ring {
+ uint16_t cur_id;
+ rte_spinlock_t lock;
+ struct seqid_item reps_info_tbl[ZXDH_BAR_SEQID_NUM_MAX];
+};
+struct seqid_ring g_seqid_ring = {0};
+
+static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
+{
+ uint16_t lock_id = 0;
+ uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET;
+ uint16_t ep_idx = (src_pcieid & ZXDH_PCIEID_EP_IDX_MASK) >> ZXDH_PCIEID_EP_IDX_OFFSET;
+
+ switch (dst) {
+ /* msg to risc */
+ case ZXDH_MSG_CHAN_END_RISC:
+ lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx;
+ break;
+ /* msg to pf/vf */
+ case ZXDH_MSG_CHAN_END_VF:
+ case ZXDH_MSG_CHAN_END_PF:
+ lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx +
+ ZXDH_MULTIPLY_BY_8(1 + ZXDH_MAX_EP_NUM);
+ break;
+ default:
+ lock_id = 0;
+ break;
+ }
+ if (lock_id >= ZXDH_MAX_HARD_SPINLOCK_NUM)
+ lock_id = 0;
+
+ return lock_id;
+}
+
+static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value)
+{
+ *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value;
+}
+
+static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data)
+{
+ *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
+}
+
+static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
+{
+ label_write((uint64_t)label_addr, virt_lock_id, 0);
+ spinlock_write(virt_addr, virt_lock_id, 0);
+ return 0;
+}
+
+/**
+ * Fun: PF init hard_spinlock addr
+ */
+static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
+{
+ int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC);
+
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + ZXDH_HW_LABEL_OFFSET);
+ lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF);
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + ZXDH_HW_LABEL_OFFSET);
+ return 0;
+}
+
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->is_pf)
+ return 0;
+ return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX]));
+}
+
+static rte_spinlock_t chan_lock;
+int zxdh_msg_chan_init(void)
+{
+ uint16_t seq_id = 0;
+
+ g_dev_stat.dev_cnt++;
+ if (g_dev_stat.is_res_init)
+ return ZXDH_BAR_MSG_OK;
+
+ rte_spinlock_init(&chan_lock);
+ g_seqid_ring.cur_id = 0;
+ rte_spinlock_init(&g_seqid_ring.lock);
+
+ for (seq_id = 0; seq_id < ZXDH_BAR_SEQID_NUM_MAX; seq_id++) {
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[seq_id];
+
+ reps_info->id = seq_id;
+ reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE;
+ }
+ g_dev_stat.is_res_init = true;
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_bar_msg_chan_exit(void)
+{
+ if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0))
+ return ZXDH_BAR_MSG_OK;
+
+ g_dev_stat.is_res_init = false;
+ return ZXDH_BAR_MSG_OK;
+}
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
new file mode 100644
index 0000000000..2caf2ddaea
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_MSG_H
+#define ZXDH_MSG_H
+
+#include <stdint.h>
+
+#include <ethdev_driver.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define ZXDH_BAR0_INDEX 0
+
+enum DRIVER_TYPE {
+ ZXDH_MSG_CHAN_END_MPF = 0,
+ ZXDH_MSG_CHAN_END_PF,
+ ZXDH_MSG_CHAN_END_VF,
+ ZXDH_MSG_CHAN_END_RISC,
+};
+
+enum BAR_MSG_RTN {
+ ZXDH_BAR_MSG_OK = 0,
+ ZXDH_BAR_MSG_ERR_MSGID,
+ ZXDH_BAR_MSG_ERR_NULL,
+ ZXDH_BAR_MSG_ERR_TYPE, /* Message type exception */
+ ZXDH_BAR_MSG_ERR_MODULE, /* Module ID exception */
+ ZXDH_BAR_MSG_ERR_BODY_NULL, /* Message body exception */
+ ZXDH_BAR_MSG_ERR_LEN, /* Message length exception */
+ ZXDH_BAR_MSG_ERR_TIME_OUT, /* Message sending timed out */
+ ZXDH_BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions*/
+ ZXDH_BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer*/
+ ZXDH_BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration*/
+ ZXDH_BAR_MSG_ERR_UNGISTER, /* Repeated deregistration*/
+ /**
+ * The sending interface parameter boundary structure pointer is empty
+ */
+ ZXDH_BAR_MSG_ERR_NULL_PARA,
+ ZXDH_BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short*/
+ /**
+ * Unable to find the corresponding message processing function for this module
+ */
+ ZXDH_BAR_MSG_ERR_MODULE_NOEXIST,
+ /**
+ * The virtual address in the parameters passed in by the sending interface is empty
+ */
+ ZXDH_BAR_MSG_ERR_VIRTADDR_NULL,
+ ZXDH_BAR_MSG_ERR_REPLY, /* sync msg resp_error */
+ ZXDH_BAR_MSG_ERR_MPF_NOT_SCANNED,
+ ZXDH_BAR_MSG_ERR_KERNEL_READY,
+ ZXDH_BAR_MSG_ERR_USR_RET_ERR,
+ ZXDH_BAR_MSG_ERR_ERR_PCIEID,
+ ZXDH_BAR_MSG_ERR_SOCKET, /* netlink socket err */
+};
+
+int zxdh_msg_chan_init(void);
+int zxdh_bar_msg_chan_exit(void);
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_MSG_H */
--
2.27.0
* [PATCH v7 5/9] net/zxdh: add msg chan enable implementation
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (3 preceding siblings ...)
2024-10-22 12:20 ` [PATCH v7 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-26 17:05 ` Thomas Monjalon
2024-10-22 12:20 ` [PATCH v7 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
` (3 subsequent siblings)
8 siblings, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 28635 bytes --]
Add msg chan enable implementation to support sending messages
to the backend (device side) to get device infos.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 6 +
drivers/net/zxdh/zxdh_ethdev.h | 12 +
drivers/net/zxdh/zxdh_msg.c | 659 ++++++++++++++++++++++++++++++++-
drivers/net/zxdh/zxdh_msg.h | 129 +++++++
4 files changed, 794 insertions(+), 12 deletions(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 6278fd61b6..2a9c1939c3 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_msg_chan_enable(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index e937713d71..4922f3d457 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -29,10 +29,22 @@ extern "C" {
#define ZXDH_RX_QUEUES_MAX 128U
#define ZXDH_TX_QUEUES_MAX 128U
+union virport_num {
+ uint16_t vport;
+ struct {
+ uint16_t vfid:8;
+ uint16_t pfid:3;
+ uint16_t vf_flag:1;
+ uint16_t epid:3;
+ uint16_t direct_flag:1;
+ };
+};
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ union virport_num vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
uint64_t host_features;
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 51095da5a3..0e6074022f 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -15,6 +15,8 @@
#include "zxdh_msg.h"
#define ZXDH_REPS_INFO_FLAG_USABLE 0x00
+#define ZXDH_REPS_INFO_FLAG_USED 0xa0
+
#define ZXDH_BAR_SEQID_NUM_MAX 256
#define ZXDH_PCIEID_IS_PF_MASK (0x0800)
@@ -32,12 +34,85 @@
#define ZXDH_MAX_EP_NUM (4)
#define ZXDH_MAX_HARD_SPINLOCK_NUM (511)
-#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000)
-#define ZXDH_FW_SHRD_OFFSET (0x5000)
-#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800)
+#define LOCK_PRIMARY_ID_MASK (0x8000)
+/* bar offset */
+#define ZXDH_BAR0_CHAN_RISC_OFFSET (0x2000)
+#define ZXDH_BAR0_CHAN_PFVF_OFFSET (0x3000)
+#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000)
+#define ZXDH_FW_SHRD_OFFSET (0x5000)
+#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800)
#define ZXDH_HW_LABEL_OFFSET \
(ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT)
+#define ZXDH_CHAN_RISC_SPINLOCK_OFFSET \
+ (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET)
+#define ZXDH_CHAN_PFVF_SPINLOCK_OFFSET \
+ (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET)
+#define ZXDH_CHAN_RISC_LABEL_OFFSET \
+ (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET)
+#define ZXDH_CHAN_PFVF_LABEL_OFFSET \
+ (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET)
+
+#define ZXDH_REPS_HEADER_LEN_OFFSET 1
+#define ZXDH_REPS_HEADER_PAYLOAD_OFFSET 4
+#define ZXDH_REPS_HEADER_REPLYED 0xff
+
+#define ZXDH_BAR_MSG_CHAN_USABLE 0
+#define ZXDH_BAR_MSG_CHAN_USED 1
+
+#define ZXDH_BAR_MSG_POL_MASK (0x10)
+#define ZXDH_BAR_MSG_POL_OFFSET (4)
+
+#define ZXDH_BAR_ALIGN_WORD_MASK 0xfffffffc
+#define ZXDH_BAR_MSG_VALID_MASK 1
+#define ZXDH_BAR_MSG_VALID_OFFSET 0
+
+#define ZXDH_BAR_PF_NUM 7
+#define ZXDH_BAR_VF_NUM 256
+#define ZXDH_BAR_INDEX_PF_TO_VF 0
+#define ZXDH_BAR_INDEX_MPF_TO_MPF 0xff
+#define ZXDH_BAR_INDEX_MPF_TO_PFVF 0
+#define ZXDH_BAR_INDEX_PFVF_TO_MPF 0
+
+#define ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES (1000)
+#define ZXDH_SPINLOCK_POLLING_SPAN_US (100)
+
+#define ZXDH_BAR_MSG_SRC_NUM 3
+#define ZXDH_BAR_MSG_SRC_MPF 0
+#define ZXDH_BAR_MSG_SRC_PF 1
+#define ZXDH_BAR_MSG_SRC_VF 2
+#define ZXDH_BAR_MSG_SRC_ERR 0xff
+#define ZXDH_BAR_MSG_DST_NUM 3
+#define ZXDH_BAR_MSG_DST_RISC 0
+#define ZXDH_BAR_MSG_DST_MPF 2
+#define ZXDH_BAR_MSG_DST_PFVF 1
+#define ZXDH_BAR_MSG_DST_ERR 0xff
+
+#define ZXDH_LOCK_TYPE_HARD (1)
+#define ZXDH_LOCK_TYPE_SOFT (0)
+#define ZXDH_BAR_INDEX_TO_RISC 0
+
+#define ZXDH_BAR_CHAN_INDEX_SEND 0
+#define ZXDH_BAR_CHAN_INDEX_RECV 1
+
+uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
+ {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND},
+ {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV},
+ {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV, ZXDH_BAR_CHAN_INDEX_RECV}
+};
+
+uint8_t chan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
+ {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_MPF_TO_PFVF, ZXDH_BAR_INDEX_MPF_TO_MPF},
+ {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF},
+ {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF}
+};
+
+uint8_t lock_type_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
+ {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD},
+ {ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_HARD},
+ {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD}
+};
+
struct dev_stat {
uint8_t is_mpf_scanned;
uint8_t is_res_init;
@@ -96,6 +171,11 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da
*(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
}
+static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id)
+{
+ return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id);
+}
+
static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
{
label_write((uint64_t)label_addr, virt_lock_id, 0);
@@ -103,6 +183,28 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u
return 0;
}
+static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr,
+ uint64_t label_addr, uint16_t primary_id)
+{
+ uint32_t lock_rd_cnt = 0;
+
+ do {
+ /* read to lock */
+ uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id);
+
+ if (spl_val == 0) {
+ label_write((uint64_t)label_addr, virt_lock_id, primary_id);
+ break;
+ }
+ rte_delay_us_block(ZXDH_SPINLOCK_POLLING_SPAN_US);
+ lock_rd_cnt++;
+ } while (lock_rd_cnt < ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES);
+ if (lock_rd_cnt >= ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES)
+ return -1;
+
+ return 0;
+}
+
/**
* Fun: PF init hard_spinlock addr
*/
@@ -118,15 +220,6 @@ static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
return 0;
}
-int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
-{
- struct zxdh_hw *hw = dev->data->dev_private;
-
- if (!hw->is_pf)
- return 0;
- return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX]));
-}
-
static rte_spinlock_t chan_lock;
int zxdh_msg_chan_init(void)
{
@@ -158,3 +251,545 @@ int zxdh_bar_msg_chan_exit(void)
g_dev_stat.is_res_init = false;
return ZXDH_BAR_MSG_OK;
}
+
+static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid)
+{
+ struct seqid_item *seqid_reps_info = NULL;
+
+ rte_spinlock_lock(&g_seqid_ring.lock);
+ uint16_t g_id = g_seqid_ring.cur_id;
+ uint16_t count = 0;
+ int rc = 0;
+
+ do {
+ count++;
+ ++g_id;
+ g_id %= ZXDH_BAR_SEQID_NUM_MAX;
+ seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id];
+ } while ((seqid_reps_info->flag != ZXDH_REPS_INFO_FLAG_USABLE) &&
+ (count < ZXDH_BAR_SEQID_NUM_MAX));
+
+ if (count >= ZXDH_BAR_SEQID_NUM_MAX) {
+ rc = -1;
+ goto out;
+ }
+ seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USED;
+ g_seqid_ring.cur_id = g_id;
+ *msgid = g_id;
+ rc = ZXDH_BAR_MSG_OK;
+
+out:
+ rte_spinlock_unlock(&g_seqid_ring.lock);
+ return rc;
+}
+
+static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id)
+{
+ int ret = zxdh_bar_chan_msgid_allocate(msg_id);
+
+ if (ret != ZXDH_BAR_MSG_OK)
+ return ZXDH_BAR_MSG_ERR_MSGID;
+
+ PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id);
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id];
+
+ reps_info->reps_addr = result->recv_buffer;
+ reps_info->buffer_len = result->buffer_len;
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src)
+{
+ uint8_t src_index = 0;
+
+ switch (src) {
+ case ZXDH_MSG_CHAN_END_MPF:
+ src_index = ZXDH_BAR_MSG_SRC_MPF;
+ break;
+ case ZXDH_MSG_CHAN_END_PF:
+ src_index = ZXDH_BAR_MSG_SRC_PF;
+ break;
+ case ZXDH_MSG_CHAN_END_VF:
+ src_index = ZXDH_BAR_MSG_SRC_VF;
+ break;
+ default:
+ src_index = ZXDH_BAR_MSG_SRC_ERR;
+ break;
+ }
+ return src_index;
+}
+
+static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst)
+{
+ uint8_t dst_index = 0;
+
+ switch (dst) {
+ case ZXDH_MSG_CHAN_END_MPF:
+ dst_index = ZXDH_BAR_MSG_DST_MPF;
+ break;
+ case ZXDH_MSG_CHAN_END_PF:
+ dst_index = ZXDH_BAR_MSG_DST_PFVF;
+ break;
+ case ZXDH_MSG_CHAN_END_VF:
+ dst_index = ZXDH_BAR_MSG_DST_PFVF;
+ break;
+ case ZXDH_MSG_CHAN_END_RISC:
+ dst_index = ZXDH_BAR_MSG_DST_RISC;
+ break;
+ default:
+ dst_index = ZXDH_BAR_MSG_DST_ERR;
+ break;
+ }
+ return dst_index;
+}
+
+static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result)
+{
+ uint8_t src_index = 0;
+ uint8_t dst_index = 0;
+
+ if (in == NULL || result == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null para.");
+ return ZXDH_BAR_MSG_ERR_NULL_PARA;
+ }
+ src_index = zxdh_bar_msg_src_index_trans(in->src);
+ dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+
+ if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist.");
+ return ZXDH_BAR_MSG_ERR_TYPE;
+ }
+ if (in->module_id >= ZXDH_BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id);
+ return ZXDH_BAR_MSG_ERR_MODULE;
+ }
+ if (in->payload_addr == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null message.");
+ return ZXDH_BAR_MSG_ERR_BODY_NULL;
+ }
+ if (in->payload_len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len);
+ return ZXDH_BAR_MSG_ERR_LEN;
+ }
+ if (in->virt_addr == 0 || result->recv_buffer == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL.");
+ return ZXDH_BAR_MSG_ERR_VIRTADDR_NULL;
+ }
+ if (result->buffer_len < ZXDH_REPS_HEADER_PAYLOAD_OFFSET)
+ PMD_MSG_LOG(ERR, "recv buffer len is short than minimal 4 bytes");
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id)
+{
+ return virt_addr + (2 * chan_id + subchan_id) * ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL;
+}
+
+static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+ uint16_t chan_id = chan_id_tbl[src_index][dst_index];
+ uint16_t subchan_id = subchan_id_tbl[src_index][dst_index];
+
+ *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id);
+ return ZXDH_BAR_MSG_OK;
+}
+
+static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid);
+ if (dst == ZXDH_MSG_CHAN_END_RISC)
+ ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET,
+ src_pcieid | LOCK_PRIMARY_ID_MASK);
+ else
+ ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET,
+ src_pcieid | LOCK_PRIMARY_ID_MASK);
+
+ return ret;
+}
+
+static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid);
+ if (dst == ZXDH_MSG_CHAN_END_RISC)
+ zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET);
+ else
+ zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET);
+}
+
+static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist.");
+ return ZXDH_BAR_MSG_ERR_TYPE;
+ }
+
+ ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr);
+ if (ret != 0)
+ PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid);
+
+ return ret;
+}
+
+static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist.");
+ return ZXDH_BAR_MSG_ERR_TYPE;
+ }
+
+ zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr);
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static void zxdh_bar_chan_msgid_free(uint16_t msg_id)
+{
+ struct seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ rte_spinlock_lock(&g_seqid_ring.lock);
+ seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE;
+ PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id);
+ rte_spinlock_unlock(&g_seqid_ring.lock);
+}
+
+static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data)
+{
+ uint32_t align_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "align_offset exceeds channel size!");
+ return -1;
+ }
+ *(uint32_t *)(subchan_addr + align_offset) = data;
+ return 0;
+}
+
+static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata)
+{
+ uint32_t align_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "align_offset exceeds channel size!");
+ return -1;
+ }
+ *pdata = *(uint32_t *)(subchan_addr + align_offset);
+ return 0;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr,
+ struct bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (ZXDH_BAR_MSG_PAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx));
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_get(uint64_t subchan_addr,
+ struct bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (ZXDH_BAR_MSG_PAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx);
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_write(subchan_addr, 4 * ix +
+ ZXDH_BAR_MSG_PAYLOAD_OFFSET, *(data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ for (ix = 0; ix < remain; ix++)
+ remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix);
+
+ zxdh_bar_chan_reg_write(subchan_addr, 4 * count +
+ ZXDH_BAR_MSG_PAYLOAD_OFFSET, remain_data);
+ }
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * ix +
+ ZXDH_BAR_MSG_PAYLOAD_OFFSET, (data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * count +
+ ZXDH_BAR_MSG_PLAYLOAD_OFFSET, &remain_data);
+ for (ix = 0; ix < remain; ix++)
+ *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix);
+ }
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data);
+ data &= (~ZXDH_BAR_MSG_VALID_MASK);
+ data |= (uint32_t)valid_label;
+ zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data);
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint8_t temp_msg[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL];
+static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr,
+ uint16_t payload_len, struct bar_msg_header *msg_header)
+{
+ uint16_t ret = 0;
+ ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header);
+
+ ret = zxdh_bar_chan_msg_header_get(subchan_addr,
+ (struct bar_msg_header *)temp_msg);
+
+ ret = zxdh_bar_chan_msg_payload_set(subchan_addr,
+ (uint8_t *)(payload_addr), payload_len);
+
+ ret = zxdh_bar_chan_msg_payload_get(subchan_addr,
+ temp_msg, payload_len);
+
+ ret = zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USED);
+ return ret;
+}
+
+static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data);
+ if (ZXDH_BAR_MSG_CHAN_USABLE == (data & ZXDH_BAR_MSG_VALID_MASK))
+ return ZXDH_BAR_MSG_CHAN_USABLE;
+
+ return ZXDH_BAR_MSG_CHAN_USED;
+}
+
+static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data);
+ data &= (~(uint32_t)ZXDH_BAR_MSG_POL_MASK);
+ data |= ((uint32_t)label << ZXDH_BAR_MSG_POL_OFFSET);
+ zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data);
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr,
+ uint64_t recv_buffer, uint16_t buffer_len)
+{
+ struct bar_msg_header msg_header = {0};
+ uint16_t msg_id = 0;
+ uint16_t msg_len = 0;
+
+ zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header);
+ msg_id = msg_header.msg_id;
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id);
+ return ZXDH_BAR_MSG_ERR_REPLY;
+ }
+ msg_len = msg_header.len;
+
+ if (msg_len > buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u",
+ buffer_len, msg_len + 4);
+ return ZXDH_BAR_MSG_ERR_REPSBUFF_LEN;
+ }
+ uint8_t *recv_msg = (uint8_t *)recv_buffer;
+
+ zxdh_bar_chan_msg_payload_get(subchan_addr,
+ recv_msg + ZXDH_REPS_HEADER_PAYLOAD_OFFSET, msg_len);
+ *(uint16_t *)(recv_msg + ZXDH_REPS_HEADER_LEN_OFFSET) = msg_len;
+ *recv_msg = ZXDH_REPS_HEADER_REPLYED; /* set reps's valid */
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result)
+{
+ struct bar_msg_header msg_header = {0};
+ uint16_t seq_id = 0;
+ uint64_t subchan_addr = 0;
+ uint32_t time_out_cnt = 0;
+ uint16_t valid = 0;
+ int ret = 0;
+
+ ret = zxdh_bar_chan_send_para_check(in, result);
+ if (ret != ZXDH_BAR_MSG_OK)
+ goto exit;
+
+ ret = zxdh_bar_chan_save_recv_info(result, &seq_id);
+ if (ret != ZXDH_BAR_MSG_OK)
+ goto exit;
+
+ zxdh_bar_chan_subchan_addr_get(in, &subchan_addr);
+
+ msg_header.sync = ZXDH_BAR_CHAN_MSG_SYNC;
+ msg_header.emec = in->emec;
+ msg_header.usr = 0;
+ msg_header.rsv = 0;
+ msg_header.module_id = in->module_id;
+ msg_header.len = in->payload_len;
+ msg_header.msg_id = seq_id;
+ msg_header.src_pcieid = in->src_pcieid;
+ msg_header.dst_pcieid = in->dst_pcieid;
+
+ ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+ if (ret != ZXDH_BAR_MSG_OK) {
+ zxdh_bar_chan_msgid_free(seq_id);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header);
+
+ do {
+ rte_delay_us_block(ZXDH_BAR_MSG_POLLING_SPAN);
+ valid = zxdh_bar_msg_valid_stat_get(subchan_addr);
+ ++time_out_cnt;
+ } while ((time_out_cnt < ZXDH_BAR_MSG_TIMEOUT_TH) && (valid == ZXDH_BAR_MSG_CHAN_USED));
+
+ if (time_out_cnt == ZXDH_BAR_MSG_TIMEOUT_TH && valid != ZXDH_BAR_MSG_CHAN_USABLE) {
+ zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USABLE);
+ zxdh_bar_chan_msg_poltag_set(subchan_addr, 0);
+ PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out.");
+ ret = ZXDH_BAR_MSG_ERR_TIME_OUT;
+ } else {
+ ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr,
+ (uint64_t)result->recv_buffer, result->buffer_len);
+ }
+ zxdh_bar_chan_msgid_free(seq_id);
+ zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+
+exit:
+ return ret;
+}
+
+static int bar_get_sum(uint8_t *ptr, uint8_t len)
+{
+ uint64_t sum = 0;
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ sum += *(ptr + idx);
+
+ return (uint16_t)sum;
+}
+
+static int zxdh_bar_chan_enable(struct msix_para *para, uint16_t *vport)
+{
+ struct bar_recv_msg recv_msg = {0};
+ int ret = 0;
+ int check_token = 0;
+ int sum_res = 0;
+
+ if (!para)
+ return ZXDH_BAR_MSG_ERR_NULL;
+
+ struct msix_msg msix_msg = {
+ .pcie_id = para->pcie_id,
+ .vector_risc = para->vector_risc,
+ .vector_pfvf = para->vector_pfvf,
+ .vector_mpf = para->vector_mpf,
+ };
+ struct zxdh_pci_bar_msg in = {
+ .virt_addr = para->virt_addr,
+ .payload_addr = &msix_msg,
+ .payload_len = sizeof(msix_msg),
+ .emec = 0,
+ .src = para->driver_type,
+ .dst = ZXDH_MSG_CHAN_END_RISC,
+ .module_id = ZXDH_BAR_MODULE_MISX,
+ .src_pcieid = para->pcie_id,
+ .dst_pcieid = 0,
+ .usr = 0,
+ };
+
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = &recv_msg,
+ .buffer_len = sizeof(recv_msg),
+ };
+
+ ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+ if (ret != ZXDH_BAR_MSG_OK)
+ return -ret;
+
+ check_token = recv_msg.msix_reps.check;
+ sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg));
+
+ if (check_token != sum_res) {
+ PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token);
+ return ZXDH_BAR_MSG_ERR_REPLY;
+ }
+ *vport = recv_msg.msix_reps.vport;
+ PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", para->pcie_id);
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct msix_para msix_info = {
+ .vector_risc = ZXDH_MSIX_FROM_RISCV,
+ .vector_pfvf = ZXDH_MSIX_FROM_PFVF,
+ .vector_mpf = ZXDH_MSIX_FROM_MPF,
+ .pcie_id = hw->pcie_id,
+ .driver_type = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF,
+ .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET),
+ };
+
+ return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport);
+}
+
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->is_pf)
+ return 0;
+ return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX]));
+}
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index 2caf2ddaea..aa91f499ef 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -14,6 +14,21 @@ extern "C" {
#endif
#define ZXDH_BAR0_INDEX 0
+#define ZXDH_CTRLCH_OFFSET (0x2000)
+
+#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+
+#define ZXDH_BAR_MSG_POLLING_SPAN 100
+#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
+#define ZXDH_BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
+#define ZXDH_BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
+
+#define ZXDH_BAR_CHAN_MSG_SYNC 0
+
+#define ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */
+#define ZXDH_BAR_MSG_PAYLOAD_OFFSET (sizeof(struct bar_msg_header))
+#define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \
+ (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header))
enum DRIVER_TYPE {
ZXDH_MSG_CHAN_END_MPF = 0,
@@ -22,6 +37,13 @@ enum DRIVER_TYPE {
ZXDH_MSG_CHAN_END_RISC,
};
+enum MSG_VEC {
+ ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
+ ZXDH_MSIX_FROM_MPF,
+ ZXDH_MSIX_FROM_RISCV,
+ ZXDH_MSG_VEC_NUM,
+};
+
enum BAR_MSG_RTN {
ZXDH_BAR_MSG_OK = 0,
ZXDH_BAR_MSG_ERR_MSGID,
@@ -56,10 +78,117 @@ enum BAR_MSG_RTN {
ZXDH_BAR_MSG_ERR_SOCKET, /* netlink socket err */
};
+enum bar_module_id {
+ ZXDH_BAR_MODULE_DBG = 0, /* 0: debug */
+ ZXDH_BAR_MODULE_TBL, /* 1: resource table */
+ ZXDH_BAR_MODULE_MISX, /* 2: config msix */
+ ZXDH_BAR_MODULE_SDA, /* 3: */
+ ZXDH_BAR_MODULE_RDMA, /* 4: */
+ ZXDH_BAR_MODULE_DEMO, /* 5: channel test */
+ ZXDH_BAR_MODULE_SMMU, /* 6: */
+ ZXDH_BAR_MODULE_MAC, /* 7: mac rx/tx stats */
+ ZXDH_BAR_MODULE_VDPA, /* 8: vdpa live migration */
+ ZXDH_BAR_MODULE_VQM, /* 9: vqm live migration */
+ ZXDH_BAR_MODULE_NP, /* 10: vf msg callback np */
+ ZXDH_BAR_MODULE_VPORT, /* 11: get vport */
+ ZXDH_BAR_MODULE_BDF, /* 12: get bdf */
+ ZXDH_BAR_MODULE_RISC_READY, /* 13: */
+ ZXDH_BAR_MODULE_REVERSE, /* 14: byte stream reverse */
+ ZXDH_BAR_MDOULE_NVME, /* 15: */
+ ZXDH_BAR_MDOULE_NPSDK, /* 16: */
+ ZXDH_BAR_MODULE_NP_TODO, /* 17: */
+ ZXDH_MODULE_BAR_MSG_TO_PF, /* 18: */
+ ZXDH_MODULE_BAR_MSG_TO_VF, /* 19: */
+
+ ZXDH_MODULE_FLASH = 32,
+ ZXDH_BAR_MODULE_OFFSET_GET = 33,
+ ZXDH_BAR_EVENT_OVS_WITH_VCB = 36,
+
+ ZXDH_BAR_MSG_MODULE_NUM = 100,
+};
+
+struct msix_para {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+ uint64_t virt_addr;
+ uint16_t driver_type; /* refer to DRIVER_TYPE */
+};
+
+struct msix_msg {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+};
+
+struct zxdh_pci_bar_msg {
+ uint64_t virt_addr; /* bar addr */
+ void *payload_addr;
+ uint16_t payload_len;
+ uint16_t emec;
+ uint16_t src; /* refer to BAR_DRIVER_TYPE */
+ uint16_t dst; /* refer to BAR_DRIVER_TYPE */
+ uint16_t module_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid;
+ uint16_t usr;
+};
+
+struct bar_msix_reps {
+ uint16_t pcie_id;
+ uint16_t check;
+ uint16_t vport;
+ uint16_t rsv;
+} __rte_packed;
+
+struct bar_offset_reps {
+ uint16_t check;
+ uint16_t rsv;
+ uint32_t offset;
+ uint32_t length;
+} __rte_packed;
+
+struct bar_recv_msg {
+ uint8_t reps_ok;
+ uint16_t reps_len;
+ uint8_t rsv;
+ union {
+ struct bar_msix_reps msix_reps;
+ struct bar_offset_reps offset_reps;
+ } __rte_packed;
+} __rte_packed;
+
+struct zxdh_msg_recviver_mem {
+ void *recv_buffer; /* first 4B is head, followed by payload */
+ uint64_t buffer_len;
+};
+
+struct bar_msg_header {
+ uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */
+ uint8_t sync : 1;
+ uint8_t emec : 1; /* emergency */
+ uint8_t ack : 1; /* ack msg */
+ uint8_t poll : 1;
+ uint8_t usr : 1;
+ uint8_t rsv;
+ uint16_t module_id;
+ uint16_t len;
+ uint16_t msg_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid; /* used in PF-->VF */
+};
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result);
+
#ifdef __cplusplus
}
#endif
--
2.27.0
* [PATCH v7 6/9] net/zxdh: add zxdh get device backend infos
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (4 preceding siblings ...)
2024-10-22 12:20 ` [PATCH v7 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-22 12:20 ` [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
` (2 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 13359 bytes --]
Add zxdh get device backend infos,
using the msg chan to send the query messages.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_common.c | 249 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_common.h | 30 ++++
drivers/net/zxdh/zxdh_ethdev.c | 35 +++++
drivers/net/zxdh/zxdh_ethdev.h | 5 +
drivers/net/zxdh/zxdh_msg.h | 24 +++-
drivers/net/zxdh/zxdh_queue.h | 4 +
drivers/net/zxdh/zxdh_rxtx.h | 4 +
8 files changed, 351 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 2e0c8fddae..a16db47f89 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -17,4 +17,5 @@ sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
'zxdh_msg.c',
+ 'zxdh_common.c',
)
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
new file mode 100644
index 0000000000..21a0cd72cf
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <string.h>
+
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+#include "zxdh_common.h"
+
+#define ZXDH_MSG_RSP_SIZE_MAX 512
+
+#define ZXDH_COMMON_TABLE_READ 0
+#define ZXDH_COMMON_TABLE_WRITE 1
+
+#define ZXDH_COMMON_FIELD_PHYPORT 6
+
+#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2)
+
+#define ZXDH_REPS_HEADER_OFFSET 4
+#define ZXDH_TBL_MSG_PRO_SUCCESS 0xaa
+
+struct zxdh_common_msg {
+ uint8_t type; /* 0:read table 1:write table */
+ uint8_t field;
+ uint16_t pcie_id;
+ uint16_t slen; /* Data length for write table */
+ uint16_t reserved;
+} __rte_packed;
+
+struct zxdh_common_rsp_hdr {
+ uint8_t rsp_status;
+ uint16_t rsp_len;
+ uint8_t reserved;
+ uint8_t payload_status;
+ uint8_t rsv;
+ uint16_t payload_len;
+} __rte_packed;
+
+struct tbl_msg_header {
+ uint8_t type; /* r/w */
+ uint8_t field;
+ uint16_t pcieid;
+ uint16_t slen;
+ uint16_t rsv;
+};
+struct tbl_msg_reps_header {
+ uint8_t check;
+ uint8_t rsv;
+ uint16_t len;
+};
+
+static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ uint8_t type,
+ uint8_t field,
+ void *buff,
+ uint16_t buff_size)
+{
+ uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size;
+
+ desc->payload_addr = rte_zmalloc(NULL, msg_len, 0);
+ if (unlikely(desc->payload_addr == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate msg_data");
+ return -ENOMEM;
+ }
+ memset(desc->payload_addr, 0, msg_len);
+ desc->payload_len = msg_len;
+ struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr;
+
+ msg_data->type = type;
+ msg_data->field = field;
+ msg_data->pcie_id = hw->pcie_id;
+ msg_data->slen = buff_size;
+ if (buff_size != 0)
+ rte_memcpy(msg_data + 1, buff, buff_size);
+
+ return 0;
+}
+
+static int32_t zxdh_send_command(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ enum bar_module_id module_id,
+ struct zxdh_msg_recviver_mem *msg_rsp)
+{
+ desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+ desc->src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF;
+ desc->dst = ZXDH_MSG_CHAN_END_RISC;
+ desc->module_id = module_id;
+ desc->src_pcieid = hw->pcie_id;
+
+ msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX;
+ msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0);
+ if (unlikely(msg_rsp->recv_buffer == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate messages response");
+ return -ENOMEM;
+ }
+
+ if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != ZXDH_BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response");
+ rte_free(msg_rsp->recv_buffer);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp,
+ void *buff, uint16_t len)
+{
+ struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer;
+
+ if (rsp_hdr->payload_status != 0xaa || rsp_hdr->payload_len != len) {
+ PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d",
+ rsp_hdr->payload_status, rsp_hdr->payload_len);
+ return -1;
+ }
+ if (len != 0)
+ rte_memcpy(buff, rsp_hdr + 1, len);
+
+ return 0;
+}
+
+static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_msg_recviver_mem msg_rsp;
+ struct zxdh_pci_bar_msg desc;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size);
+ if (ret != 0)
+ goto free_rsp_data;
+
+free_rsp_data:
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT,
+ (void *)phyport, sizeof(*phyport));
+ return ret;
+}
+
+static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ param->pcie_id = hw->pcie_id;
+ param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET;
+ param->src_type = ZXDH_BAR_MODULE_TBL;
+}
+
+static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len)
+{
+ if (!res || !dev)
+ return ZXDH_BAR_MSG_ERR_NULL;
+
+ struct tbl_msg_header tbl_msg = {
+ .type = ZXDH_TBL_TYPE_READ,
+ .field = field,
+ .pcieid = dev->pcie_id,
+ .slen = 0,
+ .rsv = 0,
+ };
+
+ struct zxdh_pci_bar_msg in = {0};
+
+ in.virt_addr = dev->virt_addr;
+ in.payload_addr = &tbl_msg;
+ in.payload_len = sizeof(tbl_msg);
+ in.src = dev->src_type;
+ in.dst = ZXDH_MSG_CHAN_END_RISC;
+ in.module_id = ZXDH_BAR_MODULE_TBL;
+ in.src_pcieid = dev->pcie_id;
+
+ uint8_t recv_buf[ZXDH_RSC_TBL_CONTENT_LEN_MAX + 8] = {0};
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = recv_buf,
+ .buffer_len = sizeof(recv_buf),
+ };
+ int ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+
+ if (ret != ZXDH_BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR,
+ "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+ return ret;
+ }
+ struct tbl_msg_reps_header *tbl_reps =
+ (struct tbl_msg_reps_header *)(recv_buf + ZXDH_REPS_HEADER_OFFSET);
+
+ if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+ return ret;
+ }
+ *len = tbl_reps->len;
+ rte_memcpy(res,
+ (recv_buf + ZXDH_REPS_HEADER_OFFSET + sizeof(struct tbl_msg_reps_header)), *len);
+ return ret;
+}
+
+static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id)
+{
+ uint8_t reps = 0;
+ uint16_t reps_len = 0;
+
+ if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, &reps, &reps_len) != ZXDH_BAR_MSG_OK)
+ return -1;
+
+ *panel_id = reps;
+ return ZXDH_BAR_MSG_OK;
+}
+
+int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid)
+{
+ struct zxdh_res_para param;
+
+ zxdh_fill_res_para(dev, &param);
+ int32_t ret = zxdh_get_res_panel_id(&param, pannelid);
+ return ret;
+}
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
new file mode 100644
index 0000000000..f098ae4cf9
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_COMMON_H
+#define ZXDH_COMMON_H
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct zxdh_res_para {
+ uint64_t virt_addr;
+ uint16_t pcie_id;
+ uint16_t src_type; /* refer to BAR_DRIVER_TYPE */
+};
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
+int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_COMMON_H */
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 2a9c1939c3..ea9253dd2a 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -10,9 +10,21 @@
#include "zxdh_logs.h"
#include "zxdh_pci.h"
#include "zxdh_msg.h"
+#include "zxdh_common.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+uint16_t vport_to_vfid(union virport_num v)
+{
+ /* epid > 4 is local soft queue. return 1192 */
+ if (v.epid > 4)
+ return 1192;
+ if (v.vf_flag)
+ return v.epid * 256 + v.vfid;
+ else
+ return (v.epid * 8 + v.pfid) + 1152;
+}
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
return ret;
}
+static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw)
+{
+ if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get phyport");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport);
+
+ hw->vfid = vport_to_vfid(hw->vport);
+
+ if (zxdh_pannelid_get(eth_dev, &hw->panel_id) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get panel_id");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id);
+
+ return 0;
+}
+
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_agent_comm(eth_dev, hw);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 4922f3d457..bf495a083d 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -56,6 +56,7 @@ struct zxdh_hw {
uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint16_t vfid;
uint8_t *isr;
uint8_t weak_barriers;
@@ -63,9 +64,13 @@ struct zxdh_hw {
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
uint8_t is_pf;
+ uint8_t phyport;
+ uint8_t panel_id;
uint8_t msg_chan_init;
};
+uint16_t vport_to_vfid(union virport_num v);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index aa91f499ef..a3fca395c1 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -16,7 +16,7 @@ extern "C" {
#define ZXDH_BAR0_INDEX 0
#define ZXDH_CTRLCH_OFFSET (0x2000)
-#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
#define ZXDH_BAR_MSG_POLLING_SPAN 100
#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
@@ -107,6 +107,27 @@ enum bar_module_id {
ZXDH_BAR_MSG_MODULE_NUM = 100,
};
+enum RES_TBL_FILED {
+ ZXDH_TBL_FIELD_PCIEID = 0,
+ ZXDH_TBL_FIELD_BDF = 1,
+ ZXDH_TBL_FIELD_MSGCH = 2,
+ ZXDH_TBL_FIELD_DATACH = 3,
+ ZXDH_TBL_FIELD_VPORT = 4,
+ ZXDH_TBL_FIELD_PNLID = 5,
+ ZXDH_TBL_FIELD_PHYPORT = 6,
+ ZXDH_TBL_FIELD_SERDES_NUM = 7,
+ ZXDH_TBL_FIELD_NP_PORT = 8,
+ ZXDH_TBL_FIELD_SPEED = 9,
+ ZXDH_TBL_FIELD_HASHID = 10,
+ ZXDH_TBL_FIELD_NON,
+};
+
+enum TBL_MSG_TYPE {
+ ZXDH_TBL_TYPE_READ,
+ ZXDH_TBL_TYPE_WRITE,
+ ZXDH_TBL_TYPE_NON,
+};
+
struct msix_para {
uint16_t pcie_id;
uint16_t vector_risc;
@@ -181,6 +202,7 @@ struct bar_msg_header {
uint16_t dst_pcieid; /* used in PF-->VF */
};
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
index d93705ed7b..7b48f4884b 100644
--- a/drivers/net/zxdh/zxdh_queue.h
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -102,4 +102,8 @@ struct virtqueue {
struct vq_desc_extra vq_descx[0];
};
+#ifdef __cplusplus
+}
+#endif
+
#endif /* ZXDH_QUEUE_H */
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
index f12f58d84a..5372b910fc 100644
--- a/drivers/net/zxdh/zxdh_rxtx.h
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -48,4 +48,8 @@ struct virtnet_tx {
const struct rte_memzone *mz; /* mem zone to populate TX ring. */
};
+#ifdef __cplusplus
+}
+#endif
+
#endif /* ZXDH_RXTX_H */
--
2.27.0
* [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (5 preceding siblings ...)
2024-10-22 12:20 ` [PATCH v7 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-27 17:07 ` Stephen Hemminger
2024-10-22 12:20 ` [PATCH v7 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
8 siblings, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 24760 bytes --]
Configure zxdh interrupts, including risc and dtb interrupts,
and add interrupt release.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 301 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 8 +
drivers/net/zxdh/zxdh_msg.c | 187 ++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 10 ++
drivers/net/zxdh/zxdh_pci.c | 62 +++++++
drivers/net/zxdh/zxdh_pci.h | 12 ++
6 files changed, 580 insertions(+)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index ea9253dd2a..cb8c85941a 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -25,6 +25,302 @@ uint16_t vport_to_vfid(union virport_num v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR);
+ }
+}
+
+static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (rte_intr_ack(dev->intr_handle) < 0)
+ return -1;
+
+ hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev));
+
+ return 0;
+}
+
+static void zxdh_devconf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+
+ if (zxdh_intr_unmask(dev) < 0)
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_fromriscv_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = 0;
+
+ virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_riscvf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_frompfvf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = 0;
+
+ virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET);
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_VF, ZXDH_MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_PF, ZXDH_MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+static void zxdh_intr_cb_reg(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ /* register callback to update dev config intr */
+ rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Register risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev);
+
+ tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ /* unregister callback that updates dev config intr */
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Unregister risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev);
+ tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static int32_t zxdh_intr_disable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled)
+ return 0;
+
+ zxdh_intr_cb_unreg(dev);
+ if (rte_intr_disable(dev->intr_handle) < 0)
+ return -1;
+
+ hw->intr_enabled = 0;
+ return 0;
+}
+
+static int32_t zxdh_intr_enable(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled) {
+ zxdh_intr_cb_reg(dev);
+ ret = rte_intr_enable(dev->intr_handle);
+ if (unlikely(ret))
+ PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name);
+
+ hw->intr_enabled = 1;
+ }
+ return ret;
+}
+
+static int32_t zxdh_intr_release(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR);
+
+ zxdh_queues_unbind_intr(dev);
+ zxdh_intr_disable(dev);
+
+ rte_intr_efd_disable(dev->intr_handle);
+ rte_intr_vec_list_free(dev->intr_handle);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return 0;
+}
+
+static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint8_t i;
+
+ if (!hw->risc_intr) {
+ PMD_INIT_LOG(ERR, " to allocate risc_intr");
+ hw->risc_intr = rte_zmalloc("risc_intr",
+ ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0);
+ if (hw->risc_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate risc_intr");
+ return -ENOMEM;
+ }
+ }
+
+ for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) {
+ if (dev->intr_handle->efds[i] < 0) {
+ PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ return -1;
+ }
+
+ struct rte_intr_handle *intr_handle = hw->risc_intr + i;
+
+ intr_handle->fd = dev->intr_handle->efds[i];
+ intr_handle->type = dev->intr_handle->type;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->dtb_intr) {
+ hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0);
+ if (hw->dtb_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr");
+ return -ENOMEM;
+ }
+ }
+
+ if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) {
+ PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1);
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return -1;
+ }
+ hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1];
+ hw->dtb_intr->type = dev->intr_handle->type;
+ return 0;
+}
+
+static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+ uint16_t vec;
+
+ if (!dev->data->dev_conf.intr_conf.rxq) {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ i * 2, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ } else {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d",
+ i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec);
+ }
+ }
+ /* mask all txq intr */
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ vec = VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ return 0;
+}
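The loops above rely on the driver's fixed vq layout: Rx queue i lives at hw->vqs[2 * i] and its Tx sibling at hw->vqs[2 * i + 1], so only Rx queues get dedicated vectors while every Tx queue stays masked on ZXDH_MSI_NO_VECTOR. A minimal sketch of that mapping (helper names are illustrative, not part of the patch):

static inline uint16_t zxdh_rxq_to_vq_idx(uint16_t rxq_id)
{
	return rxq_id * 2;              /* Rx queues sit at even vq slots */
}

static inline uint16_t zxdh_txq_to_vq_idx(uint16_t txq_id)
{
	return txq_id * 2 + 1;          /* Tx queues sit at odd vq slots */
}

static inline uint16_t zxdh_rxq_intr_vec(uint16_t rxq_id)
{
	/* per-Rx-queue vectors follow the message and DTB vectors */
	return rxq_id + ZXDH_QUEUE_INTR_VEC_BASE;
}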
+
+static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t ret = 0;
+
+ if (!rte_intr_cap_multiple(dev->intr_handle)) {
+ PMD_INIT_LOG(ERR, "Multiple intr vector not supported");
+ return -ENOTSUP;
+ }
+ zxdh_intr_release(dev);
+ uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM;
+
+ if (dev->data->dev_conf.intr_conf.rxq)
+ nb_efd += dev->data->nb_rx_queues;
+
+ if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) {
+ PMD_INIT_LOG(ERR, "Fail to create eventfd");
+ return -1;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM);
+ return -ENOMEM;
+ }
+ PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size);
+ if (zxdh_setup_risc_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ if (zxdh_setup_dtb_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_queues_bind_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_intr_enable(dev) < 0) {
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ return 0;
+
+free_intr_vec:
+ zxdh_intr_release(dev);
+ return ret;
+}
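With the ZXDH_MSIX_* defines from zxdh_msg.h in this series, the eventfd budget computed above works out to 4 fixed vectors plus one per Rx queue when Rx interrupts are enabled; a minimal sketch of the same arithmetic:

/* Minimal sketch of the eventfd budget used by zxdh_configure_intr():
 * vector 0 is the config interrupt, 1..3 carry RISC messages,
 * 4 is the DTB vector, and 5.. are the per-Rx-queue vectors.
 */
static inline uint16_t zxdh_nb_efd(uint16_t nb_rx_queues, int rxq_intr)
{
	uint16_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM;

	if (rxq_intr)
		nb_efd += nb_rx_queues; /* one eventfd per Rx queue */
	return nb_efd;                  /* 4 here, plus the Rx queues */
}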
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -138,9 +434,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret != 0)
goto err_zxdh_init;
+ ret = zxdh_configure_intr(eth_dev);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
+ zxdh_intr_release(eth_dev);
zxdh_bar_msg_chan_exit();
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index bf495a083d..e62d46488c 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -8,6 +8,10 @@
#include <rte_ether.h>
#include "ethdev_driver.h"
+#include <rte_interrupts.h>
+#include <eal_interrupts.h>
+
+#include "zxdh_queue.h"
#ifdef __cplusplus
extern "C" {
@@ -44,6 +48,9 @@ struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ struct rte_intr_handle *risc_intr;
+ struct rte_intr_handle *dtb_intr;
+ struct virtqueue **vqs;
union virport_num vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -60,6 +67,7 @@ struct zxdh_hw {
uint8_t *isr;
uint8_t weak_barriers;
+ uint8_t intr_enabled;
uint8_t use_msix;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
uint8_t duplex;
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 0e6074022f..01411a6309 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -95,6 +95,12 @@
#define ZXDH_BAR_CHAN_INDEX_SEND 0
#define ZXDH_BAR_CHAN_INDEX_RECV 1
+#define ZXDH_BAR_CHAN_MSG_SYNC 0
+#define ZXDH_BAR_CHAN_MSG_NO_EMEC 0
+#define ZXDH_BAR_CHAN_MSG_EMEC 1
+#define ZXDH_BAR_CHAN_MSG_NO_ACK 0
+#define ZXDH_BAR_CHAN_MSG_ACK 1
+
uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
{ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND},
{ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV},
@@ -134,6 +140,36 @@ struct seqid_ring {
};
struct seqid_ring g_seqid_ring = {0};
+static inline const char *module_id_name(int val)
+{
+ switch (val) {
+ case ZXDH_BAR_MODULE_DBG: return "ZXDH_BAR_MODULE_DBG";
+ case ZXDH_BAR_MODULE_TBL: return "ZXDH_BAR_MODULE_TBL";
+ case ZXDH_BAR_MODULE_MISX: return "ZXDH_BAR_MODULE_MISX";
+ case ZXDH_BAR_MODULE_SDA: return "ZXDH_BAR_MODULE_SDA";
+ case ZXDH_BAR_MODULE_RDMA: return "ZXDH_BAR_MODULE_RDMA";
+ case ZXDH_BAR_MODULE_DEMO: return "ZXDH_BAR_MODULE_DEMO";
+ case ZXDH_BAR_MODULE_SMMU: return "ZXDH_BAR_MODULE_SMMU";
+ case ZXDH_BAR_MODULE_MAC: return "ZXDH_BAR_MODULE_MAC";
+ case ZXDH_BAR_MODULE_VDPA: return "ZXDH_BAR_MODULE_VDPA";
+ case ZXDH_BAR_MODULE_VQM: return "ZXDH_BAR_MODULE_VQM";
+ case ZXDH_BAR_MODULE_NP: return "ZXDH_BAR_MODULE_NP";
+ case ZXDH_BAR_MODULE_VPORT: return "ZXDH_BAR_MODULE_VPORT";
+ case ZXDH_BAR_MODULE_BDF: return "ZXDH_BAR_MODULE_BDF";
+ case ZXDH_BAR_MODULE_RISC_READY: return "ZXDH_BAR_MODULE_RISC_READY";
+ case ZXDH_BAR_MODULE_REVERSE: return "ZXDH_BAR_MODULE_REVERSE";
+ case ZXDH_BAR_MDOULE_NVME: return "ZXDH_BAR_MDOULE_NVME";
+ case ZXDH_BAR_MDOULE_NPSDK: return "ZXDH_BAR_MDOULE_NPSDK";
+ case ZXDH_BAR_MODULE_NP_TODO: return "ZXDH_BAR_MODULE_NP_TODO";
+ case ZXDH_MODULE_BAR_MSG_TO_PF: return "ZXDH_MODULE_BAR_MSG_TO_PF";
+ case ZXDH_MODULE_BAR_MSG_TO_VF: return "ZXDH_MODULE_BAR_MSG_TO_VF";
+ case ZXDH_MODULE_FLASH: return "ZXDH_MODULE_FLASH";
+ case ZXDH_BAR_MODULE_OFFSET_GET: return "ZXDH_BAR_MODULE_OFFSET_GET";
+ case ZXDH_BAR_EVENT_OVS_WITH_VCB: return "ZXDH_BAR_EVENT_OVS_WITH_VCB";
+ default: return "NA";
+ }
+}
+
static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
{
uint16_t lock_id = 0;
@@ -793,3 +829,154 @@ int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
return 0;
return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX]));
}
+
+static uint64_t zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+
+ return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+}
+
+static void zxdh_bar_msg_ack_async_msg_proc(struct bar_msg_header *msg_header,
+ uint8_t *receiver_buff)
+{
+ struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id];
+
+ if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id);
+ return;
+ }
+ if (msg_header->len > reps_info->buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u",
+ reps_info->buffer_len, msg_header->len + 4);
+ goto free_id;
+ }
+ uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr;
+
+ rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len);
+ *(uint16_t *)(reps_buffer + 1) = msg_header->len;
+ *(uint8_t *)(reps_info->reps_addr) = ZXDH_REPS_HEADER_REPLYED;
+
+free_id:
+ zxdh_bar_chan_msgid_free(msg_header->msg_id);
+}
+
+zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[ZXDH_BAR_MSG_MODULE_NUM];
+static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr, struct bar_msg_header *msg_header,
+ uint8_t *receiver_buff, void *dev)
+{
+ uint8_t *reps_buffer = rte_malloc(NULL, ZXDH_BAR_MSG_PAYLOAD_MAX_LEN, 0);
+
+ if (reps_buffer == NULL)
+ return;
+
+ zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id];
+ uint16_t reps_len = 0;
+
+ recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev);
+ msg_header->ack = ZXDH_BAR_CHAN_MSG_ACK;
+ msg_header->len = reps_len;
+ zxdh_bar_chan_msg_header_set(reply_addr, msg_header);
+ zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len);
+ zxdh_bar_chan_msg_valid_set(reply_addr, ZXDH_BAR_MSG_CHAN_USABLE);
+ rte_free(reps_buffer);
+}
+
+static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type,
+ uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+ uint64_t recv_rep_addr;
+
+ if (sync == ZXDH_BAR_CHAN_MSG_SYNC)
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+ else
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id);
+
+ return recv_rep_addr;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_check(struct bar_msg_header *msg_header)
+{
+ if (msg_header->valid != ZXDH_BAR_MSG_CHAN_USED) {
+ PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used.");
+ return ZXDH_BAR_MSG_ERR_MODULE;
+ }
+ uint8_t module_id = msg_header->module_id;
+
+ if (module_id >= (uint8_t)ZXDH_BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id);
+ return ZXDH_BAR_MSG_ERR_MODULE;
+ }
+ uint16_t len = msg_header->len;
+
+ if (len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len);
+ return ZXDH_BAR_MSG_ERR_LEN;
+ }
+ if (msg_recv_func_tbl[msg_header->module_id] == NULL) {
+ PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register",
+ module_id_name(module_id), module_id);
+ return ZXDH_BAR_MSG_ERR_MODULE_NOEXIST;
+ }
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev)
+{
+ struct bar_msg_header msg_header = {0};
+ uint64_t recv_addr = 0;
+ uint16_t ret = 0;
+
+ recv_addr = zxdh_recv_addr_get(src, dst, virt_addr);
+ if (recv_addr == 0) {
+ PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst);
+ return -1;
+ }
+
+ zxdh_bar_chan_msg_header_get(recv_addr, &msg_header);
+ ret = zxdh_bar_chan_msg_header_check(&msg_header);
+
+ if (ret != ZXDH_BAR_MSG_OK) {
+ PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret);
+ return -1;
+ }
+
+ uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0);
+ if (recved_msg == NULL) {
+ PMD_MSG_LOG(ERR, "malloc temp buff failed.");
+ return -1;
+ }
+ zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len);
+
+ uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr);
+
+ if (msg_header.sync == ZXDH_BAR_CHAN_MSG_SYNC) {
+ zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_valid_set(recv_addr, ZXDH_BAR_MSG_CHAN_USABLE);
+ if (msg_header.ack == ZXDH_BAR_CHAN_MSG_ACK) {
+ zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg);
+ goto exit;
+ }
+ return 0;
+
+exit:
+ rte_free(recved_msg);
+ return ZXDH_BAR_MSG_OK;
+}
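Sync messages are dispatched through msg_recv_func_tbl, so a module must have a handler of type zxdh_bar_chan_msg_recv_callback installed before its first message arrives. A minimal sketch of such a handler; the table assignment is shown directly only for illustration, since the registration helper is not part of this excerpt:

/* Hypothetical handler matching zxdh_bar_chan_msg_recv_callback.
 * The caller guarantees len <= ZXDH_BAR_MSG_PAYLOAD_MAX_LEN and that
 * reps_buffer has that capacity.
 */
static int demo_msg_handler(void *pay_load, uint16_t len,
		void *reps_buffer, uint16_t *reps_len, void *dev)
{
	(void)dev;
	/* echo the request payload back as the reply */
	rte_memcpy(reps_buffer, pay_load, len);
	*reps_len = len;
	return 0;
}

/* msg_recv_func_tbl[ZXDH_BAR_MODULE_DEMO] = demo_msg_handler; */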
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index a3fca395c1..14e5578033 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -15,8 +15,15 @@ extern "C" {
#define ZXDH_BAR0_INDEX 0
#define ZXDH_CTRLCH_OFFSET (0x2000)
+#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000)
#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3
+#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM)
+#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1
+#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1)
+#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM)
+#define ZXDH_QUEUE_INTR_VEC_NUM 256
#define ZXDH_BAR_MSG_POLLING_SPAN 100
#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
@@ -202,6 +209,8 @@ struct bar_msg_header {
uint16_t dst_pcieid; /* used in PF-->VF */
};
+typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len,
+ void *reps_buffer, uint16_t *reps_len, void *dev);
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
@@ -210,6 +219,7 @@ int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
struct zxdh_msg_recviver_mem *result);
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev);
#ifdef __cplusplus
}
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index b0c3d0e0be..06b54b06cd 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -97,6 +97,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
rte_write32(features >> 32, &hw->common_cfg->guest_feature);
}
+static uint16_t zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec)
+{
+ rte_write16(vec, &hw->common_cfg->msix_config);
+ return rte_read16(&hw->common_cfg->msix_config);
+}
+
+static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+ rte_write16(vec, &hw->common_cfg->queue_msix_vector);
+ return rte_read16(&hw->common_cfg->queue_msix_vector);
+}
+
+static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
+{
+ return rte_read8(hw->isr);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -104,8 +122,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_status = zxdh_set_status,
.get_features = zxdh_get_features,
.set_features = zxdh_set_features,
+ .set_queue_irq = zxdh_set_queue_irq,
+ .set_config_irq = zxdh_set_config_irq,
+ .get_isr = zxdh_get_isr,
};
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw)
+{
+ return VTPCI_OPS(hw)->get_isr(hw);
+}
+
uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw)
{
return VTPCI_OPS(hw)->get_features(hw);
@@ -288,3 +314,39 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
return 0;
}
+
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev)
+{
+ uint8_t pos = 0;
+ int32_t ret = rte_pci_read_config(dev, &pos, 1, ZXDH_PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret);
+ return ZXDH_MSIX_NONE;
+ }
+ while (pos) {
+ uint8_t cap[2] = {0};
+
+ ret = rte_pci_read_config(dev, cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap[0] == ZXDH_PCI_CAP_ID_MSIX) {
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap));
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR,
+ "failed to read pci cap at pos: %x ret %d", pos + 2, ret);
+ break;
+ }
+ if (flags & ZXDH_PCI_MSIX_ENABLE)
+ return ZXDH_MSIX_ENABLED;
+ else
+ return ZXDH_MSIX_DISABLED;
+ }
+ pos = cap[1];
+ }
+ return ZXDH_MSIX_NONE;
+}
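A minimal sketch of how probe code might consume the detection result; the call site is illustrative, only the detect routine itself is part of this patch:

/* Illustrative only: record the MSI-X state during device init. */
static void zxdh_record_msix(struct rte_pci_device *pci_dev, struct zxdh_hw *hw)
{
	enum zxdh_msix_status status = zxdh_vtpci_msix_detect(pci_dev);

	hw->use_msix = (status == ZXDH_MSIX_ENABLED);
}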
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index f67de40962..25e34c3ba9 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -22,6 +22,13 @@ enum zxdh_msix_status {
ZXDH_MSIX_ENABLED = 2
};
+/* The bit of the ISR which indicates a device has an interrupt. */
+#define ZXDH_PCI_ISR_INTR 0x1
+/* The bit of the ISR which indicates a device configuration change. */
+#define ZXDH_PCI_ISR_CONFIG 0x2
+/* Vector value used to disable MSI for queue. */
+#define ZXDH_MSI_NO_VECTOR 0x7F
+
#define ZXDH_PCI_CAPABILITY_LIST 0x34
#define ZXDH_PCI_CAP_ID_VNDR 0x09
#define ZXDH_PCI_CAP_ID_MSIX 0x11
@@ -124,6 +131,9 @@ struct zxdh_pci_ops {
uint64_t (*get_features)(struct zxdh_hw *hw);
void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+ uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec);
+ uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
+ uint8_t (*get_isr)(struct zxdh_hw *hw);
};
struct zxdh_hw_internal {
@@ -143,6 +153,8 @@ int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw);
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev);
#ifdef __cplusplus
}
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 53151 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v7 8/9] net/zxdh: add zxdh dev infos get ops
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (6 preceding siblings ...)
2024-10-22 12:20 ` [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 3289 bytes --]
Add support for zxdh dev infos get ops.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 48 +++++++++++++++++++++++++++++++++-
1 file changed, 47 insertions(+), 1 deletion(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index cb8c85941a..cc270d6e73 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -12,6 +12,9 @@
#include "zxdh_msg.h"
#include "zxdh_common.h"
+#define ZXDH_MIN_RX_BUFSIZE 64
+#define ZXDH_MAX_RX_PKTLEN 14000U
+
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
uint16_t vport_to_vfid(union virport_num v)
@@ -25,6 +28,43 @@ uint16_t vport_to_vfid(union virport_num v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ dev_info->speed_capa = rte_eth_speed_bitflag(hw->speed, RTE_ETH_LINK_FULL_DUPLEX);
+ dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX);
+ dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX);
+ dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE;
+ dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN;
+ dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS;
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER);
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM);
+
+ return 0;
+}
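Once wired into zxdh_eth_dev_ops below, these limits become visible through the standard ethdev API; a minimal usage sketch from the application side:

#include <stdio.h>
#include <rte_ethdev.h>

/* Minimal usage sketch: query the limits reported by zxdh_dev_infos_get(). */
static int print_zxdh_limits(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;
	printf("port %u: max rxq %u, max txq %u, max pktlen %u\n",
	       port_id, info.max_rx_queues, info.max_tx_queues,
	       info.max_rx_pktlen);
	return 0;
}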
+
static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
{
struct zxdh_hw *hw = dev->data->dev_private;
@@ -321,6 +361,12 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+/* dev_ops for zxdh, bare necessities for basic operation */
+static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_infos_get = zxdh_dev_infos_get,
+};
+
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -377,7 +423,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
struct zxdh_hw *hw = eth_dev->data->dev_private;
int ret = 0;
- eth_dev->dev_ops = NULL;
+ eth_dev->dev_ops = &zxdh_eth_dev_ops;
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 7205 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (7 preceding siblings ...)
2024-10-22 12:20 ` [PATCH v7 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
@ 2024-10-22 12:20 ` Junlong Wang
2024-10-24 11:31 ` [v7,9/9] " Junlong Wang
` (4 more replies)
8 siblings, 5 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 39070 bytes --]
Provide zxdh dev configure ops for queue
check, reset, resource allocation, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_common.c | 118 +++++++++
drivers/net/zxdh/zxdh_common.h | 12 +
drivers/net/zxdh/zxdh_ethdev.c | 457 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 18 +-
drivers/net/zxdh/zxdh_pci.c | 97 +++++++
drivers/net/zxdh/zxdh_pci.h | 29 +++
drivers/net/zxdh/zxdh_queue.c | 131 ++++++++++
drivers/net/zxdh/zxdh_queue.h | 175 ++++++++++++-
9 files changed, 1035 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_queue.c
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index a16db47f89..b96aa5a27e 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -18,4 +18,5 @@ sources = files(
'zxdh_pci.c',
'zxdh_msg.c',
'zxdh_common.c',
+ 'zxdh_queue.c',
)
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
index 21a0cd72cf..657f35a49c 100644
--- a/drivers/net/zxdh/zxdh_common.c
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -20,6 +20,7 @@
#define ZXDH_COMMON_TABLE_WRITE 1
#define ZXDH_COMMON_FIELD_PHYPORT 6
+#define ZXDH_COMMON_FIELD_DATACH 3
#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2)
@@ -247,3 +248,120 @@ int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid)
int32_t ret = zxdh_get_res_panel_id(&param, pannelid);
return ret;
}
+
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ uint32_t val = *((volatile uint32_t *)(baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ *((volatile uint32_t *)(baseaddr + reg)) = val;
+}
+
+int32_t zxdh_acquire_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ /* check whether lock is used */
+ if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
+ return -1;
+
+ return 0;
+}
+
+int32_t zxdh_release_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
+ var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
+ zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
+ return 0;
+ }
+
+ return -1;
+}
+
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg)
+{
+ uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val)
+{
+ *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val;
+}
+
+static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_pci_bar_msg desc;
+ struct zxdh_msg_recviver_mem msg_rsp;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+ if (buff_size != 0 && buff == NULL) {
+ PMD_DRV_LOG(ERR, "Buff is invalid");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE,
+ field, buff, buff_size);
+
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0);
+ if (ret != 0)
+ goto free_rsp_data;
+
+free_rsp_data:
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_datach_set(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t buff_size = (hw->queue_num + 1) * 2;
+ void *buff = rte_zmalloc(NULL, buff_size, 0);
+
+ if (unlikely(buff == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate buff");
+ return -ENOMEM;
+ }
+ memset(buff, 0, buff_size);
+ uint16_t *pdata = (uint16_t *)buff;
+ *pdata++ = hw->queue_num;
+ uint16_t i;
+
+ for (i = 0; i < hw->queue_num; i++)
+ *(pdata + i) = hw->channel_context[i].ph_chno;
+
+ int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH,
+ (void *)buff, buff_size);
+
+ if (ret != 0)
+ PMD_DRV_LOG(ERR, "Failed to setup data channel of common table");
+
+ rte_free(buff);
+ return ret;
+}
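The message built above is a flat array of 16-bit words: slot 0 carries the queue count and slots 1..n the physical channel ids. A minimal sketch of the same encoding (the helper name is illustrative):

/* Minimal sketch of the layout sent by zxdh_datach_set():
 *   pdata[0]    = hw->queue_num
 *   pdata[1..n] = hw->channel_context[i].ph_chno, i = 0..n-1
 * buff_size is therefore (queue_num + 1) * sizeof(uint16_t) bytes.
 */
static void zxdh_encode_datach(uint16_t *pdata, const struct zxdh_hw *hw)
{
	uint16_t i;

	pdata[0] = hw->queue_num;
	for (i = 0; i < hw->queue_num; i++)
		pdata[i + 1] = hw->channel_context[i].ph_chno;
}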
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
index f098ae4cf9..568aba258f 100644
--- a/drivers/net/zxdh/zxdh_common.h
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -14,6 +14,10 @@
extern "C" {
#endif
+#define ZXDH_VF_LOCK_REG 0x90
+#define ZXDH_VF_LOCK_ENABLE_MASK 0x1
+#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10
+
struct zxdh_res_para {
uint64_t virt_addr;
uint16_t pcie_id;
@@ -23,6 +27,14 @@ struct zxdh_res_para {
int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg);
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val);
+int32_t zxdh_release_lock(struct zxdh_hw *hw);
+int32_t zxdh_acquire_lock(struct zxdh_hw *hw);
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg);
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val);
+int32_t zxdh_datach_set(struct rte_eth_dev *dev);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index cc270d6e73..30d164bffd 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -11,6 +11,7 @@
#include "zxdh_pci.h"
#include "zxdh_msg.h"
#include "zxdh_common.h"
+#include "zxdh_queue.h"
#define ZXDH_MIN_RX_BUFSIZE 64
#define ZXDH_MAX_RX_PKTLEN 14000U
@@ -361,8 +362,464 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+static int32_t zxdh_features_update(struct zxdh_hw *hw,
+ const struct rte_eth_rxmode *rxmode,
+ const struct rte_eth_txmode *txmode)
+{
+ uint64_t rx_offloads = rxmode->offloads;
+ uint64_t tx_offloads = txmode->offloads;
+ uint64_t req_features = hw->guest_features;
+
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM);
+
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) |
+ (1ULL << ZXDH_NET_F_GUEST_TSO6);
+
+ if (tx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_CSUM);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) |
+ (1ULL << ZXDH_NET_F_HOST_TSO6);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_UFO);
+
+ req_features = req_features & hw->host_features;
+ hw->guest_features = req_features;
+
+ VTPCI_OPS(hw)->set_features(hw, req_features);
+
+ if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) &&
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) {
+ PMD_DRV_LOG(ERR, "rx checksum not available on this host");
+ return -ENOTSUP;
+ }
+
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
+ (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) {
+ PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host");
+ return -ENOTSUP;
+ }
+ return 0;
+}
+
+static bool rx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6);
+}
+
+static bool tx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO);
+}
+
+static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t i = 0;
+
+ const char *type = NULL;
+ struct virtqueue *vq = NULL;
+ struct rte_mbuf *buf = NULL;
+ int32_t queue_type = 0;
+
+ if (hw->vqs == NULL)
+ return;
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (!vq)
+ continue;
+
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == ZXDH_VTNET_RQ)
+ type = "rxq";
+ else if (queue_type == ZXDH_VTNET_TQ)
+ type = "txq";
+ else
+ continue;
+ PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i);
+
+ while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL)
+ rte_pktmbuf_free(buf);
+ }
+}
+
+static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t base = (queue_type == ZXDH_VTNET_RQ) ? 0 : 1;
+ uint16_t i = 0;
+ uint16_t j = 0;
+ uint16_t done = 0;
+ uint16_t timeout = 0;
+
+ while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ rte_delay_us_block(1000);
+ /* acquire hw lock */
+ if (zxdh_acquire_lock(hw) < 0) {
+ PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout);
+ continue;
+ }
+ /* Iterate COI table and find free channel */
+ for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) {
+ uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t));
+ uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+
+ for (j = base; j < 32; j += 2) {
+ /* Got the available channel & update COI table */
+ if ((var & (1 << j)) == 0) {
+ var |= (1 << j);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+ done = 1;
+ break;
+ }
+ }
+ if (done)
+ break;
+ }
+ break;
+ }
+ if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ zxdh_release_lock(hw);
+ /* check for no channel condition */
+ if (done != 1) {
+ PMD_INIT_LOG(ERR, "NO availd queues");
+ return -1;
+ }
+ /* return the available channel ID */
+ return (i * 32) + j;
+}
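The COI table is a bitmap in which each 32-bit word covers 32 physical channels, with Rx channels on even bits and Tx channels on odd bits; a worked example of the index math:

/* Worked example of the COI bitmap math used above: word i, bit j
 * names physical channel i * 32 + j, so channel 70 lives in word
 * 70 / 32 = 2 at bit 70 % 32 = 6. The scan starts at bit 0 for Rx
 * (even bits) and bit 1 for Tx (odd bits), stepping by 2.
 */
static inline uint16_t zxdh_coi_channel(uint16_t word_idx, uint16_t bit_idx)
{
	return word_idx * 32 + bit_idx;
}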
+
+static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (hw->channel_context[lch].valid == 1) {
+ PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired Physics channel:%u",
+ lch, hw->channel_context[lch].ph_chno);
+ return hw->channel_context[lch].ph_chno;
+ }
+ int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch));
+
+ if (pch < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ hw->channel_context[lch].ph_chno = (uint16_t)pch;
+ hw->channel_context[lch].valid = 1;
+ PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch);
+ return 0;
+}
+
+static void zxdh_init_vring(struct virtqueue *vq)
+{
+ int32_t size = vq->vq_nentries;
+ uint8_t *ring_mem = vq->vq_ring_virt_mem;
+
+ memset(ring_mem, 0, vq->vq_ring_size);
+
+ vq->vq_used_cons_idx = 0;
+ vq->vq_desc_head_idx = 0;
+ vq->vq_avail_idx = 0;
+ vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ virtqueue_disable_intr(vq);
+}
+
+static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx)
+{
+ char vq_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0};
+ char vq_hdr_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0};
+ const struct rte_memzone *mz = NULL;
+ const struct rte_memzone *hdr_mz = NULL;
+ uint32_t size = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct virtnet_rx *rxvq = NULL;
+ struct virtnet_tx *txvq = NULL;
+ struct virtqueue *vq = NULL;
+ size_t sz_hdr_mz = 0;
+ void *sw_ring = NULL;
+ int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx);
+ int32_t numa_node = dev->device->numa_node;
+ uint16_t vtpci_phy_qidx = 0;
+ uint32_t vq_size = 0;
+ int32_t ret = 0;
+
+ if (hw->channel_context[vtpci_logic_qidx].valid == 0) {
+ PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx);
+ return -EINVAL;
+ }
+ vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno;
+
+ PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx :%d setting up physical queue: %u on NUMA node %d",
+ vtpci_logic_qidx, vtpci_phy_qidx, numa_node);
+
+ vq_size = ZXDH_QUEUE_DEPTH;
+
+ if (VTPCI_OPS(hw)->set_queue_num != NULL)
+ VTPCI_OPS(hw)->set_queue_num(hw, vtpci_phy_qidx, vq_size);
+
+ snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx);
+
+ size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct vq_desc_extra),
+ RTE_CACHE_LINE_SIZE);
+ if (queue_type == ZXDH_VTNET_TQ) {
+ /*
+ * For each xmit packet, allocate a zxdh_net_hdr
+ * and indirect ring elements
+ */
+ sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region);
+ }
+
+ vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node);
+ if (vq == NULL) {
+ PMD_INIT_LOG(ERR, "can not allocate vq");
+ return -ENOMEM;
+ }
+ hw->vqs[vtpci_logic_qidx] = vq;
+
+ vq->hw = hw;
+ vq->vq_queue_index = vtpci_phy_qidx;
+ vq->vq_nentries = vq_size;
+
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = ZXDH_VRING_PACKED_DESC_F_AVAIL;
+ vq->vq_packed.event_flags_shadow = 0;
+ if (queue_type == ZXDH_VTNET_RQ)
+ vq->vq_packed.cached_flags |= ZXDH_VRING_DESC_F_WRITE;
+
+ /*
+ * Reserve a memzone for vring elements
+ */
+ size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN);
+ vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN);
+ PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
+
+ mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ ZXDH_PCI_VRING_ALIGN);
+ if (mz == NULL) {
+ if (rte_errno == EEXIST)
+ mz = rte_memzone_lookup(vq_name);
+ if (mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+
+ memset(mz->addr, 0, mz->len);
+
+ vq->vq_ring_mem = mz->iova;
+ vq->vq_ring_virt_mem = mz->addr;
+
+ zxdh_init_vring(vq);
+
+ if (sz_hdr_mz) {
+ snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
+ dev->data->port_id, vtpci_phy_qidx);
+ hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ RTE_CACHE_LINE_SIZE);
+ if (hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ hdr_mz = rte_memzone_lookup(vq_hdr_name);
+ if (hdr_mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+ }
+
+ if (queue_type == ZXDH_VTNET_RQ) {
+ size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]);
+
+ sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node);
+ if (!sw_ring) {
+ PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+
+ vq->sw_ring = sw_ring;
+ rxvq = &vq->rxq;
+ rxvq->vq = vq;
+ rxvq->port_id = dev->data->port_id;
+ rxvq->mz = mz;
+ } else { /* queue_type == ZXDH_VTNET_TQ */
+ txvq = &vq->txq;
+ txvq->vq = vq;
+ txvq->port_id = dev->data->port_id;
+ txvq->mz = mz;
+ txvq->virtio_net_hdr_mz = hdr_mz;
+ txvq->virtio_net_hdr_mem = hdr_mz->iova;
+ }
+
+ vq->offset = offsetof(struct rte_mbuf, buf_iova);
+ if (queue_type == ZXDH_VTNET_TQ) {
+ struct zxdh_tx_region *txr = hdr_mz->addr;
+ uint32_t i;
+
+ memset(txr, 0, vq_size * sizeof(*txr));
+ for (i = 0; i < vq_size; i++) {
+ /* first indirect descriptor is always the tx header */
+ struct vring_packed_desc *start_dp = txr[i].tx_packed_indir;
+
+ vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir));
+ start_dp->addr = txvq->virtio_net_hdr_mem + i * sizeof(*txr) +
+ offsetof(struct zxdh_tx_region, tx_hdr);
+ /* length will be updated to actual pi hdr size when xmit pkt */
+ start_dp->len = 0;
+ }
+ }
+ if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
+ PMD_INIT_LOG(ERR, "setup_queue failed");
+ return -EINVAL;
+ }
+ return 0;
+fail_q_alloc:
+ rte_free(sw_ring);
+ rte_memzone_free(hdr_mz);
+ rte_memzone_free(mz);
+ rte_free(vq);
+ return ret;
+}
+
+static int32_t zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq)
+{
+ uint16_t lch;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
+ if (!hw->vqs) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vqs");
+ return -ENOMEM;
+ }
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (zxdh_acquire_channel(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire the channels");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ if (zxdh_init_queue(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to alloc virtio queue");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+
+static int32_t zxdh_dev_configure(struct rte_eth_dev *dev)
+{
+ const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint32_t nr_vq = 0;
+ int32_t ret = 0;
+
+ if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+ return -EINVAL;
+ }
+ if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues,
+ ZXDH_QUEUES_NUM_MAX);
+ return -EINVAL;
+ }
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode);
+ return -EINVAL;
+ }
+
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode);
+ return -EINVAL;
+ }
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode);
+ return -EINVAL;
+ }
+
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode);
+ return -EINVAL;
+ }
+
+ ret = zxdh_features_update(hw, rxmode, txmode);
+ if (ret < 0)
+ return ret;
+
+ /* check if lsc interrupt feature is enabled */
+ if (dev->data->dev_conf.intr_conf.lsc) {
+ if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
+ PMD_DRV_LOG(ERR, "link status not supported by host");
+ return -ENOTSUP;
+ }
+ }
+
+ hw->has_tx_offload = tx_offload_enabled(hw);
+ hw->has_rx_offload = rx_offload_enabled(hw);
+
+ nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues;
+ if (nr_vq == hw->queue_num)
+ return 0;
+
+ PMD_DRV_LOG(DEBUG, "queue changed need reset ");
+ /* Reset the device although not necessary at startup */
+ zxdh_vtpci_reset(hw);
+
+ /* Tell the host we've noticed this device. */
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK);
+
+ /* Tell the host we know how to drive the device. */
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER);
+ /* The queues need to be released when reconfiguring */
+ if (hw->vqs != NULL) {
+ zxdh_dev_free_mbufs(dev);
+ zxdh_free_queues(dev);
+ }
+
+ hw->queue_num = nr_vq;
+ ret = zxdh_alloc_queues(dev, nr_vq);
+ if (ret < 0)
+ return ret;
+
+ zxdh_datach_set(dev);
+
+ if (zxdh_configure_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to configure interrupt");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+
+ zxdh_vtpci_reinit_complete(hw);
+
+ return ret;
+}
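From the application side, the checks above require equal Rx/Tx queue counts and plain multi-queue modes; a minimal configuration sketch that passes them:

#include <rte_ethdev.h>

/* Minimal usage sketch satisfying zxdh_dev_configure()'s checks:
 * equal Rx/Tx queue counts, RSS or no Rx mq mode, no Tx mq mode.
 */
static int configure_zxdh_port(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE },
		.txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
	};

	return rte_eth_dev_configure(port_id, nb_queues, nb_queues, &conf);
}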
+
/* dev_ops for zxdh, bare necessities for basic operation */
static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_configure = zxdh_dev_configure,
.dev_infos_get = zxdh_dev_infos_get,
};
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index e62d46488c..a77db614aa 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -11,8 +11,6 @@
#include <rte_interrupts.h>
#include <eal_interrupts.h>
-#include "zxdh_queue.h"
-
#ifdef __cplusplus
extern "C" {
#endif
@@ -32,6 +30,13 @@ extern "C" {
#define ZXDH_NUM_BARS 2
#define ZXDH_RX_QUEUES_MAX 128U
#define ZXDH_TX_QUEUES_MAX 128U
+#define ZXDH_QUEUE_DEPTH 1024
+#define ZXDH_QUEUES_BASE 0
+#define ZXDH_TOTAL_QUEUES_NUM 4096
+#define ZXDH_QUEUES_NUM_MAX 256
+#define ZXDH_QUERES_SHARE_BASE (0x5000)
+
+#define ZXDH_MBUF_BURST_SZ 64
union virport_num {
uint16_t vport;
@@ -44,6 +49,11 @@ union virport_num {
};
};
+struct chnl_context {
+ uint16_t valid;
+ uint16_t ph_chno;
+};
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
@@ -51,6 +61,7 @@ struct zxdh_hw {
struct rte_intr_handle *risc_intr;
struct rte_intr_handle *dtb_intr;
struct virtqueue **vqs;
+ struct chnl_context channel_context[ZXDH_QUEUES_NUM_MAX];
union virport_num vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -64,6 +75,7 @@ struct zxdh_hw {
uint16_t device_id;
uint16_t port_id;
uint16_t vfid;
+ uint16_t queue_num;
uint8_t *isr;
uint8_t weak_barriers;
@@ -75,6 +87,8 @@ struct zxdh_hw {
uint8_t phyport;
uint8_t panel_id;
uint8_t msg_chan_init;
+ uint8_t has_tx_offload;
+ uint8_t has_rx_offload;
};
uint16_t vport_to_vfid(union virport_num v);
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index 06b54b06cd..e4422c44a1 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -115,6 +115,86 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
return rte_read8(hw->isr);
}
+static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ return rte_read16(&hw->common_cfg->queue_size);
+}
+
+static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ rte_write16(vq_size, &hw->common_cfg->queue_size);
+}
+
+static int32_t check_vq_phys_addr_ok(struct virtqueue *vq)
+{
+ if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) {
+ PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!");
+ return 0;
+ }
+ return 1;
+}
+
+static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
+{
+ rte_write32(val & ((1ULL << 32) - 1), lo);
+ rte_write32(val >> 32, hi);
+}
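A worked example of the two-part write, which is how the 64-bit ring addresses below reach the 32-bit queue_desc/avail/used register pairs:

/* io_write64_twopart(0x123456000ULL, lo, hi) writes
 *   lo = 0x23456000  (val & 0xffffffff)
 *   hi = 0x00000001  (val >> 32)
 */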
+
+static int32_t zxdh_setup_queue(struct zxdh_hw *hw, struct virtqueue *vq)
+{
+ uint64_t desc_addr = 0;
+ uint64_t avail_addr = 0;
+ uint64_t used_addr = 0;
+ uint16_t notify_off = 0;
+
+ if (!check_vq_phys_addr_ok(vq))
+ return -1;
+
+ desc_addr = vq->vq_ring_mem;
+ avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc);
+ if (vtpci_packed_queue(vq->hw)) {
+ used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct vring_packed_desc_event)),
+ ZXDH_PCI_VRING_ALIGN);
+ } else {
+ used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+ ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN);
+ }
+
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ /* The device exposes a single notify area; ignore queue_notify_off */
+ notify_off = 0;
+ vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
+ notify_off * hw->notify_off_multiplier);
+
+ rte_write16(1, &hw->common_cfg->queue_enable);
+
+ return 0;
+}
+
+static void zxdh_del_queue(struct zxdh_hw *hw, struct virtqueue *vq)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(0, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ rte_write16(0, &hw->common_cfg->queue_enable);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -125,6 +205,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_queue_irq = zxdh_set_queue_irq,
.set_config_irq = zxdh_set_config_irq,
.get_isr = zxdh_get_isr,
+ .get_queue_num = zxdh_get_queue_num,
+ .set_queue_num = zxdh_set_queue_num,
+ .setup_queue = zxdh_setup_queue,
+ .del_queue = zxdh_del_queue,
};
uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw)
@@ -151,6 +235,19 @@ void zxdh_vtpci_reset(struct zxdh_hw *hw)
PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
}
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw)
+{
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK);
+}
+
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ if (status != ZXDH_CONFIG_STATUS_RESET)
+ status |= VTPCI_OPS(hw)->get_status(hw);
+
+ VTPCI_OPS(hw)->set_status(hw, status);
+}
+
static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
{
uint8_t bar = cap->bar;
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index 25e34c3ba9..4c9fea7c6d 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -8,7 +8,9 @@
#include <stdint.h>
#include <stdbool.h>
+#include <rte_pci.h>
#include <bus_pci_driver.h>
+#include <ethdev_driver.h>
#include "zxdh_ethdev.h"
@@ -34,8 +36,20 @@ enum zxdh_msix_status {
#define ZXDH_PCI_CAP_ID_MSIX 0x11
#define ZXDH_PCI_MSIX_ENABLE 0x8000
+#define ZXDH_PCI_VRING_ALIGN 4096
+#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */
+#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */
+#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */
#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */
+#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */
+#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */
+#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */
+
+#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */
+#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */
+#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */
#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
@@ -60,6 +74,8 @@ enum zxdh_msix_status {
#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
#define ZXDH_CONFIG_STATUS_FAILED 0x80
+#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12
+
struct zxdh_net_config {
/* The config defining mac address (if ZXDH_NET_F_MAC) */
uint8_t mac[RTE_ETHER_ADDR_LEN];
@@ -122,6 +138,11 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_F_RING_PACKED);
+}
+
struct zxdh_pci_ops {
void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
@@ -134,6 +155,11 @@ struct zxdh_pci_ops {
uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec);
uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
uint8_t (*get_isr)(struct zxdh_hw *hw);
+ uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id);
+ void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size);
+
+ int32_t (*setup_queue)(struct zxdh_hw *hw, struct virtqueue *vq);
+ void (*del_queue)(struct zxdh_hw *hw, struct virtqueue *vq);
};
struct zxdh_hw_internal {
@@ -156,6 +182,9 @@ uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw);
enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev);
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw);
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c
new file mode 100644
index 0000000000..4f12271f66
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.c
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+
+#include "zxdh_queue.h"
+#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+#include "zxdh_common.h"
+#include "zxdh_msg.h"
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq)
+{
+ struct rte_mbuf *cookie = NULL;
+ int32_t idx = 0;
+
+ if (vq == NULL)
+ return NULL;
+
+ for (idx = 0; idx < vq->vq_nentries; idx++) {
+ cookie = vq->vq_descx[idx].cookie;
+ if (cookie != NULL) {
+ vq->vq_descx[idx].cookie = NULL;
+ return cookie;
+ }
+ }
+ return NULL;
+}
+
+static int32_t zxdh_release_channel(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t var = 0;
+ uint32_t addr = 0;
+ uint32_t widx = 0;
+ uint32_t bidx = 0;
+ uint16_t pch = 0;
+ uint16_t lch = 0;
+ uint16_t timeout = 0;
+
+ while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ if (zxdh_acquire_lock(hw) != 0) {
+ PMD_INIT_LOG(ERR,
+ "Could not acquire lock to release channel, timeout %d", timeout);
+ continue;
+ }
+ break;
+ }
+
+ if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "Acquire lock timeout");
+ return -1;
+ }
+
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (hw->channel_context[lch].valid == 0) {
+ PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to release", lch);
+ continue;
+ }
+
+ pch = hw->channel_context[lch].ph_chno;
+ widx = pch / 32;
+ bidx = pch % 32;
+
+ addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t));
+ var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+ var &= ~(1 << bidx);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+
+ hw->channel_context[lch].valid = 0;
+ hw->channel_context[lch].ph_chno = 0;
+ }
+
+ zxdh_release_lock(hw);
+
+ return 0;
+}
+
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx)
+{
+ if (vtpci_queue_idx % 2 == 0)
+ return ZXDH_VTNET_RQ;
+ else
+ return ZXDH_VTNET_TQ;
+}
+
+int32_t zxdh_free_queues(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ struct virtqueue *vq = NULL;
+ int32_t queue_type = 0;
+ uint16_t i = 0;
+
+ if (hw->vqs == NULL)
+ return 0;
+
+ if (zxdh_release_channel(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to clear coi table");
+ return -1;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (vq == NULL)
+ continue;
+
+ VTPCI_OPS(hw)->del_queue(hw, vq);
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == ZXDH_VTNET_RQ) {
+ rte_free(vq->sw_ring);
+ rte_memzone_free(vq->rxq.mz);
+ } else if (queue_type == ZXDH_VTNET_TQ) {
+ rte_memzone_free(vq->txq.mz);
+ rte_memzone_free(vq->txq.virtio_net_hdr_mz);
+ }
+
+ rte_free(vq);
+ hw->vqs[i] = NULL;
+ PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i);
+ }
+
+ rte_free(hw->vqs);
+ hw->vqs = NULL;
+
+ return 0;
+}
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
index 7b48f4884b..4038541286 100644
--- a/drivers/net/zxdh/zxdh_queue.h
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -11,11 +11,30 @@
#include "zxdh_ethdev.h"
#include "zxdh_rxtx.h"
+#include "zxdh_pci.h"
#ifdef __cplusplus
extern "C" {
#endif
+enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 };
+
+#define ZXDH_VIRTQUEUE_MAX_NAME_SZ 32
+#define ZXDH_RQ_QUEUE_IDX 0
+#define ZXDH_TQ_QUEUE_IDX 1
+#define ZXDH_MAX_TX_INDIRECT 8
+
+/* This marks a buffer as write-only (otherwise read-only). */
+#define ZXDH_VRING_DESC_F_WRITE 2
+/* This flag means the descriptor was made available by the driver */
+#define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7))
+
+#define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0
+#define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1
+#define ZXDH_RING_EVENT_FLAGS_DESC 0x2
+
+#define ZXDH_VQ_RING_DESC_CHAIN_END 32768
+
/** ring descriptors: 16 bytes.
* These can chain together via "next".
**/
@@ -26,6 +45,19 @@ struct vring_desc {
uint16_t next; /* We chain unused descriptors via this. */
};
+struct vring_used_elem {
+ /* Index of start of used descriptor chain. */
+ uint32_t id;
+ /* Total length of the descriptor chain which was written to. */
+ uint32_t len;
+};
+
+struct vring_used {
+ uint16_t flags;
+ uint16_t idx;
+ struct vring_used_elem ring[0];
+};
+
struct vring_avail {
uint16_t flags;
uint16_t idx;
@@ -91,7 +123,7 @@ struct virtqueue {
/**
* Head of the free chain in the descriptor table. If
* there are no free descriptors, this will be set to
- * VQ_RING_DESC_CHAIN_END.
+ * ZXDH_VQ_RING_DESC_CHAIN_END.
**/
uint16_t vq_desc_head_idx;
uint16_t vq_desc_tail_idx;
@@ -102,6 +134,147 @@ struct virtqueue {
struct vq_desc_extra vq_descx[0];
};
+struct zxdh_type_hdr {
+ uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */
+ uint8_t pd_len;
+ uint8_t num_buffers;
+ uint8_t reserved;
+} __rte_packed; /* 4B */
+
+struct zxdh_pi_hdr {
+ uint8_t pi_len;
+ uint8_t pkt_type;
+ uint16_t vlan_id;
+ uint32_t ipv6_extend;
+ uint16_t l3_offset;
+ uint16_t l4_offset;
+ uint8_t phy_port;
+ uint8_t pkt_flag_hi8;
+ uint16_t pkt_flag_lw16;
+ union {
+ struct {
+ uint64_t sa_idx;
+ uint8_t reserved_8[8];
+ } dl;
+ struct {
+ uint32_t lro_flag;
+ uint32_t lro_mss;
+ uint16_t err_code;
+ uint16_t pm_id;
+ uint16_t pkt_len;
+ uint8_t reserved[2];
+ } ul;
+ };
+} __rte_packed; /* 32B */
+
+struct zxdh_pd_hdr_dl {
+ uint32_t ol_flag;
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t dst_vfid;
+ uint32_t svlan_insert;
+ uint32_t cvlan_insert;
+} __rte_packed; /* 16B */
+
+struct zxdh_net_hdr_dl {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_dl pd_hdr; /* 16B */
+} __rte_packed;
+
+struct zxdh_pd_hdr_ul {
+ uint32_t pkt_flag;
+ uint32_t rss_hash;
+ uint32_t fd;
+ uint32_t striped_vlan_tci;
+ /* ovs */
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t src_vfid;
+ /* */
+ uint16_t pkt_type_out;
+ uint16_t pkt_type_in;
+} __rte_packed; /* 24B */
+
+struct zxdh_net_hdr_ul {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_ul pd_hdr; /* 24B */
+} __rte_packed; /* 60B */
+
+struct zxdh_tx_region {
+ struct zxdh_net_hdr_dl tx_hdr;
+ union {
+ struct vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT];
+ struct vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT];
+ } __rte_packed;
+};
+
+static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align)
+{
+ size_t size;
+
+ if (vtpci_packed_queue(hw)) {
+ size = num * sizeof(struct vring_packed_desc);
+ size += sizeof(struct vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_packed_desc_event);
+ return size;
+ }
+
+ size = num * sizeof(struct vring_desc);
+ size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_used) + (num * sizeof(struct vring_used_elem));
+ return size;
+}
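A worked example of the packed-ring branch with ZXDH_QUEUE_DEPTH (1024) entries and 4 KiB alignment, assuming the usual 16-byte packed descriptor and 4-byte event structure:

/* vring_size(hw, 1024, 4096) for a packed queue:
 *   1024 * 16 (descriptors)  = 16384
 *   + 4 (driver event)       = 16388
 *   aligned up to 4096       = 20480
 *   + 4 (device event)       = 20484
 * zxdh_init_queue() then rounds this up again, reserving a
 * 24576-byte memzone for the ring.
 */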
+
+static inline void vring_init_packed(struct vring_packed *vr, uint8_t *p,
+ unsigned long align, uint32_t num)
+{
+ vr->num = num;
+ vr->desc = (struct vring_packed_desc *)p;
+ vr->driver = (struct vring_packed_desc_event *)(p +
+ vr->num * sizeof(struct vring_packed_desc));
+ vr->device = (struct vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver +
+ sizeof(struct vring_packed_desc_event)), align);
+}
+
+static inline void vring_desc_init_packed(struct virtqueue *vq, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n - 1; i++) {
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = i + 1;
+ }
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = ZXDH_VQ_RING_DESC_CHAIN_END;
+}
+
+static inline void vring_desc_init_indirect_packed(struct vring_packed_desc *dp, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n; i++) {
+ dp[i].id = (uint16_t)i;
+ dp[i].flags = ZXDH_VRING_DESC_F_WRITE;
+ }
+}
+
+static inline void virtqueue_disable_intr(struct virtqueue *vq)
+{
+ if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow;
+ }
+}
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq);
+int32_t zxdh_free_queues(struct rte_eth_dev *dev);
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx);
+
+
#ifdef __cplusplus
}
#endif
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 91442 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [v7,9/9] net/zxdh: add zxdh dev configure ops
2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
@ 2024-10-24 11:31 ` Junlong Wang
2024-10-25 9:48 ` Junlong Wang
` (3 subsequent siblings)
4 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-24 11:31 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19
[-- Attachment #1.1.1: Type: text/plain, Size: 100 bytes --]
If you have time, I hope you can check whether the zxdh driver still needs to be modified.
Best regards!
[-- Attachment #1.1.2: Type: text/html , Size: 224 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [v7,9/9] net/zxdh: add zxdh dev configure ops
2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
2024-10-24 11:31 ` [v7,9/9] " Junlong Wang
@ 2024-10-25 9:48 ` Junlong Wang
2024-10-26 2:32 ` Junlong Wang
` (2 subsequent siblings)
4 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-25 9:48 UTC (permalink / raw)
To: thomas, ferruh.yigit; +Cc: dev, stephen, wang.yong19
[-- Attachment #1.1.1: Type: text/plain, Size: 123 bytes --]
Hi maintainers,
If you have time, I hope you can check whether the zxdh driver still needs to be modified.
Best regards!
[-- Attachment #1.1.2: Type: text/html , Size: 257 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [v7,9/9] net/zxdh: add zxdh dev configure ops
2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
2024-10-24 11:31 ` [v7,9/9] " Junlong Wang
2024-10-25 9:48 ` Junlong Wang
@ 2024-10-26 2:32 ` Junlong Wang
2024-10-27 16:40 ` [PATCH v7 9/9] " Stephen Hemminger
2024-10-27 16:58 ` Stephen Hemminger
4 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-26 2:32 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, thomas, stephen, wang.yong19
[-- Attachment #1.1.1: Type: text/plain, Size: 416 bytes --]
Hi Ferruh,
I hope this message finds you well.
I have made the revisions according to your review comments, and the final version was submitted on Oct 22nd. Since then, I have not received any feedback from the community.
I would appreciate any suggestions for modification, or a confirmation if there is no problem with the submitted driver.
Thank you for your time and assistance.
Thanks!
[-- Attachment #1.1.2: Type: text/html , Size: 806 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [PATCH v7 5/9] net/zxdh: add msg chan enable implementation
2024-10-22 12:20 ` [PATCH v7 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
@ 2024-10-26 17:05 ` Thomas Monjalon
0 siblings, 0 replies; 76+ messages in thread
From: Thomas Monjalon @ 2024-10-26 17:05 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, ferruh.yigit, stephen, wang.yong19
22/10/2024 14:20, Junlong Wang:
> +enum MSG_VEC {
> + ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
> + ZXDH_MSIX_FROM_MPF,
> + ZXDH_MSIX_FROM_RISCV,
> + ZXDH_MSG_VEC_NUM,
> +};
> +
> enum BAR_MSG_RTN {
> ZXDH_BAR_MSG_OK = 0,
> ZXDH_BAR_MSG_ERR_MSGID,
Again these enums are uppercased and not prefixed.
Please check all other structs which are not prefixed.
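For illustration, a rename along those lines could look like this; a sketch based only on the hunk quoted above, not the final driver code (the enumerators already carry the ZXDH_ prefix, so only the tag changes):

enum zxdh_msg_vec {
	ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
	ZXDH_MSIX_FROM_MPF,
	ZXDH_MSIX_FROM_RISCV,
	ZXDH_MSG_VEC_NUM,
};

The same treatment applies to BAR_MSG_RTN and any other unprefixed tags.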
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops
2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
` (2 preceding siblings ...)
2024-10-26 2:32 ` Junlong Wang
@ 2024-10-27 16:40 ` Stephen Hemminger
2024-10-27 17:03 ` Stephen Hemminger
2024-10-27 16:58 ` Stephen Hemminger
4 siblings, 1 reply; 76+ messages in thread
From: Stephen Hemminger @ 2024-10-27 16:40 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19
On Tue, 22 Oct 2024 20:20:42 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:
> +int32_t zxdh_acquire_lock(struct zxdh_hw *hw)
> +{
> + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
> +
> + /* check whether lock is used */
> + if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
> + return -1;
> +
> + return 0;
> +}
> +
> +int32_t zxdh_release_lock(struct zxdh_hw *hw)
> +{
> + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
> +
> + if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
> + var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
> + zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
> + return 0;
> + }
> +
> + return -1;
> +}
> +
It is your driver, so you are free to name functions as appropriate.
But it would make more sense to make the hardware lock follow the pattern
of existing spinlocks, etc.
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation
2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
@ 2024-10-27 16:47 ` Stephen Hemminger
2024-10-27 16:47 ` Stephen Hemminger
1 sibling, 0 replies; 76+ messages in thread
From: Stephen Hemminger @ 2024-10-27 16:47 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19
On Tue, 22 Oct 2024 20:20:36 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:
> +
> +#ifdef RTE_EXEC_ENV_LINUX
> + #include <dirent.h>
> + #include <fcntl.h>
> +#endif
> +
Why is this necessary? The driver builds without it.
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation
2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
2024-10-27 16:47 ` Stephen Hemminger
@ 2024-10-27 16:47 ` Stephen Hemminger
1 sibling, 0 replies; 76+ messages in thread
From: Stephen Hemminger @ 2024-10-27 16:47 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19
On Tue, 22 Oct 2024 20:20:36 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:
> Add device pci init implementation,
> to obtain PCI capability and read configuration, etc.
>
> Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
> ---
> drivers/net/zxdh/meson.build | 1 +
> drivers/net/zxdh/zxdh_ethdev.c | 43 +++++
> drivers/net/zxdh/zxdh_ethdev.h | 22 ++-
> drivers/net/zxdh/zxdh_pci.c | 290 +++++++++++++++++++++++++++++++++
> drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++
> drivers/net/zxdh/zxdh_queue.h | 105 ++++++++++++
> drivers/net/zxdh/zxdh_rxtx.h | 51 ++++++
> 7 files changed, 660 insertions(+), 3 deletions(-)
> create mode 100644 drivers/net/zxdh/zxdh_pci.c
> create mode 100644 drivers/net/zxdh/zxdh_pci.h
> create mode 100644 drivers/net/zxdh/zxdh_queue.h
> create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
DPDK has switched from GNU-specific zero-length arrays to the C99
standard flexible arrays.
### [PATCH] net/zxdh: add zxdh device pci init implementation
ERROR:FLEXIBLE_ARRAY: Use C99 flexible arrays - see https://docs.kernel.org/process/deprecated.html#zero-length-and-one-element-arrays
#620: FILE: drivers/net/zxdh/zxdh_queue.h:33:
+ uint16_t ring[0];
+};
ERROR:FLEXIBLE_ARRAY: Use C99 flexible arrays - see https://docs.kernel.org/process/deprecated.html#zero-length-and-one-element-arrays
#690: FILE: drivers/net/zxdh/zxdh_queue.h:103:
+ struct vq_desc_extra vq_descx[0];
+};
total: 2 errors, 0 warnings, 0 checks, 698 lines checked
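For reference, the conversion checkpatch asks for is mechanical. A minimal standalone sketch around the first flagged member (alloc_used_ring() is a hypothetical helper, not from the patch):

#include <stdint.h>
#include <stdlib.h>

struct vring_used {
	uint16_t flags;
	uint16_t idx;
	uint16_t ring[];	/* C99 flexible array member, was ring[0] */
};

/* The allocation size math is unchanged by the conversion: */
static struct vring_used *alloc_used_ring(uint16_t n)
{
	return malloc(sizeof(struct vring_used) + n * sizeof(uint16_t));
}

The vq_descx[0] member at zxdh_queue.h:103 converts the same way.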
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops
2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
` (3 preceding siblings ...)
2024-10-27 16:40 ` [PATCH v7 9/9] " Stephen Hemminger
@ 2024-10-27 16:58 ` Stephen Hemminger
4 siblings, 0 replies; 76+ messages in thread
From: Stephen Hemminger @ 2024-10-27 16:58 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19
On Tue, 22 Oct 2024 20:20:42 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:
> provided zxdh dev configure ops for queue
> check, reset, alloc resources, etc.
>
> Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
> ---
> drivers/net/zxdh/meson.build | 1 +
> drivers/net/zxdh/zxdh_common.c | 118 +++++++++
> drivers/net/zxdh/zxdh_common.h | 12 +
> drivers/net/zxdh/zxdh_ethdev.c | 457 +++++++++++++++++++++++++++++++++
> drivers/net/zxdh/zxdh_ethdev.h | 18 +-
> drivers/net/zxdh/zxdh_pci.c | 97 +++++++
> drivers/net/zxdh/zxdh_pci.h | 29 +++
> drivers/net/zxdh/zxdh_queue.c | 131 ++++++++++
> drivers/net/zxdh/zxdh_queue.h | 175 ++++++++++++-
> 9 files changed, 1035 insertions(+), 3 deletions(-)
> create mode 100644 drivers/net/zxdh/zxdh_queue.c
In the future, DPDK wants to re-enable the GCC warning for taking the
address of a packed member. When I enable that (in config/meson.build),
this shows up:
[1478/3078] Compiling C object drivers/libtmp_rte_net_zxdh.a.p/net_zxdh_zxdh_ethdev.c.o
../drivers/net/zxdh/zxdh_ethdev.c: In function ‘zxdh_init_vring’:
../drivers/net/zxdh/zxdh_ethdev.c:541:27: warning: taking address of packed member of ‘struct <anonymous>’ may result in an unaligned pointer value [-Waddress-of-packed-member]
541 | vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size);
| ^~~~~~~~~~~~~~~~~~~
../drivers/net/zxdh/zxdh_ethdev.c: In function ‘zxdh_init_queue’:
../drivers/net/zxdh/zxdh_ethdev.c:682:62: warning: taking address of packed member of ‘union <anonymous>’ may result in an unaligned pointer value [-Waddress-of-packed-member]
682 | struct vring_packed_desc *start_dp = txr[i].tx_packed_indir;
| ^~~
[1479/3078] Compiling C object drivers/libtmp_rte_net_virtio.a.p/net_virtio_virtio_ethdev
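One common way to keep such code warning-clean, sketched here on a made-up struct rather than the driver's actual fix, is to avoid letting a typed unaligned pointer escape and to copy through memcpy() instead, since its void * parameters carry no alignment requirement; where the packed layout has no padding anyway, simply dropping __rte_packed is even simpler:

#include <stdint.h>
#include <string.h>

struct hdr {
	uint8_t type;
	uint32_t len;	/* unaligned because of the packing */
} __attribute__((packed));

static uint32_t hdr_len(const struct hdr *h)
{
	uint32_t len;

	/* memcpy() takes void *, so no unaligned pointer type escapes
	 * and -Waddress-of-packed-member stays quiet. */
	memcpy(&len, &h->len, sizeof(len));
	return len;
}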
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops
2024-10-27 16:40 ` [PATCH v7 9/9] " Stephen Hemminger
@ 2024-10-27 17:03 ` Stephen Hemminger
0 siblings, 0 replies; 76+ messages in thread
From: Stephen Hemminger @ 2024-10-27 17:03 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19
On Sun, 27 Oct 2024 09:40:48 -0700
Stephen Hemminger <stephen@networkplumber.org> wrote:
> On Tue, 22 Oct 2024 20:20:42 +0800
> Junlong Wang <wang.junlong1@zte.com.cn> wrote:
>
> > +int32_t zxdh_acquire_lock(struct zxdh_hw *hw)
> > +{
> > + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
> > +
> > + /* check whether lock is used */
> > + if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
> > + return -1;
> > +
> > + return 0;
> > +}
> > +
> > +int32_t zxdh_release_lock(struct zxdh_hw *hw)
> > +{
> > + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
> > +
> > + if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
> > + var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
> > + zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
> > + return 0;
> > + }
> > +
> > + return -1;
> > +}
> > +
>
> It is your driver, so you are free to name functions as appropriate.
>
> But it would make more sense to make the hardware lock follow the pattern
> of existing spinlock's etc.
I am suggesting:
static bool zxdh_try_lock(hw);
void zxdh_release_lock(hw);
also, move the loop into a new function, something like:
int zxdh_timedlock(hw, uint32_t us);
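A hypothetical sketch of that shape, reusing zxdh_read_comm_reg() and the masks from the hunk quoted above (the 1 us poll interval and the timeout accounting are assumptions, not from the patch):

static bool zxdh_try_lock(struct zxdh_hw *hw)
{
	uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);

	/* the hardware reports the enable bit set once the lock is grabbed */
	return (var & ZXDH_VF_LOCK_ENABLE_MASK) != 0;
}

static int zxdh_timedlock(struct zxdh_hw *hw, uint32_t us)
{
	uint32_t elapsed;

	for (elapsed = 0; !zxdh_try_lock(hw); elapsed++) {
		if (elapsed >= us)
			return -1;	/* timed out; caller decides how to recover */
		rte_delay_us_block(1);	/* assumed poll interval */
	}
	return 0;
}

zxdh_release_lock() can keep the register write quoted above.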
^ permalink raw reply [flat|nested] 76+ messages in thread
* Re: [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation
2024-10-22 12:20 ` [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
@ 2024-10-27 17:07 ` Stephen Hemminger
0 siblings, 0 replies; 76+ messages in thread
From: Stephen Hemminger @ 2024-10-27 17:07 UTC (permalink / raw)
To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19
On Tue, 22 Oct 2024 20:20:40 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:
> +/* Interrupt handler triggered by NIC for handling specific interrupt. */
> +static void zxdh_fromriscv_intr_handler(void *param)
> +{
> + struct rte_eth_dev *dev = param;
> + struct zxdh_hw *hw = dev->data->dev_private;
> + uint64_t virt_addr = 0;
> +
> + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
There is no need to initialize a variable like virt_addr which is set later.
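In other words, the declaration and the first assignment can be merged; a sketch of the two reviewed lines combined:

	uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);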
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v8 0/9] net/zxdh: introduce net zxdh driver
2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
` (8 more replies)
0 siblings, 9 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 3158 bytes --]
v8:
- fix flexible arrays and -Waddress-of-packed-member errors.
- all structs, enums, defines, etc. now use the zxdh/ZXDH_ prefix.
- use zxdh_try/release_lock, and move the loop into zxdh_timedlock,
  making the hardware lock follow the spinlock pattern.
v7:
- add release notes and fix zxdh.rst issues.
- avoid using pthread; use rte_spinlock_lock instead.
- use the ZXDH_ prefix for some definitions.
- resolve issues according to Thomas's comments.
v6:
- Resolve ci/intel compilation issues.
- fix meson.build indentation in earlier patch.
V5:
- split the driver into multiple patches; this part provides the zxdh driver base,
with dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc. to follow later.
- fix errors reported by scripts.
- move the product link in zxdh.rst.
- fix the meson check to use RTE_ARCH_X86_64/RTE_ARCH_ARM64.
- modify other comments according to Ferruh's comments.
Junlong Wang (9):
net/zxdh: add zxdh ethdev pmd driver
net/zxdh: add logging implementation
net/zxdh: add zxdh device pci init implementation
net/zxdh: add msg chan and msg hwlock init
net/zxdh: add msg chan enable implementation
net/zxdh: add zxdh get device backend infos
net/zxdh: add configure zxdh intr implementation
net/zxdh: add zxdh dev infos get ops
net/zxdh: add zxdh dev configure ops
MAINTAINERS | 6 +
doc/guides/nics/features/zxdh.ini | 9 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 31 +
doc/guides/rel_notes/release_24_11.rst | 4 +
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 22 +
drivers/net/zxdh/zxdh_common.c | 385 ++++++++++
drivers/net/zxdh/zxdh_common.h | 42 ++
drivers/net/zxdh/zxdh_ethdev.c | 994 +++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 104 +++
drivers/net/zxdh/zxdh_logs.h | 40 +
drivers/net/zxdh/zxdh_msg.c | 985 ++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 230 ++++++
drivers/net/zxdh/zxdh_pci.c | 445 +++++++++++
drivers/net/zxdh/zxdh_pci.h | 192 +++++
drivers/net/zxdh/zxdh_queue.c | 123 +++
drivers/net/zxdh/zxdh_queue.h | 282 +++++++
drivers/net/zxdh/zxdh_rxtx.h | 55 ++
19 files changed, 3951 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
create mode 100644 drivers/net/zxdh/zxdh_logs.h
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.c
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 6229 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-30 9:01 ` [PATCH v8 2/9] net/zxdh: add logging implementation Junlong Wang
` (7 subsequent siblings)
8 siblings, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 8771 bytes --]
Add basic zxdh ethdev init and register PCI probe functions
Update doc files.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
MAINTAINERS | 6 ++
doc/guides/nics/features/zxdh.ini | 9 +++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 31 +++++++++
doc/guides/rel_notes/release_24_11.rst | 4 ++
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 18 +++++
drivers/net/zxdh/zxdh_ethdev.c | 92 ++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 44 ++++++++++++
9 files changed, 206 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
diff --git a/MAINTAINERS b/MAINTAINERS
index ab64230920..a998bf0fd5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1043,6 +1043,12 @@ F: drivers/net/virtio/
F: doc/guides/nics/virtio.rst
F: doc/guides/nics/features/virtio*.ini
+ZTE zxdh
+M: Lijie Shan <shan.lijie@zte.com.cn>
+F: drivers/net/zxdh/
+F: doc/guides/nics/zxdh.rst
+F: doc/guides/nics/features/zxdh.ini
+
Wind River AVP
M: Steven Webster <steven.webster@windriver.com>
M: Matt Peters <matt.peters@windriver.com>
diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini
new file mode 100644
index 0000000000..05c8091ed7
--- /dev/null
+++ b/doc/guides/nics/features/zxdh.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'zxdh' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
+ARMv8 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc7988a..8e371ac4a5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -69,3 +69,4 @@ Network Interface Controller Drivers
vhost
virtio
vmxnet3
+ zxdh
diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
new file mode 100644
index 0000000000..920ff5175e
--- /dev/null
+++ b/doc/guides/nics/zxdh.rst
@@ -0,0 +1,31 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2024 ZTE Corporation.
+
+ZXDH Poll Mode Driver
+======================
+
+The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
+for 25/100 Gbps ZXDH NX Series Ethernet Controller based on
+the ZTE Ethernet Controller E310/E312.
+
+- Learn about ZXDH NX Series Ethernet Controller NICs using
+ `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
+
+Features
+--------
+
+Features of the ZXDH PMD are:
+
+- Multi arch support: x86_64, ARMv8.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+
+X86-32, Power8, ARMv7, RISC-V, Windows and BSD are not supported yet.
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index fa4822d928..986a611e08 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -161,6 +161,10 @@ New Features
* Added initialization of FPGA modules related to flow HW offload.
* Added basic handling of the virtual queues.
+ * **Updated ZTE zxdh net driver.**
+
+ * Added ethdev driver support for zxdh NX Series Ethernet Controller.
+
* **Added cryptodev queue pair reset support.**
A new API ``rte_cryptodev_queue_pair_reset`` is added
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index fb6d34b782..0a12914534 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -62,6 +62,7 @@ drivers = [
'vhost',
'virtio',
'vmxnet3',
+ 'zxdh',
]
std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
new file mode 100644
index 0000000000..932fb1c835
--- /dev/null
+++ b/drivers/net/zxdh/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 ZTE Corporation
+
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+if not dpdk_conf.has('RTE_ARCH_X86_64') and not dpdk_conf.has('RTE_ARCH_ARM64')
+ build = false
+ reason = 'only supported on x86_64 and aarch64'
+ subdir_done()
+endif
+
+sources = files(
+ 'zxdh_ethdev.c',
+)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
new file mode 100644
index 0000000000..5b6c9ec1bf
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <ethdev_pci.h>
+#include <bus_pci_driver.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ int ret = 0;
+
+ eth_dev->dev_ops = NULL;
+
+ /* Allocate memory for storing MAC addresses */
+ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
+ if (eth_dev->data->mac_addrs == NULL)
+ return -ENOMEM;
+
+ memset(hw, 0, sizeof(*hw));
+ hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
+ if (hw->bar_addr[0] == 0)
+ return -EIO;
+
+ hw->device_id = pci_dev->id.device_id;
+ hw->port_id = eth_dev->data->port_id;
+ hw->eth_dev = eth_dev;
+ hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ hw->is_pf = 0;
+
+ if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID ||
+ pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) {
+ hw->is_pf = 1;
+ }
+
+ return ret;
+}
+
+static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct zxdh_hw),
+ zxdh_eth_dev_init);
+}
+
+static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+ int ret = 0;
+
+ return ret;
+}
+
+static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ int ret = 0;
+
+ ret = zxdh_dev_close(eth_dev);
+
+ return ret;
+}
+
+static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev)
+{
+ int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit);
+
+ return ret;
+}
+
+static const struct rte_pci_id pci_id_zxdh_map[] = {
+ {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E310_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E310_VF_DEVICEID)},
+ {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_VF_DEVICEID)},
+ {.vendor_id = 0, /* sentinel */ },
+};
+static struct rte_pci_driver zxdh_pmd = {
+ .id_table = pci_id_zxdh_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = zxdh_eth_pci_probe,
+ .remove = zxdh_eth_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
new file mode 100644
index 0000000000..93375aea11
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_ETHDEV_H
+#define ZXDH_ETHDEV_H
+
+#include "ethdev_driver.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* ZXDH PCI vendor/device ID. */
+#define ZXDH_PCI_VENDOR_ID 0x1cf2
+
+#define ZXDH_E310_PF_DEVICEID 0x8061
+#define ZXDH_E310_VF_DEVICEID 0x8062
+#define ZXDH_E312_PF_DEVICEID 0x8049
+#define ZXDH_E312_VF_DEVICEID 0x8060
+
+#define ZXDH_MAX_UC_MAC_ADDRS 32
+#define ZXDH_MAX_MC_MAC_ADDRS 32
+#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+
+#define ZXDH_NUM_BARS 2
+
+struct zxdh_hw {
+ struct rte_eth_dev *eth_dev;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+
+ uint32_t speed;
+ uint16_t device_id;
+ uint16_t port_id;
+
+ uint8_t duplex;
+ uint8_t is_pf;
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_ETHDEV_H */
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 16486 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v8 2/9] net/zxdh: add logging implementation
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
2024-10-30 9:01 ` [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
` (6 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 3417 bytes --]
Add zxdh logging implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++--
drivers/net/zxdh/zxdh_logs.h | 40 ++++++++++++++++++++++++++++++++++
2 files changed, 53 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_logs.h
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 5b6c9ec1bf..c911284423 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -7,6 +7,7 @@
#include <rte_ethdev.h>
#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL)
+ if (eth_dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN);
return -ENOMEM;
+ }
memset(hw, 0, sizeof(*hw));
hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
- if (hw->bar_addr[0] == 0)
+ if (hw->bar_addr[0] == 0) {
+ PMD_INIT_LOG(ERR, "Bad mem resource.");
return -EIO;
+ }
hw->device_id = pci_dev->id.device_id;
hw->port_id = eth_dev->data->port_id;
@@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = {
RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE);
diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h
new file mode 100644
index 0000000000..a8a6a3135b
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_logs.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_LOGS_H
+#define ZXDH_LOGS_H
+
+#include <rte_log.h>
+
+extern int zxdh_logtype_init;
+#define RTE_LOGTYPE_ZXDH_INIT zxdh_logtype_init
+#define PMD_INIT_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_INIT, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_driver;
+#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver
+#define PMD_DRV_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_rx;
+#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx
+#define PMD_RX_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_tx;
+#define RTE_LOGTYPE_ZXDH_TX zxdh_logtype_tx
+#define PMD_TX_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_TX, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_msg;
+#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg
+#define PMD_MSG_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+#endif /* ZXDH_LOGS_H */
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 6146 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-10-30 9:01 ` [PATCH v8 2/9] net/zxdh: add logging implementation Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
2024-10-30 14:55 ` David Marchand
2024-10-30 9:01 ` [PATCH v8 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
` (5 subsequent siblings)
8 siblings, 1 reply; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 22959 bytes --]
Add device pci init implementation,
to obtain PCI capability and read configuration, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_ethdev.c | 43 +++++
drivers/net/zxdh/zxdh_ethdev.h | 21 ++-
drivers/net/zxdh/zxdh_pci.c | 285 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++
drivers/net/zxdh/zxdh_queue.h | 105 ++++++++++++
drivers/net/zxdh/zxdh_rxtx.h | 51 ++++++
7 files changed, 655 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 932fb1c835..7db4e7bc71 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -15,4 +15,5 @@ endif
sources = files(
'zxdh_ethdev.c',
+ 'zxdh_pci.c',
)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index c911284423..8877855965 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -8,6 +8,40 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+
+struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+
+static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
+{
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int ret = 0;
+
+ ret = zxdh_read_pci_caps(pci_dev, hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed.", hw->port_id);
+ goto err;
+ }
+
+ zxdh_hw_internal[hw->port_id].zxdh_vtpci_ops = &zxdh_dev_pci_ops;
+ zxdh_vtpci_reset(hw);
+ zxdh_get_pci_dev_config(hw);
+
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]);
+
+ /* If host does not support both status and MSI-X then disable LSC */
+ if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE)
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+ else
+ eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
+
+ return 0;
+
+err:
+ PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id);
+ return ret;
+}
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
hw->is_pf = 1;
}
+ ret = zxdh_init_device(eth_dev);
+ if (ret < 0)
+ goto err_zxdh_init;
+
+ return ret;
+
+err_zxdh_init:
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
return ret;
}
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 93375aea11..8be5af6aeb 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -5,6 +5,8 @@
#ifndef ZXDH_ETHDEV_H
#define ZXDH_ETHDEV_H
+#include <rte_ether.h>
+
#include "ethdev_driver.h"
#ifdef __cplusplus
@@ -24,15 +26,30 @@ extern "C" {
#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
#define ZXDH_NUM_BARS 2
+#define ZXDH_RX_QUEUES_MAX 128U
+#define ZXDH_TX_QUEUES_MAX 128U
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
- uint64_t bar_addr[ZXDH_NUM_BARS];
+ struct zxdh_pci_common_cfg *common_cfg;
+ struct zxdh_net_config *dev_cfg;
- uint32_t speed;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+ uint64_t host_features;
+ uint64_t guest_features;
+ uint32_t max_queue_pairs;
+ uint32_t speed;
+ uint32_t notify_off_multiplier;
+ uint16_t *notify_base;
+ uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint8_t *isr;
+ uint8_t weak_barriers;
+ uint8_t use_msix;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+
uint8_t duplex;
uint8_t is_pf;
};
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
new file mode 100644
index 0000000000..8fcab6e888
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -0,0 +1,285 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <unistd.h>
+
+#include <rte_io.h>
+#include <rte_bus.h>
+#include <rte_pci.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_pci.h"
+#include "zxdh_logs.h"
+#include "zxdh_queue.h"
+
+#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \
+ (1ULL << ZXDH_NET_F_MRG_RXBUF | \
+ 1ULL << ZXDH_NET_F_STATUS | \
+ 1ULL << ZXDH_NET_F_MQ | \
+ 1ULL << ZXDH_F_ANY_LAYOUT | \
+ 1ULL << ZXDH_F_VERSION_1 | \
+ 1ULL << ZXDH_F_RING_PACKED | \
+ 1ULL << ZXDH_F_IN_ORDER | \
+ 1ULL << ZXDH_F_NOTIFICATION_DATA | \
+ 1ULL << ZXDH_NET_F_MAC)
+
+static void zxdh_read_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ void *dst,
+ int32_t length)
+{
+ int32_t i = 0;
+ uint8_t *p = NULL;
+ uint8_t old_gen = 0;
+ uint8_t new_gen = 0;
+
+ do {
+ old_gen = rte_read8(&hw->common_cfg->config_generation);
+
+ p = dst;
+ for (i = 0; i < length; i++)
+ *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i);
+
+ new_gen = rte_read8(&hw->common_cfg->config_generation);
+ } while (old_gen != new_gen);
+}
+
+static void zxdh_write_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ const void *src,
+ int32_t length)
+{
+ int32_t i = 0;
+ const uint8_t *p = src;
+
+ for (i = 0; i < length; i++)
+ rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
+}
+
+static uint8_t zxdh_get_status(struct zxdh_hw *hw)
+{
+ return rte_read8(&hw->common_cfg->device_status);
+}
+
+static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ rte_write8(status, &hw->common_cfg->device_status);
+}
+
+static uint64_t zxdh_get_features(struct zxdh_hw *hw)
+{
+ uint32_t features_lo = 0;
+ uint32_t features_hi = 0;
+
+ rte_write32(0, &hw->common_cfg->device_feature_select);
+ features_lo = rte_read32(&hw->common_cfg->device_feature);
+
+ rte_write32(1, &hw->common_cfg->device_feature_select);
+ features_hi = rte_read32(&hw->common_cfg->device_feature);
+
+ return ((uint64_t)features_hi << 32) | features_lo;
+}
+
+static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
+{
+ rte_write32(0, &hw->common_cfg->guest_feature_select);
+ rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature);
+ rte_write32(1, &hw->common_cfg->guest_feature_select);
+ rte_write32(features >> 32, &hw->common_cfg->guest_feature);
+}
+
+const struct zxdh_pci_ops zxdh_dev_pci_ops = {
+ .read_dev_cfg = zxdh_read_dev_config,
+ .write_dev_cfg = zxdh_write_dev_config,
+ .get_status = zxdh_get_status,
+ .set_status = zxdh_set_status,
+ .get_features = zxdh_get_features,
+ .set_features = zxdh_set_features,
+};
+
+uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw)
+{
+ return ZXDH_VTPCI_OPS(hw)->get_features(hw);
+}
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw)
+{
+ PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id);
+ uint32_t retry = 0;
+
+ ZXDH_VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET);
+ /* Flush status write and wait device ready max 3 seconds. */
+ while (ZXDH_VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) {
+ ++retry;
+ rte_delay_ms(1);
+ }
+ PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
+}
+
+static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
+{
+ uint8_t bar = cap->bar;
+ uint32_t length = cap->length;
+ uint32_t offset = cap->offset;
+
+ if (bar >= PCI_MAX_RESOURCE) {
+ PMD_INIT_LOG(ERR, "invalid bar: %u", bar);
+ return NULL;
+ }
+ if (offset + length < offset) {
+ PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length);
+ return NULL;
+ }
+ if (offset + length > dev->mem_resource[bar].len) {
+ PMD_INIT_LOG(ERR, "invalid cap: overflows bar space");
+ return NULL;
+ }
+ uint8_t *base = dev->mem_resource[bar].addr;
+
+ if (base == NULL) {
+ PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar);
+ return NULL;
+ }
+ return base + offset;
+}
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw)
+{
+ uint8_t pos = 0;
+ int32_t ret = 0;
+
+ if (dev->mem_resource[0].addr == NULL) {
+ PMD_INIT_LOG(ERR, "bar0 base addr is NULL");
+ return -1;
+ }
+
+ ret = rte_pci_read_config(dev, &pos, 1, ZXDH_PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret);
+ return -1;
+ }
+ while (pos) {
+ struct zxdh_pci_cap cap;
+
+ ret = rte_pci_read_config(dev, &cap, 2, pos);
+ if (ret != 2) {
+ PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap.cap_vndr == ZXDH_PCI_CAP_ID_MSIX) {
+ /**
+ * Transitional devices would also have this capability,
+ * that's why we also check if msix is enabled.
+ * 1st byte is cap ID; 2nd byte is the position of next cap;
+ * next two bytes are the flags.
+ */
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2);
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d",
+ pos + 2, ret);
+ break;
+ }
+ hw->use_msix = (flags & ZXDH_PCI_MSIX_ENABLE) ?
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED;
+ }
+ if (cap.cap_vndr != ZXDH_PCI_CAP_ID_VNDR) {
+ PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x",
+ pos, cap.cap_vndr);
+ goto next;
+ }
+ ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u",
+ pos, cap.cfg_type, cap.bar, cap.offset, cap.length);
+ switch (cap.cfg_type) {
+ case ZXDH_PCI_CAP_COMMON_CFG:
+ hw->common_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_NOTIFY_CFG: {
+ ret = rte_pci_read_config(dev, &hw->notify_off_multiplier,
+ 4, pos + sizeof(cap));
+ if (ret != 4)
+ PMD_INIT_LOG(ERR,
+ "failed to read notify_off_multiplier, ret %d", ret);
+ else
+ hw->notify_base = get_cfg_addr(dev, &cap);
+ break;
+ }
+ case ZXDH_PCI_CAP_DEVICE_CFG:
+ hw->dev_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_ISR_CFG:
+ hw->isr = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_PCI_CFG: {
+ hw->pcie_id = *(uint16_t *)&cap.padding[1];
+ PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id);
+ uint16_t pcie_id = hw->pcie_id;
+
+ if ((pcie_id >> 11) & 0x1) /* PF */ {
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7);
+ } else { /* VF */
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u",
+ pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff);
+ }
+ break;
+ }
+ }
+next:
+ pos = cap.cap_next;
+ }
+ if (hw->common_cfg == NULL || hw->notify_base == NULL ||
+ hw->dev_cfg == NULL || hw->isr == NULL) {
+ PMD_INIT_LOG(ERR, "no zxdh pci device found.");
+ return -1;
+ }
+ return 0;
+}
+
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length)
+{
+ ZXDH_VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length);
+}
+
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
+{
+ uint64_t guest_features = 0;
+ uint64_t nego_features = 0;
+ uint32_t max_queue_pairs = 0;
+
+ hw->host_features = zxdh_vtpci_get_features(hw);
+
+ guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES;
+ nego_features = guest_features & hw->host_features;
+
+ hw->guest_features = nego_features;
+
+ if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) {
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac),
+ &hw->mac_addr, RTE_ETHER_ADDR_LEN);
+ } else {
+ rte_eth_random_addr(&hw->mac_addr[0]);
+ }
+
+ zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs),
+ &max_queue_pairs, sizeof(max_queue_pairs));
+
+ if (max_queue_pairs == 0)
+ hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX;
+ else
+ hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs);
+ PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs);
+
+ return 0;
+}
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
new file mode 100644
index 0000000000..bb5ae64ddf
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_PCI_H
+#define ZXDH_PCI_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#include <bus_pci_driver.h>
+
+#include "zxdh_ethdev.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+enum zxdh_msix_status {
+ ZXDH_MSIX_NONE = 0,
+ ZXDH_MSIX_DISABLED = 1,
+ ZXDH_MSIX_ENABLED = 2
+};
+
+#define ZXDH_PCI_CAPABILITY_LIST 0x34
+#define ZXDH_PCI_CAP_ID_VNDR 0x09
+#define ZXDH_PCI_CAP_ID_MSIX 0x11
+
+#define ZXDH_PCI_MSIX_ENABLE 0x8000
+
+#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
+#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
+#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
+#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */
+#define ZXDH_F_VERSION_1 32
+#define ZXDH_F_RING_PACKED 34
+#define ZXDH_F_IN_ORDER 35
+#define ZXDH_F_NOTIFICATION_DATA 38
+
+#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */
+#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */
+#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */
+#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */
+#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */
+
+/* Status byte for guest to report progress. */
+#define ZXDH_CONFIG_STATUS_RESET 0x00
+#define ZXDH_CONFIG_STATUS_ACK 0x01
+#define ZXDH_CONFIG_STATUS_DRIVER 0x02
+#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04
+#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08
+#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
+#define ZXDH_CONFIG_STATUS_FAILED 0x80
+
+struct zxdh_net_config {
+ /* The config defining mac address (if ZXDH_NET_F_MAC) */
+ uint8_t mac[RTE_ETHER_ADDR_LEN];
+ /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */
+ uint16_t status;
+ uint16_t max_virtqueue_pairs;
+ uint16_t mtu;
+ /*
+ * speed, in units of 1Mb. All values 0 to INT_MAX are legal.
+ * Any other value stands for unknown.
+ */
+ uint32_t speed;
+ /* 0x00 - half duplex
+ * 0x01 - full duplex
+ * Any other value stands for unknown.
+ */
+ uint8_t duplex;
+} __rte_packed;
+
+/* This is the PCI capability header: */
+struct zxdh_pci_cap {
+ uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */
+ uint8_t cap_next; /* Generic PCI field: next ptr. */
+ uint8_t cap_len; /* Generic PCI field: capability length */
+ uint8_t cfg_type; /* Identifies the structure. */
+ uint8_t bar; /* Where to find it. */
+ uint8_t padding[3]; /* Pad to full dword. */
+ uint32_t offset; /* Offset within bar. */
+ uint32_t length; /* Length of the structure, in bytes. */
+};
+
+/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */
+struct zxdh_pci_common_cfg {
+ /* About the whole device. */
+ uint32_t device_feature_select; /* read-write */
+ uint32_t device_feature; /* read-only */
+ uint32_t guest_feature_select; /* read-write */
+ uint32_t guest_feature; /* read-write */
+ uint16_t msix_config; /* read-write */
+ uint16_t num_queues; /* read-only */
+ uint8_t device_status; /* read-write */
+ uint8_t config_generation; /* read-only */
+
+ /* About a specific virtqueue. */
+ uint16_t queue_select; /* read-write */
+ uint16_t queue_size; /* read-write, power of 2. */
+ uint16_t queue_msix_vector; /* read-write */
+ uint16_t queue_enable; /* read-write */
+ uint16_t queue_notify_off; /* read-only */
+ uint32_t queue_desc_lo; /* read-write */
+ uint32_t queue_desc_hi; /* read-write */
+ uint32_t queue_avail_lo; /* read-write */
+ uint32_t queue_avail_hi; /* read-write */
+ uint32_t queue_used_lo; /* read-write */
+ uint32_t queue_used_hi; /* read-write */
+};
+
+static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
+{
+ return (hw->guest_features & (1ULL << bit)) != 0;
+}
+
+struct zxdh_pci_ops {
+ void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
+ void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
+
+ uint8_t (*get_status)(struct zxdh_hw *hw);
+ void (*set_status)(struct zxdh_hw *hw, uint8_t status);
+
+ uint64_t (*get_features)(struct zxdh_hw *hw);
+ void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+};
+
+struct zxdh_hw_internal {
+ const struct zxdh_pci_ops *zxdh_vtpci_ops;
+};
+
+#define ZXDH_VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].zxdh_vtpci_ops)
+
+extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+extern const struct zxdh_pci_ops zxdh_dev_pci_ops;
+
+void zxdh_vtpci_reset(struct zxdh_hw *hw);
+void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset,
+ void *dst, int32_t length);
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
+
+uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_PCI_H */
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
new file mode 100644
index 0000000000..fd73f14e2d
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_QUEUE_H
+#define ZXDH_QUEUE_H
+
+#include <stdint.h>
+
+#include <rte_common.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_rxtx.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** ring descriptors: 16 bytes.
+ * These can chain together via "next".
+ **/
+struct zxdh_vring_desc {
+ uint64_t addr; /* Address (guest-physical). */
+ uint32_t len; /* Length. */
+ uint16_t flags; /* The flags as indicated above. */
+ uint16_t next; /* We chain unused descriptors via this. */
+} __rte_packed;
+
+struct zxdh_vring_avail {
+ uint16_t flags;
+ uint16_t idx;
+ uint16_t ring[];
+} __rte_packed;
+
+struct zxdh_vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+} __rte_packed;
+
+struct zxdh_vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+} __rte_packed;
+
+struct zxdh_vring_packed {
+ uint32_t num;
+ struct zxdh_vring_packed_desc *desc;
+ struct zxdh_vring_packed_desc_event *driver;
+ struct zxdh_vring_packed_desc_event *device;
+} __rte_packed;
+
+struct zxdh_vq_desc_extra {
+ void *cookie;
+ uint16_t ndescs;
+ uint16_t next;
+} __rte_packed;
+
+struct zxdh_virtqueue {
+ struct zxdh_hw *hw; /**< zxdh_hw structure pointer. */
+ struct {
+ /**< vring keeping descs and events */
+ struct zxdh_vring_packed ring;
+ uint8_t used_wrap_counter;
+ uint8_t rsv;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ uint16_t rsv1;
+ } __rte_packed vq_packed;
+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
+ uint16_t vq_nentries; /**< vring desc numbers */
+ uint16_t vq_free_cnt; /**< num of desc available */
+ uint16_t vq_avail_idx; /**< sync until needed */
+ uint16_t vq_free_thresh; /**< free threshold */
+ uint16_t rsv2;
+
+ void *vq_ring_virt_mem; /**< linear address of vring*/
+ uint32_t vq_ring_size;
+
+ union {
+ struct zxdh_virtnet_rx rxq;
+ struct zxdh_virtnet_tx txq;
+ };
+
+ /** < physical address of vring,
+ * or virtual address for virtio_user.
+ **/
+ rte_iova_t vq_ring_mem;
+
+ /**
+ * Head of the free chain in the descriptor table. If
+ * there are no free descriptors, this will be set to
+ * VQ_RING_DESC_CHAIN_END.
+ **/
+ uint16_t vq_desc_head_idx;
+ uint16_t vq_desc_tail_idx;
+ uint16_t vq_queue_index; /**< PCI queue index */
+ uint16_t offset; /**< relative offset to obtain addr in mbuf */
+ uint16_t *notify_addr;
+ struct rte_mbuf **sw_ring; /**< RX software ring. */
+ struct zxdh_vq_desc_extra vq_descx[];
+} __rte_packed;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_QUEUE_H */
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
new file mode 100644
index 0000000000..ccac7e7834
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_RXTX_H
+#define ZXDH_RXTX_H
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_mbuf_core.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct zxdh_virtnet_stats {
+ uint64_t packets;
+ uint64_t bytes;
+ uint64_t errors;
+ uint64_t multicast;
+ uint64_t broadcast;
+ uint64_t truncated_err;
+ uint64_t size_bins[8];
+};
+
+struct zxdh_virtnet_rx {
+ struct zxdh_virtqueue *vq;
+
+ /* dummy mbuf, for wraparound when processing RX ring. */
+ struct rte_mbuf fake_mbuf;
+
+ uint64_t mbuf_initializer; /* value to init mbufs. */
+ struct rte_mempool *mpool; /* mempool for mbuf allocation */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct zxdh_virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate RX ring. */
+} __rte_packed;
+
+struct zxdh_virtnet_tx {
+ struct zxdh_virtqueue *vq;
+ const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */
+ rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct zxdh_virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate TX ring. */
+} __rte_packed;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_RXTX_H */
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 53777 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v8 4/9] net/zxdh: add msg chan and msg hwlock init
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (2 preceding siblings ...)
2024-10-30 9:01 ` [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
2024-10-30 9:01 ` [PATCH v8 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
` (4 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 9083 bytes --]
Add msg channel and hwlock init implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_ethdev.c | 15 +++
drivers/net/zxdh/zxdh_ethdev.h | 1 +
drivers/net/zxdh/zxdh_msg.c | 161 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 67 ++++++++++++++
5 files changed, 245 insertions(+)
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 7db4e7bc71..2e0c8fddae 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -16,4 +16,5 @@ endif
sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
+ 'zxdh_msg.c',
)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 8877855965..2dcf144fc9 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -9,6 +9,7 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
#include "zxdh_pci.h"
+#include "zxdh_msg.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
@@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret < 0)
goto err_zxdh_init;
+ ret = zxdh_msg_chan_init();
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to init bar msg chan");
+ goto err_zxdh_init;
+ }
+ hw->msg_chan_init = 1;
+
+ ret = zxdh_msg_chan_hwlock_init(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
+ zxdh_bar_msg_chan_exit();
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
return ret;
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 8be5af6aeb..5902704923 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -52,6 +52,7 @@ struct zxdh_hw {
uint8_t duplex;
uint8_t is_pf;
+ uint8_t msg_chan_init;
};
#ifdef __cplusplus
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
new file mode 100644
index 0000000000..9dcf99f1f7
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_spinlock.h>
+#include <rte_cycles.h>
+#include <inttypes.h>
+#include <rte_malloc.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+
+#define ZXDH_REPS_INFO_FLAG_USABLE 0x00
+#define ZXDH_BAR_SEQID_NUM_MAX 256
+
+#define ZXDH_PCIEID_IS_PF_MASK (0x0800)
+#define ZXDH_PCIEID_PF_IDX_MASK (0x0700)
+#define ZXDH_PCIEID_VF_IDX_MASK (0x00ff)
+#define ZXDH_PCIEID_EP_IDX_MASK (0x7000)
+/* PCIEID bit field offset */
+#define ZXDH_PCIEID_PF_IDX_OFFSET (8)
+#define ZXDH_PCIEID_EP_IDX_OFFSET (12)
+
+#define ZXDH_MULTIPLY_BY_8(x) ((x) << 3)
+#define ZXDH_MULTIPLY_BY_32(x) ((x) << 5)
+#define ZXDH_MULTIPLY_BY_256(x) ((x) << 8)
+
+#define ZXDH_MAX_EP_NUM (4)
+#define ZXDH_MAX_HARD_SPINLOCK_NUM (511)
+
+#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000)
+#define ZXDH_FW_SHRD_OFFSET (0x5000)
+#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800)
+#define ZXDH_HW_LABEL_OFFSET \
+ (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT)
+
+struct zxdh_dev_stat {
+ bool is_mpf_scanned;
+ bool is_res_init;
+ int16_t dev_cnt; /* probe cnt */
+};
+struct zxdh_dev_stat g_dev_stat = {0};
+
+struct zxdh_seqid_item {
+ void *reps_addr;
+ uint16_t id;
+ uint16_t buffer_len;
+ uint16_t flag;
+};
+
+struct zxdh_seqid_ring {
+ uint16_t cur_id;
+ rte_spinlock_t lock;
+ struct zxdh_seqid_item reps_info_tbl[ZXDH_BAR_SEQID_NUM_MAX];
+};
+struct zxdh_seqid_ring g_seqid_ring = {0};
+
+static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
+{
+ uint16_t lock_id = 0;
+ uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET;
+ uint16_t ep_idx = (src_pcieid & ZXDH_PCIEID_EP_IDX_MASK) >> ZXDH_PCIEID_EP_IDX_OFFSET;
+
+ switch (dst) {
+ /* msg to risc */
+ case ZXDH_MSG_CHAN_END_RISC:
+ lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx;
+ break;
+ /* msg to pf/vf */
+ case ZXDH_MSG_CHAN_END_VF:
+ case ZXDH_MSG_CHAN_END_PF:
+ lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx +
+ ZXDH_MULTIPLY_BY_8(1 + ZXDH_MAX_EP_NUM);
+ break;
+ default:
+ lock_id = 0;
+ break;
+ }
+ if (lock_id >= ZXDH_MAX_HARD_SPINLOCK_NUM)
+ lock_id = 0;
+
+ return lock_id;
+}
+
+static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value)
+{
+ *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value;
+}
+
+static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data)
+{
+ *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
+}
+
+static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
+{
+ label_write((uint64_t)label_addr, virt_lock_id, 0);
+ spinlock_write(virt_addr, virt_lock_id, 0);
+ return 0;
+}
+
+/**
+ * PF initializes the hard spinlock addresses
+ */
+static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
+{
+ int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC);
+
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + ZXDH_HW_LABEL_OFFSET);
+ lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF);
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + ZXDH_HW_LABEL_OFFSET);
+ return 0;
+}
+
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->is_pf)
+ return 0;
+ return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX]));
+}
+
+static rte_spinlock_t chan_lock;
+int zxdh_msg_chan_init(void)
+{
+ uint16_t seq_id = 0;
+
+ g_dev_stat.dev_cnt++;
+ if (g_dev_stat.is_res_init)
+ return ZXDH_BAR_MSG_OK;
+
+ rte_spinlock_init(&chan_lock);
+ g_seqid_ring.cur_id = 0;
+ rte_spinlock_init(&g_seqid_ring.lock);
+
+ for (seq_id = 0; seq_id < ZXDH_BAR_SEQID_NUM_MAX; seq_id++) {
+ struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[seq_id];
+
+ reps_info->id = seq_id;
+ reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE;
+ }
+ g_dev_stat.is_res_init = true;
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_bar_msg_chan_exit(void)
+{
+ if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0))
+ return ZXDH_BAR_MSG_OK;
+
+ g_dev_stat.is_res_init = false;
+ return ZXDH_BAR_MSG_OK;
+}
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
new file mode 100644
index 0000000000..a0b46c900a
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_MSG_H
+#define ZXDH_MSG_H
+
+#include <stdint.h>
+
+#include <ethdev_driver.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define ZXDH_BAR0_INDEX 0
+
+enum ZXDH_DRIVER_TYPE {
+ ZXDH_MSG_CHAN_END_MPF = 0,
+ ZXDH_MSG_CHAN_END_PF,
+ ZXDH_MSG_CHAN_END_VF,
+ ZXDH_MSG_CHAN_END_RISC,
+};
+
+enum ZXDH_BAR_MSG_RTN {
+ ZXDH_BAR_MSG_OK = 0,
+ ZXDH_BAR_MSG_ERR_MSGID,
+ ZXDH_BAR_MSG_ERR_NULL,
+ ZXDH_BAR_MSG_ERR_TYPE, /* Message type exception */
+ ZXDH_BAR_MSG_ERR_MODULE, /* Module ID exception */
+ ZXDH_BAR_MSG_ERR_BODY_NULL, /* Message body exception */
+ ZXDH_BAR_MSG_ERR_LEN, /* Message length exception */
+ ZXDH_BAR_MSG_ERR_TIME_OUT, /* Message sending timed out */
+ ZXDH_BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions */
+ ZXDH_BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer */
+ ZXDH_BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration */
+ ZXDH_BAR_MSG_ERR_UNGISTER, /* Repeated deregistration */
+ /**
+ * The sending interface parameter boundary structure pointer is empty
+ */
+ ZXDH_BAR_MSG_ERR_NULL_PARA,
+ ZXDH_BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short */
+ /**
+ * Unable to find the corresponding message processing function for this module
+ */
+ ZXDH_BAR_MSG_ERR_MODULE_NOEXIST,
+ /**
+ * The virtual address in the parameters passed in by the sending interface is empty
+ */
+ ZXDH_BAR_MSG_ERR_VIRTADDR_NULL,
+ ZXDH_BAR_MSG_ERR_REPLY, /* sync msg resp_error */
+ ZXDH_BAR_MSG_ERR_MPF_NOT_SCANNED,
+ ZXDH_BAR_MSG_ERR_KERNEL_READY,
+ ZXDH_BAR_MSG_ERR_USR_RET_ERR,
+ ZXDH_BAR_MSG_ERR_ERR_PCIEID,
+ ZXDH_BAR_MSG_ERR_SOCKET, /* netlink socket err */
+};
+
+int zxdh_msg_chan_init(void);
+int zxdh_bar_msg_chan_exit(void);
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_MSG_H */
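
For reviewers, the intended call order of the three entry points exported
above, as wired into zxdh_eth_dev_init() at the top of this patch (a
sketch; unrelated probe steps elided):

    /* probe, once per device */
    ret = zxdh_msg_chan_init();               /* refcounted via g_dev_stat.dev_cnt */
    hw->msg_chan_init = 1;
    ret = zxdh_msg_chan_hwlock_init(eth_dev); /* PF only: releases RISC and VF hard locks */
    /* ... */
    /* unwind on any later probe failure */
    zxdh_bar_msg_chan_exit();                 /* tears state down only for the last device */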
--
2.27.0
* [PATCH v8 5/9] net/zxdh: add msg chan enable implementation
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (3 preceding siblings ...)
2024-10-30 9:01 ` [PATCH v8 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
2024-10-30 9:01 ` [PATCH v8 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
` (3 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
Add msg chan enable implementation to support
sending messages to the backend (device side) to get its infos.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
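A minimal usage sketch of the sync send path added below (illustrative
only: 'bar_va' and 'pcie_id' are placeholders for the device's BAR0
mapping and PCIe id, and the channel-test module id is a stand-in):

    uint8_t req[8] = {0};
    uint8_t reps[64] = {0};
    struct zxdh_pci_bar_msg in = {
        .virt_addr = bar_va + ZXDH_CTRLCH_OFFSET,
        .payload_addr = req,
        .payload_len = sizeof(req),
        .src = ZXDH_MSG_CHAN_END_PF,
        .dst = ZXDH_MSG_CHAN_END_RISC,
        .module_id = ZXDH_BAR_MODULE_DEMO,
        .src_pcieid = pcie_id,
    };
    struct zxdh_msg_recviver_mem result = {
        .recv_buffer = reps,
        .buffer_len = sizeof(reps),
    };

    if (zxdh_bar_chan_sync_msg_send(&in, &result) == ZXDH_BAR_MSG_OK) {
        /* reps[0] == 0xff marks a valid reply; the 16-bit reply
         * length starts at byte 1, the payload at byte 4.
         */
    }
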
drivers/net/zxdh/zxdh_ethdev.c | 6 +
drivers/net/zxdh/zxdh_ethdev.h | 12 +
drivers/net/zxdh/zxdh_msg.c | 641 ++++++++++++++++++++++++++++++++-
drivers/net/zxdh/zxdh_msg.h | 129 +++++++
4 files changed, 785 insertions(+), 3 deletions(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 2dcf144fc9..a729344288 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_msg_chan_enable(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 5902704923..bed1334690 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -29,10 +29,22 @@ extern "C" {
#define ZXDH_RX_QUEUES_MAX 128U
#define ZXDH_TX_QUEUES_MAX 128U
+union zxdh_virport_num {
+ uint16_t vport;
+ struct {
+ uint16_t vfid:8;
+ uint16_t pfid:3;
+ uint16_t vf_flag:1;
+ uint16_t epid:3;
+ uint16_t direct_flag:1;
+ };
+};
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ union zxdh_virport_num vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
uint64_t host_features;
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 9dcf99f1f7..1bf72a9b7c 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -8,7 +8,6 @@
#include <rte_memcpy.h>
#include <rte_spinlock.h>
#include <rte_cycles.h>
-#include <inttypes.h>
#include <rte_malloc.h>
#include "zxdh_ethdev.h"
@@ -17,6 +16,7 @@
#define ZXDH_REPS_INFO_FLAG_USABLE 0x00
#define ZXDH_BAR_SEQID_NUM_MAX 256
+#define ZXDH_REPS_INFO_FLAG_USED 0xa0
#define ZXDH_PCIEID_IS_PF_MASK (0x0800)
#define ZXDH_PCIEID_PF_IDX_MASK (0x0700)
@@ -33,15 +33,88 @@
#define ZXDH_MAX_EP_NUM (4)
#define ZXDH_MAX_HARD_SPINLOCK_NUM (511)
+#define LOCK_PRIMARY_ID_MASK (0x8000)
+/* bar offset */
+#define ZXDH_BAR0_CHAN_RISC_OFFSET (0x2000)
+#define ZXDH_BAR0_CHAN_PFVF_OFFSET (0x3000)
#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000)
#define ZXDH_FW_SHRD_OFFSET (0x5000)
#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800)
#define ZXDH_HW_LABEL_OFFSET \
(ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT)
+#define ZXDH_CHAN_RISC_SPINLOCK_OFFSET \
+ (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET)
+#define ZXDH_CHAN_PFVF_SPINLOCK_OFFSET \
+ (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET)
+#define ZXDH_CHAN_RISC_LABEL_OFFSET \
+ (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET)
+#define ZXDH_CHAN_PFVF_LABEL_OFFSET \
+ (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET)
+
+#define ZXDH_REPS_HEADER_LEN_OFFSET 1
+#define ZXDH_REPS_HEADER_PAYLOAD_OFFSET 4
+#define ZXDH_REPS_HEADER_REPLYED 0xff
+
+#define ZXDH_BAR_MSG_CHAN_USABLE 0
+#define ZXDH_BAR_MSG_CHAN_USED 1
+
+#define ZXDH_BAR_MSG_POL_MASK (0x10)
+#define ZXDH_BAR_MSG_POL_OFFSET (4)
+
+#define ZXDH_BAR_ALIGN_WORD_MASK 0xfffffffc
+#define ZXDH_BAR_MSG_VALID_MASK 1
+#define ZXDH_BAR_MSG_VALID_OFFSET 0
+
+#define ZXDH_BAR_PF_NUM 7
+#define ZXDH_BAR_VF_NUM 256
+#define ZXDH_BAR_INDEX_PF_TO_VF 0
+#define ZXDH_BAR_INDEX_MPF_TO_MPF 0xff
+#define ZXDH_BAR_INDEX_MPF_TO_PFVF 0
+#define ZXDH_BAR_INDEX_PFVF_TO_MPF 0
+
+#define ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES (1000)
+#define ZXDH_SPINLOCK_POLLING_SPAN_US (100)
+
+#define ZXDH_BAR_MSG_SRC_NUM 3
+#define ZXDH_BAR_MSG_SRC_MPF 0
+#define ZXDH_BAR_MSG_SRC_PF 1
+#define ZXDH_BAR_MSG_SRC_VF 2
+#define ZXDH_BAR_MSG_SRC_ERR 0xff
+#define ZXDH_BAR_MSG_DST_NUM 3
+#define ZXDH_BAR_MSG_DST_RISC 0
+#define ZXDH_BAR_MSG_DST_MPF 2
+#define ZXDH_BAR_MSG_DST_PFVF 1
+#define ZXDH_BAR_MSG_DST_ERR 0xff
+
+#define ZXDH_LOCK_TYPE_HARD (1)
+#define ZXDH_LOCK_TYPE_SOFT (0)
+#define ZXDH_BAR_INDEX_TO_RISC 0
+
+#define ZXDH_BAR_CHAN_INDEX_SEND 0
+#define ZXDH_BAR_CHAN_INDEX_RECV 1
+
+uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
+ {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND},
+ {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV},
+ {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV, ZXDH_BAR_CHAN_INDEX_RECV}
+};
+
+uint8_t chan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
+ {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_MPF_TO_PFVF, ZXDH_BAR_INDEX_MPF_TO_MPF},
+ {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF},
+ {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF}
+};
+
+uint8_t lock_type_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
+ {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD},
+ {ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_HARD},
+ {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD}
+};
+
struct zxdh_dev_stat {
- bool is_mpf_scanned;
- bool is_res_init;
+ uint8_t is_mpf_scanned;
+ uint8_t is_res_init;
int16_t dev_cnt; /* probe cnt */
};
struct zxdh_dev_stat g_dev_stat = {0};
@@ -97,6 +170,33 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da
*(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
}
+static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id)
+{
+ return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id);
+}
+
+static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr,
+ uint64_t label_addr, uint16_t primary_id)
+{
+ uint32_t lock_rd_cnt = 0;
+
+ do {
+ /* read to lock */
+ uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id);
+
+ if (spl_val == 0) {
+ label_write((uint64_t)label_addr, virt_lock_id, primary_id);
+ break;
+ }
+ rte_delay_us_block(ZXDH_SPINLOCK_POLLING_SPAN_US);
+ lock_rd_cnt++;
+ } while (lock_rd_cnt < ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES);
+ if (lock_rd_cnt >= ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES)
+ return -1;
+
+ return 0;
+}
+
static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
{
label_write((uint64_t)label_addr, virt_lock_id, 0);
@@ -159,3 +259,538 @@ int zxdh_bar_msg_chan_exit(void)
g_dev_stat.is_res_init = false;
return ZXDH_BAR_MSG_OK;
}
+
+static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid)
+{
+ struct zxdh_seqid_item *seqid_reps_info = NULL;
+
+ rte_spinlock_lock(&g_seqid_ring.lock);
+ uint16_t g_id = g_seqid_ring.cur_id;
+ uint16_t count = 0;
+ int rc = 0;
+
+ do {
+ count++;
+ ++g_id;
+ g_id %= ZXDH_BAR_SEQID_NUM_MAX;
+ seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id];
+ } while ((seqid_reps_info->flag != ZXDH_REPS_INFO_FLAG_USABLE) &&
+ (count < ZXDH_BAR_SEQID_NUM_MAX));
+
+ if (count >= ZXDH_BAR_SEQID_NUM_MAX) {
+ rc = -1;
+ goto out;
+ }
+ seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USED;
+ g_seqid_ring.cur_id = g_id;
+ *msgid = g_id;
+ rc = ZXDH_BAR_MSG_OK;
+
+out:
+ rte_spinlock_unlock(&g_seqid_ring.lock);
+ return rc;
+}
+
+static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id)
+{
+ int ret = zxdh_bar_chan_msgid_allocate(msg_id);
+
+ if (ret != ZXDH_BAR_MSG_OK)
+ return ZXDH_BAR_MSG_ERR_MSGID;
+
+ PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id);
+ struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id];
+
+ reps_info->reps_addr = result->recv_buffer;
+ reps_info->buffer_len = result->buffer_len;
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src)
+{
+ uint8_t src_index = 0;
+
+ switch (src) {
+ case ZXDH_MSG_CHAN_END_MPF:
+ src_index = ZXDH_BAR_MSG_SRC_MPF;
+ break;
+ case ZXDH_MSG_CHAN_END_PF:
+ src_index = ZXDH_BAR_MSG_SRC_PF;
+ break;
+ case ZXDH_MSG_CHAN_END_VF:
+ src_index = ZXDH_BAR_MSG_SRC_VF;
+ break;
+ default:
+ src_index = ZXDH_BAR_MSG_SRC_ERR;
+ break;
+ }
+ return src_index;
+}
+
+static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst)
+{
+ uint8_t dst_index = 0;
+
+ switch (dst) {
+ case ZXDH_MSG_CHAN_END_MPF:
+ dst_index = ZXDH_BAR_MSG_DST_MPF;
+ break;
+ case ZXDH_MSG_CHAN_END_PF:
+ dst_index = ZXDH_BAR_MSG_DST_PFVF;
+ break;
+ case ZXDH_MSG_CHAN_END_VF:
+ dst_index = ZXDH_BAR_MSG_DST_PFVF;
+ break;
+ case ZXDH_MSG_CHAN_END_RISC:
+ dst_index = ZXDH_BAR_MSG_DST_RISC;
+ break;
+ default:
+ dst_index = ZXDH_BAR_MSG_SRC_ERR;
+ break;
+ }
+ return dst_index;
+}
+
+static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result)
+{
+ uint8_t src_index = 0;
+ uint8_t dst_index = 0;
+
+ if (in == NULL || result == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null para.");
+ return ZXDH_BAR_MSG_ERR_NULL_PARA;
+ }
+ src_index = zxdh_bar_msg_src_index_trans(in->src);
+ dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+
+ if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist.");
+ return ZXDH_BAR_MSG_ERR_TYPE;
+ }
+ if (in->module_id >= ZXDH_BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id);
+ return ZXDH_BAR_MSG_ERR_MODULE;
+ }
+ if (in->payload_addr == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null message.");
+ return ZXDH_BAR_MSG_ERR_BODY_NULL;
+ }
+ if (in->payload_len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len);
+ return ZXDH_BAR_MSG_ERR_LEN;
+ }
+ if (in->virt_addr == 0 || result->recv_buffer == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL.");
+ return ZXDH_BAR_MSG_ERR_VIRTADDR_NULL;
+ }
+ if (result->buffer_len < ZXDH_REPS_HEADER_PAYLOAD_OFFSET)
+ PMD_MSG_LOG(ERR, "recv buffer len is short than minimal 4 bytes");
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id)
+{
+ return virt_addr + (2 * chan_id + subchan_id) * ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL;
+}
+
+static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+ uint16_t chan_id = chan_id_tbl[src_index][dst_index];
+ uint16_t subchan_id = subchan_id_tbl[src_index][dst_index];
+
+ *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id);
+ return ZXDH_BAR_MSG_OK;
+}
+
+static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid);
+ if (dst == ZXDH_MSG_CHAN_END_RISC)
+ ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET,
+ src_pcieid | LOCK_PRIMARY_ID_MASK);
+ else
+ ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET,
+ src_pcieid | LOCK_PRIMARY_ID_MASK);
+
+ return ret;
+}
+
+static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid);
+ if (dst == ZXDH_MSG_CHAN_END_RISC)
+ zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET);
+ else
+ zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET);
+}
+
+static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist.");
+ return ZXDH_BAR_MSG_ERR_TYPE;
+ }
+
+ ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr);
+ if (ret != 0)
+ PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid);
+
+ return ret;
+}
+
+static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist.");
+ return ZXDH_BAR_MSG_ERR_TYPE;
+ }
+
+ zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr);
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static void zxdh_bar_chan_msgid_free(uint16_t msg_id)
+{
+ struct zxdh_seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ rte_spinlock_lock(&g_seqid_ring.lock);
+ seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE;
+ PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id);
+ rte_spinlock_unlock(&g_seqid_ring.lock);
+}
+
+static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data)
+{
+ uint32_t align_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "align_offset exceeds channel size!");
+ return -1;
+ }
+ *(uint32_t *)(subchan_addr + align_offset) = data;
+ return 0;
+}
+
+static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata)
+{
+ uint32_t align_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "align_offset exceeds channel size!");
+ return -1;
+ }
+ *pdata = *(uint32_t *)(subchan_addr + align_offset);
+ return 0;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr,
+ struct zxdh_bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx));
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_get(uint64_t subchan_addr,
+ struct zxdh_bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx);
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_write(subchan_addr, 4 * ix +
+ ZXDH_BAR_MSG_PLAYLOAD_OFFSET, *(data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ for (ix = 0; ix < remain; ix++)
+ remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix);
+
+ zxdh_bar_chan_reg_write(subchan_addr, 4 * count +
+ ZXDH_BAR_MSG_PLAYLOAD_OFFSET, remain_data);
+ }
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * ix +
+ ZXDH_BAR_MSG_PLAYLOAD_OFFSET, (data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * count +
+ ZXDH_BAR_MSG_PLAYLOAD_OFFSET, &remain_data);
+ for (ix = 0; ix < remain; ix++)
+ *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix);
+ }
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data);
+ data &= (~ZXDH_BAR_MSG_VALID_MASK);
+ data |= (uint32_t)valid_label;
+ zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data);
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint8_t temp_msg[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL];
+static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr,
+ void *payload_addr,
+ uint16_t payload_len,
+ struct zxdh_bar_msg_header *msg_header)
+{
+ uint16_t ret = 0;
+ ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header);
+
+ ret = zxdh_bar_chan_msg_header_get(subchan_addr,
+ (struct zxdh_bar_msg_header *)temp_msg);
+
+ ret = zxdh_bar_chan_msg_payload_set(subchan_addr,
+ (uint8_t *)(payload_addr), payload_len);
+
+ ret = zxdh_bar_chan_msg_payload_get(subchan_addr,
+ temp_msg, payload_len);
+
+ ret = zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USED);
+ return ret;
+}
+
+static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data);
+ if (ZXDH_BAR_MSG_CHAN_USABLE == (data & ZXDH_BAR_MSG_VALID_MASK))
+ return ZXDH_BAR_MSG_CHAN_USABLE;
+
+ return ZXDH_BAR_MSG_CHAN_USED;
+}
+
+static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data);
+ data &= (~(uint32_t)ZXDH_BAR_MSG_POL_MASK);
+ data |= ((uint32_t)label << ZXDH_BAR_MSG_POL_OFFSET);
+ zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data);
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr,
+ uint64_t recv_buffer, uint16_t buffer_len)
+{
+ struct zxdh_bar_msg_header msg_header = {0};
+ uint16_t msg_id = 0;
+ uint16_t msg_len = 0;
+
+ zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header);
+ msg_id = msg_header.msg_id;
+ struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id);
+ return ZXDH_BAR_MSG_ERR_REPLY;
+ }
+ msg_len = msg_header.len;
+
+ if (msg_len > buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u",
+ buffer_len, msg_len + 4);
+ return ZXDH_BAR_MSG_ERR_REPSBUFF_LEN;
+ }
+ uint8_t *recv_msg = (uint8_t *)recv_buffer;
+
+ zxdh_bar_chan_msg_payload_get(subchan_addr,
+ recv_msg + ZXDH_REPS_HEADER_PAYLOAD_OFFSET, msg_len);
+ *(uint16_t *)(recv_msg + ZXDH_REPS_HEADER_LEN_OFFSET) = msg_len;
+ *recv_msg = ZXDH_REPS_HEADER_REPLYED; /* set reps's valid */
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result)
+{
+ struct zxdh_bar_msg_header msg_header = {0};
+ uint16_t seq_id = 0;
+ uint64_t subchan_addr = 0;
+ uint32_t time_out_cnt = 0;
+ uint16_t valid = 0;
+ int ret = 0;
+
+ ret = zxdh_bar_chan_send_para_check(in, result);
+ if (ret != ZXDH_BAR_MSG_OK)
+ goto exit;
+
+ ret = zxdh_bar_chan_save_recv_info(result, &seq_id);
+ if (ret != ZXDH_BAR_MSG_OK)
+ goto exit;
+
+ zxdh_bar_chan_subchan_addr_get(in, &subchan_addr);
+
+ msg_header.sync = ZXDH_BAR_CHAN_MSG_SYNC;
+ msg_header.emec = in->emec;
+ msg_header.usr = 0;
+ msg_header.rsv = 0;
+ msg_header.module_id = in->module_id;
+ msg_header.len = in->payload_len;
+ msg_header.msg_id = seq_id;
+ msg_header.src_pcieid = in->src_pcieid;
+ msg_header.dst_pcieid = in->dst_pcieid;
+
+ ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+ if (ret != ZXDH_BAR_MSG_OK) {
+ zxdh_bar_chan_msgid_free(seq_id);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header);
+
+ do {
+ rte_delay_us_block(ZXDH_BAR_MSG_POLLING_SPAN);
+ valid = zxdh_bar_msg_valid_stat_get(subchan_addr);
+ ++time_out_cnt;
+ } while ((time_out_cnt < ZXDH_BAR_MSG_TIMEOUT_TH) && (valid == ZXDH_BAR_MSG_CHAN_USED));
+
+ if (time_out_cnt == ZXDH_BAR_MSG_TIMEOUT_TH && valid != ZXDH_BAR_MSG_CHAN_USABLE) {
+ zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USABLE);
+ zxdh_bar_chan_msg_poltag_set(subchan_addr, 0);
+ PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out.");
+ ret = ZXDH_BAR_MSG_ERR_TIME_OUT;
+ } else {
+ ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr,
+ (uint64_t)result->recv_buffer, result->buffer_len);
+ }
+ zxdh_bar_chan_msgid_free(seq_id);
+ zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+
+exit:
+ return ret;
+}
+
+static int bar_get_sum(uint8_t *ptr, uint8_t len)
+{
+ uint64_t sum = 0;
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ sum += *(ptr + idx);
+
+ return (uint16_t)sum;
+}
+
+static int zxdh_bar_chan_enable(struct zxdh_msix_para *para, uint16_t *vport)
+{
+ struct zxdh_bar_recv_msg recv_msg = {0};
+ int ret = 0;
+ int check_token = 0;
+ int sum_res = 0;
+
+ if (!para)
+ return ZXDH_BAR_MSG_ERR_NULL;
+
+ struct zxdh_msix_msg msix_msg = {
+ .pcie_id = para->pcie_id,
+ .vector_risc = para->vector_risc,
+ .vector_pfvf = para->vector_pfvf,
+ .vector_mpf = para->vector_mpf,
+ };
+ struct zxdh_pci_bar_msg in = {
+ .virt_addr = para->virt_addr,
+ .payload_addr = &msix_msg,
+ .payload_len = sizeof(msix_msg),
+ .emec = 0,
+ .src = para->driver_type,
+ .dst = ZXDH_MSG_CHAN_END_RISC,
+ .module_id = ZXDH_BAR_MODULE_MISX,
+ .src_pcieid = para->pcie_id,
+ .dst_pcieid = 0,
+ .usr = 0,
+ };
+
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = &recv_msg,
+ .buffer_len = sizeof(recv_msg),
+ };
+
+ ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+ if (ret != ZXDH_BAR_MSG_OK)
+ return -ret;
+
+ check_token = recv_msg.msix_reps.check;
+ sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg));
+
+ if (check_token != sum_res) {
+ PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token);
+ return ZXDH_BAR_MSG_ERR_REPLY;
+ }
+ *vport = recv_msg.msix_reps.vport;
+ PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", para->pcie_id);
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct zxdh_msix_para msix_info = {
+ .vector_risc = ZXDH_MSIX_FROM_RISCV,
+ .vector_pfvf = ZXDH_MSIX_FROM_PFVF,
+ .vector_mpf = ZXDH_MSIX_FROM_MPF,
+ .pcie_id = hw->pcie_id,
+ .driver_type = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF,
+ .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET),
+ };
+
+ return zxdh_bar_chan_enable(&msix_info, &hw->vport.vport);
+}
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index a0b46c900a..7fbab4b214 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -14,6 +14,21 @@ extern "C" {
#endif
#define ZXDH_BAR0_INDEX 0
+#define ZXDH_CTRLCH_OFFSET (0x2000)
+
+#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+
+#define ZXDH_BAR_MSG_POLLING_SPAN 100
+#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
+#define ZXDH_BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
+#define ZXDH_BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
+
+#define ZXDH_BAR_CHAN_MSG_SYNC 0
+
+#define ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */
+#define ZXDH_BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct zxdh_bar_msg_header))
+#define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \
+ (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header))
enum ZXDH_DRIVER_TYPE {
ZXDH_MSG_CHAN_END_MPF = 0,
@@ -22,6 +37,13 @@ enum ZXDH_DRIVER_TYPE {
ZXDH_MSG_CHAN_END_RISC,
};
+enum ZXDH_MSG_VEC {
+ ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
+ ZXDH_MSIX_FROM_MPF,
+ ZXDH_MSIX_FROM_RISCV,
+ ZXDH_MSG_VEC_NUM,
+};
+
enum ZXDH_BAR_MSG_RTN {
ZXDH_BAR_MSG_OK = 0,
ZXDH_BAR_MSG_ERR_MSGID,
@@ -56,10 +78,117 @@ enum ZXDH_BAR_MSG_RTN {
ZXDH_BAR_MSG_ERR_SOCKET, /* netlink socket err */
};
+enum zxdh_bar_module_id {
+ ZXDH_BAR_MODULE_DBG = 0, /* 0: debug */
+ ZXDH_BAR_MODULE_TBL, /* 1: resource table */
+ ZXDH_BAR_MODULE_MISX, /* 2: config msix */
+ ZXDH_BAR_MODULE_SDA, /* 3: */
+ ZXDH_BAR_MODULE_RDMA, /* 4: */
+ ZXDH_BAR_MODULE_DEMO, /* 5: channel test */
+ ZXDH_BAR_MODULE_SMMU, /* 6: */
+ ZXDH_BAR_MODULE_MAC, /* 7: mac rx/tx stats */
+ ZXDH_BAR_MODULE_VDPA, /* 8: vdpa live migration */
+ ZXDH_BAR_MODULE_VQM, /* 9: vqm live migration */
+ ZXDH_BAR_MODULE_NP, /* 10: vf msg callback np */
+ ZXDH_BAR_MODULE_VPORT, /* 11: get vport */
+ ZXDH_BAR_MODULE_BDF, /* 12: get bdf */
+ ZXDH_BAR_MODULE_RISC_READY, /* 13: */
+ ZXDH_BAR_MODULE_REVERSE, /* 14: byte stream reverse */
+ ZXDH_BAR_MDOULE_NVME, /* 15: */
+ ZXDH_BAR_MDOULE_NPSDK, /* 16: */
+ ZXDH_BAR_MODULE_NP_TODO, /* 17: */
+ ZXDH_MODULE_BAR_MSG_TO_PF, /* 18: */
+ ZXDH_MODULE_BAR_MSG_TO_VF, /* 19: */
+
+ ZXDH_MODULE_FLASH = 32,
+ ZXDH_BAR_MODULE_OFFSET_GET = 33,
+ ZXDH_BAR_EVENT_OVS_WITH_VCB = 36,
+
+ ZXDH_BAR_MSG_MODULE_NUM = 100,
+};
+
+struct zxdh_msix_para {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+ uint64_t virt_addr;
+ uint16_t driver_type; /* refer to DRIVER_TYPE */
+};
+
+struct zxdh_msix_msg {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+};
+
+struct zxdh_pci_bar_msg {
+ uint64_t virt_addr; /* bar addr */
+ void *payload_addr;
+ uint16_t payload_len;
+ uint16_t emec;
+ uint16_t src; /* refer to BAR_DRIVER_TYPE */
+ uint16_t dst; /* refer to BAR_DRIVER_TYPE */
+ uint16_t module_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid;
+ uint16_t usr;
+};
+
+struct zxdh_bar_msix_reps {
+ uint16_t pcie_id;
+ uint16_t check;
+ uint16_t vport;
+ uint16_t rsv;
+} __rte_packed;
+
+struct zxdh_bar_offset_reps {
+ uint16_t check;
+ uint16_t rsv;
+ uint32_t offset;
+ uint32_t length;
+} __rte_packed;
+
+struct zxdh_bar_recv_msg {
+ uint8_t reps_ok;
+ uint16_t reps_len;
+ uint8_t rsv;
+ /* */
+ union {
+ struct zxdh_bar_msix_reps msix_reps;
+ struct zxdh_bar_offset_reps offset_reps;
+ } __rte_packed;
+} __rte_packed;
+
+struct zxdh_msg_recviver_mem {
+ void *recv_buffer; /* first 4B is head, followed by payload */
+ uint64_t buffer_len;
+};
+
+struct zxdh_bar_msg_header {
+ uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */
+ uint8_t sync : 1;
+ uint8_t emec : 1; /* emergency */
+ uint8_t ack : 1; /* ack msg */
+ uint8_t poll : 1;
+ uint8_t usr : 1;
+ uint8_t rsv;
+ uint16_t module_id;
+ uint16_t len;
+ uint16_t msg_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid; /* used in PF-->VF */
+};
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result);
+
#ifdef __cplusplus
}
#endif
--
2.27.0
* [PATCH v8 6/9] net/zxdh: add zxdh get device backend infos
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (4 preceding siblings ...)
2024-10-30 9:01 ` [PATCH v8 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
2024-10-30 9:01 ` [PATCH v8 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
` (2 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
Add zxdh get device backend infos,
using the msg chan to send the get requests.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
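The vport to vfid mapping added in zxdh_ethdev.c packs epid/pfid/vfid
into one 16-bit word; two worked examples (field values illustrative):

    union zxdh_virport_num v;

    v.vport = 0;
    v.vf_flag = 1; v.epid = 1; v.vfid = 5;
    uint16_t vf = zxdh_vport_to_vfid(v);  /* 1 * 256 + 5 = 261 */

    v.vport = 0;
    v.vf_flag = 0; v.epid = 1; v.pfid = 2;
    uint16_t pf = zxdh_vport_to_vfid(v);  /* (1 * 8 + 2) + 1152 = 1162 */

    /* epid > 4 denotes a local soft queue and always yields 1192 */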
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_common.c | 250 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_common.h | 30 ++++
drivers/net/zxdh/zxdh_ethdev.c | 35 +++++
drivers/net/zxdh/zxdh_ethdev.h | 5 +
drivers/net/zxdh/zxdh_msg.c | 17 +--
drivers/net/zxdh/zxdh_msg.h | 21 +++
drivers/net/zxdh/zxdh_queue.h | 4 +
drivers/net/zxdh/zxdh_rxtx.h | 4 +
9 files changed, 359 insertions(+), 8 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
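
A note on the reply parsing added in zxdh_common.c: the reply carries a
4-byte channel header, then a table-message header, then the data. A
sketch of the layout as zxdh_get_res_info() consumes it, plus one of the
exported wrappers ('eth_dev' is a placeholder for the probed device):

    /* recv_buf[0..3]  bar channel reply header (skipped via
     *                 ZXDH_REPS_HEADER_OFFSET)
     * recv_buf[4..7]  struct zxdh_tbl_msg_reps_header:
     *                   .check == ZXDH_TBL_MSG_PRO_SUCCESS (0xaa) on success
     *                   .len   == length of the data that follows
     * recv_buf[8.. ]  field data, copied out with rte_memcpy()
     */
    uint8_t pannelid = 0;
    if (zxdh_pannelid_get(eth_dev, &pannelid) != 0)
        return -1;
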
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 2e0c8fddae..a16db47f89 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -17,4 +17,5 @@ sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
'zxdh_msg.c',
+ 'zxdh_common.c',
)
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
new file mode 100644
index 0000000000..0cb5380c5e
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <string.h>
+
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+#include "zxdh_common.h"
+
+#define ZXDH_MSG_RSP_SIZE_MAX 512
+
+#define ZXDH_COMMON_TABLE_READ 0
+#define ZXDH_COMMON_TABLE_WRITE 1
+
+#define ZXDH_COMMON_FIELD_PHYPORT 6
+
+#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2)
+
+#define ZXDH_REPS_HEADER_OFFSET 4
+#define ZXDH_TBL_MSG_PRO_SUCCESS 0xaa
+
+struct zxdh_common_msg {
+ uint8_t type; /* 0:read table 1:write table */
+ uint8_t field;
+ uint16_t pcie_id;
+ uint16_t slen; /* Data length for write table */
+ uint16_t reserved;
+} __rte_packed;
+
+struct zxdh_common_rsp_hdr {
+ uint8_t rsp_status;
+ uint16_t rsp_len;
+ uint8_t reserved;
+ uint8_t payload_status;
+ uint8_t rsv;
+ uint16_t payload_len;
+} __rte_packed;
+
+struct zxdh_tbl_msg_header {
+ uint8_t type; /* r/w */
+ uint8_t field;
+ uint16_t pcieid;
+ uint16_t slen;
+ uint16_t rsv;
+};
+struct zxdh_tbl_msg_reps_header {
+ uint8_t check;
+ uint8_t rsv;
+ uint16_t len;
+};
+
+static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ uint8_t type,
+ uint8_t field,
+ void *buff,
+ uint16_t buff_size)
+{
+ uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size;
+
+ desc->payload_addr = rte_zmalloc(NULL, msg_len, 0);
+ if (unlikely(desc->payload_addr == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate msg_data");
+ return -ENOMEM;
+ }
+ memset(desc->payload_addr, 0, msg_len);
+ desc->payload_len = msg_len;
+ struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr;
+
+ msg_data->type = type;
+ msg_data->field = field;
+ msg_data->pcie_id = hw->pcie_id;
+ msg_data->slen = buff_size;
+ if (buff_size != 0)
+ rte_memcpy(msg_data + 1, buff, buff_size);
+
+ return 0;
+}
+
+static int32_t zxdh_send_command(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ enum zxdh_bar_module_id module_id,
+ struct zxdh_msg_recviver_mem *msg_rsp)
+{
+ desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+ desc->src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF;
+ desc->dst = ZXDH_MSG_CHAN_END_RISC;
+ desc->module_id = module_id;
+ desc->src_pcieid = hw->pcie_id;
+
+ msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX;
+ msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0);
+ if (unlikely(msg_rsp->recv_buffer == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate messages response");
+ return -ENOMEM;
+ }
+
+ if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != ZXDH_BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response");
+ rte_free(msg_rsp->recv_buffer);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp,
+ void *buff, uint16_t len)
+{
+ struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer;
+
+ if (rsp_hdr->payload_status != 0xaa || rsp_hdr->payload_len != len) {
+ PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d",
+ rsp_hdr->payload_status, rsp_hdr->payload_len);
+ return -1;
+ }
+ if (len != 0)
+ rte_memcpy(buff, rsp_hdr + 1, len);
+
+ return 0;
+}
+
+static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_msg_recviver_mem msg_rsp;
+ struct zxdh_pci_bar_msg desc;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size);
+ if (ret != 0)
+ goto free_rsp_data;
+
+free_rsp_data:
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT,
+ (void *)phyport, sizeof(*phyport));
+ return ret;
+}
+
+static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ param->pcie_id = hw->pcie_id;
+ param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET;
+ param->src_type = ZXDH_BAR_MODULE_TBL;
+}
+
+static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len)
+{
+ struct zxdh_pci_bar_msg in = {0};
+ uint8_t recv_buf[ZXDH_RSC_TBL_CONTENT_LEN_MAX + 8] = {0};
+ int ret = 0;
+
+ if (!res || !dev)
+ return ZXDH_BAR_MSG_ERR_NULL;
+
+ struct zxdh_tbl_msg_header tbl_msg = {
+ .type = ZXDH_TBL_TYPE_READ,
+ .field = field,
+ .pcieid = dev->pcie_id,
+ .slen = 0,
+ .rsv = 0,
+ };
+
+ in.virt_addr = dev->virt_addr;
+ in.payload_addr = &tbl_msg;
+ in.payload_len = sizeof(tbl_msg);
+ in.src = dev->src_type;
+ in.dst = ZXDH_MSG_CHAN_END_RISC;
+ in.module_id = ZXDH_BAR_MODULE_TBL;
+ in.src_pcieid = dev->pcie_id;
+
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = recv_buf,
+ .buffer_len = sizeof(recv_buf),
+ };
+ ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+
+ if (ret != ZXDH_BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR,
+ "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+ return ret;
+ }
+ struct zxdh_tbl_msg_reps_header *tbl_reps =
+ (struct zxdh_tbl_msg_reps_header *)(recv_buf + ZXDH_REPS_HEADER_OFFSET);
+
+ if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+ return -1;
+ }
+ *len = tbl_reps->len;
+ rte_memcpy(res, (recv_buf + ZXDH_REPS_HEADER_OFFSET +
+ sizeof(struct zxdh_tbl_msg_reps_header)), *len);
+ return ret;
+}
+
+static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id)
+{
+ uint8_t reps = 0;
+ uint16_t reps_len = 0;
+
+ if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, &reps, &reps_len) != ZXDH_BAR_MSG_OK)
+ return -1;
+
+ *panel_id = reps;
+ return ZXDH_BAR_MSG_OK;
+}
+
+int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid)
+{
+ struct zxdh_res_para param;
+
+ zxdh_fill_res_para(dev, &param);
+ int32_t ret = zxdh_get_res_panel_id(&param, pannelid);
+ return ret;
+}
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
new file mode 100644
index 0000000000..f098ae4cf9
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_COMMON_H
+#define ZXDH_COMMON_H
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct zxdh_res_para {
+ uint64_t virt_addr;
+ uint16_t pcie_id;
+ uint16_t src_type; /* refer to BAR_DRIVER_TYPE */
+};
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
+int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_COMMON_H */
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index a729344288..8d9df218ce 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -10,9 +10,21 @@
#include "zxdh_logs.h"
#include "zxdh_pci.h"
#include "zxdh_msg.h"
+#include "zxdh_common.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v)
+{
+ /* epid > 4 is local soft queue. return 1192 */
+ if (v.epid > 4)
+ return 1192;
+ if (v.vf_flag)
+ return v.epid * 256 + v.vfid;
+ else
+ return (v.epid * 8 + v.pfid) + 1152;
+}
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
return ret;
}
+static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw)
+{
+ if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get phyport");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport);
+
+ hw->vfid = zxdh_vport_to_vfid(hw->vport);
+
+ if (zxdh_pannelid_get(eth_dev, &hw->panel_id) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get panel_id");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id);
+
+ return 0;
+}
+
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_agent_comm(eth_dev, hw);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index bed1334690..1ee8dd744c 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -56,6 +56,7 @@ struct zxdh_hw {
uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint16_t vfid;
uint8_t *isr;
uint8_t weak_barriers;
@@ -65,8 +66,12 @@ struct zxdh_hw {
uint8_t duplex;
uint8_t is_pf;
uint8_t msg_chan_init;
+ uint8_t phyport;
+ uint8_t panel_id;
};
+uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 1bf72a9b7c..4105daf5c6 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -133,7 +133,9 @@ struct zxdh_seqid_ring {
};
struct zxdh_seqid_ring g_seqid_ring = {0};
-static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
+static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL];
+
+static uint16_t zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
{
uint16_t lock_id = 0;
uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET;
@@ -209,11 +211,11 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u
*/
static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
{
- int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC);
+ int lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC);
zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
bar_base_addr + ZXDH_HW_LABEL_OFFSET);
- lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF);
+ lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF);
zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
bar_base_addr + ZXDH_HW_LABEL_OFFSET);
return 0;
@@ -409,7 +411,7 @@ static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint
static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
{
int ret = 0;
- uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+ uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst);
PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid);
if (dst == ZXDH_MSG_CHAN_END_RISC)
@@ -426,7 +428,7 @@ static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_ad
static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
{
- uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst);
+ uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst);
PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid);
if (dst == ZXDH_MSG_CHAN_END_RISC)
@@ -586,7 +588,6 @@ static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid
return ZXDH_BAR_MSG_OK;
}
-static uint8_t temp_msg[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL];
static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr,
void *payload_addr,
uint16_t payload_len,
@@ -596,13 +597,13 @@ static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr,
ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header);
ret = zxdh_bar_chan_msg_header_get(subchan_addr,
- (struct zxdh_bar_msg_header *)temp_msg);
+ (struct zxdh_bar_msg_header *)tmp_msg_header);
ret = zxdh_bar_chan_msg_payload_set(subchan_addr,
(uint8_t *)(payload_addr), payload_len);
ret = zxdh_bar_chan_msg_payload_get(subchan_addr,
- temp_msg, payload_len);
+ tmp_msg_header, payload_len);
ret = zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USED);
return ret;
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index 7fbab4b214..7da60ee189 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -107,6 +107,27 @@ enum zxdh_bar_module_id {
ZXDH_BAR_MSG_MODULE_NUM = 100,
};
+enum ZXDH_RES_TBL_FILED {
+ ZXDH_TBL_FIELD_PCIEID = 0,
+ ZXDH_TBL_FIELD_BDF = 1,
+ ZXDH_TBL_FIELD_MSGCH = 2,
+ ZXDH_TBL_FIELD_DATACH = 3,
+ ZXDH_TBL_FIELD_VPORT = 4,
+ ZXDH_TBL_FIELD_PNLID = 5,
+ ZXDH_TBL_FIELD_PHYPORT = 6,
+ ZXDH_TBL_FIELD_SERDES_NUM = 7,
+ ZXDH_TBL_FIELD_NP_PORT = 8,
+ ZXDH_TBL_FIELD_SPEED = 9,
+ ZXDH_TBL_FIELD_HASHID = 10,
+ ZXDH_TBL_FIELD_NON,
+};
+
+enum ZXDH_TBL_MSG_TYPE {
+ ZXDH_TBL_TYPE_READ,
+ ZXDH_TBL_TYPE_WRITE,
+ ZXDH_TBL_TYPE_NON,
+};
+
struct zxdh_msix_para {
uint16_t pcie_id;
uint16_t vector_risc;
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
index fd73f14e2d..66f37ec612 100644
--- a/drivers/net/zxdh/zxdh_queue.h
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -102,4 +102,8 @@ struct zxdh_virtqueue {
struct zxdh_vq_desc_extra vq_descx[];
} __rte_packed;
+#ifdef __cplusplus
+}
+#endif
+
#endif /* ZXDH_QUEUE_H */
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
index ccac7e7834..31b1c8f0a5 100644
--- a/drivers/net/zxdh/zxdh_rxtx.h
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -48,4 +48,8 @@ struct zxdh_virtnet_tx {
const struct rte_memzone *mz; /* mem zone to populate TX ring. */
} __rte_packed;
+#ifdef __cplusplus
+}
+#endif
+
#endif /* ZXDH_RXTX_H */
--
2.27.0
* [PATCH v8 7/9] net/zxdh: add configure zxdh intr implementation
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (5 preceding siblings ...)
2024-10-30 9:01 ` [PATCH v8 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
2024-10-30 9:01 ` [PATCH v8 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-10-30 9:01 ` [PATCH v8 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
Configure zxdh interrupts, including RISC and DTB vectors, and add interrupt release.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
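A summary of the MSI-X vector layout this patch assumes, per the enum in
zxdh_msg.h and the binding done in zxdh_queues_bind_intr() (vector 0 for
the dev config interrupt is an assumption; the DTB position follows
zxdh_setup_dtb_interrupts()):

    /* vec 0                           dev config / link-state interrupt
     * vec 1..3                        BAR msg channel: from PF/VF, MPF, RISC-V
     * vec ZXDH_MSIX_INTR_DTB_VEC      DTB interrupt
     * vec ZXDH_QUEUE_INTR_VEC_BASE+i  rx queue i (hw->vqs[2*i]); tx queues
     *                                 (hw->vqs[2*i+1]) stay at ZXDH_MSI_NO_VECTOR
     */
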
drivers/net/zxdh/zxdh_ethdev.c | 300 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 8 +
drivers/net/zxdh/zxdh_msg.c | 188 +++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 13 ++
drivers/net/zxdh/zxdh_pci.c | 62 +++++++
drivers/net/zxdh/zxdh_pci.h | 12 ++
6 files changed, 583 insertions(+)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 8d9df218ce..5963aed949 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -25,6 +25,301 @@ uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR);
+ }
+}
+
+
+static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (rte_intr_ack(dev->intr_handle) < 0)
+ return -1;
+
+ hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev));
+
+ return 0;
+}
+
+static void zxdh_devconf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+
+ if (zxdh_intr_unmask(dev) < 0)
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+}
+
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_fromriscv_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_riscvf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_frompfvf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] +
+ ZXDH_MSG_CHAN_PFVFSHARE_OFFSET);
+
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_VF, ZXDH_MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_PF, ZXDH_MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+static void zxdh_intr_cb_reg(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ /* register callback to update dev config intr */
+ rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Register risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev);
+
+ tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ /* unregister callback to update dev config intr */
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Unregister risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev);
+ tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static int32_t zxdh_intr_disable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled)
+ return 0;
+
+ zxdh_intr_cb_unreg(dev);
+ if (rte_intr_disable(dev->intr_handle) < 0)
+ return -1;
+
+ hw->intr_enabled = 0;
+ return 0;
+}
+
+static int32_t zxdh_intr_enable(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled) {
+ zxdh_intr_cb_reg(dev);
+ ret = rte_intr_enable(dev->intr_handle);
+ if (unlikely(ret))
+ PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name);
+
+ hw->intr_enabled = 1;
+ }
+ return ret;
+}
+
+static int32_t zxdh_intr_release(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ ZXDH_VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR);
+
+ zxdh_queues_unbind_intr(dev);
+ zxdh_intr_disable(dev);
+
+ rte_intr_efd_disable(dev->intr_handle);
+ rte_intr_vec_list_free(dev->intr_handle);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return 0;
+}
+
+static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint8_t i;
+
+ if (!hw->risc_intr) {
+ PMD_INIT_LOG(ERR, " to allocate risc_intr");
+ hw->risc_intr = rte_zmalloc("risc_intr",
+ ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0);
+ if (hw->risc_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate risc_intr");
+ return -ENOMEM;
+ }
+ }
+
+ for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) {
+ if (dev->intr_handle->efds[i] < 0) {
+ PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ return -1;
+ }
+
+ struct rte_intr_handle *intr_handle = hw->risc_intr + i;
+
+ intr_handle->fd = dev->intr_handle->efds[i];
+ intr_handle->type = dev->intr_handle->type;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->dtb_intr) {
+ hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0);
+ if (hw->dtb_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr");
+ return -ENOMEM;
+ }
+ }
+
+ if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) {
+ PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1);
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return -1;
+ }
+ hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1];
+ hw->dtb_intr->type = dev->intr_handle->type;
+ return 0;
+}
+
+static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+ uint16_t vec;
+
+ if (!dev->data->dev_conf.intr_conf.rxq) {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ i * 2, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ } else {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d",
+ i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec);
+ }
+ }
+ /* mask all txq intr */
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ return 0;
+}
+
+static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t ret = 0;
+
+ if (!rte_intr_cap_multiple(dev->intr_handle)) {
+ PMD_INIT_LOG(ERR, "Multiple intr vector not supported");
+ return -ENOTSUP;
+ }
+ zxdh_intr_release(dev);
+ uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM;
+
+ if (dev->data->dev_conf.intr_conf.rxq)
+ nb_efd += dev->data->nb_rx_queues;
+
+ if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) {
+ PMD_INIT_LOG(ERR, "Fail to create eventfd");
+ return -1;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM);
+ return -ENOMEM;
+ }
+ PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size);
+ if (zxdh_setup_risc_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ if (zxdh_setup_dtb_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_queues_bind_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_intr_enable(dev) < 0) {
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ return 0;
+
+free_intr_vec:
+ zxdh_intr_release(dev);
+ return ret;
+}
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -138,9 +433,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret != 0)
goto err_zxdh_init;
+ ret = zxdh_configure_intr(eth_dev);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
+ zxdh_intr_release(eth_dev);
zxdh_bar_msg_chan_exit();
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 1ee8dd744c..0a7b574477 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -8,6 +8,10 @@
#include <rte_ether.h>
#include "ethdev_driver.h"
+#include <rte_interrupts.h>
+#include <eal_interrupts.h>
+
+#include "zxdh_queue.h"
#ifdef __cplusplus
extern "C" {
@@ -44,6 +48,9 @@ struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ struct rte_intr_handle *risc_intr;
+ struct rte_intr_handle *dtb_intr;
+ struct zxdh_virtqueue **vqs;
union zxdh_virport_num vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -60,6 +67,7 @@ struct zxdh_hw {
uint8_t *isr;
uint8_t weak_barriers;
+ uint8_t intr_enabled;
uint8_t use_msix;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 4105daf5c6..71c199ec2e 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -94,6 +94,12 @@
#define ZXDH_BAR_CHAN_INDEX_SEND 0
#define ZXDH_BAR_CHAN_INDEX_RECV 1
+#define ZXDH_BAR_CHAN_MSG_SYNC 0
+#define ZXDH_BAR_CHAN_MSG_NO_EMEC 0
+#define ZXDH_BAR_CHAN_MSG_EMEC 1
+#define ZXDH_BAR_CHAN_MSG_NO_ACK 0
+#define ZXDH_BAR_CHAN_MSG_ACK 1
+
uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
{ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND},
{ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV},
@@ -135,6 +141,36 @@ struct zxdh_seqid_ring g_seqid_ring = {0};
static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL];
+static inline const char *zxdh_module_id_name(int val)
+{
+ switch (val) {
+ case ZXDH_BAR_MODULE_DBG: return "ZXDH_BAR_MODULE_DBG";
+ case ZXDH_BAR_MODULE_TBL: return "ZXDH_BAR_MODULE_TBL";
+ case ZXDH_BAR_MODULE_MISX: return "ZXDH_BAR_MODULE_MISX";
+ case ZXDH_BAR_MODULE_SDA: return "ZXDH_BAR_MODULE_SDA";
+ case ZXDH_BAR_MODULE_RDMA: return "ZXDH_BAR_MODULE_RDMA";
+ case ZXDH_BAR_MODULE_DEMO: return "ZXDH_BAR_MODULE_DEMO";
+ case ZXDH_BAR_MODULE_SMMU: return "ZXDH_BAR_MODULE_SMMU";
+ case ZXDH_BAR_MODULE_MAC: return "ZXDH_BAR_MODULE_MAC";
+ case ZXDH_BAR_MODULE_VDPA: return "ZXDH_BAR_MODULE_VDPA";
+ case ZXDH_BAR_MODULE_VQM: return "ZXDH_BAR_MODULE_VQM";
+ case ZXDH_BAR_MODULE_NP: return "ZXDH_BAR_MODULE_NP";
+ case ZXDH_BAR_MODULE_VPORT: return "ZXDH_BAR_MODULE_VPORT";
+ case ZXDH_BAR_MODULE_BDF: return "ZXDH_BAR_MODULE_BDF";
+ case ZXDH_BAR_MODULE_RISC_READY: return "ZXDH_BAR_MODULE_RISC_READY";
+ case ZXDH_BAR_MODULE_REVERSE: return "ZXDH_BAR_MODULE_REVERSE";
+ case ZXDH_BAR_MDOULE_NVME: return "ZXDH_BAR_MDOULE_NVME";
+ case ZXDH_BAR_MDOULE_NPSDK: return "ZXDH_BAR_MDOULE_NPSDK";
+ case ZXDH_BAR_MODULE_NP_TODO: return "ZXDH_BAR_MODULE_NP_TODO";
+ case ZXDH_MODULE_BAR_MSG_TO_PF: return "ZXDH_MODULE_BAR_MSG_TO_PF";
+ case ZXDH_MODULE_BAR_MSG_TO_VF: return "ZXDH_MODULE_BAR_MSG_TO_VF";
+ case ZXDH_MODULE_FLASH: return "ZXDH_MODULE_FLASH";
+ case ZXDH_BAR_MODULE_OFFSET_GET: return "ZXDH_BAR_MODULE_OFFSET_GET";
+ case ZXDH_BAR_EVENT_OVS_WITH_VCB: return "ZXDH_BAR_EVENT_OVS_WITH_VCB";
+ default: return "NA";
+ }
+}
+
static uint16_t zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
{
uint16_t lock_id = 0;
@@ -795,3 +831,155 @@ int zxdh_msg_chan_enable(struct rte_eth_dev *dev)
return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport);
}
+
+static uint64_t zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+
+ return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+}
+
+static void zxdh_bar_msg_ack_async_msg_proc(struct zxdh_bar_msg_header *msg_header,
+ uint8_t *receiver_buff)
+{
+ struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id];
+
+ if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id);
+ return;
+ }
+ if (msg_header->len > reps_info->buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u",
+ reps_info->buffer_len, msg_header->len + 4);
+ goto free_id;
+ }
+ uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr;
+
+ rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len);
+ *(uint16_t *)(reps_buffer + 1) = msg_header->len;
+ *(uint8_t *)(reps_info->reps_addr) = ZXDH_REPS_HEADER_REPLYED;
+
+free_id:
+ zxdh_bar_chan_msgid_free(msg_header->msg_id);
+}
+
+zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[ZXDH_BAR_MSG_MODULE_NUM];
+static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr,
+ struct zxdh_bar_msg_header *msg_header,
+ uint8_t *receiver_buff, void *dev)
+{
+ uint8_t *reps_buffer = rte_malloc(NULL, ZXDH_BAR_MSG_PAYLOAD_MAX_LEN, 0);
+
+ if (reps_buffer == NULL)
+ return;
+
+ zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id];
+ uint16_t reps_len = 0;
+
+ recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev);
+ msg_header->ack = ZXDH_BAR_CHAN_MSG_ACK;
+ msg_header->len = reps_len;
+ zxdh_bar_chan_msg_header_set(reply_addr, msg_header);
+ zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len);
+ zxdh_bar_chan_msg_valid_set(reply_addr, ZXDH_BAR_MSG_CHAN_USABLE);
+ rte_free(reps_buffer);
+}
+
+static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type,
+ uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+ uint64_t recv_rep_addr;
+
+ if (sync == ZXDH_BAR_CHAN_MSG_SYNC)
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+ else
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id);
+
+ return recv_rep_addr;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_check(struct zxdh_bar_msg_header *msg_header)
+{
+ if (msg_header->valid != ZXDH_BAR_MSG_CHAN_USED) {
+ PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used.");
+ return ZXDH_BAR_MSG_ERR_MODULE;
+ }
+ uint8_t module_id = msg_header->module_id;
+
+ if (module_id >= (uint8_t)ZXDH_BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id);
+ return ZXDH_BAR_MSG_ERR_MODULE;
+ }
+ uint16_t len = msg_header->len;
+
+ if (len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len);
+ return ZXDH_BAR_MSG_ERR_LEN;
+ }
+ if (msg_recv_func_tbl[msg_header->module_id] == NULL) {
+ PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register",
+ zxdh_module_id_name(module_id), module_id);
+ return ZXDH_BAR_MSG_ERR_MODULE_NOEXIST;
+ }
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev)
+{
+ struct zxdh_bar_msg_header msg_header = {0};
+ uint64_t recv_addr = 0;
+ uint16_t ret = 0;
+
+ recv_addr = zxdh_recv_addr_get(src, dst, virt_addr);
+ if (recv_addr == 0) {
+ PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst);
+ return -1;
+ }
+
+ zxdh_bar_chan_msg_header_get(recv_addr, &msg_header);
+ ret = zxdh_bar_chan_msg_header_check(&msg_header);
+
+ if (ret != ZXDH_BAR_MSG_OK) {
+ PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret);
+ return -1;
+ }
+
+ uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0);
+ if (recved_msg == NULL) {
+ PMD_MSG_LOG(ERR, "malloc temp buff failed.");
+ return -1;
+ }
+ zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len);
+
+ uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr);
+
+ if (msg_header.sync == ZXDH_BAR_CHAN_MSG_SYNC) {
+ zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_valid_set(recv_addr, ZXDH_BAR_MSG_CHAN_USABLE);
+ if (msg_header.ack == ZXDH_BAR_CHAN_MSG_ACK) {
+ zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg);
+ goto exit;
+ }
+ goto exit;
+
+exit:
+ rte_free(recved_msg);
+ return ZXDH_BAR_MSG_OK;
+}
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index 7da60ee189..691a847462 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -15,8 +15,15 @@ extern "C" {
#define ZXDH_BAR0_INDEX 0
#define ZXDH_CTRLCH_OFFSET (0x2000)
+#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000)
#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3
+#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM)
+#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1
+#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1)
+#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM)
+#define ZXDH_QUEUE_INTR_VEC_NUM 256
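+
+/* Vector layout implied by the defines above: vectors 1..3 carry the
+ * risc message channel, vector 4 the DTB, and per-queue vectors start
+ * at 5; ZXDH_INTR_NONQUE_NUM thus covers vectors 0..4.
+ */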
#define ZXDH_BAR_MSG_POLLING_SPAN 100
#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
@@ -202,6 +209,10 @@ struct zxdh_bar_msg_header {
uint16_t dst_pcieid; /* used in PF-->VF */
};
+typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len,
+ void *reps_buffer, uint16_t *reps_len, void *dev);
+
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
@@ -210,6 +221,8 @@ int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
struct zxdh_msg_recviver_mem *result);
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index 8fcab6e888..5bd0df11da 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -92,6 +92,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
rte_write32(features >> 32, &hw->common_cfg->guest_feature);
}
+static uint16_t zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec)
+{
+ rte_write16(vec, &hw->common_cfg->msix_config);
+ return rte_read16(&hw->common_cfg->msix_config);
+}
+
+static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+ rte_write16(vec, &hw->common_cfg->queue_msix_vector);
+ return rte_read16(&hw->common_cfg->queue_msix_vector);
+}
+
+static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
+{
+ return rte_read8(hw->isr);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -99,8 +117,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_status = zxdh_set_status,
.get_features = zxdh_get_features,
.set_features = zxdh_set_features,
+ .set_queue_irq = zxdh_set_queue_irq,
+ .set_config_irq = zxdh_set_config_irq,
+ .get_isr = zxdh_get_isr,
};
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw)
+{
+ return ZXDH_VTPCI_OPS(hw)->get_isr(hw);
+}
+
uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw)
{
return ZXDH_VTPCI_OPS(hw)->get_features(hw);
@@ -283,3 +309,39 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
return 0;
}
+
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev)
+{
+ uint8_t pos = 0;
+ int32_t ret = rte_pci_read_config(dev, &pos, 1, ZXDH_PCI_CAPABILITY_LIST);
+
+ if (ret != 1) {
+ PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret);
+ return ZXDH_MSIX_NONE;
+ }
+ while (pos) {
+ uint8_t cap[2] = {0};
+
+ ret = rte_pci_read_config(dev, cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap[0] == ZXDH_PCI_CAP_ID_MSIX) {
+ uint16_t flags = 0;
+
+ ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap));
+ if (ret != sizeof(flags)) {
+ PMD_INIT_LOG(ERR,
+ "failed to read pci cap at pos: %x ret %d", pos + 2, ret);
+ break;
+ }
+ if (flags & ZXDH_PCI_MSIX_ENABLE)
+ return ZXDH_MSIX_ENABLED;
+ else
+ return ZXDH_MSIX_DISABLED;
+ }
+ pos = cap[1];
+ }
+ return ZXDH_MSIX_NONE;
+}
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index bb5ae64ddf..55d8c5449c 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -22,6 +22,13 @@ enum zxdh_msix_status {
ZXDH_MSIX_ENABLED = 2
};
+/* The bit of the ISR which indicates a device has an interrupt. */
+#define ZXDH_PCI_ISR_INTR 0x1
+/* The bit of the ISR which indicates a device configuration change. */
+#define ZXDH_PCI_ISR_CONFIG 0x2
+/* Vector value used to disable MSI for queue. */
+#define ZXDH_MSI_NO_VECTOR 0x7F
+
#define ZXDH_PCI_CAPABILITY_LIST 0x34
#define ZXDH_PCI_CAP_ID_VNDR 0x09
#define ZXDH_PCI_CAP_ID_MSIX 0x11
@@ -124,6 +131,9 @@ struct zxdh_pci_ops {
uint64_t (*get_features)(struct zxdh_hw *hw);
void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+ uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec);
+ uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
+ uint8_t (*get_isr)(struct zxdh_hw *hw);
};
struct zxdh_hw_internal {
@@ -143,6 +153,8 @@ int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
+uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw);
+enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev);
#ifdef __cplusplus
}
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 53446 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v8 8/9] net/zxdh: add zxdh dev infos get ops
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (6 preceding siblings ...)
2024-10-30 9:01 ` [PATCH v8 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
2024-10-30 9:01 ` [PATCH v8 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 3513 bytes --]
Add support for getting zxdh device information.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 44 +++++++++++++++++++++++++++++++++-
drivers/net/zxdh/zxdh_ethdev.h | 3 +++
2 files changed, 46 insertions(+), 1 deletion(-)
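
With this op in place, applications can query the driver through the
standard ethdev API; a minimal sketch (the port id is illustrative):

    #include <stdio.h>
    #include <rte_ethdev.h>

    struct rte_eth_dev_info info;

    if (rte_eth_dev_info_get(0, &info) == 0)
        printf("max rxq %u, max txq %u, max pktlen %u\n",
               info.max_rx_queues, info.max_tx_queues, info.max_rx_pktlen);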
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 5963aed949..bf0d9b7b3a 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -25,6 +25,43 @@ uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ dev_info->speed_capa = rte_eth_speed_bitflag(hw->speed, RTE_ETH_LINK_FULL_DUPLEX);
+ dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX);
+ dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX);
+ dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE;
+ dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN;
+ dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS;
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER);
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM);
+
+ return 0;
+}
+
static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
{
struct zxdh_hw *hw = dev->data->dev_private;
@@ -320,6 +357,11 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+/* dev_ops for zxdh, bare necessities for basic operation */
+static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_infos_get = zxdh_dev_infos_get,
+};
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -376,7 +418,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
struct zxdh_hw *hw = eth_dev->data->dev_private;
int ret = 0;
- eth_dev->dev_ops = NULL;
+ eth_dev->dev_ops = &zxdh_eth_dev_ops;
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 0a7b574477..78f5ca6c00 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -33,6 +33,9 @@ extern "C" {
#define ZXDH_RX_QUEUES_MAX 128U
#define ZXDH_TX_QUEUES_MAX 128U
+#define ZXDH_MIN_RX_BUFSIZE 64
+#define ZXDH_MAX_RX_PKTLEN 14000U
+
union zxdh_virport_num {
uint16_t vport;
struct {
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 7588 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
* [PATCH v8 9/9] net/zxdh: add zxdh dev configure ops
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (7 preceding siblings ...)
2024-10-30 9:01 ` [PATCH v8 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
@ 2024-10-30 9:01 ` Junlong Wang
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 40973 bytes --]
provided zxdh dev configure ops for queue
check, reset, resource allocation, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_common.c | 135 ++++++++++
drivers/net/zxdh/zxdh_common.h | 12 +
drivers/net/zxdh/zxdh_ethdev.c | 450 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 34 ++-
drivers/net/zxdh/zxdh_pci.c | 98 +++++++
drivers/net/zxdh/zxdh_pci.h | 37 ++-
drivers/net/zxdh/zxdh_queue.c | 123 +++++++++
drivers/net/zxdh/zxdh_queue.h | 173 +++++++++++++
drivers/net/zxdh/zxdh_rxtx.h | 4 +-
10 files changed, 1051 insertions(+), 16 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_queue.c
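
The new dev_configure op is reached through the usual ethdev setup path;
a minimal application-side sketch (port id, queue counts and modes are
illustrative):

    #include <rte_ethdev.h>

    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE },
        .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
    };

    /* zxdh_dev_configure() requires nb_rx_queues == nb_tx_queues */
    int ret = rte_eth_dev_configure(0, 2, 2, &conf);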
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index a16db47f89..b96aa5a27e 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -18,4 +18,5 @@ sources = files(
'zxdh_pci.c',
'zxdh_msg.c',
'zxdh_common.c',
+ 'zxdh_queue.c',
)
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
index 0cb5380c5e..5ff01c418e 100644
--- a/drivers/net/zxdh/zxdh_common.c
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -20,6 +20,7 @@
#define ZXDH_COMMON_TABLE_WRITE 1
#define ZXDH_COMMON_FIELD_PHYPORT 6
+#define ZXDH_COMMON_FIELD_DATACH 3
#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2)
@@ -248,3 +249,137 @@ int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid)
int32_t ret = zxdh_get_res_panel_id(&param, pannelid);
return ret;
}
+
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ uint32_t val = *((volatile uint32_t *)(baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ *((volatile uint32_t *)(baseaddr + reg)) = val;
+}
+
+static bool zxdh_try_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ /* check whether lock is used */
+ if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
+ return false;
+
+ return true;
+}
+
+int32_t zxdh_timedlock(struct zxdh_hw *hw, uint32_t us)
+{
+ uint16_t timeout = 0;
+
+ while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ rte_delay_us_block(us);
+ /* acquire hw lock */
+ if (!zxdh_try_lock(hw)) {
+ PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout);
+ continue;
+ }
+ break;
+ }
+ if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ return 0;
+}
+
+void zxdh_release_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
+ var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
+ zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
+ }
+}
+
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg)
+{
+ uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val)
+{
+ *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val;
+}
+
+static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_pci_bar_msg desc;
+ struct zxdh_msg_recviver_mem msg_rsp;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+ if (buff_size != 0 && buff == NULL) {
+ PMD_DRV_LOG(ERR, "Buff is invalid");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE,
+ field, buff, buff_size);
+
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0);
+ if (ret != 0)
+ goto free_rsp_data;
+
+free_rsp_data:
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_datach_set(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
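+ /* payload layout: u16 queue count, then one u16 physical channel id per queue */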
+ uint16_t buff_size = (hw->queue_num + 1) * 2;
+ void *buff = rte_zmalloc(NULL, buff_size, 0);
+
+ if (unlikely(buff == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate buff");
+ return -ENOMEM;
+ }
+ memset(buff, 0, buff_size);
+ uint16_t *pdata = (uint16_t *)buff;
+ *pdata++ = hw->queue_num;
+ uint16_t i;
+
+ for (i = 0; i < hw->queue_num; i++)
+ *(pdata + i) = hw->channel_context[i].ph_chno;
+
+ int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH,
+ (void *)buff, buff_size);
+
+ if (ret != 0)
+ PMD_DRV_LOG(ERR, "Failed to setup data channel of common table");
+
+ rte_free(buff);
+ return ret;
+}
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
index f098ae4cf9..a60e25b2e3 100644
--- a/drivers/net/zxdh/zxdh_common.h
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -14,6 +14,10 @@
extern "C" {
#endif
+#define ZXDH_VF_LOCK_REG 0x90
+#define ZXDH_VF_LOCK_ENABLE_MASK 0x1
+#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10
+
struct zxdh_res_para {
uint64_t virt_addr;
uint16_t pcie_id;
@@ -23,6 +27,14 @@ struct zxdh_res_para {
int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg);
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val);
+void zxdh_release_lock(struct zxdh_hw *hw);
+int32_t zxdh_timedlock(struct zxdh_hw *hw, uint32_t us);
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg);
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val);
+int32_t zxdh_datach_set(struct rte_eth_dev *dev);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index bf0d9b7b3a..ceaba444d4 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -11,6 +11,7 @@
#include "zxdh_pci.h"
#include "zxdh_msg.h"
#include "zxdh_common.h"
+#include "zxdh_queue.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
@@ -357,8 +358,457 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+static int32_t zxdh_features_update(struct zxdh_hw *hw,
+ const struct rte_eth_rxmode *rxmode,
+ const struct rte_eth_txmode *txmode)
+{
+ uint64_t rx_offloads = rxmode->offloads;
+ uint64_t tx_offloads = txmode->offloads;
+ uint64_t req_features = hw->guest_features;
+
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM);
+
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) |
+ (1ULL << ZXDH_NET_F_GUEST_TSO6);
+
+ if (tx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_CSUM);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) |
+ (1ULL << ZXDH_NET_F_HOST_TSO6);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_UFO);
+
+ req_features = req_features & hw->host_features;
+ hw->guest_features = req_features;
+
+ ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features);
+
+ if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) &&
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) {
+ PMD_DRV_LOG(ERR, "rx checksum not available on this host");
+ return -ENOTSUP;
+ }
+
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
+ (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) {
+ PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host");
+ return -ENOTSUP;
+ }
+ return 0;
+}
+
+static bool rx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6);
+}
+
+static bool tx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO);
+}
+
+static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t i = 0;
+
+ const char *type = NULL;
+ struct zxdh_virtqueue *vq = NULL;
+ struct rte_mbuf *buf = NULL;
+ int32_t queue_type = 0;
+
+ if (hw->vqs == NULL)
+ return;
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (!vq)
+ continue;
+
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == ZXDH_VTNET_RQ)
+ type = "rxq";
+ else if (queue_type == ZXDH_VTNET_TQ)
+ type = "txq";
+ else
+ continue;
+ PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i);
+
+ while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL)
+ rte_pktmbuf_free(buf);
+ }
+}
+
+static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t base = (queue_type == ZXDH_VTNET_RQ) ? 0 : 1;
+ uint16_t i = 0;
+ uint16_t j = 0;
+ uint16_t done = 0;
+ int32_t ret = 0;
+
+ ret = zxdh_timedlock(hw, 1000);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout");
+ return -1;
+ }
+
+ /* Iterate COI table and find free channel */
+ for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) {
+ uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t));
+ uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+
+ for (j = base; j < 32; j += 2) {
+ /* Got the available channel & update COI table */
+ if ((var & (1 << j)) == 0) {
+ var |= (1 << j);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+ done = 1;
+ break;
+ }
+ }
+ if (done)
+ break;
+ }
+ zxdh_release_lock(hw);
+ /* check for no channel condition */
+ if (done != 1) {
+ PMD_INIT_LOG(ERR, "NO availd queues");
+ return -1;
+ }
+ /* return available channel ID */
+ return (i * 32) + j;
+}
+
+static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (hw->channel_context[lch].valid == 1) {
+ PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired Physics channel:%u",
+ lch, hw->channel_context[lch].ph_chno);
+ return hw->channel_context[lch].ph_chno;
+ }
+ int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch));
+
+ if (pch < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ hw->channel_context[lch].ph_chno = (uint16_t)pch;
+ hw->channel_context[lch].valid = 1;
+ PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch);
+ return 0;
+}
+
+static void zxdh_init_vring(struct zxdh_virtqueue *vq)
+{
+ int32_t size = vq->vq_nentries;
+ uint8_t *ring_mem = vq->vq_ring_virt_mem;
+
+ memset(ring_mem, 0, vq->vq_ring_size);
+
+ vq->vq_used_cons_idx = 0;
+ vq->vq_desc_head_idx = 0;
+ vq->vq_avail_idx = 0;
+ vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries);
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ virtqueue_disable_intr(vq);
+}
+
+static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx)
+{
+ char vq_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0};
+ char vq_hdr_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0};
+ const struct rte_memzone *mz = NULL;
+ const struct rte_memzone *hdr_mz = NULL;
+ uint32_t size = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct zxdh_virtnet_rx *rxvq = NULL;
+ struct zxdh_virtnet_tx *txvq = NULL;
+ struct zxdh_virtqueue *vq = NULL;
+ size_t sz_hdr_mz = 0;
+ void *sw_ring = NULL;
+ int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx);
+ int32_t numa_node = dev->device->numa_node;
+ uint16_t vtpci_phy_qidx = 0;
+ uint32_t vq_size = 0;
+ int32_t ret = 0;
+
+ if (hw->channel_context[vtpci_logic_qidx].valid == 0) {
+ PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx);
+ return -EINVAL;
+ }
+ vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno;
+
+ PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx :%d setting up physical queue: %u on NUMA node %d",
+ vtpci_logic_qidx, vtpci_phy_qidx, numa_node);
+
+ vq_size = ZXDH_QUEUE_DEPTH;
+
+ if (ZXDH_VTPCI_OPS(hw)->set_queue_num != NULL)
+ ZXDH_VTPCI_OPS(hw)->set_queue_num(hw, vtpci_phy_qidx, vq_size);
+
+ snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx);
+
+ size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct zxdh_vq_desc_extra),
+ RTE_CACHE_LINE_SIZE);
+ if (queue_type == ZXDH_VTNET_TQ) {
+ /*
+ * For each xmit packet, allocate a zxdh_net_hdr
+ * and indirect ring elements
+ */
+ sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region);
+ }
+
+ vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node);
+ if (vq == NULL) {
+ PMD_INIT_LOG(ERR, "can not allocate vq");
+ return -ENOMEM;
+ }
+ hw->vqs[vtpci_logic_qidx] = vq;
+
+ vq->hw = hw;
+ vq->vq_queue_index = vtpci_phy_qidx;
+ vq->vq_nentries = vq_size;
+
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = ZXDH_VRING_PACKED_DESC_F_AVAIL;
+ vq->vq_packed.event_flags_shadow = 0;
+ if (queue_type == ZXDH_VTNET_RQ)
+ vq->vq_packed.cached_flags |= ZXDH_VRING_DESC_F_WRITE;
+
+ /*
+ * Reserve a memzone for vring elements
+ */
+ size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN);
+ vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN);
+ PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
+
+ mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ ZXDH_PCI_VRING_ALIGN);
+ if (mz == NULL) {
+ if (rte_errno == EEXIST)
+ mz = rte_memzone_lookup(vq_name);
+ if (mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+
+ memset(mz->addr, 0, mz->len);
+
+ vq->vq_ring_mem = mz->iova;
+ vq->vq_ring_virt_mem = mz->addr;
+
+ zxdh_init_vring(vq);
+
+ if (sz_hdr_mz) {
+ snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
+ dev->data->port_id, vtpci_phy_qidx);
+ hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ RTE_CACHE_LINE_SIZE);
+ if (hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ hdr_mz = rte_memzone_lookup(vq_hdr_name);
+ if (hdr_mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+ }
+
+ if (queue_type == ZXDH_VTNET_RQ) {
+ size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]);
+
+ sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node);
+ if (!sw_ring) {
+ PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+
+ vq->sw_ring = sw_ring;
+ rxvq = &vq->rxq;
+ rxvq->vq = vq;
+ rxvq->port_id = dev->data->port_id;
+ rxvq->mz = mz;
+ } else { /* queue_type == VTNET_TQ */
+ txvq = &vq->txq;
+ txvq->vq = vq;
+ txvq->port_id = dev->data->port_id;
+ txvq->mz = mz;
+ txvq->zxdh_net_hdr_mz = hdr_mz;
+ txvq->zxdh_net_hdr_mem = hdr_mz->iova;
+ }
+
+ vq->offset = offsetof(struct rte_mbuf, buf_iova);
+ if (queue_type == ZXDH_VTNET_TQ) {
+ struct zxdh_tx_region *txr = hdr_mz->addr;
+ uint32_t i;
+
+ memset(txr, 0, vq_size * sizeof(*txr));
+ for (i = 0; i < vq_size; i++) {
+ /* first indirect descriptor is always the tx header */
+ struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir;
+
+ vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir));
+ start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) +
+ offsetof(struct zxdh_tx_region, tx_hdr);
+ /* length will be updated to actual pi hdr size when xmit pkt */
+ start_dp->len = 0;
+ }
+ }
+ if (ZXDH_VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
+ PMD_INIT_LOG(ERR, "setup_queue failed");
+ return -EINVAL;
+ }
+ return 0;
+fail_q_alloc:
+ rte_free(sw_ring);
+ rte_memzone_free(hdr_mz);
+ rte_memzone_free(mz);
+ rte_free(vq);
+ return ret;
+}
+
+static int32_t zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq)
+{
+ uint16_t lch;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ hw->vqs = rte_zmalloc(NULL, sizeof(struct zxdh_virtqueue *) * nr_vq, 0);
+ if (!hw->vqs) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vqs");
+ return -ENOMEM;
+ }
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (zxdh_acquire_channel(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire the channels");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ if (zxdh_init_queue(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to alloc virtio queue");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+
+static int32_t zxdh_dev_configure(struct rte_eth_dev *dev)
+{
+ const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint32_t nr_vq = 0;
+ int32_t ret = 0;
+
+ if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+ return -EINVAL;
+ }
+ if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues,
+ ZXDH_QUEUES_NUM_MAX);
+ return -EINVAL;
+ }
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode);
+ return -EINVAL;
+ }
+
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode);
+ return -EINVAL;
+ }
+
+ ret = zxdh_features_update(hw, rxmode, txmode);
+ if (ret < 0)
+ return ret;
+
+ /* check if lsc interrupt feature is enabled */
+ if (dev->data->dev_conf.intr_conf.lsc) {
+ if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
+ PMD_DRV_LOG(ERR, "link status not supported by host");
+ return -ENOTSUP;
+ }
+ }
+
+ hw->has_tx_offload = tx_offload_enabled(hw);
+ hw->has_rx_offload = rx_offload_enabled(hw);
+
+ nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues;
+ if (nr_vq == hw->queue_num)
+ return 0;
+
+ PMD_DRV_LOG(DEBUG, "queue changed need reset ");
+ /* Reset the device although not necessary at startup */
+ zxdh_vtpci_reset(hw);
+
+ /* Tell the host we've noticed this device. */
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK);
+
+ /* Tell the host we've known how to drive the device. */
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER);
+ /* The queues need to be released when reconfiguring */
+ if (hw->vqs != NULL) {
+ zxdh_dev_free_mbufs(dev);
+ zxdh_free_queues(dev);
+ }
+
+ hw->queue_num = nr_vq;
+ ret = zxdh_alloc_queues(dev, nr_vq);
+ if (ret < 0)
+ return ret;
+
+ zxdh_datach_set(dev);
+
+ if (zxdh_configure_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to configure interrupt");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+
+ zxdh_vtpci_reinit_complete(hw);
+
+ return ret;
+}
+
/* dev_ops for zxdh, bare necessities for basic operation */
static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_configure = zxdh_dev_configure,
.dev_infos_get = zxdh_dev_infos_get,
};
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 78f5ca6c00..49ddb505d8 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -11,8 +11,6 @@
#include <rte_interrupts.h>
#include <eal_interrupts.h>
-#include "zxdh_queue.h"
-
#ifdef __cplusplus
extern "C" {
#endif
@@ -25,16 +23,23 @@ extern "C" {
#define ZXDH_E312_PF_DEVICEID 0x8049
#define ZXDH_E312_VF_DEVICEID 0x8060
-#define ZXDH_MAX_UC_MAC_ADDRS 32
-#define ZXDH_MAX_MC_MAC_ADDRS 32
-#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+#define ZXDH_MAX_UC_MAC_ADDRS 32
+#define ZXDH_MAX_MC_MAC_ADDRS 32
+#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+
+#define ZXDH_NUM_BARS 2
+#define ZXDH_RX_QUEUES_MAX 128U
+#define ZXDH_TX_QUEUES_MAX 128U
-#define ZXDH_NUM_BARS 2
-#define ZXDH_RX_QUEUES_MAX 128U
-#define ZXDH_TX_QUEUES_MAX 128U
+#define ZXDH_MIN_RX_BUFSIZE 64
+#define ZXDH_MAX_RX_PKTLEN 14000U
+#define ZXDH_QUEUE_DEPTH 1024
+#define ZXDH_QUEUES_BASE 0
+#define ZXDH_TOTAL_QUEUES_NUM 4096
+#define ZXDH_QUEUES_NUM_MAX 256
+#define ZXDH_QUERES_SHARE_BASE (0x5000)
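+/* Shared COI bitmap base: one bit per physical channel; even bits map
+ * Rx channels, odd bits Tx (see zxdh_get_available_channel()).
+ */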
-#define ZXDH_MIN_RX_BUFSIZE 64
-#define ZXDH_MAX_RX_PKTLEN 14000U
+#define ZXDH_MBUF_BURST_SZ 64
union zxdh_virport_num {
uint16_t vport;
@@ -47,6 +52,11 @@ union zxdh_virport_num {
};
};
+struct zxdh_chnl_context {
+ uint16_t valid;
+ uint16_t ph_chno;
+};
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
@@ -54,6 +64,7 @@ struct zxdh_hw {
struct rte_intr_handle *risc_intr;
struct rte_intr_handle *dtb_intr;
struct zxdh_virtqueue **vqs;
+ struct zxdh_chnl_context channel_context[ZXDH_QUEUES_NUM_MAX];
union zxdh_virport_num vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -67,6 +78,7 @@ struct zxdh_hw {
uint16_t device_id;
uint16_t port_id;
uint16_t vfid;
+ uint16_t queue_num;
uint8_t *isr;
uint8_t weak_barriers;
@@ -79,6 +91,8 @@ struct zxdh_hw {
uint8_t msg_chan_init;
uint8_t phyport;
uint8_t panel_id;
+ uint8_t has_tx_offload;
+ uint8_t has_rx_offload;
};
uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v);
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index 5bd0df11da..f38de20baf 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -110,6 +110,87 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
return rte_read8(hw->isr);
}
+static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ return rte_read16(&hw->common_cfg->queue_size);
+}
+
+static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ rte_write16(vq_size, &hw->common_cfg->queue_size);
+}
+
+static int32_t check_vq_phys_addr_ok(struct zxdh_virtqueue *vq)
+{
+ if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) {
+ PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!");
+ return 0;
+ }
+ return 1;
+}
+
+static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
+{
+ rte_write32(val & ((1ULL << 32) - 1), lo);
+ rte_write32(val >> 32, hi);
+}
+
+static int32_t zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq)
+{
+ uint64_t desc_addr = 0;
+ uint64_t avail_addr = 0;
+ uint64_t used_addr = 0;
+ uint16_t notify_off = 0;
+
+ if (!check_vq_phys_addr_ok(vq))
+ return -1;
+
+ desc_addr = vq->vq_ring_mem;
+ avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc);
+ if (vtpci_packed_queue(vq->hw)) {
+ used_addr = RTE_ALIGN_CEIL((avail_addr +
+ sizeof(struct zxdh_vring_packed_desc_event)),
+ ZXDH_PCI_VRING_ALIGN);
+ } else {
+ used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct zxdh_vring_avail,
+ ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN);
+ }
+
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */
+ notify_off = 0;
+ vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
+ notify_off * hw->notify_off_multiplier);
+
+ rte_write16(1, &hw->common_cfg->queue_enable);
+
+ return 0;
+}
+
+static void zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(0, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ rte_write16(0, &hw->common_cfg->queue_enable);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -120,6 +201,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_queue_irq = zxdh_set_queue_irq,
.set_config_irq = zxdh_set_config_irq,
.get_isr = zxdh_get_isr,
+ .get_queue_num = zxdh_get_queue_num,
+ .set_queue_num = zxdh_set_queue_num,
+ .setup_queue = zxdh_setup_queue,
+ .del_queue = zxdh_del_queue,
};
uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw)
@@ -146,6 +231,19 @@ void zxdh_vtpci_reset(struct zxdh_hw *hw)
PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
}
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw)
+{
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK);
+}
+
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ if (status != ZXDH_CONFIG_STATUS_RESET)
+ status |= ZXDH_VTPCI_OPS(hw)->get_status(hw);
+
+ ZXDH_VTPCI_OPS(hw)->set_status(hw, status);
+}
+
static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
{
uint8_t bar = cap->bar;
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index 55d8c5449c..231824f3c4 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -8,7 +8,9 @@
#include <stdint.h>
#include <stdbool.h>
+#include <rte_pci.h>
#include <bus_pci_driver.h>
+#include <ethdev_driver.h>
#include "zxdh_ethdev.h"
@@ -29,13 +31,25 @@ enum zxdh_msix_status {
/* Vector value used to disable MSI for queue. */
#define ZXDH_MSI_NO_VECTOR 0x7F
-#define ZXDH_PCI_CAPABILITY_LIST 0x34
-#define ZXDH_PCI_CAP_ID_VNDR 0x09
-#define ZXDH_PCI_CAP_ID_MSIX 0x11
+#define ZXDH_PCI_CAPABILITY_LIST 0x34
+#define ZXDH_PCI_CAP_ID_VNDR 0x09
+#define ZXDH_PCI_CAP_ID_MSIX 0x11
-#define ZXDH_PCI_MSIX_ENABLE 0x8000
+#define ZXDH_PCI_MSIX_ENABLE 0x8000
+#define ZXDH_PCI_VRING_ALIGN 4096
+#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */
+#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */
+#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */
#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */
+#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */
+#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */
+#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */
+
+#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */
+#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */
+#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */
#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
@@ -60,6 +74,8 @@ enum zxdh_msix_status {
#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
#define ZXDH_CONFIG_STATUS_FAILED 0x80
+#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12
+
struct zxdh_net_config {
/* The config defining mac address (if ZXDH_NET_F_MAC) */
uint8_t mac[RTE_ETHER_ADDR_LEN];
@@ -122,6 +138,11 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_F_RING_PACKED);
+}
+
struct zxdh_pci_ops {
void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
@@ -134,6 +155,11 @@ struct zxdh_pci_ops {
uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec);
uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
uint8_t (*get_isr)(struct zxdh_hw *hw);
+ uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id);
+ void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size);
+
+ int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq);
+ void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq);
};
struct zxdh_hw_internal {
@@ -156,6 +182,9 @@ uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw);
uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw);
enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev);
+void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw);
+void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c
new file mode 100644
index 0000000000..2978a9f272
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.c
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+
+#include "zxdh_queue.h"
+#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+#include "zxdh_common.h"
+#include "zxdh_msg.h"
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq)
+{
+ struct rte_mbuf *cookie = NULL;
+ int32_t idx = 0;
+
+ if (vq == NULL)
+ return NULL;
+
+ for (idx = 0; idx < vq->vq_nentries; idx++) {
+ cookie = vq->vq_descx[idx].cookie;
+ if (cookie != NULL) {
+ vq->vq_descx[idx].cookie = NULL;
+ return cookie;
+ }
+ }
+ return NULL;
+}
+
+static int32_t zxdh_release_channel(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t var = 0;
+ uint32_t addr = 0;
+ uint32_t widx = 0;
+ uint32_t bidx = 0;
+ uint16_t pch = 0;
+ uint16_t lch = 0;
+ int32_t ret = 0;
+
+ ret = zxdh_timedlock(hw, 1000);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout");
+ return -1;
+ }
+
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (hw->channel_context[lch].valid == 0) {
+ PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to release", lch);
+ continue;
+ }
+
+ pch = hw->channel_context[lch].ph_chno;
+ widx = pch / 32;
+ bidx = pch % 32;
+
+ addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t));
+ var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+ var &= ~(1 << bidx);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+
+ hw->channel_context[lch].valid = 0;
+ hw->channel_context[lch].ph_chno = 0;
+ }
+
+ zxdh_release_lock(hw);
+
+ return 0;
+}
+
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx)
+{
+ if (vtpci_queue_idx % 2 == 0)
+ return ZXDH_VTNET_RQ;
+ else
+ return ZXDH_VTNET_TQ;
+}
+
+int32_t zxdh_free_queues(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ struct zxdh_virtqueue *vq = NULL;
+ int32_t queue_type = 0;
+ uint16_t i = 0;
+
+ if (hw->vqs == NULL)
+ return 0;
+
+ if (zxdh_release_channel(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to clear coi table");
+ return -1;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (vq == NULL)
+ continue;
+
+ ZXDH_VTPCI_OPS(hw)->del_queue(hw, vq);
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == ZXDH_VTNET_RQ) {
+ rte_free(vq->sw_ring);
+ rte_memzone_free(vq->rxq.mz);
+ } else if (queue_type == ZXDH_VTNET_TQ) {
+ rte_memzone_free(vq->txq.mz);
+ rte_memzone_free(vq->txq.zxdh_net_hdr_mz);
+ }
+
+ rte_free(vq);
+ hw->vqs[i] = NULL;
+ PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i);
+ }
+
+ rte_free(hw->vqs);
+ hw->vqs = NULL;
+
+ return 0;
+}
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
index 66f37ec612..a9ba0be0d0 100644
--- a/drivers/net/zxdh/zxdh_queue.h
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -11,11 +11,30 @@
#include "zxdh_ethdev.h"
#include "zxdh_rxtx.h"
+#include "zxdh_pci.h"
#ifdef __cplusplus
extern "C" {
#endif
+enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 };
+
+#define ZXDH_VIRTQUEUE_MAX_NAME_SZ 32
+#define ZXDH_RQ_QUEUE_IDX 0
+#define ZXDH_TQ_QUEUE_IDX 1
+#define ZXDH_MAX_TX_INDIRECT 8
+
+/* This marks a buffer as write-only (otherwise read-only). */
+#define ZXDH_VRING_DESC_F_WRITE 2
+/* This flag means the descriptor was made available by the driver */
+#define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7))
+
+#define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0
+#define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1
+#define ZXDH_RING_EVENT_FLAGS_DESC 0x2
+
+#define ZXDH_VQ_RING_DESC_CHAIN_END 32768
+
/** ring descriptors: 16 bytes.
* These can chain together via "next".
**/
@@ -26,6 +45,19 @@ struct zxdh_vring_desc {
uint16_t next; /* We chain unused descriptors via this. */
} __rte_packed;
+struct zxdh_vring_used_elem {
+ /* Index of start of used descriptor chain. */
+ uint32_t id;
+ /* Total length of the descriptor chain which was written to. */
+ uint32_t len;
+};
+
+struct zxdh_vring_used {
+ uint16_t flags;
+ uint16_t idx;
+ struct zxdh_vring_used_elem ring[];
+};
+
struct zxdh_vring_avail {
uint16_t flags;
uint16_t idx;
@@ -102,6 +134,147 @@ struct zxdh_virtqueue {
struct zxdh_vq_desc_extra vq_descx[];
} __rte_packed;
+struct zxdh_type_hdr {
+ uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */
+ uint8_t pd_len;
+ uint8_t num_buffers;
+ uint8_t reserved;
+} __rte_packed; /* 4B */
+
+struct zxdh_pi_hdr {
+ uint8_t pi_len;
+ uint8_t pkt_type;
+ uint16_t vlan_id;
+ uint32_t ipv6_extend;
+ uint16_t l3_offset;
+ uint16_t l4_offset;
+ uint8_t phy_port;
+ uint8_t pkt_flag_hi8;
+ uint16_t pkt_flag_lw16;
+ union {
+ struct {
+ uint64_t sa_idx;
+ uint8_t reserved_8[8];
+ } dl;
+ struct {
+ uint32_t lro_flag;
+ uint32_t lro_mss;
+ uint16_t err_code;
+ uint16_t pm_id;
+ uint16_t pkt_len;
+ uint8_t reserved[2];
+ } ul;
+ };
+} __rte_packed; /* 32B */
+
+struct zxdh_pd_hdr_dl {
+ uint32_t ol_flag;
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t dst_vfid;
+ uint32_t svlan_insert;
+ uint32_t cvlan_insert;
+} __rte_packed; /* 16B */
+
+struct zxdh_net_hdr_dl {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_dl pd_hdr; /* 16B */
+} __rte_packed;
+
+struct zxdh_pd_hdr_ul {
+ uint32_t pkt_flag;
+ uint32_t rss_hash;
+ uint32_t fd;
+ uint32_t striped_vlan_tci;
+ /* ovs */
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t src_vfid;
+ /* */
+ uint16_t pkt_type_out;
+ uint16_t pkt_type_in;
+} __rte_packed; /* 24B */
+
+struct zxdh_net_hdr_ul {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_ul pd_hdr; /* 24B */
+} __rte_packed; /* 60B */
+
+struct zxdh_tx_region {
+ struct zxdh_net_hdr_dl tx_hdr;
+ union {
+ struct zxdh_vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT];
+ struct zxdh_vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT];
+ } __rte_packed;
+};
+
+static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align)
+{
+ size_t size;
+
+ if (vtpci_packed_queue(hw)) {
+ size = num * sizeof(struct zxdh_vring_packed_desc);
+ size += sizeof(struct zxdh_vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct zxdh_vring_packed_desc_event);
+ return size;
+ }
+
+ size = num * sizeof(struct zxdh_vring_desc);
+ size += sizeof(struct zxdh_vring_avail) + (num * sizeof(uint16_t));
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct zxdh_vring_used) + (num * sizeof(struct zxdh_vring_used_elem));
+ return size;
+}
+
+static inline void vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p,
+ unsigned long align, uint32_t num)
+{
+ vr->num = num;
+ vr->desc = (struct zxdh_vring_packed_desc *)p;
+ vr->driver = (struct zxdh_vring_packed_desc_event *)(p +
+ vr->num * sizeof(struct zxdh_vring_packed_desc));
+ vr->device = (struct zxdh_vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver +
+ sizeof(struct zxdh_vring_packed_desc_event)), align);
+}
+
+static inline void vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n - 1; i++) {
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = i + 1;
+ }
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = ZXDH_VQ_RING_DESC_CHAIN_END;
+}
+
+static inline void vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n; i++) {
+ dp[i].id = (uint16_t)i;
+ dp[i].flags = ZXDH_VRING_DESC_F_WRITE;
+ }
+}
+
+static inline void virtqueue_disable_intr(struct zxdh_virtqueue *vq)
+{
+ if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow;
+ }
+}
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq);
+int32_t zxdh_free_queues(struct rte_eth_dev *dev);
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx);
+
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
index 31b1c8f0a5..7d4b5481ec 100644
--- a/drivers/net/zxdh/zxdh_rxtx.h
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -40,8 +40,8 @@ struct zxdh_virtnet_rx {
struct zxdh_virtnet_tx {
struct zxdh_virtqueue *vq;
- const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */
- rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */
+ const struct rte_memzone *zxdh_net_hdr_mz; /* memzone to populate hdr. */
+ rte_iova_t zxdh_net_hdr_mem; /* hdr for each xmit packet */
uint16_t queue_id; /* DPDK queue index. */
uint16_t port_id; /* Device port identifier. */
struct zxdh_virtnet_stats stats;
--
2.27.0
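A note on the vring_size() helper shown above: for a packed ring the
region holds the descriptor table, the driver event structure, padding up
to 'align', then the device event structure. Below is a minimal
standalone sketch of that arithmetic, assuming the element sizes declared
in this header (16-byte packed descriptors, 4-byte event structures); the
macro and function names are illustrative, not part of the patch.

#include <stdio.h>
#include <stdint.h>

#define ALIGN_CEIL(v, a) ((((v) + (a) - 1) / (a)) * (a))

static size_t packed_ring_bytes(uint32_t num, size_t align)
{
	size_t size = (size_t)num * 16;	/* zxdh_vring_packed_desc[] */

	size += 4;			/* driver event structure */
	size = ALIGN_CEIL(size, align);
	size += 4;			/* device event structure */
	return size;
}

int main(void)
{
	/* 256 descriptors, 4 KiB alignment: 4096 + 4 -> 8192 + 4 = 8196 */
	printf("%zu\n", packed_ring_bytes(256, 4096));
	return 0;
}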
* Re: [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation
2024-10-30 9:01 ` [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
@ 2024-10-30 14:55 ` David Marchand
0 siblings, 0 replies; 76+ messages in thread
From: David Marchand @ 2024-10-30 14:55 UTC (permalink / raw)
To: Junlong Wang
Cc: dev, wang.yong19, Maxime Coquelin, Stephen Hemminger,
Thomas Monjalon, Ferruh Yigit
On Wed, Oct 30, 2024 at 10:07 AM Junlong Wang <wang.junlong1@zte.com.cn> wrote:
>
> Add device pci init implementation,
> to obtain PCI capability and read configuration, etc.
This title with PCI intrigued me, so I had a look at this patch.
Please use the PCI bus API and don't redefine common PCI constants/helpers.
To make it easier for you, I recommend having a look at:
$ git show a10b6e53fe baa9c55009 7bb1168d98 -- drivers/net/virtio/virtio_pci.c
--
David Marchand
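In other words, the PCI bus driver API already provides capability lookup,
so a PMD should not walk the capability list by hand or redefine the
standard constants. A minimal sketch of the suggested shape, assuming
rte_pci_find_capability() and RTE_PCI_CAP_ID_VNDR from the bus/PCI headers
(the helper name here is illustrative):

#include <rte_pci.h>
#include <bus_pci_driver.h>

static off_t find_vendor_cap(const struct rte_pci_device *dev)
{
	/* the bus API walks the capability list for us */
	off_t pos = rte_pci_find_capability(dev, RTE_PCI_CAP_ID_VNDR);
	uint8_t hdr[2];

	if (pos <= 0)
		return -1; /* capability absent or config space unreadable */
	/* read the generic capability header at that offset */
	if (rte_pci_read_config(dev, hdr, sizeof(hdr), pos) != sizeof(hdr))
		return -1;
	return pos;
}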
* [PATCH v9 0/9] net/zxdh: introduce net zxdh driver
2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
` (8 more replies)
0 siblings, 9 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
v9:
- fix 'v8 3/9' patch to use the PCI bus API
and common PCI constants, per David Marchand's comments.
v8:
- fix flexible arrays and -Waddress-of-packed-member errors.
- all structs, enums, defines, etc. use the zxdh/ZXDH_ prefix.
- use zxdh_try/release_lock, and move the loop into zxdh_timedlock,
making the hardware lock follow the spinlock pattern.
v7:
- add release notes and fix zxdh.rst issues.
- avoid using pthread; use rte_spinlock_lock instead.
- use the ZXDH_ prefix for some definitions.
- resolve issues according to Thomas's comments.
v6:
- Resolve ci/intel compilation issues.
- fix meson.build indentation in earlier patch.
v5:
- split driver into multiple patches covering part of the zxdh driver;
dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc. will be provided later.
- fix errors reported by scripts.
- move the product link in zxdh.rst.
- fix meson check to use RTE_ARCH_X86_64/RTE_ARCH_ARM64.
- modify other comments according to Ferruh's comments.
Junlong Wang (9):
net/zxdh: add zxdh ethdev pmd driver
net/zxdh: add logging implementation
net/zxdh: add zxdh device pci init implementation
net/zxdh: add msg chan and msg hwlock init
net/zxdh: add msg chan enable implementation
net/zxdh: add zxdh get device backend infos
net/zxdh: add configure zxdh intr implementation
net/zxdh: add zxdh dev infos get ops
net/zxdh: add zxdh dev configure ops
MAINTAINERS | 6 +
doc/guides/nics/features/zxdh.ini | 9 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 31 +
doc/guides/rel_notes/release_24_11.rst | 4 +
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 22 +
drivers/net/zxdh/zxdh_common.c | 385 ++++++++++
drivers/net/zxdh/zxdh_common.h | 42 ++
drivers/net/zxdh/zxdh_ethdev.c | 994 +++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 102 +++
drivers/net/zxdh/zxdh_logs.h | 40 +
drivers/net/zxdh/zxdh_msg.c | 986 ++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 229 ++++++
drivers/net/zxdh/zxdh_pci.c | 402 ++++++++++
drivers/net/zxdh/zxdh_pci.h | 175 +++++
drivers/net/zxdh/zxdh_queue.c | 123 +++
drivers/net/zxdh/zxdh_queue.h | 281 +++++++
drivers/net/zxdh/zxdh_rxtx.h | 55 ++
19 files changed, 3888 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
create mode 100644 drivers/net/zxdh/zxdh_logs.h
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.c
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
--
2.27.0
* [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 2/9] net/zxdh: add logging implementation Junlong Wang
` (7 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
Add basic zxdh ethdev init and register PCI probe functions.
Update doc files.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
MAINTAINERS | 6 ++
doc/guides/nics/features/zxdh.ini | 9 +++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/zxdh.rst | 31 +++++++++
doc/guides/rel_notes/release_24_11.rst | 4 ++
drivers/net/meson.build | 1 +
drivers/net/zxdh/meson.build | 18 +++++
drivers/net/zxdh/zxdh_ethdev.c | 92 ++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 44 ++++++++++++
9 files changed, 206 insertions(+)
create mode 100644 doc/guides/nics/features/zxdh.ini
create mode 100644 doc/guides/nics/zxdh.rst
create mode 100644 drivers/net/zxdh/meson.build
create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 8919d78919..a5534be2ab 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1051,6 +1051,12 @@ F: drivers/net/virtio/
F: doc/guides/nics/virtio.rst
F: doc/guides/nics/features/virtio*.ini
+ZTE zxdh
+M: Lijie Shan <shan.lijie@zte.com.cn>
+F: drivers/net/zxdh/
+F: doc/guides/nics/zxdh.rst
+F: doc/guides/nics/features/zxdh.ini
+
Wind River AVP
M: Steven Webster <steven.webster@windriver.com>
M: Matt Peters <matt.peters@windriver.com>
diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini
new file mode 100644
index 0000000000..05c8091ed7
--- /dev/null
+++ b/doc/guides/nics/features/zxdh.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'zxdh' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
+ARMv8 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc7988a..8e371ac4a5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -69,3 +69,4 @@ Network Interface Controller Drivers
vhost
virtio
vmxnet3
+ zxdh
diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
new file mode 100644
index 0000000000..920ff5175e
--- /dev/null
+++ b/doc/guides/nics/zxdh.rst
@@ -0,0 +1,31 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2024 ZTE Corporation.
+
+ZXDH Poll Mode Driver
+======================
+
+The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
+for 25/100 Gbps ZXDH NX Series Ethernet Controller based on
+the ZTE Ethernet Controller E310/E312.
+
+- Learn about ZXDH NX Series Ethernet Controller NICs using
+ `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
+
+Features
+--------
+
+Features of the ZXDH PMD are:
+
+- Multi arch support: x86_64, ARMv8.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+
+X86-32, Power8, ARMv7, RISC-V, Windows and BSD are not supported yet.
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 15b64a1829..dd9048d561 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -162,6 +162,10 @@ New Features
* Added initialization of FPGA modules related to flow HW offload.
* Added basic handling of the virtual queues.
+* **Updated ZTE zxdh net driver.**
+
+ * Added ethdev driver support for zxdh NX Series Ethernet Controller.
+
* **Added cryptodev queue pair reset support.**
A new API ``rte_cryptodev_queue_pair_reset`` is added
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index fb6d34b782..0a12914534 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -62,6 +62,7 @@ drivers = [
'vhost',
'virtio',
'vmxnet3',
+ 'zxdh',
]
std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
new file mode 100644
index 0000000000..932fb1c835
--- /dev/null
+++ b/drivers/net/zxdh/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 ZTE Corporation
+
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+if not dpdk_conf.has('RTE_ARCH_X86_64') and not dpdk_conf.has('RTE_ARCH_ARM64')
+ build = false
+ reason = 'only supported on x86_64 and aarch64'
+ subdir_done()
+endif
+
+sources = files(
+ 'zxdh_ethdev.c',
+)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
new file mode 100644
index 0000000000..5b6c9ec1bf
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <ethdev_pci.h>
+#include <bus_pci_driver.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ int ret = 0;
+
+ eth_dev->dev_ops = NULL;
+
+ /* Allocate memory for storing MAC addresses */
+ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
+ if (eth_dev->data->mac_addrs == NULL)
+ return -ENOMEM;
+
+ memset(hw, 0, sizeof(*hw));
+ hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
+ if (hw->bar_addr[0] == 0)
+ return -EIO;
+
+ hw->device_id = pci_dev->id.device_id;
+ hw->port_id = eth_dev->data->port_id;
+ hw->eth_dev = eth_dev;
+ hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ hw->is_pf = 0;
+
+ if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID ||
+ pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) {
+ hw->is_pf = 1;
+ }
+
+ return ret;
+}
+
+static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct zxdh_hw),
+ zxdh_eth_dev_init);
+}
+
+static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused)
+{
+ int ret = 0;
+
+ return ret;
+}
+
+static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ int ret = 0;
+
+ ret = zxdh_dev_close(eth_dev);
+
+ return ret;
+}
+
+static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev)
+{
+ int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit);
+
+ return ret;
+}
+
+static const struct rte_pci_id pci_id_zxdh_map[] = {
+ {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E310_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E310_VF_DEVICEID)},
+ {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_PF_DEVICEID)},
+ {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_VF_DEVICEID)},
+ {.vendor_id = 0, /* sentinel */ },
+};
+static struct rte_pci_driver zxdh_pmd = {
+ .id_table = pci_id_zxdh_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = zxdh_eth_pci_probe,
+ .remove = zxdh_eth_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
new file mode 100644
index 0000000000..a11e3624a9
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_ETHDEV_H
+#define ZXDH_ETHDEV_H
+
+#include "ethdev_driver.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* ZXDH PCI vendor/device ID. */
+#define ZXDH_PCI_VENDOR_ID 0x1cf2
+
+#define ZXDH_E310_PF_DEVICEID 0x8061
+#define ZXDH_E310_VF_DEVICEID 0x8062
+#define ZXDH_E312_PF_DEVICEID 0x8049
+#define ZXDH_E312_VF_DEVICEID 0x8060
+
+#define ZXDH_MAX_UC_MAC_ADDRS 32
+#define ZXDH_MAX_MC_MAC_ADDRS 32
+#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
+
+#define ZXDH_NUM_BARS 2
+
+struct zxdh_hw {
+ struct rte_eth_dev *eth_dev;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+
+ uint32_t speed;
+ uint16_t device_id;
+ uint16_t port_id;
+
+ uint8_t duplex;
+ uint8_t is_pf;
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_ETHDEV_H */
--
2.27.0
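For context on how this patch lands in an application: once the PMD is
registered via RTE_PMD_REGISTER_PCI and the device is bound to vfio-pci
(per the KMOD_DEP above), rte_eal_init() invokes zxdh_eth_pci_probe() and
the port becomes visible through the usual ethdev iterators. A minimal
sketch using only the standard ethdev API, nothing zxdh-specific assumed:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
	uint16_t port_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;
	RTE_ETH_FOREACH_DEV(port_id) {
		struct rte_eth_dev_info info;

		if (rte_eth_dev_info_get(port_id, &info) == 0)
			printf("port %u: driver %s\n", port_id, info.driver_name);
	}
	return 0;
}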
* [PATCH v9 2/9] net/zxdh: add logging implementation
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
` (6 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
Add zxdh logging implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++--
drivers/net/zxdh/zxdh_logs.h | 40 ++++++++++++++++++++++++++++++++++
2 files changed, 53 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_logs.h
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 5b6c9ec1bf..c911284423 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -7,6 +7,7 @@
#include <rte_ethdev.h>
#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL)
+ if (eth_dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN);
return -ENOMEM;
+ }
memset(hw, 0, sizeof(*hw));
hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
- if (hw->bar_addr[0] == 0)
+ if (hw->bar_addr[0] == 0) {
+ PMD_INIT_LOG(ERR, "Bad mem resource.");
return -EIO;
+ }
hw->device_id = pci_dev->id.device_id;
hw->port_id = eth_dev->data->port_id;
@@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = {
RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE);
diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h
new file mode 100644
index 0000000000..a8a6a3135b
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_logs.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_LOGS_H
+#define ZXDH_LOGS_H
+
+#include <rte_log.h>
+
+extern int zxdh_logtype_init;
+#define RTE_LOGTYPE_ZXDH_INIT zxdh_logtype_init
+#define PMD_INIT_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_INIT, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_driver;
+#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver
+#define PMD_DRV_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_rx;
+#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx
+#define PMD_RX_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_tx;
+#define RTE_LOGTYPE_ZXDH_TX zxdh_logtype_tx
+#define PMD_TX_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_TX, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_msg;
+#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg
+#define PMD_MSG_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+#endif /* ZXDH_LOGS_H */
--
2.27.0
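Usage matches other PMD log helpers: each macro tags its message with a
dedicated logtype, so verbosity can be tuned per component at runtime,
e.g. --log-level=pmd.net.zxdh.driver:debug (assuming the default
pmd.net.zxdh logtype prefix from the driver build). A small sketch, with
an illustrative function name:

#include <stdint.h>
#include "zxdh_logs.h"

static void log_example(uint16_t port_id)
{
	/* expands to an rte_log() call on zxdh_logtype_driver,
	 * prefixed with "offload_zxdh log_example(): "
	 */
	PMD_DRV_LOG(INFO, "port %u starting", port_id);
	PMD_MSG_LOG(DEBUG, "bar msg channel ready");
}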
* [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-11-01 6:21 ` [PATCH v9 2/9] net/zxdh: add logging implementation Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
` (5 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
Add device PCI init implementation
to obtain PCI capabilities and read configuration, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_ethdev.c | 43 +++++
drivers/net/zxdh/zxdh_ethdev.h | 18 ++-
drivers/net/zxdh/zxdh_pci.c | 278 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_pci.h | 138 ++++++++++++++++
drivers/net/zxdh/zxdh_queue.h | 109 +++++++++++++
drivers/net/zxdh/zxdh_rxtx.h | 55 +++++++
7 files changed, 641 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/zxdh/zxdh_pci.c
create mode 100644 drivers/net/zxdh/zxdh_pci.h
create mode 100644 drivers/net/zxdh/zxdh_queue.h
create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 932fb1c835..7db4e7bc71 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -15,4 +15,5 @@ endif
sources = files(
'zxdh_ethdev.c',
+ 'zxdh_pci.c',
)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index c911284423..5c747882a7 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -8,6 +8,40 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+
+struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+
+static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
+{
+ struct zxdh_hw *hw = eth_dev->data->dev_private;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int ret = 0;
+
+ ret = zxdh_read_pci_caps(pci_dev, hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed .", hw->port_id);
+ goto err;
+ }
+
+ zxdh_hw_internal[hw->port_id].zxdh_vtpci_ops = &zxdh_dev_pci_ops;
+ zxdh_pci_reset(hw);
+ zxdh_get_pci_dev_config(hw);
+
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]);
+
+ /* If host does not support both status and MSI-X then disable LSC */
+ if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE)
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+ else
+ eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
+
+ return 0;
+
+err:
+ PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id);
+ return ret;
+}
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
hw->is_pf = 1;
}
+ ret = zxdh_init_device(eth_dev);
+ if (ret < 0)
+ goto err_zxdh_init;
+
+ return ret;
+
+err_zxdh_init:
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
return ret;
}
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index a11e3624a9..a22ac15065 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -5,6 +5,7 @@
#ifndef ZXDH_ETHDEV_H
#define ZXDH_ETHDEV_H
+#include <rte_ether.h>
#include "ethdev_driver.h"
#ifdef __cplusplus
@@ -24,15 +25,30 @@ extern "C" {
#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS)
#define ZXDH_NUM_BARS 2
+#define ZXDH_RX_QUEUES_MAX 128U
+#define ZXDH_TX_QUEUES_MAX 128U
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
- uint64_t bar_addr[ZXDH_NUM_BARS];
+ struct zxdh_pci_common_cfg *common_cfg;
+ struct zxdh_net_config *dev_cfg;
+ uint64_t bar_addr[ZXDH_NUM_BARS];
+ uint64_t host_features;
+ uint64_t guest_features;
+ uint32_t max_queue_pairs;
uint32_t speed;
+ uint32_t notify_off_multiplier;
+ uint16_t *notify_base;
+ uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint8_t *isr;
+ uint8_t weak_barriers;
+ uint8_t use_msix;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+
uint8_t duplex;
uint8_t is_pf;
};
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
new file mode 100644
index 0000000000..a88d620f30
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <unistd.h>
+
+#include <rte_io.h>
+#include <rte_bus.h>
+#include <rte_pci.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_pci.h"
+#include "zxdh_logs.h"
+#include "zxdh_queue.h"
+
+#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \
+ (1ULL << ZXDH_NET_F_MRG_RXBUF | \
+ 1ULL << ZXDH_NET_F_STATUS | \
+ 1ULL << ZXDH_NET_F_MQ | \
+ 1ULL << ZXDH_F_ANY_LAYOUT | \
+ 1ULL << ZXDH_F_VERSION_1 | \
+ 1ULL << ZXDH_F_RING_PACKED | \
+ 1ULL << ZXDH_F_IN_ORDER | \
+ 1ULL << ZXDH_F_NOTIFICATION_DATA | \
+ 1ULL << ZXDH_NET_F_MAC)
+
+static void zxdh_read_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ void *dst,
+ int32_t length)
+{
+ int32_t i = 0;
+ uint8_t *p = NULL;
+ uint8_t old_gen = 0;
+ uint8_t new_gen = 0;
+
+ do {
+ old_gen = rte_read8(&hw->common_cfg->config_generation);
+
+ p = dst;
+ for (i = 0; i < length; i++)
+ *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i);
+
+ new_gen = rte_read8(&hw->common_cfg->config_generation);
+ } while (old_gen != new_gen);
+}
+
+static void zxdh_write_dev_config(struct zxdh_hw *hw,
+ size_t offset,
+ const void *src,
+ int32_t length)
+{
+ int32_t i = 0;
+ const uint8_t *p = src;
+
+ for (i = 0; i < length; i++)
+ rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
+}
+
+static uint8_t zxdh_get_status(struct zxdh_hw *hw)
+{
+ return rte_read8(&hw->common_cfg->device_status);
+}
+
+static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ rte_write8(status, &hw->common_cfg->device_status);
+}
+
+static uint64_t zxdh_get_features(struct zxdh_hw *hw)
+{
+ uint32_t features_lo = 0;
+ uint32_t features_hi = 0;
+
+ rte_write32(0, &hw->common_cfg->device_feature_select);
+ features_lo = rte_read32(&hw->common_cfg->device_feature);
+
+ rte_write32(1, &hw->common_cfg->device_feature_select);
+ features_hi = rte_read32(&hw->common_cfg->device_feature);
+
+ return ((uint64_t)features_hi << 32) | features_lo;
+}
+
+static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
+{
+ rte_write32(0, &hw->common_cfg->guest_feature_select);
+ rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature);
+ rte_write32(1, &hw->common_cfg->guest_feature_select);
+ rte_write32(features >> 32, &hw->common_cfg->guest_feature);
+}
+
+const struct zxdh_pci_ops zxdh_dev_pci_ops = {
+ .read_dev_cfg = zxdh_read_dev_config,
+ .write_dev_cfg = zxdh_write_dev_config,
+ .get_status = zxdh_get_status,
+ .set_status = zxdh_set_status,
+ .get_features = zxdh_get_features,
+ .set_features = zxdh_set_features,
+};
+
+uint64_t zxdh_pci_get_features(struct zxdh_hw *hw)
+{
+ return ZXDH_VTPCI_OPS(hw)->get_features(hw);
+}
+
+void zxdh_pci_reset(struct zxdh_hw *hw)
+{
+ PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id);
+ uint32_t retry = 0;
+
+ ZXDH_VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET);
+ /* Flush status write and wait device ready max 3 seconds. */
+ while (ZXDH_VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) {
+ ++retry;
+ rte_delay_ms(1);
+ }
+ PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
+}
+
+static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
+{
+ uint8_t bar = cap->bar;
+ uint32_t length = cap->length;
+ uint32_t offset = cap->offset;
+
+ if (bar >= PCI_MAX_RESOURCE) {
+ PMD_INIT_LOG(ERR, "invalid bar: %u", bar);
+ return NULL;
+ }
+ if (offset + length < offset) {
+ PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length);
+ return NULL;
+ }
+ if (offset + length > dev->mem_resource[bar].len) {
+ PMD_INIT_LOG(ERR, "invalid cap: overflows bar space");
+ return NULL;
+ }
+ uint8_t *base = dev->mem_resource[bar].addr;
+
+ if (base == NULL) {
+ PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar);
+ return NULL;
+ }
+ return base + offset;
+}
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw)
+{
+ struct zxdh_pci_cap cap;
+ uint8_t pos = 0;
+ int32_t ret = 0;
+
+ if (dev->mem_resource[0].addr == NULL) {
+ PMD_INIT_LOG(ERR, "bar0 base addr is NULL");
+ return -1;
+ }
+
+ hw->use_msix = zxdh_pci_msix_detect(dev);
+
+ pos = rte_pci_find_capability(dev, RTE_PCI_CAP_ID_VNDR);
+ while (pos) {
+ ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos);
+ if (ret != sizeof(cap)) {
+ PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret);
+ break;
+ }
+ if (cap.cap_vndr != RTE_PCI_CAP_ID_VNDR) {
+ PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x",
+ pos, cap.cap_vndr);
+ goto next;
+ }
+ PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u",
+ pos, cap.cfg_type, cap.bar, cap.offset, cap.length);
+
+ switch (cap.cfg_type) {
+ case ZXDH_PCI_CAP_COMMON_CFG:
+ hw->common_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_NOTIFY_CFG: {
+ ret = rte_pci_read_config(dev, &hw->notify_off_multiplier,
+ 4, pos + sizeof(cap));
+ if (ret != 4)
+ PMD_INIT_LOG(ERR,
+ "failed to read notify_off_multiplier, ret %d", ret);
+ else
+ hw->notify_base = get_cfg_addr(dev, &cap);
+ break;
+ }
+ case ZXDH_PCI_CAP_DEVICE_CFG:
+ hw->dev_cfg = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_ISR_CFG:
+ hw->isr = get_cfg_addr(dev, &cap);
+ break;
+ case ZXDH_PCI_CAP_PCI_CFG: {
+ hw->pcie_id = *(uint16_t *)&cap.padding[1];
+ PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id);
+
+ if ((hw->pcie_id >> 11) & 0x1) /* PF */ {
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u",
+ hw->pcie_id >> 12, (hw->pcie_id >> 8) & 0x7);
+ } else { /* VF */
+ PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u",
+ hw->pcie_id >> 12,
+ (hw->pcie_id >> 8) & 0x7,
+ hw->pcie_id & 0xff);
+ }
+ break;
+ }
+ }
+next:
+ pos = cap.cap_next;
+ }
+ if (hw->common_cfg == NULL || hw->notify_base == NULL ||
+ hw->dev_cfg == NULL || hw->isr == NULL) {
+ PMD_INIT_LOG(ERR, "no zxdh pci device found.");
+ return -1;
+ }
+ return 0;
+}
+
+void zxdh_pci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length)
+{
+ ZXDH_VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length);
+}
+
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
+{
+ uint64_t guest_features = 0;
+ uint64_t nego_features = 0;
+ uint32_t max_queue_pairs = 0;
+
+ hw->host_features = zxdh_pci_get_features(hw);
+
+ guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES;
+ nego_features = guest_features & hw->host_features;
+
+ hw->guest_features = nego_features;
+
+ if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) {
+ zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac),
+ &hw->mac_addr, RTE_ETHER_ADDR_LEN);
+ } else {
+ rte_eth_random_addr(&hw->mac_addr[0]);
+ }
+
+ zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs),
+ &max_queue_pairs, sizeof(max_queue_pairs));
+
+ if (max_queue_pairs == 0)
+ hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX;
+ else
+ hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs);
+ PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs);
+
+ return 0;
+}
+
+enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev)
+{
+ uint16_t flags = 0;
+ uint8_t pos = 0;
+ int16_t ret = 0;
+
+ pos = rte_pci_find_capability(dev, RTE_PCI_CAP_ID_MSIX);
+
+ if (pos > 0) {
+ ret = rte_pci_read_config(dev, &flags, 2, pos + RTE_PCI_MSIX_FLAGS);
+ if (ret == 2 && flags & RTE_PCI_MSIX_FLAGS_ENABLE)
+ return ZXDH_MSIX_ENABLED;
+ else
+ return ZXDH_MSIX_DISABLED;
+ }
+ return ZXDH_MSIX_NONE;
+}
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
new file mode 100644
index 0000000000..ff656f28e6
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -0,0 +1,138 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_PCI_H
+#define ZXDH_PCI_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#include <bus_pci_driver.h>
+
+#include "zxdh_ethdev.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+enum zxdh_msix_status {
+ ZXDH_MSIX_NONE = 0,
+ ZXDH_MSIX_DISABLED = 1,
+ ZXDH_MSIX_ENABLED = 2
+};
+
+#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
+#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
+#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
+#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */
+#define ZXDH_F_VERSION_1 32
+#define ZXDH_F_RING_PACKED 34
+#define ZXDH_F_IN_ORDER 35
+#define ZXDH_F_NOTIFICATION_DATA 38
+
+#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */
+#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */
+#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */
+#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */
+#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */
+
+/* Status byte for guest to report progress. */
+#define ZXDH_CONFIG_STATUS_RESET 0x00
+#define ZXDH_CONFIG_STATUS_ACK 0x01
+#define ZXDH_CONFIG_STATUS_DRIVER 0x02
+#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04
+#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08
+#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
+#define ZXDH_CONFIG_STATUS_FAILED 0x80
+
+struct zxdh_net_config {
+ /* The config defining mac address (if ZXDH_NET_F_MAC) */
+ uint8_t mac[RTE_ETHER_ADDR_LEN];
+ /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */
+ uint16_t status;
+ uint16_t max_virtqueue_pairs;
+ uint16_t mtu;
+ uint32_t speed;
+ uint8_t duplex;
+} __rte_packed;
+
+/* This is the PCI capability header: */
+struct zxdh_pci_cap {
+ uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */
+ uint8_t cap_next; /* Generic PCI field: next ptr. */
+ uint8_t cap_len; /* Generic PCI field: capability length */
+ uint8_t cfg_type; /* Identifies the structure. */
+ uint8_t bar; /* Where to find it. */
+ uint8_t padding[3]; /* Pad to full dword. */
+ uint32_t offset; /* Offset within bar. */
+ uint32_t length; /* Length of the structure, in bytes. */
+};
+
+/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */
+struct zxdh_pci_common_cfg {
+ /* About the whole device. */
+ uint32_t device_feature_select; /* read-write */
+ uint32_t device_feature; /* read-only */
+ uint32_t guest_feature_select; /* read-write */
+ uint32_t guest_feature; /* read-write */
+ uint16_t msix_config; /* read-write */
+ uint16_t num_queues; /* read-only */
+ uint8_t device_status; /* read-write */
+ uint8_t config_generation; /* read-only */
+
+ /* About a specific virtqueue. */
+ uint16_t queue_select; /* read-write */
+ uint16_t queue_size; /* read-write, power of 2. */
+ uint16_t queue_msix_vector; /* read-write */
+ uint16_t queue_enable; /* read-write */
+ uint16_t queue_notify_off; /* read-only */
+ uint32_t queue_desc_lo; /* read-write */
+ uint32_t queue_desc_hi; /* read-write */
+ uint32_t queue_avail_lo; /* read-write */
+ uint32_t queue_avail_hi; /* read-write */
+ uint32_t queue_used_lo; /* read-write */
+ uint32_t queue_used_hi; /* read-write */
+};
+
+static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
+{
+ return (hw->guest_features & (1ULL << bit)) != 0;
+}
+
+struct zxdh_pci_ops {
+ void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
+ void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
+
+ uint8_t (*get_status)(struct zxdh_hw *hw);
+ void (*set_status)(struct zxdh_hw *hw, uint8_t status);
+
+ uint64_t (*get_features)(struct zxdh_hw *hw);
+ void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+};
+
+struct zxdh_hw_internal {
+ const struct zxdh_pci_ops *zxdh_vtpci_ops;
+};
+
+#define ZXDH_VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].zxdh_vtpci_ops)
+
+extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+extern const struct zxdh_pci_ops zxdh_dev_pci_ops;
+
+void zxdh_pci_reset(struct zxdh_hw *hw);
+void zxdh_pci_read_dev_config(struct zxdh_hw *hw, size_t offset,
+ void *dst, int32_t length);
+
+int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw);
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
+
+uint64_t zxdh_pci_get_features(struct zxdh_hw *hw);
+enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_PCI_H */
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
new file mode 100644
index 0000000000..0b6f48adf9
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_QUEUE_H
+#define ZXDH_QUEUE_H
+
+#include <stdint.h>
+
+#include <rte_common.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_rxtx.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** ring descriptors: 16 bytes.
+ * These can chain together via "next".
+ **/
+struct zxdh_vring_desc {
+ uint64_t addr; /* Address (guest-physical). */
+ uint32_t len; /* Length. */
+ uint16_t flags; /* The flags as indicated above. */
+ uint16_t next; /* We chain unused descriptors via this. */
+} __rte_packed;
+
+struct zxdh_vring_avail {
+ uint16_t flags;
+ uint16_t idx;
+ uint16_t ring[];
+} __rte_packed;
+
+struct zxdh_vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+} __rte_packed;
+
+struct zxdh_vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+} __rte_packed;
+
+struct zxdh_vring_packed {
+ uint32_t num;
+ struct zxdh_vring_packed_desc *desc;
+ struct zxdh_vring_packed_desc_event *driver;
+ struct zxdh_vring_packed_desc_event *device;
+} __rte_packed;
+
+struct zxdh_vq_desc_extra {
+ void *cookie;
+ uint16_t ndescs;
+ uint16_t next;
+} __rte_packed;
+
+struct zxdh_virtqueue {
+ struct zxdh_hw *hw; /**< zxdh_hw structure pointer. */
+ struct {
+ /**< vring keeping descs and events */
+ struct zxdh_vring_packed ring;
+ uint8_t used_wrap_counter;
+ uint8_t rsv;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ uint16_t rsv1;
+ } __rte_packed vq_packed;
+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
+ uint16_t vq_nentries; /**< vring desc numbers */
+ uint16_t vq_free_cnt; /**< num of desc available */
+ uint16_t vq_avail_idx; /**< sync until needed */
+ uint16_t vq_free_thresh; /**< free threshold */
+ uint16_t rsv2;
+
+ void *vq_ring_virt_mem; /**< linear address of vring*/
+ uint32_t vq_ring_size;
+
+ union {
+ struct zxdh_virtnet_rx rxq;
+ struct zxdh_virtnet_tx txq;
+ };
+
+ /** < physical address of vring,
+ * or virtual address
+ **/
+ rte_iova_t vq_ring_mem;
+
+ /**
+ * Head of the free chain in the descriptor table. If
+ * there are no free descriptors, this will be set to
+ * VQ_RING_DESC_CHAIN_END.
+ **/
+ uint16_t vq_desc_head_idx;
+ uint16_t vq_desc_tail_idx;
+ uint16_t vq_queue_index; /**< PCI queue index */
+ uint16_t offset; /**< relative offset to obtain addr in mbuf */
+ uint16_t *notify_addr;
+ struct rte_mbuf **sw_ring; /**< RX software ring. */
+ struct zxdh_vq_desc_extra vq_descx[];
+} __rte_packed;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_QUEUE_H */
diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h
new file mode 100644
index 0000000000..7d4b5481ec
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_rxtx.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_RXTX_H
+#define ZXDH_RXTX_H
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_mbuf_core.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct zxdh_virtnet_stats {
+ uint64_t packets;
+ uint64_t bytes;
+ uint64_t errors;
+ uint64_t multicast;
+ uint64_t broadcast;
+ uint64_t truncated_err;
+ uint64_t size_bins[8];
+};
+
+struct zxdh_virtnet_rx {
+ struct zxdh_virtqueue *vq;
+
+ /* dummy mbuf, for wraparound when processing RX ring. */
+ struct rte_mbuf fake_mbuf;
+
+ uint64_t mbuf_initializer; /* value to init mbufs. */
+ struct rte_mempool *mpool; /* mempool for mbuf allocation */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct zxdh_virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate RX ring. */
+} __rte_packed;
+
+struct zxdh_virtnet_tx {
+ struct zxdh_virtqueue *vq;
+ const struct rte_memzone *zxdh_net_hdr_mz; /* memzone to populate hdr. */
+ rte_iova_t zxdh_net_hdr_mem; /* hdr for each xmit packet */
+ uint16_t queue_id; /* DPDK queue index. */
+ uint16_t port_id; /* Device port identifier. */
+ struct zxdh_virtnet_stats stats;
+ const struct rte_memzone *mz; /* mem zone to populate TX ring. */
+} __rte_packed;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_RXTX_H */
--
2.27.0
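Two details worth noting in the patch above: zxdh_read_dev_config()
re-reads config_generation around the byte copy so a concurrent device
update cannot produce a torn snapshot, and the zxdh_pci_common_cfg window
uses a select-then-access pattern for per-queue fields. A hedged sketch of
that second pattern, using only the struct and rte_io accessors already
shown (the helper name is illustrative, not part of the patch):

#include <rte_io.h>

#include "zxdh_pci.h"

static uint16_t queue_size_get(struct zxdh_hw *hw, uint16_t qidx)
{
	/* select the virtqueue, then read its size through the same window */
	rte_write16(qidx, &hw->common_cfg->queue_select);
	return rte_read16(&hw->common_cfg->queue_size);
}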
* [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (2 preceding siblings ...)
2024-11-01 6:21 ` [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
` (4 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
Add msg channel and hwlock init implementation.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_ethdev.c | 15 +++
drivers/net/zxdh/zxdh_ethdev.h | 1 +
drivers/net/zxdh/zxdh_msg.c | 161 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 67 ++++++++++++++
5 files changed, 245 insertions(+)
create mode 100644 drivers/net/zxdh/zxdh_msg.c
create mode 100644 drivers/net/zxdh/zxdh_msg.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 7db4e7bc71..2e0c8fddae 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -16,4 +16,5 @@ endif
sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
+ 'zxdh_msg.c',
)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 5c747882a7..da454cdff3 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -9,6 +9,7 @@
#include "zxdh_ethdev.h"
#include "zxdh_logs.h"
#include "zxdh_pci.h"
+#include "zxdh_msg.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
@@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret < 0)
goto err_zxdh_init;
+ ret = zxdh_msg_chan_init();
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "Failed to init bar msg chan");
+ goto err_zxdh_init;
+ }
+ hw->msg_chan_init = 1;
+
+ ret = zxdh_msg_chan_hwlock_init(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
+ zxdh_bar_msg_chan_exit();
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
return ret;
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index a22ac15065..20ead56e44 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -51,6 +51,7 @@ struct zxdh_hw {
uint8_t duplex;
uint8_t is_pf;
+ uint8_t msg_chan_init;
};
#ifdef __cplusplus
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
new file mode 100644
index 0000000000..9dcf99f1f7
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_spinlock.h>
+#include <rte_cycles.h>
+#include <inttypes.h>
+#include <rte_malloc.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+
+#define ZXDH_REPS_INFO_FLAG_USABLE 0x00
+#define ZXDH_BAR_SEQID_NUM_MAX 256
+
+#define ZXDH_PCIEID_IS_PF_MASK (0x0800)
+#define ZXDH_PCIEID_PF_IDX_MASK (0x0700)
+#define ZXDH_PCIEID_VF_IDX_MASK (0x00ff)
+#define ZXDH_PCIEID_EP_IDX_MASK (0x7000)
+/* PCIEID bit field offset */
+#define ZXDH_PCIEID_PF_IDX_OFFSET (8)
+#define ZXDH_PCIEID_EP_IDX_OFFSET (12)
+
+#define ZXDH_MULTIPLY_BY_8(x) ((x) << 3)
+#define ZXDH_MULTIPLY_BY_32(x) ((x) << 5)
+#define ZXDH_MULTIPLY_BY_256(x) ((x) << 8)
+
+#define ZXDH_MAX_EP_NUM (4)
+#define ZXDH_MAX_HARD_SPINLOCK_NUM (511)
+
+#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000)
+#define ZXDH_FW_SHRD_OFFSET (0x5000)
+#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800)
+#define ZXDH_HW_LABEL_OFFSET \
+ (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT)
+
+struct zxdh_dev_stat {
+ bool is_mpf_scanned;
+ bool is_res_init;
+ int16_t dev_cnt; /* probe cnt */
+};
+struct zxdh_dev_stat g_dev_stat = {0};
+
+struct zxdh_seqid_item {
+ void *reps_addr;
+ uint16_t id;
+ uint16_t buffer_len;
+ uint16_t flag;
+};
+
+struct zxdh_seqid_ring {
+ uint16_t cur_id;
+ rte_spinlock_t lock;
+ struct zxdh_seqid_item reps_info_tbl[ZXDH_BAR_SEQID_NUM_MAX];
+};
+struct zxdh_seqid_ring g_seqid_ring = {0};
+
+static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
+{
+ uint16_t lock_id = 0;
+ uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET;
+ uint16_t ep_idx = (src_pcieid & ZXDH_PCIEID_EP_IDX_MASK) >> ZXDH_PCIEID_EP_IDX_OFFSET;
+
+ switch (dst) {
+ /* msg to risc */
+ case ZXDH_MSG_CHAN_END_RISC:
+ lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx;
+ break;
+ /* msg to pf/vf */
+ case ZXDH_MSG_CHAN_END_VF:
+ case ZXDH_MSG_CHAN_END_PF:
+ lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx +
+ ZXDH_MULTIPLY_BY_8(1 + ZXDH_MAX_EP_NUM);
+ break;
+ default:
+ lock_id = 0;
+ break;
+ }
+ if (lock_id >= ZXDH_MAX_HARD_SPINLOCK_NUM)
+ lock_id = 0;
+
+ return lock_id;
+}
+
+static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value)
+{
+ *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value;
+}
+
+static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data)
+{
+ *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
+}
+
+static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
+{
+ label_write((uint64_t)label_addr, virt_lock_id, 0);
+ spinlock_write(virt_addr, virt_lock_id, 0);
+ return 0;
+}
+
+/**
+ * Fun: PF init hard_spinlock addr
+ */
+static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
+{
+ int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC);
+
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + ZXDH_HW_LABEL_OFFSET);
+ lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF);
+ zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
+ bar_base_addr + ZXDH_HW_LABEL_OFFSET);
+ return 0;
+}
+
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->is_pf)
+ return 0;
+ return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX]));
+}
+
+static rte_spinlock_t chan_lock;
+int zxdh_msg_chan_init(void)
+{
+ uint16_t seq_id = 0;
+
+ g_dev_stat.dev_cnt++;
+ if (g_dev_stat.is_res_init)
+ return ZXDH_BAR_MSG_OK;
+
+ rte_spinlock_init(&chan_lock);
+ g_seqid_ring.cur_id = 0;
+ rte_spinlock_init(&g_seqid_ring.lock);
+
+ for (seq_id = 0; seq_id < ZXDH_BAR_SEQID_NUM_MAX; seq_id++) {
+ struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[seq_id];
+
+ reps_info->id = seq_id;
+ reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE;
+ }
+ g_dev_stat.is_res_init = true;
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_bar_msg_chan_exit(void)
+{
+ if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0))
+ return ZXDH_BAR_MSG_OK;
+
+ g_dev_stat.is_res_init = false;
+ return ZXDH_BAR_MSG_OK;
+}
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
new file mode 100644
index 0000000000..a0b46c900a
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_MSG_H
+#define ZXDH_MSG_H
+
+#include <stdint.h>
+
+#include <ethdev_driver.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define ZXDH_BAR0_INDEX 0
+
+enum ZXDH_DRIVER_TYPE {
+ ZXDH_MSG_CHAN_END_MPF = 0,
+ ZXDH_MSG_CHAN_END_PF,
+ ZXDH_MSG_CHAN_END_VF,
+ ZXDH_MSG_CHAN_END_RISC,
+};
+
+enum ZXDH_BAR_MSG_RTN {
+ ZXDH_BAR_MSG_OK = 0,
+ ZXDH_BAR_MSG_ERR_MSGID,
+ ZXDH_BAR_MSG_ERR_NULL,
+ ZXDH_BAR_MSG_ERR_TYPE, /* Message type exception */
+ ZXDH_BAR_MSG_ERR_MODULE, /* Module ID exception */
+ ZXDH_BAR_MSG_ERR_BODY_NULL, /* Message body exception */
+ ZXDH_BAR_MSG_ERR_LEN, /* Message length exception */
+ ZXDH_BAR_MSG_ERR_TIME_OUT, /* Message sending timed out */
+ ZXDH_BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions */
+ ZXDH_BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer */
+ ZXDH_BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration */
+ ZXDH_BAR_MSG_ERR_UNGISTER, /* Repeated deregistration */
+ /**
+ * The sending interface parameter boundary structure pointer is empty
+ */
+ ZXDH_BAR_MSG_ERR_NULL_PARA,
+ ZXDH_BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short */
+ /**
+ * Unable to find the corresponding message processing function for this module
+ */
+ ZXDH_BAR_MSG_ERR_MODULE_NOEXIST,
+ /**
+ * The virtual address in the parameters passed in by the sending interface is empty
+ */
+ ZXDH_BAR_MSG_ERR_VIRTADDR_NULL,
+ ZXDH_BAR_MSG_ERR_REPLY, /* sync msg resp_error */
+ ZXDH_BAR_MSG_ERR_MPF_NOT_SCANNED,
+ ZXDH_BAR_MSG_ERR_KERNEL_READY,
+ ZXDH_BAR_MSG_ERR_USR_RET_ERR,
+ ZXDH_BAR_MSG_ERR_ERR_PCIEID,
+ ZXDH_BAR_MSG_ERR_SOCKET, /* netlink socket err */
+};
+
+int zxdh_msg_chan_init(void);
+int zxdh_bar_msg_chan_exit(void);
+int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_MSG_H */
--
2.27.0
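To make the hard-lock mapping above concrete: the lock id toward the RISC
is ep_idx * 8 + pf_idx, and the PF/VF-facing locks sit
8 * (1 + ZXDH_MAX_EP_NUM) entries above that. A small worked example with
an illustrative pcie_id, using the masks and shifts defined in zxdh_msg.c:

#include <stdint.h>
#include <assert.h>

int main(void)
{
	uint16_t pcieid = 0x2b05;		/* EP 2, PF 3, VF 5 (illustrative) */
	uint16_t ep = (pcieid & 0x7000) >> 12;	/* ZXDH_PCIEID_EP_IDX_* */
	uint16_t pf = (pcieid & 0x0700) >> 8;	/* ZXDH_PCIEID_PF_IDX_* */

	assert(ep * 8 + pf == 19);		/* lock id toward the RISC */
	assert(ep * 8 + pf + 8 * (1 + 4) == 59);	/* toward PF/VF, ZXDH_MAX_EP_NUM = 4 */
	return 0;
}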
* [PATCH v9 5/9] net/zxdh: add msg chan enable implementation
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (3 preceding siblings ...)
2024-11-01 6:21 ` [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
` (3 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
Add msg chan enable implementation to support
sending messages to the backend (device side) to retrieve info.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 6 +
drivers/net/zxdh/zxdh_ethdev.h | 12 +
drivers/net/zxdh/zxdh_msg.c | 647 ++++++++++++++++++++++++++++++++-
drivers/net/zxdh/zxdh_msg.h | 131 ++++++-
4 files changed, 790 insertions(+), 6 deletions(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index da454cdff3..21255b2190 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_msg_chan_enable(eth_dev);
+ if (ret != 0) {
+ PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret);
+ goto err_zxdh_init;
+ }
+
return ret;
err_zxdh_init:
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 20ead56e44..7434cc15d7 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -28,10 +28,22 @@ extern "C" {
#define ZXDH_RX_QUEUES_MAX 128U
#define ZXDH_TX_QUEUES_MAX 128U
+union zxdh_virport_num {
+ uint16_t vport;
+ struct {
+ uint16_t vfid:8;
+ uint16_t pfid:3;
+ uint16_t vf_flag:1;
+ uint16_t epid:3;
+ uint16_t direct_flag:1;
+ };
+};
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ union zxdh_virport_num vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
uint64_t host_features;
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 9dcf99f1f7..4f6607fcb0 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -17,6 +17,7 @@
#define ZXDH_REPS_INFO_FLAG_USABLE 0x00
#define ZXDH_BAR_SEQID_NUM_MAX 256
+#define ZXDH_REPS_INFO_FLAG_USED 0xa0
#define ZXDH_PCIEID_IS_PF_MASK (0x0800)
#define ZXDH_PCIEID_PF_IDX_MASK (0x0700)
@@ -33,15 +34,88 @@
#define ZXDH_MAX_EP_NUM (4)
#define ZXDH_MAX_HARD_SPINLOCK_NUM (511)
+#define ZXDH_LOCK_PRIMARY_ID_MASK (0x8000)
+/* bar offset */
+#define ZXDH_BAR0_CHAN_RISC_OFFSET (0x2000)
+#define ZXDH_BAR0_CHAN_PFVF_OFFSET (0x3000)
#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000)
#define ZXDH_FW_SHRD_OFFSET (0x5000)
#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800)
#define ZXDH_HW_LABEL_OFFSET \
(ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT)
+#define ZXDH_CHAN_RISC_SPINLOCK_OFFSET \
+ (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET)
+#define ZXDH_CHAN_PFVF_SPINLOCK_OFFSET \
+ (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET)
+#define ZXDH_CHAN_RISC_LABEL_OFFSET \
+ (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET)
+#define ZXDH_CHAN_PFVF_LABEL_OFFSET \
+ (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET)
+
+#define ZXDH_REPS_HEADER_LEN_OFFSET 1
+#define ZXDH_REPS_HEADER_PAYLOAD_OFFSET 4
+#define ZXDH_REPS_HEADER_REPLYED 0xff
+
+#define ZXDH_BAR_MSG_CHAN_USABLE 0
+#define ZXDH_BAR_MSG_CHAN_USED 1
+
+#define ZXDH_BAR_MSG_POL_MASK (0x10)
+#define ZXDH_BAR_MSG_POL_OFFSET (4)
+
+#define ZXDH_BAR_ALIGN_WORD_MASK 0xfffffffc
+#define ZXDH_BAR_MSG_VALID_MASK 1
+#define ZXDH_BAR_MSG_VALID_OFFSET 0
+
+#define ZXDH_BAR_PF_NUM 7
+#define ZXDH_BAR_VF_NUM 256
+#define ZXDH_BAR_INDEX_PF_TO_VF 0
+#define ZXDH_BAR_INDEX_MPF_TO_MPF 0xff
+#define ZXDH_BAR_INDEX_MPF_TO_PFVF 0
+#define ZXDH_BAR_INDEX_PFVF_TO_MPF 0
+
+#define ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES (1000)
+#define ZXDH_SPINLOCK_POLLING_SPAN_US (100)
+
+#define ZXDH_BAR_MSG_SRC_NUM 3
+#define ZXDH_BAR_MSG_SRC_MPF 0
+#define ZXDH_BAR_MSG_SRC_PF 1
+#define ZXDH_BAR_MSG_SRC_VF 2
+#define ZXDH_BAR_MSG_SRC_ERR 0xff
+#define ZXDH_BAR_MSG_DST_NUM 3
+#define ZXDH_BAR_MSG_DST_RISC 0
+#define ZXDH_BAR_MSG_DST_MPF 2
+#define ZXDH_BAR_MSG_DST_PFVF 1
+#define ZXDH_BAR_MSG_DST_ERR 0xff
+
+#define ZXDH_LOCK_TYPE_HARD (1)
+#define ZXDH_LOCK_TYPE_SOFT (0)
+#define ZXDH_BAR_INDEX_TO_RISC 0
+
+#define ZXDH_BAR_CHAN_INDEX_SEND 0
+#define ZXDH_BAR_CHAN_INDEX_RECV 1
+
+uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
+ {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND},
+ {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV},
+ {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV, ZXDH_BAR_CHAN_INDEX_RECV}
+};
+
+uint8_t chan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
+ {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_MPF_TO_PFVF, ZXDH_BAR_INDEX_MPF_TO_MPF},
+ {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF},
+ {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF}
+};
+
+uint8_t lock_type_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
+ {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD},
+ {ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_HARD},
+ {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD}
+};
+
struct zxdh_dev_stat {
- bool is_mpf_scanned;
- bool is_res_init;
+ uint8_t is_mpf_scanned;
+ uint8_t is_res_init;
int16_t dev_cnt; /* probe cnt */
};
struct zxdh_dev_stat g_dev_stat = {0};
@@ -60,7 +134,9 @@ struct zxdh_seqid_ring {
};
struct zxdh_seqid_ring g_seqid_ring = {0};
-static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
+static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL];
+
+static uint16_t zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
{
uint16_t lock_id = 0;
uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET;
@@ -97,6 +173,33 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da
*(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
}
+static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id)
+{
+ return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id);
+}
+
+static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr,
+ uint64_t label_addr, uint16_t primary_id)
+{
+ uint32_t lock_rd_cnt = 0;
+
+ do {
+ /* read to lock */
+ uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id);
+
+ if (spl_val == 0) {
+ label_write((uint64_t)label_addr, virt_lock_id, primary_id);
+ break;
+ }
+ rte_delay_us_block(ZXDH_SPINLOCK_POLLING_SPAN_US);
+ lock_rd_cnt++;
+ } while (lock_rd_cnt < ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES);
+ if (lock_rd_cnt >= ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES)
+ return -1;
+
+ return 0;
+}
+
static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
{
label_write((uint64_t)label_addr, virt_lock_id, 0);
@@ -109,11 +212,11 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u
*/
static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr)
{
- int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC);
+ int lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC);
zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
bar_base_addr + ZXDH_HW_LABEL_OFFSET);
- lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF);
+ lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF);
zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET,
bar_base_addr + ZXDH_HW_LABEL_OFFSET);
return 0;
@@ -159,3 +262,537 @@ int zxdh_bar_msg_chan_exit(void)
g_dev_stat.is_res_init = false;
return ZXDH_BAR_MSG_OK;
}
+
+static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid)
+{
+ struct zxdh_seqid_item *seqid_reps_info = NULL;
+
+ rte_spinlock_lock(&g_seqid_ring.lock);
+ uint16_t g_id = g_seqid_ring.cur_id;
+ uint16_t count = 0;
+ int rc = 0;
+
+ do {
+ count++;
+ ++g_id;
+ g_id %= ZXDH_BAR_SEQID_NUM_MAX;
+ seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id];
+ } while ((seqid_reps_info->flag != ZXDH_REPS_INFO_FLAG_USABLE) &&
+ (count < ZXDH_BAR_SEQID_NUM_MAX));
+
+ if (count >= ZXDH_BAR_SEQID_NUM_MAX) {
+ rc = -1;
+ goto out;
+ }
+ seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USED;
+ g_seqid_ring.cur_id = g_id;
+ *msgid = g_id;
+ rc = ZXDH_BAR_MSG_OK;
+
+out:
+ rte_spinlock_unlock(&g_seqid_ring.lock);
+ return rc;
+}
+
+static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id)
+{
+ int ret = zxdh_bar_chan_msgid_allocate(msg_id);
+
+ if (ret != ZXDH_BAR_MSG_OK)
+ return ZXDH_BAR_MSG_ERR_MSGID;
+
+ PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id);
+ struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id];
+
+ reps_info->reps_addr = result->recv_buffer;
+ reps_info->buffer_len = result->buffer_len;
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src)
+{
+ uint8_t src_index = 0;
+
+ switch (src) {
+ case ZXDH_MSG_CHAN_END_MPF:
+ src_index = ZXDH_BAR_MSG_SRC_MPF;
+ break;
+ case ZXDH_MSG_CHAN_END_PF:
+ src_index = ZXDH_BAR_MSG_SRC_PF;
+ break;
+ case ZXDH_MSG_CHAN_END_VF:
+ src_index = ZXDH_BAR_MSG_SRC_VF;
+ break;
+ default:
+ src_index = ZXDH_BAR_MSG_SRC_ERR;
+ break;
+ }
+ return src_index;
+}
+
+static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst)
+{
+ uint8_t dst_index = 0;
+
+ switch (dst) {
+ case ZXDH_MSG_CHAN_END_MPF:
+ dst_index = ZXDH_BAR_MSG_DST_MPF;
+ break;
+ case ZXDH_MSG_CHAN_END_PF:
+ dst_index = ZXDH_BAR_MSG_DST_PFVF;
+ break;
+ case ZXDH_MSG_CHAN_END_VF:
+ dst_index = ZXDH_BAR_MSG_DST_PFVF;
+ break;
+ case ZXDH_MSG_CHAN_END_RISC:
+ dst_index = ZXDH_BAR_MSG_DST_RISC;
+ break;
+ default:
+ dst_index = ZXDH_BAR_MSG_DST_ERR;
+ break;
+ }
+ return dst_index;
+}
+
+static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result)
+{
+ uint8_t src_index = 0;
+ uint8_t dst_index = 0;
+
+ if (in == NULL || result == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null para.");
+ return ZXDH_BAR_MSG_ERR_NULL_PARA;
+ }
+ src_index = zxdh_bar_msg_src_index_trans(in->src);
+ dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+
+ if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist.");
+ return ZXDH_BAR_MSG_ERR_TYPE;
+ }
+ if (in->module_id >= ZXDH_BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id);
+ return ZXDH_BAR_MSG_ERR_MODULE;
+ }
+ if (in->payload_addr == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: null message.");
+ return ZXDH_BAR_MSG_ERR_BODY_NULL;
+ }
+ if (in->payload_len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len);
+ return ZXDH_BAR_MSG_ERR_LEN;
+ }
+ if (in->virt_addr == 0 || result->recv_buffer == NULL) {
+ PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL.");
+ return ZXDH_BAR_MSG_ERR_VIRTADDR_NULL;
+ }
+ if (result->buffer_len < ZXDH_REPS_HEADER_PAYLOAD_OFFSET)
+ PMD_MSG_LOG(ERR, "recv buffer len is short than minimal 4 bytes");
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id)
+{
+ return virt_addr + (2 * chan_id + subchan_id) * ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL;
+}
+
+static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst);
+ uint16_t chan_id = chan_id_tbl[src_index][dst_index];
+ uint16_t subchan_id = subchan_id_tbl[src_index][dst_index];
+
+ *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id);
+ return ZXDH_BAR_MSG_OK;
+}
+
+static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid);
+ if (dst == ZXDH_MSG_CHAN_END_RISC)
+ ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET,
+ src_pcieid | ZXDH_LOCK_PRIMARY_ID_MASK);
+ else
+ ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET,
+ src_pcieid | ZXDH_LOCK_PRIMARY_ID_MASK);
+
+ return ret;
+}
+
+static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+ uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst);
+
+ PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid);
+ if (dst == ZXDH_MSG_CHAN_END_RISC)
+ zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET);
+ else
+ zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET,
+ virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET);
+}
+
+static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ int ret = 0;
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist.");
+ return ZXDH_BAR_MSG_ERR_TYPE;
+ }
+
+ ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr);
+ if (ret != 0)
+ PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid);
+
+ return ret;
+}
+
+static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr)
+{
+ uint8_t src_index = zxdh_bar_msg_src_index_trans(src);
+ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst);
+
+ if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) {
+ PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist.");
+ return ZXDH_BAR_MSG_ERR_TYPE;
+ }
+
+ zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr);
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static void zxdh_bar_chan_msgid_free(uint16_t msg_id)
+{
+ struct zxdh_seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ rte_spinlock_lock(&g_seqid_ring.lock);
+ seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE;
+ PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id);
+ rte_spinlock_unlock(&g_seqid_ring.lock);
+}
+
+static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data)
+{
+ uint32_t align_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "align_offset exceeds channel size!");
+ return -1;
+ }
+ *(uint32_t *)(subchan_addr + align_offset) = data;
+ return 0;
+}
+
+static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata)
+{
+ uint32_t align_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK);
+
+ if (unlikely(align_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) {
+ PMD_MSG_LOG(ERR, "align_offset exceeds channel size!");
+ return -1;
+ }
+ *pdata = *(uint32_t *)(subchan_addr + align_offset);
+ return 0;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr,
+ struct zxdh_bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx));
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_get(uint64_t subchan_addr,
+ struct zxdh_bar_msg_header *msg_header)
+{
+ uint32_t *data = (uint32_t *)msg_header;
+ uint16_t idx;
+
+ for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++)
+ zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx);
+
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_write(subchan_addr, 4 * ix +
+ ZXDH_BAR_MSG_PLAYLOAD_OFFSET, *(data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ for (ix = 0; ix < remain; ix++)
+ remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix);
+
+ zxdh_bar_chan_reg_write(subchan_addr, 4 * count +
+ ZXDH_BAR_MSG_PLAYLOAD_OFFSET, remain_data);
+ }
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+ uint32_t *data = (uint32_t *)msg;
+ uint32_t count = (len >> 2);
+ uint32_t ix;
+
+ for (ix = 0; ix < count; ix++)
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * ix +
+ ZXDH_BAR_MSG_PLAYLOAD_OFFSET, (data + ix));
+
+ uint32_t remain = (len & 0x3);
+
+ if (remain) {
+ uint32_t remain_data = 0;
+
+ zxdh_bar_chan_reg_read(subchan_addr, 4 * count +
+ ZXDH_BAR_MSG_PLAYLOAD_OFFSET, &remain_data);
+ for (ix = 0; ix < remain; ix++)
+ *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix);
+ }
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data);
+ data &= (~ZXDH_BAR_MSG_VALID_MASK);
+ data |= (uint32_t)valid_label;
+ zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data);
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr,
+ void *payload_addr,
+ uint16_t payload_len,
+ struct zxdh_bar_msg_header *msg_header)
+{
+ uint16_t ret = 0;
+ ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header);
+
+ ret = zxdh_bar_chan_msg_header_get(subchan_addr,
+ (struct zxdh_bar_msg_header *)tmp_msg_header);
+
+ ret = zxdh_bar_chan_msg_payload_set(subchan_addr,
+ (uint8_t *)(payload_addr), payload_len);
+
+ ret = zxdh_bar_chan_msg_payload_get(subchan_addr,
+ tmp_msg_header, payload_len);
+
+ ret = zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USED);
+ return ret;
+}
+
+static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data);
+ if (ZXDH_BAR_MSG_CHAN_USABLE == (data & ZXDH_BAR_MSG_VALID_MASK))
+ return ZXDH_BAR_MSG_CHAN_USABLE;
+
+ return ZXDH_BAR_MSG_CHAN_USED;
+}
+
+static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label)
+{
+ uint32_t data;
+
+ zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data);
+ data &= (~(uint32_t)ZXDH_BAR_MSG_POL_MASK);
+ data |= ((uint32_t)label << ZXDH_BAR_MSG_POL_OFFSET);
+ zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data);
+ return ZXDH_BAR_MSG_OK;
+}
+
+static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr,
+ uint64_t recv_buffer, uint16_t buffer_len)
+{
+ struct zxdh_bar_msg_header msg_header = {0};
+ uint16_t msg_id = 0;
+ uint16_t msg_len = 0;
+
+ zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header);
+ msg_id = msg_header.msg_id;
+ struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+ if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id);
+ return ZXDH_BAR_MSG_ERR_REPLY;
+ }
+ msg_len = msg_header.len;
+
+ if (msg_len > buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u",
+ buffer_len, msg_len + 4);
+ return ZXDH_BAR_MSG_ERR_REPSBUFF_LEN;
+ }
+ uint8_t *recv_msg = (uint8_t *)recv_buffer;
+
+ zxdh_bar_chan_msg_payload_get(subchan_addr,
+ recv_msg + ZXDH_REPS_HEADER_PAYLOAD_OFFSET, msg_len);
+ *(uint16_t *)(recv_msg + ZXDH_REPS_HEADER_LEN_OFFSET) = msg_len;
+ *recv_msg = ZXDH_REPS_HEADER_REPLYED; /* set reps's valid */
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result)
+{
+ struct zxdh_bar_msg_header msg_header = {0};
+ uint16_t seq_id = 0;
+ uint64_t subchan_addr = 0;
+ uint32_t time_out_cnt = 0;
+ uint16_t valid = 0;
+ int ret = 0;
+
+ ret = zxdh_bar_chan_send_para_check(in, result);
+ if (ret != ZXDH_BAR_MSG_OK)
+ goto exit;
+
+ ret = zxdh_bar_chan_save_recv_info(result, &seq_id);
+ if (ret != ZXDH_BAR_MSG_OK)
+ goto exit;
+
+ zxdh_bar_chan_subchan_addr_get(in, &subchan_addr);
+
+ msg_header.sync = ZXDH_BAR_CHAN_MSG_SYNC;
+ msg_header.emec = in->emec;
+ msg_header.usr = 0;
+ msg_header.rsv = 0;
+ msg_header.module_id = in->module_id;
+ msg_header.len = in->payload_len;
+ msg_header.msg_id = seq_id;
+ msg_header.src_pcieid = in->src_pcieid;
+ msg_header.dst_pcieid = in->dst_pcieid;
+
+ ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+ if (ret != ZXDH_BAR_MSG_OK) {
+ zxdh_bar_chan_msgid_free(seq_id);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header);
+
+ do {
+ rte_delay_us_block(ZXDH_BAR_MSG_POLLING_SPAN);
+ valid = zxdh_bar_msg_valid_stat_get(subchan_addr);
+ ++time_out_cnt;
+ } while ((time_out_cnt < ZXDH_BAR_MSG_TIMEOUT_TH) && (valid == ZXDH_BAR_MSG_CHAN_USED));
+
+ if (time_out_cnt == ZXDH_BAR_MSG_TIMEOUT_TH && valid != ZXDH_BAR_MSG_CHAN_USABLE) {
+ zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USABLE);
+ zxdh_bar_chan_msg_poltag_set(subchan_addr, 0);
+ PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out.");
+ ret = ZXDH_BAR_MSG_ERR_TIME_OUT;
+ } else {
+ ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr,
+ (uint64_t)result->recv_buffer, result->buffer_len);
+ }
+ zxdh_bar_chan_msgid_free(seq_id);
+ zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr);
+
+exit:
+ return ret;
+}
+
+static int bar_get_sum(uint8_t *ptr, uint8_t len)
+{
+ uint64_t sum = 0;
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ sum += *(ptr + idx);
+
+ return (uint16_t)sum;
+}
+
+static int zxdh_bar_chan_enable(struct zxdh_msix_para *para, uint16_t *vport)
+{
+ struct zxdh_bar_recv_msg recv_msg = {0};
+ int ret = 0;
+ int check_token = 0;
+ int sum_res = 0;
+
+ if (!para)
+ return ZXDH_BAR_MSG_ERR_NULL;
+
+ struct zxdh_msix_msg msix_msg = {
+ .pcie_id = para->pcie_id,
+ .vector_risc = para->vector_risc,
+ .vector_pfvf = para->vector_pfvf,
+ .vector_mpf = para->vector_mpf,
+ };
+ struct zxdh_pci_bar_msg in = {
+ .virt_addr = para->virt_addr,
+ .payload_addr = &msix_msg,
+ .payload_len = sizeof(msix_msg),
+ .emec = 0,
+ .src = para->driver_type,
+ .dst = ZXDH_MSG_CHAN_END_RISC,
+ .module_id = ZXDH_BAR_MODULE_MISX,
+ .src_pcieid = para->pcie_id,
+ .dst_pcieid = 0,
+ .usr = 0,
+ };
+
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = &recv_msg,
+ .buffer_len = sizeof(recv_msg),
+ };
+
+ ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+ if (ret != ZXDH_BAR_MSG_OK)
+ return -ret;
+
+ check_token = recv_msg.msix_reps.check;
+ sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg));
+
+ if (check_token != sum_res) {
+ PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token);
+ return ZXDH_BAR_MSG_ERR_REPLY;
+ }
+ *vport = recv_msg.msix_reps.vport;
+ PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", para->pcie_id);
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct zxdh_msix_para msix_info = {
+ .vector_risc = ZXDH_MSIX_FROM_RISCV,
+ .vector_pfvf = ZXDH_MSIX_FROM_PFVF,
+ .vector_mpf = ZXDH_MSIX_FROM_MPF,
+ .pcie_id = hw->pcie_id,
+ .driver_type = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF,
+ .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET),
+ };
+
+ return zxdh_bar_chan_enable(&msix_info, &hw->vport.vport);
+}
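
The send path above (parameter check -> msg id allocation -> lock -> write -> poll -> reap reply) is easiest to see from the caller's side. Below is a minimal caller sketch modeled on zxdh_bar_chan_enable(); the module id and payload are illustrative only, not part of this patch:

/* Hedged sketch: one synchronous request over the BAR message channel.
 * ZXDH_BAR_MODULE_DEMO and the payload layout are illustrative only. */
static int example_bar_sync_request(struct zxdh_hw *hw, void *req, uint16_t req_len)
{
	uint8_t reply[ZXDH_BAR_MSG_PAYLOAD_MAX_LEN] = {0};
	struct zxdh_pci_bar_msg in = {
		.virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET),
		.payload_addr = req,
		.payload_len = req_len,
		.src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF,
		.dst = ZXDH_MSG_CHAN_END_RISC,
		.module_id = ZXDH_BAR_MODULE_DEMO,
		.src_pcieid = hw->pcie_id,
	};
	struct zxdh_msg_recviver_mem result = {
		.recv_buffer = reply, /* first 4B become the reps header */
		.buffer_len = sizeof(reply),
	};

	/* Blocks until the peer clears the valid flag or the poll times out. */
	return zxdh_bar_chan_sync_msg_send(&in, &result);
}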
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index a0b46c900a..bbacbbc45e 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -13,7 +13,22 @@
extern "C" {
#endif
-#define ZXDH_BAR0_INDEX 0
+#define ZXDH_BAR0_INDEX 0
+#define ZXDH_CTRLCH_OFFSET (0x2000)
+
+#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+
+#define ZXDH_BAR_MSG_POLLING_SPAN 100
+#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
+#define ZXDH_BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
+#define ZXDH_BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
+
+#define ZXDH_BAR_CHAN_MSG_SYNC 0
+
+#define ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */
+#define ZXDH_BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct zxdh_bar_msg_header))
+#define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \
+ (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header))
enum ZXDH_DRIVER_TYPE {
ZXDH_MSG_CHAN_END_MPF = 0,
@@ -22,6 +37,13 @@ enum ZXDH_DRIVER_TYPE {
ZXDH_MSG_CHAN_END_RISC,
};
+enum ZXDH_MSG_VEC {
+ ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
+ ZXDH_MSIX_FROM_MPF,
+ ZXDH_MSIX_FROM_RISCV,
+ ZXDH_MSG_VEC_NUM,
+};
+
enum ZXDH_BAR_MSG_RTN {
ZXDH_BAR_MSG_OK = 0,
ZXDH_BAR_MSG_ERR_MSGID,
@@ -56,10 +78,117 @@ enum ZXDH_BAR_MSG_RTN {
ZXDH_BAR_MSG_ERR_SOCKET, /* netlink socket err */
};
+enum ZXDH_BAR_MODULE_ID {
+ ZXDH_BAR_MODULE_DBG = 0, /* 0: debug */
+ ZXDH_BAR_MODULE_TBL, /* 1: resource table */
+ ZXDH_BAR_MODULE_MISX, /* 2: config msix */
+ ZXDH_BAR_MODULE_SDA, /* 3: */
+ ZXDH_BAR_MODULE_RDMA, /* 4: */
+ ZXDH_BAR_MODULE_DEMO, /* 5: channel test */
+ ZXDH_BAR_MODULE_SMMU, /* 6: */
+ ZXDH_BAR_MODULE_MAC, /* 7: mac rx/tx stats */
+ ZXDH_BAR_MODULE_VDPA, /* 8: vdpa live migration */
+ ZXDH_BAR_MODULE_VQM, /* 9: vqm live migration */
+ ZXDH_BAR_MODULE_NP, /* 10: vf msg callback np */
+ ZXDH_BAR_MODULE_VPORT, /* 11: get vport */
+ ZXDH_BAR_MODULE_BDF, /* 12: get bdf */
+ ZXDH_BAR_MODULE_RISC_READY, /* 13: */
+ ZXDH_BAR_MODULE_REVERSE, /* 14: byte stream reverse */
+ ZXDH_BAR_MDOULE_NVME, /* 15: */
+ ZXDH_BAR_MDOULE_NPSDK, /* 16: */
+ ZXDH_BAR_MODULE_NP_TODO, /* 17: */
+ ZXDH_MODULE_BAR_MSG_TO_PF, /* 18: */
+ ZXDH_MODULE_BAR_MSG_TO_VF, /* 19: */
+
+ ZXDH_MODULE_FLASH = 32,
+ ZXDH_BAR_MODULE_OFFSET_GET = 33,
+ ZXDH_BAR_EVENT_OVS_WITH_VCB = 36,
+
+ ZXDH_BAR_MSG_MODULE_NUM = 100,
+};
+
+struct zxdh_msix_para {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+ uint64_t virt_addr;
+ uint16_t driver_type; /* refer to DRIVER_TYPE */
+};
+
+struct zxdh_msix_msg {
+ uint16_t pcie_id;
+ uint16_t vector_risc;
+ uint16_t vector_pfvf;
+ uint16_t vector_mpf;
+};
+
+struct zxdh_pci_bar_msg {
+ uint64_t virt_addr; /* bar addr */
+ void *payload_addr;
+ uint16_t payload_len;
+ uint16_t emec;
+ uint16_t src; /* refer to BAR_DRIVER_TYPE */
+ uint16_t dst; /* refer to BAR_DRIVER_TYPE */
+ uint16_t module_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid;
+ uint16_t usr;
+};
+
+struct zxdh_bar_msix_reps {
+ uint16_t pcie_id;
+ uint16_t check;
+ uint16_t vport;
+ uint16_t rsv;
+} __rte_packed;
+
+struct zxdh_bar_offset_reps {
+ uint16_t check;
+ uint16_t rsv;
+ uint32_t offset;
+ uint32_t length;
+} __rte_packed;
+
+struct zxdh_bar_recv_msg {
+ uint8_t reps_ok;
+ uint16_t reps_len;
+ uint8_t rsv;
+ union {
+ struct zxdh_bar_msix_reps msix_reps;
+ struct zxdh_bar_offset_reps offset_reps;
+ } __rte_packed;
+} __rte_packed;
+
+struct zxdh_msg_recviver_mem {
+ void *recv_buffer; /* first 4B is head, followed by payload */
+ uint64_t buffer_len;
+};
+
+struct zxdh_bar_msg_header {
+ uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */
+ uint8_t sync : 1;
+ uint8_t emec : 1; /* emergency */
+ uint8_t ack : 1; /* ack msg */
+ uint8_t poll : 1;
+ uint8_t usr : 1;
+ uint8_t rsv;
+ uint16_t module_id;
+ uint16_t len;
+ uint16_t msg_id;
+ uint16_t src_pcieid;
+ uint16_t dst_pcieid; /* used in PF-->VF */
+};
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
+int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
+int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
+ struct zxdh_msg_recviver_mem *result);
+
#ifdef __cplusplus
}
#endif
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 61773 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
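A note on the hard lock used by the channel code in the patch above: the device exposes a read-to-lock byte per lock id, so the read itself is the acquire attempt (a read value of 0 means the caller now owns the lock), and the owner then stamps its identity into a label register so a stuck lock can be attributed. A compact sketch of the acquire side, assuming test-and-set-on-read semantics in hardware:

/* Sketch only: mirrors zxdh_spinlock_lock()/zxdh_spinlock_unlock(). */
static int hard_lock_acquire(uint64_t lock_addr, uint64_t label_addr,
		uint32_t lock_id, uint16_t owner)
{
	uint32_t tries = 0;

	while (tries++ < ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES) {
		/* the read is the lock attempt: 0 means acquired */
		if (spinlock_read(lock_addr, lock_id) == 0) {
			label_write(label_addr, lock_id, owner); /* record owner */
			return 0;
		}
		rte_delay_us_block(ZXDH_SPINLOCK_POLLING_SPAN_US);
	}
	return -1; /* holder did not release within the polling budget */
}

Release is the mirror image: clear the label, then write 0 back to the lock byte.
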
* [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (4 preceding siblings ...)
2024-11-01 6:21 ` [PATCH v9 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
` (2 subsequent siblings)
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 12094 bytes --]
Add zxdh get device backend infos,
using the msg chan to send the get requests.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_common.c | 250 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_common.h | 30 ++++
drivers/net/zxdh/zxdh_ethdev.c | 35 +++++
drivers/net/zxdh/zxdh_ethdev.h | 5 +
drivers/net/zxdh/zxdh_msg.h | 21 +++
6 files changed, 342 insertions(+)
create mode 100644 drivers/net/zxdh/zxdh_common.c
create mode 100644 drivers/net/zxdh/zxdh_common.h
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index 2e0c8fddae..a16db47f89 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -17,4 +17,5 @@ sources = files(
'zxdh_ethdev.c',
'zxdh_pci.c',
'zxdh_msg.c',
+ 'zxdh_common.c',
)
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
new file mode 100644
index 0000000000..34749588d5
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <string.h>
+
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
+#include "zxdh_msg.h"
+#include "zxdh_common.h"
+
+#define ZXDH_MSG_RSP_SIZE_MAX 512
+
+#define ZXDH_COMMON_TABLE_READ 0
+#define ZXDH_COMMON_TABLE_WRITE 1
+
+#define ZXDH_COMMON_FIELD_PHYPORT 6
+
+#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2)
+
+#define ZXDH_REPS_HEADER_OFFSET 4
+#define ZXDH_TBL_MSG_PRO_SUCCESS 0xaa
+
+struct zxdh_common_msg {
+ uint8_t type; /* 0:read table 1:write table */
+ uint8_t field;
+ uint16_t pcie_id;
+ uint16_t slen; /* Data length for write table */
+ uint16_t reserved;
+} __rte_packed;
+
+struct zxdh_common_rsp_hdr {
+ uint8_t rsp_status;
+ uint16_t rsp_len;
+ uint8_t reserved;
+ uint8_t payload_status;
+ uint8_t rsv;
+ uint16_t payload_len;
+} __rte_packed;
+
+struct zxdh_tbl_msg_header {
+ uint8_t type; /* r/w */
+ uint8_t field;
+ uint16_t pcieid;
+ uint16_t slen;
+ uint16_t rsv;
+};
+struct zxdh_tbl_msg_reps_header {
+ uint8_t check;
+ uint8_t rsv;
+ uint16_t len;
+};
+
+static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ uint8_t type,
+ uint8_t field,
+ void *buff,
+ uint16_t buff_size)
+{
+ uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size;
+
+ desc->payload_addr = rte_zmalloc(NULL, msg_len, 0);
+ if (unlikely(desc->payload_addr == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate msg_data");
+ return -ENOMEM;
+ }
+ memset(desc->payload_addr, 0, msg_len);
+ desc->payload_len = msg_len;
+ struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr;
+
+ msg_data->type = type;
+ msg_data->field = field;
+ msg_data->pcie_id = hw->pcie_id;
+ msg_data->slen = buff_size;
+ if (buff_size != 0)
+ rte_memcpy(msg_data + 1, buff, buff_size);
+
+ return 0;
+}
+
+static int32_t zxdh_send_command(struct zxdh_hw *hw,
+ struct zxdh_pci_bar_msg *desc,
+ enum ZXDH_BAR_MODULE_ID module_id,
+ struct zxdh_msg_recviver_mem *msg_rsp)
+{
+ desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+ desc->src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF;
+ desc->dst = ZXDH_MSG_CHAN_END_RISC;
+ desc->module_id = module_id;
+ desc->src_pcieid = hw->pcie_id;
+
+ msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX;
+ msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0);
+ if (unlikely(msg_rsp->recv_buffer == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate messages response");
+ return -ENOMEM;
+ }
+
+ if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != ZXDH_BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response");
+ rte_free(msg_rsp->recv_buffer);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp,
+ void *buff, uint16_t len)
+{
+ struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer;
+
+ if (rsp_hdr->payload_status != 0xaa || rsp_hdr->payload_len != len) {
+ PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d",
+ rsp_hdr->payload_status, rsp_hdr->payload_len);
+ return -1;
+ }
+ if (len != 0)
+ rte_memcpy(buff, rsp_hdr + 1, len);
+
+ return 0;
+}
+
+static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_msg_recviver_mem msg_rsp;
+ struct zxdh_pci_bar_msg desc;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size);
+ if (ret != 0)
+ goto free_rsp_data;
+
+free_rsp_data:
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT,
+ (void *)phyport, sizeof(*phyport));
+ return ret;
+}
+
+static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ param->pcie_id = hw->pcie_id;
+ param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET;
+ param->src_type = ZXDH_BAR_MODULE_TBL;
+}
+
+static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len)
+{
+ struct zxdh_pci_bar_msg in = {0};
+ uint8_t recv_buf[ZXDH_RSC_TBL_CONTENT_LEN_MAX + 8] = {0};
+ int ret = 0;
+
+ if (!res || !dev)
+ return ZXDH_BAR_MSG_ERR_NULL;
+
+ struct zxdh_tbl_msg_header tbl_msg = {
+ .type = ZXDH_TBL_TYPE_READ,
+ .field = field,
+ .pcieid = dev->pcie_id,
+ .slen = 0,
+ .rsv = 0,
+ };
+
+ in.virt_addr = dev->virt_addr;
+ in.payload_addr = &tbl_msg;
+ in.payload_len = sizeof(tbl_msg);
+ in.src = dev->src_type;
+ in.dst = ZXDH_MSG_CHAN_END_RISC;
+ in.module_id = ZXDH_BAR_MODULE_TBL;
+ in.src_pcieid = dev->pcie_id;
+
+ struct zxdh_msg_recviver_mem result = {
+ .recv_buffer = recv_buf,
+ .buffer_len = sizeof(recv_buf),
+ };
+ ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+
+ if (ret != ZXDH_BAR_MSG_OK) {
+ PMD_DRV_LOG(ERR,
+ "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+ return ret;
+ }
+ struct zxdh_tbl_msg_reps_header *tbl_reps =
+ (struct zxdh_tbl_msg_reps_header *)(recv_buf + ZXDH_REPS_HEADER_OFFSET);
+
+ if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+ return ret;
+ }
+ *len = tbl_reps->len;
+ rte_memcpy(res, (recv_buf + ZXDH_REPS_HEADER_OFFSET +
+ sizeof(struct zxdh_tbl_msg_reps_header)), *len);
+ return ret;
+}
+
+static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id)
+{
+ uint8_t reps = 0;
+ uint16_t reps_len = 0;
+
+ if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, &reps, &reps_len) != ZXDH_BAR_MSG_OK)
+ return -1;
+
+ *panel_id = reps;
+ return ZXDH_BAR_MSG_OK;
+}
+
+int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid)
+{
+ struct zxdh_res_para param;
+
+ zxdh_fill_res_para(dev, &param);
+ int32_t ret = zxdh_get_res_panel_id(&param, panelid);
+ return ret;
+}
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
new file mode 100644
index 0000000000..ba29ca1dad
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_COMMON_H
+#define ZXDH_COMMON_H
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+
+#include "zxdh_ethdev.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct zxdh_res_para {
+ uint64_t virt_addr;
+ uint16_t pcie_id;
+ uint16_t src_type; /* refer to BAR_DRIVER_TYPE */
+};
+
+int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
+int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* ZXDH_COMMON_H */
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 21255b2190..23af69fece 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -10,9 +10,21 @@
#include "zxdh_logs.h"
#include "zxdh_pci.h"
#include "zxdh_msg.h"
+#include "zxdh_common.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
+uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v)
+{
+ /* epid > 4 is local soft queue. return 1192 */
+ if (v.epid > 4)
+ return 1192;
+ if (v.vf_flag)
+ return v.epid * 256 + v.vfid;
+ else
+ return (v.epid * 8 + v.pfid) + 1152;
+}
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
return ret;
}
+static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw)
+{
+ if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get phyport");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport);
+
+ hw->vfid = zxdh_vport_to_vfid(hw->vport);
+
+ if (zxdh_panelid_get(eth_dev, &hw->panel_id) != 0) {
+ PMD_INIT_LOG(ERR, "Failed to get panel_id");
+ return -1;
+ }
+ PMD_INIT_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id);
+
+ return 0;
+}
+
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
goto err_zxdh_init;
}
+ ret = zxdh_agent_comm(eth_dev, hw);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
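
The vport-to-vfid mapping added above is easiest to check with a few worked values (illustrative):

/* Worked examples for zxdh_vport_to_vfid():
 *   VF: epid=2, vf_flag=1, vfid=10 -> 2 * 256 + 10 = 522
 *   PF: epid=1, vf_flag=0, pfid=3  -> (1 * 8 + 3) + 1152 = 1163
 *   epid > 4 (local soft queue)    -> fixed vfid 1192
 */

So VFs occupy 256-wide blocks per epid, while PFs are packed 8 per epid starting at 1152.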
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 7434cc15d7..7b7bb16be8 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -55,6 +55,7 @@ struct zxdh_hw {
uint16_t pcie_id;
uint16_t device_id;
uint16_t port_id;
+ uint16_t vfid;
uint8_t *isr;
uint8_t weak_barriers;
@@ -64,8 +65,12 @@ struct zxdh_hw {
uint8_t duplex;
uint8_t is_pf;
uint8_t msg_chan_init;
+ uint8_t phyport;
+ uint8_t panel_id;
};
+uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index bbacbbc45e..49a7d23014 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -107,6 +107,27 @@ enum ZXDH_BAR_MODULE_ID {
ZXDH_BAR_MSG_MODULE_NUM = 100,
};
+enum ZXDH_RES_TBL_FILED {
+ ZXDH_TBL_FIELD_PCIEID = 0,
+ ZXDH_TBL_FIELD_BDF = 1,
+ ZXDH_TBL_FIELD_MSGCH = 2,
+ ZXDH_TBL_FIELD_DATACH = 3,
+ ZXDH_TBL_FIELD_VPORT = 4,
+ ZXDH_TBL_FIELD_PNLID = 5,
+ ZXDH_TBL_FIELD_PHYPORT = 6,
+ ZXDH_TBL_FIELD_SERDES_NUM = 7,
+ ZXDH_TBL_FIELD_NP_PORT = 8,
+ ZXDH_TBL_FIELD_SPEED = 9,
+ ZXDH_TBL_FIELD_HASHID = 10,
+ ZXDH_TBL_FIELD_NON,
+};
+
+enum ZXDH_TBL_MSG_TYPE {
+ ZXDH_TBL_TYPE_READ,
+ ZXDH_TBL_TYPE_WRITE,
+ ZXDH_TBL_TYPE_NON,
+};
+
struct zxdh_msix_para {
uint16_t pcie_id;
uint16_t vector_risc;
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 26243 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
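For orientation, the common-table exchange in the patch above is a fixed header plus an optional payload in each direction: the request carries struct zxdh_common_msg (type/field/pcie_id/slen) and the reply carries struct zxdh_common_rsp_hdr, whose payload_status must be 0xaa and whose payload immediately follows the header. A minimal reader sketch, assuming it sits next to the static helpers in zxdh_common.c:

/* Sketch: read one single-byte field from the backend resource table. */
static int example_read_u8_field(struct zxdh_hw *hw, uint8_t field, uint8_t *out)
{
	/* zxdh_common_table_read() wraps fill -> send -> response check. */
	return zxdh_common_table_read(hw, field, out, sizeof(*out));
}

zxdh_phyport_get() above is exactly this pattern with field = ZXDH_COMMON_FIELD_PHYPORT.
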
* [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (5 preceding siblings ...)
2024-11-01 6:21 ` [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-11-01 6:21 ` [PATCH v9 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 24069 bytes --]
Configure zxdh interrupts, including the RISC and DTB vectors, and add interrupt release handling.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 301 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 6 +
drivers/net/zxdh/zxdh_msg.c | 188 ++++++++++++++++++++
drivers/net/zxdh/zxdh_msg.h | 16 +-
drivers/net/zxdh/zxdh_pci.c | 26 +++
drivers/net/zxdh/zxdh_pci.h | 11 ++
6 files changed, 546 insertions(+), 2 deletions(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 23af69fece..2f8190a428 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -11,6 +11,7 @@
#include "zxdh_pci.h"
#include "zxdh_msg.h"
#include "zxdh_common.h"
+#include "zxdh_queue.h"
struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
@@ -25,6 +26,301 @@ uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR);
+ }
+}
+
+
+static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (rte_intr_ack(dev->intr_handle) < 0)
+ return -1;
+
+ hw->use_msix = zxdh_pci_msix_detect(RTE_ETH_DEV_TO_PCI(dev));
+
+ return 0;
+}
+
+static void zxdh_devconf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+
+ if (zxdh_intr_unmask(dev) < 0)
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+}
+
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_fromriscv_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
+
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_riscvf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+/* Interrupt handler triggered by NIC for handling specific interrupt. */
+static void zxdh_frompfvf_intr_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] +
+ ZXDH_MSG_CHAN_PFVFSHARE_OFFSET);
+
+ if (hw->is_pf) {
+ PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_VF, ZXDH_MSG_CHAN_END_PF, virt_addr, dev);
+ } else {
+ PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler");
+ zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_PF, ZXDH_MSG_CHAN_END_VF, virt_addr, dev);
+ }
+}
+
+static void zxdh_intr_cb_reg(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ /* register callback to update dev config intr */
+ rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Register risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev);
+
+ tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ /* unregister dev config intr callback */
+ rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
+ /* Unregister risc_v to pf interrupt callback */
+ struct rte_intr_handle *tmp = hw->risc_intr +
+ (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+
+ rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev);
+ tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
+ rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev);
+}
+
+static int32_t zxdh_intr_disable(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled)
+ return 0;
+
+ zxdh_intr_cb_unreg(dev);
+ if (rte_intr_disable(dev->intr_handle) < 0)
+ return -1;
+
+ hw->intr_enabled = 0;
+ return 0;
+}
+
+static int32_t zxdh_intr_enable(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->intr_enabled) {
+ zxdh_intr_cb_reg(dev);
+ ret = rte_intr_enable(dev->intr_handle);
+ if (unlikely(ret))
+ PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name);
+
+ hw->intr_enabled = 1;
+ }
+ return ret;
+}
+
+static int32_t zxdh_intr_release(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ ZXDH_VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR);
+
+ zxdh_queues_unbind_intr(dev);
+ zxdh_intr_disable(dev);
+
+ rte_intr_efd_disable(dev->intr_handle);
+ rte_intr_vec_list_free(dev->intr_handle);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return 0;
+}
+
+static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint8_t i;
+
+ if (!hw->risc_intr) {
+ PMD_INIT_LOG(ERR, " to allocate risc_intr");
+ hw->risc_intr = rte_zmalloc("risc_intr",
+ ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0);
+ if (hw->risc_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate risc_intr");
+ return -ENOMEM;
+ }
+ }
+
+ for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) {
+ if (dev->intr_handle->efds[i] < 0) {
+ PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i);
+ rte_free(hw->risc_intr);
+ hw->risc_intr = NULL;
+ return -1;
+ }
+
+ struct rte_intr_handle *intr_handle = hw->risc_intr + i;
+
+ intr_handle->fd = dev->intr_handle->efds[i];
+ intr_handle->type = dev->intr_handle->type;
+ }
+
+ return 0;
+}
+
+static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (!hw->dtb_intr) {
+ hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0);
+ if (hw->dtb_intr == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr");
+ return -ENOMEM;
+ }
+ }
+
+ if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) {
+ PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1);
+ rte_free(hw->dtb_intr);
+ hw->dtb_intr = NULL;
+ return -1;
+ }
+ hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1];
+ hw->dtb_intr->type = dev->intr_handle->type;
+ return 0;
+}
+
+static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t i;
+ uint16_t vec;
+
+ if (!dev->data->dev_conf.intr_conf.rxq) {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ i * 2, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ } else {
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d",
+ i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec);
+ }
+ }
+ /* mask all txq intr */
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw,
+ hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR);
+ PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x",
+ (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec);
+ }
+ return 0;
+}
+
+static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ int32_t ret = 0;
+
+ if (!rte_intr_cap_multiple(dev->intr_handle)) {
+ PMD_INIT_LOG(ERR, "Multiple intr vector not supported");
+ return -ENOTSUP;
+ }
+ zxdh_intr_release(dev);
+ uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM;
+
+ if (dev->data->dev_conf.intr_conf.rxq)
+ nb_efd += dev->data->nb_rx_queues;
+
+ if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) {
+ PMD_INIT_LOG(ERR, "Fail to create eventfd");
+ return -1;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM);
+ return -ENOMEM;
+ }
+ PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size);
+ if (zxdh_setup_risc_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ if (zxdh_setup_dtb_interrupts(dev) != 0) {
+ PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_queues_bind_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt");
+ ret = -1;
+ goto free_intr_vec;
+ }
+
+ if (zxdh_intr_enable(dev) < 0) {
+ PMD_DRV_LOG(ERR, "interrupt enable failed");
+ ret = -1;
+ goto free_intr_vec;
+ }
+ return 0;
+
+free_intr_vec:
+ zxdh_intr_release(dev);
+ return ret;
+}
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -138,9 +434,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
if (ret != 0)
goto err_zxdh_init;
+ ret = zxdh_configure_intr(eth_dev);
+ if (ret != 0)
+ goto err_zxdh_init;
+
return ret;
err_zxdh_init:
+ zxdh_intr_release(eth_dev);
zxdh_bar_msg_chan_exit();
rte_free(eth_dev->data->mac_addrs);
eth_dev->data->mac_addrs = NULL;
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 7b7bb16be8..65726f3a20 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -7,6 +7,8 @@
#include <rte_ether.h>
#include "ethdev_driver.h"
+#include <rte_interrupts.h>
+#include <eal_interrupts.h>
#ifdef __cplusplus
extern "C" {
@@ -43,6 +45,9 @@ struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
struct zxdh_net_config *dev_cfg;
+ struct rte_intr_handle *risc_intr;
+ struct rte_intr_handle *dtb_intr;
+ struct zxdh_virtqueue **vqs;
union zxdh_virport_num vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -59,6 +64,7 @@ struct zxdh_hw {
uint8_t *isr;
uint8_t weak_barriers;
+ uint8_t intr_enabled;
uint8_t use_msix;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c
index 4f6607fcb0..851bab8b36 100644
--- a/drivers/net/zxdh/zxdh_msg.c
+++ b/drivers/net/zxdh/zxdh_msg.c
@@ -95,6 +95,12 @@
#define ZXDH_BAR_CHAN_INDEX_SEND 0
#define ZXDH_BAR_CHAN_INDEX_RECV 1
+#define ZXDH_BAR_CHAN_MSG_SYNC 0
+#define ZXDH_BAR_CHAN_MSG_NO_EMEC 0
+#define ZXDH_BAR_CHAN_MSG_EMEC 1
+#define ZXDH_BAR_CHAN_MSG_NO_ACK 0
+#define ZXDH_BAR_CHAN_MSG_ACK 1
+
uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = {
{ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND},
{ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV},
@@ -136,6 +142,36 @@ struct zxdh_seqid_ring g_seqid_ring = {0};
static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL];
+static inline const char *zxdh_module_id_name(int val)
+{
+ switch (val) {
+ case ZXDH_BAR_MODULE_DBG: return "ZXDH_BAR_MODULE_DBG";
+ case ZXDH_BAR_MODULE_TBL: return "ZXDH_BAR_MODULE_TBL";
+ case ZXDH_BAR_MODULE_MISX: return "ZXDH_BAR_MODULE_MISX";
+ case ZXDH_BAR_MODULE_SDA: return "ZXDH_BAR_MODULE_SDA";
+ case ZXDH_BAR_MODULE_RDMA: return "ZXDH_BAR_MODULE_RDMA";
+ case ZXDH_BAR_MODULE_DEMO: return "ZXDH_BAR_MODULE_DEMO";
+ case ZXDH_BAR_MODULE_SMMU: return "ZXDH_BAR_MODULE_SMMU";
+ case ZXDH_BAR_MODULE_MAC: return "ZXDH_BAR_MODULE_MAC";
+ case ZXDH_BAR_MODULE_VDPA: return "ZXDH_BAR_MODULE_VDPA";
+ case ZXDH_BAR_MODULE_VQM: return "ZXDH_BAR_MODULE_VQM";
+ case ZXDH_BAR_MODULE_NP: return "ZXDH_BAR_MODULE_NP";
+ case ZXDH_BAR_MODULE_VPORT: return "ZXDH_BAR_MODULE_VPORT";
+ case ZXDH_BAR_MODULE_BDF: return "ZXDH_BAR_MODULE_BDF";
+ case ZXDH_BAR_MODULE_RISC_READY: return "ZXDH_BAR_MODULE_RISC_READY";
+ case ZXDH_BAR_MODULE_REVERSE: return "ZXDH_BAR_MODULE_REVERSE";
+ case ZXDH_BAR_MDOULE_NVME: return "ZXDH_BAR_MDOULE_NVME";
+ case ZXDH_BAR_MDOULE_NPSDK: return "ZXDH_BAR_MDOULE_NPSDK";
+ case ZXDH_BAR_MODULE_NP_TODO: return "ZXDH_BAR_MODULE_NP_TODO";
+ case ZXDH_MODULE_BAR_MSG_TO_PF: return "ZXDH_MODULE_BAR_MSG_TO_PF";
+ case ZXDH_MODULE_BAR_MSG_TO_VF: return "ZXDH_MODULE_BAR_MSG_TO_VF";
+ case ZXDH_MODULE_FLASH: return "ZXDH_MODULE_FLASH";
+ case ZXDH_BAR_MODULE_OFFSET_GET: return "ZXDH_BAR_MODULE_OFFSET_GET";
+ case ZXDH_BAR_EVENT_OVS_WITH_VCB: return "ZXDH_BAR_EVENT_OVS_WITH_VCB";
+ default: return "NA";
+ }
+}
+
static uint16_t zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
{
uint16_t lock_id = 0;
@@ -796,3 +832,155 @@ int zxdh_msg_chan_enable(struct rte_eth_dev *dev)
return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport);
}
+
+static uint64_t zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+
+ return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+}
+
+static void zxdh_bar_msg_ack_async_msg_proc(struct zxdh_bar_msg_header *msg_header,
+ uint8_t *receiver_buff)
+{
+ struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id];
+
+ if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) {
+ PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id);
+ return;
+ }
+ if (msg_header->len > reps_info->buffer_len - 4) {
+ PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u",
+ reps_info->buffer_len, msg_header->len + 4);
+ goto free_id;
+ }
+ uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr;
+
+ rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len);
+ *(uint16_t *)(reps_buffer + 1) = msg_header->len;
+ *(uint8_t *)(reps_info->reps_addr) = ZXDH_REPS_HEADER_REPLYED;
+
+free_id:
+ zxdh_bar_chan_msgid_free(msg_header->msg_id);
+}
+
+zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[ZXDH_BAR_MSG_MODULE_NUM];
+static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr,
+ struct zxdh_bar_msg_header *msg_header,
+ uint8_t *receiver_buff, void *dev)
+{
+ uint8_t *reps_buffer = rte_malloc(NULL, ZXDH_BAR_MSG_PAYLOAD_MAX_LEN, 0);
+
+ if (reps_buffer == NULL)
+ return;
+
+ zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id];
+ uint16_t reps_len = 0;
+
+ recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev);
+ msg_header->ack = ZXDH_BAR_CHAN_MSG_ACK;
+ msg_header->len = reps_len;
+ zxdh_bar_chan_msg_header_set(reply_addr, msg_header);
+ zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len);
+ zxdh_bar_chan_msg_valid_set(reply_addr, ZXDH_BAR_MSG_CHAN_USABLE);
+ rte_free(reps_buffer);
+}
+
+static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type,
+ uint8_t dst_type, uint64_t virt_addr)
+{
+ uint8_t src = zxdh_bar_msg_dst_index_trans(src_type);
+ uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type);
+
+ if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR)
+ return 0;
+
+ uint8_t chan_id = chan_id_tbl[dst][src];
+ uint8_t subchan_id = 1 - subchan_id_tbl[dst][src];
+ uint64_t recv_rep_addr;
+
+ if (sync == ZXDH_BAR_CHAN_MSG_SYNC)
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id);
+ else
+ recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id);
+
+ return recv_rep_addr;
+}
+
+static uint16_t zxdh_bar_chan_msg_header_check(struct zxdh_bar_msg_header *msg_header)
+{
+ if (msg_header->valid != ZXDH_BAR_MSG_CHAN_USED) {
+ PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used.");
+ return ZXDH_BAR_MSG_ERR_MODULE;
+ }
+ uint8_t module_id = msg_header->module_id;
+
+ if (module_id >= (uint8_t)ZXDH_BAR_MSG_MODULE_NUM) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id);
+ return ZXDH_BAR_MSG_ERR_MODULE;
+ }
+ uint16_t len = msg_header->len;
+
+ if (len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) {
+ PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len);
+ return ZXDH_BAR_MSG_ERR_LEN;
+ }
+ if (msg_recv_func_tbl[msg_header->module_id] == NULL) {
+ PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register",
+ zxdh_module_id_name(module_id), module_id);
+ return ZXDH_BAR_MSG_ERR_MODULE_NOEXIST;
+ }
+ return ZXDH_BAR_MSG_OK;
+}
+
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev)
+{
+ struct zxdh_bar_msg_header msg_header = {0};
+ uint64_t recv_addr = 0;
+ uint16_t ret = 0;
+
+ recv_addr = zxdh_recv_addr_get(src, dst, virt_addr);
+ if (recv_addr == 0) {
+ PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst);
+ return -1;
+ }
+
+ zxdh_bar_chan_msg_header_get(recv_addr, &msg_header);
+ ret = zxdh_bar_chan_msg_header_check(&msg_header);
+
+ if (ret != ZXDH_BAR_MSG_OK) {
+ PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret);
+ return -1;
+ }
+
+ uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0);
+ if (recved_msg == NULL) {
+ PMD_MSG_LOG(ERR, "malloc temp buff failed.");
+ return -1;
+ }
+ zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len);
+
+ uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr);
+
+ if (msg_header.sync == ZXDH_BAR_CHAN_MSG_SYNC) {
+ zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev);
+ goto exit;
+ }
+ zxdh_bar_chan_msg_valid_set(recv_addr, ZXDH_BAR_MSG_CHAN_USABLE);
+ if (msg_header.ack == ZXDH_BAR_CHAN_MSG_ACK) {
+ zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg);
+ goto exit;
+ }
+ goto exit; /* free the temp recv buffer before returning */
+
+exit:
+ rte_free(recved_msg);
+ return ZXDH_BAR_MSG_OK;
+}
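
zxdh_bar_irq_recv() dispatches through the global msg_recv_func_tbl, and zxdh_bar_chan_msg_header_check() rejects any message whose module id has no callback installed. No registration helper appears in this patch, so the sketch below is hypothetical:

/* Hypothetical helper (not part of this patch): install the per-module
 * handler that zxdh_bar_irq_recv() dispatches received messages to. */
static int example_msg_recv_register(uint16_t module_id,
		zxdh_bar_chan_msg_recv_callback cb)
{
	if (module_id >= ZXDH_BAR_MSG_MODULE_NUM || cb == NULL)
		return -1;
	msg_recv_func_tbl[module_id] = cb;
	return 0;
}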
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h
index 49a7d23014..0d032d306c 100644
--- a/drivers/net/zxdh/zxdh_msg.h
+++ b/drivers/net/zxdh/zxdh_msg.h
@@ -13,10 +13,17 @@
extern "C" {
#endif
-#define ZXDH_BAR0_INDEX 0
-#define ZXDH_CTRLCH_OFFSET (0x2000)
+#define ZXDH_BAR0_INDEX 0
+#define ZXDH_CTRLCH_OFFSET (0x2000)
+#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000)
#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1
+#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3
+#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM)
+#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1
+#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1)
+#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM)
+#define ZXDH_QUEUE_INTR_VEC_NUM 256
#define ZXDH_BAR_MSG_POLLING_SPAN 100
#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)
@@ -202,6 +209,9 @@ struct zxdh_bar_msg_header {
uint16_t dst_pcieid; /* used in PF-->VF */
};
+typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len,
+ void *reps_buffer, uint16_t *reps_len, void *dev);
+
int zxdh_msg_chan_init(void);
int zxdh_bar_msg_chan_exit(void);
int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev);
@@ -210,6 +220,8 @@ int zxdh_msg_chan_enable(struct rte_eth_dev *dev);
int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in,
struct zxdh_msg_recviver_mem *result);
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index a88d620f30..65164c86b7 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -92,6 +92,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features)
rte_write32(features >> 32, &hw->common_cfg->guest_feature);
}
+static uint16_t zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec)
+{
+ rte_write16(vec, &hw->common_cfg->msix_config);
+ return rte_read16(&hw->common_cfg->msix_config);
+}
+
+static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+ rte_write16(vec, &hw->common_cfg->queue_msix_vector);
+ return rte_read16(&hw->common_cfg->queue_msix_vector);
+}
+
+static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
+{
+ return rte_read8(hw->isr);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -99,8 +117,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_status = zxdh_set_status,
.get_features = zxdh_get_features,
.set_features = zxdh_set_features,
+ .set_queue_irq = zxdh_set_queue_irq,
+ .set_config_irq = zxdh_set_config_irq,
+ .get_isr = zxdh_get_isr,
};
+uint8_t zxdh_pci_isr(struct zxdh_hw *hw)
+{
+ return ZXDH_VTPCI_OPS(hw)->get_isr(hw);
+}
+
uint16_t zxdh_pci_get_features(struct zxdh_hw *hw)
{
return ZXDH_VTPCI_OPS(hw)->get_features(hw);
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index ff656f28e6..f362658ba6 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -22,6 +22,13 @@ enum zxdh_msix_status {
ZXDH_MSIX_ENABLED = 2
};
+/* The bit of the ISR which indicates a device has an interrupt. */
+#define ZXDH_PCI_ISR_INTR 0x1
+/* The bit of the ISR which indicates a device configuration change. */
+#define ZXDH_PCI_ISR_CONFIG 0x2
+/* Vector value used to disable MSI for queue. */
+#define ZXDH_MSI_NO_VECTOR 0x7F
+
#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
@@ -110,6 +117,9 @@ struct zxdh_pci_ops {
uint64_t (*get_features)(struct zxdh_hw *hw);
void (*set_features)(struct zxdh_hw *hw, uint64_t features);
+ uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec);
+ uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
+ uint8_t (*get_isr)(struct zxdh_hw *hw);
};
struct zxdh_hw_internal {
@@ -130,6 +140,7 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
uint16_t zxdh_pci_get_features(struct zxdh_hw *hw);
enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev);
+uint8_t zxdh_pci_isr(struct zxdh_hw *hw);
#ifdef __cplusplus
}
--
2.27.0
[-- Attachment #1.1.2: Type: text/html , Size: 50793 bytes --]
^ permalink raw reply [flat|nested] 76+ messages in thread
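Putting the vector constants from this patch together gives the following MSI-X layout (derived from the defines in zxdh_msg.h above):

/* Resulting MSI-X vector map:
 *   vec 0    : device config interrupt
 *   vec 1..3 : message channel (PFVF = 1, MPF = 2, RISCV = 3)
 *   vec 4    : DTB (ZXDH_MSIX_INTR_DTB_VEC = 1 + 3)
 *   vec 5..  : rx queues; rxq i (vq i * 2) gets vec 5 + i when rxq
 *              interrupts are enabled, txqs stay at ZXDH_MSI_NO_VECTOR
 * ZXDH_INTR_NONQUE_NUM = 3 + 1 + 1 = 5 non-queue vectors in total.
 */
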
* [PATCH v9 8/9] net/zxdh: add zxdh dev infos get ops
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (6 preceding siblings ...)
2024-11-01 6:21 ` [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
2024-11-01 6:21 ` [PATCH v9 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 3547 bytes --]
Add support for zxdh dev infos get.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 44 +++++++++++++++++++++++++++++++++-
drivers/net/zxdh/zxdh_ethdev.h | 2 ++
2 files changed, 45 insertions(+), 1 deletion(-)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 2f8190a428..bbdbda3457 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -26,6 +26,43 @@ uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v)
return (v.epid * 8 + v.pfid) + 1152;
}
+static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ dev_info->speed_capa = rte_eth_speed_bitflag(hw->speed, RTE_ETH_LINK_FULL_DUPLEX);
+ dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX);
+ dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX);
+ dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE;
+ dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN;
+ dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS;
+ dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM);
+ dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER);
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
+ dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO);
+ dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM);
+
+ return 0;
+}
+
static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
{
struct zxdh_hw *hw = dev->data->dev_private;
@@ -321,6 +358,11 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+/* dev_ops for zxdh, bare necessities for basic operation */
+static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_infos_get = zxdh_dev_infos_get,
+};
+
static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
{
struct zxdh_hw *hw = eth_dev->data->dev_private;
@@ -377,7 +419,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
struct zxdh_hw *hw = eth_dev->data->dev_private;
int ret = 0;
- eth_dev->dev_ops = NULL;
+ eth_dev->dev_ops = &zxdh_eth_dev_ops;
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 65726f3a20..89c5a9bb5f 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -29,6 +29,8 @@ extern "C" {
#define ZXDH_NUM_BARS 2
#define ZXDH_RX_QUEUES_MAX 128U
#define ZXDH_TX_QUEUES_MAX 128U
+#define ZXDH_MIN_RX_BUFSIZE 64
+#define ZXDH_MAX_RX_PKTLEN 14000U
union zxdh_virport_num {
uint16_t vport;
--
2.27.0
* [PATCH v9 9/9] net/zxdh: add zxdh dev configure ops
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
` (7 preceding siblings ...)
2024-11-01 6:21 ` [PATCH v9 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
@ 2024-11-01 6:21 ` Junlong Wang
8 siblings, 0 replies; 76+ messages in thread
From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw)
To: dev; +Cc: wang.yong19, Junlong Wang
[-- Attachment #1.1.1: Type: text/plain, Size: 38556 bytes --]
Provide zxdh dev configure ops for queue checking, reset,
resource allocation, etc.
Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/meson.build | 1 +
drivers/net/zxdh/zxdh_common.c | 135 ++++++++++
drivers/net/zxdh/zxdh_common.h | 12 +
drivers/net/zxdh/zxdh_ethdev.c | 449 +++++++++++++++++++++++++++++++++
drivers/net/zxdh/zxdh_ethdev.h | 16 ++
drivers/net/zxdh/zxdh_pci.c | 98 +++++++
drivers/net/zxdh/zxdh_pci.h | 26 ++
drivers/net/zxdh/zxdh_queue.c | 123 +++++++++
drivers/net/zxdh/zxdh_queue.h | 172 +++++++++++++
9 files changed, 1032 insertions(+)
create mode 100644 drivers/net/zxdh/zxdh_queue.c
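A configuration sketch (illustrative, not part of the patch) that satisfies
the sanity checks in zxdh_dev_configure() below: the Rx and Tx queue counts
must be equal, their sum must stay below ZXDH_QUEUES_NUM_MAX, and the
multi-queue modes must be RSS/NONE. The helper name and queue counts are
made up for the example.

#include <rte_ethdev.h>

/* Sketch only: configure a zxdh port in a way that passes the
 * zxdh_dev_configure() checks. RTE_ETH_MQ_RX_RSS would also be accepted. */
static int configure_zxdh_port(uint16_t port_id)
{
    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE },
        .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
    };

    /* 4 Rx + 4 Tx queues: equal counts, sum well under 256 */
    return rte_eth_dev_configure(port_id, 4, 4, &conf);
}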
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index a16db47f89..b96aa5a27e 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -18,4 +18,5 @@ sources = files(
'zxdh_pci.c',
'zxdh_msg.c',
'zxdh_common.c',
+ 'zxdh_queue.c',
)
diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
index 34749588d5..9535791c94 100644
--- a/drivers/net/zxdh/zxdh_common.c
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -20,6 +20,7 @@
#define ZXDH_COMMON_TABLE_WRITE 1
#define ZXDH_COMMON_FIELD_PHYPORT 6
+#define ZXDH_COMMON_FIELD_DATACH 3
#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2)
@@ -248,3 +249,137 @@ int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid)
int32_t ret = zxdh_get_res_panel_id(¶m, panelid);
return ret;
}
+
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ uint32_t val = *((volatile uint32_t *)(baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+ *((volatile uint32_t *)(baseaddr + reg)) = val;
+}
+
+static bool zxdh_try_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ /* the lock is acquired only when the enable bit reads back as set */
+ if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
+ return false;
+
+ return true;
+}
+
+int32_t zxdh_timedlock(struct zxdh_hw *hw, uint32_t us)
+{
+ uint16_t timeout = 0;
+
+ while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ rte_delay_us_block(us);
+ /* acquire hw lock */
+ if (!zxdh_try_lock(hw)) {
+ PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout);
+ continue;
+ }
+ break;
+ }
+ if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ return 0;
+}
+
+void zxdh_release_lock(struct zxdh_hw *hw)
+{
+ uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+ if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
+ var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
+ zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
+ }
+}
+
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg)
+{
+ uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg));
+ return val;
+}
+
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val)
+{
+ *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val;
+}
+
+static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field,
+ void *buff, uint16_t buff_size)
+{
+ struct zxdh_pci_bar_msg desc;
+ struct zxdh_msg_recviver_mem msg_rsp;
+ int32_t ret = 0;
+
+ if (!hw->msg_chan_init) {
+ PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
+ return -1;
+ }
+ if (buff_size != 0 && buff == NULL) {
+ PMD_DRV_LOG(ERR, "Buff is invalid");
+ return -1;
+ }
+
+ ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE,
+ field, buff, buff_size);
+
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to fill common msg");
+ return ret;
+ }
+
+ ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp);
+ if (ret != 0)
+ goto free_msg_data;
+
+ ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0);
+ if (ret != 0)
+ goto free_rsp_data;
+
+free_rsp_data:
+ rte_free(msg_rsp.recv_buffer);
+free_msg_data:
+ rte_free(desc.payload_addr);
+ return ret;
+}
+
+int32_t zxdh_datach_set(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t buff_size = (hw->queue_num + 1) * 2;
+ void *buff = rte_zmalloc(NULL, buff_size, 0);
+
+ if (unlikely(buff == NULL)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate buff");
+ return -ENOMEM;
+ }
+ uint16_t *pdata = (uint16_t *)buff;
+ *pdata++ = hw->queue_num;
+ uint16_t i;
+
+ for (i = 0; i < hw->queue_num; i++)
+ *(pdata + i) = hw->channel_context[i].ph_chno;
+
+ int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH,
+ (void *)buff, buff_size);
+
+ if (ret != 0)
+ PMD_DRV_LOG(ERR, "Failed to setup data channel of common table");
+
+ rte_free(buff);
+ return ret;
+}
diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
index ba29ca1dad..a26f0d8d6f 100644
--- a/drivers/net/zxdh/zxdh_common.h
+++ b/drivers/net/zxdh/zxdh_common.h
@@ -14,6 +14,10 @@
extern "C" {
#endif
+#define ZXDH_VF_LOCK_REG 0x90
+#define ZXDH_VF_LOCK_ENABLE_MASK 0x1
+#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10
+
struct zxdh_res_para {
uint64_t virt_addr;
uint16_t pcie_id;
@@ -23,6 +27,14 @@ struct zxdh_res_para {
int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg);
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val);
+void zxdh_release_lock(struct zxdh_hw *hw);
+int32_t zxdh_timedlock(struct zxdh_hw *hw, uint32_t us);
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg);
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val);
+int32_t zxdh_datach_set(struct rte_eth_dev *dev);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index bbdbda3457..41d2fdfead 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -358,8 +358,457 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
return ret;
}
+static int32_t zxdh_features_update(struct zxdh_hw *hw,
+ const struct rte_eth_rxmode *rxmode,
+ const struct rte_eth_txmode *txmode)
+{
+ uint64_t rx_offloads = rxmode->offloads;
+ uint64_t tx_offloads = txmode->offloads;
+ uint64_t req_features = hw->guest_features;
+
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM);
+
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
+ req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) |
+ (1ULL << ZXDH_NET_F_GUEST_TSO6);
+
+ if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
+ req_features |= (1ULL << ZXDH_NET_F_CSUM);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) |
+ (1ULL << ZXDH_NET_F_HOST_TSO6);
+
+ if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO)
+ req_features |= (1ULL << ZXDH_NET_F_HOST_UFO);
+
+ req_features = req_features & hw->host_features;
+ hw->guest_features = req_features;
+
+ ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features);
+
+ if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) {
+ PMD_DRV_LOG(ERR, "rx checksum not available on this host");
+ return -ENOTSUP;
+ }
+
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
+ (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) {
+ PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host");
+ return -ENOTSUP;
+ }
+ return 0;
+}
+
+static bool rx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6);
+}
+
+static bool tx_offload_enabled(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) ||
+ vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO);
+}
+
+static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t i = 0;
+
+ const char *type = NULL;
+ struct zxdh_virtqueue *vq = NULL;
+ struct rte_mbuf *buf = NULL;
+ int32_t queue_type = 0;
+
+ if (hw->vqs == NULL)
+ return;
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (!vq)
+ continue;
+
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == ZXDH_VTNET_RQ)
+ type = "rxq";
+ else if (queue_type == ZXDH_VTNET_TQ)
+ type = "txq";
+ else
+ continue;
+ PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i);
+
+ while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL)
+ rte_pktmbuf_free(buf);
+ }
+}
+
+static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t base = (queue_type == ZXDH_VTNET_RQ) ? 0 : 1;
+ uint16_t i = 0;
+ uint16_t j = 0;
+ uint16_t done = 0;
+ int32_t ret = 0;
+
+ ret = zxdh_timedlock(hw, 1000);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout");
+ return -1;
+ }
+
+ /* Iterate COI table and find free channel */
+ for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) {
+ uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t));
+ uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+
+ for (j = base; j < 32; j += 2) {
+ /* Got the available channel & update COI table */
+ if ((var & (1 << j)) == 0) {
+ var |= (1 << j);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+ done = 1;
+ break;
+ }
+ }
+ if (done)
+ break;
+ }
+ zxdh_release_lock(hw);
+ /* check for no channel condition */
+ if (done != 1) {
+ PMD_INIT_LOG(ERR, "NO availd queues");
+ return -1;
+ }
+ /* reruen available channel ID */
+ return (i * 32) + j;
+}
+
+static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ if (hw->channel_context[lch].valid == 1) {
+ PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired Physics channel:%u",
+ lch, hw->channel_context[lch].ph_chno);
+ return hw->channel_context[lch].ph_chno;
+ }
+ int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch));
+
+ if (pch < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire channel");
+ return -1;
+ }
+ hw->channel_context[lch].ph_chno = (uint16_t)pch;
+ hw->channel_context[lch].valid = 1;
+ PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch);
+ return 0;
+}
+
+static void zxdh_init_vring(struct zxdh_virtqueue *vq)
+{
+ int32_t size = vq->vq_nentries;
+ uint8_t *ring_mem = vq->vq_ring_virt_mem;
+
+ memset(ring_mem, 0, vq->vq_ring_size);
+
+ vq->vq_used_cons_idx = 0;
+ vq->vq_desc_head_idx = 0;
+ vq->vq_avail_idx = 0;
+ vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries);
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ virtqueue_disable_intr(vq);
+}
+
+static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx)
+{
+ char vq_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0};
+ char vq_hdr_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0};
+ const struct rte_memzone *mz = NULL;
+ const struct rte_memzone *hdr_mz = NULL;
+ uint32_t size = 0;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ struct zxdh_virtnet_rx *rxvq = NULL;
+ struct zxdh_virtnet_tx *txvq = NULL;
+ struct zxdh_virtqueue *vq = NULL;
+ size_t sz_hdr_mz = 0;
+ void *sw_ring = NULL;
+ int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx);
+ int32_t numa_node = dev->device->numa_node;
+ uint16_t vtpci_phy_qidx = 0;
+ uint32_t vq_size = 0;
+ int32_t ret = 0;
+
+ if (hw->channel_context[vtpci_logic_qidx].valid == 0) {
+ PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx);
+ return -EINVAL;
+ }
+ vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno;
+
+ PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx :%d setting up physical queue: %u on NUMA node %d",
+ vtpci_logic_qidx, vtpci_phy_qidx, numa_node);
+
+ vq_size = ZXDH_QUEUE_DEPTH;
+
+ if (ZXDH_VTPCI_OPS(hw)->set_queue_num != NULL)
+ ZXDH_VTPCI_OPS(hw)->set_queue_num(hw, vtpci_phy_qidx, vq_size);
+
+ snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx);
+
+ size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct zxdh_vq_desc_extra),
+ RTE_CACHE_LINE_SIZE);
+ if (queue_type == ZXDH_VTNET_TQ) {
+ /*
+ * For each xmit packet, allocate a zxdh_net_hdr
+ * and indirect ring elements
+ */
+ sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region);
+ }
+
+ vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node);
+ if (vq == NULL) {
+ PMD_INIT_LOG(ERR, "can not allocate vq");
+ return -ENOMEM;
+ }
+ hw->vqs[vtpci_logic_qidx] = vq;
+
+ vq->hw = hw;
+ vq->vq_queue_index = vtpci_phy_qidx;
+ vq->vq_nentries = vq_size;
+
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = ZXDH_VRING_PACKED_DESC_F_AVAIL;
+ vq->vq_packed.event_flags_shadow = 0;
+ if (queue_type == ZXDH_VTNET_RQ)
+ vq->vq_packed.cached_flags |= ZXDH_VRING_DESC_F_WRITE;
+
+ /*
+ * Reserve a memzone for vring elements
+ */
+ size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN);
+ vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN);
+ PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
+
+ mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ ZXDH_PCI_VRING_ALIGN);
+ if (mz == NULL) {
+ if (rte_errno == EEXIST)
+ mz = rte_memzone_lookup(vq_name);
+ if (mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+
+ memset(mz->addr, 0, mz->len);
+
+ vq->vq_ring_mem = mz->iova;
+ vq->vq_ring_virt_mem = mz->addr;
+
+ zxdh_init_vring(vq);
+
+ if (sz_hdr_mz) {
+ snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
+ dev->data->port_id, vtpci_phy_qidx);
+ hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
+ numa_node, RTE_MEMZONE_IOVA_CONTIG,
+ RTE_CACHE_LINE_SIZE);
+ if (hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ hdr_mz = rte_memzone_lookup(vq_hdr_name);
+ if (hdr_mz == NULL) {
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+ }
+ }
+
+ if (queue_type == ZXDH_VTNET_RQ) {
+ size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]);
+
+ sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node);
+ if (!sw_ring) {
+ PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
+ ret = -ENOMEM;
+ goto fail_q_alloc;
+ }
+
+ vq->sw_ring = sw_ring;
+ rxvq = &vq->rxq;
+ rxvq->vq = vq;
+ rxvq->port_id = dev->data->port_id;
+ rxvq->mz = mz;
+ } else { /* queue_type == ZXDH_VTNET_TQ */
+ txvq = &vq->txq;
+ txvq->vq = vq;
+ txvq->port_id = dev->data->port_id;
+ txvq->mz = mz;
+ txvq->zxdh_net_hdr_mz = hdr_mz;
+ txvq->zxdh_net_hdr_mem = hdr_mz->iova;
+ }
+
+ vq->offset = offsetof(struct rte_mbuf, buf_iova);
+ if (queue_type == ZXDH_VTNET_TQ) {
+ struct zxdh_tx_region *txr = hdr_mz->addr;
+ uint32_t i;
+
+ memset(txr, 0, vq_size * sizeof(*txr));
+ for (i = 0; i < vq_size; i++) {
+ /* first indirect descriptor is always the tx header */
+ struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir;
+
+ vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir));
+ start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) +
+ offsetof(struct zxdh_tx_region, tx_hdr);
+ /* the length is updated to the actual PI header size at transmit time */
+ start_dp->len = 0;
+ }
+ }
+ if (ZXDH_VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
+ PMD_INIT_LOG(ERR, "setup_queue failed");
+ return -EINVAL;
+ }
+ return 0;
+fail_q_alloc:
+ rte_free(sw_ring);
+ rte_memzone_free(hdr_mz);
+ rte_memzone_free(mz);
+ rte_free(vq);
+ return ret;
+}
+
+static int32_t zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq)
+{
+ uint16_t lch;
+ struct zxdh_hw *hw = dev->data->dev_private;
+
+ hw->vqs = rte_zmalloc(NULL, sizeof(struct zxdh_virtqueue *) * nr_vq, 0);
+ if (!hw->vqs) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vqs");
+ return -ENOMEM;
+ }
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (zxdh_acquire_channel(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire the channels");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ if (zxdh_init_queue(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to alloc virtio queue");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int32_t zxdh_dev_configure(struct rte_eth_dev *dev)
+{
+ const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint32_t nr_vq = 0;
+ int32_t ret = 0;
+
+ if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+ return -EINVAL;
+ }
+ if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues,
+ ZXDH_QUEUES_NUM_MAX);
+ return -EINVAL;
+ }
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode);
+ return -EINVAL;
+ }
+
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode);
+ return -EINVAL;
+ }
+
+ ret = zxdh_features_update(hw, rxmode, txmode);
+ if (ret < 0)
+ return ret;
+
+ /* check if lsc interrupt feature is enabled */
+ if (dev->data->dev_conf.intr_conf.lsc) {
+ if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
+ PMD_DRV_LOG(ERR, "link status not supported by host");
+ return -ENOTSUP;
+ }
+ }
+
+ hw->has_tx_offload = tx_offload_enabled(hw);
+ hw->has_rx_offload = rx_offload_enabled(hw);
+
+ nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues;
+ if (nr_vq == hw->queue_num)
+ return 0;
+
+ PMD_DRV_LOG(DEBUG, "queue changed need reset ");
+ /* Reset the device although not necessary at startup */
+ zxdh_pci_reset(hw);
+
+ /* Tell the host we've noticed this device. */
+ zxdh_pci_set_status(hw, ZXDH_CONFIG_STATUS_ACK);
+
+ /* Tell the host we know how to drive the device. */
+ zxdh_pci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER);
+ /* The queues need to be released when reconfiguring */
+ if (hw->vqs != NULL) {
+ zxdh_dev_free_mbufs(dev);
+ zxdh_free_queues(dev);
+ }
+
+ hw->queue_num = nr_vq;
+ ret = zxdh_alloc_queues(dev, nr_vq);
+ if (ret < 0)
+ return ret;
+
+ ret = zxdh_datach_set(dev);
+ if (ret != 0) {
+ zxdh_free_queues(dev);
+ return ret;
+ }
+
+ if (zxdh_configure_intr(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to configure interrupt");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+
+ zxdh_pci_reinit_complete(hw);
+
+ return ret;
+}
+
/* dev_ops for zxdh, bare necessities for basic operation */
static const struct eth_dev_ops zxdh_eth_dev_ops = {
+ .dev_configure = zxdh_dev_configure,
.dev_infos_get = zxdh_dev_infos_get,
};
diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h
index 89c5a9bb5f..28e78b0086 100644
--- a/drivers/net/zxdh/zxdh_ethdev.h
+++ b/drivers/net/zxdh/zxdh_ethdev.h
@@ -31,6 +31,13 @@ extern "C" {
#define ZXDH_TX_QUEUES_MAX 128U
#define ZXDH_MIN_RX_BUFSIZE 64
#define ZXDH_MAX_RX_PKTLEN 14000U
+#define ZXDH_QUEUE_DEPTH 1024
+#define ZXDH_QUEUES_BASE 0
+#define ZXDH_TOTAL_QUEUES_NUM 4096
+#define ZXDH_QUEUES_NUM_MAX 256
+#define ZXDH_QUERES_SHARE_BASE (0x5000)
+
+#define ZXDH_MBUF_BURST_SZ 64
union zxdh_virport_num {
uint16_t vport;
@@ -43,6 +50,11 @@ union zxdh_virport_num {
};
};
+struct zxdh_chnl_context {
+ uint16_t valid;
+ uint16_t ph_chno;
+};
+
struct zxdh_hw {
struct rte_eth_dev *eth_dev;
struct zxdh_pci_common_cfg *common_cfg;
@@ -50,6 +62,7 @@ struct zxdh_hw {
struct rte_intr_handle *risc_intr;
struct rte_intr_handle *dtb_intr;
struct zxdh_virtqueue **vqs;
+ struct zxdh_chnl_context channel_context[ZXDH_QUEUES_NUM_MAX];
union zxdh_virport_num vport;
uint64_t bar_addr[ZXDH_NUM_BARS];
@@ -63,6 +76,7 @@ struct zxdh_hw {
uint16_t device_id;
uint16_t port_id;
uint16_t vfid;
+ uint16_t queue_num;
uint8_t *isr;
uint8_t weak_barriers;
@@ -75,6 +89,8 @@ struct zxdh_hw {
uint8_t msg_chan_init;
uint8_t phyport;
uint8_t panel_id;
+ uint8_t has_tx_offload;
+ uint8_t has_rx_offload;
};
uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v);
diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c
index 65164c86b7..165d9f49a3 100644
--- a/drivers/net/zxdh/zxdh_pci.c
+++ b/drivers/net/zxdh/zxdh_pci.c
@@ -110,6 +110,87 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw)
return rte_read8(hw->isr);
}
+static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ return rte_read16(&hw->common_cfg->queue_size);
+}
+
+static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size)
+{
+ rte_write16(queue_id, &hw->common_cfg->queue_select);
+ rte_write16(vq_size, &hw->common_cfg->queue_size);
+}
+
+static int32_t check_vq_phys_addr_ok(struct zxdh_virtqueue *vq)
+{
+ if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) {
+ PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!");
+ return 0;
+ }
+ return 1;
+}
+
+static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
+{
+ rte_write32(val & ((1ULL << 32) - 1), lo);
+ rte_write32(val >> 32, hi);
+}
+
+static int32_t zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq)
+{
+ uint64_t desc_addr = 0;
+ uint64_t avail_addr = 0;
+ uint64_t used_addr = 0;
+ uint16_t notify_off = 0;
+
+ if (!check_vq_phys_addr_ok(vq))
+ return -1;
+
+ desc_addr = vq->vq_ring_mem;
+ avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc);
+ if (vtpci_packed_queue(vq->hw)) {
+ used_addr = RTE_ALIGN_CEIL((avail_addr +
+ sizeof(struct zxdh_vring_packed_desc_event)),
+ ZXDH_PCI_VRING_ALIGN);
+ } else {
+ used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct zxdh_vring_avail,
+ ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN);
+ }
+
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ /* the value read from queue_notify_off is discarded: every zxdh
+  * queue uses notify offset 0 into the notify BAR region
+  */
+ notify_off = rte_read16(&hw->common_cfg->queue_notify_off);
+ notify_off = 0;
+ vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
+ notify_off * hw->notify_off_multiplier);
+
+ rte_write16(1, &hw->common_cfg->queue_enable);
+
+ return 0;
+}
+
+static void zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq)
+{
+ rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+ io_write64_twopart(0, &hw->common_cfg->queue_desc_lo,
+ &hw->common_cfg->queue_desc_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_avail_lo,
+ &hw->common_cfg->queue_avail_hi);
+ io_write64_twopart(0, &hw->common_cfg->queue_used_lo,
+ &hw->common_cfg->queue_used_hi);
+
+ rte_write16(0, &hw->common_cfg->queue_enable);
+}
+
const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.read_dev_cfg = zxdh_read_dev_config,
.write_dev_cfg = zxdh_write_dev_config,
@@ -120,6 +201,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = {
.set_queue_irq = zxdh_set_queue_irq,
.set_config_irq = zxdh_set_config_irq,
.get_isr = zxdh_get_isr,
+ .get_queue_num = zxdh_get_queue_num,
+ .set_queue_num = zxdh_set_queue_num,
+ .setup_queue = zxdh_setup_queue,
+ .del_queue = zxdh_del_queue,
};
uint8_t zxdh_pci_isr(struct zxdh_hw *hw)
@@ -146,6 +231,19 @@ void zxdh_pci_reset(struct zxdh_hw *hw)
PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry);
}
+void zxdh_pci_reinit_complete(struct zxdh_hw *hw)
+{
+ zxdh_pci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK);
+}
+
+void zxdh_pci_set_status(struct zxdh_hw *hw, uint8_t status)
+{
+ if (status != ZXDH_CONFIG_STATUS_RESET)
+ status |= ZXDH_VTPCI_OPS(hw)->get_status(hw);
+
+ ZXDH_VTPCI_OPS(hw)->set_status(hw, status);
+}
+
static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap)
{
uint8_t bar = cap->bar;
diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h
index f362658ba6..e86667357b 100644
--- a/drivers/net/zxdh/zxdh_pci.h
+++ b/drivers/net/zxdh/zxdh_pci.h
@@ -29,7 +29,20 @@ enum zxdh_msix_status {
/* Vector value used to disable MSI for queue. */
#define ZXDH_MSI_NO_VECTOR 0x7F
+#define ZXDH_PCI_VRING_ALIGN 4096
+
+#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */
+#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */
+#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */
#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */
+#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */
+#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */
+#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */
+#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */
+
+#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */
+#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */
+#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */
#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */
#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */
#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */
@@ -53,6 +66,7 @@ enum zxdh_msix_status {
#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08
#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40
#define ZXDH_CONFIG_STATUS_FAILED 0x80
+#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12
struct zxdh_net_config {
/* The config defining mac address (if ZXDH_NET_F_MAC) */
@@ -108,6 +122,11 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw)
+{
+ return vtpci_with_feature(hw, ZXDH_F_RING_PACKED);
+}
+
struct zxdh_pci_ops {
void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len);
void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len);
@@ -120,6 +139,11 @@ struct zxdh_pci_ops {
uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec);
uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec);
uint8_t (*get_isr)(struct zxdh_hw *hw);
+ uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id);
+ void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size);
+
+ int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq);
+ void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq);
};
struct zxdh_hw_internal {
@@ -141,6 +165,8 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw);
uint64_t zxdh_pci_get_features(struct zxdh_hw *hw);
enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev);
uint8_t zxdh_pci_isr(struct zxdh_hw *hw);
+void zxdh_pci_reinit_complete(struct zxdh_hw *hw);
+void zxdh_pci_set_status(struct zxdh_hw *hw, uint8_t status);
#ifdef __cplusplus
}
diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c
new file mode 100644
index 0000000000..2978a9f272
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_queue.c
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+
+#include "zxdh_queue.h"
+#include "zxdh_logs.h"
+#include "zxdh_pci.h"
+#include "zxdh_common.h"
+#include "zxdh_msg.h"
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq)
+{
+ struct rte_mbuf *cookie = NULL;
+ int32_t idx = 0;
+
+ if (vq == NULL)
+ return NULL;
+
+ for (idx = 0; idx < vq->vq_nentries; idx++) {
+ cookie = vq->vq_descx[idx].cookie;
+ if (cookie != NULL) {
+ vq->vq_descx[idx].cookie = NULL;
+ return cookie;
+ }
+ }
+ return NULL;
+}
+
+static int32_t zxdh_release_channel(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t var = 0;
+ uint32_t addr = 0;
+ uint32_t widx = 0;
+ uint32_t bidx = 0;
+ uint16_t pch = 0;
+ uint16_t lch = 0;
+ int32_t ret = 0;
+
+ ret = zxdh_timedlock(hw, 1000);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout");
+ return -1;
+ }
+
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (hw->channel_context[lch].valid == 0) {
+ PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to release", lch);
+ continue;
+ }
+
+ pch = hw->channel_context[lch].ph_chno;
+ widx = pch / 32;
+ bidx = pch % 32;
+
+ addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t));
+ var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+ var &= ~(1 << bidx);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+
+ hw->channel_context[lch].valid = 0;
+ hw->channel_context[lch].ph_chno = 0;
+ }
+
+ zxdh_release_lock(hw);
+
+ return 0;
+}
+
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx)
+{
+ if (vtpci_queue_idx % 2 == 0)
+ return ZXDH_VTNET_RQ;
+ else
+ return ZXDH_VTNET_TQ;
+}
+
+int32_t zxdh_free_queues(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ struct zxdh_virtqueue *vq = NULL;
+ int32_t queue_type = 0;
+ uint16_t i = 0;
+
+ if (hw->vqs == NULL)
+ return 0;
+
+ if (zxdh_release_channel(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to clear coi table");
+ return -1;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (vq == NULL)
+ continue;
+
+ ZXDH_VTPCI_OPS(hw)->del_queue(hw, vq);
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == ZXDH_VTNET_RQ) {
+ rte_free(vq->sw_ring);
+ rte_memzone_free(vq->rxq.mz);
+ } else if (queue_type == ZXDH_VTNET_TQ) {
+ rte_memzone_free(vq->txq.mz);
+ rte_memzone_free(vq->txq.zxdh_net_hdr_mz);
+ }
+
+ rte_free(vq);
+ hw->vqs[i] = NULL;
+ PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i);
+ }
+
+ rte_free(hw->vqs);
+ hw->vqs = NULL;
+
+ return 0;
+}
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
index 0b6f48adf9..683c4e7980 100644
--- a/drivers/net/zxdh/zxdh_queue.h
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -11,11 +11,30 @@
#include "zxdh_ethdev.h"
#include "zxdh_rxtx.h"
+#include "zxdh_pci.h"
#ifdef __cplusplus
extern "C" {
#endif
+enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 };
+
+#define ZXDH_VIRTQUEUE_MAX_NAME_SZ 32
+#define ZXDH_RQ_QUEUE_IDX 0
+#define ZXDH_TQ_QUEUE_IDX 1
+#define ZXDH_MAX_TX_INDIRECT 8
+
+/* This marks a buffer as write-only (otherwise read-only). */
+#define ZXDH_VRING_DESC_F_WRITE 2
+/* This flag means the descriptor was made available by the driver */
+#define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7))
+
+#define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0
+#define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1
+#define ZXDH_RING_EVENT_FLAGS_DESC 0x2
+
+#define ZXDH_VQ_RING_DESC_CHAIN_END 32768
+
/** ring descriptors: 16 bytes.
* These can chain together via "next".
**/
@@ -26,6 +45,19 @@ struct zxdh_vring_desc {
uint16_t next; /* We chain unused descriptors via this. */
} __rte_packed;
+struct zxdh_vring_used_elem {
+ /* Index of start of used descriptor chain. */
+ uint32_t id;
+ /* Total length of the descriptor chain which was written to. */
+ uint32_t len;
+};
+
+struct zxdh_vring_used {
+ uint16_t flags;
+ uint16_t idx;
+ struct zxdh_vring_used_elem ring[];
+} __rte_packed;
+
struct zxdh_vring_avail {
uint16_t flags;
uint16_t idx;
@@ -102,6 +134,146 @@ struct zxdh_virtqueue {
struct zxdh_vq_desc_extra vq_descx[];
} __rte_packed;
+struct zxdh_type_hdr {
+ uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */
+ uint8_t pd_len;
+ uint8_t num_buffers;
+ uint8_t reserved;
+} __rte_packed; /* 4B */
+
+struct zxdh_pi_hdr {
+ uint8_t pi_len;
+ uint8_t pkt_type;
+ uint16_t vlan_id;
+ uint32_t ipv6_extend;
+ uint16_t l3_offset;
+ uint16_t l4_offset;
+ uint8_t phy_port;
+ uint8_t pkt_flag_hi8;
+ uint16_t pkt_flag_lw16;
+ union {
+ struct {
+ uint64_t sa_idx;
+ uint8_t reserved_8[8];
+ } dl;
+ struct {
+ uint32_t lro_flag;
+ uint32_t lro_mss;
+ uint16_t err_code;
+ uint16_t pm_id;
+ uint16_t pkt_len;
+ uint8_t reserved[2];
+ } ul;
+ };
+} __rte_packed; /* 32B */
+
+struct zxdh_pd_hdr_dl {
+ uint32_t ol_flag;
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t dst_vfid;
+ uint32_t svlan_insert;
+ uint32_t cvlan_insert;
+} __rte_packed; /* 16B */
+
+struct zxdh_net_hdr_dl {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_dl pd_hdr; /* 16B */
+} __rte_packed;
+
+struct zxdh_pd_hdr_ul {
+ uint32_t pkt_flag;
+ uint32_t rss_hash;
+ uint32_t fd;
+ uint32_t striped_vlan_tci;
+ /* ovs */
+ uint8_t tag_idx;
+ uint8_t tag_data;
+ uint16_t src_vfid;
+ /* */
+ uint16_t pkt_type_out;
+ uint16_t pkt_type_in;
+} __rte_packed; /* 24B */
+
+struct zxdh_net_hdr_ul {
+ struct zxdh_type_hdr type_hdr; /* 4B */
+ struct zxdh_pi_hdr pi_hdr; /* 32B */
+ struct zxdh_pd_hdr_ul pd_hdr; /* 24B */
+} __rte_packed; /* 60B */
+
+struct zxdh_tx_region {
+ struct zxdh_net_hdr_dl tx_hdr;
+ union {
+ struct zxdh_vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT];
+ struct zxdh_vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT];
+ } __rte_packed;
+};
+
+static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align)
+{
+ size_t size;
+
+ if (vtpci_packed_queue(hw)) {
+ size = num * sizeof(struct zxdh_vring_packed_desc);
+ size += sizeof(struct zxdh_vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct zxdh_vring_packed_desc_event);
+ return size;
+ }
+
+ size = num * sizeof(struct zxdh_vring_desc);
+ size += sizeof(struct zxdh_vring_avail) + (num * sizeof(uint16_t));
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct zxdh_vring_used) + (num * sizeof(struct zxdh_vring_used_elem));
+ return size;
+}
+
+static inline void vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p,
+ unsigned long align, uint32_t num)
+{
+ vr->num = num;
+ vr->desc = (struct zxdh_vring_packed_desc *)p;
+ vr->driver = (struct zxdh_vring_packed_desc_event *)(p +
+ vr->num * sizeof(struct zxdh_vring_packed_desc));
+ vr->device = (struct zxdh_vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver +
+ sizeof(struct zxdh_vring_packed_desc_event)), align);
+}
+
+static inline void vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n - 1; i++) {
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = i + 1;
+ }
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = ZXDH_VQ_RING_DESC_CHAIN_END;
+}
+
+static inline void vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n)
+{
+ int32_t i = 0;
+
+ for (i = 0; i < n; i++) {
+ dp[i].id = (uint16_t)i;
+ dp[i].flags = ZXDH_VRING_DESC_F_WRITE;
+ }
+}
+
+static inline void virtqueue_disable_intr(struct zxdh_virtqueue *vq)
+{
+ if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow;
+ }
+}
+
+struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq);
+int32_t zxdh_free_queues(struct rte_eth_dev *dev);
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx);
+
#ifdef __cplusplus
}
#endif
--
2.27.0
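For a sense of the per-queue memory footprint set up above: assuming the
usual 16-byte packed descriptor and 4-byte driver/device event structures
(the zxdh_vring_packed_desc and zxdh_vring_packed_desc_event layouts come
from zxdh_ring.h in an earlier patch of this series), vring_size() for
ZXDH_QUEUE_DEPTH = 1024 and ZXDH_PCI_VRING_ALIGN = 4096 works out as:

  1024 descriptors * 16 B   = 16384
  + driver event (4 B)      = 16388
  aligned up to 4096        = 20480
  + device event (4 B)      = 20484

zxdh_init_queue() then rounds this up to the vring alignment once more, so
each queue reserves vq_ring_size = 24576 bytes (six 4 KiB pages) of
IOVA-contiguous memzone, plus the separate Tx header memzone for transmit
queues.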
Thread overview: 76+ messages
2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang
2024-09-24 1:35 ` [v4] " Junlong Wang
2024-09-25 22:39 ` [PATCH v4] " Ferruh Yigit
2024-09-26 6:49 ` [v4] " Junlong Wang
2024-10-07 21:43 ` [PATCH v4] " Stephen Hemminger
2024-10-15 5:43 ` [PATCH v5 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-15 5:43 ` [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-15 5:44 ` [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
2024-10-15 5:44 ` [PATCH v5 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
2024-10-15 5:44 ` [PATCH v5 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
2024-10-15 5:44 ` [PATCH v5 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
2024-10-15 5:44 ` [PATCH v5 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
2024-10-15 5:44 ` [PATCH v5 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
2024-10-15 15:37 ` Stephen Hemminger
2024-10-15 15:57 ` Stephen Hemminger
2024-10-16 8:16 ` [PATCH v6 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-16 8:18 ` [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
2024-10-16 8:18 ` [PATCH v6 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
2024-10-21 8:50 ` Thomas Monjalon
2024-10-21 10:56 ` Junlong Wang
2024-10-16 8:18 ` [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
2024-10-21 8:52 ` Thomas Monjalon
2024-10-16 8:18 ` [PATCH v6 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
2024-10-16 8:18 ` [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-10-21 8:54 ` Thomas Monjalon
2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
2024-10-18 5:18 ` [v6,9/9] " Junlong Wang
2024-10-18 6:48 ` David Marchand
2024-10-19 11:17 ` Junlong Wang
2024-10-21 9:03 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Thomas Monjalon
2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang
2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
2024-11-01 6:21 ` [PATCH v9 2/9] net/zxdh: add logging implementation Junlong Wang
2024-11-01 6:21 ` [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
2024-11-01 6:21 ` [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
2024-11-01 6:21 ` [PATCH v9 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
2024-11-01 6:21 ` [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
2024-11-01 6:21 ` [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
2024-11-01 6:21 ` [PATCH v9 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-11-01 6:21 ` [PATCH v9 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
2024-10-30 9:01 ` [PATCH v8 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-30 9:01 ` [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
2024-10-30 14:55 ` David Marchand
2024-10-30 9:01 ` [PATCH v8 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
2024-10-30 9:01 ` [PATCH v8 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
2024-10-30 9:01 ` [PATCH v8 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
2024-10-30 9:01 ` [PATCH v8 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
2024-10-30 9:01 ` [PATCH v8 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-10-30 9:01 ` [PATCH v8 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
2024-10-22 12:20 ` [PATCH v7 2/9] net/zxdh: add logging implementation Junlong Wang
2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang
2024-10-27 16:47 ` Stephen Hemminger
2024-10-27 16:47 ` Stephen Hemminger
2024-10-22 12:20 ` [PATCH v7 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang
2024-10-22 12:20 ` [PATCH v7 5/9] net/zxdh: add msg chan enable implementation Junlong Wang
2024-10-26 17:05 ` Thomas Monjalon
2024-10-22 12:20 ` [PATCH v7 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang
2024-10-22 12:20 ` [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang
2024-10-27 17:07 ` Stephen Hemminger
2024-10-22 12:20 ` [PATCH v7 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang
2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
2024-10-24 11:31 ` [v7,9/9] " Junlong Wang
2024-10-25 9:48 ` Junlong Wang
2024-10-26 2:32 ` Junlong Wang
2024-10-27 16:40 ` [PATCH v7 9/9] " Stephen Hemminger
2024-10-27 17:03 ` Stephen Hemminger
2024-10-27 16:58 ` Stephen Hemminger